diff --git a/docs/chaos-engineering/chaos-faults/aws/ec2-cpu-hog.md b/docs/chaos-engineering/chaos-faults/aws/ec2-cpu-hog.md index 7843cb53e9f..0ea4aae95fa 100644 --- a/docs/chaos-engineering/chaos-faults/aws/ec2-cpu-hog.md +++ b/docs/chaos-engineering/chaos-faults/aws/ec2-cpu-hog.md @@ -63,7 +63,7 @@ stringData: ## Fault Tunables
- Check the fault tunables + Check the Fault Tunables

Mandatory Fields

diff --git a/docs/chaos-engineering/chaos-faults/aws/ec2-io-stress.md b/docs/chaos-engineering/chaos-faults/aws/ec2-io-stress.md index 4837b502288..75cca15ff19 100644 --- a/docs/chaos-engineering/chaos-faults/aws/ec2-io-stress.md +++ b/docs/chaos-engineering/chaos-faults/aws/ec2-io-stress.md @@ -65,7 +65,7 @@ stringData: ## Fault Tunables
-Check the fault tunables +Check the Fault Tunables

Mandatory Fields

diff --git a/docs/chaos-engineering/chaos-faults/aws/ec2-memory-hog.md b/docs/chaos-engineering/chaos-faults/aws/ec2-memory-hog.md index 67b922d7351..7bd24a12e7b 100644 --- a/docs/chaos-engineering/chaos-faults/aws/ec2-memory-hog.md +++ b/docs/chaos-engineering/chaos-faults/aws/ec2-memory-hog.md @@ -65,7 +65,7 @@ stringData: ## Fault Tunables
-Check the fault tunables +Check the Fault Tunables

Mandatory Fields

diff --git a/docs/chaos-engineering/chaos-faults/aws/ecs-agent-stop.md b/docs/chaos-engineering/chaos-faults/aws/ecs-agent-stop.md index cf7866bfea2..dece9942801 100644 --- a/docs/chaos-engineering/chaos-faults/aws/ecs-agent-stop.md +++ b/docs/chaos-engineering/chaos-faults/aws/ecs-agent-stop.md @@ -16,7 +16,7 @@ title: ECS Agent Stop ## Uses
-View the uses of the experiment +View the uses of the fault
Agent chaos stop is another very common and frequent scenario we find with ECS clusters that can break an agent that manages the task container on the ECS cluster and impact their delivery. Such scenarios can still occur despite whatever availability aids docker provides. diff --git a/docs/chaos-engineering/chaos-faults/aws/ecs-container-cpu-hog.md b/docs/chaos-engineering/chaos-faults/aws/ecs-container-cpu-hog.md index fdf73b2dedb..820329da281 100644 --- a/docs/chaos-engineering/chaos-faults/aws/ecs-container-cpu-hog.md +++ b/docs/chaos-engineering/chaos-faults/aws/ecs-container-cpu-hog.md @@ -20,7 +20,7 @@ title: ECS Container CPU Hog ## Uses
-View the uses of the experiment +View the uses of the fault
CPU hogs are another very common and frequent scenario we find with containers/applications that can result in the eviction of the application (task container) and impact its delivery. Such scenarios can still occur despite whatever availability aids docker provides. These problems are generally referred to as "Noisy Neighbour" problems. diff --git a/docs/chaos-engineering/chaos-faults/aws/ecs-container-io-stress.md b/docs/chaos-engineering/chaos-faults/aws/ecs-container-io-stress.md index 459eeb0b4e2..7f35df6fb2d 100644 --- a/docs/chaos-engineering/chaos-faults/aws/ecs-container-io-stress.md +++ b/docs/chaos-engineering/chaos-faults/aws/ecs-container-io-stress.md @@ -20,7 +20,7 @@ title: ECS Container IO Hog ## Uses
-View the uses of the experiment +View the uses of the fault
Filesystem read and write is another very common and frequent scenario we find with containers/applications that can result in the eviction of the application (task container) and impact its delivery. Such scenarios can still occur despite whatever availability aids docker provides. These problems are generally referred to as "Noisy Neighbour" problems. diff --git a/docs/chaos-engineering/chaos-faults/aws/ecs-container-memory-hog.md b/docs/chaos-engineering/chaos-faults/aws/ecs-container-memory-hog.md index 8c38a1d4205..b11895578bf 100644 --- a/docs/chaos-engineering/chaos-faults/aws/ecs-container-memory-hog.md +++ b/docs/chaos-engineering/chaos-faults/aws/ecs-container-memory-hog.md @@ -20,7 +20,7 @@ title: ECS Container Memory Hog ## Uses
-View the uses of the experiment +View the uses of the fault
Memory usage within containers is subject to various constraints. If the limits are specified in their spec, exceeding them can cause termination of the container (due to OOMKill of the primary process, often pid 1) - the restart of the container by docker, subject to the policy specified. For containers with no limits placed, the memory usage is uninhibited until such time as the VM level OOM Behaviour takes over. In this case, containers on the Instance can be killed based on their oom_score. This eval is extended to all task containers running on the instance - thereby causing a bigger blast radius. diff --git a/docs/chaos-engineering/chaos-faults/aws/ecs-container-network-latency.md b/docs/chaos-engineering/chaos-faults/aws/ecs-container-network-latency.md index d83ba406a95..1f09dd7937d 100644 --- a/docs/chaos-engineering/chaos-faults/aws/ecs-container-network-latency.md +++ b/docs/chaos-engineering/chaos-faults/aws/ecs-container-network-latency.md @@ -20,7 +20,7 @@ title: ECS Container Network Latency ## Uses
-View the uses of the experiment +View the uses of the fault
The fault causes network degradation of the task container without the container being marked unhealthy/unworthy of traffic from outside. The idea of this fault is to simulate issues within your ECS task network OR communication across services in different availability zones/regions etc. diff --git a/docs/chaos-engineering/chaos-faults/aws/ecs-container-network-loss.md b/docs/chaos-engineering/chaos-faults/aws/ecs-container-network-loss.md index 60724cc0c48..4e58263ace3 100644 --- a/docs/chaos-engineering/chaos-faults/aws/ecs-container-network-loss.md +++ b/docs/chaos-engineering/chaos-faults/aws/ecs-container-network-loss.md @@ -20,7 +20,7 @@ title: ECS Container Network Loss ## Uses
-View the uses of the experiment +View the uses of the fault
The fault causes network degradation of the task container without the container being marked unhealthy/unworthy of traffic from outside. The idea of this fault is to simulate issues within your ECS task network OR communication across services in different availability zones/regions etc. diff --git a/docs/chaos-engineering/chaos-faults/aws/ecs-instance-stop.md b/docs/chaos-engineering/chaos-faults/aws/ecs-instance-stop.md index d8e2bdcef31..60fd535a003 100644 --- a/docs/chaos-engineering/chaos-faults/aws/ecs-instance-stop.md +++ b/docs/chaos-engineering/chaos-faults/aws/ecs-instance-stop.md @@ -16,7 +16,7 @@ title: ECS Instance Stop ## Uses
-View the uses of the experiment +View the uses of the fault
EC2 instance chaos stop is another very common and frequent scenario we find with ECS clusters that can result in breaking the agent that manages the task container on the ECS cluster and impact its delivery. Such scenarios can still occur despite whatever availability aids docker provides. diff --git a/docs/chaos-engineering/chaos-faults/aws/elb-az-down.md b/docs/chaos-engineering/chaos-faults/aws/elb-az-down.md new file mode 100644 index 00000000000..670d2213d48 --- /dev/null +++ b/docs/chaos-engineering/chaos-faults/aws/elb-az-down.md @@ -0,0 +1,148 @@ +--- +id: elb-az-down +title: ELB AZ Down +--- + +## Introduction +- It injects AZ down chaos on a target ELB for a specified duration. It causes access restrictions for certain availability zones. +- It tests application sanity, availability, and recovery workflows of the application pod attached to the load balancer. + +:::tip Fault execution flow chart +![ELB AZ Down](./static/images/elb-az-down.png) +::: + +## Uses + +
+View the uses of the fault +
+AZ down is another very common and frequent scenario we find with ELB that can break the connectivity with the given zones and impact their delivery. Such scenarios can still occur despite whatever availability aids AWS provides. + +Detaching the AZ from the load balancer will disrupt an application's performance and impact its smooth working. So this category of chaos fault helps build immunity in the application undergoing such scenarios. +
+
+ +## Prerequisites + +:::info +- Kubernetes > 1.17 +- AWS access to attach or detach an AZ from ELB. +- A minimum number of AZs must remain attached to the ELB; otherwise, the fault fails to detach the given AZ. +- Kubernetes secret that has the AWS access configuration(key) in the `CHAOS_NAMESPACE`. A sample secret file looks like: +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: cloud-secret +type: Opaque +stringData: + cloud_config.yml: |- + # Add the cloud AWS credentials respectively + [default] + aws_access_key_id = XXXXXXXXXXXXXXXXXXX + aws_secret_access_key = XXXXXXXXXXXXXXX +``` +- If you change the secret key name (from `cloud_config.yml`), update the `AWS_SHARED_CREDENTIALS_FILE` environment variable value on `fault.yaml` with the same name. +::: + +## Default Validations + +:::info +- The ELB is attached to the given availability zones. +::: + +## Fault tunables + +
+ Check the Fault Tunables +

Mandatory Fields

+
+ + + + + + + + + + + + + + + + + + + + +
| Variables | Description | Notes |
| --------- | ----------- | ----- |
| LOAD_BALANCER_NAME | Provide the name of the load balancer whose AZ has to be detached | Eg. elb-name |
| ZONES | Provide the target zones that have to be detached from the ELB | Eg. us-east-1a |
| REGION | The region name of the target load balancer | Eg. us-east-1 |
+

Optional Fields

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
| Variables | Description | Notes |
| --------- | ----------- | ----- |
| TOTAL_CHAOS_DURATION | The time duration for chaos insertion (in seconds) | Defaults to 30s |
| CHAOS_INTERVAL | The time duration between the detachment and attachment of the zones (sec) | Defaults to 30s |
| SEQUENCE | It defines the sequence of chaos execution for multiple zones | Default value: parallel. Supported: serial, parallel |
| RAMP_TIME | Period to wait before and after injection of chaos in sec | Eg: 30 |
+
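Pulling the mandatory and optional tunables above together, a single ChaosEngine spec can pace the AZ detach/attach cycles. The sketch below follows the same manifest pattern as the fault examples in this doc; the load balancer name, zone, and duration values are illustrative placeholders, not defaults:

```yaml
# illustrative sketch: pacing elb-az-down with duration and interval tunables
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  chaosServiceAccount: litmus-admin
  experiments:
  - name: elb-az-down
    spec:
      components:
        env:
        # keep injecting AZ down chaos for 120s in total (placeholder value)
        - name: TOTAL_CHAOS_DURATION
          value: '120'
        # wait 60s between detach/attach cycles (placeholder value)
        - name: CHAOS_INTERVAL
          value: '60'
        # placeholder load balancer and zone
        - name: LOAD_BALANCER_NAME
          value: 'demo-elb'
        - name: ZONES
          value: 'us-east-1a'
        - name: REGION
          value: 'us-east-1'
```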
+ +## Fault Examples + +### Common and AWS specific tunables + +Refer to the [common attributes](../common-tunables-for-all-experiments) and [AWS specific tunable](./aws-experiments-tunables) to tune the common tunables for all faults and aws specific tunables. + +### Target Zones + +It contains comma separated list of target zones. It can be tuned via `ZONES` environment variable. + +Use the following example to tune it: + +[embedmd]:# (./static/manifests/elb-az-down/target-zones.yaml yaml) +```yaml +# contains elb az down for given zones +apiVersion: litmuschaos.io/v1alpha1 +kind: ChaosEngine +metadata: + name: engine-nginx +spec: + engineState: "active" + chaosServiceAccount: litmus-admin + experiments: + - name: elb-az-down + spec: + components: + env: + # load balancer name for chaos + - name: LOAD_BALANCER_NAME + value: 'tes-elb' + # target zones for the chaos + - name: ZONES + value: 'us-east-1a,us-east-1b' + # region for chaos + - name: REGION + value: 'us-east-1' +``` diff --git a/docs/chaos-engineering/chaos-faults/aws/lambda-delete-event-source-mapping.md b/docs/chaos-engineering/chaos-faults/aws/lambda-delete-event-source-mapping.md new file mode 100644 index 00000000000..e0c5260fde1 --- /dev/null +++ b/docs/chaos-engineering/chaos-faults/aws/lambda-delete-event-source-mapping.md @@ -0,0 +1,143 @@ +--- +id: lambda-delete-event-source-mapping +title: Lambda Delete Event Source Mapping +--- + +## Introduction + +- It removes the event source mapping from an AWS Lambda function for a certain chaos duration. +- It checks the performance of the running application/service without the event source mapping which can cause, for example, missing entries on a database. + +:::tip Fault execution flow chart +![Lambda Delete Event Source Mapping](./static/images/lambda-delete-event-source-mapping.png) +::: + +## Uses + +
+View the uses of the fault +
+Deleting an event source mapping from a lambda function is critical. It can lead to scenarios such as failure to update the database on an event trigger, which can break the service and impact its delivery. Such scenarios can occur despite availability aids provided by AWS or determined by you. + +It helps understand if you have proper error handling or auto recovery configured for such cases. Hence, this category of chaos fault helps build the immunity of the application.
+
+ +## Prerequisites + +:::info + +- Kubernetes >= 1.17 +- AWS Lambda event source mapping attached to the lambda function. +- Kubernetes secret that has AWS access configuration(key) in the `CHAOS_NAMESPACE`. A secret file looks like this: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: cloud-secret +type: Opaque +stringData: + cloud_config.yml: |- + # Add the cloud AWS credentials respectively + [default] + aws_access_key_id = XXXXXXXXXXXXXXXXXXX + aws_secret_access_key = XXXXXXXXXXXXXXX +``` + +- If you change the secret key name (from `cloud_config.yml`), update the `AWS_SHARED_CREDENTIALS_FILE` environment variable value on `experiment.yaml` with the same name. + +## Default Validations + +:::info + +- The AWS Lambda event source mapping is healthy and attached to the lambda function. + +::: + +## Fault Tunables + +
+ Check the Fault Tunables +

Mandatory Fields

+ + + + + + + + + + + + + + + + + + + + + +
| Variables | Description | Notes |
| --------- | ----------- | ----- |
| FUNCTION_NAME | Function name of the target lambda function. It supports a single function name. | Eg: test-function |
| EVENT_UUIDS | Provide the UUID(s) of the target event source mapping. You can provide multiple values as comma-separated values. | Eg: id1,id2 |
| REGION | The region name of the target lambda function | Eg: us-east-2 |
+

Optional Fields

+ + + + + + + + + + + + + + + + + + + + + +
| Variables | Description | Notes |
| --------- | ----------- | ----- |
| TOTAL_CHAOS_DURATION | The total time duration for chaos insertion in seconds | Defaults to 30s |
| SEQUENCE | It defines the sequence of chaos execution for multiple instances | Default value: parallel. Supported: serial, parallel |
| RAMP_TIME | Period to wait before and after injection of chaos in sec | Eg. 30 |
+
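As an illustration, the optional SEQUENCE and RAMP_TIME tunables above can be combined with the mandatory fields in one ChaosEngine spec. The sketch below uses placeholder identifiers and region:

```yaml
# illustrative sketch: delete multiple event source mappings one at a time
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  chaosServiceAccount: litmus-admin
  experiments:
  - name: lambda-delete-event-source-mapping
    spec:
      components:
        env:
        # placeholder UUIDs of the event source mappings
        - name: EVENT_UUIDS
          value: 'id1,id2'
        # placeholder function name
        - name: FUNCTION_NAME
          value: 'chaos-function'
        # delete the mappings one after another instead of in parallel
        - name: SEQUENCE
          value: 'serial'
        # wait 30s before and after chaos injection
        - name: RAMP_TIME
          value: '30'
        - name: REGION
          value: 'us-east-2'
```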
+ +## Fault Examples + +### Common and AWS specific tunables + +Refer to the [common attributes](../common-tunables-for-all-experiments) and [AWS specific tunable](./aws-experiments-tunables) to tune the common tunables for all faults and aws specific tunables. + +### Multiple Event Source Mapping + +It can delete multiple event source mappings for a certain chaos duration using `EVENT_UUIDS` environment variable that takes the UUID of the events as a comma separated value (CSV file). + +Use the following example to tune it: + +[embedmd]:# (./static/manifests/lambda-delete-event-source-mapping/multiple-events.yaml yaml) +```yaml +# contains the removal of multiple event source mapping +apiVersion: litmuschaos.io/v1alpha1 +kind: ChaosEngine +metadata: + name: engine-nginx +spec: + engineState: "active" + chaosServiceAccount: litmus-admin + experiments: + - name: lambda-delete-event-source-mapping + spec: + components: + env: + # provide UUIDS of event source mapping + - name: EVENT_UUIDS + value: 'id1,id2' + # provide the function name for the chaos + - name: FUNCTION_NAME + value: 'chaos-function' +``` diff --git a/docs/chaos-engineering/chaos-faults/aws/lambda-toggle-event-mapping-state.md b/docs/chaos-engineering/chaos-faults/aws/lambda-toggle-event-mapping-state.md new file mode 100644 index 00000000000..5a838301266 --- /dev/null +++ b/docs/chaos-engineering/chaos-faults/aws/lambda-toggle-event-mapping-state.md @@ -0,0 +1,143 @@ +--- +id: lambda-toggle-event-mapping-state +title: Lambda Toggle Event Mapping State +--- + +## Introduction + +- It toggles the event source mapping state to disable for a lambda function during a certain chaos duration. +- It checks the performance of the running application/service when the event source mapping is not enabled which can cause, for example, missing entries on a database. + +:::tip Fault execution flow chart +![Lambda Toggle Event Mapping State](./static/images/lambda-toggle-event-mapping-state.png) +::: + +## Uses + +
+View the uses of the fault +
+ Toggling between different states of event source mapping from a lambda function is critical. It can lead to scenarios such as failure to update the database on an event trigger, which can break the service and impact its delivery. Such scenarios can occur despite availability aids provided by AWS or determined by you. + +It helps understand if you have proper error handling or auto recovery configured for such cases. Hence, this category of chaos fault helps build the immunity of the application.
+
+ +## Prerequisites + +:::info + +- Kubernetes >= 1.17 +- AWS Lambda event source mapping attached to the lambda function. +- Kubernetes secret that has AWS access configuration(key) in the `CHAOS_NAMESPACE`. A secret file looks like this: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: cloud-secret +type: Opaque +stringData: + cloud_config.yml: |- + # Add the cloud AWS credentials respectively + [default] + aws_access_key_id = XXXXXXXXXXXXXXXXXXX + aws_secret_access_key = XXXXXXXXXXXXXXX +``` + +- If you change the secret key name (from `cloud_config.yml`), update the `AWS_SHARED_CREDENTIALS_FILE` environment variable value on `experiment.yaml` with the same name. + +## Default Validations + +:::info + +- The AWS Lambda event source mapping is healthy and attached to the lambda function. + +::: + +## Experiment Tunables + +
+ Check the Fault Tunables +

Mandatory Fields

+ + + + + + + + + + + + + + + + + + + + + +
| Variables | Description | Notes |
| --------- | ----------- | ----- |
| FUNCTION_NAME | Function name of the target lambda function. It supports a single function name. | Eg: test-function |
| EVENT_UUIDS | Provide the UUID(s) of the target event source mapping. You can provide multiple values as comma-separated values. | Eg: id1,id2 |
| REGION | The region name of the target lambda function | Eg: us-east-2 |
+

Optional Fields

+ + + + + + + + + + + + + + + + + + + + + +
| Variables | Description | Notes |
| --------- | ----------- | ----- |
| TOTAL_CHAOS_DURATION | The total time duration for chaos insertion in seconds | Defaults to 30s |
| SEQUENCE | It defines the sequence of chaos execution for multiple instances | Default value: parallel. Supported: serial, parallel |
| RAMP_TIME | Period to wait before and after injection of chaos in sec | Eg. 30 |
+
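A minimal sketch combining these tunables, keeping the mappings disabled for a longer chaos window (UUIDs, function name, and duration are placeholders):

```yaml
# illustrative sketch: keep event source mappings disabled for 60s
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  chaosServiceAccount: litmus-admin
  experiments:
  - name: lambda-toggle-event-mapping-state
    spec:
      components:
        env:
        # placeholder UUIDs of the event source mappings
        - name: EVENT_UUIDS
          value: 'id1,id2'
        # placeholder function name
        - name: FUNCTION_NAME
          value: 'chaos-function'
        # keep the mappings toggled to disabled for 60s (placeholder value)
        - name: TOTAL_CHAOS_DURATION
          value: '60'
```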
+ +## Fault Examples + +### Common and AWS specific tunables + +Refer to the [common attributes](../common-tunables-for-all-experiments) and [AWS specific tunable](./aws-experiments-tunables) to tune the common tunables for all faults and aws specific tunables. + +### Multiple Event Source Mapping + +It toggles between multiple event source mapping for a certain chaos duration using `EVENT_UUIDS` environment variable that takes the UUID of the events as a comma separated value (CSV file). + +Use the following example to tune it: + +[embedmd]:# (./static/manifests/lambda-toggle-event-mapping-state/multiple-events.yaml yaml) +```yaml +# contains the removal of multiple event source mapping +apiVersion: litmuschaos.io/v1alpha1 +kind: ChaosEngine +metadata: + name: engine-nginx +spec: + engineState: "active" + chaosServiceAccount: litmus-admin + experiments: + - name: lambda-toggle-event-mapping-state + spec: + components: + env: + # provide UUIDS of event source mapping + - name: EVENT_UUIDS + value: 'id1,id2' + # provide the function name for the chaos + - name: FUNCTION_NAME + value: 'chaos-function' +``` diff --git a/docs/chaos-engineering/chaos-faults/aws/lambda-update-function-memory.md b/docs/chaos-engineering/chaos-faults/aws/lambda-update-function-memory.md new file mode 100644 index 00000000000..dfea56415b7 --- /dev/null +++ b/docs/chaos-engineering/chaos-faults/aws/lambda-update-function-memory.md @@ -0,0 +1,149 @@ +--- +id: lambda-update-function-memory +title: Lambda Update Function Memory +--- + +## Introduction + +- It causes the memory of a lambda function to be updated to a specified value for a certain chaos duration. +- It checks the performance of the application/service running with a new memory limit and also helps to determine a safe overall memory limit value for the function. +- Smaller the memory limit higher will be the time taken by the lambda function under load. 
+ +:::tip Fault execution flow chart +![Lambda Update Function Memory](./static/images/lambda-update-function-memory.png) +::: + +## Uses + +
+View the uses of the fault +
+Hitting a memory limit is a very common and frequent scenario we find with lambda functions that can slow down the service and impact its delivery. Such scenarios can still occur despite whatever availability aids AWS provides or we determine. + +Running out of memory due to a smaller limit interrupts the flow of the given function. So this category of chaos fault helps you build immunity in the application undergoing any such scenarios.
+
+ +## Prerequisites + +:::info + +- Kubernetes >= 1.17 +- Access for operating AWS Lambda functions. +- Kubernetes secret that has AWS access configuration(key) in the `CHAOS_NAMESPACE`. A secret file looks like this: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: cloud-secret +type: Opaque +stringData: + cloud_config.yml: |- + # Add the cloud AWS credentials respectively + [default] + aws_access_key_id = XXXXXXXXXXXXXXXXXXX + aws_secret_access_key = XXXXXXXXXXXXXXX +``` + +- If you change the secret key name (from `cloud_config.yml`), update the `AWS_SHARED_CREDENTIALS_FILE` environment variable value on `experiment.yaml` with the same name. + +## Default Validations + +:::info + +- The Lambda function should be up and running. + +::: + +## Experiment Tunables + +
+ Check the Fault Tunables +

Mandatory Fields

+ + + + + + + + + + + + + + + + + + + + + +
| Variables | Description | Notes |
| --------- | ----------- | ----- |
| FUNCTION_NAME | Function name of the target lambda function. It supports a single function name. | Eg: test-function |
| MEMORY_IN_MEGABYTES | Provide the new memory limit of the function in megabytes. | The minimum value is 128 MB and the maximum is 10240 MB |
| REGION | The region name of the target lambda function | Eg: us-east-2 |
+

Optional Fields

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
| Variables | Description | Notes |
| --------- | ----------- | ----- |
| TOTAL_CHAOS_DURATION | The total time duration for chaos insertion in seconds | Defaults to 30s |
| CHAOS_INTERVAL | The interval (in seconds) between successive memory limit updates | Defaults to 30s |
| SEQUENCE | It defines the sequence of chaos execution for multiple instances | Default value: parallel. Supported: serial, parallel |
| RAMP_TIME | Period to wait before and after injection of chaos in seconds | Eg. 30 |
+
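A sketch combining the mandatory memory limit with the duration tunables above (function name, limit, and durations are placeholder values, assuming CHAOS_INTERVAL paces successive updates during the chaos window):

```yaml
# illustrative sketch: run the function with a reduced 256 MB limit for 60s
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  chaosServiceAccount: litmus-admin
  experiments:
  - name: lambda-update-function-memory
    spec:
      components:
        env:
        # placeholder reduced memory limit in megabytes
        - name: MEMORY_IN_MEGABYTES
          value: '256'
        # placeholder function name
        - name: FUNCTION_NAME
          value: 'chaos-function'
        # total chaos window and update interval (placeholder values)
        - name: TOTAL_CHAOS_DURATION
          value: '60'
        - name: CHAOS_INTERVAL
          value: '30'
```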
+ +## Fault Examples + +### Common and AWS specific tunables + +Refer the [common attributes](../common-tunables-for-all-experiments) and [AWS specific tunable](./aws-experiments-tunables) to tune the common tunables for all faults and aws specific tunables. + +### Memory Limit + +It can update the lambda function memory limit to a newer value by using `MEMORY_IN_MEGABYTES` ENV as shown below. + +Use the following example to tune this: + +[embedmd]:# (./static/manifests/lambda-update-function-memory/function-memory.yaml yaml) +```yaml +# contains the memory limit value for the lambda function +apiVersion: litmuschaos.io/v1alpha1 +kind: ChaosEngine +metadata: + name: engine-nginx +spec: + engineState: "active" + chaosServiceAccount: litmus-admin + experiments: + - name: lambda-update-function-memory + spec: + components: + env: + # provide the function memory limit + - name: MEMORY_IN_MEGABYTES + value: '10' + # provide the function name for memory limit chaos + - name: FUNCTION_NAME + value: 'chaos-function' +``` diff --git a/docs/chaos-engineering/chaos-faults/aws/lambda-update-function-timeout.md b/docs/chaos-engineering/chaos-faults/aws/lambda-update-function-timeout.md new file mode 100644 index 00000000000..e999578d394 --- /dev/null +++ b/docs/chaos-engineering/chaos-faults/aws/lambda-update-function-timeout.md @@ -0,0 +1,148 @@ +--- +id: lambda-update-function-timeout +title: Lambda Update Function Timeout +--- + +## Introduction + +- It causes the timeout of a lambda function to be updated to a specified value for a certain chaos duration. +- It checks the performance of the application/service running with a new timeout and also helps to determine a safe overall timeout value for the function. + +:::tip Fault execution flow chart +![Lambda Update Function Timeout](./static/images/lambda-update-function-timeout.png) +::: + +## Uses + +
+View the uses of the fault +
+Hitting a timeout is a very common and frequent scenario we find with lambda functions that can break the service and impact its delivery. Such scenarios can still occur despite whatever availability aids AWS provides or we determine. + +Getting timeout errors interrupts the flow of the given function. So this category of chaos fault helps you build the immunity of the application undergoing any such scenarios.
+
+ +## Prerequisites + +:::info + +- Kubernetes >= 1.17 +- Access to operate AWS Lambda service. +- Kubernetes secret that has AWS access configuration(key) in the `CHAOS_NAMESPACE`. A secret file looks like this: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: cloud-secret +type: Opaque +stringData: + cloud_config.yml: |- + # Add the cloud AWS credentials respectively + [default] + aws_access_key_id = XXXXXXXXXXXXXXXXXXX + aws_secret_access_key = XXXXXXXXXXXXXXX +``` + +- If you change the secret key name (from `cloud_config.yml`), update the `AWS_SHARED_CREDENTIALS_FILE` environment variable value on `experiment.yaml` with the same name. + +## Default Validations + +:::info + +- The Lambda function should be up and running. + +::: + +## Experiment Tunables + +
+ Check the Fault Tunables +

Mandatory Fields

+ + + + + + + + + + + + + + + + + + + + + +
| Variables | Description | Notes |
| --------- | ----------- | ----- |
| FUNCTION_NAME | Function name of the target lambda function. It supports a single function name. | Eg: test-function |
| FUNCTION_TIMEOUT | Provide the new function timeout in seconds. | The minimum value is 1s and the maximum is 900s (15 minutes) |
| REGION | The region name of the target lambda function | Eg: us-east-2 |
+

Optional Fields

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
| Variables | Description | Notes |
| --------- | ----------- | ----- |
| TOTAL_CHAOS_DURATION | The total time duration for chaos insertion in seconds | Defaults to 30s |
| CHAOS_INTERVAL | The interval (in seconds) between successive timeout updates | Defaults to 30s |
| SEQUENCE | It defines the sequence of chaos execution for multiple instances | Default value: parallel. Supported: serial, parallel |
| RAMP_TIME | Period to wait before and after injection of chaos in seconds | Eg. 30 |
+
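As an illustration, an aggressively low timeout can be held for a longer chaos window by combining the mandatory and optional tunables above (function name and values are placeholders):

```yaml
# illustrative sketch: force a 1s timeout on the function for 120s
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  chaosServiceAccount: litmus-admin
  experiments:
  - name: lambda-update-function-timeout
    spec:
      components:
        env:
        # placeholder reduced timeout in seconds (minimum allowed is 1s)
        - name: FUNCTION_TIMEOUT
          value: '1'
        # placeholder function name
        - name: FUNCTION_NAME
          value: 'chaos-function'
        # hold the reduced timeout for 120s (placeholder value)
        - name: TOTAL_CHAOS_DURATION
          value: '120'
```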
+ +## Fault Examples + +### Common and AWS specific tunables + +Refer the [common attributes](../common-tunables-for-all-experiments) and [AWS specific tunable](./aws-experiments-tunables) to tune the common tunables for all faults and aws specific tunables. + +### Timeout Value + +It can update the lambda function timeout value to a newer value by using `FUNCTION_TIMEOUT` ENV as shown below. + +Use the following example to tune this: + +[embedmd]:# (./static/manifests/lambda-update-function-timeout/function-timeout.yaml yaml) +```yaml +# contains the timeout value for the lambda function +apiVersion: litmuschaos.io/v1alpha1 +kind: ChaosEngine +metadata: + name: engine-nginx +spec: + engineState: "active" + chaosServiceAccount: litmus-admin + experiments: + - name: lambda-update-function-timeout + spec: + components: + env: + # provide the function timeout for 10seconds + - name: FUNCTION_TIMEOUT + value: '10' + # provide the function name for timeout chaos + - name: FUNCTION_NAME + value: 'chaos-function' +``` diff --git a/docs/chaos-engineering/chaos-faults/aws/rds-instance-delete.md b/docs/chaos-engineering/chaos-faults/aws/rds-instance-delete.md new file mode 100644 index 00000000000..c9e1128b466 --- /dev/null +++ b/docs/chaos-engineering/chaos-faults/aws/rds-instance-delete.md @@ -0,0 +1,181 @@ +--- +id: rds-instance-delete +title: RDS Instance Delete +--- + +## Introduction + +- RDS Instance delete induces an RDS instance delete chaos on the AWS RDS cluster. It derives the instance under chaos from the RDS cluster. + + +:::tip Fault execution flow chart +![RDS Instance Delete](./static/images/rds-instance-delete.png) +::: + + +## Prerequisites + +:::info + +- Kubernetes >= 1.17 + +**AWS RDS Access Requirement:** + +- AWS access to delete RDS instances. + +- Kubernetes secret that has the AWS access configuration(key) in the `CHAOS_NAMESPACE`. 
A sample secret file looks like: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: cloud-secret +type: Opaque +stringData: + cloud_config.yml: |- + # Add the cloud AWS credentials respectively + [default] + aws_access_key_id = XXXXXXXXXXXXXXXXXXX + aws_secret_access_key = XXXXXXXXXXXXXXX +``` + +- If you change the secret key name (from `cloud_config.yml`), update the `AWS_SHARED_CREDENTIALS_FILE` environment variable value in the ChaosExperiment CR with the same name. + +## Default Validations + +:::info + +- The RDS instance should be in a healthy state. + +::: + +## Fault tunables + +
+ Check the Fault Tunables +

Mandatory Fields

+ + + + + + + + + + + + + + + + + + + + + +
| Variables | Description | Notes |
| --------- | ----------- | ----- |
| CLUSTER_NAME | Name of the target RDS cluster | Eg. rds-cluster-1 |
| RDS_INSTANCE_IDENTIFIER | Name of the target RDS instance(s) | Eg. rds-cluster-1-instance |
| REGION | The region name of the target RDS cluster | Eg. us-east-1 |
+

Optional Fields

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Variables | Description | Notes |
| --------- | ----------- | ----- |
| TOTAL_CHAOS_DURATION | The total time duration for chaos insertion (sec) | Defaults to 30s |
| INSTANCE_AFFECTED_PERC | The percentage of the total RDS instances (part of the RDS cluster) to target | Defaults to 0 (corresponds to 1 instance); provide a numeric value only |
| SEQUENCE | It defines the sequence of chaos execution for multiple instances | Default value: parallel. Supported: serial, parallel |
| AWS_SHARED_CREDENTIALS_FILE | Provide the path of the AWS secret credentials | Defaults to /tmp/cloud_config.yml |
| RAMP_TIME | Period to wait before and after injection of chaos in sec | Eg. 30 |
+
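For instance, targeting half of the instances in a cluster can be sketched by combining CLUSTER_NAME with the INSTANCE_AFFECTED_PERC tunable described above (cluster name, region, and values are placeholders):

```yaml
# illustrative sketch: delete 50% of the RDS instances in a cluster
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
spec:
  engineState: "active"
  annotationCheck: "false"
  chaosServiceAccount: litmus-admin
  experiments:
  - name: rds-instance-delete
    spec:
      components:
        env:
        # placeholder RDS cluster name
        - name: CLUSTER_NAME
          value: 'rds-demo-cluster'
        # target half of the instances in the cluster (placeholder value)
        - name: INSTANCE_AFFECTED_PERC
          value: '50'
        - name: REGION
          value: 'us-east-2'
        - name: TOTAL_CHAOS_DURATION
          value: '60'
```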
+ +## Fault Examples + +### Common and AWS specific tunables + +Refer to the [common attributes](../common-tunables-for-all-experiments) and [AWS specific tunable](./aws-experiments-tunables) to tune the common tunables for all faults and aws specific tunables. + +### RDS_CLUSTER_NAME + +It defines the cluster name of the target RDS cluster. You can provide the `RDS_CLUSTER_NAME` using `CLUSTER_NAME` environment variable. If it hasn't been provided, the fault selects the Instance Identifier provided. + +Use the following example to tune it: + +[embedmd]:# (./static/manifests/rds-instance-delete/instance-delete-cluster.yaml yaml) +```yaml +# delete the RDS instance +apiVersion: litmuschaos.io/v1alpha1 +kind: ChaosEngine +metadata: + name: engine-nginx +spec: + engineState: "active" + annotationCheck: "false" + chaosServiceAccount: litmus-admin + experiments: + - name: rds-instance-delete + spec: + components: + env: + # provide the name of RDS cluster + - name: CLUSTER_NAME + value: 'rds-demo-cluster' + - name: REGION + value: 'us-east-2' + - name: TOTAL_CHAOS_DURATION + value: '60' +``` +### RDS_INSTANCE_IDENTIFIER + +It defines the RDS instance name. You can provide the RDS_INSTANCE_IDENTIFIER using `RDS_INSTANCE_IDENTIFIER` environment variable. 
+ +Use the following example to tune it: + +[embedmd]:# (./static/manifests/rds-instance-delete/instance-delete-instance.yaml yaml) +```yaml +# delete the RDS instance +apiVersion: litmuschaos.io/v1alpha1 +kind: ChaosEngine +metadata: + name: engine-nginx +spec: + engineState: "active" + annotationCheck: "false" + chaosServiceAccount: litmus-admin + experiments: + - name: rds-instance-delete + spec: + components: + env: + # provide the RDS instance identifier + - name: RDS_INSTANCE_IDENTIFIER + value: 'rds-demo-instance-1,rds-demo-instance-2' + - name: INSTANCE_AFFECTED_PERC + value: '100' + - name: REGION + value: 'us-east-2' + - name: TOTAL_CHAOS_DURATION + value: '60' +``` diff --git a/docs/chaos-engineering/chaos-faults/aws/rds-instance-reboot.md b/docs/chaos-engineering/chaos-faults/aws/rds-instance-reboot.md new file mode 100644 index 00000000000..05e87f13bd6 --- /dev/null +++ b/docs/chaos-engineering/chaos-faults/aws/rds-instance-reboot.md @@ -0,0 +1,182 @@ +--- +id: rds-instance-reboot +title: RDS Instance Reboot +--- + +## Introduction + +- RDS Instance Reboot can induce an RDS Instance Reboot chaos on AWS RDS cluster. It derives the instance under chaos from RDS cluster. + + +:::tip Fault execution flow chart +![RDS Instance Reboot](./static/images/rds-instance-reboot.png) +::: + + +## Prerequisites + +:::info + +- Kubernetes >= 1.17 + +**AWS RDS Access Requirement:** + +- AWS access to reboot RDS instances. + +- Kubernetes secret that has the AWS access configuration(key) in the `CHAOS_NAMESPACE`. 
A sample secret file looks like:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: cloud-secret
+type: Opaque
+stringData:
+  cloud_config.yml: |-
+    # Add the cloud AWS credentials respectively
+    [default]
+    aws_access_key_id = XXXXXXXXXXXXXXXXXXX
+    aws_secret_access_key = XXXXXXXXXXXXXXX
+```
+
+- If you change the secret key name (from `cloud_config.yml`), update the `AWS_SHARED_CREDENTIALS_FILE` environment variable value in the ChaosExperiment CR with the same name.
+
+:::
+
+## Default Validations
+
+:::info
+
+- The RDS instance should be in a healthy state.
+
+:::
+
+## Fault Tunables
+
+
+ Check the Fault Tunables +

Mandatory Fields

+ + + + + + + + + + + + + + + + + + + + + +
Variables Description Notes
CLUSTER_NAME Name of the target RDS cluster Eg. rds-cluster-1
RDS_INSTANCE_IDENTIFIER Name of the target RDS instance(s) Eg. rds-cluster-1-instance
REGION The region name of the target RDS cluster Eg. us-east-1
+

Optional Fields

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Variables Description Notes
TOTAL_CHAOS_DURATION The total time duration for chaos insertion (sec) Defaults to 30s
INSTANCE_AFFECTED_PERC The percentage of total RDS instances (part of the RDS cluster) to target Defaults to 0 (corresponds to 1 instance), provide numeric value only
SEQUENCE It defines the sequence of chaos execution for multiple instances Default value: parallel. Supported: serial, parallel
AWS_SHARED_CREDENTIALS_FILE Path of the AWS secret credentials file Defaults to /tmp/cloud_config.yml
RAMP_TIME Period to wait before and after injection of chaos in sec Eg. 30
+
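+
+The optional tunables listed above are supplied through the same `env` list as the mandatory ones. As a minimal, illustrative sketch (the values shown are examples, not defaults), `SEQUENCE` and `RAMP_TIME` can be appended to the `env` list of any manifest in this document:
+
+```yaml
+# illustrative only: reboot targets one at a time,
+# with a 30s wait before and after chaos injection
+- name: SEQUENCE
+  value: 'serial'
+- name: RAMP_TIME
+  value: '30'
+```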
+
+## Fault Examples
+
+### Common and AWS-specific tunables
+
+Refer to the [common attributes](../common-tunables-for-all-experiments) and [AWS-specific tunables](./aws-experiments-tunables) to tune the common tunables for all faults and the AWS-specific tunables.
+
+### RDS_CLUSTER_NAME
+
+It defines the cluster name of the target RDS cluster. You can provide the `RDS_CLUSTER_NAME` using the `CLUSTER_NAME` environment variable. If it is not provided, the fault targets the instance identifier(s) specified instead.
+
+Use the following example to tune it:
+
+[embedmd]:# (./static/manifests/rds-instance-reboot/instance-reboot-cluster.yaml yaml)
+```yaml
+# reboot the RDS instances
+apiVersion: litmuschaos.io/v1alpha1
+kind: ChaosEngine
+metadata:
+  name: engine-nginx
+spec:
+  engineState: "active"
+  annotationCheck: "false"
+  chaosServiceAccount: litmus-admin
+  experiments:
+  - name: rds-instance-reboot
+    spec:
+      components:
+        env:
+        # provide the name of RDS cluster
+        - name: CLUSTER_NAME
+          value: 'rds-demo-cluster'
+        - name: REGION
+          value: 'us-east-2'
+        - name: TOTAL_CHAOS_DURATION
+          value: '60'
+```
+
+### RDS_INSTANCE_IDENTIFIER
+
+It defines the RDS instance name. You can provide it using the `RDS_INSTANCE_IDENTIFIER` environment variable.
+ +Use the following example to tune it: + +[embedmd]:# (./static/manifests/rds-instance-reboot/instance-reboot-instance.yaml yaml) +```yaml +# reboot the RDS instances +apiVersion: litmuschaos.io/v1alpha1 +kind: ChaosEngine +metadata: + name: engine-nginx +spec: + engineState: "active" + annotationCheck: "false" + chaosServiceAccount: litmus-admin + experiments: + - name: rds-instance-reboot + spec: + components: + env: + # provide the RDS instance identifier + - name: RDS_INSTANCE_IDENTIFIER + value: 'rds-demo-instance-1,rds-demo-instance-2' + - name: INSTANCE_AFFECTED_PERC + value: '100' + - name: REGION + value: 'us-east-2' + - name: TOTAL_CHAOS_DURATION + value: '60' +``` diff --git a/docs/chaos-engineering/chaos-faults/aws/static/images/elb-az-down.png b/docs/chaos-engineering/chaos-faults/aws/static/images/elb-az-down.png new file mode 100644 index 00000000000..3b657a5d9d4 Binary files /dev/null and b/docs/chaos-engineering/chaos-faults/aws/static/images/elb-az-down.png differ diff --git a/docs/chaos-engineering/chaos-faults/aws/static/images/lambda-delete-event-source-mapping.png b/docs/chaos-engineering/chaos-faults/aws/static/images/lambda-delete-event-source-mapping.png new file mode 100644 index 00000000000..ab284ffd9ae Binary files /dev/null and b/docs/chaos-engineering/chaos-faults/aws/static/images/lambda-delete-event-source-mapping.png differ diff --git a/docs/chaos-engineering/chaos-faults/aws/static/images/lambda-toggle-event-mapping-state.png b/docs/chaos-engineering/chaos-faults/aws/static/images/lambda-toggle-event-mapping-state.png new file mode 100644 index 00000000000..246524ba7e0 Binary files /dev/null and b/docs/chaos-engineering/chaos-faults/aws/static/images/lambda-toggle-event-mapping-state.png differ diff --git a/docs/chaos-engineering/chaos-faults/aws/static/images/lambda-update-function-memory.png b/docs/chaos-engineering/chaos-faults/aws/static/images/lambda-update-function-memory.png new file mode 100644 index 
00000000000..8df50717dab Binary files /dev/null and b/docs/chaos-engineering/chaos-faults/aws/static/images/lambda-update-function-memory.png differ diff --git a/docs/chaos-engineering/chaos-faults/aws/static/images/lambda-update-function-timeout.png b/docs/chaos-engineering/chaos-faults/aws/static/images/lambda-update-function-timeout.png new file mode 100644 index 00000000000..37feab923d6 Binary files /dev/null and b/docs/chaos-engineering/chaos-faults/aws/static/images/lambda-update-function-timeout.png differ diff --git a/docs/chaos-engineering/chaos-faults/aws/static/images/rds-instance-delete.png b/docs/chaos-engineering/chaos-faults/aws/static/images/rds-instance-delete.png new file mode 100644 index 00000000000..4eebd6bb65c Binary files /dev/null and b/docs/chaos-engineering/chaos-faults/aws/static/images/rds-instance-delete.png differ diff --git a/docs/chaos-engineering/chaos-faults/aws/static/images/rds-instance-reboot.png b/docs/chaos-engineering/chaos-faults/aws/static/images/rds-instance-reboot.png new file mode 100644 index 00000000000..4c9aea566ee Binary files /dev/null and b/docs/chaos-engineering/chaos-faults/aws/static/images/rds-instance-reboot.png differ diff --git a/docs/chaos-engineering/chaos-faults/aws/static/manifests/elb-az-down/target-zones.yaml b/docs/chaos-engineering/chaos-faults/aws/static/manifests/elb-az-down/target-zones.yaml new file mode 100644 index 00000000000..5bfb85a5cbe --- /dev/null +++ b/docs/chaos-engineering/chaos-faults/aws/static/manifests/elb-az-down/target-zones.yaml @@ -0,0 +1,22 @@ +# contains elb az down for given zones +apiVersion: litmuschaos.io/v1alpha1 +kind: ChaosEngine +metadata: + name: engine-nginx +spec: + engineState: "active" + chaosServiceAccount: litmus-admin + experiments: + - name: elb-az-down + spec: + components: + env: + # load balancer name for chaos + - name: LOAD_BALANCER_NAME + value: 'test-elb' + # target zones for the chaos + - name: ZONES + value: 'us-east-1a,us-east-1b' + # region for 
chaos
+        - name: REGION
+          value: 'us-east-1'
diff --git a/docs/chaos-engineering/chaos-faults/aws/static/manifests/lambda-delete-event-source-mapping/multiple-events.yaml b/docs/chaos-engineering/chaos-faults/aws/static/manifests/lambda-delete-event-source-mapping/multiple-events.yaml
new file mode 100644
index 00000000000..16711729e05
--- /dev/null
+++ b/docs/chaos-engineering/chaos-faults/aws/static/manifests/lambda-delete-event-source-mapping/multiple-events.yaml
@@ -0,0 +1,19 @@
+# contains the removal of multiple event source mappings
+apiVersion: litmuschaos.io/v1alpha1
+kind: ChaosEngine
+metadata:
+  name: engine-nginx
+spec:
+  engineState: "active"
+  chaosServiceAccount: litmus-admin
+  experiments:
+  - name: lambda-delete-event-source-mapping
+    spec:
+      components:
+        env:
+        # provide UUIDs of event source mapping
+        - name: EVENT_UUIDS
+          value: 'id1,id2'
+        # provide the function name for the chaos
+        - name: FUNCTION_NAME
+          value: 'chaos-function'
diff --git a/docs/chaos-engineering/chaos-faults/aws/static/manifests/lambda-toggle-event-mapping-state/multiple-events.yaml b/docs/chaos-engineering/chaos-faults/aws/static/manifests/lambda-toggle-event-mapping-state/multiple-events.yaml
new file mode 100644
index 00000000000..70d99c0aca9
--- /dev/null
+++ b/docs/chaos-engineering/chaos-faults/aws/static/manifests/lambda-toggle-event-mapping-state/multiple-events.yaml
@@ -0,0 +1,19 @@
+# contains the toggling of multiple event source mappings
+apiVersion: litmuschaos.io/v1alpha1
+kind: ChaosEngine
+metadata:
+  name: engine-nginx
+spec:
+  engineState: "active"
+  chaosServiceAccount: litmus-admin
+  experiments:
+  - name: lambda-toggle-event-mapping-state
+    spec:
+      components:
+        env:
+        # provide UUIDs of event source mapping
+        - name: EVENT_UUIDS
+          value: 'id1,id2'
+        # provide the function name for the chaos
+        - name: FUNCTION_NAME
+          value: 'chaos-function'
diff --git
a/docs/chaos-engineering/chaos-faults/aws/static/manifests/lambda-update-function-memory/function-memory.yaml b/docs/chaos-engineering/chaos-faults/aws/static/manifests/lambda-update-function-memory/function-memory.yaml
new file mode 100644
index 00000000000..55bfd87ccfa
--- /dev/null
+++ b/docs/chaos-engineering/chaos-faults/aws/static/manifests/lambda-update-function-memory/function-memory.yaml
@@ -0,0 +1,19 @@
+# contains the memory limit value for the lambda function
+apiVersion: litmuschaos.io/v1alpha1
+kind: ChaosEngine
+metadata:
+  name: engine-nginx
+spec:
+  engineState: "active"
+  chaosServiceAccount: litmus-admin
+  experiments:
+  - name: lambda-update-function-memory
+    spec:
+      components:
+        env:
+        # provide the function memory limit
+        - name: MEMORY_IN_MEGABYTES
+          value: '10'
+        # provide the function name for memory chaos
+        - name: FUNCTION_NAME
+          value: 'chaos-function'
diff --git a/docs/chaos-engineering/chaos-faults/aws/static/manifests/lambda-update-function-timeout/function-timeout.yaml b/docs/chaos-engineering/chaos-faults/aws/static/manifests/lambda-update-function-timeout/function-timeout.yaml
new file mode 100644
index 00000000000..5b3fd28d009
--- /dev/null
+++ b/docs/chaos-engineering/chaos-faults/aws/static/manifests/lambda-update-function-timeout/function-timeout.yaml
@@ -0,0 +1,19 @@
+# contains the timeout value for the lambda function
+apiVersion: litmuschaos.io/v1alpha1
+kind: ChaosEngine
+metadata:
+  name: engine-nginx
+spec:
+  engineState: "active"
+  chaosServiceAccount: litmus-admin
+  experiments:
+  - name: lambda-update-function-timeout
+    spec:
+      components:
+        env:
+        # provide the function timeout of 10 seconds
+        - name: FUNCTION_TIMEOUT
+          value: '10'
+        # provide the function name for timeout chaos
+        - name: FUNCTION_NAME
+          value: 'chaos-function'
\ No newline at end of file
diff --git a/docs/chaos-engineering/chaos-faults/aws/static/manifests/rds-instance-delete/instance-delete-cluster.yaml
b/docs/chaos-engineering/chaos-faults/aws/static/manifests/rds-instance-delete/instance-delete-cluster.yaml new file mode 100644 index 00000000000..e2cfd2e9cfe --- /dev/null +++ b/docs/chaos-engineering/chaos-faults/aws/static/manifests/rds-instance-delete/instance-delete-cluster.yaml @@ -0,0 +1,21 @@ +# delete the RDS instance +apiVersion: litmuschaos.io/v1alpha1 +kind: ChaosEngine +metadata: + name: engine-nginx +spec: + engineState: "active" + annotationCheck: "false" + chaosServiceAccount: litmus-admin + experiments: + - name: rds-instance-delete + spec: + components: + env: + # provide the name of RDS cluster + - name: CLUSTER_NAME + value: 'rds-demo-cluster' + - name: REGION + value: 'us-east-2' + - name: TOTAL_CHAOS_DURATION + value: '60' \ No newline at end of file diff --git a/docs/chaos-engineering/chaos-faults/aws/static/manifests/rds-instance-delete/instance-delete-instance.yaml b/docs/chaos-engineering/chaos-faults/aws/static/manifests/rds-instance-delete/instance-delete-instance.yaml new file mode 100644 index 00000000000..442e7d429f2 --- /dev/null +++ b/docs/chaos-engineering/chaos-faults/aws/static/manifests/rds-instance-delete/instance-delete-instance.yaml @@ -0,0 +1,23 @@ +# delete the RDS instance +apiVersion: litmuschaos.io/v1alpha1 +kind: ChaosEngine +metadata: + name: engine-nginx +spec: + engineState: "active" + annotationCheck: "false" + chaosServiceAccount: litmus-admin + experiments: + - name: rds-instance-delete + spec: + components: + env: + # provide the RDS instance identifier + - name: RDS_INSTANCE_IDENTIFIER + value: 'rds-demo-instance-1,rds-demo-instance-2' + - name: INSTANCE_AFFECTED_PERC + value: '100' + - name: REGION + value: 'us-east-2' + - name: TOTAL_CHAOS_DURATION + value: '60' \ No newline at end of file diff --git a/docs/chaos-engineering/chaos-faults/aws/static/manifests/rds-instance-reboot/instance-reboot-cluster.yaml 
b/docs/chaos-engineering/chaos-faults/aws/static/manifests/rds-instance-reboot/instance-reboot-cluster.yaml new file mode 100644 index 00000000000..6bd66e009a9 --- /dev/null +++ b/docs/chaos-engineering/chaos-faults/aws/static/manifests/rds-instance-reboot/instance-reboot-cluster.yaml @@ -0,0 +1,21 @@ +# reboot the RDS instances +apiVersion: litmuschaos.io/v1alpha1 +kind: ChaosEngine +metadata: + name: engine-nginx +spec: + engineState: "active" + annotationCheck: "false" + chaosServiceAccount: litmus-admin + experiments: + - name: rds-instance-reboot + spec: + components: + env: + # provide the name of RDS cluster + - name: CLUSTER_NAME + value: 'rds-demo-cluster' + - name: REGION + value: 'us-east-2' + - name: TOTAL_CHAOS_DURATION + value: '60' \ No newline at end of file diff --git a/docs/chaos-engineering/chaos-faults/aws/static/manifests/rds-instance-reboot/instance-reboot-instance.yaml b/docs/chaos-engineering/chaos-faults/aws/static/manifests/rds-instance-reboot/instance-reboot-instance.yaml new file mode 100644 index 00000000000..126032710fd --- /dev/null +++ b/docs/chaos-engineering/chaos-faults/aws/static/manifests/rds-instance-reboot/instance-reboot-instance.yaml @@ -0,0 +1,23 @@ +# reboot the RDS instances +apiVersion: litmuschaos.io/v1alpha1 +kind: ChaosEngine +metadata: + name: engine-nginx +spec: + engineState: "active" + annotationCheck: "false" + chaosServiceAccount: litmus-admin + experiments: + - name: rds-instance-reboot + spec: + components: + env: + # provide the RDS instance identifier + - name: RDS_INSTANCE_IDENTIFIER + value: 'rds-demo-instance-1,rds-demo-instance-2' + - name: INSTANCE_AFFECTED_PERC + value: '100' + - name: REGION + value: 'us-east-2' + - name: TOTAL_CHAOS_DURATION + value: '60' \ No newline at end of file diff --git a/docs/chaos-engineering/chaos-faults/azure/azure-instance-cpu-hog.md b/docs/chaos-engineering/chaos-faults/azure/azure-instance-cpu-hog.md index 69ea7ae4f4a..cf2e726f42a 100644 --- 
a/docs/chaos-engineering/chaos-faults/azure/azure-instance-cpu-hog.md +++ b/docs/chaos-engineering/chaos-faults/azure/azure-instance-cpu-hog.md @@ -72,7 +72,7 @@ stringData: ## Fault Tunables
- Check the fault tunables + Check the Fault Tunables

Mandatory Fields

diff --git a/docs/chaos-engineering/chaos-faults/azure/azure-instance-io-stress.md b/docs/chaos-engineering/chaos-faults/azure/azure-instance-io-stress.md index 0d7fcb97c3d..2e1de4c6960 100644 --- a/docs/chaos-engineering/chaos-faults/azure/azure-instance-io-stress.md +++ b/docs/chaos-engineering/chaos-faults/azure/azure-instance-io-stress.md @@ -74,7 +74,7 @@ stringData: ## Fault Tunables
-Check the fault tunables +Check the Fault Tunables

Mandatory Fields

diff --git a/docs/chaos-engineering/chaos-faults/azure/azure-instance-memory-hog.md b/docs/chaos-engineering/chaos-faults/azure/azure-instance-memory-hog.md index fe1fd099d4b..600ffb8573a 100644 --- a/docs/chaos-engineering/chaos-faults/azure/azure-instance-memory-hog.md +++ b/docs/chaos-engineering/chaos-faults/azure/azure-instance-memory-hog.md @@ -74,7 +74,7 @@ stringData: ## Fault Tunables
-Check the fault tunables +Check the Fault Tunables

Mandatory Fields

diff --git a/docs/chaos-engineering/chaos-faults/chaos-faults.md b/docs/chaos-engineering/chaos-faults/chaos-faults.md index b0d7f80a64f..24d25844105 100644 --- a/docs/chaos-engineering/chaos-faults/chaos-faults.md +++ b/docs/chaos-engineering/chaos-faults/chaos-faults.md @@ -295,6 +295,16 @@ Following Platform Chaos faults are available: + + + + + + + + + +
Injects ECS instance stop chaos on target ECS cluster ecs-instance-stop
RDS Instance DeleteInjects RDS instance delete chaos on target RDS instance/clusterrds-instance-delete
RDS Instance RebootInjects RDS instance reboot chaos on target RDS instance/clusterrds-instance-reboot
### GCP diff --git a/docs/chaos-engineering/introduction/hce-beta-release-guide.md b/docs/chaos-engineering/introduction/hce-beta-release-guide.md index 9a01c4103ad..d816b79f750 100644 --- a/docs/chaos-engineering/introduction/hce-beta-release-guide.md +++ b/docs/chaos-engineering/introduction/hce-beta-release-guide.md @@ -1,14 +1,14 @@ --- sidebar_position: 1 -title: HCE Beta Release Guide +title: HCE Release Guide --- # Introduction -The Beta-1 release of the Harness Chaos Engineering (HCE) module provides a **functional chaos experimentation platform** that helps users simulate a wide variety of real-world failures observed on Kubernetes, AWS and other infrastructure. The module helps carry out end-to-end chaos engineering practices, that includes defining chaos as code, controlling blast radius, validating hypotheses, automating experiments through pipeline integrations and analyzing resilience trends via metrics. +The release of the Harness Chaos Engineering (HCE) module provides a **functional chaos experimentation platform** that helps users simulate a wide variety of real-world failures observed on Kubernetes, AWS and other infrastructure. The module helps carry out end-to-end chaos engineering practices, that includes defining chaos as code, controlling blast radius, validating hypotheses, automating experiments through pipeline integrations and analyzing resilience trends via metrics. The chaos module is built on the open source CNCF project LitmusChaos, with additional features to enhance the user experience. -Below is a list of actions that you (user) can perform as a part of the Beta-1 release (Nov 14th, 2022). +Below is a list of actions that you (user) can perform as a part of the release (Nov 14th, 2022). 
## Managing Access to Chaos Resources
diff --git a/docs/chaos-engineering/introduction/release-notes.md b/docs/chaos-engineering/introduction/release-notes.md
new file mode 100644
index 00000000000..ee8c1eb7d0d
--- /dev/null
+++ b/docs/chaos-engineering/introduction/release-notes.md
@@ -0,0 +1,83 @@
+---
+sidebar_position: 1
+title: HCE Release Notes
+---
+
+Harness Chaos Engineering is updated regularly. Review the notes below for details about recent changes.
+
+:::note
+Harness deploys updates progressively to different Harness cluster hosting accounts. The features and fixes in the release notes may not be available in your cluster immediately.
+:::
+
+# December 2, 2022, version 0.4.2
+
+## What’s New
+
+1. Adds an update feature for ChaosHub, enabling users to update details like Git Connector, Repository Name, Branch Name & Name for an already connected ChaosHub.
+2. Adds CDN support for Chaos module static artifacts, making the UI load faster on clients' devices.
+3. Added version info in ChaosDriver & ChaosManager. Now, the versions will be available over the endpoints `/chaos/driver/api/version` & `/chaos/manager/api/version` for ChaosDriver & ChaosManager respectively.
+4. Adds a range filter dropdown in the Experiment Runs bar graph under Experiment overview, allowing users to set the range of last runs to be shown in the graph.
+5. Adds support for all fault statuses in the Experiment Runs graph. Previously only `Failed` & `Passed` faults were shown; now faults in `Awaited`, `Stopped` & `N/A` states will also be available under the Experiment Runs graph.
+6. Adds a manifest download button in the UI for Chaos Infrastructures, enabling seamless upgrades.
+7. Adds consistent loaders for all components & screens in the UI.
+
+## Early access features
+
+No early access features are available in this release.
+
+## Fixed Issues
+
+1. Fixes Enterprise ChaosHub being shown irrespective of the terms searched by users.
+2. 
Fixes the httpProbe schema in the UI to support the new response timeout changes for the HTTP probe. Now, probeTimeout for HTTP probes will be treated as response timeout & should be provided in seconds.
+3. Fixes the issue where the details of previously connected chaos infrastructure were getting pre-filled while connecting new chaos infrastructure.
+4. Fixes the Run button returning an error even when the Experiment run is already completed.
+5. Fixes the calendar on the Experiments & Experiment Runs pages having a default selection of one week. Now, all experiments & runs will be shown by default.
+6. Fixes panic error for k8sObjects and k8sLogs go-routines resulting in a closed channel error.
+7. Fixes the cancel (X) button & back button missing in the Enable Chaos Infrastructure screen.
+8. Fixes repeated error logs for ChaosHub in Chaos-Manager when it was unable to find some of the icons.
+9. Fixes the Expected Resilience Score changing to NaN when trying to override the same completely.
+10. Fixes resource-type not coming for aborting a Chaos Experiment in audit-trail.
+11. Fixes minor UI/UX issues, making the UI more user-friendly & more accessible.
+
+# November 14, 2022
+
+## Early access features
+
+The Harness Chaos Engineering (HCE) module, which you can use to perform chaos experiments on your applications and infrastructure, is now available for testing. To be part of this testing, contact [Harness Support](mailto:support@harness.io). HCE documentation, which includes user guides and [tutorials](https://developer.harness.io/tutorials/run-chaos-experiments), is available on the Harness Developer Hub. Harness recommends that you gain familiarity with the chaos experimentation workflow in HCE by following the instructions in [Your First Chaos Experiment Run](https://developer.harness.io/tutorials/run-chaos-experiments/first-chaos-engineering).
+
+### Known issues
+
+#### ChaosHub
+
+1. GitHub is the only Git provider for ChaosHubs.
+2. 
Details for an already connected ChaosHub can’t be updated.
+
+#### Chaos Infrastructure
+
+1. Chaos infrastructure can't be installed through Harness Delegate.
+2. Logs for chaos infrastructure can’t be viewed.
+3. The properties of chaos infrastructure can’t be updated. You will need to provide blacklisted namespaces.
+4. The properties of the environment to which the chaos infrastructure belongs can’t be updated.
+5. Configuring chaos infrastructure doesn’t provide support for Linux and Windows.
+
+#### Chaos Experiments
+
+1. Experiments with parallel faults can’t be created.
+2. Probe tunables can’t be updated or edited.
+3. A cron or recurring chaos experiment can’t be suspended or resumed.
+4. An individual fault in an experiment can’t be stopped through your input.
+5. A chaos experiment can’t be pushed to GitLab, Bitbucket, or Gerrit.
+6. A chaos experiment can’t be pushed from Azure to Git.
+7. SCM experiment push logs can’t be audited.
+
+#### CI Pipeline integration
+
+1. Optional assertion for chaos step failure can’t be provided during pipeline integration.
+2. The chaos error type(s) can’t be selected in a failure strategy.
+3. Timeouts can’t be defined for experiment execution.
+4. Access control can’t be gained for the chaos step addition.
+5. Pipeline template support can’t be obtained with the chaos steps.
+6. The experiment execution can’t be viewed from step output during the experiment run.
+7. Propagation can’t be aborted from chaos step to experiment execution.
+8. Information about propagation can’t be gained from pipeline to experiment (for audit purposes).
\ No newline at end of file
diff --git a/docs/chaos-engineering/technical-reference/architecture.md b/docs/chaos-engineering/technical-reference/architecture.md
index 92f67112998..8e2e34e0f51 100644
--- a/docs/chaos-engineering/technical-reference/architecture.md
+++ b/docs/chaos-engineering/technical-reference/architecture.md
@@ -2,11 +2,18 @@ title: Architecture sidebar_position: 1 ---
+Below is an overview of the HCE architecture.
+![Overview](./static/architecture/overview.png)
+
+Harness Chaos Engineering is split into two parts:
+1. **Harness Control Plane**
+2. **Chaos Infrastructures**
+
+The diagram below gives a peek into the HCE architecture.
![Architecture](./static/architecture/architecture.png)
-At large, Harness Chaos Engineering is split into two parts, the **Harness Control Plane** and **Chaos Infrastructures**.
-Harness control plane is the single source for collaboratively creating, scheduling, and monitoring Chaos Experiments, a set of Chaos Faults defined in a definite sequence to achieve a desired chaos impact on the target resources upon execution. Users can log in to the Harness platform and leverage the interactive Chaos Studio to define their chaos experiments and target various aspects of their infrastructure. The experiments can be actively monitored for their status and logs as they execute, and upon conclusion the result of the experiment run can also be observed.
+**Harness control plane** is the single source to collaboratively create, schedule, and monitor chaos experiments, a set of chaos faults defined in a sequence to achieve a desired chaos impact on the target resources on execution. You can log in to the Harness platform and leverage the interactive chaos studio to define your chaos experiments and target various aspects of the infrastructure. You can actively monitor the experiments for their status and logs as they execute. You can observe the result of the experiment run once it is complete.
-Chaos infrastructure is a service that runs within your target environment to aid HCE in accessing the target resources and injecting chaos at cloud-native scale. It can be either setup with a cluster-wide access or with a single namespace scope only. It maintains an active connection with the control plane and exchanges information such as the experiment logs and results, service health status, etc. Upon running an experiment from the control plane, chaos infrastructure executes it within the target environment. The execution of an experiment can be understood as the execution of individual faults and any other custom operations defined as part of the experiment. Multiple chaos infrastructures can exist as part of a single deployment environment, to target all the different resources present in an environment.
+**Chaos infrastructure** is a service that runs within your target environment to aid HCE in accessing the target resources and injecting chaos at cloud-native scale. It can be set up with cluster-wide access or with a single namespace scope. It maintains an active connection with the control plane and exchanges information such as the experiment logs and results, service health status, etc. Upon running an experiment from the control plane, chaos infrastructure executes it within the target environment. The experiment execution is the execution of individual faults and any other custom operations defined as part of the experiment. Multiple chaos infrastructures can exist as part of a single deployment environment, to target all the different resources present in an environment.
diff --git a/docs/chaos-engineering/technical-reference/static/architecture/architecture.png b/docs/chaos-engineering/technical-reference/static/architecture/architecture.png index fff322e209d..8cce69b1604 100644 Binary files a/docs/chaos-engineering/technical-reference/static/architecture/architecture.png and b/docs/chaos-engineering/technical-reference/static/architecture/architecture.png differ diff --git a/docs/chaos-engineering/technical-reference/static/architecture/overview.png b/docs/chaos-engineering/technical-reference/static/architecture/overview.png new file mode 100644 index 00000000000..741da6e8ab1 Binary files /dev/null and b/docs/chaos-engineering/technical-reference/static/architecture/overview.png differ diff --git a/docs/continuous-integration/ci-quickstarts/ci-concepts.md b/docs/continuous-integration/ci-quickstarts/ci-concepts.md index e58c038b4bd..2132f6d82fd 100644 --- a/docs/continuous-integration/ci-quickstarts/ci-concepts.md +++ b/docs/continuous-integration/ci-quickstarts/ci-concepts.md @@ -24,7 +24,7 @@ This topic describes CI concepts and provides a summary of the benefits of CI. Before learning about Harness CI, you should have an understanding of the following: -* [Harness Key Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Harness Key Concepts](../../getting-started/learn-harness-key-concepts.md) * [Drone and Harness](drone-and-harness.md) ### Visual Summary diff --git a/docs/continuous-integration/ci-quickstarts/ci-pipeline-basics.md b/docs/continuous-integration/ci-quickstarts/ci-pipeline-basics.md index 127c42759d2..e89a0c358bc 100644 --- a/docs/continuous-integration/ci-quickstarts/ci-pipeline-basics.md +++ b/docs/continuous-integration/ci-quickstarts/ci-pipeline-basics.md @@ -12,7 +12,7 @@ helpdocs_is_published: true This topic covers CI Pipeline basics to get you ready to start building Pipelines easily. 
-For details on general Harness concepts, see [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts). +For details on general Harness concepts, see [Learn Harness' Key Concepts](../../getting-started/learn-harness-key-concepts.md). ### Pipelines diff --git a/docs/continuous-integration/ci-quickstarts/ci-pipeline-quickstart.md b/docs/continuous-integration/ci-quickstarts/ci-pipeline-quickstart.md index 19d41261345..70a12e3339d 100644 --- a/docs/continuous-integration/ci-quickstarts/ci-pipeline-quickstart.md +++ b/docs/continuous-integration/ci-quickstarts/ci-pipeline-quickstart.md @@ -87,7 +87,9 @@ Pipelines are a collection of one or more stages. They manage and automate build * Click **Pipelines** and then **Create a Pipeline**. * Enter the name **CI Pipeline** and click **Start**. -As you enter a name for the Pipeline, the ID for the Pipeline is created. A Pipeline name can change, but an ID is permanent. The ID is how you can reference subordinate elements of a Pipeline, such as the names of variables within the Pipeline.### Step 2: Set Up the Build Stage +As you enter a name for the Pipeline, the ID for the Pipeline is created. A Pipeline name can change, but an ID is permanent. The ID is how you can reference subordinate elements of a Pipeline, such as the names of variables within the Pipeline. + +### Step 2: Set Up the Build Stage The "work horse" of most CI Pipelines is the Build Stage. This is where you specify the end-to-end workflow for your build: the codebase to build, the infrastructure to build it, where to post the finished artifact, and any additional tasks (such as automated tests or validations) you want the build to run. 
diff --git a/docs/continuous-integration/ci-quickstarts/test-intelligence-concepts.md b/docs/continuous-integration/ci-quickstarts/test-intelligence-concepts.md index fd32ee300ae..b7ffa52e791 100644 --- a/docs/continuous-integration/ci-quickstarts/test-intelligence-concepts.md +++ b/docs/continuous-integration/ci-quickstarts/test-intelligence-concepts.md @@ -18,7 +18,7 @@ Test Intelligence is supported for Java and .NET Core codebases only at this tim Before learning about Test Intelligence, you should understand the following: -* [Harness Key Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Harness Key Concepts](../../getting-started/learn-harness-key-concepts.md) * [Run a script in CI Stage](../use-ci/run-ci-scripts/run-a-script-in-a-ci-stage.md) ### Visual Summary diff --git a/docs/continuous-integration/ci-technical-reference/background-step-settings.md b/docs/continuous-integration/ci-technical-reference/background-step-settings.md index 3f9adef5d11..cc281303db1 100644 --- a/docs/continuous-integration/ci-technical-reference/background-step-settings.md +++ b/docs/continuous-integration/ci-technical-reference/background-step-settings.md @@ -61,7 +61,9 @@ Enable this option to run the container with escalated privileges. This is the e The path to the file(s) that store results in the JUnit XML format. Regex is supported. -This variable must be set for the background step to publish test results.#### Environment Variables +This variable must be set for the background step to publish test results. + +#### Environment Variables You can inject environment variables into a container and use them in the **Command** script. You need to enter a **Name** and **Value** for each variable. 
diff --git a/docs/continuous-integration/ci-technical-reference/build-and-push-to-docker-hub-step-settings.md b/docs/continuous-integration/ci-technical-reference/build-and-push-to-docker-hub-step-settings.md index d651ea19f46..9289fbe574d 100644 --- a/docs/continuous-integration/ci-technical-reference/build-and-push-to-docker-hub-step-settings.md +++ b/docs/continuous-integration/ci-technical-reference/build-and-push-to-docker-hub-step-settings.md @@ -26,7 +26,9 @@ The Harness Docker Registry Connector to use for uploading the image. See [Docke The name of the Repository. For example, `/`. -When using private Docker registries, use a fully qualified repo name.### Tags +When using private Docker registries, use a fully qualified repo name. + +### Tags [Docker build tag](https://docs.docker.com/engine/reference/commandline/build/#tag-an-image--t) (`-t`). @@ -50,7 +52,9 @@ Context represents a directory containing a Dockerfile which kaniko will use to Kaniko requires root access to build the docker image. If you have not already enabled root access, you will receive the following error: -`failed to create docker config file: open/kaniko/ .docker/config.json: permission denied`#### Labels +`failed to create docker config file: open/kaniko/ .docker/config.json: permission denied` + +#### Labels [Docker object labels](https://docs.docker.com/config/labels-custom-metadata/) to add metadata to the Docker image. @@ -58,7 +62,9 @@ Kaniko requires root access to build the docker image. If you have not already e The [Docker build-time variables](https://docs.docker.com/engine/reference/commandline/build/#set-build-time-variables---build-arg) (`--build-arg`). -![](./static/build-and-push-to-docker-hub-step-settings-11.png)#### Target +![](./static/build-and-push-to-docker-hub-step-settings-11.png) + +#### Target The [Docker target build stage](https://docs.docker.com/engine/reference/commandline/build/#specifying-target-build-stage---target) (`--target`). 
diff --git a/docs/continuous-integration/ci-technical-reference/built-in-cie-codebase-variables-reference.md b/docs/continuous-integration/ci-technical-reference/built-in-cie-codebase-variables-reference.md index 8303654b23a..51d3e075a99 100644 --- a/docs/continuous-integration/ci-technical-reference/built-in-cie-codebase-variables-reference.md +++ b/docs/continuous-integration/ci-technical-reference/built-in-cie-codebase-variables-reference.md @@ -336,7 +336,9 @@ User name of the Git account for the Push webhook event.  User avatar of the Git account for the Push webhook event. -For Bitbucket PR builds (whether by Trigger, Manual, or PR Number), the variable `<+codebase.commitSha>` returns a short sha. This is due to the Bitbucket webhook payload only sending short sha.### See Also +For Bitbucket PR builds (whether by Trigger, Manual, or PR Number), the variable `<+codebase.commitSha>` returns a short sha. This is due to the Bitbucket webhook payload only sending short sha. + +### See Also [Built-in Git Trigger Reference](https://docs.harness.io/article/rset0jry8q-triggers-reference#built_in_git_trigger_and_payload_expressions) diff --git a/docs/continuous-integration/ci-technical-reference/configure-run-tests-step-settings.md b/docs/continuous-integration/ci-technical-reference/configure-run-tests-step-settings.md index ae3697b8229..1fcf9e1f178 100644 --- a/docs/continuous-integration/ci-technical-reference/configure-run-tests-step-settings.md +++ b/docs/continuous-integration/ci-technical-reference/configure-run-tests-step-settings.md @@ -88,7 +88,9 @@ Enter the commands used for cleaning up the environment after running the tests. The path to the file(s) that store results in the JUnit XML format. You can enter multiple paths. [Glob](https://en.wikipedia.org/wiki/Glob_(programming)) is supported. 
-This variable must be set for the Run Tests tep to publish test results.### Environment Variables +This variable must be set for the Run Tests step to publish test results. + +### Environment Variables Variables passed to the container as environment variables and used in the Commands. diff --git a/docs/continuous-integration/ci-technical-reference/run-step-settings.md b/docs/continuous-integration/ci-technical-reference/run-step-settings.md index 2bc6dd953a2..3ab13f036f4 100644 --- a/docs/continuous-integration/ci-technical-reference/run-step-settings.md +++ b/docs/continuous-integration/ci-technical-reference/run-step-settings.md @@ -112,7 +112,9 @@ The syntax for referencing output variables between steps in different stages lo `<+stages.[stageID].execution.steps.[stepID].output.outputVariables.[varName]>` -The subsequent build job fails when exit 0 is present along with output variables.##### Accessing Environment Variables Between Stages +The subsequent build job fails when exit 0 is present along with output variables. + +##### Accessing Environment Variables Between Stages If you would like to access environment variables between stages, use an expression similar to the example listed below. @@ -122,7 +124,9 @@ You may also output the step variable to the stage/pipeline variable as they are `<+pipeline.stages.[stage Id].variables.BUILD_NUM>` -Environment variables may also be accessed when selecting the auto-suggest/ auto-complete feature in the Harness UI.#### Image Pull Policy +Environment variables may also be accessed when selecting the auto-suggest/auto-complete feature in the Harness UI. + +#### Image Pull Policy Select an option to set the pull policy for the image. 
diff --git a/docs/continuous-integration/ci-technical-reference/save-cache-to-gcs-step-settings.md b/docs/continuous-integration/ci-technical-reference/save-cache-to-gcs-step-settings.md index 49497f890ab..bbd39dbe20f 100644 --- a/docs/continuous-integration/ci-technical-reference/save-cache-to-gcs-step-settings.md +++ b/docs/continuous-integration/ci-technical-reference/save-cache-to-gcs-step-settings.md @@ -53,11 +53,15 @@ A list of the files/folders to cache. Add each file/folder separately. Select the archive format. -The default archive format is TAR.#### Override Cache +The default archive format is TAR. + +#### Override Cache Select this option to override the cache if the key already exists. -By default, the **Override Cache** option is set to False (unchecked).#### Run as User +By default, the **Override Cache** option is set to False (unchecked). + +#### Run as User Set the value to specify the user id for all processes in the pod, running in containers. See [Set the security context for a pod](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod). diff --git a/docs/continuous-integration/use-ci/build-and-upload-artifacts/build-and-push-to-gcr.md b/docs/continuous-integration/use-ci/build-and-upload-artifacts/build-and-push-to-gcr.md index 6daff4d9df2..af2a33b09fe 100644 --- a/docs/continuous-integration/use-ci/build-and-upload-artifacts/build-and-push-to-gcr.md +++ b/docs/continuous-integration/use-ci/build-and-upload-artifacts/build-and-push-to-gcr.md @@ -19,7 +19,7 @@ The following steps build an image and push it to GCR. 
* [CI Pipeline Quickstart](../../ci-quickstarts/ci-pipeline-quickstart.md) * [Delegates Overview](https://ngdocs.harness.io/article/2k7lnc7lvl-delegates-overview) * [CI Stage Settings](../../ci-technical-reference/ci-stage-settings.md) -* [Learn Harness' Key Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Learn Harness' Key Concepts](../../../getting-started/learn-harness-key-concepts.md) ### Step 1: Create the CI Stage diff --git a/docs/continuous-integration/use-ci/build-and-upload-artifacts/build-and-upload-an-artifact.md b/docs/continuous-integration/use-ci/build-and-upload-artifacts/build-and-upload-an-artifact.md index 77693b2664d..f9710a0e08b 100644 --- a/docs/continuous-integration/use-ci/build-and-upload-artifacts/build-and-upload-an-artifact.md +++ b/docs/continuous-integration/use-ci/build-and-upload-artifacts/build-and-upload-an-artifact.md @@ -37,7 +37,7 @@ You should be familiar with the following: * [CI Pipeline Quickstart](../../ci-quickstarts/ci-pipeline-quickstart.md) * [CI Stage Settings](../../ci-technical-reference/ci-stage-settings.md) * [Set Up Build Infrastructure](https://docs.harness.io/category/set-up-build-infrastructure) -* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Learn Harness' Key Concepts](../../../getting-started/learn-harness-key-concepts.md) ### Visual Summary diff --git a/docs/continuous-integration/use-ci/build-and-upload-artifacts/modify-and-override-build-settings-before-a-build.md b/docs/continuous-integration/use-ci/build-and-upload-artifacts/modify-and-override-build-settings-before-a-build.md index 94f0ae68998..8402041587c 100644 --- a/docs/continuous-integration/use-ci/build-and-upload-artifacts/modify-and-override-build-settings-before-a-build.md +++ b/docs/continuous-integration/use-ci/build-and-upload-artifacts/modify-and-override-build-settings-before-a-build.md @@ -60,7 +60,9 @@ You can now run your Maven test as: 
`mvn test -s settings.xml` If you create `settings.xml` file in the `~/.m2/` folder, Maven can read the secrets from the default location and you don't need to run the test with `-s` flag. -For example: If you use `echo '<+secrets.getValue("account.settingsXML")>' >``~/.m2/settings.xml.`You can now run your test as: `mvn test`### See Also +For example, if you use `echo '<+secrets.getValue("account.settingsXML")>' > ~/.m2/settings.xml`, you can then run your test as: `mvn test` + +### See Also * [Run a Script in a CI Stage](../run-ci-scripts/run-a-script-in-a-ci-stage.md) diff --git a/docs/continuous-integration/use-ci/build-and-upload-artifacts/upload-artifacts-to-jfrog.md b/docs/continuous-integration/use-ci/build-and-upload-artifacts/upload-artifacts-to-jfrog.md index d35550597b0..6dd0e139161 100644 --- a/docs/continuous-integration/use-ci/build-and-upload-artifacts/upload-artifacts-to-jfrog.md +++ b/docs/continuous-integration/use-ci/build-and-upload-artifacts/upload-artifacts-to-jfrog.md @@ -20,7 +20,7 @@ The following steps run SSH commands and push the artifacts to JFrog Artifactory * [CI Pipeline Quickstart](../../ci-quickstarts/ci-pipeline-quickstart.md) * [CI Stage Settings](../../ci-technical-reference/ci-stage-settings.md) * [Set Up Build Infrastructure](https://docs.harness.io/category/set-up-build-infrastructure) -* [Learn Harness' Key Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Learn Harness' Key Concepts](../../../getting-started/learn-harness-key-concepts.md) ### Step 1: Create the CI Stage diff --git a/docs/continuous-integration/use-ci/caching-ci-data/save-cache-in-gcs.md b/docs/continuous-integration/use-ci/caching-ci-data/save-cache-in-gcs.md index a7edda74421..a92e4c761ee 100644 --- a/docs/continuous-integration/use-ci/caching-ci-data/save-cache-in-gcs.md +++ b/docs/continuous-integration/use-ci/caching-ci-data/save-cache-in-gcs.md @@ -26,7 +26,7 @@ You cannot share access credentials or other [Text 
Secrets](https://ngdocs.harne * [CI Pipeline Quickstart](../../ci-quickstarts/ci-pipeline-quickstart.md) * [CI Stage Settings](../../ci-technical-reference/ci-stage-settings.md) * [Set Up Build Infrastructure](https://docs.harness.io/category/set-up-build-infrastructure) -* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Learn Harness' Key Concepts](../../../getting-started/learn-harness-key-concepts.md) ### Limitations diff --git a/docs/continuous-integration/use-ci/caching-ci-data/saving-cache.md b/docs/continuous-integration/use-ci/caching-ci-data/saving-cache.md index 9b04e1d4aa7..fd05140e8f7 100644 --- a/docs/continuous-integration/use-ci/caching-ci-data/saving-cache.md +++ b/docs/continuous-integration/use-ci/caching-ci-data/saving-cache.md @@ -19,12 +19,14 @@ In a Harness CI Pipeline, you can save the cache to an AWS S3 bucket in one Stag The topic explains how to configure the **Save Cache to S3** and **Restore Cache from S3** steps in CI using a two-stage Pipeline. -You cannot share access credentials or other [Text Secrets](https://ngdocs.harness.io/article/osfw70e59c-add-use-text-secrets) across Stages.### Before You Begin +You cannot share access credentials or other [Text Secrets](https://ngdocs.harness.io/article/osfw70e59c-add-use-text-secrets) across Stages. 
+ +### Before You Begin * [CI Pipeline Quickstart](../../ci-quickstarts/ci-pipeline-quickstart.md) * [CI Stage Settings](../../ci-technical-reference/ci-stage-settings.md) * [Set Up Build Infrastructure](https://docs.harness.io/category/set-up-build-infrastructure) -* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Learn Harness' Key Concepts](../../../getting-started/learn-harness-key-concepts.md) ### Limitations diff --git a/docs/continuous-integration/use-ci/caching-ci-data/share-ci-data-across-steps-and-stages.md b/docs/continuous-integration/use-ci/caching-ci-data/share-ci-data-across-steps-and-stages.md index 37582956d42..0e184d3bb93 100644 --- a/docs/continuous-integration/use-ci/caching-ci-data/share-ci-data-across-steps-and-stages.md +++ b/docs/continuous-integration/use-ci/caching-ci-data/share-ci-data-across-steps-and-stages.md @@ -20,7 +20,9 @@ You can declare Shared Paths for a Stage. Any Step in the Stage can create, retr To declare a Shared Path, open the Stage, go to the Overview tab, click **Shared Paths**, and add the subfolder such as `/root/.m2`. Once you do this, any Step can then access `/root/.m2`. 
-![](./static/share-ci-data-across-steps-and-stages-01.png)### Share Data Across Stages +![](./static/share-ci-data-across-steps-and-stages-01.png) + +### Share Data Across Stages You can share data across Stages using AWS or GCS buckets: diff --git a/docs/continuous-integration/use-ci/codebase-configuration/create-and-configure-a-codebase.md b/docs/continuous-integration/use-ci/codebase-configuration/create-and-configure-a-codebase.md index f74528efa1e..e596472c239 100644 --- a/docs/continuous-integration/use-ci/codebase-configuration/create-and-configure-a-codebase.md +++ b/docs/continuous-integration/use-ci/codebase-configuration/create-and-configure-a-codebase.md @@ -33,7 +33,7 @@ Editing the Codebase for a Pipeline: * [CI Pipeline Quickstart](../../ci-quickstarts/ci-pipeline-quickstart.md) * [Delegates Overview](https://docs.harness.io/article/2k7lnc7lvl-delegates-overview) * [CI Stage Settings](../../ci-technical-reference/ci-stage-settings.md) -* [Learn Harness Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Learn Harness Key Concepts](../../../getting-started/learn-harness-key-concepts.md) ### Create or Edit a Codebase Connector diff --git a/docs/continuous-integration/use-ci/run-ci-scripts/clone-and-process-multiple-codebases-in-the-same-pipeline.md b/docs/continuous-integration/use-ci/run-ci-scripts/clone-and-process-multiple-codebases-in-the-same-pipeline.md index b26d1975637..4b70240ef7c 100644 --- a/docs/continuous-integration/use-ci/run-ci-scripts/clone-and-process-multiple-codebases-in-the-same-pipeline.md +++ b/docs/continuous-integration/use-ci/run-ci-scripts/clone-and-process-multiple-codebases-in-the-same-pipeline.md @@ -27,7 +27,7 @@ To go through this workflow, you need the following: * A familiarity with basic Harness CI concepts: + [CI Pipeline Tutorial](../../ci-quickstarts/ci-pipeline-quickstart.md) - + [Learn Harness' Key 
Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) + + [Learn Harness' Key Concepts](../../../getting-started/learn-harness-key-concepts.md) * A familiarity with Build Stage settings: + [CI Build Stage Settings](../../ci-technical-reference/ci-stage-settings.md) * A familiarity with how Pipelines use Codebases: @@ -98,7 +98,9 @@ Now the Dockerfile is in the correct location to build the image: Now that the files from your repos are in one common workspace, you can add a Build Step (in this case, Build and Push an Image to Docker Registry) to your Stage. -![](./static/clone-and-process-multiple-codebases-in-the-same-pipeline-03.png)### Step 5: Run the Pipeline +![](./static/clone-and-process-multiple-codebases-in-the-same-pipeline-03.png) + +### Step 5: Run the Pipeline Now you can run your Pipeline. diff --git a/docs/continuous-integration/use-ci/run-ci-scripts/run-a-script-in-a-ci-stage.md b/docs/continuous-integration/use-ci/run-ci-scripts/run-a-script-in-a-ci-stage.md index 4900040b120..7491fa87bd7 100644 --- a/docs/continuous-integration/use-ci/run-ci-scripts/run-a-script-in-a-ci-stage.md +++ b/docs/continuous-integration/use-ci/run-ci-scripts/run-a-script-in-a-ci-stage.md @@ -22,7 +22,7 @@ To go through this workflow, you need the following: * A familiarity with basic Harness CI concepts: + [CI Pipeline Tutorial](../../ci-quickstarts/ci-pipeline-quickstart.md) - + [Learn Harness' Key Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) + + [Learn Harness' Key Concepts](../../../getting-started/learn-harness-key-concepts.md) * A familiarity with Build Stage settings: + [CI Build Stage Settings](../../ci-technical-reference/ci-stage-settings.md) * A familiarity with how Pipelines use Codebases: diff --git a/docs/continuous-integration/use-ci/run-ci-scripts/run-docker-in-docker-in-a-ci-stage.md b/docs/continuous-integration/use-ci/run-ci-scripts/run-docker-in-docker-in-a-ci-stage.md index 
266c1d92dba..0477584209d 100644 --- a/docs/continuous-integration/use-ci/run-ci-scripts/run-docker-in-docker-in-a-ci-stage.md +++ b/docs/continuous-integration/use-ci/run-ci-scripts/run-docker-in-docker-in-a-ci-stage.md @@ -23,7 +23,7 @@ To go through this workflow, you need the following: * A familiarity with basic Harness CI concepts: + [CI Pipeline Quickstart](../../ci-quickstarts/ci-pipeline-quickstart.md) - + [Learn Harness' Key Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) + + [Learn Harness' Key Concepts](../../../getting-started/learn-harness-key-concepts.md) * A familiarity with Build Stage settings: + [CI Build Stage Settings](../../ci-technical-reference/ci-stage-settings.md) diff --git a/docs/continuous-integration/use-ci/set-up-build-infrastructure/define-a-mac-os-build-infrastructure.md b/docs/continuous-integration/use-ci/set-up-build-infrastructure/define-a-mac-os-build-infrastructure.md index 33b5338f821..8c3366123fa 100644 --- a/docs/continuous-integration/use-ci/set-up-build-infrastructure/define-a-mac-os-build-infrastructure.md +++ b/docs/continuous-integration/use-ci/set-up-build-infrastructure/define-a-mac-os-build-infrastructure.md @@ -23,7 +23,7 @@ This topic assumes you're familiar with the following: * [CI Pipeline Quickstart](../../ci-quickstarts/ci-pipeline-quickstart.md) * [Delegates Overview](https://ngdocs.harness.io/article/2k7lnc7lvl-delegates-overview) * [CI Stage Settings](../../ci-technical-reference/ci-stage-settings.md) -* [Learn Harness' Key Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Learn Harness' Key Concepts](../../../getting-started/learn-harness-key-concepts.md) * [VM Runner](https://docs.drone.io/runner/vm/overview/) #### Prerequisites diff --git a/docs/continuous-integration/use-ci/set-up-build-infrastructure/define-a-macos-build-infrastructure.md 
b/docs/continuous-integration/use-ci/set-up-build-infrastructure/define-a-macos-build-infrastructure.md index 984442d0c31..0c49508c77e 100644 --- a/docs/continuous-integration/use-ci/set-up-build-infrastructure/define-a-macos-build-infrastructure.md +++ b/docs/continuous-integration/use-ci/set-up-build-infrastructure/define-a-macos-build-infrastructure.md @@ -29,7 +29,7 @@ This topic assumes you're familiar with the following: * [CI Pipeline Quickstart](../../ci-quickstarts/ci-pipeline-quickstart.md) * [Delegates Overview](https://docs.harness.io/article/2k7lnc7lvl-delegates-overview) * [CI Stage Settings](../../ci-technical-reference/ci-stage-settings.md) -* [Learn Harness' Key Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Learn Harness' Key Concepts](../../../getting-started/learn-harness-key-concepts.md) * [VM Runner](https://docs.drone.io/runner/vm/overview/) ### Step 1: Set up the MacOS EC2 Instance diff --git a/docs/continuous-integration/use-ci/set-up-build-infrastructure/set-up-a-kubernetes-cluster-build-infrastructure.md b/docs/continuous-integration/use-ci/set-up-build-infrastructure/set-up-a-kubernetes-cluster-build-infrastructure.md index 9e39524101a..6359020aa42 100644 --- a/docs/continuous-integration/use-ci/set-up-build-infrastructure/set-up-a-kubernetes-cluster-build-infrastructure.md +++ b/docs/continuous-integration/use-ci/set-up-build-infrastructure/set-up-a-kubernetes-cluster-build-infrastructure.md @@ -81,7 +81,7 @@ Autopilot might be cheaper than standard Kubernetes if you only run builds occas * [CI Pipeline Quickstart](../../ci-quickstarts/ci-pipeline-quickstart.md) * [Delegates Overview](https://docs.harness.io/article/2k7lnc7lvl-delegates-overview) * [CI Stage Settings](../../ci-technical-reference/ci-stage-settings.md) -* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Learn Harness' Key 
Concepts](../../../getting-started/learn-harness-key-concepts.md) ### Visual Summary diff --git a/docs/continuous-integration/use-ci/set-up-build-infrastructure/set-up-an-aws-vm-build-infrastructure.md b/docs/continuous-integration/use-ci/set-up-build-infrastructure/set-up-an-aws-vm-build-infrastructure.md index db3c0fd08e8..af706b589c3 100644 --- a/docs/continuous-integration/use-ci/set-up-build-infrastructure/set-up-an-aws-vm-build-infrastructure.md +++ b/docs/continuous-integration/use-ci/set-up-build-infrastructure/set-up-an-aws-vm-build-infrastructure.md @@ -29,7 +29,7 @@ This topic assumes you're familiar with the following: * [CI Pipeline Quickstart](../../ci-quickstarts/ci-pipeline-quickstart.md) * [Delegates Overview](https://ngdocs.harness.io/article/2k7lnc7lvl-delegates-overview) * [CI Stage Settings](../../ci-technical-reference/ci-stage-settings.md) -* [Learn Harness' Key Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Learn Harness' Key Concepts](../../../getting-started/learn-harness-key-concepts.md) * [VM Runner](https://docs.drone.io/runner/vm/overview/) ### Alternate Workflow: Set Up using Terraform diff --git a/docs/continuous-integration/use-ci/set-up-test-intelligence/set-up-test-intelligence.md b/docs/continuous-integration/use-ci/set-up-test-intelligence/set-up-test-intelligence.md index 768f7ec336c..adbe0bfbac0 100644 --- a/docs/continuous-integration/use-ci/set-up-test-intelligence/set-up-test-intelligence.md +++ b/docs/continuous-integration/use-ci/set-up-test-intelligence/set-up-test-intelligence.md @@ -23,7 +23,9 @@ In this topic, we'll cover how to set up Test Intelligence in Harness CI Stage. Test Intelligence is supported for Java and .NET Core codebases only at this time. -Currently, Test Intelligence for .NET is behind the Feature Flag `TI_DOTNET`. 
Contact [Harness Support](mailto:support@harness.io) to enable the feature.### Step 1: Create the CI Stage +Currently, Test Intelligence for .NET is behind the Feature Flag `TI_DOTNET`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. + +### Step 1: Create the CI Stage In your Harness Pipeline, click **Add Stage**, and then click Build. diff --git a/docs/continuous-integration/use-ci/use-drone-plugins/run-a-drone-plugin-in-ci.md b/docs/continuous-integration/use-ci/use-drone-plugins/run-a-drone-plugin-in-ci.md index 601e386359f..5089c569fbf 100644 --- a/docs/continuous-integration/use-ci/use-drone-plugins/run-a-drone-plugin-in-ci.md +++ b/docs/continuous-integration/use-ci/use-drone-plugins/run-a-drone-plugin-in-ci.md @@ -21,7 +21,7 @@ To install and run a plugin, you need the following: * A familiarity with basic Harness CI concepts: + [CI Pipeline Quickstart](../../ci-quickstarts/ci-pipeline-quickstart.md) - + [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) + + [Learn Harness' Key Concepts](../../../getting-started/learn-harness-key-concepts.md) * A build infrastructure and Delegate to run builds: + [Set Up Build Infrastructure](https://docs.harness.io/category/set-up-build-infrastructure) + [Install a Kubernetes Delegate](https://docs.harness.io/article/f9bd10b3nj) *or* [Install a Docker Delegate](https://docs.harness.io/article/cya29w2b99) diff --git a/docs/continuous-integration/use-ci/use-drone-plugins/run-a-git-hub-action-in-cie.md b/docs/continuous-integration/use-ci/use-drone-plugins/run-a-git-hub-action-in-cie.md index 1ada09b97e6..8dd99e88edc 100644 --- a/docs/continuous-integration/use-ci/use-drone-plugins/run-a-git-hub-action-in-cie.md +++ b/docs/continuous-integration/use-ci/use-drone-plugins/run-a-git-hub-action-in-cie.md @@ -26,7 +26,7 @@ In this topic, we cover using GitHub Actions in the Plugin step with one of the * [CI Pipeline 
Quickstart](../../ci-quickstarts/ci-pipeline-quickstart.md) * [CI Stage Settings](../../ci-technical-reference/ci-stage-settings.md) * [Set Up Build Infrastructure](https://docs.harness.io/category/set-up-build-infrastructure) -* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-) +* [Learn Harness' Key Concepts](../../../getting-started/learn-harness-key-concepts.md) ### Step 1: Create the CI Stage diff --git a/docs/feature-flags/1-ff-onboarding/1-cf-feature-flag-overview.md b/docs/feature-flags/1-ff-onboarding/1-cf-feature-flag-overview.md index ea6d191fe7c..c4c25aab4fb 100644 --- a/docs/feature-flags/1-ff-onboarding/1-cf-feature-flag-overview.md +++ b/docs/feature-flags/1-ff-onboarding/1-cf-feature-flag-overview.md @@ -21,7 +21,7 @@ if(HarnessFeatureFlag["newamazingfeature"] == true) { A Feature Flag is a decision point in your code that can change the behavior of your software. It can help you plan the following strategies: * Who gets access to the feature first -* Who can beta test the changes +* Who can test the changes * Progressive rollouts of the feature * Turn on a feature on a specific date @@ -53,5 +53,5 @@ In a percentage-based rollout, small numbers of users are selected to test the n ### User feedback -The ability to release changes to a limited set of users makes it much easier to gather feedback about the product. You can create a beta group of users and target feature flags specifically to that group. Testing new features with a subset of users allows developers to find and address the bugs before the major release. +The ability to release changes to a limited set of users makes it much easier to gather feedback about the product. You can create a group of users and target feature flags specifically to that group. Testing new features with a subset of users allows developers to find and address the bugs before the major release. 
diff --git a/docs/feature-flags/2-ff-using-flags/4-ff-target-management/2-add-target-groups.md b/docs/feature-flags/2-ff-using-flags/4-ff-target-management/2-add-target-groups.md index b953784e920..7d48f03b283 100644 --- a/docs/feature-flags/2-ff-using-flags/4-ff-target-management/2-add-target-groups.md +++ b/docs/feature-flags/2-ff-using-flags/4-ff-target-management/2-add-target-groups.md @@ -114,7 +114,9 @@ To add Targets based on conditions: ``` *Figure 6: Viewing the condition for adding a Target* -When you add Targets based on conditions, on the **Target Management:Targets** page, the Target Group is **not** displayed in the **Target Groups** column.### Add or exclude Targets from Target settings +When you add Targets based on conditions, on the **Target Management:Targets** page, the Target Group is **not** displayed in the **Target Groups** column. + +### Add or exclude Targets from Target settings You can use Target Settings to include or exclude Targets from a Target Group. Complete the following steps to include or exclude Targets using the Target Settings: diff --git a/docs/feature-flags/2-ff-using-flags/6-ff-build-pipeline/1-build-feature-flag-pipeline.md b/docs/feature-flags/2-ff-using-flags/6-ff-build-pipeline/1-build-feature-flag-pipeline.md index aa2d24ba97e..7fc9f8fd433 100644 --- a/docs/feature-flags/2-ff-using-flags/6-ff-build-pipeline/1-build-feature-flag-pipeline.md +++ b/docs/feature-flags/2-ff-using-flags/6-ff-build-pipeline/1-build-feature-flag-pipeline.md @@ -25,7 +25,7 @@ This topic explains how to build a Feature Flag Pipeline. ## Before you begin -You should be familiar with the [Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) and how to [Create Organizations and Projects](https://docs.harness.io/article/36fw2u92i4-create-an-organization). 
+You should be familiar with the [Harness' Key Concepts](../../../getting-started/learn-harness-key-concepts.md) and how to [Create Organizations and Projects](https://docs.harness.io/article/36fw2u92i4-create-an-organization). ## Create a Pipeline diff --git a/docs/feature-flags/2-ff-using-flags/8-harness-policy-engine.md b/docs/feature-flags/2-ff-using-flags/8-harness-policy-engine.md index 1a3e0237fe9..c728ad96859 100644 --- a/docs/feature-flags/2-ff-using-flags/8-harness-policy-engine.md +++ b/docs/feature-flags/2-ff-using-flags/8-harness-policy-engine.md @@ -38,7 +38,7 @@ This topic provides an overview of how Harness Policy Engine works with Feature Before using Harness Policy Engine, you should understand the following: -* [Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Harness' Key Concepts](../../getting-started/learn-harness-key-concepts.md) * [How to Write](https://www.openpolicyagent.org/docs/latest/policy-language/) [Rego for OPA](https://www.openpolicyagent.org/docs/latest/policy-language/) New to Rego? Use the following resources to learn it: diff --git a/docs/feature-flags/4-ff-sdks/1-sdk-overview/2-communication-sdks-harness-feature-flags.md b/docs/feature-flags/4-ff-sdks/1-sdk-overview/2-communication-sdks-harness-feature-flags.md index 4b7bb26dc8b..425e2d18aee 100644 --- a/docs/feature-flags/4-ff-sdks/1-sdk-overview/2-communication-sdks-harness-feature-flags.md +++ b/docs/feature-flags/4-ff-sdks/1-sdk-overview/2-communication-sdks-harness-feature-flags.md @@ -54,7 +54,9 @@ Streaming provides a persistent connection to the SDKs. Harness Feature Flags us In polling mode, you can define the interval of time at which you want to receive updates of Flag states from the Feature Flag. The SDK will then make HTTP requests to Feature Flags to retrieve flag state changes. 
-It is important to know that the Harness Feature Flag does not send any information as part of these requests, it is simply a query to update the status of a flag on the SDK side.#### Communication loop between Harness and the SDKs +It is important to know that the Harness Feature Flag does not send any information as part of these requests; it is simply a query to update the status of a flag on the SDK side. + +#### Communication loop between Harness and the SDKs diff --git a/docs/feature-flags/4-ff-sdks/2-client-sdks/1-android-sdk-reference.md b/docs/feature-flags/4-ff-sdks/2-client-sdks/1-android-sdk-reference.md index 73bacd8e4bd..3e4ec07e6a1 100644 --- a/docs/feature-flags/4-ff-sdks/2-client-sdks/1-android-sdk-reference.md +++ b/docs/feature-flags/4-ff-sdks/2-client-sdks/1-android-sdk-reference.md @@ -18,7 +18,9 @@ This topic describes how to use the Harness Feature Flags Android SDK for your A For getting started quickly, you can use our [sample code from the SDK README](https://github.com/harness/ff-android-client-sdk/blob/main/README.md). You can also [clone](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) and run a sample application from the [Android SDK GitHub Repository.](https://github.com/harness/ff-android-client-sdk) -The SDK caches your Feature Flags. If the cache can't be accessed, the `defaultValue` is used.### Before you begin +The SDK caches your Feature Flags. If the cache can't be accessed, the `defaultValue` is used. + +### Before you begin Make sure you read and understand: @@ -314,7 +316,9 @@ Alternatively, to use the Android [log class](https://developer.android.com/ref ``` CfLog.runtimeModeOn() ``` -Standard Android logging is the default logging strategy, so turning on runtime mode is not required.#### Use our public API methods +Standard Android logging is the default logging strategy, so turning on runtime mode is not required. 
+ +#### Use our public API methods Our public API exposes the following methods that you can use: diff --git a/docs/feature-flags/4-ff-sdks/2-client-sdks/3-ios-sdk-reference.md b/docs/feature-flags/4-ff-sdks/2-client-sdks/3-ios-sdk-reference.md index d60b8597428..05c910c4145 100644 --- a/docs/feature-flags/4-ff-sdks/2-client-sdks/3-ios-sdk-reference.md +++ b/docs/feature-flags/4-ff-sdks/2-client-sdks/3-ios-sdk-reference.md @@ -17,7 +17,9 @@ This topic describes how to use the Harness Feature Flags iOS SDK for your iOS a For getting started quickly, you can use our [sample code from the SDK README](https://github.com/harness/ff-ios-client-sdk/blob/main/README.md). You can also [clone](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) and run a sample application from the [iOS SDK GitHub Repository.](https://github.com/harness/ff-ios-client-sdk) -The SDK caches your Feature Flags. If the cache can't be accessed, the `defaultValue` is used.### Before you begin +The SDK caches your Feature Flags. If the cache can't be accessed, the `defaultValue` is used. + +### Before you begin Make sure you read and understand: @@ -271,7 +273,9 @@ There are different methods for the different Variation types and for each metho * Identifier of the Flag you want to evaluate * The default Variation -The Flag is evaluated against the Target you pass in when initializing the SDK.#### Evaluate a string Variation +The Flag is evaluated against the Target you pass in when initializing the SDK. 
+ +#### Evaluate a string Variation ``` diff --git a/docs/feature-flags/4-ff-sdks/3-server-sdks/3-integrate-feature-flag-with-java-sdk.md b/docs/feature-flags/4-ff-sdks/3-server-sdks/3-integrate-feature-flag-with-java-sdk.md index 6105e8f7e7d..0a7c92bb6f9 100644 --- a/docs/feature-flags/4-ff-sdks/3-server-sdks/3-integrate-feature-flag-with-java-sdk.md +++ b/docs/feature-flags/4-ff-sdks/3-server-sdks/3-integrate-feature-flag-with-java-sdk.md @@ -59,7 +59,9 @@ Add the following dependency in your project's pom.xml file:     1.1.5 ``` -If you are using the Harness Java sample application from the [Java SDK GitHub repository](https://github.com/harness/ff-java-server-sdk), do not add the Maven dependency in the `pom.xml` file as it has already been added.#### Install using Gradle +If you are using the Harness Java sample application from the [Java SDK GitHub repository](https://github.com/harness/ff-java-server-sdk), do not add the Maven dependency in the `pom.xml` file as it has already been added. + +#### Install using Gradle ``` diff --git a/docs/feature-flags/4-ff-sdks/3-server-sdks/5-node-js-sdk-reference.md b/docs/feature-flags/4-ff-sdks/3-server-sdks/5-node-js-sdk-reference.md index 4f7667f4aa5..842cb2df9db 100644 --- a/docs/feature-flags/4-ff-sdks/3-server-sdks/5-node-js-sdk-reference.md +++ b/docs/feature-flags/4-ff-sdks/3-server-sdks/5-node-js-sdk-reference.md @@ -285,7 +285,9 @@ To remove all listeners, use: ``` off(Event.READY); ``` -If you call `off()` without parameters it will close the client.### Test your app is connected to Harness +If you call `off()` without parameters it will close the client. + +### Test your app is connected to Harness When you receive a response showing the current status of your Feature Flag, go to the Harness Platform and toggle the Flag on and off. Then, check your app to verify if the Flag Variation displayed is updated with the Variation you toggled. 
diff --git a/docs/first-gen/_category_.json b/docs/first-gen/_category_.json new file mode 100644 index 00000000000..7c8e654cd54 --- /dev/null +++ b/docs/first-gen/_category_.json @@ -0,0 +1 @@ +{"label": "Continuous Delivery", "position": 30, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Continuous Delivery"}, "customProps": { "helpdocs_category_id": "1qtels4t8p", "helpdocs_parent_category_id": "yj3d4lvxn0"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/_category_.json b/docs/first-gen/continuous-delivery/_category_.json new file mode 100644 index 00000000000..49e37708e92 --- /dev/null +++ b/docs/first-gen/continuous-delivery/_category_.json @@ -0,0 +1,15 @@ +{ + "label": "Continuous Delivery", + "position": 10, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "Continuous Delivery" + }, + "customProps": { + "helpdocs_category_id": "w51ys7qkag", + "helpdocs_parent_category_id": "1qtels4t8p" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/aws-deployments/_category_.json b/docs/first-gen/continuous-delivery/aws-deployments/_category_.json new file mode 100644 index 00000000000..be3543749b5 --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/_category_.json @@ -0,0 +1 @@ +{"label": "AWS Deployments and Provisioning", "position": 20, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "AWS Deployments and Provisioning"}, "customProps": { "helpdocs_category_id": "qjdydj4rcl", "helpdocs_parent_category_id": "1qtels4t8p"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/_category_.json b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/_category_.json new file mode 100644 index 00000000000..a3b9430ef61 --- /dev/null +++ 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/_category_.json @@ -0,0 +1 @@ +{"label": "AWS AMI Deployments", "position": 20, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "AWS AMI Deployments"}, "customProps": { "helpdocs_category_id": "mizega9tt6"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/ami-blue-green.md b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/ami-blue-green.md new file mode 100644 index 00000000000..4347bccfeba --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/ami-blue-green.md @@ -0,0 +1,618 @@ +--- +title: AMI Blue/Green Deployment +description: Create a Blue/Green deployment for AMI instances. +# sidebar_position: 2 +helpdocs_topic_id: vw71c7rxhp +helpdocs_category_id: mizega9tt6 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This guide outlines a typical configuration and execution of an AMI (Amazon Machine Image) Blue/Green deployment, in the following sections. 
+ +* [Before You Begin](#before_you_begin) +* [Overview](#overview) +* [Limitations](#limitations) +* [Blue/Green with Incremental Traffic Shift Summary](#blue_green_with_incremental_traffic_shift_summary) +* [Blue/Green with Instant Traffic Shift Summary](#blue_green_with_instant_traffic_shift_summary) +* [Prerequisites](#prerequisites) +* [AWS Setup (Example)](#aws_setup_example) +* [Define the Blue/Green Infrastructure](#define_the_blue_green_infrastructure) +* [Infrastructure Provisioners](#infrastructure_provisioners) +* [Blue/Green with Incremental Traffic Shift](#blue_green_with_incremental_traffic_shift) +* [Blue/Green with Instant Traffic Shift](#blue_green_with_instant_traffic_shift) +* [Rollbacks and Downsizing Old ASGs](#rollbacks_and_downsizing_old_as_gs) +* [Support for Scheduled Scaling](#support_for_scheduled_scaling) +* [Troubleshooting](#troubleshooting) + +### Before You Begin + +* [AMI Basic Deployment](ami-deployment.md) + +### Overview + +There are two Blue/Green deployment options for AMI, defined by the traffic-shifting strategy you want to use: + +* **Incrementally Shift Traffic** — In this Workflow strategy, you specify a Production Listener and Rule with two Target Groups for the new ASG to use. Next, you add multiple **Shift Traffic Weight** steps. +Each Shift Traffic Weight step increments the percentage of traffic that shifts to the Target Group for the new ASG. +Typically, you add Approval steps between each Shift Traffic Weight to verify that the traffic may be increased. +* **Instantly Shift Traffic** — In this Workflow strategy, you specify Production and Stage Listener Ports and Rules to use, and then a **Swap Production with Stage** step swaps all traffic from Stage to Production. + +You specify the traffic shift strategy when you create the Harness Blue/Green Workflow for your AMI deployment. The steps available in the Workflow depend on the strategy you select.
+ +### Limitations + +* If your base Auto Scaling Group is configured in AWS with [scaling policies](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html#as-scaling-types), Harness will apply those policies in your Workflow's *final* **Upgrade AutoScaling Group** step. + + Harness does not support copying ASG scaling policies with **Metric Type** value **Application Load Balancer request count per target**. +* Harness specifically supports AWS *target* tracking scaling policies. For details, see AWS' [Dynamic Scaling for Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html#as-scaling-types) topic. + +### Blue/Green with Incremental Traffic Shift Summary + +This deployment method lets you add Workflow steps to incrementally shift traffic from the Target Group used by the previous ASG to the Target Group used by the new ASG you are deploying. + +With this strategy, you are not shifting traffic between stage and production environments. You are shifting traffic incrementally for a production environment. In this way, it is similar to a Canary strategy. + +However, in a Canary deployment, the percentage of traffic that goes to the new ASG is determined by the number of instances (for example, 25% of 4 instances) or the forwarding policy of the load balancer. + +With this Incremental Traffic Shift strategy, you are controlling the percentage of traffic sent to the new ASG. For example, 25% of all traffic. + +In this topic, we will review the requirements, and then describe how the traffic shifting works. + +Next, we will walk through building the Workflow for the deployment strategy. + +### Blue/Green with Instant Traffic Shift Summary + +In this strategy, you specify Production and Stage Listener Ports and Rules to use, and then a **Swap Production with Stage** step swaps **all** traffic from Stage to Production.
+ +A Blue/Green deployment reliably deploys your AMI(s) by maintaining new and old versions of Auto Scaling Groups (ASGs) that are built using these AMIs. The ASGs run behind an Application Load Balancer (ALB) using two listeners, Stage and Prod. These listeners forward respectively to two Target Groups (TGs), Stage and Prod, where the new and old ASGs are run. + +In the first stage of deployment, the new ASG—created using the new AMI you are deploying—is attached to the Stage Target Group: + +![](./static/ami-blue-green-44.png) + + +Blue/Green deployments are achieved by swapping routes between the Target Groups—always attaching the new ASG first to the Stage Target Group, and then to the Prod Target Group: + +![](./static/ami-blue-green-45.png) + + +In Amazon Web Services, you configure a base Launch Configuration that Harness will use when it creates new Launch Configurations; a base Auto Scaling Group that uses the base Launch Configuration; and the Stage and Prod Target Groups. In Harness, you identify the Region, base Auto Scaling Group, and Stage and Prod Target Groups that you've configured in AWS. + +This guide outlines the required setup in both AWS and Harness. + +### Prerequisites + +An AMI Blue/Green deployment requires you to set up the following resources within AWS (example setup [below](#aws_setup_bg)): + +* A working AMI that Harness will use to create the instances in the new ASGs that Harness creates. +* An AWS [Launch Configuration](https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchConfiguration.html), whose security group allows inbound access to your Application Load Balancer's listener ports. +* An [Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg.html) (ASG), which Harness uses as a template for the ASGs that Harness creates.
+* A pair of [Target Groups](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html)—typically staging (Stage) and production (Prod)—both with the **instance** target type. +* An [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-application-load-balancer.html) (ALB), with listeners for both your Target Groups' ports. + +Within Harness, you'll need to set up the following resources (some of which you might have already created for an AMI [Basic deployment](ami-deployment.md#basic-deploy)): + +* A Delegate [installed and running](ami-deployment.md#install-and-run-the-harness-delegate) in an AWS instance. +* An AWS [Cloud Provider](ami-deployment.md#cloud-provider) configured to assume the Delegate's IAM role for the connection to AWS. +* An AMI-based Service (which can be any Service you've already set up for an [AMI Basic deployment](ami-deployment.md#basic-deploy)). +* An Environment with an Infrastructure Definition that specifies your ASG and your Stage and Prod Target Groups. + +You do not need to register instances for your Target Groups. Harness will perform that step during deployment. + +#### Cloud Provider Requirements for Blue/Green Deployments + +Ensure that the IAM role applied to the AWS access key user or Harness Delegate host has the policies described in [Policies Required: AWS AMI/ASG Deployments](https://docs.harness.io/article/wt1gnigme7-add-amazon-web-services-cloud-provider#policies_required_aws_ami_asg_deployments). 
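
As a rough illustration of what granting those permissions can look like, the sketch below attaches an inline policy covering the ASG, EC2 describe, and ELB actions this guide exercises. The action list, role name, and policy name are assumptions for illustration only; the linked Policies Required topic is the authoritative list.

```python
import json

# Illustrative only -- consult the linked "Policies Required" doc for the
# authoritative permissions. These namespaces are the ones an AMI
# Blue/Green deployment touches.
DELEGATE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:*",           # create, resize, and delete the ASGs Harness manages
                "ec2:Describe*",           # read AMIs, instances, and launch configurations
                "elasticloadbalancing:*",  # modify listeners, rules, and target groups
            ],
            "Resource": "*",
        }
    ],
}

def attach_delegate_policy(role_name: str, policy_name: str = "harness-ami-bg") -> None:
    """Attach the sketch policy inline to the Delegate's IAM role."""
    import boto3  # imported lazily so the sketch can be read without AWS credentials
    boto3.client("iam").put_role_policy(
        RoleName=role_name,
        PolicyName=policy_name,
        PolicyDocument=json.dumps(DELEGATE_POLICY),
    )
```

In practice you would scope `Resource` more tightly than `"*"` and trim the wildcard actions to the specific calls your deployments make.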
+ +### AWS Setup (Example) + +For the Workflows demonstrated in this topic, we set up the following AWS resources: + +* [Launch Configuration and Launch Template Support](#launch_configuration_and_launch_template_support) +* [Launch Configuration](#launch_configuration) +* [Auto Scaling Group](#auto_scaling_group) +* [Target Groups](#target_groups) +* [Application Load Balancer (ALB)](#application_load_balancer_alb) + +#### Launch Configuration and Launch Template Support + +In Harness AMI deployments, the base ASG you select in your Infrastructure Definition (**Auto Scaling Groups** drop-down) is used to create the new ASG Harness deploys for the AMI's EC2 instances. + +AWS ASGs use either a [launch configuration](https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchConfiguration.html) or a [launch template](https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchTemplates.html) as a configuration template for their EC2 instances. Both contain information such as instance type, key pair, security groups, and block device mapping for your instances. The difference is that launch templates support versioning. AWS recommends that you use launch templates instead of launch configurations to ensure that you can use the latest features of Amazon EC2. + +Here's how Harness uses the base ASG launch configuration or launch template when it creates the new ASG: + +* **Launch configurations:** If your base ASG uses a launch configuration, Harness uses that launch configuration when creating the new ASG. + +* **Launch templates:** If your base ASG uses a launch template (default, latest, or specific version), Harness uses that launch template when creating the new ASG. + +Harness creates a new Launch Template *version* when it creates the new ASG. This applies to existing base ASGs and base ASGs provisioned via Terraform or CloudFormation.
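
For reference, creating a new launch template version in AWS only requires the fields that change; everything else is inherited from the source version. A boto3 sketch of that pattern, assuming a base template name and a new AMI ID (this illustrates the AWS API, not Harness's internal implementation):

```python
def new_version_request(template_name: str, ami_id: str) -> dict:
    """Build a create_launch_template_version request that copies the latest
    version of the base template and overrides only the AMI."""
    return {
        "LaunchTemplateName": template_name,
        "SourceVersion": "$Latest",                 # inherit instance type, key pair, SGs, ...
        "VersionDescription": f"deploy {ami_id}",
        "LaunchTemplateData": {"ImageId": ami_id},  # the only field that changes
    }

def create_new_version(template_name: str, ami_id: str) -> str:
    """Create the version in EC2 and return its version number as a string."""
    import boto3  # imported lazily; the request builder above is plain Python
    resp = boto3.client("ec2").create_launch_template_version(
        **new_version_request(template_name, ami_id)
    )
    return str(resp["LaunchTemplateVersion"]["VersionNumber"])
```

The new version can then be referenced by an ASG as `$Latest`, which is the behavior versioned launch templates enable over launch configurations.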
+ +For more information on launch templates, see [Creating a Launch Template for an Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-template.html) from AWS. + +In this tutorial, we will use a launch configuration. + +#### Launch Configuration + +This defines a base Launch Configuration, from which Harness will create new Launch Configurations for new Auto Scaling Groups. This Launch Configuration's security group allows inbound HTTP traffic on port 80 (which we'll use for the Prod Target Group's instance listener). + +![](./static/ami-blue-green-46.png) + +#### Auto Scaling Group + +The Auto Scaling Group that you define in AWS must use the base [Launch Configuration](#launch_config_bg) created above or a launch template. When you later select this ASG in your Harness Infrastructure Definition, it becomes the base ASG from which Harness will create new ASGs to deploy new AMIs. + +Our example specifies three subnets and modest scaling policies: one instance to start, then **Keep this group at its initial size**. + +![](./static/ami-blue-green-47.png) + +Note that if you choose to instead configure scaling policies for your base ASG, Harness will apply these scaling policies in your Workflow's final [Upgrade AutoScaling Group step](#upgrade_asg_bg). (Harness will also honor these policies in any rollback steps.) + +![](./static/ami-blue-green-48.png) + +Harness specifically supports AWS *target* tracking scaling policies. For details, see AWS' [Dynamic Scaling for Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html#as-scaling-types) topic. + +#### Target Groups + +We have a pair of identically configured Target Groups. We've arbitrarily named one TG to indicate production, and the other to indicate staging. This naming convention is a convenience to help us select these TGs when we later assign them to a Harness Infrastructure Definition.
+ +![](./static/ami-blue-green-49.png) + +Both Target Groups are configured as **Target type: Instance**. They provide HTTP access for the load balancer on port 80, and specify the Default VPC. + +![](./static/ami-blue-green-50.png) + +#### Application Load Balancer (ALB) + +The Application Load Balancer is configured with the same Default VPC, and the same subnets, used for the base ASG and the Target Groups. + +![](./static/ami-blue-green-51.png) + +It has two listeners, forwarding to the two Target Groups: one for production traffic, pointing to the Prod TG; and one for staging traffic, pointing to the Stage TG. In our example, the production Target Group's listener uses port 80, matching the access we configured for that TG. + +![](./static/ami-blue-green-52.png) + +### Define the Blue/Green Infrastructure + +1. Within your Harness Application, select an existing [Environment](ami-deployment.md#create-environment), or create a new one. +2. In the Environment's **Infrastructure Definition** section, click **Add Infrastructure Definition**. +3. In the resulting **Infrastructure Definition** dialog, enter a **Name** that will identify this Infrastructure Definition when you [add it to a Workflow](#workflow_bg). +4. In **Cloud Provider Type**, select **Amazon Web Services**. +5. In **Deployment Type**, select **Amazon Machine Image**. This expands the **Infrastructure Definition** dialog to look something like this: + ![](./static/ami-blue-green-53.png) +6. Select a **Cloud Provider** that references your Delegate by Tag, as [outlined earlier](ami-deployment.md#cloud-provider) for Basic deployment. +7. Select the **Region** and base **Auto Scaling Group** that you [configured in AWS](#asg_bg) for Blue/Green. +8. If you want this Infrastructure Definition to create a new ASG numbering series based on each new selection in the above **Auto Scaling Groups** drop-down, enable the check box labeled **Reset ASG revision numbers each time a new base ASG is selected**. 
For details on this option, see [Reset ASG Revision Numbers](#reset_asg_rev). +9. In the upper **Target Groups (for ALB)** field, select the Target Group that you [configured in AWS](#target_groups_bg) as your production group. +10. In the lower **Temporary Routes** > **Target Groups** field, select the Target Group that you configured in AWS as your staging group. (Harness uses this Target Group for initial deployment of your service. Upon successful deployment, it swaps this group's route with the production Target Group's route.) +11. Enable **Scope to Specific Services**, and use the adjacent drop-down to select the appropriate Harness Service. This can be any Service you've already set up for an [AMI Basic deployment](ami-deployment.md#basic-deploy). + +(This scoping will make this Infrastructure Definition available whenever a Workflow, or Phase, is set up for this Service.) + +When you are done, the dialog will look something like this: +![](./static/ami-blue-green-54.png) + + +12. Click **SUBMIT** to add the new Infrastructure Definition to your Harness Environment. + +You're now ready to create a Blue/Green deployment [Workflow](#workflow_bg). + +#### Reset ASG Revision Numbers + +Within an Infrastructure Definition, you can direct Harness to start a new ASG numbering series each time you select a new base ASG. You do so by enabling the check box labeled **Reset ASG revision numbers each time a new base ASG is selected**. + +![](./static/ami-blue-green-55.png) + +When you deploy, this option resets ASG numbering even for the same combination of AMI Service and Infrastructure Definition—if you select a new ASG. Each newly selected ASG might represent:  + +* A separate brand. +* A separate franchisee. +* A separate level of your SaaS product offering, each with its own configuration, permissions, and pricing. + +Deploying the new ASG with a new numbering series prevents existing, unrelated ASGs from being downscaled. 
You achieve this independence without having to create duplicate Infrastructure Definitions. Within Harness, each combination of a Service with a new base ASG creates a new [Service Infrastructure Mapping](https://docs.harness.io/article/v3l3wqovbe-infrastructure-definitions#service_infrastructure_mapping). + +If you select the **Use Already Provisioned Infrastructure** option along with the **Reset ASG revision numbers...** option, Harness will start a new ASG numbering series each time you manually select a new base ASG in the **Auto Scaling Group** drop-down. + +If you instead select the **Map Dynamically Provisioned Infrastructure** option along with the **Reset ASG revision numbers...** option, Harness will automatically create a new numbering series based on the new base ASG value that your [Infrastructure Provisioner](#infrastructure_provisioners) outputs in a variable expression. + +Once you **Submit** an Infrastructure Definition with this **Reset** check box enabled, the check box will be locked. You will not be able to disable this option later within the same Infrastructure Definition. + +### Infrastructure Provisioners + +Harness Terraform and CloudFormation Infrastructure Provisioners support Blue/Green deployments for AMI. + +When you set up the Infrastructure Definition for your Blue/Green deployment, you simply select the option to dynamically provision, then select the Terraform or CloudFormation Infrastructure Provisioner you have set up in your Harness Application. Next, you map outputs from your provisioner template or script to the fields Harness requires. + +In the following example, we show: + +* Required outputs. +* The outputs used for the optional Target Group and Application Load Balancer.
+* The stage Target Group and Application Load Balancer used for Blue/Green deployments. + +![](./static/ami-blue-green-56.png) + +### Blue/Green with Incremental Traffic Shift + +This section contains the steps for the deployment option described above in [Blue/Green with Incremental Traffic Shift Summary](#blue_green_with_incremental_traffic_shift_summary). + +#### Harness Delegate, Service, and Infrastructure Definition Requirements + +There are no specific Harness Delegate, Service, and Infrastructure Definition requirements beyond the standard setup described in [Prerequisites](#prerequisites). + +#### AWS ELB Listener Requirements + +You need the following AWS ELB setup: + +* AWS Application Load Balancer configured with one [Listener](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-listener.html). +* The Listener must have a [Rule](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-update-rules.html) redirecting to two [Target Groups](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html) (TGs). +* You will need registered target instances for the Target Groups. + +Here is an example of an ALB Listener rule redirecting to two TGs: + +![](./static/ami-blue-green-57.png) + +This example uses the default rule, but in most cases you will have several rules for redirecting traffic to your services. Typically, the default rule is used as the last rule to catch requests that the other rules do not. + +#### Target Group Weight Shifting + +The ALB Listener has two TGs with the following weights: + +* One TG has a weight of 100 (100%) — This TG is used for the **existing ASG** (pre-deployment). +* The other TG has a weight of 0 — This TG is used for the **new ASG** you are deploying. + +![](./static/ami-blue-green-58.png) + +When Harness first creates the new ASG, it attaches its instances to the TG with a weight of 0.
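
Under the hood, each weight adjustment is an update to the listener rule's forward action, with the two Target Group weights always summing to 100. A hedged boto3 sketch of that update (the ARNs and function names are placeholders, not Harness's implementation):

```python
def weighted_forward_action(new_tg_arn: str, old_tg_arn: str, new_weight: int) -> list:
    """Build a forward action that splits traffic between the new and old TGs."""
    if not 0 <= new_weight <= 100:
        raise ValueError("weight must be between 0 and 100")
    return [
        {
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [
                    {"TargetGroupArn": new_tg_arn, "Weight": new_weight},
                    # the remainder of the traffic stays on the old ASG's TG
                    {"TargetGroupArn": old_tg_arn, "Weight": 100 - new_weight},
                ]
            },
        }
    ]

def shift_weight(rule_arn: str, new_tg_arn: str, old_tg_arn: str, new_weight: int) -> None:
    """Apply the weighted forward action to the production listener rule."""
    import boto3  # imported lazily; the builder above is plain Python
    boto3.client("elbv2").modify_rule(
        RuleArn=rule_arn,
        Actions=weighted_forward_action(new_tg_arn, old_tg_arn, new_weight),
    )
```

Calling `shift_weight(rule_arn, new_tg, old_tg, 10)` would correspond to a 10% Shift Traffic Weight step; the old TG automatically receives the remaining 90%.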
+ +Later in the Workflow, you add **Shift Traffic Weight** step(s) to adjust the weight for this TG. For example, here is a **Shift Traffic Weight** step adjusting the weight to 10%: + +![](./static/ami-blue-green-59.png) + +The weight for the **other** TG is automatically set to the remaining percentage. In this case, 90%. + +You keep adding **Shift Traffic Weight** steps until the weight of the TG for the new ASG is 100. + +You can manipulate traffic shifting using as many **Shift Traffic Weight** steps as you like. + +Typically, you add [Approval](https://docs.harness.io/article/0ajz35u2hy-approvals) steps between each **Shift Traffic Weight** step to ensure that everything is running smoothly. For example, you can test the new feature(s) of your app before approving. This is a simple way to incorporate A/B testing into your Workflow. + +Approval steps are very useful because they enable you to cancel a deployment and return to the pre-deployment traffic weighting with a single step. + +The Workflow looks something like the following. Here the names of the **Shift Traffic Weight** steps have been changed to describe the weights they are assigning (10%, 100%): + +![](./static/ami-blue-green-60.png) + +#### Create the Blue/Green Workflow + +1. In the Harness Application containing the Service and Infrastructure Definition you want to use, click **Workflows**. +2. Click **Add Workflow**. +3. Enter a name for the Workflow. +4. In **Workflow Type**, select **Blue/Green Deployment**. +5. Select an **Environment** and **Service**, and the **Infrastructure Definition**. +6. In **Traffic Shift Strategy**, select **Incrementally Shift Traffic using ELB**. +7. Click **Submit**. + +Harness creates the Workflow and automatically adds the steps for deployment. + +![](./static/ami-blue-green-61.png) + +By default, only one **Shift Traffic Weight** step is added.
Unless you want to shift the traffic in one step, you will likely add more **Shift Traffic Weight** steps to incrementally shift traffic. + +Let's walk through each step. + +#### ASG AMI ALB Shift Setup + +This step creates the new ASG. In this step, you name the new ASG, specify how many instances it uses, and then identify the production load balancer, listener, and Rule to use. + +![](./static/ami-blue-green-62.png) + +1. Once you have named and defined the number of instances for the ASG, in **Load Balancer Details**, click **Add**. +2. In **Elastic Load Balancer**, select the ELB to use for production traffic. +3. In **Production Listener ARN**, select the Listener to use. This is the listener containing the rule whose weights you will adjust. +4. In **Production Listener Rule ARN**, select the ARN for the rule to use. You can find the ARN by its number in the AWS console. +5. Click **Submit**. + +Most of the settings support [Workflow variable expressions](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). You can use these to template this step and then allow its values to be specified at deployment runtime. You can even pass in the values using a Harness [Trigger](https://docs.harness.io/article/revc37vl0f-passing-variable-into-workflows). + +When you deploy this Workflow, the output for the step will show the ASG creation and load balancer assignments. + + +``` +Starting AWS AMI Setup +Loading Target group data for Listener: [...] at port: [null] of Load Balancer: [null] +Rule Arn: [...] +Target group: [...] is Prod, and [...]
is Stage +Starting AWS AMI Setup +Getting base auto scaling group +Getting base launch configuration +Getting all Harness managed autoscaling groups +Getting last deployed autoscaling group with non zero capacity +# Downsizing older ASGs to 0 +# Not changing Most Recent Active ASG: TrafficShiftASG__1 +Using workflow input min: [1], max: [1] and desired: [1] +Creating new launch configuration [TrafficShiftASG__2] +Creating new AutoScalingGroup [TrafficShiftASG__2] +Extracting scaling policy JSONs from: [asgAmi-SG] +Found scaling policy: [...] +Extracting scaling policy JSONs from: [TrafficShiftASG__1] +Found scaling policy: [...] +Completed AWS AMI Setup with new autoScalingGroupName [TrafficShiftASG__2] +``` +You can see how it identifies both of the TGs for production and stage: + + +``` +Target group: [...] is Prod, and [...] is Stage +``` +You selected the rule to use and Harness automatically selected the TG with a weight of 0 for production and the TG with a weight of 100 for stage. + +Later, in the **Shift Traffic Weight** step(s), these weights are what you will be adjusting. + +#### Upgrade Traffic Shift AutoScaling Group + +This step simply deploys the new ASG you created. It brings the new ASG to steady state with the number of instances you selected in the previous ASG ALB Shift Setup step. + +There is nothing to configure in this step. + +#### Shift Traffic Weight + +This is the step where you shift traffic from the TG for the previous ASG to the new ASG you are deploying. + +![](./static/ami-blue-green-63.png) + +1. In **Name**, it can be helpful to name the step after the traffic shift percentage it will apply, such as **10%**. You might also choose to name it according to its position, like **Shift Step 1**. +2. In **New Autoscaling Group Weight**, enter the percentage of traffic you want shifted from the previous ASG to the new ASG you are deploying.
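
Conceptually, the sequence of Shift Traffic Weight and Approval steps is a gated loop: apply a weight, pause for approval, and either continue or fall back to 0%. A small standalone sketch of that control flow (the callbacks stand in for the listener-rule update and the Approval step; none of this is Harness code):

```python
from typing import Callable, Iterable

def incremental_shift(
    weights: Iterable[int],
    apply_weight: Callable[[int], None],
    approve: Callable[[int], bool],
) -> int:
    """Walk through the traffic-shift steps, stopping at the last approved weight.

    Returns the weight actually left in place (0 means fully rolled back)."""
    applied = 0
    for w in sorted(weights):
        if not approve(w):       # a rejected Approval triggers rollback to 0%
            apply_weight(0)
            return 0
        apply_weight(w)          # e.g. an elbv2 modify_rule call in real life
        applied = w
    return applied

# Dry run with a recording callback instead of a real ELB call:
history = []
final = incremental_shift([10, 50, 100], history.append, lambda w: True)
```

In the dry run, `history` records the applied weights in order and `final` ends at 100, mirroring a Workflow whose every Approval step passes.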
+ +Most of the settings support [Workflow variable expressions](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). You can use these to template this step and then allow its values to be specified at deployment runtime. You can even pass in the values using a Harness [Trigger](https://docs.harness.io/article/revc37vl0f-passing-variable-into-workflows). + +Here is an example of what this step looks like when it shifts 10% of traffic during deployment. + + +``` +Starting to switch routes in AMI ASG traffic shift deploy +Starting traffic shift between routes +New AutoScaling Group service will get: [10] weight. TargetGroup: [asg-tg2] +Old AutoScaling Group service will get: [90] weight. TargetGroup: [asg-tg1] +Editing rule: [arn:aws:elasticloadbalancing:us-east-1:xxxxxx:listener-rule/app/asgAmiALB/ddf9ee159b1343f6/2a5c03e91e78600f/c005cf6f58c140b3] +Traffic shift route updated successfully +``` +You can see that the New AutoScaling Group is receiving 10% of traffic and the Old AutoScaling Group is receiving 90%. + +Next, you will likely want to follow the Shift Traffic Weight step with an [Approval step](https://docs.harness.io/article/0ajz35u2hy-approvals). This way you can test the new ASG before shifting more traffic to it. + +Add more **Shift Traffic Weight** and **Approval** steps until you shift traffic to 100. + +![](./static/ami-blue-green-64.png) + +Now your Workflow is ready for deployment. + +When you deploy, the final **Shift Traffic Weight** step will look something like this: + +![](./static/ami-blue-green-65.png) + +#### Downsize Old ASG at 0% weight + +The **Downsize Old ASG at 0% weight** setting should only be selected for the **Shift Traffic Weight** step that shifts traffic to **100%** in its **New ASG Weight** setting. + +When this setting is enabled, the old ASG is downsized. + +#### Shift Traffic Weight Rollback + +In the Workflow **Rollback Steps**, Harness adds a **Shift Traffic Weight Rollback** step automatically.
If rollback occurs, Harness rolls back to the pre-deployment ASG and TG assignments. + +If no Spotinst service setup is found, Harness skips rollback. + +In many cases, Harness users place an Approval step in Rollback Steps also. + +### Blue/Green with Instant Traffic Shift + +This section contains the steps for the deployment option described above in [Blue/Green with Instant Traffic Shift Summary](#blue_green_with_instant_traffic_shift_summary). + +By default, Harness AMI Blue/Green Workflows have five steps: + +1. [Setup AutoScaling Group](#setup_asg_bg): Specify how many instances to launch, their resizing order, and their steady state timeout. +2. [Deploy Service](#upgrade_asg_bg): Specify the number or percentage of instances to deploy within the ASG you've configured in [Setup AutoScaling Group](#setup_asg_bg). +3. Verify Staging: Optionally, specify Verification Providers or Collaboration Providers. +4. [Swap Routes](#swap_routes_bg): Re-route requests to the newest stable version of your ASG. +5. Wrap Up: Optionally, specify post-deployment commands, notifications, or integrations. + +Harness pre-configures the **Setup**, **Deploy**, and **Swap Routes** steps. Below, we outline those steps' defaults and options, with examples of the deployment logs' contents at each step. + +The **Verify Staging** and **Wrap Up** steps are placeholders, to which you can add integrations and commands. For details on adding **Verify Staging** integrations, see [Continuous Verification](https://docs.harness.io/article/myw4h9u05l-verification-providers-list). + +#### Create the Blue/Green Workflow + +1. In your Application, click **Workflows** > **Add Workflow**. The **Workflow** dialog appears. +2. Enter a **Name**, and (optionally) enter a **Description** of this Workflow's purpose. +3. In **Workflow Type**, select **Blue/Green Deployment**. +4. Select the **Environment** and **Service** that you created for your AMI deployment. +5. 
Select the **Infrastructure Definition** you [configured earlier](#svc_infra_bg) for AMI Blue/Green deployment. The dialog will now look something like this: + ![](./static/ami-blue-green-66.png) + + +6. Click **SUBMIT**. The new Blue/Green Workflow for AMI is preconfigured. + +Next, we will examine options for configuring the Blue/Green deployment's **Setup**, **Deploy**, and **Swap Routes** steps. + +#### Step 1: Setup AutoScaling Group + +In Step 1, select **AWS AutoScaling Group Setup** to open a dialog where you can fine-tune the new Auto Scaling Group (ASG) that Harness creates for the AMI Service you are deploying: + +![](./static/ami-blue-green-67.png) + +The **Instances** settings support [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables), such as [Workflow variable expressions](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). For most settings here, see the corresponding [AMI Basic Workflow instructions](ami-deployment.md#basic-setup-asg). However: + +Harness recommends setting the **Auto Scaling Steady State Timeout (mins)** field to at least **20** minutes, as shown above. This is a safe interval to prevent failed deployments while the [Swap Routes](#swap_routes_bg) step's Blue/Green switchover completes. + +##### Setup AutoScaling Group in Deployment + +Let's look at an example where the AWS AutoScaling Group Setup—configured as shown above—is deployed. 
Here is the step in the Harness Deployments page: + +![](./static/ami-blue-green-68.png) + +Here's partial output, showing a successful setup: + + +``` +Starting AWS AMI Setup +Starting AWS AMI Setup +Getting base auto scaling group +Getting base launch configuration +Getting all Harness managed autoscaling groups +Getting last deployed autoscaling group with non zero capacity +Creating new launch configuration [AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__2] +Creating new AutoScalingGroup [AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__2] +Sending request to delete old auto scaling groups to executor +Completed AWS AMI Setup with new autoScalingGroupName [AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__2] +``` +#### Step 2: Deploy Service + +In Step 2, select **Upgrade AutoScaling Group** to open a dialog where you can define how many instances to deploy in the Auto Scaling Group, as either a count or a percentage. + +For general information on customizing this dialog's settings, and on how they correspond to AWS parameters, see the corresponding [AMI Basic Workflow section](ami-deployment.md#upgrade-asg). This deployment example uses percentage scaling, with a desired target of 100%. + +If your base Auto Scaling Group is configured in AWS with [scaling policies](#scaling_policies), Harness will apply those policies in your Workflow's final **Upgrade AutoScaling Group** step. 
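As a rough sketch of the percentage math (an assumption for illustration, not Harness source code), a percentage-based Deploy Service step resolves to a concrete ASG desired capacity like this:

```python
# Hypothetical helper: resolve a Deploy Service percentage against the
# Desired Instances value from the setup step. Rounding up is an assumption
# here; it guarantees at least one instance for any non-zero percentage.
import math

def resolve_capacity(desired_instances: int, percent: int) -> int:
    """Desired capacity to set on the new ASG for this step."""
    return math.ceil(desired_instances * percent / 100)

# With Desired Instances = 1 and a 100% step, the ASG is resized to 1 instance.
print(resolve_capacity(1, 100))
```

This is why the deployment log for this example shows the new ASG's desired capacity being set to the full Desired Instances count.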
+ +##### Deploy Service Step in Deployment + +At this point, Harness deploys the new ASG—containing the instances created using your new AMI—to the Stage Target Group: + +![](./static/ami-blue-green-69.png) + +Using the **Upgrade AutoScaling Group** configuration shown above, here is the **Deploy Service** step in the Harness Deployments page: + +![](./static/ami-blue-green-70.png) + +Here is partial output, showing the new Auto Scaling Group successfully resized and at steady state: + + +``` +Starting AWS AMI Deploy +Getting existing instance Ids +Resizing Asgs +Resizing AutoScaling Group: [AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__2] to [1] +Set AutoScaling Group: [AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__2] desired capacity to [1] +Successfully set desired capacity +AutoScalingGroup [AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__2] activity [Launching a new EC2 instance: i-05aace750dbed65b3] progress [30 percent] , statuscode [PreInService] details [{"Subnet ID":"subnet-e962248d","Availability Zone":"us-east-1a"}] +Waiting for instances to be in running state. 
pending=1 +AutoScalingGroup [AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__2] activity [Launching a new EC2 instance: i-05aace750dbed65b3] progress [30 percent] , statuscode [PreInService] details [{"Subnet ID":"subnet-e962248d","Availability Zone":"us-east-1a"}] +AutoScaling group reached steady state +Setting min capacity of Asg[AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__2] to [1] +``` +Next, the staging Target Group is attached to the new ASG, and its target instances are registered: + + +``` +Waiting for Target Group: [arn:aws:elasticloadbalancing:us-east-1:XXXXXXXXXXXX:targetgroup/bg-doc-stage/8e281748dd2c6344] to have all instances of Asg: [AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__2] +[0] out of [1] targets registered and in healthy state +[...] +AutoScaling Group resize operation completed with status:[SUCCESS] +[1] out of [1] targets registered and in healthy state +All targets registered for Asg: [AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__2] +``` +##### Approval Sub-Step + +This example shows an (optional) Approval added to Step 2. It requests manual approval, following successful registration of the staging group, and prior to the Blue/Green (staging/production) switchover in the next step. + +![](./static/ami-blue-green-71.png) + +#### Step 4: Swap Routes + +This is the heart of a Blue/Green deployment. Here, Harness directs the Application Load Balancer to: + +* Detach your staging Target Group from the new ASG. +* Attach your production Target Group to the new ASG, to handle incoming requests. +* Detach your production Target group from the old ASG. 
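The detach/attach sequence above can be sketched as a small state transition. This is a simplified model (sets stand in for AWS target group attachments, and the AWS API calls, health checks, and waits are elided; the names are hypothetical), not the actual Harness implementation:

```python
# Simplified model of the Swap Routes step: target-group attachments are
# plain sets, and AWS detach/attach calls and health checks are elided.

def swap_routes(attachments, stage_tg, prod_tg, new_asg, old_asg, downsize_old=True):
    attachments[new_asg].discard(stage_tg)  # detach staging TG from the new ASG
    attachments[new_asg].add(prod_tg)       # attach production TG to the new ASG
    attachments[old_asg].discard(prod_tg)   # detach production TG from the old ASG
    # With Downsize Old Auto Scaling Group enabled, the old ASG is freed up.
    old_asg_capacity = 0 if downsize_old else None
    return attachments, old_asg_capacity

state = {"asg-2": {"stage-tg"}, "asg-1": {"prod-tg"}}
state, old_capacity = swap_routes(state, "stage-tg", "prod-tg", "asg-2", "asg-1")
# Afterward, asg-2 carries the production TG and asg-1 has no attachments.
```

The ordering matters: the production Target Group is attached to the new ASG (and its targets verified healthy) before the old ASG stops serving, so requests are never left without a healthy backend.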
+ +When this step is complete, the new ASG—containing the instances created using your new AMI—is deployed to the production TG: + +![](./static/ami-blue-green-72.png) + +In Step 4, open the **Switch Auto Scaling Group Route** dialog if you want to toggle the **Downsize Old Auto Scaling Group** setting. When enabled, this check box directs AWS to free up resources from the old ASG once the new ASG registers its targets and reaches steady state. + +![](./static/ami-blue-green-73.png) + +##### Switch Auto Scaling Group Route Step in Deployment + +Using the configuration shown above, here is the **Switch Auto Scaling Group Route** step in the Harness Deployments page: + +![](./static/ami-blue-green-74.png) + +Here's partial output, showing successful swapping of the two Target Groups' routes. First, the staging Target Group is detached from new ASG 2: + + +``` +Starting to switch routes in AMI Deploy +Starting Ami B/G swap +Sending request to detach target groups:[arn:aws:elasticloadbalancing:us-east-1:XXXXXXXXXXXX:targetgroup/bg-doc-stage/8e281748dd2c6344] from Asg:[AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__2] +Waiting for Asg: [AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__2] to de register with target group: [arn:aws:elasticloadbalancing:us-east-1:XXXXXXXXXXXX:targetgroup/bg-doc-stage/8e281748dd2c6344] +[1] out of [1] targets still registered +[...] +All targets de-registered for Asg: [AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__2] +``` +Next, the production Target Group is attached to ASG 2. Then, its targets are verified healthy and registered. 
This new ASG now handles incoming requests: + + +``` +Sending request to attach target groups:[arn:aws:elasticloadbalancing:us-east-1:XXXXXXXXXXXX:targetgroup/bg-doc-prod/28cfdfd155415a62] to Asg:[AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__2] +Waiting for Target Group: [arn:aws:elasticloadbalancing:us-east-1:XXXXXXXXXXXX:targetgroup/bg-doc-prod/28cfdfd155415a62] to have all instances of Asg: [AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__2] +[1] out of [1] targets registered and in healthy state +All targets registered for Asg: [AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__2] +``` +Next, the production group is detached from ASG 1: + + +``` +Sending request to detach target groups:[arn:aws:elasticloadbalancing:us-east-1:XXXXXXXXXXXX:targetgroup/bg-doc-prod/28cfdfd155415a62] from Asg:[AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__1] +Waiting for Asg: [AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__1] to deregister with target group: [arn:aws:elasticloadbalancing:us-east-1:XXXXXXXXXXXX:targetgroup/bg-doc-prod/28cfdfd155415a62] +All targets de-registered for Asg: [AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__1] +``` +Finally, the old ASG is downsized to 0 instances. 
Had the Workflow's [Step 4 (Swap Routes)](#swap_routes_bg) *not* specified **Downsize Old Auto Scaling Group**, these resources would not be explicitly freed up: + + +``` +Downscaling autoScaling Group [AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__1] +Set AutoScaling Group: [AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__1] desired capacity to [0] +Successfully set desired capacity +AutoScalingGroup [AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__1] activity [Terminating EC2 instance: i-01dc003477e21fcb6] progress [100 percent] , statuscode [Successful] details [{"Subnet ID":"subnet-e962248d","Availability Zone":"us-east-1a"}] +AutoScalingGroup [AMI__Blue__Green__Application_AMI__Blue__Green__Service_AWS__Blue__Green__Doc__1] activity [Launching a new EC2 instance: i-01dc003477e21fcb6] progress [100 percent] , statuscode [Successful] details [{"Subnet ID":"subnet-e962248d","Availability Zone":"us-east-1a"}] +AutoScaling group reached steady state +Completed switch routes +``` +#### Blue/Green Workflow Deployment + +As with the [AMI Basic deployment](ami-deployment.md#deployment-basic), once your setup is complete, you can click the Workflow's **Deploy** button to start the Blue/Green deployment. + +![](./static/ami-blue-green-75.png) + +In the resulting **Start New Deployment** dialog, select the AMI to deploy, and click **SUBMIT**. + +The Workflow deploys. The Deployments page displays details about the deployed instances. + +![](./static/ami-blue-green-76.png) + +To verify the completed deployment, log into your AWS Console and locate the newly deployed instance(s). 
+ +![](./static/ami-blue-green-77.png) + +### Rollbacks and Downsizing Old ASGs + +For details on how previous ASGs are downsized and what happens during rollback, see [How Does Harness Downsize Old ASGs?](../../concepts-cd/deployment-types/aws-ami-deployments-overview.md#how-does-harness-downsize-old-as-gs) + +### Support for Scheduled Scaling + +Currently, this feature is behind the Feature Flag `AMI_ASG_CONFIG_COPY`. The Base ASG you provide to Harness for creating the new ASG can use [AWS Scheduled Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html) (scheduled actions). + +There are a few important considerations: + +* When configuring the base ASG, the `ScheduledActions` process must be suspended so that it won’t scale the base ASG. Once Harness creates the new ASG from the base, Harness will enable the `ScheduledActions` process in the new ASG (if the base ASG had it). See [Suspending and resuming a process for an Auto Scaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html) from AWS. +* Harness currently supports only the UTC timezone format in scheduled actions. Time values in scheduled actions in the base ASG need to be configured in UTC. + +### Troubleshooting + +The following errors might occur when you run an AMI Blue/Green deployment in Harness. + +#### Valid Blue/Green Deployment Failed and Rolled Back in Harness + +This can occur when Harness' steady state timeout setting is too restrictive, compared to the time AWS requires to swap your Target Groups' routes. + +To resolve the rollbacks: In your Blue/Green Workflow's [Step 1](#setup_asg_bg) (**Setup AutoScaling Group**), try raising the **Auto Scaling Steady State Timeout (mins)** setting to at least match the switchover interval you observe in the AWS Console. + +#### Rollbacks and Old ASGs + +When a rollback occurs, Harness detects if there were multiple versions running before the deployment began. 
If there were, Harness will roll back to that state. + +Harness supports multiple versions of the ASG running at the same time. When Harness deploys a new version, it upscales a new ASG and downscales the oldest ASG first, and so on. + +For example, if multiple ASGs in the series had active instances before deployment, Harness will roll back to the previous state of all the ASGs with active instances. + +### Next Steps + +* Add monitoring to your AMI deployment and running instances: see [Continuous Verification](https://docs.harness.io/article/myw4h9u05l-verification-providers-list) and [24/7 Service Guard Overview](https://docs.harness.io/article/dajt54pyxd-24-7-service-guard-overview). +* [AMI Canary Deployment](ami-canary.md). + diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/ami-canary.md b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/ami-canary.md new file mode 100644 index 00000000000..d7d7a013f7e --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/ami-canary.md @@ -0,0 +1,399 @@ +--- +title: Create an AMI/ASG Canary Deployment +description: Configure and execute AMI (Amazon Machine Image) Canary deployments in Harness. +# sidebar_position: 2 +helpdocs_topic_id: agv5t7d156 +helpdocs_category_id: mizega9tt6 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This guide will walk you through configuring and executing an AMI (Amazon Machine Image) Canary deployment in Harness. You will create a multi-phase Workflow that progressively deploys your new instances to a new Auto Scaling Group. + + + +### Before You Begin + +* [AWS AMI Quickstart](https://docs.harness.io/article/wfk9o0tsjb-aws-ami-deployments) +* [AMI Basic Deployment](ami-deployment.md) + + +### Overview + +A Harness AMI Canary Workflow provides a framework for progressively deploying and verifying your Amazon Machine Image (AMI) instances via Auto Scaling Groups. 
As shown in our [example Workflow](#workflow) below, you'll typically build out this framework to create a structure like the following: + +1. A Canary phase, containing steps that define your Auto Scaling Group (ASG), deploy a percentage or partial count of the ASG's instances, and verify this partial deployment. +2. (Optionally:) Further Canary phases that expand the partial deployment, with further verification. +3. A Primary phase that deploys your image to the full count of instances defined in your ASG. + + +### Prerequisites + +Before creating an AMI Canary Workflow, you'll need to set up the same AWS and Harness resources that Harness requires for an [AMI Basic deployment](ami-deployment.md). If you've already set up those resources, you can proceed to define your Canary [Workflow](#workflow_bg). + +Otherwise, please use the following links to the [AMI Basic deployment prerequisites](ami-deployment.md#prerequisites). Within AWS, you'll need: + +* A working AMI that Harness will use to create your instances. +* A base [Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg.html) (ASG) that Harness will use as a template for the Auto Scaling Group it will create and deploy. For information on launch configuration and launch template support, see [Launch Configuration and Launch Template Support](ami-deployment.md#launch-configuration-and-launch-template-support). +* An AWS instance or ECS cluster in which to install the Harness Delegate(s). +* IAM role for the Harness Cloud Provider connection to AWS. Typically, you will set up the Harness Cloud Provider to assume the roles used by the installed Harness Delegate, whether that's an ECS or a Shell Script Delegate. The required policies for an ECS connection are listed in [ECS (Existing Cluster)](https://docs.harness.io/article/whwnovprrb-infrastructure-providers#ecs_existing_cluster). 
+ +Within Harness, you'll need the following resources: + +* A Delegate [installed and running](ami-deployment.md#delegate) in an AWS instance or ECS cluster. +* An AWS [Cloud Provider](ami-deployment.md#cloud-provider) that is configured either to provide account credentials, or to assume the Delegate's IAM role for the connection to AWS. +* A Harness [Application](ami-deployment.md#application). +* An AMI-based [Service](ami-deployment.md#service). +* An [Environment](ami-deployment.md#environment) with an Infrastructure Definition that specifies your base ASG. + +### Limitations + +* If your base Auto Scaling Group is configured in AWS with [scaling policies](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html#as-scaling-types), Harness will apply those policies in your Workflow's *final* **Upgrade AutoScaling Group** step. + + Harness does not support copying ASG scaling policies with **Metric Type** value **Application Load Balancer request count per target**. +* Harness specifically supports AWS *target* tracking scaling policies. For details, see AWS' [Dynamic Scaling for Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html#as-scaling-types) topic. + + +### Create a Canary Workflow + +To set up a Canary Workflow: + +1. In your Application, click **Workflows** > **Add Workflow**. The **Workflow** dialog appears. +2. Enter a **Name**, and (optionally) enter a **Description** of this Workflow's purpose. +3. In **Workflow Type**, select **Canary Deployment**. +4. Select the **Environment** that you [configured earlier](ami-deployment.md#environment). (This Environment defines your base ASG.) 
+ +The dialog will now look something like this:![](./static/ami-canary-157.png) +5. Click **SUBMIT**. You've now created your new Canary Workflow.![](./static/ami-canary-158.png) + + +#### Default Structure + +As you can see, a Harness AMI Canary Workflow's default structure is very simple: + + + +| | | +| --- | --- | +| | An empty placeholder for Deployment Phases is surrounded by two empty placeholders, for Pre- and Post-deployment Steps. | +| | The Workflow also contains a default Notification Strategy and Failure Strategy.You can edit each of these Strategies, and add further Strategies. | + + +#### Example Structure + +In this guide's remaining sections, we will expand only the Workflow's **Deployment Phases**—adding multiple phases, each deploying a portion of the instance count specified in the first phase. We will demonstrate how to build the following structure: + +![](./static/ami-canary-159.png) + +Here are the phases and steps we'll build: + +1. [Phase 1: Canary](#phase_1) + * [Set Up AutoScaling Group](#setup_asg): Specify how many EC2 instances to launch in the ASG that Harness deploys at the end of the Workflow. This step also specifies their resizing order and their steady state timeout. + * [Deploy Service](#upgrade_asg_1): Specify the percentage of instances to deploy in this phase. When you add additional phases, each phase automatically includes a Deploy Service step, which you must configure with the count or percentage of instances you want deployed in that phase. + * [Verify Service](#verify_service_1): This example uses CloudWatch verification. (You can add any [Verification Provider](https://docs.harness.io/article/myw4h9u05l-verification-providers-list) that Harness supports.) + * [Rollback Steps](#rollback_1): Roll back the ASG if deployment fails. (Rollback steps are automatically added here, and to each of the remaining phases. This guide covers them only in this first phase.) +2. 
[Phase 2: Canary](#phase_2) + * [Deploy Service](#upgrade_asg_2): Upgrade the ASG to a higher percentage of instances. + * [Verify Service](#verify_service_2): This example uses a second round of CloudWatch tests. +3. [Phase 3: Primary](#phase_3) + * [Deploy Service](#upgrade_asg_3): Upgrade the ASG to its full target capacity. + +Ready to deploy? Let's examine the configuration and execution of each of the Workflow's three phases. + + +### Phase 1: Canary + +This example Workflow's first phase defines your Auto Scaling Group, upgrades it to a 25% Canary deployment, and evaluates this partial deployment using (in this example) [CloudWatch](https://docs.harness.io/article/q6ti811nck-cloud-watch-verification-overview) verification. + +To add a Canary Phase: + +1. In **Deployment Phases**, click **Add Phase**. The **Workflow Phase** dialog appears. + ![](./static/ami-canary-160.png) +2. In **Service**, select the Service you previously [set up](ami-deployment.md#service) for this AMI. +3. Select the **Infrastructure Definition** that specifies your base Auto Scaling Group. +4. In **Service Variable Overrides**, you can add values to overwrite any variables in the Service you selected. Click **Add**, then enter the **Name** of the variable to override, and the override **Value**. (For details, see [Workflow Phases](https://docs.harness.io/article/m220i1tnia-workflow-configuration#workflow_phases).) +5. Click **SUBMIT**. The new Phase is created. + + ![](./static/ami-canary-161.png) + +6. Click **Phase 1** to define this Phase's Steps. + +On the resulting page, we'll fill in the predefined structure for Steps 1 and 2, and add a Verification provider in Step 3.![](./static/ami-canary-162.png) + +You can give each Phase a descriptive name by clicking the pencil icon at the top right. 
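Before stepping through each phase, here is the capacity progression this example produces, sketched in code. This is purely illustrative (it assumes the percentages resolve against the Desired Instances value of 4 configured in the next step, with simple rounding):

```python
# Illustrative summary of this guide's three phases. The percentages come
# from the example Workflow; the rounding behavior is an assumption.
DESIRED_INSTANCES = 4
PHASES = [("Canary", 25), ("Canary", 50), ("Primary", 100)]

def phase_capacities(desired, phases):
    """ASG desired capacity after each phase's Upgrade AutoScaling Group step."""
    return [(name, round(desired * pct / 100)) for name, pct in phases]

for name, capacity in phase_capacities(DESIRED_INSTANCES, PHASES):
    print(name, capacity)  # Canary 1, Canary 2, Primary 4
```

The deployment logs later in this guide show exactly this progression: desired capacity 1, then 2, then the full 4 instances.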
+ + +#### Step 1: Setup AutoScaling Group + +In Step 1, select **AWS AutoScaling Group Setup** to open a dialog where you define major settings of the Auto Scaling Groups (ASGs) that Harness will create to deploy your AMI instances: + +![](./static/ami-canary-163.png) + +The **Instances** settings support [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables), such as [Workflow variable expressions](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). For details about this dialog's fields, see the corresponding [AMI Basic Workflow instructions](ami-deployment.md#basic-setup-asg). For this Workflow, we've selected **Fixed Instances**, and have set **Max Instances** to **10** and **Desired Instances** to **4**. + +All Canary counts or percentages specified later in the Workflow are based on the **Desired Instances** setting. So, when we later deploy **25%** in this phase's [Upgrade Autoscaling Group](#upgrade_asg_1) step, that will be 25% of this **Desired Instances** setting. +##### Setup AutoScaling Group in Deployment + +Let's look at an example of deploying the AWS AutoScaling Group Setup we configured above. Here's the step in the Harness Deployments page: + +![](./static/ami-canary-164.png) + +Here's partial output, showing a successful setup: + + +``` +Starting AWS AMI Setup +Getting base auto scaling group +Getting base launch configuration +Getting all Harness managed autoscaling groups +Getting last deployed autoscaling group with non zero capacity +Using workflow input min: [0], max: [10] and desired: [4] +Creating new launch configuration [AMI__Application_AMI__Deployment__Service_AMI__Env__7] +Creating new AutoScalingGroup [AMI__Application_AMI__Deployment__Service_AMI__Env__7] +Extracting scaling policy JSONs from: [AMI__ASG__test__XXXXXXXXXXXXX__172__1] +No policies found +Extracting scaling policy JSONs from: [AMI__Application_AMI__Deployment__Service_AMI__Env__6] +No policies found +... 
+Sending request to delete old auto scaling groups to executor +Completed AWS AMI Setup with new autoScalingGroupName [AMI__Application_AMI__Deployment__Service_AMI__Env__7] +``` +The new ASG is set up, but no instances are deployed yet. Instances will be deployed in this phase's [following](#upgrade_asg_1) **Upgrade AutoScaling Group** step, and in future phases' similar steps. + +If your base Auto Scaling Group is configured in AWS with [scaling policies](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html#as-scaling-types), Harness will apply those policies in your Workflow's *final* **Upgrade AutoScaling Group** step. +#### Step 2: Deploy Service + +In Step 2, select **Upgrade AutoScaling Group** to open a dialog where you can define how many (by **Count** or **Percent**) of the ASG's instances to deploy: + +![](./static/ami-canary-165.png) + +In this example, we've selected **Percent** units, and **25** percent of the **Desired Instances** we set in the [previous step](#setup_asg)'s **AWS AutoScaling Group Setup**. + +For general information on customizing this dialog's settings, and on how they correspond to AWS parameters, see the corresponding [AMI Basic Workflow section](ami-deployment.md#upgrade-asg). +##### Deploy Service Step in Deployment + +Using the **Upgrade AutoScaling Group** configuration shown above, here is the **Deploy Service** step in the Harness Deployments page: + +![](./static/ami-canary-166.png) + +Here is partial output, showing the new Auto Scaling Group successfully resized and at steady state. 
We requested **25 Percent** of **4 Desired Instances**, and indeed, the log shows that AWS has set the `desired capacity to [1]`: + + +``` +Starting AWS AMI Deploy +Getting existing instance Ids +Resizing Asgs +Clearing away all scaling policies for Asg: [AMI__Application_AMI__Deployment__Service_AMI__Env__7] +No policies found +Resizing AutoScaling Group: [AMI__Application_AMI__Deployment__Service_AMI__Env__7] to [1] +Set AutoScaling Group: [AMI__Application_AMI__Deployment__Service_AMI__Env__7] desired capacity to [1] +Successfully set desired capacity +AutoScalingGroup [AMI__Application_AMI__Deployment__Service_AMI__Env__7] activity [Launching a new EC2 instance: i-0bb063712063f6fe3] progress [30 percent] , statuscode [PreInService] details [{"Subnet ID":"subnet-e7c283ac","Availability Zone":"us-east-1c"}] +Waiting for instances to be in running state. pending=1 +... +AutoScaling group reached steady state +... +AutoScaling Group resize operation completed with status:[SUCCESS] +``` +Any **percent** references that appear in such AWS log data refer *only* to percentages of pending AWS tasks. They're unrelated to the Canary percentage targets we've set in any Harness **Upgrade AutoScaling Group** steps. +#### Step 3: Verify Service + +In Step 3, select **Add Verification** to open a dialog where you can add Harness [Continuous Verification](https://docs.harness.io/article/myw4h9u05l-verification-providers-list) monitoring for your Canary phase. + +In this example, we've selected [CloudWatch](https://docs.harness.io/article/q6ti811nck-cloud-watch-verification-overview) verification, with monitoring for a single EC2 metric: + +![](./static/ami-canary-167.png) + +Within a Canary Workflow, Canary phases are the ideal places to add verification steps, using the [Canary Analysis strategy](https://docs.harness.io/article/0avzb5255b-cv-strategies-and-best-practices#canary_analysis). 
It's pointless to defer verification until the Primary (final) phase—because if the Canary phases are verified, you can assume that the Primary phase will proceed successfully. +##### Verify Service Step in Deployment + +Using the configuration shown above, here is the **Verify Service** step in the Harness Deployments page: + +![](./static/ami-canary-168.png) +The **Details** panel shows the selected verification provider's analysis, with logging over the **Analysis Time (duration)** we specified: + + +``` +Triggered data collection for 15 minutes, Data will be collected for time range 5:46:00 PM to 6:01:00 PM waiting for 2 Minutes before starting data collection. +Metrics data has been collected for minute 0 +Metrics data has been collected for minute 1 +Task is successfully enqueued for analysis for final phase. Analysis minute 4:01:00 PM +Metrics data has been collected for minute 2 +Task picked up by learning engine for final phase. Analysis minute 4:01:00 PM +Analysis completed for final phase. Analysis minute 4:01:00 PM +... +``` + +#### Rollback Steps + +By default, each AMI Canary phase includes a **Rollback Steps** section, containing a **Rollback AutoScaling Group** step. + +For details about this step's **Rollback all phases at once** option, see the corresponding [AMI Basic Deployment](ami-deployment.md#rollback-steps) section: + +![](./static/ami-canary-169.png) + +The Rollback step's default presence here is unlike the default for other Harness Canary Workflows, such as [Kubernetes Canary](https://docs.harness.io/article/wkvsglxmzy-kubernetes-canary-workflows). If an AMI Canary phase fails to deploy, its Rollback step will roll back the whole Workflow to its state prior to this deployment. This will delete its newly created instances, conserving AWS resources and costs. 
+ +![](./static/ami-canary-170.png) + +Here is partial output, showing the shutdown and deletion of this failed phase's ASG: + + +``` +Asg: [AMI__Application_AMI__Deployment__Service_AMI__Env__8] being deleted after shutting down to 0 instances + +AutoScalingGroup [AMI__Application_AMI__Deployment__Service_AMI__Env__8] activity [Terminating EC2 instance: i-0cf3ecb7b561eafb4] progress [100 percent] , statuscode [Successful] details [{"Subnet ID":"subnet-e962248d","Availability Zone":"us-east-1a"}] +... +AutoScaling Group resize operation completed with status:[SUCCESS] +``` +#### Rollbacks and Downsizing Old ASGs + +For details on how previous ASGs are downsized and what happens during rollback, see [How Does Harness Downsize Old ASGs?](../../concepts-cd/deployment-types/aws-ami-deployments-overview.md#how-does-harness-downsize-old-as-gs) + + +### Phase 2: Canary + +In this example Workflow, we'll add a second Canary phase. Here, we'll define a second **Upgrade AutoScaling Group** step, and add a second **Verify Service** step. To add the second phase: + +1. In **Deployment Phases**, again click **Add Phase**.![](./static/ami-canary-171.png) +2. In the resulting **Workflow Phase** dialog, select the same **Service**, **Infrastructure Definition**, and any **Service Variable Overrides** that you selected in [Phase 1](#phase_1). +3. Click **Submit** to create the new Phase. + + +#### Step 1: Deploy Service + +Since we already [set up the ASG](#setup_asg) in Phase 1, this new phase's Step 1 defaults directly to **Upgrade AutoScaling Group**. + +Click the **Upgrade AutoScaling Group** link to open this dialog, where we're again using **Percent** scaling, but doubling the percentage to **50 Percent** of the ASG's **Desired Instances** before clicking **SUBMIT**: + +![](./static/ami-canary-172.png) + +To review: This means we're requesting 50 percent of the **4** Desired Instances that we specified in Phase 1's [Setup AutoScaling Group](#setup_asg) step. 
+ +![](./static/ami-canary-173.png) + + +##### Deploy Service Step in Deployment + +Using the **Upgrade AutoScaling Group** configuration shown above, here is the **Deploy Service** step in the Harness Deployments page: + +![](./static/ami-canary-174.png) + +Here is partial output, showing the Auto Scaling Group successfully resized and at steady state. The upgrade of the `desired capacity to [2]` corresponds to our request for **50 Percent** of **4 Desired Instances**: + + +``` +Starting AWS AMI Deploy +Getting existing instance Ids +Resizing Asgs +Clearing away all scaling policies for Asg: [AMI__Application_AMI__Deployment__Service_AMI__Env__7] +No policies found +Resizing AutoScaling Group: [AMI__Application_AMI__Deployment__Service_AMI__Env__7] to [2] +Set AutoScaling Group: [AMI__Application_AMI__Deployment__Service_AMI__Env__7] desired capacity to [2] +Successfully set desired capacity +AutoScalingGroup [AMI__Application_AMI__Deployment__Service_AMI__Env__7] activity [Launching a new EC2 instance: i-093c5fbf07709fc93] progress [100 percent] , statuscode [Successful] details [{"Subnet ID":"subnet-e962248d","Availability Zone":"us-east-1a"}] +AutoScalingGroup [AMI__Application_AMI__Deployment__Service_AMI__Env__7] activity [Launching a new EC2 instance: i-0c6c0e87fa790822e] progress [30 percent] , statuscode [PreInService] details [{"Subnet ID":"subnet-5c0fe853","Availability Zone":"us-east-1f"}] +Waiting for instances to be in running state. running=1,pending=1 +... +AutoScaling group reached steady state +... +AutoScaling Group resize operation completed with status:[SUCCESS] +``` + +#### Step 2: Verify Service + +In this example Workflow, we've defined Phase 2's **Verify Service** step identically to [Phase 1's Step 3](#verify_service_1). (See specific instructions there.) This specifies a second round of CloudWatch monitoring. 
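As a recap of the percent-based sizing used across this Workflow's phases, here is a minimal sketch of the arithmetic, assuming instance counts are rounded down with a floor of one instance (Harness computes the actual counts internally, and its rounding may differ):

```python
import math

# Sketch of percent-based canary sizing (assumption: round down, minimum 1;
# Harness's internal rounding rules may differ).
def phase_instance_count(desired_instances: int, percent: int) -> int:
    return max(1, math.floor(desired_instances * percent / 100))

# With the 4 Desired Instances set up in Phase 1:
for pct in (25, 50, 100):
    print(f"{pct}% -> {phase_instance_count(4, pct)} instance(s)")
```

With 4 Desired Instances, this yields 1 instance at 25 percent, 2 at 50 percent, and all 4 at 100 percent, matching the capacities shown in the deployment logs.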
+
+In Workflows that you build, you could select other Harness [Continuous Verification](https://docs.harness.io/article/myw4h9u05l-verification-providers-list) providers for monitoring, or you could choose to omit verification in this Canary phase.
+
+
+##### Verify Service Step in Deployment
+
+Using the configuration shown above, here is the **Verify Service** step in the Harness Deployments page:
+
+![](./static/ami-canary-175.png)
+
+If the 50% deployment is as healthy as the 25% deployment, the **Details** panel's output should resemble the display in [Phase 1's Verification step](#verify_step_1_log).
+
+
+### Phase 3: Primary
+
+If prior Canary phases succeed, the Workflow's final phase runs the actual deployment—creating an Auto Scaling Group with the full number of instances you specified in the [AWS AutoScaling Group Setup](#setup_asg) step.
+
+To add this final phase:
+
+1. In **Deployment Phases**, below your two existing Phases, again click **Add Phase**.![](./static/ami-canary-176.png)
+2. In the resulting **Workflow Phase** dialog, select the same **Service**, **Infrastructure Definition**, and any **Service Variable Overrides** that you selected in [Phase 1](#phase_1).
+3. Click **SUBMIT** to create the new Phase.
+
+The resulting **Phase 3** page provides structure only for an **Upgrade AutoScaling Group** step, and that's the only step we'll define.![](./static/ami-canary-177.png)
+
+
+#### Step 1: Deploy Service
+
+To define this phase's scaling:
+
+1. In Step 1, select **Upgrade AutoScaling Group**.
+2. In the resulting dialog, again select **Percent** scaling, and set the value to **100** percent of the ASG's **Desired Instances**:![](./static/ami-canary-178.png)
+3. 
Click **SUBMIT** to complete this Workflow's three-phase configuration.![](./static/ami-canary-179.png)
+
+
+##### Deploy Service Step in Deployment
+
+Using the **Upgrade AutoScaling Group** configuration shown above, here is this final **Deploy Service** step in the Harness Deployments page:
+
+![](./static/ami-canary-180.png)
+
+Here is partial output, showing the Auto Scaling Group fully increasing its `desired capacity to [4]`. Note that AWS retains the two instances that it created in prior phases:
+
+
+```
+Resizing AutoScaling Group: [AMI__Application_AMI__Deployment__Service_AMI__Env__7] to [4]
+Set AutoScaling Group: [AMI__Application_AMI__Deployment__Service_AMI__Env__7] desired capacity to [4]
+Successfully set desired capacity
+AutoScalingGroup [AMI__Application_AMI__Deployment__Service_AMI__Env__7] activity [Launching a new EC2 instance: i-0c6c0e87fa790822e] progress [100 percent] , statuscode [Successful] details [{"Subnet ID":"subnet-5c0fe853","Availability Zone":"us-east-1f"}]
+AutoScalingGroup [AMI__Application_AMI__Deployment__Service_AMI__Env__7] activity [Launching a new EC2 instance: i-093c5fbf07709fc93] progress [100 percent] , statuscode [Successful] details [{"Subnet ID":"subnet-e962248d","Availability Zone":"us-east-1a"}]
+
+```
+Remember that each `percent` entry in these logs indicates AWS' percentage completion of its pending tasks, not the scaling percentages that we've specified per phase.
+
+Next, AWS adds two more instances, and scales them up, eventually reaching steady state:
+
+
+```
+AutoScalingGroup [AMI__Application_AMI__Deployment__Service_AMI__Env__7] activity [Launching a new EC2 instance: i-069b78ca18ecd7799] progress [30 percent] , statuscode [PreInService] details [{"Subnet ID":"subnet-09229d54","Availability Zone":"us-east-1d"}]
+AutoScalingGroup [AMI__Application_AMI__Deployment__Service_AMI__Env__7] activity [Launching a new EC2 instance: i-09e51a8924a09fcf9] progress [30 percent] , statuscode [PreInService] details [{"Subnet ID":"subnet-ed0a96d2","Availability Zone":"us-east-1e"}]
+Waiting for instances to be in running state. running=2,pending=2
+AutoScalingGroup [AMI__Application_AMI__Deployment__Service_AMI__Env__7] activity [Launching a new EC2 instance: i-069b78ca18ecd7799] progress [30 percent] , statuscode [PreInService] details [{"Subnet ID":"subnet-09229d54","Availability Zone":"us-east-1d"}]
+AutoScalingGroup [AMI__Application_AMI__Deployment__Service_AMI__Env__7] activity [Launching a new EC2 instance: i-09e51a8924a09fcf9] progress [30 percent] , statuscode [PreInService] details [{"Subnet ID":"subnet-ed0a96d2","Availability Zone":"us-east-1e"}]
+AutoScaling group reached steady state
+...
+AutoScaling Group resize operation completed with status:[SUCCESS]
+```
+And with that, our AMI is fully deployed.
+
+
+### Deploy the Workflow
+
+As with the [AMI Basic deployment](ami-deployment.md#deployment-basic), once your setup is complete, you can click the Workflow's **Deploy** button to start the Canary deployment.
+
+![](./static/ami-canary-181.png)
+
+In the resulting **Start New Deployment** dialog, select the AMI to deploy, and click **SUBMIT**.
+
+The Workflow deploys. The Deployments page displays details about the deployed instances.
+
+![](./static/ami-canary-182.png)
+
+To verify the completed deployment, log into your AWS Console and locate the newly deployed instance(s).
+
+![](./static/ami-canary-183.png)
+
+### Support for Scheduled Scaling
+
+Currently, this feature is behind the Feature Flag `AMI_ASG_CONFIG_COPY`.
+
+The base ASG you provide to Harness for creating the new ASG can use [AWS Scheduled Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html) (scheduled scaling with scheduled actions).
+
+There are a few important considerations:
+
+* When configuring the base ASG, the `ScheduledActions` process must be suspended so that it won’t scale the base ASG. 
Once Harness creates the new ASG from the base, Harness will enable the `ScheduledActions` process in the new ASG (if the base ASG had it). See [Suspending and resuming a process for an Auto Scaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html) from AWS. +* Harness currently supports only the UTC timezone format in scheduled actions. Time values in scheduled actions in the base ASG need to be configured in UTC. + diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/ami-deployment.md b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/ami-deployment.md new file mode 100644 index 00000000000..e14b11627fa --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/ami-deployment.md @@ -0,0 +1,484 @@ +--- +title: AMI Basic Deployment +description: Explains how to use existing AMIs and ASGs to deploy new instances and ASGs to EC2. +# sidebar_position: 2 +helpdocs_topic_id: rd6ghl00va +helpdocs_category_id: mizega9tt6 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This guide explains how to use existing Amazon Machine Images (AMIs) and AWS Auto Scaling Groups (ASGs) to deploy new ASGs and instances to Amazon Elastic Compute Cloud (EC2) via Harness. + + +### Deployment Overview + +For a general overview of how Harness works, see [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts). + +This guide will cover the following major steps: + +1. Install and run a Harness (Shell Script or ECS) Delegate. +2. Add an AWS Cloud Provider. +3. Create a Harness Application. +4. Create a Harness Service using the **Amazon Machine Image** artifact type. +5. Create an Environment and Infrastructure Definition. +6. Create a Harness Workflow for a basic deployment. +7. Deploy the Workflow. + + +### Before You Begin + +For a Basic deployment, you'll need: + +* A working AMI that Harness will use to create your instances. 
+* A working Auto Scaling Group (ASG), as a template for the Auto Scaling Group that Harness will create.
+* An AWS Instance in which to install a Harness Delegate (covered in the next section).
+* An IAM Role for the Harness Cloud Provider connection to AWS. The required policy is **AmazonEC2FullAccess**, and it is listed in [AWS EC2](https://docs.harness.io/article/whwnovprrb-infrastructure-providers#aws_ec2).
+
+If the User Data you define in Harness or your launch configurations is going to perform actions that require permissions beyond those covered by **AmazonEC2FullAccess**, ensure that the IAM role assigned to the Harness Delegate(s) has the required roles and policies.
+
+We will walk you through setting up a Harness Delegate, connections to your AWS account (using the Harness AWS Cloud Provider), Harness Services, Infrastructure Definition, and Workflows.
+
+
+### Limitations
+
+* If your base Auto Scaling Group is configured in AWS with [scaling policies](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html#as-scaling-types), Harness will apply those policies in your Workflow's *final* **Upgrade AutoScaling Group** step.
+
+ Harness does not support copying ASG scaling policies with **Metric Type** value **Application Load Balancer request count per target**.
+* Harness specifically supports AWS *target* tracking scaling policies. For details, see AWS' [Dynamic Scaling for Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html#as-scaling-types) topic.
+
+### Install and Run the Harness Delegate
+
+In your AWS instance, install and run either a Harness Shell Script Delegate (the simplest option) or a Harness ECS Delegate. For basic installation steps, see [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#ecs_delegate). 
For simplicity, Harness further recommends: + +* Run the Delegate in the same subnet as your Auto Scaling Group, using the same security group and the same key pair. +* Once the Delegate shows up in Harness Manager's **Delegates** page, assign it a [Selector](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#delegate_tags) (for example, **AMI-Delegate**). You will use this Delegate Selector when you set up the AWS Cloud Provider to assume the IAM role used by the Delegate. + + +### AWS Cloud Provider Setup + +Add an AWS Cloud Provider as follows: + +1. In Harness Manager, click **Setup**. +2. Click **Cloud Providers**. The **Cloud Providers** page appears. +3. Click **Add Cloud Provider**. The **Cloud Provider** dialog appears. (You will override some default entries shown below.)![](./static/ami-deployment-03.png) +4. In **Type**, select **Amazon Web Services**. +5. In **Display Name**, enter a name for the Cloud Provider, such as **aws-ami-example**. +6. Enable the **Assume IAM Role on Delegate** option. +7. In **Delegate Selector**, enter the Selector that you gave your Delegate in Harness Manager's **Delegates** page. +8. Click **TEST** to ensure that your credentials work.![](./static/ami-deployment-04.png) +9. Click **SUBMIT**. The Cloud Provider is added, with a Selector matching your Delegate. + + +### Harness Application Setup + +The following procedure creates a Harness Application for your AMI deployments. + +An Application in Harness represents a logical group of one or more entities, including Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CI/CD. For more about Applications, see [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts). + +To create a new Application: + +1. In Harness Manager, click **Setup**. +2. In the **Applications** section, click **Add Application**. 
The **Application** dialog appears.![](./static/ami-deployment-05.png)
+3. Give your Application a name, such as **AMI Application**.
+4. Optionally, add a **Description** of this Application's purpose.![](./static/ami-deployment-06.png)
+5. Click **SUBMIT**. The new Application is added to the **Applications** list.
+6. Click your new Application's name. The Application's list of entities appears, initially empty.![](./static/ami-deployment-07.png)
+
+In the following sections, we will define this Application's Service, Environment, and Infrastructure Definition. We'll then define and execute deployment Workflows.
+
+
+### AMI Service Setup
+
+Different types of Harness Services are available for different deployment platforms. The AMI type includes AMI-specific settings. To add an AMI Service:
+
+1. In your new Application, click **Services**. The **Services** page appears.
+2. In the **Services** page, click **Add Service**. The **Add Service** dialog appears, initially empty.![](./static/ami-deployment-08.png)
+3. In **Name**, enter a name for your Service, such as **AMI Deployment Service**.
+4. Optionally, in **Description**, enter a description for your Service.
+5. In **Deployment Type**, select **Amazon Machine Image**. Your dialog will now look something like this:![](./static/ami-deployment-09.png)
+6. Click **SUBMIT**. The new Service is displayed.![](./static/ami-deployment-10.png)
+
+Next, we will set up the Artifact Source, User Data, and Configuration options.
+
+
+#### Add Artifact Sources
+
+A Service's Artifact Source is the AMI you want to use to create instances. In this guide, we specify our Artifact Source for deployment by AWS Region and (optionally) Tags and AmiResource Filters. To add an Artifact Source to this Service:
+
+1. From the **Service Overview** section, click **Add Artifact Source**, then click **Amazon AMI**.![](./static/ami-deployment-11.png)
+
+1. 
In the resulting Artifact Source dialog, select the **Cloud Provider** you set up earlier under [AWS Cloud Provider Setup](#cloud_provider).![](./static/ami-deployment-12.png)
+2. Select the AWS **Region** where your AMI is located.![](./static/ami-deployment-13.png)
+3. Add any **AWS Tags** that you are using to identify your AMI. (For details on these key/value pairs, see Amazon's [Tagging Your Amazon EC2 Resources](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) topic.)
+4. Optionally, in the **AmiResource Filters** field, add AMI filters to locate the AMI resource. These are key/value pairs that identify the AMI ID.![](./static/ami-deployment-14.png)
+5. Click **SUBMIT** to add the Artifact Source.
+
+You can see the results of your Artifact Source settings by clicking **Artifact History**.
+
+![](./static/ami-deployment-15.png)
+#### Deployment Specification (User Data)
+
+In the Service's **Deployment Specification** section, you can select the **User Data** link to enter configuration scripts and directives that your AWS instance will run upon launch.
+
+![](./static/ami-deployment-16.png)
+
+The resulting **User Data** container corresponds to the AWS Launch Instance wizard's **Advanced Details** > **User data** container.
+
+![](./static/ami-deployment-17.png)
+
+##### What Can I Add in User Data?
+
+You can enter the same shell scripts and cloud-init directives that AWS will accept through its own UI. For details about scripting requirements, formatting, and options, see Amazon's EC2 [User Data and Shell Scripts](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts) documentation. When Harness creates a new instance, it will apply your defined User Data. 
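As an illustration, here is a minimal cloud-config User Data fragment. The package and commands are placeholders, not Harness requirements; substitute whatever your instances actually need at first boot:

```
#cloud-config
# Illustrative User Data: install and start a web server at first boot.
packages:
  - nginx
runcmd:
  # Enable and start the service once packages are installed.
  - systemctl enable --now nginx
  # Leave a marker file so you can confirm the directives ran on the instance.
  - echo "bootstrapped" > /var/tmp/bootstrap-marker
```

Plain shell scripts (starting with `#!/bin/bash`) are equally valid here; cloud-init runs User Data once, as root, on the instance's first boot.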
+
+##### Permissions for Your User Data
+
+If your User Data is going to perform actions that require permissions beyond those covered by **AmazonEC2FullAccess**, ensure that the IAM role assigned to the Harness Delegate(s) has the required roles and policies.
+
+
+#### Configuration Variables
+
+**Config Variables** is supported for the AMI Service, but **Config Files** is not supported.
+
+In the Service's **Configuration** section, you can add Service-level variables and files. For details about the options here, see Harness' [Configuration Variables and Files](https://docs.harness.io/article/eb3kfl8uls-service-configuration#configuration_variables_and_files) topic.
+
+![](./static/ami-deployment-18.png)
+
+#### Referencing Config Variables in User Data
+
+You can define variables in the Service's **Config Variables** section and reference them in [User Data](#user_data) scripts. Type the prefix `${serviceVariable.` to prompt Harness to automatically display existing variables. Here is an example:
+
+![](./static/ami-deployment-19.png)
+### Environment and Infrastructure Definition
+
+Once you've defined your Application's Service, you define Environments where your Service can be deployed. In an Environment's Infrastructure Definition settings, you specify:
+
+* A Harness Service—for AMI, a Service with an AMI artifact you configured.
+* A deployment type, such as Basic or Blue/Green.
+* An AWS Cloud Provider, such as the **aws-ami-example** provider that you added in [AWS Cloud Provider Setup](#cloud_provider).
+
+An Environment represents one of your deployment infrastructures—such as Dev, QA, or Production. You can deploy one or many Services to each Environment.
+
+
+#### Create a New Harness Environment
+
+The following procedure creates an Environment for the AMI Service you've configured.
+
+1. In your Harness Application, click **Environments**. The **Environments** page appears.
+2. Click **Add Environment**. 
The **Environment** dialog appears.![](./static/ami-deployment-20.png)
+3. In **Name**, enter a name that describes the deployment Environment—for example, **AMI-Env**.
+4. Optionally, enter a **Description**.
+5. In **Environment Type**, select **Non-Production**.
+6. Click **SUBMIT**. In the resulting Environment Details page, you'll define your new Environment's contents.![](./static/ami-deployment-21.png)
+
+
+#### Add an Infrastructure Definition
+
+An [Infrastructure Definition](https://docs.harness.io/article/n39w05njjv-environment-configuration#add_an_infrastructure_definition) specifies a target infrastructure for deployments. When you create a Harness Workflow, you will pick the Infrastructure Definition you want to use as the target deployment environment.
+
+For AMI deployments, you build your Infrastructure Definition using an AWS Auto Scaling Group. To add the Infrastructure Definition:
+
+1. In your Environment's **Infrastructure Definition** section, click **Add Infrastructure Definition**. The **Infrastructure Definition** dialog appears.![](./static/ami-deployment-22.png)
+2. In **Name**, enter the name that will identify this Infrastructure Definition when you [add it to a Workflow](#basic_workflow_and_deployment).
+3. In **Cloud Provider Type**, select **Amazon Web Services**.
+4. In **Deployment Type**, select **Amazon Machine Image**. This expands the **Infrastructure Definition** dialog to look something like this:![](./static/ami-deployment-23.png)
+5. For this example, accept the default **Use Already Provisioned Infrastructure** option.
+:::note
+(If you have configured an [Infrastructure Provisioner](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner) in Harness, you can use that configuration by instead selecting the **Map Dynamically Provisioned Infrastructure** option. 
For details, see our AMI [CloudFormation](ami-blue-green.md#infrastructure-provisioners) and [Terraform](../../terraform-category/terrform-provisioner.md#ami-and-auto-scaling-group-2) examples.)
+:::
+6. In **Cloud Provider**, select the Cloud Provider you added earlier in [AWS Cloud Provider Setup](#cloud_provider).
+7. Select the **Region** where your Auto Scaling Group (ASG) is located.
+:::note
+After you select your **Cloud Provider** and **Region**, the dialog's remaining drop-down lists take a few seconds to populate.
+:::
+8. In the **Auto Scaling Groups** drop-down, select an existing ASG in your EC2 setup that Harness will clone as it creates a new ASG to use for the deployment.
+We typically call the ASG you select the *base ASG*. It is not used in the deployment. It is simply cloned in order for Harness to create a new ASG. Harness will use the existing ASG as a template, but it will not resize it at all.
+The newly created ASG will have a unique name, and its own Min and Max instances and Desired Capacity.
+9. **Reset ASG revision numbers each time a new base ASG is selected:** If you want to create a new ASG numbering series when you select a new base ASG in **Auto Scaling Groups**, select **Reset ASG revision numbers each time a new base ASG is selected**. For details on this option, see [Reset ASG Revision Numbers](#reset_asg_rev).
+10. If you want to use Application Load Balancers, use **Target Groups (for ALB)** to select one or more [Target Groups](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html) that will route requests to the ASG you will deploy.
+11. If you want to use Classic Load Balancers, use **Classic Load Balancers** to select one or more [Classic Load Balancers](https://docs.aws.amazon.com/autoscaling/ec2/userguide/attach-load-balancer-asg.html) for the ASG you will deploy.
+12. 
Enable **Scope to Specific Services**, and use the adjacent drop-down to select the Harness Service you created in [AMI Service Setup](#service). + + (This scoping will make this Infrastructure Definition available whenever a Workflow, or Phase, is set up for this Service. You can also select additional Services in this field—and you can do that later, by editing the Infrastructure Definition to match newly added Services.) + + When you are done, the dialog's **Configuration** section will look something like this: + + ![](./static/ami-deployment-24.png) + +13. Click **Submit**. The new Infrastructure Definition is added to your Harness Environment. + +:::note +Harness will register the ASGs it creates with whatever Target Groups and Classic Load Balancers you enter. If you delete the ASG that you've specified here, Workflows using this Infrastructure Definition will fail to deploy. +::: + +This is the last required step to set up the deployment Environment in Harness. With both the Service and Environment set up, you can now proceed to [creating a deployment Workflow](#basic_deploy). + +:::note +Harness specifically supports AWS *target* tracking scaling policies. For details, see AWS' [Dynamic Scaling for Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html#as-scaling-types) topic. +::: + +#### Reset ASG Revision Numbers + +Harness enables you to use ASGs to uniquely identify and track independent releases. Each ASG might represent:  + +* A separate brand. +* A separate franchisee. +* A separate level of your SaaS product offering, each with its own configuration, permissions, and pricing. + +You achieve this independence without having to create duplicate Services or Infrastructure Definitions, by instead selecting different base ASGs when a Workflow is executed. 
+
+For example, let's say you have a Freemium and Pro version of a microservice, and you want each deployed using a separate base ASG, because each base ASG manages resources in a way targeted to each microservice.
+
+To support this scenario, you have a single Harness Service for the microservice, a single Infrastructure Definition, and two Workflows, one for Freemium and one for Pro.
+
+When the Freemium Workflow deploys, you want it to select the base ASG that manages resources for the Freemium microservice. Likewise for the Pro Workflow.
+
+To accomplish this, each Workflow uses an Infrastructure Provisioner to supply the right base ASG name to the Infrastructure Definition:
+
+![](./static/ami-deployment-25.png)
+
+For Infrastructure Provisioner details, see our AMI [CloudFormation](ami-blue-green.md#infrastructure-provisioners) and [Terraform](../../terraform-category/terrform-provisioner.md#ami-and-auto-scaling-group-2) examples.
+
+Since you are using the same Infrastructure Definition with multiple base ASGs, you will likely want to reset the revision numbers applied to the new ASGs that are created each time a new base ASG is used. Otherwise, the revision numbers will be applied to all new ASGs, in sequence.
+
+You direct Harness to start a new ASG numbering series each time you select a new base ASG by enabling **Reset ASG revision numbers each time a new base ASG is selected**.
+
+Deploying the new ASG with a new numbering series prevents existing, unrelated ASGs from being downscaled. Within Harness, each combination of a Service with a new base ASG creates a new [Service Infrastructure Mapping](https://docs.harness.io/article/v3l3wqovbe-infrastructure-definitions#service_infrastructure_mapping). 
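To illustrate the effect of this option, here is a rough sketch of the numbering behavior, assuming a simplified `prefix__revision` naming scheme (Harness manages the real counters and name formats internally):

```python
from collections import defaultdict

# Sketch of per-base-ASG revision numbering (assumption: simplified naming;
# Harness's actual counters and name format are managed internally).
class AsgNamer:
    def __init__(self, reset_per_base: bool):
        self.reset_per_base = reset_per_base
        self.counters = defaultdict(int)

    def next_name(self, prefix: str, base_asg: str) -> str:
        # With the reset option, each base ASG gets its own numbering series;
        # otherwise every deployment increments one shared series.
        key = base_asg if self.reset_per_base else "shared"
        self.counters[key] += 1
        return f"{prefix}__{self.counters[key]}"

namer = AsgNamer(reset_per_base=True)
print(namer.next_name("MyApp_Freemium_Prod", "base-freemium"))  # MyApp_Freemium_Prod__1
print(namer.next_name("MyApp_Pro_Prod", "base-pro"))            # MyApp_Pro_Prod__1 (own series)
```

Without the reset option, the second deployment above would instead get revision `__2`, because all new ASGs share one sequence regardless of which base ASG produced them.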
+
+If you select the **Map Dynamically Provisioned Infrastructure** option along with the **Reset ASG revision numbers...** option, Harness will automatically create a new numbering series based on the new value for the base ASG that your [Infrastructure Provisioner](ami-blue-green.md#infrastructure-provisioners) outputs in a variable expression. This is the most common option.
+
+Once you **Submit** an Infrastructure Definition, the check box is locked. You will not be able to enable or disable this option later within the same Infrastructure Definition.
+
+In addition, the new ASGs you create from different base ASGs should have unique names. You can achieve this by using the Infrastructure Provisioner output expression in the **AWS Auto Scaling Group Setup** Workflow step:
+
+![](./static/ami-deployment-26.png)
+
+If you select the **Use Already Provisioned Infrastructure** option along with the **Reset ASG revision numbers...** option, Harness will start a new ASG numbering series each time you manually select a new base ASG in the **Auto Scaling Group** drop-down.
+
+
+#### Launch Configuration and Launch Template Support
+
+In Harness AMI deployments, the base ASG you select in your Infrastructure Definition (**Auto Scaling Groups** drop-down) is used to create the new ASG Harness deploys for the AMI's EC2 instances.
+
+AWS ASGs use either a [launch configuration](https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchConfiguration.html) or a [launch template](https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchTemplates.html) as a configuration template for their EC2 instances. Both contain information such as instance type, key pair, security groups, and block device mapping for your instances. The difference is that launch templates support versioning. 
AWS recommends that you use launch templates instead of launch configurations to ensure that you can use the latest features of Amazon EC2.
+
+Here's how Harness uses the base ASG launch configuration or launch template when it creates the new ASG:
+
+* **Launch configurations:** If your base ASG uses a launch configuration, Harness uses that launch configuration when creating the new ASG.
+
+* **Launch templates:** If your base ASG uses a launch template (default, latest, or specific version), Harness uses that launch template when creating the new ASG.
+
+Harness creates a new Launch Template *version* when it creates the new ASG. This applies to existing base ASGs and base ASGs provisioned via Terraform or CloudFormation.
+
+**Use the Latest Version:** If you want Harness to use the latest version of your Launch Template, ensure that you select **Latest** in the **Version** field when you create the launch template.
+
+For more information on launch templates, see [Creating a Launch Template for an Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-template.html) from AWS.
+
+#### Override Service Settings
+
+Optionally, your Environment can override Config Variables and Config Files set in [Services](#service) that use the Environment. This enables you to maintain each Service's stored settings, but change them when using the Service with this Environment.
+
+![](./static/ami-deployment-27.png)
+
+As an example, you might use a single Service across separate Environments for QA versus Production, and vary Service path variables depending on the Environment. For details, see [Override a Service Configuration](https://docs.harness.io/article/n39w05njjv-environment-configuration#override_a_service_configuration).
+
+You can also override Service variables at the Phase level of a multiple-Phase Workflow, such as Canary.
+
+### Basic Workflow and Deployment
+
+This section walks you through creating an AMI Basic Workflow in Harness. 
By default, Harness AMI Basic Workflows have four deployment steps: + +1. [Setup AutoScaling Group](#basic_setup_asg): Specify how many instances to launch, their resizing order, and their steady state timeout. +2. [Deploy Service](#upgrade_asg): Specify the number or percentage of instances to deploy, within the ASG you've set up. +3. Verify Service: Optionally, specify Verification Providers or Collaboration Providers. +4. Wrap Up: Optionally, specify post-deployment commands, notifications, or integrations. + +Harness preconfigures only the first two steps. Below, we outline those steps' defaults and options, with examples of the deployment logs' contents at each step. + +The remaining two steps are placeholders, to which you can add integrations and commands. For details on adding **Verify Service** integrations, see [Continuous Verification](https://docs.harness.io/article/myw4h9u05l-verification-providers-list). + +Your Workflows can use Harness' built-in `${artifact.metadata.tag}` variable to refer to tagged AMIs. For example, if an AMI has an AWS tag named `harness`, you can refer to that AMI within Harness as `${artifact.metadata.harness}`. For details about this convention, see [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables#variables_list). This can be useful in [triggering Workflows and Pipelines](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2#add_a_trigger). +#### Create a Basic Workflow + +To create a Basic Workflow for AMI deployment, do the following: + +1. In your Application, click **Workflows**. +2. Click **Add Workflow**. The **Workflow** dialog appears. + +If you are using Infrastructure Definitions, the **Workflow** dialog will look like this:![](./static/ami-deployment-28.png) +3. In **Name**, enter a name for your Workflow, such as **AMI Basic Workflow**. +4. Optionally, add a **Description** of this Workflow's purpose. +5. In **Workflow Type**, select **Basic Deployment**. +6. 
Select the **Environment** you created for your AMI Basic deployment. +7. Select the **Service** you created for your AMI Basic deployment. +8. Select the Infrastructure Definition you created for your AMI Basic deployment. + The dialog will now look something like this: + ![](./static/ami-deployment-29.png) +9. Click **Submit**. The new Basic Workflow for AMI is preconfigured. + ![](./static/ami-deployment-30.png) + +Next, we will examine options for configuring the Basic deployment's first two steps. + + +#### Step 1: Setup AutoScaling Group + +In Step 1, select **AWS AutoScaling Group Setup** to open a dialog where you can fine-tune the Auto Scaling Group (ASG) that Harness creates for the AMI Service you are deploying. + +![](./static/ami-deployment-31.png) + +Many of the ASG's settings are mirrored from the ASG selected in the Workflow's Infrastructure Definition. (This ASG is also called the *base Auto Scaling Group.*) However, this setup dialog enables you to provide the remaining settings, using the following options: + + + +| | | +| --- | --- | +| **Field** | **Description** | +| **Auto Scaling Group Name** | Either enter a name for the ASG that Harness will create (e.g., `MyApp_MyAmiService_MyEnv`), or accept the name that Harness automatically generates. Entering a custom name will make your ASG easier to identify when you add it to an Infrastructure Definition. | +| **Instances** | Select **Fixed** to enforce a Max, Min, and Desired number of instances. Select **Same as already running Max Instances** to use scaling settings on the last ASG deployed by this Harness Workflow. If this is the first deployment and you select **Same as already running Max Instances**, Harness uses a default of Min 0, Desired 6, and Max 10. Harness does not use the Min, Max, and Desired settings of the base ASG. | +| **Max Instances** | This field is displayed only if you have selected **Fixed** Instances above; in that case, an entry is required. 
Enter the maximum number of instances that the ASG should have at any time. This number corresponds to the AWS ASG's **Max** setting, and also constrains the **Desired Capacity**. | +| **Min Instances** | Optionally, enter the minimum number of instances that the ASG should have at any time. This number corresponds to the AWS ASG's **Min** setting, and can be `0`. (Field is displayed only if you have selected **Fixed** Instances above.) | +| **Desired Instances** | Optionally, set the target number of instances for the ASG to maintain. This number corresponds to the AWS ASG's **Desired Capacity** setting. (Field is displayed only if you have selected **Fixed** Instances above.) | +| **Resize Strategy** | Select whether to resize new ASGs upward first, or to resize old ASGs downward first. The typical production selection is **Resize New First**, to maintain the highest availability. The **Downsize Old First** option constrains usage and costs, especially during testing. | +| **Auto Scaling Steady State Timeout (mins)** | Enter how long Harness should wait for ASGs to register and reach steady state. This setting (which is internal to Harness) also defines the interval that Harness will wait before downsizing old ASGs and deregistering them from the Target Group(s). | + +The **Instances** settings support [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables), such as [Workflow variable expressions](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). Certain settings in this dialog correspond to AWS Console options, as shown here: + +![](./static/ami-deployment-32.png) + +##### Setup AutoScaling Group in Deployment + +Let's look at an example where the AWS AutoScaling Group Setup, configured as shown above, is deployed. 
Here is the step in the Harness Deployments page: + +![](./static/ami-deployment-33.png) + +Here's the output, showing a successful setup: + + +``` +INFO 2019-06-04T19:03:16.561+0000 Starting AWS AMI Setup +INFO 2019-06-04T19:03:18.121+0000 Starting AWS AMI Setup +Getting base auto scaling group +Getting base launch configuration +Getting all Harness managed autoscaling groups +Getting last deployed autoscaling group with non zero capacity +Creating new launch configuration [Harness__Verification_AMI__Service__test__Quality__Assurance__Setup_Virginia__799] +INFO 2019-06-04T19:03:19.926+0000 Creating new AutoScalingGroup [Harness__Verification_AMI__Service__test__Quality__Assurance__Setup_Virginia__799] +Sending request to delete old auto scaling groups to executor +Completed AWS AMI Setup with new autoScalingGroupName [Harness__Verification_AMI__Service__test__Quality__Assurance__Setup_Virginia__799] +AutoScalingGroup [Harness__Verification_AMI__Service__test__Quality__Assurance__Setup_Virginia__795] activity [Terminating EC2 instance: i-0d4ad1a03aee7f6dc] progress [100 percent] , statuscode [Successful] details [{"Subnet ID":"subnet-3669906a","Availability Zone":"us-east-1a"}] +AutoScalingGroup [Harness__Verification_AMI__Service__test__Quality__Assurance__Setup_Virginia__795] activity [Launching a new EC2 instance: i-0d4ad1a03aee7f6dc] progress [100 percent] , statuscode [Successful] details [{"Subnet ID":"subnet-3669906a","Availability Zone":"us-east-1a"}] +``` + +#### Step 2: Deploy Service + +In Step 2, select **Upgrade AutoScaling Group** to define how many instances to deploy in the Auto Scaling Group, as either a count or a percentage. + +Every new AMI/ASG deployment creates a new ASG. The instances in ASGs used by previous deployments are downsized to a max count of 3. 
Additional instances are detached. + +![](./static/ami-deployment-34.png) + +This dialog provides the following options: + + + +| | | +| --- | --- | +| **Field** | **Description** | +| **Desired Instances (cumulative)** | Set the number of Amazon EC2 instances that the Auto Scaling Group will attempt to deploy and maintain. This field corresponds to the ASG's **Desired Capacity** setting, and interacts with the adjacent **Instance Unit Type** field:* Where **Instance Unit Type** is set to **Count**, enter the actual number of instances. +* Where **Instance Unit Type** is set to **Percent**, enter a percentage of the available capacity. + +Either way, your setting here cannot exceed your **Max Instances** capacity setting (which is a count) in the Workflow's preceding [Setup AutoScaling Group](#asg_setup_step) step. | +| **Instance Unit Type (Count/Percent)** | Set the unit of measure, as either **Count** or **Percent**. | + +This diagram illustrates the relationship among Upgrade settings: + +![](./static/ami-deployment-35.png) + +##### Deploy Service Step in Deployment + +Using the **Upgrade AutoScaling Group** configuration shown above, requesting a modest **Desired Instances** count of **1**, here is the **Deploy Service** step in the Harness Deployments page: + +![](./static/ami-deployment-36.png) + +Here is partial output, showing a successful resizing and deployment: + + +``` +INFO 2019-06-04T19:03:23.916+0000 Starting AWS AMI Deploy +Getting existing instance Ids +Resizing Asgs +Resizing AutoScaling Group: [Harness__Verification_AMI__Service__test__Quality__Assurance__Setup_Virginia__799] to [1] +Set AutoScaling Group: [Harness__Verification_AMI__Service__test__Quality__Assurance__Setup_Virginia__799] desired capacity to [1] +Successfully set desired capacity +INFO 2019-06-04T19:03:55.916+0000 AutoScalingGroup [Harness__Verification_AMI__Service__test__Quality__Assurance__Setup_Virginia__799] activity [Launching a new EC2 instance: i-0dca2187fac9b6e0f] progress [30 
percent] , statuscode [PreInService] details [{"Subnet ID":"subnet-3669906a","Availability Zone":"us-east-1a"}] +Waiting for instances to be in running state. pending=1 +INFO 2019-06-04T19:04:10.530+0000 AutoScalingGroup [Harness__Verification_AMI__Service__test__Quality__Assurance__Setup_Virginia__799] activity [Launching a new EC2 instance: i-0dca2187fac9b6e0f] progress [30 percent] , statuscode [PreInService] details [{"Subnet ID":"subnet-3669906a","Availability Zone":"us-east-1a"}] +AutoScaling group reached steady state +Setting min capacity of Asg[Harness__Verification_AMI__Service__test__Quality__Assurance__Setup_Virginia__799] to [1] +[...] +AutoScaling group reached steady state +INFO 2019-06-04T19:06:13.937+0000 AutoScaling Group resize operation completed with status:[SUCCESS] +``` + +#### Basic Workflow Deployment + +Now that the setup is complete, you can click **Deploy** in the Workflow to deploy the artifact to your Auto Scaling Group. + +![](./static/ami-deployment-37.png) + +Next, select the AMI you want to deploy. (Harness populates this list from the Artifact Source settings in the AMI Service you created.) Then click **SUBMIT**. + +![](./static/ami-deployment-38.png) + +The Workflow deploys. Note that the Deployments page displays details about the deployed instances. + +![](./static/ami-deployment-39.png) + +To verify the completed deployment, log into your AWS Console and locate the newly deployed instance(s). + +![](./static/ami-deployment-40.png) +### Rollback Steps + +When you create an AMI Workflow, its **Rollback Steps** section automatically includes a **Rollback Service** step. This step will execute when Harness needs to roll back your deployment and restore the previous working version. + +![](./static/ami-deployment-41.png) + +The configuration options available here depend on the deployment type. 
For general information about Rollback options, see [Workflows](https://docs.harness.io/article/m220i1tnia-workflow-configuration#rollback_steps). + + +#### Multi-Service Rollback + +In an AMI Multi-Service Workflow's **Rollback Service** step, click **Rollback AutoScaling Group** to open the dialog shown below: + +![](./static/ami-deployment-42.png) + +Enable the single option here, **Rollback all phases at once**, if you want to simultaneously roll back all of the AMI Workflow's Phases, up to the Phase where deployment failed. + +For example, if a Workflow's Phase 2 fails to deploy, both Phase 2 and Phase 1 will be rolled back simultaneously. (Harness will ignore any Phase 1 rollback strategy settings.) + +If this check box is not enabled, Harness will roll back Phase 2 and then Phase 1, according to each phase's rollback strategy. + +#### Rollbacks and Downsizing Old ASGs + +For details on how previous ASGs are downsized and what happens during rollback, see [How Does Harness Downsize Old ASGs?](../../concepts-cd/deployment-types/aws-ami-deployments-overview.md#how-does-harness-downsize-old-as-gs) + +### Support for Scheduled Scaling + +Currently, this feature is behind the Feature Flag `AMI_ASG_CONFIG_COPY`. The base ASG you provide to Harness for creating the new ASG can use [AWS Scheduled Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html) (scheduled actions). + +There are a few important considerations: + +* When configuring the base ASG, the `ScheduledActions` process must be suspended so that it won’t scale the base ASG. Once Harness creates the new ASG from the base, Harness will enable the `ScheduledActions` process in the new ASG (if the base ASG had it). See [Suspending and resuming a process for an Auto Scaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html) from AWS. +* Harness currently supports only the UTC timezone format in scheduled actions. 
Time values in scheduled actions in the base ASG need to be configured in UTC. + + +### Troubleshooting + +The following errors might occur when setting up and deploying AMIs in Harness. + +#### Auto Scaling Group Not Showing Up + +When you [configure](#add_a_service_infrastructure) an **Infrastructure Definition**, the **Infrastructure Definition** dialog's **Auto Scale Group** drop-down will initially be empty. This is expected behavior. Simply allow a few seconds for the drop-down to populate. + +#### Couldn't Find Reference AutoScalingGroup + +If a Workflow's [Setup AutoScaling Group](#asg_setup_step) step fails with a message of the following form, this indicates that at least one **Infrastructure Definition** in the Workflow's Environment is configured with an ASG that is not currently available on AWS: + +`Couldn't find reference AutoScalingGroup: [ECS__QA__Application_AMI_QA__245] in region: [us-east-1]` + +To correct this: + +1. In Harness Manager, navigate to your Application's **Environments** details page. +2. Open each Infrastructure Definition used by the Workflow that failed, and navigate to the dialog's lower configuration section. Ensure that the **Auto Scaling Groups** field points to an ASG to which you currently have access in the AWS Console. +3. If this does not allow your deployment to proceed, you might also need to toggle the **Host Name Convention** field's entry between the `publicDnsName` and `privateDnsName` primitives. (This depends on whether the Launch Configuration that created your ASG template was configured to create a public DNS name.) For details, see AWS' [IP Addressing in a VPC](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-in-vpc.html#as-vpc-ipaddress) topic. ![](./static/ami-deployment-43.png) + +Harness Manager will prevent you from simply removing a misconfigured Infrastructure Definition, if it's referenced by any of your Application's Workflows. 
So in some cases, you might find it easiest to create a new Infrastructure Definition, reconfigure your Workflow to use that new infrastructure, and then delete the broken Infrastructure Definition(s). +### Next Steps + +* See [Continuous Verification](https://docs.harness.io/article/myw4h9u05l-verification-providers-list) and [24/7 Service Guard Overview](https://docs.harness.io/article/dajt54pyxd-24-7-service-guard-overview) to add Verification Providers to your AMI deployment and running services. +* [AMI Blue/Green Deployment](ami-blue-green.md). +* [AMI Canary Deployment](ami-canary.md). + diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/ami-deployments-overview.md b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/ami-deployments-overview.md new file mode 100644 index 00000000000..5718b19bdc9 --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/ami-deployments-overview.md @@ -0,0 +1,26 @@ +--- +title: AMI/ASG Deployments How-tos +description: Links to Harness' deployment guides covering AMI (Amazon Machine Image) Basic, Canary, and Blue/Green deployments. +# sidebar_position: 2 +helpdocs_topic_id: ox5ewy2sf4 +helpdocs_category_id: mizega9tt6 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness' AMI deployment guides walk you through deploying Amazon Machine Images to AWS (Amazon Web Services). + +1. [AMI Basic Deployment](ami-deployment.md) +2. [AMI Canary Deployment](ami-canary.md) +3. [AMI Blue/Green Deployment](ami-blue-green.md) + +For an overview of AWS AMI deployments, see [AWS AMI Deployments Overview](../../concepts-cd/deployment-types/aws-ami-deployments-overview.md). + +For steps on using provisioners as part of the deployment, see [Infrastructure Provisioners Overview](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner). + +### Only Private AMIs are Supported + +Harness only supports private AMIs. 
+ +AWS EC2 allows you to share an AMI so that all AWS accounts can launch it. AMIs shared this way are called public AMIs. Harness does not support public AMIs. + diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/ami-elastigroup.md b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/ami-elastigroup.md new file mode 100644 index 00000000000..6713882fcfa --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/ami-elastigroup.md @@ -0,0 +1,1021 @@ +--- +title: AMI Spotinst Elastigroup Deployment +description: Configure and execute AMI (Amazon Machine Image) Basic, Blue/Green, and Canary deployments via the Spotinst Elastigroup cloud-cost optimization platform. +# sidebar_position: 2 +helpdocs_topic_id: bkxhdsur2z +helpdocs_category_id: mizega9tt6 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This guide outlines how to configure and execute AMI (Amazon Machine Image) deployments—using Blue/Green and Canary strategies—via the Spotinst Elastigroup management platform. + +Currently, Harness integrates with Spotinst only for deployments to AWS (Amazon Web Services) via Elastigroups. + +## Before You Begin + +* [AMI Basic Deployment](ami-deployment.md) +* [AMI Blue/Green Deployment](ami-blue-green.md) +* [AMI Canary Deployment](ami-canary.md) + + +## Overview + +Harness can integrate AMI deployments with Spotinst's [Elastigroup](https://spotinst.com/products/elastigroup/) cloud-cost optimization platform. Elastigroup predicts the availability of AWS' discounted excess capacity (Spot Instances), and automatically reserves new capacity to maintain your applications' availability at reduced cost. + +This guide outlines how to set up and orchestrate your AWS, Spotinst, and Harness resources for AMI Blue/Green or Canary deployments. 
Here is an example of a completed Blue/Green deployment on the Harness Deployments page: + +![](./static/ami-elastigroup-78.png) + +The deployed instances in your corresponding Elastigroup will appear in the Spotinst Console: + +![](./static/ami-elastigroup-79.png) + + +## Prerequisites: AWS, Spotinst, and Harness Resources + +Before creating your Harness [Infrastructure Definition](#infrastructure_definition) and [Blue/Green](#blue_green) or [Canary Workflows](#canary), you will need to set up the following resources. + +Several of these resources are also prerequisites for Harness AMI [Basic](ami-deployment.md), [Blue/Green](ami-blue-green.md), or [Canary](ami-canary.md) deployments that do not rely on Elastigroups. To minimize duplication, we link out to the corresponding deployment guides for some details. + +### AWS Prerequisites + +For an AMI Canary deployment, you must set up the following resources within Amazon Web Services: + +* A working AMI that Harness will use to create your instances. +* At least one [Application Load Balancer](https://docs.aws.amazon.com/en_pv/elasticloadbalancing/latest/application/introduction.html) (ALB) or [Classic Load Balancer](https://docs.aws.amazon.com/en_pv/elasticloadbalancing/latest/classic/introduction.html). (See the [Spotinst documentation](https://docs.spot.io/elastigroup/tools-integrations/aws-load-balancers-elb-alb) for Load Balancer support.) + +An AMI Blue/Green deployment has these further requirements: + +* A pair of [Target Groups](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html), typically staging (Stage) and production (Prod), both with the **instance** target type. +* A Load Balancer with listeners for both your Target Groups' ports. 
+ + +### Spotinst Prerequisites + +Within Spotinst, you must configure at least one Elastigroup cluster that matches your AWS configuration's AMI, VPC (virtual private cloud), Load Balancer(s), security groups, availability zones, and allowed instance types. + +![](./static/ami-elastigroup-80.png) + +For details, see [Spotinst tutorials](https://docs.spot.io/elastigroup/tutorials/). + + +### Harness Prerequisites + +Within Harness, you'll need to set up the following resources. (Some of these are covered in detail in Harness' [AMI Basic Deployment](ami-deployment.md#basic-deploy) Guide. Spotinst-specific setup is covered in the sections below.) + +* A Delegate [installed and running](ami-deployment.md#install-and-run-the-harness-delegate) in an AWS instance or ECS Cluster. +* An AWS Cloud Provider, preferably configured to assume the Delegate’s IAM role for the connection to AWS. (See [Set Up Cloud Providers](#cloud_providers) below.) +* A [Spotinst Cloud Provider](#spotinst_cloud_provider), which connects to your Spotinst account using your credentials on that account. +* A Harness Application. +* An AMI-based [Service](#service). +* An Environment with an [Infrastructure Definition](https://docs.harness.io/article/v3l3wqovbe-infrastructure-definitions) that specifies your Load Balancer(s). + +For Blue/Green Deployments, the Infrastructure Definition will also specify Stage and Prod Target Groups. You import this configuration from your Elastigroup, as outlined below in [Define the Infrastructure](#infrastructure_definition). At deploy time, Harness will override much of the above base configuration with the instance and capacity targets that you specify in your Workflow. + + +#### Harness Service + +Set up your Harness Service as we document for a Harness AMI Basic Deployment in [AMI Service Setup](ami-deployment.md#ami-service-setup). 
+ +After you create your Service, you have the option of using the [User Data](ami-deployment.md#deployment-specification-user-data) link to enter configuration scripts and directives that your AWS instance will run upon launch. + +You can also specify [Config Variables](https://docs.harness.io/article/eb3kfl8uls-service-configuration#config_variables), and then reference them in your [Infrastructure Definitions](#infrastructure_definition). + + +## Set Up Cloud Providers + +Although most Harness deployments rely on a single Cloud Provider, Elastigroup deployments require both a Spotinst Cloud Provider and an AWS Cloud Provider. (You will need to reference both when you create your [Infrastructure Definition](#infrastructure_definition).) + + +### AWS Cloud Provider + +Follow the instructions in [AWS Cloud Provider Setup](ami-deployment.md#aws-cloud-provider-setup) to create a Cloud Provider that references the Delegate you created to manage your AWS credentials. In this example, select the **Assume IAM Role on Delegate** option, and reference your Delegate via **Delegate Tag**. + +The Delegate must be in the same AWS VPC and subnet that you plan to use for your AWS resources. Once filled in, the AWS Cloud Provider dialog will look something like this: + +![](./static/ami-elastigroup-81.png) + + +### Spotinst Cloud Provider + +To set up the Spotinst Cloud Provider, follow the steps in [Spotinst Cloud Provider](https://docs.harness.io/article/whwnovprrb-cloud-providers). + +Keep the Spotinst Console open to [copy its configuration](#add_elastigroup_config) into your Harness Infrastructure Definition. + + +### Define the Infrastructure + +The [Infrastructure Definition](https://docs.harness.io/article/v3l3wqovbe-infrastructure-definitions) is where you specify the target infrastructure for your deployments. You'll configure your Infrastructure Definition for Elastigroup in this section (working in both Harness Manager and the Spotinst Console). 
You'll then select this Infrastructure Definition as your target deployment environment when you later create a [Blue/Green](#blue_green) or [Canary](#canary) Workflow. + + +### Add the Infrastructure Definition + +1. Within your Harness Application, select an existing **Environment**, or create a new one. +2. In the Environment's **Infrastructure Definition** section, click **Add Infrastructure Definition**. +3. In the resulting **Infrastructure Definition** dialog, enter a **Name** to identify this Infrastructure Definition when you add it to a Workflow. +4. In **Cloud Provider Type**, select **Amazon Web Services**. +5. In **Deployment Type**, select **Amazon Machine Image**. This expands the **Infrastructure Definition** dialog's top section to look something like this: ![](./static/ami-elastigroup-82.png) +6. Select the check box labeled **Use Spotinst Elastigroup to Manage Infrastructure**. +7. For this example, accept the default **Use Already Provisioned Infrastructure** option. +If you have configured an [Infrastructure Provisioner](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner) in Harness, you can use that configuration by instead selecting the **Dynamically Provisioned** option. For details, see our AMI Blue/Green Deployment Guide's [Infrastructure Provisioners](#infrastructure_provisioners), and our AMI [CloudFormation](ami-blue-green.md#infrastructure-provisioners) and [Terraform](../../terraform-category/terrform-provisioner.md#ami-and-auto-scaling-group) examples. +8. Select the **AWS Cloud Provider** that you [created earlier](#aws_cloud_provider). +9. Select the **Spotinst Cloud Provider** that you [created earlier](#spotinst_cloud_provider). +10. Select the **Region** that you configured earlier in [AWS](#aws_prereq) and [Spotinst](#spotinst_prereq). 
The dialog's center section will now look something like this:![](./static/ami-elastigroup-83.png) + +Next, you'll complete the Infrastructure Definition by copying your Elastigroup configuration from Spotinst and pasting it into this dialog. + + +### Add Elastigroup Configuration + +To populate the **Elastigroup Configuration** field with your configuration JSON from Spotinst: + +1. In the Spotinst Console, click the [Elastigroups tab](https://console.spotinst.com/#/aws/ec2/elastigroup/list). +2. Click the Elastigroup you configured earlier in [Spotinst Prerequisites](#spotinst_prerequisites). +3. From the top-right **Actions** menu, select **View Configuration**. + ![](./static/ami-elastigroup-84.png) +4. In the resulting **Configuration** modal, select all the JSON and copy it to your clipboard. + ![](./static/ami-elastigroup-85.png) + If you prefer, click **EXPORT** to save the JSON to a file. +5. Back in Harness' Infrastructure Definition dialog, paste the JSON into the **Elastigroup Configuration** field. This will make your Elastigroup infrastructure available to Harness deployments. +6. Select **Scope to Specific Services**, and—in the adjacent drop-down—select the AMI [Service](#service) that you created earlier and will deploy to this infrastructure. (This scoping ensures that this Infrastructure Definition will be available whenever a Workflow, or Pipeline Phase, is set up for this Service.) + ![](./static/ami-elastigroup-86.png) + The **Infrastructure Definition** dialog's lower section will now look something like this: + ![](./static/ami-elastigroup-87.png) +7. Click **Submit** to add the new Infrastructure Definition to your Harness Environment. + +You can now proceed to create a [Blue/Green](#blue_green) or [Canary](#canary) Workflow. The next few sections cover additional Infrastructure Definition details and options. 
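For orientation, the pasted **Elastigroup Configuration** JSON has roughly the following shape. This is a trimmed, illustrative sketch with placeholder IDs and names, not a working configuration; always export and paste your own group's JSON from the Spotinst Console rather than copying this:

```json
{
  "group": {
    "name": "example-elastigroup",
    "region": "us-east-1",
    "capacity": { "minimum": 1, "maximum": 4, "target": 2, "unit": "instance" },
    "compute": {
      "instanceTypes": { "ondemand": "t3.medium", "spot": ["t3.medium", "t3.large"] },
      "launchSpecification": {
        "imageId": "ami-00000000000000000",
        "securityGroupIds": ["sg-00000000"],
        "loadBalancersConfig": {
          "loadBalancers": [{ "name": "example-tg", "type": "TARGET_GROUP" }]
        }
      }
    }
  }
}
```

The exact fields depend on how your Elastigroup is configured in Spotinst; the elements that matter most to Harness here are `imageId` and `loadBalancers`, which Harness overrides at deploy time.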
+ +### Elastigroup Configuration Overrides + +When you deploy your Workflows, Harness will override much of the Elastigroup Configuration JSON that you imported. Harness will replace the configured `imageId` element with the actual AMI artifact you choose to deploy in the Workflow. + +In Blue/Green deployments, Harness will also override the `loadBalancers` element, substituting the Load Balancers that you specify in the Workflow. Even if the `loadBalancers` element were absent from the **Elastigroup Configuration** field, you could still deploy a properly configured Workflow using this Infrastructure Definition. + + +### Service Variables in Elastigroup Configuration + +You also have the option to use Harness variables within your Infrastructure Definition's **Elastigroup Configuration** field. + +Infrastructure Definitions do not currently support auto-fill for expressions. So you must manually type Service variables into the JSON, in this format: `${serviceVariable.var_name}`. Create the variables in your [Service](#service)'s **Config Variables** section. + +Then insert them in the **Elastigroup Configuration** JSON using the format above. For further details, see [Add Service Config Variables](https://docs.harness.io/article/q78p7rpx9u-add-service-level-config-variables). + + +## Basic Workflow and Deployment + +Assuming that you've set up all [prerequisites](#prerequisites), the following sections outline how to create a Basic Workflow and deploy your AMI. To avoid duplication, they focus on Elastigroup-specific configuration and deployment. For background and details, please refer to the following [AMI Basic Deployment Guide](ami-deployment.md#basic-deploy) sections: + +* [Basic Workflow and Deployment](ami-deployment.md#basic-deploy) +* [Rollback Steps](ami-deployment.md#rollback-steps) + +Elastigroups perform the functions that Auto Scaling Groups perform in standard AMI deployments. By default, Harness AMI Basic Workflows have four deployment steps: + +1. 
[Elastigroup Setup](#basic_setup_asg): Specify how many instances to launch, and their steady state timeout. +2. [Elastigroup Deploy](#upgrade_asg): Specify how many instances to deploy, as a number or percentage of the Elastigroup parameters you've set up. +3. **Verify Staging**: Optionally, specify Verification Providers or Collaboration Providers. +4. **Wrap Up**: Optionally, specify post-deployment commands, notifications, or integrations. + +Harness preconfigures only the first two steps. Below, we outline those steps' defaults and options, with examples of the deployment logs' contents at each step. + +The remaining two steps are placeholders, to which you can add integrations and commands. For details on adding **Verify Staging** integrations, see [Continuous Verification](https://docs.harness.io/article/myw4h9u05l-verification-providers-list). + +Your Workflows can use Harness' built-in `${artifact.metadata.tag}` variable to refer to tagged AMIs. For example, if an AMI has an AWS tag named `harness`, you can refer to that AMI within Harness as `${artifact.metadata.harness}`. For details about this convention, see [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables#variables_list). This can be useful in [triggering Workflows and Pipelines](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2#add_a_trigger). + +### Create a Basic Workflow + +To create a Basic Workflow for AMI Elastigroup deployment: + +1. In your Application, click **Workflows**. +2. Click **Add Workflow**. The **Workflow** dialog appears. ![](./static/ami-elastigroup-88.png) +3. In **Name**, enter a name for your Workflow, such as **Elastigroup Basic**. +4. Optionally, add a **Description** of this Workflow's purpose. +5. In **Workflow Type**, select **Basic Deployment**. +6. Select the **Environment** and [Service](#service) that you created for your AMI Elastigroup deployments. +7. 
Select the **Infrastructure Definition** that you [configured earlier](#add_infra_def) for AMI Elastigroup deployments. The dialog will now look something like this: ![](./static/ami-elastigroup-89.png) +8. Click **Submit**. The new Basic Workflow is preconfigured. ![](./static/ami-elastigroup-90.png) + +Next, we will examine options for configuring the Basic deployment's first two steps. + + +### Step 1: Elastigroup Setup + +In Step 1, select **Elastigroup Setup** to open a dialog where you can configure the Elastigroup that Harness will create for the AMI you are deploying. + +![](./static/ami-elastigroup-91.png) + +If you select **Fixed** Instances, the dialog expands as shown here: + +![](./static/ami-elastigroup-92.png) + +This expanded Setup dialog provides the following fields, all of which require entries: + + + +| | | +| --- | --- | +| **Field** | **Description** | +| **Elastigroup Name** | Either enter a name for the Elastigroup that Harness will create (e.g., `MyApp_MyAmiService_MyInfra`), or accept the default name that Harness automatically generates. Entering a custom name will make your Elastigroup easier to identify. | +| **Min Instances** | Enter the minimum number of instances that the Elastigroup should have at any time. This number can be `0`. | +| **Target Instances** | Set the target number of instances for the Elastigroup to maintain. | +| **Max Instances** | Enter the maximum number of instances that the Elastigroup should have at any time. | +| **Service Steady State Wait Timeout** | Enter how many minutes Harness should wait for Elastigroups to register instances and reach steady state. (This setting is internal to Harness.) 
| + +When you are done, the dialog will look something like this: + +![](./static/ami-elastigroup-93.png) + +This dialog's instance settings correspond directly to the resulting Elastigroup's settings, as shown here: + +![](./static/ami-elastigroup-94.png) + + +#### Elastigroup Setup in Deployment + +Let's look at an example where the Elastigroup Setup—configured as shown above—is deployed. Here is the step in the Harness Deployments page: + +![](./static/ami-elastigroup-95.png) + +Here is partial output, showing a successful setup: + + +``` +Querying Spotinst for existing Elastigroups with prefix: [AMI_Elastigroup_Basic__] +Sending request to create Elastigroup with name: [AMI_Elastigroup_Basic__1] +Created Elastigroup with ID: [sig-1da775dc] +Completed setup for Spotinst +``` + +### Step 2: Elastigroup Deploy + +In Step 2, select **Elastigroup Deploy** to open a dialog where you can define how many instances to deploy in the Elastigroup, as either a count or a percentage: + +![](./static/ami-elastigroup-96.png) + +The right-hand drop-down offers two options that govern the values that you enter in the adjacent field: + +* **Percent:** Specify a percentage of the **Target Instances** that you set in [Step 1: Elastigroup Setup](#basic_setup_asg). +* **Count:** Specify an exact number of instances. (This cannot exceed the **Max Instances** that you set in [Step 1: Elastigroup Setup](#basic_setup_asg).) 
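The way these two options resolve to a concrete instance count can be sketched in a few lines. This is an illustrative Python sketch, not Harness code; the helper name, the round-up behavior for percentages, and the cap at **Max Instances** are assumptions based on the example deployments in this guide.

```python
import math

def resolve_desired_count(unit, value, target_instances, max_instances):
    """Resolve an Elastigroup Deploy setting to an instance count.

    Hypothetical helper: 'Percent' is taken against Target Instances
    (rounded up), while 'Count' may not exceed Max Instances.
    """
    if unit == "Percent":
        return min(math.ceil(target_instances * value / 100), max_instances)
    if unit == "Count":
        if value > max_instances:
            raise ValueError("Count cannot exceed Max Instances")
        return value
    raise ValueError(f"Unknown unit: {unit}")
```

For example, with **Target Instances** of 4, a 25 percent deploy resolves to 1 instance, matching the `target: [1]` requests in the example logs.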
+ +For this example, we'll use a **Count** of **2**: + +![](./static/ami-elastigroup-97.png) + +#### Elastigroup Deploy Step in Deployment + +Using the **Elastigroup Deploy** configuration shown above—requesting a modest **Desired Instances** count of **2**—here is the **Deploy Service** step in the Harness Deployments page: + +![](./static/ami-elastigroup-98.png) + +Here's partial output, showing a successful resizing and deployment: + + +``` +Current state of Elastigroup: [sig-1da775dc], min: [0], max: [0], desired: [0], ID: [sig-1da775dc] +Sending request to Spotinst to update Elastigroup: [sig-1da775dc] with min: [1], max: [4] and target: [2] +Request Sent to update Elastigroup +... +Waiting for Elastigroup: [sig-1da775dc] to reach steady state +Desired instances: [2], Total instances: [2], Healthy instances: [0] for Elastigroup: [sig-1da775dc] +... +Desired instances: [2], Total instances: [2], Healthy instances: [2] for Elastigroup: [sig-1da775dc] +Elastigroup: [sig-1da775dc] reached steady state +``` + +### Elastigroup Rollback Steps + +By default, each Elastigroup Basic Workflow includes a **Rollback Steps** section, containing an **Elastigroup Rollback** step. There's nothing to configure in this step. + +![](./static/ami-elastigroup-99.png) + +To see how the default behavior works, see the [Canary > Rollback Steps](#rollback_1) section. + + +### Basic Workflow Deployment + +Now that the setup is complete, you can click **Deploy** in the Workflow to deploy the artifact to your Elastigroup. + +![](./static/ami-elastigroup-100.png) + +In the resulting dialog, use the **Build / Version** drop-down to select the AMI you want to deploy. (Harness populates this list from the Artifact Source settings in the [Service](#service) you created.) Then click **Submit**. + +![](./static/ami-elastigroup-101.png) + +Once the Workflow deploys, the Deployments page confirms success. 
+
+![](./static/ami-elastigroup-102.png)
+
+To verify the completed deployment, log into your Spotinst Console and display the newly deployed instance(s).
+
+![](./static/ami-elastigroup-103.png)
+
+
+### Blue/Green Deployments Overview
+
+Assuming that you've set up all [prerequisites](#prerequisites), the following sections outline how to create a Blue/Green Workflow and deploy your AMI.
+
+There are two Blue/Green deployment options for Spotinst, defined by the traffic-shifting strategy you want to use:
+
+* **Incrementally Shift Traffic** — In this Workflow, you specify a Production Listener and Rule with two Target Groups for the new Elastigroup to use. Next, you add multiple **Shift Traffic Weight** steps.
+Each Shift Traffic Weight step increments the percentage of traffic that shifts to the Target Group for the new Elastigroup.
+Typically, you add Approval steps between the Shift Traffic Weight steps to confirm that the traffic may be increased.
+* **Instantly Shift Traffic** — In this Workflow, you specify Production and Stage Listener Ports and Rules to use, and then a **Swap Production with Stage** step swaps all traffic from Stage to Production.
+
+You specify the traffic-shift strategy when you create the Harness Blue/Green Workflow for your AMI Spotinst deployment. The steps available in the Workflow depend on the strategy you select.
+
+### Blue/Green with Incremental Traffic Shift
+
+Currently, this feature is behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. This deployment method lets you add Workflow steps to incrementally shift traffic from the Target Group used by the previous Elastigroup to the Target Group used by the new Elastigroup you are deploying.
+
+With this strategy, you are not shifting traffic between stage and production environments. You are shifting traffic incrementally for a production environment. In this way, it is similar to a Canary strategy. 
+
+However, in a Canary deployment, the percentage of traffic that goes to the new Elastigroup is determined by the number of instances or the forwarding policy of the load balancer.
+
+With this Incremental Traffic Shift strategy, you are controlling the percentage of traffic sent to the new Elastigroup.
+
+In this section, we will review the requirements, and then describe how the traffic shifting works.
+
+Next, we will walk through building the Workflow for the deployment strategy.
+
+#### Harness Delegate, Service, and Infrastructure Definition Requirements
+
+There are no specific Harness Delegate, Service, and Infrastructure Definition requirements beyond the standard setup described in [Harness Prerequisites](#harness_prerequisites) and [Define the Infrastructure](#define_the_infrastructure) above.
+
+#### Spotinst Requirements
+
+These are described in [Spotinst Prerequisites](#spotinst_prerequisites) above.
+
+#### AWS ELB Listener Requirements
+
+You need the following AWS ELB setup:
+
+* AWS Application Load Balancer configured with one [Listener](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-listener.html).
+* The Listener must have a [Rule](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-update-rules.html) redirecting to two [Target Groups](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html) (TGs).
+* You will need registered target instances for the Target Groups.
+
+Here is an example of an ALB Listener rule redirecting to two TGs:
+
+![](./static/ami-elastigroup-104.png)
+
+This example uses the default rule, but in most cases you will have several rules for redirecting traffic to your services. Typically, the default rule is used as the last rule to catch actions that the other rules do not. 
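On the AWS side, a rule that splits traffic between two TGs is expressed as a weighted forward action. As an illustrative sketch (not Harness code), the following builds the `Actions` structure that the ELBv2 API accepts for such a rule; the ARNs are placeholders and the helper name is our own.

```python
def weighted_forward_action(prod_tg_arn, stage_tg_arn, prod_weight):
    """Build a forward action that splits traffic between two target
    groups. The dict shape follows the ELBv2 weighted-target-group API;
    the two weights always sum to 100."""
    if not 0 <= prod_weight <= 100:
        raise ValueError("weight must be between 0 and 100")
    return [{
        "Type": "forward",
        "ForwardConfig": {
            "TargetGroups": [
                {"TargetGroupArn": prod_tg_arn, "Weight": prod_weight},
                {"TargetGroupArn": stage_tg_arn, "Weight": 100 - prod_weight},
            ]
        },
    }]
```

With boto3, for example, this structure could be applied to an existing rule via `elbv2_client.modify_rule(RuleArn=rule_arn, Actions=weighted_forward_action(...))`.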
+
+#### Target Group Weight Shifting
+
+The ALB Listener has two TGs with the following weights:
+
+* One TG has a weight of 100 (100%) — This TG is used for the **existing Elastigroup** (pre-deployment).
+* The other TG has a weight of 0 — This TG is used for the **new Elastigroup** you are deploying.
+
+![](./static/ami-elastigroup-105.png)
+
+When Harness first creates the new Elastigroup, it attaches its instances to the TG with a weight of 0.
+
+Later in the Workflow, you add **Shift Traffic Weight** step(s) to adjust the weight for this TG. For example, here is a **Shift Traffic Weight** step adjusting the weight to 10%:
+
+![](./static/ami-elastigroup-106.png)
+
+The weight for the **other** TG is automatically set to the remaining percentage. In this case, 90%.
+
+You keep adding **Shift Traffic Weight** steps until the weight of the TG for the new Elastigroup is 100%.
+
+You can manipulate traffic shifting using as many **Shift Traffic Weight** steps as you like. Typically, you add [Approval](https://docs.harness.io/article/0ajz35u2hy-approvals) steps between each **Shift Traffic Weight** step to ensure that everything is running smoothly. For example, you can test the new feature(s) of your app before approving. This is a simple way to incorporate A/B testing into your Workflow.
+
+Approval steps are very useful because they enable you to cancel a deployment and return to the pre-deployment traffic weighting with a single step. The Workflow looks something like the following. Here, the names of the **Shift Traffic Weight** steps have been changed to describe the weights they are assigning (10%, 100%):
+
+![](./static/ami-elastigroup-107.png)
+
+When you deploy the Workflow, you can see the traffic shift:
+
+![](./static/ami-elastigroup-108.png)
+
+Let's walk through an example.
+
+#### Create the Blue/Green Workflow
+
+1. In the Harness Application containing the Service and Infrastructure Definition you want to use, click **Workflows**.
+2. 
Click **Add Workflow**.
+3. Enter a name for the Workflow.
+4. In **Workflow Type**, select **Blue/Green Deployment**.
+5. Select an **Environment** and **Service**, and the **Infrastructure Definition** containing your imported Elastigroup JSON Configuration.
+6. In **Traffic Shift Strategy**, select **Incrementally Shift Traffic using ELB**.
+7. Click **Submit**.
+
+Harness creates the Workflow and automatically adds the steps for deployment.
+
+![](./static/ami-elastigroup-109.png)
+
+By default, only one **Shift Traffic Weight** step is added. Unless you want to shift the traffic in one step, you will likely add more **Shift Traffic Weight** steps to incrementally shift traffic.
+
+Let's walk through each step.
+
+#### Elastigroup ALB Shift Setup
+
+This step creates the new Elastigroup. In this step, you name the new Elastigroup, specify how many instances it uses, and then identify the production load balancer, listener, and Rule to use.
+
+![](./static/ami-elastigroup-110.png)
+
+1. Once you have named and defined the number of instances for the Elastigroup, in **Load Balancer Details**, click **Add**.
+2. In **Elastic Load Balancer**, select the ELB to use for production traffic.
+3. In **Production Listener ARN**, select the Listener to use. This is the listener containing the rule whose weights you will adjust.
+4. In **Production Listener Rule ARN**, select the ARN for the rule to use. You can find the ARN by its number in the AWS console.
+5. Click **Submit**.
+
+Most of the settings support [Workflow variable expressions](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). You can use these to template this step and then allow its values to be specified at deployment runtime. 
You can even pass in the values using a Harness [Trigger](https://docs.harness.io/article/revc37vl0f-passing-variable-into-workflows). When you deploy this Workflow, the output for the step will show the Elastigroup creation and load balancer assignments:
+
+
+```
+Loading Target group data for Listener: [arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxxxxxx:listener/app/satyam-lb/980c9f831d52b33c/f6e4d6f0f276b87f] at port: [null] of Load Balancer: [null]
+Rule Arn: [arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxxxxxx:listener-rule/app/satyam-lb/980c9f831d52b33c/f6e4d6f0f276b87f/f315cc7bdf7adfcb]
+Target group: [arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxxxxxx:targetgroup/satyam-tg-01/0e3345a650f122f4] is Prod, and [arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxxxxxx:targetgroup/satyam-tg-00/1f832e1015afe03d] is Stage
+Deleting Elastigroup with name: [satyam__STAGE__Harness] if it exists
+Creating new Elastigroup with name: [satyam__STAGE__Harness]
+Id of new Elastigroup: [sig-f99216ae]
+Getting data for Prod elastigroup with name: [satyam]
+Completed Blue green setup for Spotinst Traffic Shift.
+```
+You can see how it identifies both of the TGs for production and stage.
+
+You selected the rule to use, and Harness automatically selected the TG with a weight of 0 for production and the TG with a weight of 100 for stage.
+
+Later, in the **Shift Traffic Weight** step(s), these weights are what you will be adjusting.
+
+#### Elastigroup ALB Shift Deploy
+
+This step simply deploys the new Elastigroup you created. It brings the new Elastigroup to steady state with the number of instances you selected in the previous **Elastigroup ALB Shift Setup** step.
+
+There is nothing to configure in this step. You can see its output in the deployment details:
+
+![](./static/ami-elastigroup-111.png)
+
+#### Shift Traffic Weight
+
+This is the step where you shift traffic from the TG for the previous Elastigroup to the new Elastigroup you are deploying. 
+
+![](./static/ami-elastigroup-112.png)
+
+1. In **Name**, it can be helpful to name the step after the traffic shift percentage it will apply, such as **10%**. You might also choose to name it according to its position, like **Shift Step 1**.
+2. In **New Elastigroup Weight**, enter the percentage of traffic you want shifted from the previous Elastigroup to the new Elastigroup you are deploying.
+
+Most of the settings support [Workflow variable expressions](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). You can use these to template this step and then allow its values to be specified at deployment runtime. You can even pass in the values using a Harness [Trigger](https://docs.harness.io/article/revc37vl0f-passing-variable-into-workflows). Here is an example of what this step looks like when it shifts traffic 10% during deployment:
+
+![](./static/ami-elastigroup-113.png)
+
+You can see that the new Elastigroup is receiving 10% of traffic and the old Elastigroup is receiving 90%.
+
+Next, you will likely want to follow the Shift Traffic Weight step with an [Approval step](https://docs.harness.io/article/0ajz35u2hy-approvals). This way, you can test the new Elastigroup before shifting more traffic to it.
+
+Add more **Shift Traffic Weight** and **Approval** steps until you shift traffic to 100%.
+
+![](./static/ami-elastigroup-114.png)
+
+Now your Workflow is ready for deployment.
+
+When you deploy, the final **Shift Traffic Weight** step will look something like this:
+
+![](./static/ami-elastigroup-115.png)
+
+##### Downsize Old Elastigroup at 0% weight
+
+The **Downsize Old Elastigroup at 0% weight** setting should only be selected for the **Shift Traffic Weight** step that shifts traffic to **100%** in its **New Elastigroup Weight** setting.
+
+When this setting is enabled, the old Elastigroup is downsized. 
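The sequence of **Shift Traffic Weight** steps amounts to a simple weight schedule: each step sets the new Elastigroup's weight, the old Elastigroup automatically receives the remainder, and only the final 100% step should downsize the old group. A minimal illustrative sketch (not Harness code):

```python
def traffic_shift_plan(step_weights):
    """For each Shift Traffic Weight step, return the new Elastigroup's
    weight, the old Elastigroup's automatic remainder, and whether the
    old group should be downsized (only at the final 100% step)."""
    plan = []
    for w in step_weights:
        if not 0 <= w <= 100:
            raise ValueError("weight must be between 0 and 100")
        plan.append({"new": w, "old": 100 - w, "downsize_old": w == 100})
    return plan
```

For the two-step Workflow shown above, `traffic_shift_plan([10, 100])` yields a 10/90 split with no downsize, then a 100/0 split with the old group downsized.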
+
+##### Shift Traffic Weight Rollback
+
+In the Workflow **Rollback Steps**, Harness adds a **Shift Traffic Weight Rollback** step automatically. If rollback occurs, Harness rolls back to the pre-deployment Elastigroup and TG assignments.
+
+If no Spotinst service setup is found, Harness skips rollback.
+
+In many cases, Harness users place an Approval step in Rollback Steps also:
+
+![](./static/ami-elastigroup-116.png)
+
+### Blue/Green with Instant Traffic Shift
+
+In this scenario, a Blue/Green deployment reliably deploys your AMI(s) by maintaining new and old versions of Elastigroups that use these AMIs. The Elastigroups run behind an Application Load Balancer (ALB) using two listeners, Stage and Prod. These listeners forward respectively to two Target Groups (TGs), Stage and Prod, where the new and old Elastigroups are run.
+
+Elastigroups perform the functions that Auto Scaling Groups perform in standard AMI deployments. In the first stage of deployment, the new Elastigroup—created using the new AMI you are deploying—is attached to the Stage Target Group:
+
+![](./static/ami-elastigroup-117.png)
+
+Blue/Green deployments are achieved by swapping routes between the Target Groups—always attaching the new Elastigroup first to the Stage Target Group, and then to the Prod Target Group:
+
+![](./static/ami-elastigroup-118.png)
+
+#### Workflow Overview
+
+By default, AMI Elastigroup Blue/Green Workflows in Harness have five steps:
+
+1. [Elastigroup Setup](#setup_asg_bg): Specify how many instances to launch, their resizing order, and their steady state timeout.
+2. [Elastigroup Deploy](#upgrade_asg_bg): Specify the number or percentage of instances to deploy within the Elastigroup you've configured in the preceding step.
+3. **Verify Staging:** Optionally, specify Verification Providers or Collaboration Providers.
+4. [Route Update](#swap_routes_bg): Re-route requests to the Elastigroup that contains the newest stable version of your AMI.
+5. 
**Wrap Up:** Optionally, specify post-deployment commands, notifications, or integrations.
+
+Harness preconfigures the **Setup**, **Deploy**, and **Swap Routes** steps. Below, we outline those steps' defaults and options, with examples of the deployment logs' contents at each step.
+
+The **Verify Staging** and **Wrap Up** steps are placeholders, to which you can add integrations and commands. For details on adding **Verify Staging** integrations, see [Continuous Verification](https://docs.harness.io/article/myw4h9u05l-verification-providers-list).
+
+
+#### Create the Blue/Green Workflow
+
+In your Application, click **Workflows** > **Add Workflow**. The **Workflow** dialog appears.
+
+1. Enter a **Name**, and (optionally) enter a **Description** of this Workflow's purpose.
+2. In **Workflow Type**, select **Blue/Green Deployment**.
+3. Select the **Environment** and [Service](#service) that you created for your AMI Elastigroup deployments.
+4. Select the **Infrastructure Definition** you [configured earlier](#add_infra_def) for AMI Elastigroup deployments. The dialog will now look something like this:![](./static/ami-elastigroup-119.png)
+5. Click **Submit**. The new Blue/Green Workflow is preconfigured.![](./static/ami-elastigroup-120.png)
+
+Next, we will examine options for configuring the Blue/Green deployment's **Elastigroup Setup**, **Deploy Elastigroup**, and **Route Update** steps.
+
+
+#### Step 1: Elastigroup Setup
+
+In Step 1, select **Elastigroup Setup** to open a dialog where you can fine-tune the Elastigroup clusters and Load Balancer configuration for this deployment:
+
+![](./static/ami-elastigroup-121.png)
+
+For most settings here, see the corresponding [AMI Basic Workflow instructions](ami-deployment.md#basic-setup-asg). The following steps and recommendations are specific to Elastigroup Blue/Green deployments:
+
+1. In the **Elastigroup Name** field, you can choose to enter a short, recognizable name. 
(The default name will be a long string automatically concatenated from variables representing the Harness Application, Service, and Environment names.)
+2. Harness recommends setting the **Service Steady State Wait Timeout (min)** field to at least **20** minutes, as shown in the above screen capture. This is a safe interval to prevent deployments from failing while waiting for the [Route Update](#swap_routes_bg) step's Blue/Green switchover to complete.
+3. In the **AWS Load Balancer Configurations** section, click **Add** to open the controls shown below:![](./static/ami-elastigroup-122.png)
+4. Use the **Elastic Load Balancer** drop-down to select at least one load balancer specified in your Infrastructure Definition's [Elastigroup Configuration](#add_elastigroup_configuration).
+5. Select, or type in, your desired **Production Listener Port** and **Stage Listener Port**.
+
+You can use the **Add** link to select additional load balancers for this Workflow. The dialog's lower panel will now look something like this:![](./static/ami-elastigroup-123.png)
+
+**Listener Rules** — If you are using [Listener Rules](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#listener-rules) in your target groups, you can select them in **Production Listener Rule ARN** and **Stage Listener Rule ARN**.
+
+If you do not select a listener rule, Harness uses the Default rule. You do not need to select the Default rule.
+
+Default rules don't have [conditions](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#rule-condition-types), but other rules do. If you select other rules, ensure the traffic that will use the rules matches the conditions you have set in the rules.
+
+For example, if you have a path condition on a rule to enable [path-based routing](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/tutorial-load-balancer-routing.html), ensure that traffic uses that path. 
+ +1. Click **Submit** to save your Elastigroup Setup. + + +##### Elastigroup Setup in Deployment + +Let's look at an example deployment of the Elastigroup Setup shown above. Here is this step in the Harness Deployments page: + +![](./static/ami-elastigroup-124.png) + +Here's partial output, showing a successful setup: + + +``` +Querying aws to get the stage target group details for load balancer: [satyam-lb] +Using TargetGroup: [satyam-tg-1], ARN: [arn:aws:elasticloadbalancing:us-east-1:XXXXXXXXXXXX:targetgroup/satyam-tg-1/ce865f5f89254d34] with new ElastiGroup +Querying to find Elastigroup with name: [Spotinst__STAGE__Harness] +Sending request to create new Elasti Group with name: [Spotinst__STAGE__Harness] +Created Elastigroup with name: [Spotinst__STAGE__Harness] and id: [sig-3fffdecc] +Querying Spotinst for Elastigroup with name: [Spotinst] +Found existing Prod Elasti group with name: [Spotinst] and id: [sig-e3dbb309] +Completed Blue green setup for Spotinst +``` + +#### Step 2: Elastigroup Deploy + +In Step 2, define how many instances to deploy in the new Elastigroup cluster, as either a count or a percentage. This deployment example uses percentage scaling, with a desired target of 100%. + +![](./static/ami-elastigroup-125.png) + +For details about how this setting corresponds to AWS parameters, see the AMI Basic Workflow topic's [Step 2: Deploy Service](ami-deployment.md#upgrade-asg) section. 
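The "wait for steady state" logs shown throughout this guide follow a consistent pattern, with steady state reached once all desired instances are healthy. As an illustrative sketch, a small parser could recover that decision from a log line (the format is taken from the example logs above; treat it as illustrative rather than a stable contract):

```python
import re

# Pattern matching lines like:
# Desired instances: [1], Total instances: [1], Healthy instances: [1] for Elastigroup: [sig-...]
LINE = re.compile(
    r"Desired instances: \[(\d+)\], Total instances: \[(\d+)\], "
    r"Healthy instances: \[(\d+)\]")

def at_steady_state(log_line):
    """True when the line reports all desired instances healthy,
    False when it doesn't, None when the line isn't a status line."""
    m = LINE.search(log_line)
    if not m:
        return None
    desired, total, healthy = map(int, m.groups())
    return desired == total == healthy and desired > 0
```

Against the example logs, the `Healthy instances: [0]` line is not steady state, while the final `Healthy instances: [1]` line is.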
+
+
+##### Elastigroup Deploy Step in Deployment
+
+At this point, Harness deploys the new Elastigroup—containing instances created from your new AMI—to the Stage Target Group:
+
+![](./static/ami-elastigroup-126.png)
+
+Using the 100% **Desired Instances** configuration shown above, here is the **Elastigroup Deploy** step in the Harness Deployments page:
+
+![](./static/ami-elastigroup-127.png)
+
+The **Upscale Elastigroup** log shows the initial request to enlarge the Elastigroup:
+
+
+```
+Current state of Elastigroup: [sig-3fffdecc], min: [0], max: [0], desired: [0], Id: [sig-3fffdecc]
+Sending request to Spotinst to update elasti group: [sig-3fffdecc] with min: [0], max: [1] and target: [1]
+Request Sent to update Elastigroup
+```
+Next, the **Upscale wait for steady state** log shows the **Elastigroup** successfully resized and at steady state:
+
+
+```
+Waiting for Elastigroup: [sig-3fffdecc] to reach steady state
+Desired instances: [1], Total instances: [1], Healthy instances: [0] for Elastigroup: [sig-3fffdecc]
+...
+Desired instances: [1], Total instances: [1], Healthy instances: [1] for Elastigroup: [sig-3fffdecc]
+Elastigroup: [sig-3fffdecc] reached steady state
+```
+And the **Final Deployment status** log shows success:
+
+
+```
+No deployment error. Execution success.
+```
+##### Approval Sub-Step
+
+This example shows an (optional) Approval added to Step 2. It requests manual approval, following successful registration of the Stage Elastigroup, and prior to the Blue/Green (staging/production) switchover in Step 4.
+
+![](./static/ami-elastigroup-128.png)
+
+
+#### Step 4: Route Update
+
+This is the heart of a Blue/Green deployment. Here, Harness directs your selected Load Balancer(s) to perform the following swap:
+
+* Rename the new staging (Blue) Elastigroup to match the production route.
+* Rename your production (Green) Elastigroup to match the staging route. 
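Conceptually, the swap is a name exchange between the two groups, optionally followed by scaling the old group to zero, which is what the rename and downscale logs in this section record. A minimal sketch, with illustrative field names and the naming convention taken from the example logs:

```python
def swap_production_with_stage(new_group, old_group, downsize_old=True):
    """Sketch of the Route Update swap: the staged group takes the
    production name, the old production group takes the stage name,
    and the old group is optionally scaled to zero."""
    new_group["name"], old_group["name"] = old_group["name"], new_group["name"]
    if downsize_old:
        # Mirrors the update request: min: [0], max: [0], target: [0]
        old_group.update(min=0, max=0, target=0)
    return new_group, old_group
```

In the real deployment, each of these operations is a Spotinst API call made by Harness; the sketch only captures the bookkeeping.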
+
+When this step is complete, the new Elastigroup—containing the instances created from your new AMI—is deployed to the production route:
+
+![](./static/ami-elastigroup-129.png)
+
+In Step 4, open the **Swap Production with Stage** dialog if you want to toggle the **Downsize Old Elastigroup** setting. When enabled, this check box directs AWS to free up resources from the old Elastigroup once the new Elastigroup registers its targets and reaches steady state.
+
+![](./static/ami-elastigroup-130.png)
+
+
+##### Swap Production with Stage Step in Deployment
+
+Using the configuration shown above, here is the **Swap Production with Stage** step in the Harness Deployments page:
+
+![](./static/ami-elastigroup-131.png)
+
+Here's partial output. The **Swap Routes** log shows successful swapping of the two Elastigroups' routes:
+
+
+```
+Sending request to rename Elastigroup with ID: [sig-3fffdecc] to [Spotinst]
+Sending request to rename Elastigroup with ID: [ElastiGroup(id=sig-e3dbb309, name=Spotinst, capacity=ElastiGroupCapacity(minimum=0, maximum=1, target=1))] to [Spotinst__STAGE__Harness]
+Updating Listener Rules for Load Balancer
+Route Updated Successfully
+```
+Now that the former production Elastigroup (`sig-e3dbb309`) has been swapped to staging, the **Downscale Elastigroup** log next shows a request to zero out its instances:
+
+
+```
+Current state of Elastigroup: [sig-e3dbb309], min: [0], max: [1], desired: [1], ID: [sig-e3dbb309]
+Sending request to Spotinst to update Elastigroup: [sig-e3dbb309] with min: [0], max: [0] and target: [0]
+Request Sent to update Elastigroup
+```
+The **Downscale wait for steady state** log next confirms the staging group's zero instances:
+
+
+```
+Waiting for Elastigroup: [sig-e3dbb309] to reach steady state
+Elastigroup: [sig-e3dbb309] does not have any instances.
+```
+And the **Final Deployment status** log confirms the **Route Update** step's success:
+
+
+```
+No deployment error. Execution success.
+``` + +#### Deployment Example + +Once your setup is complete, you can click the Workflow's **Deploy** button to start the Blue/Green deployment. + +![](./static/ami-elastigroup-132.png) + +In the resulting **Start New Deployment** dialog, select the AMI to deploy, and click **Submit**. + +As the Workflow deploys, the Deployments page displays details about the deployed instances. + +![](./static/ami-elastigroup-133.png) + +To verify the completed deployment, log into your [Spotinst Console](https://console.spotinst.com/#/aws/ec2/elastigroup/list) and locate the newly deployed instance(s). + +### Continuous Verification with Route Update + +You can add Harness Continuous Verification to the **Verify Staging** section of the Workflow, but that only verifies the deployment before the **Route Update** section where you use **Shift Traffic Weight** or **Swap Production with Stage**. + +To verify deployment after you use **Shift Traffic Weight** or **Swap Production with Stage**, you can add AppDynamics and ELK verification steps. + +Click **Add Step**, and then locate these options. + +![](./static/ami-elastigroup-134.png) + +For information on setting these up, see [Verify Deployments with AppDynamics](https://docs.harness.io/article/ehezyvz163-3-verify-deployments-with-app-dynamics) and [Verify Deployments with Elasticsearch](https://docs.harness.io/article/e2eghvcyas-3-verify-deployments-with-elasticsearch). + + +### Canary Workflow and Deployment + +Assuming that you've set up all [prerequisites](#prerequisites), the following sections outline how to create a Canary Workflow and deploy your AMI. To avoid duplication, they focus on Elastigroup-specific configuration and deployment. 
For background and details, please refer to these related [AMI Canary Deployment Guide](ami-canary.md) sections: + +* [Overview](ami-canary.md#overview) and [Default Structure](ami-canary.md#default-structure) for Canary deployment concepts—how Harness progressively deploys your AMI instances. +* [Create a Canary Workflow](ami-canary.md#workflow) for the fundamentals of a Harness AMI Canary deployment—how the default Workflow phases and steps implement the Canary model. + +Elastigroups perform the functions that Auto Scaling Groups perform in standard AMI deployments. + +#### To Create the Canary Workflow: + +1. In your Application, click **Workflows** > **Add Workflow**. +2. In the resulting **Workflow** dialog, enter a unique **Name** and (optionally) a **Description** of this Workflow's purpose. +3. In **Workflow Type**, select **Canary Deployment**. +4. Select the **Environment** that you [configured earlier](#harness_prereq). (This Environment provides a template for your Elastigroups.) + The dialog will now look something like this: + ![](./static/ami-elastigroup-135.png) +5. Click **SUBMIT**. You've now created your new Canary Workflow.![](./static/ami-elastigroup-136.png) + + +#### Example Structure + +In this guide's remaining sections, we will expand only the Workflow's **Deployment Phases**—adding multiple phases, each deploying a percentage of the instance count specified in the first phase. This will build out the following structure: + +![](./static/ami-elastigroup-137.png) + +Here are the phases and steps we'll build: + +1. [Phase 1: Canary](#phase_1) + * [Elastigroup Setup](#setup_asg): Specify how many EC2 instances to launch in the Elastigroup that Harness deploys at the end of the Workflow. This step also specifies the steady state timeout. + * [Deploy Service](#upgrade_asg_1): Specify the percentage of instances to deploy in this phase. 
When you add additional phases, each phase automatically includes a Deploy Service step, which you must configure with the count or percentage of instances you want deployed in that phase.
+ * [Verify Staging](#verify_service_1): This is a placeholder until Harness adds support for [Verification Providers](https://docs.harness.io/article/myw4h9u05l-verification-providers-list) in Elastigroup Canary deployments.
+ * [Rollback Steps](#rollback_1): Roll back the Elastigroup if deployment fails. (Rollback steps are automatically added here, and to each of the remaining phases. This guide covers them only in this first phase.)
+2. [Phase 2: Canary](#phase_2)
+ * [Deploy Service](#upgrade_asg_2): Upgrade the Elastigroup to a higher percentage of instances.
+ * [Verify Staging](#verify_service_2): This example uses a second round of CloudWatch tests.
+3. [Phase 3: Primary](#phase_3)
+ * [Deploy Service](#upgrade_asg_3): Upgrade the Elastigroup to its full target capacity.
+
+Ready to deploy? Let's configure and execute this sample Workflow's three Deployment Phases.
+
+
+### Phase 1: Canary
+
+This example Workflow's first phase defines your Elastigroup, upgrades it to a 25% Canary deployment, and evaluates this partial deployment using (in this example) [CloudWatch](https://docs.harness.io/article/q6ti811nck-cloud-watch-verification-overview) verification.
+
+To add a Canary Phase:
+
+1. In **Deployment Phases**, click **Add Phase**.
+2. In **Service**, select the Service you previously [set up](#service) for this AMI.
+3. Select the [Infrastructure Definition](#add_infra_def) that you previously configured.
+4. In **Service Variable Overrides**, you can add values to overwrite any variables in the Service you selected. Click **Add**, then enter the **Name** of the variable to override, and the override **Value**. (For details, see [Workflow Phases](https://docs.harness.io/article/m220i1tnia-workflow-configuration#workflow_phases).) 
+ The **Workflow Phase** dialog will now look something like this: + ![](./static/ami-elastigroup-138.png) +5. Click **Submit**. The new Phase is created.![](./static/ami-elastigroup-139.png) +6. Click **Phase 1** to define this Phase's Steps. + +On the resulting page, we'll fill in the predefined structure for Steps 1 and 2, and add a Verification provider in Step 3.![](./static/ami-elastigroup-140.png) + +You can give each Phase a descriptive name by clicking the pencil icon at the top right. + + +#### Step 1: Elastigroup Setup + +In Step 1, select **Elastigroup Setup** to define your Elastigroup's instances in the dialog shown below: + +![](./static/ami-elastigroup-141.png) + +For details about this dialog's fields, see the corresponding [AMI Basic Workflow instructions](ami-deployment.md#basic-setup-asg). For this Workflow, we've selected **Fixed Instances**, and have set **Max Instances** to **10** and **Target Instances** to **4**. We've also increased the default **Timeout** to **20** minutes. + +All Canary counts or percentages specified later in the Workflow are based on the **Target Instances** setting. So, when we later deploy **25%** in this phase's [Elastigroup Deploy](#upgrade_asg_1) step, that will be 25% of this **Target Instances** setting. +##### Elastigroup Setup Step in Deployment + +Let's look at an example of deploying the Elastigroup Setup we configured above. Here's the step in the Harness Deployments page: + +![](./static/ami-elastigroup-142.png) + +Here's partial output, showing a successful setup: + + +``` +Querying Spotinst for existing Elastigroups with prefix: [cdteam_satyam_pr__] +Sending request to create Elastigroup with name:[cdteam_satyam_pr__5] +Created Elastigroup with ID: [sig-054f224d] +Completed setup for Spotinst +``` +The new Elastigroup is set up, but no instances are deployed yet. Instances will be deployed in this phase's [following](#upgrade_asg_1) **Elastigroup Deploy** step, and in future phases' similar steps. 
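Because every phase's percentage applies to the same **Target Instances** value from the Setup step, the per-phase instance targets can be computed up front. An illustrative sketch (the 25/50/100 split is an assumption for this example; the guide only fixes Phase 1 at 25% and the final phase at full capacity):

```python
import math

def canary_phase_targets(target_instances, phase_percents):
    """Instance target for each Canary phase: each phase's Deploy Service
    percentage is taken against the Elastigroup Setup's Target Instances,
    rounded up to a whole instance."""
    return [math.ceil(target_instances * p / 100) for p in phase_percents]
```

With **Target Instances** set to 4, a 25% Phase 1 resolves to 1 instance, consistent with the `target: [1]` request in the deployment logs.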
+ + +#### Step 2: Elastigroup Deploy + +In Step 2, select **Elastigroup Deploy** to open a dialog where you can define how many (by **Count** or **Percent**) of the Elastigroup's Target Instances to deploy: + +![](./static/ami-elastigroup-143.png) + +In this example, we've selected **Percent** units, and **25** percent of the **Target Instances** we set in the [previous step](#setup_asg)'s **Elastigroup Setup**. + +For general information on customizing this dialog's settings, and on how they correspond to AWS parameters, see the corresponding [AMI Basic Workflow section](ami-deployment.md#upgrade-asg). +##### Elastigroup Deploy Step in Deployment + +Using the **Elastigroup Setup** configuration shown above, here is the **Elastigroup Deploy** step in the Harness Deployments page: + +![](./static/ami-elastigroup-144.png) + +The **Upscale Elastigroup** log shows the request to expand the Elastigroup. We requested **25 Percent** of **4 Target Instances**, so the log shows a request for `target: [1]`. + + +``` +Current state of Elastigroup: [sig-054f224d], min: [0], max: [0], desired: [0], ID: [sig-054f224d] + +Sending request to Spotinst to update Elastigroup: [sig-054f224d] with min: [1], max: [1] and target: [1] + +Request Sent to update Elastigroup +``` +The **Upscale wait for steady state** log (excerpted) confirms that the new Elastigroup has successfully expanded to `Healthy instances: [1]`. + + +``` +Waiting for Elastigroup: [sig-054f224d] to reach steady state +Desired instances: [1], Total instances: [1], Healthy instances: [0] for Elastigroup: [sig-054f224d] +... 
+Desired instances: [1], Total instances: [1], Healthy instances: [1] for Elastigroup: [sig-054f224d] +Elastigroup: [sig-054f224d] reached steady state +``` + +#### Step 3: Verify Staging + +Harness does not yet support [Verification Providers](https://docs.harness.io/article/myw4h9u05l-verification-providers-list) in Elastigroup Canary deployments. Once Continuous Verification support is added for Elastigroup, a Canary Workflow's Canary phases are the ideal places to add verification steps, using the [Canary Analysis strategy](https://docs.harness.io/article/0avzb5255b-cv-strategies-and-best-practices#canary_analysis). If the Canary phases are verified, you can assume that the Primary phase will proceed successfully. + +For an example of how a **Verify Staging** step appears in the Harness Deployments page (and its **Details** panel), see our (non-Elastigroup) AMI Canary Deployment Guide's [Step 3: Verify Service](ami-canary.md#verify-service-1). + + +#### Rollback Steps + +By default, each Elastigroup Canary phase includes a **Rollback Steps** section, containing an **Elastigroup Rollback** step. There's nothing to configure in this step. + +![](./static/ami-elastigroup-145.png) + +If an Elastigroup Canary phase fails to deploy, its Rollback step will roll back the whole Workflow to its state prior to this deployment, by performing the following operations: + +* Roll back all Workflow phases at once. +* Restore the old Elastigroup to its original capacity. +* Downscale the new Elastigroup, and delete it. This deletes its newly created instances, conserving AWS resources and costs. + +A rollback does not modify the configuration JSON within Spotinst.
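The rollback operations above correspond to capacity updates that Harness requests from Spotinst. As a hedged sketch, they can be expressed as the request bodies for Spot's Elastigroup update endpoint (`PUT https://api.spotinst.io/aws/ec2/group/{groupId}`); the endpoint path and the `{"group": {"capacity": ...}}` payload shape follow Spot's public API and are assumptions here, not a Harness contract:

```python
# Sketch of an Elastigroup Canary rollback, expressed as the Spotinst
# capacity-update payloads Harness would send. Field names follow the public
# Spot Elastigroup API and should be treated as an assumption.

def capacity_update(minimum: int, maximum: int, target: int) -> dict:
    """Body for PUT /aws/ec2/group/{groupId} (assumed payload shape)."""
    return {"group": {"capacity": {"minimum": minimum,
                                   "maximum": maximum,
                                   "target": target}}}

# 1. Restore the old Elastigroup to its original capacity
#    (matches the example rollback log: min: [1], max: [4] and target: [2]).
restore_old = capacity_update(1, 4, 2)

# 2. Downscale the new Elastigroup to zero instances; it is then deleted
#    (matches the log: min: [0], max: [0] and target: [0]).
downscale_new = capacity_update(0, 0, 0)
```

No payload is needed for the final delete; the rollback example below shows the corresponding request logs.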
+ +Here is a rollback example, from a separate deployment: + +![](./static/ami-elastigroup-146.png) + +Excerpts from the **Upscale Elastigroup** and **Upscale wait for steady state** logs show the old Elastigroup expanding back to its original capacity: + + +``` +Current state of Elastigroup: [sig-006847e5], min: [0], max: [0], desired: [0], Id: [sig-006847e5] +Sending request to Spotinst to update Elastigroup: [sig-006847e5] with min: [1], max: [4] and target: [2] +Waiting for Elastigroup: [sig-006847e5] to reach steady state +Desired instances: [2], Total instances: [2], Healthy instances: [0] for Elastigroup: [sig-006847e5] +... +Desired instances: [2], Total instances: [2], Healthy instances: [2] for Elastigroup: [sig-006847e5] +Elastigroup: [sig-006847e5] reached steady state +``` +Excerpts from the **Downscale Elastigroup** and **Downscale wait for steady state** logs show the new Elastigroup shrinking to zero instances: + + +``` +Current state of Elastigroup: [sig-926c0052], min: [1], max: [4], desired: [2], Id: [sig-926c0052] +Sending request to Spotinst to update Elastigroup: [sig-926c0052] with min: [0], max: [0] and target: [0] +... +Waiting for Elastigroup: [sig-926c0052] to reach steady state +Elastigroup: [sig-926c0052] does not have any instances. +``` +Finally, the **Delete new Elastigroup** log confirms the deletion of the failed new Elastigroup: + + +``` +Sending request to Spotinst to delete newly created Elastigroup with id: [sig-926c0052] +Elastigroup: [sig-926c0052] deleted successfully +``` + +### Phase 2: Canary + +In this example Workflow, we'll add a second Canary phase, in which we'll define a second **Elastigroup Deploy** step. To add the second phase: + +1. In **Deployment Phases**, again click **Add Phase**. + ![](./static/ami-elastigroup-147.png) +2. In the resulting **Workflow Phase** dialog, select the same **Service**, **Infrastructure Definition**, and any **Service Variable Overrides** that you selected in [Phase 1](#phase_1). 
+3. Click **Submit** to create the new Phase. + + +#### Step 1: Deploy Service + +Since we already [set up the Elastigroup](#setup_asg) in Phase 1, this new phase's Step 1 defaults directly to **Elastigroup Deploy**. + +Click the **Elastigroup Deploy** link to open this dialog, where we're again using **Percent** scaling, but doubling the percentage to **50 Percent** of the Elastigroup's **Target Instances**, before clicking **Submit**: + +![](./static/ami-elastigroup-148.png) + +To review: This means we're requesting 50 percent of the **4** Target Instances that we specified in Phase 1's [Elastigroup Setup](#setup_asg) step. + + +##### Deploy Service Step in Deployment + +Using the **Elastigroup Deploy** configuration shown above, here is the **Deploy Service** step in the Harness Deployments page: + +![](./static/ami-elastigroup-149.png) + +Here is partial log output, showing the Elastigroup successfully resized and at steady state. The upgrade to `Desired instances: [2]` corresponds to our request for **50 Percent** of **4 Target Instances**: + + +``` +Current state of Elastigroup: [sig-054f224d], min: [1], max: [1], desired: [1], ID: [sig-054f224d] +Sending request to Spotinst to update Elastigroup: [sig-054f224d] with min: [2], max: [2] and target: [2] +Waiting for Elastigroup: [sig-054f224d] to reach steady state +Desired instances: [2], Total instances: [2], Healthy instances: [1] for Elastigroup: [sig-054f224d] +Request Sent to update Elastigroup +... +Desired instances: [2], Total instances: [2], Healthy instances: [2] for Elastigroup: [sig-054f224d] +Elastigroup: [sig-054f224d] reached steady state +... +No deployment error. Execution success.
+``` + +#### Step 2: Verify Staging + +Once Harness adds [Continuous Verification](https://docs.harness.io/article/myw4h9u05l-verification-providers-list) support for Elastigroup deployments, you will be able to use this step to verify this second Canary phase using any Harness-supported [Verification Provider](https://docs.harness.io/article/myw4h9u05l-verification-providers-list#supported_providers). + + +### Phase 3: Primary + +If prior Canary phases succeed, the Workflow's final phase runs the actual deployment, upgrading the Elastigroup to the full number of instances you specified in the [Elastigroup Setup](#setup_asg) step. + +To add this final phase: + +1. Return to the Workflow's details page. +2. In **Deployment Phases**, below your two existing Phases, again click **Add Phase**.![](./static/ami-elastigroup-150.png) +3. In the resulting **Workflow Phase** dialog, select the same **Service**, **Infrastructure Definition**, and any **Service Variable Overrides** that you selected in [Phase 1](#phase_1). +4. Click **Submit** to create the new Phase. + +The resulting **Phase 3** page provides structure only for an **Elastigroup Deploy** step, and that's the only step we'll define.![](./static/ami-elastigroup-151.png) + + +#### Step 1: Deploy Service + +To define this phase's scaling: + +1. In Step 1, select **Elastigroup Deploy**. +2. In the resulting dialog, again select **Percent** scaling, and set the **Desired Instances** to **100** percent of the Elastigroup's **Target Instances**:![](./static/ami-elastigroup-152.png) +3.
Click **SUBMIT** to complete this Workflow's three-phase configuration.![](./static/ami-elastigroup-153.png) + + +##### Elastigroup Deploy Step in Deployment + +Using the **Elastigroup Deploy** configuration shown above, here is this final **Deploy Service** step in the Harness Deployments page: + +![](./static/ami-elastigroup-154.png) + +Here is partial output, showing the Elastigroup fully upscaling to `Desired instances: [4]`, and reaching steady state: + + +``` +Current state of Elastigroup: [sig-054f224d], min: [2], max: [2], desired: [2], Id: [sig-054f224d] +Sending request to Spotinst to update Elastigroup: [sig-054f224d] with min: [0], max: [10] and target: [4] +Request Sent to update Elastigroup +... +Waiting for Elastigroup: [sig-054f224d] to reach steady state +Desired instances: [4], Total instances: [4], Healthy instances: [2] for Elastigroup: [sig-054f224d] +... +Desired instances: [4], Total instances: [4], Healthy instances: [3] for Elastigroup: [sig-054f224d] +... +Desired instances: [4], Total instances: [4], Healthy instances: [4] for Elastigroup: [sig-054f224d] +Elastigroup: [sig-054f224d] reached steady state +... +No deployment error. Execution success. +``` +And at this point, our AMI is fully deployed. + + +### Deploy the Workflow + +As with the [AMI Basic deployment](ami-deployment.md#deployment-basic), once your setup is complete, you can click the Workflow's **Deploy** button to start the Canary deployment. + +![](./static/ami-elastigroup-155.png) + +In the resulting **Start New Deployment** dialog, select the AMI to deploy, and click **Submit**. + +![](./static/ami-elastigroup-156.png) + +As the Workflow deploys, the Deployments page displays details about the deployed instances. To verify the completed deployment, log into your [Spotinst Console](https://console.spotinst.com/#/aws/ec2/elastigroup/list) and locate the newly deployed instance(s). + + +### Frequently Asked Questions + +#### How Does Harness Downsize Old Elastigroups?
+ +See [How Does Harness Downsize Old Elastigroups?](../../concepts-cd/deployment-types/ami-spotinst-elastigroup-deployments-overview.md#how-does-harness-downsize-old-elastigroups). + +#### Can a single Workflow deploy multiple Services, each with different User Data? + +You define [User Data](ami-deployment.md#deployment-specification-user-data) at the Service level. A Harness Canary Workflow can deploy different Services, each in a separate Workflow phase. Currently, each Harness Blue/Green Workflow supports only a single Service. However, as a multi-Service workaround, you can combine multiple Blue/Green Workflows in a Pipeline. + +#### Can changeable Elastigroup parameters be set in Harness' JSON, without configuring them in Spotinst? + +Yes. Because Harness directs Spotinst to build the Elastigroups at deployment time, you can use the Harness Infrastructure Definition's **Elastigroup Configuration** JSON to define infrastructure not yet configured in Spotinst. For details, see [Service Variables in Elastigroup Configuration](#svc_variables). + +#### Can users set inbound-traffic restrictions on a Blue/Green deployment's Staging port? + +Yes. Assign an appropriate port in your Workflow's [Elastigroup Setup](#setup_asg_bg) step. + + +### Next Steps + +* Add monitoring to your AMI deployment and running instances: see [Continuous Verification](https://docs.harness.io/article/myw4h9u05l-verification-providers-list) and [24/7 Service Guard Overview](https://docs.harness.io/article/dajt54pyxd-24-7-service-guard-overview). 
+ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/configure-traffic-split-verification.md b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/configure-traffic-split-verification.md new file mode 100644 index 00000000000..898f999a589 --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/configure-traffic-split-verification.md @@ -0,0 +1,70 @@ +--- +title: Configure Spotinst Traffic Shift Verification +description: You can configure performance monitoring and log analysis verification of Spotinst traffic shifting. Currently, this feature is only supported for SpotInst Blue/Green deployments. In this topic -- Befo… +# sidebar_position: 2 +helpdocs_topic_id: e3ww34nu8g +helpdocs_category_id: mizega9tt6 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can configure performance monitoring and log analysis verification of Spotinst traffic shifting.  + +Currently, this feature is only supported for SpotInst Blue/Green deployments. + + + +### Before You Begin + +* Review the [prerequisites for configuring the Workflow using Traffic Split Strategy](ami-elastigroup.md#spotinst-prerequisites). +* Follow the deployment instructions for [Blue/Green Deployment with Incremental Traffic Shift](ami-elastigroup.md#blue-green-with-incremental-traffic-shift). + +### Step 1: Add the Workflow with Traffic Shift Strategy + +1. In the Harness Application containing the Service and Infrastructure Definition you want to use, click **Workflows**. +2. Click **Add Workflow**. +3. Enter a name for the Workflow. +4. In **Workflow Type**, select **Blue/Green Deployment**. +5. Select an **Environment** and **Service**, and the **Infrastructure Definition** containing your imported Elastigroup JSON Configuration. +6. In **Traffic Shift Strategy**, select Incrementally **Shift Traffic using ELB**. +7. Click **Submit**. + +Harness creates the Workflow and automatically adds the steps for deployment. 
+ +By default, only one **Shift Traffic Weight** step is added. Unless you want to shift the traffic in one step, you will likely add more **Shift Traffic Weight** steps to incrementally shift traffic. + +For more information about Blue/Green Deployment and Workflow creation, see [Blue/Green Deployment with Incremental Traffic Shift](ami-elastigroup.md#blue-green-with-incremental-traffic-shift). + +### Step 2: Add Traffic Split Step + +1. Add the required steps in sections for deployment and staging verification. +2. Add traffic shift steps in the Route Update section after the Verify Staging section based on the percentage shift required. + +![](./static/configure-traffic-split-verification-00.png) + +You can manipulate traffic shifting using as many Shift Traffic Weight steps as you like. + +Typically, you add Approval steps between Shift Traffic Weight steps to ensure that everything is running smoothly. For example, you can test the new feature(s) of your app before approving. This is a simple way to incorporate A/B testing into your Workflow. + +Approval steps are very useful because they enable you to cancel a deployment and return to the pre-deployment traffic weighting with a single step. + +### Step 3: Add Verification Step + +In the **Route Update** section, after each traffic split step, you can add the verification step for one of the following: + +* **Performance Monitoring**—**AppDynamics**: For more information on configuring AppDynamics verification, see [Verify Deployments with AppDynamics](https://docs.harness.io/article/ehezyvz163-3-verify-deployments-with-app-dynamics). +* **Log Analysis**—**ELK**: For more information on configuring ELK verification, see [Verify Deployments with Elasticsearch](https://docs.harness.io/article/e2eghvcyas-3-verify-deployments-with-elasticsearch).
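Harness performs traffic split analysis only while the new traffic percentage is under 50%, so not every Shift Traffic Weight step can carry a verification step. A minimal sketch of planning which steps qualify (the 10/25/50/100 weight schedule is illustrative, not a Harness default):

```python
# Sketch: given an illustrative schedule of Shift Traffic Weight steps,
# pick the steps where traffic split analysis can run. Per the rule noted
# in this topic, analysis runs only when the new traffic share is < 50%.

def verification_eligible(weights: list[int]) -> list[int]:
    """Return the shift steps whose new-traffic percentage allows analysis."""
    return [w for w in weights if w < 50]

shift_steps = [10, 25, 50, 100]  # hypothetical incremental schedule
print(verification_eligible(shift_steps))  # [10, 25]
```

In this sketch, only the 10% and 25% shifts would be followed by an AppDynamics or ELK verification step; the 50% and 100% shifts would typically be gated by Approval steps instead.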
+ +Make sure you add the verification steps for traffic split only in the **Route Update** section.![](./static/configure-traffic-split-verification-01.png) + +The AppDynamics/ELK configuration procedure is similar to the regular configuration, except for the **Baseline for Risk Analysis** input. You can select only **Canary Analysis**. + +Traffic split analysis is performed only when the new traffic percentage is less than 50%. At 50% or more, no analysis is performed. + +### Step 4: View Verification Results + +Once you have executed the Workflow, Harness performs the verification you configured and displays the results in the **Deployments** and **Continuous Verification** pages. Verification is executed in real time, quantifying the business impact of every production deployment. + +For a quick overview of the verification UI elements, see [Continuous Verification Tools](https://docs.harness.io/article/xldc13iv1y-meet-harness#continuous_verification_tools). For details about viewing and interpreting verification results, see [Verification Results Overview](https://docs.harness.io/article/2la30ysdz7-deployment-verification-results).
+ +![](./static/configure-traffic-split-verification-02.png) \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-44.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-44.png new file mode 100644 index 00000000000..dd47af9e59d Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-44.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-45.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-45.png new file mode 100644 index 00000000000..7d6c280a3db Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-45.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-46.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-46.png new file mode 100644 index 00000000000..3ea112d68bb Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-46.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-47.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-47.png new file mode 100644 index 00000000000..89110cb5989 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-47.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-48.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-48.png new file mode 100644 index 00000000000..d49d722768e Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-48.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-49.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-49.png new file mode 100644 index 00000000000..509c1f8e291 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-49.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-50.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-50.png new file mode 100644 index 00000000000..809223eeea3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-50.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-51.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-51.png new file mode 100644 index 00000000000..cd2b1140a76 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-51.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-52.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-52.png new file mode 100644 index 00000000000..605256600b7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-52.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-53.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-53.png new file mode 100644 index 00000000000..172f865d12f Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-53.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-54.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-54.png new file mode 100644 index 00000000000..2753542a0e3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-54.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-55.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-55.png new file mode 100644 index 00000000000..173ddcd8038 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-55.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-56.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-56.png new file mode 100644 index 00000000000..5af8b3a0dcc Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-56.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-57.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-57.png new file mode 100644 index 00000000000..6904274d364 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-57.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-58.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-58.png new file mode 100644 index 00000000000..5ecefebebce Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-58.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-59.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-59.png new file mode 100644 index 00000000000..e6c55404b8c Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-59.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-60.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-60.png new file mode 100644 index 00000000000..5affdc36781 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-60.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-61.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-61.png new file mode 100644 index 00000000000..a97d319bf4a Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-61.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-62.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-62.png new file mode 100644 index 00000000000..796e3c0341a Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-62.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-63.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-63.png new file mode 100644 index 00000000000..d9e2fe52270 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-63.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-64.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-64.png new file mode 100644 index 00000000000..32b5534dbbc Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-64.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-65.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-65.png new file mode 100644 index 00000000000..4e82046e4b1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-65.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-66.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-66.png new file mode 100644 index 00000000000..9c05c0eed34 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-66.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-67.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-67.png new file mode 100644 index 00000000000..75c7fb7dd6f Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-67.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-68.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-68.png new file mode 100644 index 00000000000..188488e545f Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-68.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-69.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-69.png new file mode 100644 index 00000000000..dd47af9e59d Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-69.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-70.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-70.png new file mode 100644 index 00000000000..62f66b87516 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-70.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-71.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-71.png new file mode 100644 index 00000000000..e10f668c774 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-71.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-72.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-72.png new file mode 100644 index 00000000000..7d6c280a3db Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-72.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-73.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-73.png new file mode 100644 index 00000000000..098eb9956ba Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-73.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-74.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-74.png new file mode 100644 index 00000000000..5d26fb3ada7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-74.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-75.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-75.png new file mode 100644 index 00000000000..bfd0a21893b Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-75.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-76.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-76.png new file mode 100644 index 00000000000..cc8fe2a34a7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-76.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-77.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-77.png new file mode 100644 index 00000000000..e86946e5d91 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-blue-green-77.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-157.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-157.png new file mode 100644 index 00000000000..6d811e66aae Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-157.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-158.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-158.png new file mode 100644 index 00000000000..aa34c8a2ad2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-158.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-159.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-159.png new file mode 100644 index 00000000000..d42c583e2fe Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-159.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-160.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-160.png new file mode 100644 index 00000000000..3ec64a81111 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-160.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-161.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-161.png new file mode 100644 index 00000000000..6d22ee5b563 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-161.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-162.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-162.png new file mode 100644 index 00000000000..b8a3024af7a Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-162.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-163.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-163.png new file mode 100644 index 00000000000..28185dd73c0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-163.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-164.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-164.png new file mode 100644 index 00000000000..e2693d47014 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-164.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-165.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-165.png new file mode 100644 index 00000000000..c395995767c Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-165.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-166.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-166.png new file mode 100644 index 00000000000..537ea9c6a46 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-166.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-167.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-167.png new file mode 100644 index 00000000000..9506548baa3 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-167.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-168.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-168.png new file mode 100644 index 00000000000..21ab85ba159 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-168.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-169.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-169.png new file mode 100644 index 00000000000..85d4122bc9b Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-169.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-170.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-170.png new file mode 100644 index 00000000000..f7389bb8f9a Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-170.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-171.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-171.png new file mode 100644 index 00000000000..e6ce09fea89 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-171.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-172.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-172.png new file mode 100644 index 00000000000..bb77f0e6738 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-172.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-173.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-173.png new file mode 100644 index 00000000000..571ee047067 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-173.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-174.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-174.png new file mode 100644 index 00000000000..b6ca3869e27 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-174.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-175.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-175.png new file mode 100644 index 00000000000..60071f0992a Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-175.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-176.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-176.png new file mode 100644 index 00000000000..81af4cf9c86 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-176.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-177.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-177.png new file mode 100644 index 00000000000..44ab1fe00a8 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-177.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-178.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-178.png new file mode 100644 index 00000000000..d0ff9227cb5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-178.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-179.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-179.png new file mode 100644 index 00000000000..92fd9960fd9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-179.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-180.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-180.png new file mode 100644 index 00000000000..5f2867746cb Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-180.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-181.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-181.png new file mode 100644 index 00000000000..bfd0a21893b Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-181.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-182.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-182.png new file mode 100644 index 00000000000..dc89f85ce18 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-182.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-183.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-183.png new file mode 100644 index 00000000000..950471dea4a Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-canary-183.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-03.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-03.png new file mode 100644 index 00000000000..07e8f4e8847 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-03.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-04.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-04.png new file mode 100644 index 00000000000..dfab2144265 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-04.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-05.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-05.png new file mode 100644 index 00000000000..5835ed417dc Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-05.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-06.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-06.png new file mode 100644 index 00000000000..632eb4dd408 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-06.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-07.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-07.png new file mode 100644 index 00000000000..61b98010dfa Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-07.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-08.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-08.png new file mode 100644 index 00000000000..d41eea3775d Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-08.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-09.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-09.png new file mode 100644 index 00000000000..5a63909d4dc Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-09.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-10.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-10.png new file mode 100644 index 00000000000..e5ec2fb1e93 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-10.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-11.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-11.png new file mode 100644 index 00000000000..39152c91f55 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-11.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-12.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-12.png new file mode 100644 index 00000000000..6ea2602f9a3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-12.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-13.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-13.png new file mode 100644 index 00000000000..4023fea8f0d Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-13.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-14.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-14.png new file mode 100644 index 00000000000..82353578e4b Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-14.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-15.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-15.png new file mode 100644 index 00000000000..fb72eb67130 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-15.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-16.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-16.png new file mode 100644 index 00000000000..857d5fd3f4b Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-16.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-17.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-17.png new file mode 100644 index 00000000000..fab1f062f92 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-17.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-18.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-18.png new file mode 100644 index 00000000000..7f12abcd5d9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-18.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-19.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-19.png new file mode 100644 index 00000000000..5ac77abf9a5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-19.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-20.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-20.png new file mode 100644 index 00000000000..c494f6b7fc9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-20.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-21.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-21.png new file mode 100644 index 00000000000..a82fd9e3625 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-21.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-22.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-22.png new file mode 100644 index 00000000000..da7a03eda58 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-22.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-23.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-23.png new file mode 100644 index 00000000000..f3a432793c9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-23.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-24.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-24.png new file mode 100644 index 00000000000..5c1fee2f33f Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-24.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-25.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-25.png new file mode 100644 index 00000000000..97db204beac Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-25.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-26.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-26.png new file mode 100644 index 00000000000..0c870974ed3 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-26.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-27.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-27.png new file mode 100644 index 00000000000..d10fb3fe35e Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-27.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-28.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-28.png new file mode 100644 index 00000000000..0f3bea128c8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-28.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-29.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-29.png new file mode 100644 index 00000000000..08eb075a1d7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-29.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-30.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-30.png new file mode 100644 index 00000000000..98f02c43db8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-30.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-31.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-31.png new file mode 100644 index 00000000000..58302ec9d5b Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-31.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-32.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-32.png new file mode 100644 index 00000000000..1c5cbc8c773 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-32.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-33.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-33.png new file mode 100644 index 00000000000..a2db1d98608 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-33.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-34.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-34.png new file mode 100644 index 00000000000..2ffc12e7d4f Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-34.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-35.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-35.png new file mode 100644 index 00000000000..dca8f48fa02 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-35.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-36.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-36.png new file mode 100644 index 00000000000..e1d36f12ea0 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-36.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-37.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-37.png new file mode 100644 index 00000000000..bfd0a21893b Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-37.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-38.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-38.png new file mode 100644 index 00000000000..cd048011003 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-38.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-39.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-39.png new file mode 100644 index 00000000000..ca617c6d731 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-39.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-40.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-40.png new file mode 100644 index 00000000000..c6f6723466f Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-40.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-41.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-41.png new file mode 100644 index 00000000000..90d482eccd0 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-41.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-42.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-42.png new file mode 100644 index 00000000000..8ed7e2339df Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-42.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-43.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-43.png new file mode 100644 index 00000000000..b5a4359d2fb Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-deployment-43.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-100.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-100.png new file mode 100644 index 00000000000..bfd0a21893b Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-100.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-101.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-101.png new file mode 100644 index 00000000000..fc22d21e9e1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-101.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-102.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-102.png new file mode 100644 index 00000000000..7e27ff199c2 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-102.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-103.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-103.png new file mode 100644 index 00000000000..ae6d7a6c2ce Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-103.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-104.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-104.png new file mode 100644 index 00000000000..6904274d364 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-104.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-105.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-105.png new file mode 100644 index 00000000000..34152669d7d Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-105.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-106.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-106.png new file mode 100644 index 00000000000..f2bcc85443f Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-106.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-107.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-107.png new file mode 100644 index 00000000000..28846af1bf0 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-107.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-108.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-108.png new file mode 100644 index 00000000000..a720e5d3210 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-108.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-109.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-109.png new file mode 100644 index 00000000000..ce83ef694ab Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-109.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-110.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-110.png new file mode 100644 index 00000000000..3234eda8726 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-110.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-111.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-111.png new file mode 100644 index 00000000000..7bdb31aa100 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-111.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-112.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-112.png new file mode 100644 index 00000000000..42c5338813d Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-112.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-113.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-113.png new file mode 100644 index 00000000000..2a583612ec6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-113.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-114.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-114.png new file mode 100644 index 00000000000..1e065376b71 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-114.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-115.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-115.png new file mode 100644 index 00000000000..5e74f10623f Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-115.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-116.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-116.png new file mode 100644 index 00000000000..c26330a718f Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-116.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-117.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-117.png new file mode 100644 index 00000000000..b58acaf805d Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-117.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-118.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-118.png new file mode 100644 index 00000000000..ae60e13808e Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-118.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-119.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-119.png new file mode 100644 index 00000000000..2563ba73992 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-119.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-120.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-120.png new file mode 100644 index 00000000000..abbf9ec92ee Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-120.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-121.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-121.png new file mode 100644 index 00000000000..78c9b6a6e4f Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-121.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-122.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-122.png new file mode 100644 index 00000000000..2a4c53a1455 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-122.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-123.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-123.png new file mode 100644 index 00000000000..144a5daf204 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-123.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-124.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-124.png new file mode 100644 index 00000000000..8b8f0060092 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-124.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-125.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-125.png new file mode 100644 index 00000000000..e34a29b285d Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-125.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-126.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-126.png new file mode 100644 index 00000000000..b58acaf805d Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-126.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-127.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-127.png new file mode 100644 index 00000000000..a9e878641ba Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-127.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-128.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-128.png new file mode 100644 index 00000000000..23e7c6de4bb Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-128.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-129.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-129.png new file mode 100644 index 00000000000..ae60e13808e Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-129.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-130.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-130.png new file mode 100644 index 00000000000..2a06dc77892 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-130.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-131.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-131.png new file mode 100644 index 00000000000..0e5230a61bd Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-131.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-132.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-132.png new file mode 100644 index 00000000000..bfd0a21893b Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-132.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-133.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-133.png new file mode 100644 index 00000000000..a5ae79bf45f Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-133.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-134.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-134.png new file mode 100644 index 00000000000..b64c0156c48 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-134.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-135.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-135.png new file mode 100644 index 00000000000..fa3e5f60507 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-135.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-136.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-136.png new file mode 100644 index 00000000000..29be67a3b66 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-136.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-137.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-137.png new file mode 100644 index 00000000000..8e403aecadb Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-137.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-138.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-138.png new file mode 100644 index 00000000000..9d1c9f43482 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-138.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-139.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-139.png new file mode 100644 index 00000000000..75bc56e1be9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-139.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-140.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-140.png new file mode 100644 index 00000000000..5d207b3aa89 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-140.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-141.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-141.png new file mode 100644 index 00000000000..77b11f865d9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-141.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-142.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-142.png new file mode 100644 index 00000000000..c34b6239154 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-142.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-143.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-143.png new file mode 100644 index 00000000000..63a82252a99 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-143.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-144.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-144.png new file mode 100644 index 00000000000..fccca75d6df Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-144.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-145.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-145.png new file mode 100644 index 00000000000..ed84eb5b006 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-145.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-146.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-146.png new file mode 100644 index 00000000000..5732062ba7e Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-146.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-147.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-147.png new file mode 100644 index 00000000000..426c7aad655 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-147.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-148.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-148.png new file mode 100644 index 00000000000..45741ffc5cd Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-148.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-149.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-149.png new file mode 100644 index 00000000000..e8fd5412cc7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-149.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-150.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-150.png new file mode 100644 index 00000000000..2e7a8f12a98 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-150.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-151.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-151.png new file mode 100644 index 00000000000..468159cf10b Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-151.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-152.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-152.png new file mode 100644 index 00000000000..eed1ada72ad Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-152.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-153.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-153.png new file mode 100644 index 00000000000..080c2a10ada Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-153.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-154.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-154.png new file mode 100644 index 00000000000..9d649232e92 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-154.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-155.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-155.png new file mode 100644 index 00000000000..bfd0a21893b Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-155.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-156.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-156.png new file mode 100644 index 00000000000..5b431c36af9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-156.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-78.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-78.png new file mode 100644 index 00000000000..0e5230a61bd Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-78.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-79.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-79.png new file mode 100644 index 00000000000..63bc9d4ff6a Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-79.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-80.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-80.png new file mode 100644 index 00000000000..fb08e7f2da4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-80.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-81.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-81.png new file mode 100644 index 00000000000..2f1ac5a4697 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-81.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-82.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-82.png new file mode 100644 index 00000000000..abd0a57943e Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-82.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-83.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-83.png new file mode 100644 index 00000000000..5eebe1afe44 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-83.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-84.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-84.png new file mode 100644 index 00000000000..a6a066b2513 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-84.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-85.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-85.png new file mode 100644 index 00000000000..12633d5eeb2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-85.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-86.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-86.png new file mode 100644 index 00000000000..653649a10a2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-86.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-87.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-87.png new file mode 100644 index 00000000000..98d084e3be1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-87.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-88.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-88.png new file mode 100644 index 00000000000..0f3bea128c8 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-88.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-89.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-89.png new file mode 100644 index 00000000000..58163fe6c1d Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-89.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-90.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-90.png new file mode 100644 index 00000000000..9512869a8dd Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-90.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-91.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-91.png new file mode 100644 index 00000000000..ae3f2059d94 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-91.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-92.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-92.png new file mode 100644 index 00000000000..d2dd594e97a Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-92.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-93.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-93.png new file mode 100644 index 00000000000..20109c9328a Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-93.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-94.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-94.png new file mode 100644 index 00000000000..f1ea649d87a Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-94.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-95.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-95.png new file mode 100644 index 00000000000..78241d54755 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-95.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-96.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-96.png new file mode 100644 index 00000000000..fd6fad33b96 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-96.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-97.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-97.png new file mode 100644 index 00000000000..b6546289b57 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-97.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-98.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-98.png new file mode 100644 index 00000000000..a669f65514a Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-98.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-99.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-99.png new file mode 100644 index 00000000000..ed84eb5b006 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/ami-elastigroup-99.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/configure-traffic-split-verification-00.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/configure-traffic-split-verification-00.png new file mode 100644 index 00000000000..4d9d7a45aef Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/configure-traffic-split-verification-00.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/configure-traffic-split-verification-01.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/configure-traffic-split-verification-01.png new file mode 100644 index 00000000000..a1e58438a89 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/configure-traffic-split-verification-01.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/configure-traffic-split-verification-02.png b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/configure-traffic-split-verification-02.png new file mode 100644 index 00000000000..652a5832576 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ami-deployments/static/configure-traffic-split-verification-02.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/aws-general/_category_.json 
b/docs/first-gen/continuous-delivery/aws-deployments/aws-general/_category_.json new file mode 100644 index 00000000000..847b638549c --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/aws-general/_category_.json @@ -0,0 +1 @@ +{"label": "General AWS Deployment Information", "position": 50, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "General AWS Deployment Information"}, "customProps": { "helpdocs_category_id": "az9zwp259r"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/aws-deployments/aws-general/set-amazon-sdk-backoff-strategy-params-for-cloud-formation.md b/docs/first-gen/continuous-delivery/aws-deployments/aws-general/set-amazon-sdk-backoff-strategy-params-for-cloud-formation.md new file mode 100644 index 00000000000..df6f4726900 --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/aws-general/set-amazon-sdk-backoff-strategy-params-for-cloud-formation.md @@ -0,0 +1,100 @@ +--- +title: Set Amazon SDK Default Backoff Strategy Params for CloudFormation and ECS +description: Set Amazon SDK Default Backoff Strategy Params for CloudFormation. +# sidebar_position: 2 +helpdocs_topic_id: actaxli00u +helpdocs_category_id: az9zwp259r +helpdocs_is_private: false +helpdocs_is_published: true +--- + +In some Harness CloudFormation and ECS deployments you might get failures with `ThrottlingException` or `Rate exceeded` errors for CloudFormation and ECS API calls. + +This can happen when CloudFormation and ECS API calls exceed the maximum allowed API request rate per AWS account and region. Requests are throttled for each AWS account on a per-region basis to help service performance. See [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) from AWS. + +This topic describes how to remedy this situation by setting Amazon SDK default backoff strategy params for CloudFormation and ECS. 
+
+### Limitations
+
+* If you have multiple AWS accounts tied to one Harness account with backoff strategy params enabled, the backoff params and strategies are applied to all of those AWS accounts.
+
+### Equal Jitter and Full Jitter Backoff Strategies
+
+The Amazon SDK Default backoff strategy combines the Equal Jitter and Full Jitter backoff strategies: it uses the Full Jitter strategy for non-throttled exceptions and the Equal Jitter strategy for throttled exceptions.
+
+Here's the list of non-throttled error and status codes where the Full Jitter strategy is applied:
+
+```
+"TransactionInProgressException",
+"RequestTimeout",
+"RequestTimeoutException",
+"IDPCommunicationError",
+500,
+502,
+503,
+504,
+"RequestTimeTooSkewed",
+"RequestExpired",
+"InvalidSignatureException",
+"SignatureDoesNotMatch",
+"AuthFailure",
+"RequestInTheFuture",
+"IOException"
+```
+
+Here's the list of throttled error codes where the Equal Jitter strategy is applied:
+
+```
+"Throttling",
+"ThrottlingException",
+"ThrottledException",
+"ProvisionedThroughputExceededException",
+"SlowDown",
+"TooManyRequestsException",
+"RequestLimitExceeded",
+"BandwidthLimitExceeded",
+"RequestThrottled",
+"RequestThrottledException",
+"EC2ThrottledException",
+"PriorRequestNotComplete",
+"429 Too Many Requests"
+```
+
+For more on these strategies, see [Exponential Backoff And Jitter](https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/) from AWS.
+
+### Setting Backoff Strategy Params in Harness
+
+In a Harness CloudFormation or ECS implementation, you can set these parameters using the **Account Defaults** settings.
+
+In your Harness account, click **Setup**.
+
+In **Account**, click more options (︙), and then click **Account Defaults**.
+
+![](./static/set-amazon-sdk-backoff-strategy-params-for-cloud-formation-00.png)
+
+The **Account Defaults** appear.
+
+![](./static/set-amazon-sdk-backoff-strategy-params-for-cloud-formation-01.png)
+
+To add a parameter, click **Add Row**.
+
+Here's the list of supported Amazon default SDK backoff strategy parameters:
+
+All delay values are in milliseconds; `maxErrorRetry` is a retry count, not a duration.
+
+| **Default SDK Backoff Strategy Param** | **Default Value** | **Description** |
+| --- | --- | --- |
+| `AmazonSDKDefaultBackoffStrategy_maxErrorRetry` | `5` | The maximum number of retries. |
+| `AmazonSDKDefaultBackoffStrategy_baseDelay` | `100` ms | Base delay for *FullJitterBackoffStrategy*. |
+| `AmazonSDKDefaultBackoffStrategy_maxBackoff` | `20000` ms | The maximum backoff time, after which retries are not performed. |
+| `AmazonSDKDefaultBackoffStrategy_throttledBaseDelay` | `500` ms | Base delay for *EqualJitterBackoffStrategy*. |
+
+When you're done, the **Account Defaults** will look something like this:
+
+![](./static/set-amazon-sdk-backoff-strategy-params-for-cloud-formation-02.png)
+
+### Next Steps
+
+Tune the Amazon SDK Default backoff strategy params based on the API request rate already set for your AWS account and region.
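The delays these parameters control follow the formulas in the AWS blog post linked above. Here is a minimal Python sketch of how the two strategies compute a delay for a given retry attempt, using the default values from the table. This is an illustration only, not Harness or AWS SDK code; the variable names are hypothetical stand-ins for the `AmazonSDKDefaultBackoffStrategy_*` params.

```python
import random

# Defaults from the Account Defaults table above (illustrative names).
BASE_DELAY_MS = 100            # baseDelay: Full Jitter, non-throttled errors
THROTTLED_BASE_DELAY_MS = 500  # throttledBaseDelay: Equal Jitter, throttled errors
MAX_BACKOFF_MS = 20000         # maxBackoff: cap on any single delay

def full_jitter_delay_ms(attempt: int) -> float:
    """Non-throttled errors: uniform random over [0, min(cap, base * 2^attempt)]."""
    ceiling = min(MAX_BACKOFF_MS, BASE_DELAY_MS * (2 ** attempt))
    return random.uniform(0, ceiling)

def equal_jitter_delay_ms(attempt: int) -> float:
    """Throttled errors: half of the capped exponential delay is fixed, half random."""
    ceiling = min(MAX_BACKOFF_MS, THROTTLED_BASE_DELAY_MS * (2 ** attempt))
    return ceiling / 2 + random.uniform(0, ceiling / 2)
```

With these defaults, a throttled call's fourth attempt (`attempt=3`) waits between 2,000 ms and 4,000 ms, while a non-throttled one waits at most 800 ms; raising `throttledBaseDelay` spreads throttled retries out further, which is the usual first knob to turn for `Rate exceeded` errors.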
+ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/aws-general/static/set-amazon-sdk-backoff-strategy-params-for-cloud-formation-00.png b/docs/first-gen/continuous-delivery/aws-deployments/aws-general/static/set-amazon-sdk-backoff-strategy-params-for-cloud-formation-00.png new file mode 100644 index 00000000000..1d23298a471 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/aws-general/static/set-amazon-sdk-backoff-strategy-params-for-cloud-formation-00.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/aws-general/static/set-amazon-sdk-backoff-strategy-params-for-cloud-formation-01.png b/docs/first-gen/continuous-delivery/aws-deployments/aws-general/static/set-amazon-sdk-backoff-strategy-params-for-cloud-formation-01.png new file mode 100644 index 00000000000..72f77c9126e Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/aws-general/static/set-amazon-sdk-backoff-strategy-params-for-cloud-formation-01.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/aws-general/static/set-amazon-sdk-backoff-strategy-params-for-cloud-formation-02.png b/docs/first-gen/continuous-delivery/aws-deployments/aws-general/static/set-amazon-sdk-backoff-strategy-params-for-cloud-formation-02.png new file mode 100644 index 00000000000..c03b0348fbb Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/aws-general/static/set-amazon-sdk-backoff-strategy-params-for-cloud-formation-02.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/_category_.json b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/_category_.json new file mode 100644 index 00000000000..9f5cb06b3c2 --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/_category_.json @@ -0,0 +1 @@ +{"label": "AWS CloudFormation", "position": 10, "collapsible": "true", "collapsed": 
"true", "className": "red", "link": {"type": "generated-index", "title": "AWS CloudFormation"}, "customProps": { "helpdocs_category_id": "hupik7gwhc"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/add-cloud-formation-templates.md b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/add-cloud-formation-templates.md new file mode 100644 index 00000000000..6b6a439e7de --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/add-cloud-formation-templates.md @@ -0,0 +1,153 @@ +--- +title: Add CloudFormation Templates +description: Set up a CloudFormation Infrastructure Provisioner. +# sidebar_position: 2 +helpdocs_topic_id: wtper654tn +helpdocs_category_id: hupik7gwhc +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/vynj4hxt98).This topic describes how to set up a Harness Infrastructure Provisioner for CloudFormation. + +Once the Harness Infrastructure Provisioner is set up, you can use it to define a deployment target in a Harness Infrastructure Definition. + +Once you add the Infrastructure Definition to a Workflow, you add a CloudFormation Create Stack step to the Workflow. The CloudFormation Create Stack step uses the same Harness Infrastructure Provisioner to run your templates and build the target infrastructure, and then deploy to it. + +This topic walks you through a detailed setup of a Harness CloudFormation Provisioner. 
+ + +### Before You Begin + +* [CloudFormation Provisioning with Harness](../../concepts-cd/deployment-types/cloud-formation-provisioning-with-harness.md) +* [Set Up Your Harness Account for CloudFormation](cloud-formation-account-setup.md) + +### Limitations + +* Harness supports first class CloudFormation provisioning for AWS-based infrastructures: + + SSH + + AMI/Auto Scaling Group + + ECS + + Lambda +* AWS CloudFormation has its own template limits. See [Limits and Restrictions](https://aws.amazon.com/cloudformation/faqs/#Limits_and_Restrictions) from AWS. +* If you have plus signs (`+`) in the AWS S3 bucket URL, Harness changes these to spaces. Harness changes the plus signs because plus signs in AWS S3 URLs are interpreted as spaces by AWS. It is a limitation of AWS. If you want to use a `+` in your URL, replace it with URL encoding `%2B` in any file path in Harness. + +### Visual Summary + +This topic describes step 1 in the Harness CloudFormation Provisioning implementation process: + +![](./static/add-cloud-formation-templates-20.png) + +Once you have completed this topic, you can move onto the next step: [Map CloudFormation Infrastructure](map-cloud-formation-infrastructure.md). + +### Step 1: Add a CloudFormation Provisioner + +Setting up the CloudFormation Provisioner involves the following steps: + +1. Add your CloudFormation template via its S3 bucket, Git repo, or simply paste it into Harness. +2. Import any input variables. + +Let's get started. + +To set up a CloudFormation Infrastructure Provisioner, do the following: + +1. In your Harness Application, click **Infrastructure Provisioners**. +2. Click **Add Infrastructure Provisioner**, and then click **CloudFormation**. The **Add CloudFormation Provisioner** dialog appears. + ![](./static/add-cloud-formation-templates-21.png) +3. In **Display Name**, enter the name for this provisioner. 
You will use this name to select this provisioner in Harness Infrastructure Definition and the CloudFormation Create Stack Workflow step. + +### Step 2: Add Your CloudFormation Template + +Your CloudFormation template can be added in one of three ways: + +* AWS S3 bucket. +* Git repo. +* Paste in the template. + +For S3 and the Git repo, you must have an AWS Cloud Provider or Source Repo Provider set up in Harness. See [Set Up Your Harness Account for CloudFormation](cloud-formation-account-setup.md). + +Let's walk through these options. + +CloudFormation templates may be in JSON or YAML and Harness accepts both formats. Nested stacks are supported for **Amazon S3** source types only. This is an [AWS limitation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-stack.html). + +#### Template Body + +1. If you select **Template Body**, then paste in the CloudFormation template JSON or YAML. + +![](./static/add-cloud-formation-templates-22.png) + +You can also use [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables), such as [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template), in **Template Body**. + +#### Git Repository + +For Git Repository, ensure that you have added a SourceRepo Provider in Harness that connects to your Git repo. For more information, see [Add SourceRepo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). + +If you select **Git Repository**, do the following: + +1. In **Source Repository**, select a SourceRepo Provider for the Git repo you added to your Harness account. +2. In **Commit ID**, select **Latest from Branch** or **Specific Commit ID**. +3. In **Branch/Commit ID**, enter the branch or commit ID for the remote repo. +4. In **File Path**, enter the repo file and folder path.
+ +For example, if the full path to your script is **http://github.com/johnsmith/harness/branch1/scripts/foo.yaml**, and you selected **Branch** and entered **branch1**, in **File Path** you can enter **scripts/foo.yaml** or even **./scripts/foo.yaml**. + +Using the same example, if you selected **Specific Commit ID** and enter a commit ID, in **File Path** you can enter **scripts/foo.yaml** or even **./scripts/foo.yaml**. + +You cannot use [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables), such as [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template), in **File Path** at this time. + +#### Amazon S3 + +For Amazon S3, ensure you have added an AWS Cloud Provider to connect Harness to your AWS account, as described in [Set Up Your Harness Account for CloudFormation](cloud-formation-account-setup.md). + +If you select **Amazon S3**, in **Template File Path**, enter the URL for the template in its S3 bucket. + +![](./static/add-cloud-formation-templates-23.png) + +Enter only the [globally-unique S3 bucket name URL](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html), not the region-specific URL. You can also use [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables), such as [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template), in **Template File Path**. Ensure that the AWS Cloud Provider has permissions to read the bucket contents. The required policy is `AmazonS3ReadOnlyAccess`, and you need another policy with the action `cloudformation:GetTemplateSummary`. See [Set Up Your Harness Account for CloudFormation](cloud-formation-account-setup.md). + +You can find many template samples from CloudFormation [Sample Templates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-sample-templates.html).
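For reference, here is what a globally-unique S3 template file path looks like. The bucket and key names below are hypothetical:

```
https://my-templates-bucket.s3.amazonaws.com/cloudformation/ecs-cluster.yaml
```

A region-specific URL embeds the region in the hostname (for example, `my-templates-bucket.s3.us-east-1.amazonaws.com`); enter the global form in **Template File Path** instead.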
+ +### Step 3: Add Input Variables + +Likely, your template contains [input parameters](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/gettingstarted.templatebasics.html#gettingstarted.templatebasics.parameters) that require specific values to be passed in when Harness creates a stack from the template. For example, here is an input parameter for a key pair named **KeyName**: + + +``` +Parameters: + KeyName: + Description: Name of an existing EC2 KeyPair to enable SSH access to the instance + Type: 'AWS::EC2::KeyPair::KeyName' + ConstraintDescription: must be the name of an existing EC2 KeyPair. +``` +You can add these input parameters to your Harness CloudFormation Provisioner and specify the values for the inputs when you use this provisioner in a Workflow. + +In **Variables**, click **Add** to add your inputs manually. + +You can use **Populate Variables** if you added a URL to your template in an S3 bucket or Git repo. + +You can also use **Populate Variables** if you added your template manually, but you will also need to select an AWS Cloud Provider and region. Harness uses those to access the AWS CloudFormation API, parse the template you entered manually, and extract the variables. We will cover the **Populate Variables** scenario. + +1. Click **Populate Variables**. The **Populate from Example** assistant appears. ![](./static/add-cloud-formation-templates-24.png) +2. If you are using an AWS S3 source: + 1. In **AWS Cloud Provider**, select the AWS Cloud Provider you added that has permission to access the template in the S3 bucket. + 2. In **Region**, select the AWS region where the AWS Cloud Provider should connect. AWS S3 is global, but AWS connections require a region. +3. If you are using a Git Repository source, you do not need to enter anything. +4. Click **Submit**. The input parameters from your template are added automatically: ![](./static/add-cloud-formation-templates-25.png) +5.
For each input, select the type of value required: **Text** or **Encrypted Text**. When this provisioner is added to a Workflow, the user will have to provide a value for the input that matches the type. Encrypted Text values use secrets you set up in Harness [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management). + +Here is what the input variables look like in a Workflow CloudFormation Create Stack step: + +![](./static/add-cloud-formation-templates-26.png) + +### Step 4: Complete the CloudFormation Provisioner + +Once you have completed your setup, click **Submit**. The CloudFormation Provisioner is created. + +![](./static/add-cloud-formation-templates-27.png) + +Next you will map template outputs to the Harness Infrastructure Definition settings Harness requires for provisioning. + +### Next Steps + +* [Map CloudFormation Infrastructure](map-cloud-formation-infrastructure.md) + diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/cloud-formation-account-setup.md b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/cloud-formation-account-setup.md new file mode 100644 index 00000000000..5e1681e6c3f --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/cloud-formation-account-setup.md @@ -0,0 +1,201 @@ +--- +title: Set Up Your Harness Account for CloudFormation +description: Set up the Delegate, Repo, and Cloud Provider for CloudFormation. +# sidebar_position: 2 +helpdocs_topic_id: 308nblm0vc +helpdocs_category_id: hupik7gwhc +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). 
Switch to [NextGen](https://docs.harness.io/article/vynj4hxt98). The first step in integrating your CloudFormation templates and processes is setting up the necessary Harness account components: Delegates, Cloud Providers, and Source Repo Providers. + +This topic describes how to set up these components for CloudFormation. + +Once your account is set up, you can begin integrating your CloudFormation templates. See [Add CloudFormation Templates](add-cloud-formation-templates.md). + + +### Before You Begin + +* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) +* [CloudFormation Provisioning with Harness](../../concepts-cd/deployment-types/cloud-formation-provisioning-with-harness.md) +* [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) +* [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers) +* [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers) + +### Visual Summary + +This topic describes the Harness account setup steps that you perform before you start to add your CloudFormation templates. + +Once your Harness account is set up, CloudFormation provisioning in Harness is as follows: + +![](./static/cloud-formation-account-setup-00.png) + +### Review: Limitations + +* Harness CloudFormation integration does not support AWS Serverless Application Model (SAM) templates. Only standard [AWS CloudFormation templates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-whatis-concepts.html#w2ab1b5c15b7). +* Harness Infrastructure Provisioners are only supported in Canary and Multi-Service deployment types. For AMI deployments, Infrastructure Provisioners are also supported in Blue/Green deployments. + +### Step 1: Set Up Harness Delegates + +There are many types of Delegates, but for CloudFormation, the Shell Script and ECS Delegates are used most often.
+ +The Harness AWS Cloud Provider can connect Harness to your AWS account, and Source Repo if needed, using these Delegates. For more information, see [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) and [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers). + +The Delegate should be installed where it can connect to the provisioned environment it creates. + +Ideally, this is the same subnet as the instances you will provision, but if you are provisioning the subnet then you can put the Delegate in the same VPC and ensure that it can connect to the provisioned subnet using security groups. + +To set up the Delegate, do the following: + +1. Install the Delegate on a host where it will have connectivity to your provisioned instances. To install a Delegate, follow the steps in [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) using a Shell Script or ECS Delegate. Once the Delegate is installed, it will be listed on the **Harness Delegates** page. + ![](./static/cloud-formation-account-setup-01.png) +2. When you add a Harness AWS Cloud Provider, you will set up the Cloud Provider to assume the IAM role used by the Delegate. This is done using a Delegate Selector. For steps on installing a Delegate Selector, see [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation). + +When you are done, the Delegate listing will look something like this: + +![](./static/cloud-formation-account-setup-02.png) + +#### Permissions + +The Delegate requires permissions according to the target deployment service (ECS, EC2, Lambda). + +For ECS Delegates, you can add an IAM role to the ECS Delegate task definition. For more information, see  [Trust Relationships and Roles](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#trust_relationships_and_roles). 
+ +If you will use AWS S3 as the source for your CloudFormation templates, then the IAM role used by the Delegate will also need policies to read templates from AWS S3. This is described below in [Step 3: Add Template Resource](#step_3_add_template_resource). + +### Step 2: Set Up the AWS Cloud Provider + +For a CloudFormation deployment, Harness can use a single AWS Cloud Provider to connect to your AWS account and do the following: + +* Obtain artifacts from Elastic Container Registry (ECR) or S3. +* Obtain the CloudFormation template from S3. +* Provision infrastructure in AWS. +* Deploy to the provisioned infrastructure in AWS. + +When you create the AWS Cloud Provider, you can enter the platform account information for the Cloud Provider to use as credentials, or you can use a Delegate running in your AWS infrastructure to provide the IAM role for the Cloud Provider. + +With CloudFormation, you are building an infrastructure on a platform that requires specific permissions, and so the account used by the AWS Cloud Provider (either by username and password or Delegate IAM role) needs the required policies. + +For example, to create AWS EC2 AMI instances, the account/role needs the **AmazonEC2FullAccess** policy. + +See the list of policies in [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers). + +For steps on adding an AWS Cloud Provider, see [Amazon Web Services (AWS) Cloud](https://docs.harness.io/article/whwnovprrb-cloud-providers#amazon_web_services_aws_cloud). + +When the AWS Cloud Provider uses the installed Delegate for credentials via the Delegate's Selector, it assumes the IAM role used to add the Delegate. + +#### Permissions + +The AWS Cloud Provider must have **create** permissions for the resources you are planning to create using the CloudFormation template. 
+ +As discussed earlier, for Harness AWS Cloud Providers, you can install the Delegate in your AWS VPC and have the Cloud Provider assume the permissions used by the Delegate. + +Just ensure that the IAM role assigned to the Delegate host (EC2 or ECS) has **create** permissions for the resources you are planning to create using the CloudFormation template. + +### Step 3: Add Template Resource + +CloudFormation templates are added to Harness by either pasting them into a text field, using an AWS S3 URL that points to the template, or using a Git repo. + +![](./static/cloud-formation-account-setup-03.png) + +Setting up AWS S3 and Git connections is described below. + +Connections to AWS CodeCommit are made in Harness Source Repo Providers, not as an AWS Cloud Provider. + +#### Option 1: Use AWS S3 + +You can use the same AWS Cloud Provider to provision your AWS deployment environment and access the S3 bucket URL. + +The AWS Cloud Provider will need credentials to access the S3 bucket. + +These policies are required: + +* The Managed Policy **AmazonS3ReadOnlyAccess**. +* The [Customer Managed Policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#customer-managed-policies) you create using `ec2:DescribeRegions`. +* The Customer Managed Policy you create using `cloudformation:GetTemplateSummary`. + +The AWS [IAM Policy Simulator](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_testing-policies.html) is a useful tool for evaluating policies and access. + +**Policy Name:** `AmazonS3ReadOnlyAccess`. + +**Policy ARN:** `arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess`. + +**Description:** Provides read-only access to all buckets via the AWS Management Console. + +**Policy JSON:** + + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "s3:Get*", + "s3:List*" + ], + "Resource": "*" + } + ] +} +``` +**Policy Name:** `HarnessS3`.
+ +**Description:** Harness S3 policy that uses EC2 permissions. This is a customer-managed policy you must create. In this example we have named it `HarnessS3`. + +**Policy JSON:** + + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "VisualEditor0", + "Effect": "Allow", + "Action": "ec2:DescribeRegions", + "Resource": "*" + } + ] +} +``` +If you want to use an S3 bucket that is in a different account from the account used to set up the AWS Cloud Provider, you can grant cross-account bucket access. For more information, see [Bucket Owner Granting Cross-Account Bucket Permissions](https://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html) from AWS. + +**Policy Name:** `HarnessCloudFormation`. + +**Description:** Returns information about a new or existing template. See [GetTemplateSummary](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_GetTemplateSummary.html) from AWS. + +**Policy JSON:** + + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "cloudformation:GetTemplateSummary" + ], + "Resource": "*" + } + ] +} +``` +See [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers). + +The policies can be added to the AWS account you use to set up the AWS Cloud Provider. If the AWS Cloud Provider is using the Delegate for credentials, then the role applied to the Delegate host must have the policies.
+ +The following links provide useful information for ensuring access between EC2 instances and S3 buckets: + +* [Verify Resource-Based Permissions Using the IAM Policy Simulator](https://aws.amazon.com/blogs/security/verify-resource-based-permissions-using-the-iam-policy-simulator/) +* [How can I grant my Amazon EC2 instance access to an Amazon S3 bucket in another AWS account?](https://aws.amazon.com/premiumsupport/knowledge-center/s3-instance-access-bucket/) + +#### Option 2: Use Your Git Repo + +If you want to use a Git repo as the source of your CloudFormation templates, you need to add a connection to your repo as a Harness Source Repo Provider. + +See [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). + +### Next Steps + +* [Add CloudFormation Templates](add-cloud-formation-templates.md) + diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/cloud-formation-provisioner.md b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/cloud-formation-provisioner.md new file mode 100644 index 00000000000..8a0e6056637 --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/cloud-formation-provisioner.md @@ -0,0 +1,24 @@ +--- +title: CloudFormation How-tos (FirstGen) +description: Harness has first-class support for AWS CloudFormation as an infrastructure provisioner. +sidebar_position: 100 +helpdocs_topic_id: 78g32khjcu +helpdocs_category_id: hupik7gwhc +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/vynj4hxt98).Harness has first-class support for [AWS CloudFormation](https://aws.amazon.com/cloudformation/) as an infrastructure provisioner. 
+ +See the following CloudFormation How-tos: + +* [Set Up Your Harness Account for CloudFormation](cloud-formation-account-setup.md) +* [Add CloudFormation Templates](add-cloud-formation-templates.md) +* [Map CloudFormation Infrastructure](map-cloud-formation-infrastructure.md) +* [Provision using CloudFormation Create Stack](provision-cloudformation-create-stack.md) +* [Using CloudFormation Outputs in Workflow Steps](using-cloudformation-outputs-in-workflow-steps.md) +* [Remove Provisioned Infra with CloudFormation Delete Stack](cloudformation-delete-stack.md) + +For a conceptual overview of Harness CloudFormation integration, see [CloudFormation Provisioning with Harness](../../concepts-cd/deployment-types/cloud-formation-provisioning-with-harness.md). + +**Deployment Strategies Supported** — For most deployments, Harness Infrastructure Provisioners are only supported in Canary and Multi-Service types. For AMI/ASG and ECS deployments, Infrastructure Provisioners are also supported in Blue/Green deployments. \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/cloudformation-delete-stack.md b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/cloudformation-delete-stack.md new file mode 100644 index 00000000000..17b01bec1f8 --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/cloudformation-delete-stack.md @@ -0,0 +1,73 @@ +--- +title: Remove Provisioned Infra with CloudFormation Delete Stack +description: Add a CloudFormation Delete Stack Workflow step to remove any provisioned infrastructure. +# sidebar_position: 2 +helpdocs_topic_id: i1agf0s6h4 +helpdocs_category_id: hupik7gwhc +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). 
Switch to [NextGen](https://docs.harness.io/article/vynj4hxt98). You can add a CloudFormation Delete Stack Workflow step to remove any provisioned infrastructure, just like running the `aws cloudformation delete-stack` command. See [delete-stack](https://docs.aws.amazon.com/cli/latest/reference/cloudformation/delete-stack.html) from AWS. + + +### Before You Begin + +* [CloudFormation Provisioning with Harness](../../concepts-cd/deployment-types/cloud-formation-provisioning-with-harness.md) +* [Set Up Your Harness Account for CloudFormation](cloud-formation-account-setup.md) +* [Add CloudFormation Templates](add-cloud-formation-templates.md) +* [Map CloudFormation Infrastructure](map-cloud-formation-infrastructure.md) +* [Provision using CloudFormation Create Stack](provision-cloudformation-create-stack.md) +* [Using CloudFormation Outputs in Workflow Steps](using-cloudformation-outputs-in-workflow-steps.md) + +### Review: What Gets Deleted? + +CloudFormation Delete Stack can delete any CloudFormation stack. You identify the stack you want deleted using its stack name or by using the default settings in the Workflow CloudFormation steps. + +Let's look at a couple of examples: + +#### Delete Using Default Steps + +When you provision infrastructure using CloudFormation, you add a **CloudFormation Create Stack** step in the **Workflow Pre-deployment Steps** section. If you do not enter a custom name for that stack, Harness names the stack using the `HarnessStack-` prefix and the ID of the Environment used. + +If you want to delete that stack, add the **CloudFormation Delete Stack** step to the **Post-deployment Steps** of the same Workflow. + +In **CloudFormation Delete Stack**, do not enter a custom name, and ensure you specify the same settings as the CloudFormation Create Stack step (Provisioner, AWS Cloud Provider, Region). + +The **CloudFormation Delete Stack** step will delete the stack created by the **CloudFormation Create Stack** step.
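As a hypothetical illustration of the default naming, if the ID of the Environment were `abc12345`, the generated stack name would look like:

```
HarnessStack-abc12345
```

This is the name you would see in the AWS CloudFormation console, and the name the Delete Stack step resolves when you rely on the default settings.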
+ +#### Delete Using Stack Name + +You can also use the **Use Custom Name** setting in the **CloudFormation Delete Stack** step to delete any stack by name. + +This is the same as the [delete-stack](https://docs.aws.amazon.com/cli/latest/reference/cloudformation/delete-stack.html) API command. + +To see the list of stacks and their names, you can simply run: + + +``` +aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE +``` +### Step 1: Add CloudFormation Delete Stack + +1. In the **Post-deployment Steps** of the Workflow, click **Add Step**, and then select **CloudFormation Delete Stack**. The CloudFormation Delete Stack settings appear. + +### Option 1: Delete Stack by Name + +If you want to specify the name of a specific stack, do the following: + +1. In **AWS Cloud Provider**, select the AWS Cloud Provider with credentials to delete stacks. Typically, this is the same AWS Cloud Provider you selected in the **CloudFormation Create Stack** step that created the stack you want to delete. + For details on permissions, see [Set Up Your Harness Account for CloudFormation](cloud-formation-account-setup.md). +2. In **Region**, select the same region you selected in the **CloudFormation Create Stack** step that created the stack you want to delete. +3. Select **Use Custom Stack Name** and enter the name of the stack to delete in **Custom Stack Name**. +4. Click **Submit**. + +### Option 2: Delete Stack Using Defaults + +If you want to delete the exact same stack you provisioned using the **CloudFormation Create Stack** step in this Workflow, do the following: + +1. In **Provisioner**, select the same CloudFormation Infrastructure Provisioner you selected in the **CloudFormation Create Stack** step that created the stack you want to delete. +2. In **AWS Cloud Provider**, select the same AWS Cloud Provider you selected in the **CloudFormation Create Stack** step that created the stack you want to delete. +3. 
In **Region**, select the same region you selected in the **CloudFormation Create Stack** step that created the stack you want to delete. +4. Click **Submit**. + diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/map-cloud-formation-infrastructure.md b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/map-cloud-formation-infrastructure.md new file mode 100644 index 00000000000..3c3eec662b1 --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/map-cloud-formation-infrastructure.md @@ -0,0 +1,142 @@ +--- +title: Map CloudFormation Infrastructure +description: Simply map CloudFormation template outputs to the required Harness settings. +# sidebar_position: 2 +helpdocs_topic_id: 4xtxj2f88b +helpdocs_category_id: hupik7gwhc +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/vynj4hxt98).To enable Harness to deploy to the provisioned infrastructure you defined in [Add CloudFormation Templates](add-cloud-formation-templates.md), you map outputs from your CloudFormation template to the Harness Infrastructure Definition settings Harness requires for provisioning. + +Mappings provide Harness with the minimum settings needed to provision using your template. + +Harness supports first class mapping for AWS-based infrastructures (SSH, ASG, ECS, Lambda). 
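For example, a template might declare outputs like the following for mapping. The output and resource names here are illustrative only; which outputs you actually need depends on the deployment type:

```
Outputs:
  region:
    Description: Region used for the deployment
    Value: !Ref 'AWS::Region'
  autoScalingGroupName:
    Description: Auto Scaling Group for Harness to mirror
    Value: !Ref MyAutoScalingGroup
```

`MyAutoScalingGroup` assumes the template defines an Auto Scaling Group resource with that logical ID.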
+ +### Before You Begin + +Ensure you have read the following topics before you map the CloudFormation Provisioner in an Infrastructure Definition: + +* [CloudFormation Provisioning with Harness](../../concepts-cd/deployment-types/cloud-formation-provisioning-with-harness.md) +* [Set Up Your Harness Account for CloudFormation](cloud-formation-account-setup.md) +* [Add CloudFormation Templates](add-cloud-formation-templates.md) + +### Limitations + +**AWS Services Supported** — Harness supports first class CloudFormation provisioning for AWS-based infrastructures: + +* SSH +* AMI/Auto Scaling Group +* ECS +* Lambda + +**Deployment Strategies Supported** — Harness Infrastructure Provisioners are only supported in Canary and Multi-Service types. For AMI and ECS, Infrastructure Provisioners are also supported in Blue/Green deployments. + +**CloudFormation Template Outputs** — If you have been running your deployments manually, you might not have outputs configured in your template files. To provision using your CloudFormation template, you will need to add these output variables to your template. + +**Template Formats** — CloudFormation templates may be in JSON or YAML. + +**CloudFormation Provisioners and Environments:** + +* A CloudFormation Provisioner should not be used to provision different infrastructures/stacks within the same Environment. +* A CloudFormation Provisioner + Environment pair should be unique per provisioned infrastructure/stack. + +### Visual Summary + +This topic describes step 2 in the Harness CloudFormation Provisioning implementation process: + +![](./static/map-cloud-formation-infrastructure-05.png) + +Once you have completed this topic, you can move onto the next step: [Provision using CloudFormation Create Stack](provision-cloudformation-create-stack.md). 
+ +### Step: Create an Infrastructure Definition + +As noted above, ensure you have done [Set Up Your Harness Account for CloudFormation](cloud-formation-account-setup.md) and [Add CloudFormation Templates](add-cloud-formation-templates.md) before using the CloudFormation Infrastructure Provisioner to create the Infrastructure Definition. + +To use a CloudFormation Infrastructure Provisioner to create an ​Infrastructure Definition, do the following: + +1. In the same Harness Application where you created the CloudFormation Infrastructure Provisioner, in an existing Environment, click **Add ​Infrastructure Definition**. The ​Infrastructure Definition settings appear. +2. In **Name**, enter the name for the ​Infrastructure Definition. This is the name you will select when you add this ​Infrastructure Definition to a Workflow or Workflow Phase. +3. In **Cloud Provider Type**, select the type of Harness Cloud Provider you will use to connect to the target platform.Harness supports first class mapping for AWS-based infrastructures (SSH, ASG, ECS, Lambda). +4. In **Deployment Type**, select the platform for the deployment, such as ECS, AMI, etc. +5. Select **Map Dynamically Provisioned Infrastructure**. This option reveals the Infrastructure Provisioner settings for the ​Infrastructure Definition. +6. In **Provisioner**, select the name of the CloudFormation Provisioner you want to use. +7. In **Cloud Provider**, select the Cloud Provider to use to connect to the target cloud platform. +The remainder of the settings are specific to the Provisioner and Cloud Provider you selected. +8. Map the required fields to your CloudFormation template outputs. The platform-specific sections below provide examples for the common deployment types. 
+ +You map the CloudFormation template outputs using this syntax, where `exact_name` is the name of the output: + + +``` +${cloudformation.exact_name} +``` +When you map a CloudFormation template output to a Harness Infrastructure Definition setting, the expression for the output, `${cloudformation.exact_name}`, can be used anywhere in the Workflow that uses that CloudFormation Provisioner. This can be useful if you want to echo the outputs in a [Shell Script step](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) for example. + +### Option 1: Map an AWS AMI/ASG Infrastructure + +AMI/ASG and ECS deployments are the only types that support Terraform and CloudFormation Infrastructure Provisioners in Blue/Green deployments. The AMI deployment type uses an AWS Auto Scaling Group and only requires that you provide a region and Auto Scaling Group. + +The Auto Scaling Group output is the Auto Scaling Group you want Harness to mirror (use as a template) when it provisions a new Auto Scaling Group as part of the AMI deployment. + +For more information, see [AWS AMI Deployments Overview](../../concepts-cd/deployment-types/aws-ami-deployments-overview.md). + +In the following example, we show: + +* Required outputs. +* The outputs used for the optional Target Group and Application Load Balancer. +* The stage Target Group and Application Load Balancer used for Blue/Green deployments. + +![](./static/map-cloud-formation-infrastructure-06.png) + +### Option 2: Map an AWS ECS Infrastructure + +The ECS mapping supports both ECS launch types, EC2 and Fargate. + +The ECS deployment type has two **Launch Type** options: + +* **EC2 Instances** - Region and Cluster are required. Here is an example mapping Region and Cluster and the remaining fields: + +![](./static/map-cloud-formation-infrastructure-07.png) + +* **Fargate Launch Type** - Region, Cluster, Task Execution Role, VPC, Subnets, Security Group are required.
Here is an example mapping all of these:
+
+![](./static/map-cloud-formation-infrastructure-08.png)
+
+See [AWS ECS Deployments Overview](../../concepts-cd/deployment-types/aws-ecs-deployments-overview.md) and [AWS ECS Quickstart](https://docs.harness.io/article/j39azkrevm-aws-ecs-deployments).
+
+### Option 3: Map an AWS Lambda Infrastructure
+
+The Lambda deployment type supports AWS Instance and AWS Auto Scaling Groups. Both require IAM role and region outputs.
+
+Here is an Infrastructure Definition example for Lambda:
+
+![](./static/map-cloud-formation-infrastructure-09.png)
+
+See [Lambda Deployment Overview](../lambda-deployments/lambda-deployment-overview.md) and [AWS Lambda Quickstart](https://docs.harness.io/article/wy1rjh19ej-aws-lambda-deployments).
+
+### Option 4: Map a Secure Shell (SSH) Infrastructure on AWS
+
+The Secure Shell (SSH) deployment type is supported with CloudFormation on AWS only. To use SSH with a data center, see [Shell Script Provisioner](https://docs.harness.io/article/1m3p7phdqo-shell-script-provisioner). The Secure Shell (SSH) deployment type has two **AWS Node Type** options: **AWS Instance** and **AWS Autoscaling Group**.
+
+For **AWS Instance**, only AWS tags are required. Here is an example mapping both VPCs and AWS tags:
+
+![](./static/map-cloud-formation-infrastructure-10.png)
+
+The remaining settings can also use output variables or be hardcoded.
+
+For **AWS Auto Scaling Group**, only an Auto Scaling Group is required:
+
+![](./static/map-cloud-formation-infrastructure-11.png)
+
+See [Traditional Deployments Overview](../../traditional-deployments/traditional-deployments-overview.md).
+
+### Next Steps
+
+Now that the Infrastructure Definition is mapped to the CloudFormation outputs in your script, the provisioned infrastructure can be used as a deployment target by a Harness Workflow. But the CloudFormation template must still be run to provision this infrastructure.
+
+To run the CloudFormation template in your Harness Infrastructure Provisioner and create the infrastructure you defined in the Infrastructure Definition, you add a **CloudFormation Create Stack** step to a Workflow that uses the Infrastructure Definition you just set up.
+
+For steps on adding the CloudFormation Create Stack step, see [Provision using CloudFormation Create Stack](provision-cloudformation-create-stack.md).
+
diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/provision-cloudformation-create-stack.md b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/provision-cloudformation-create-stack.md
new file mode 100644
index 00000000000..a08af88930d
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/provision-cloudformation-create-stack.md
@@ -0,0 +1,287 @@
+---
+title: Provision using CloudFormation Create Stack
+description: Provision infrastructure using the Workflow CloudFormation Create Stack step.
+# sidebar_position: 2
+helpdocs_topic_id: 5wdb3r765g
+helpdocs_category_id: hupik7gwhc
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/vynj4hxt98). This topic describes how to provision infrastructure using the Workflow CloudFormation Create Stack step.
+
+Once you have [created a CloudFormation Infrastructure Provisioner](add-cloud-formation-templates.md) and [added it to a Harness Infrastructure Definition](map-cloud-formation-infrastructure.md), you add that Infrastructure Definition to a Workflow.
+
+Next, you use the CloudFormation Create Stack step in that Workflow to run the same CloudFormation template added in the Infrastructure Provisioner.
+
+During Workflow pre-deployment, the CloudFormation Create Stack step provisions the target infrastructure.
+
+Next, during Workflow deployment, the Workflow deploys to the provisioned infrastructure as defined in its Infrastructure Provisioner.
+
+## Before You Begin
+
+Ensure you have read the following topics before you add the CloudFormation Create Stack step to a Workflow:
+
+* [CloudFormation Provisioning with Harness](../../concepts-cd/deployment-types/cloud-formation-provisioning-with-harness.md)
+* [Set Up Your Harness Account for CloudFormation](cloud-formation-account-setup.md)
+* [Add CloudFormation Templates](add-cloud-formation-templates.md)
+* [Map CloudFormation Infrastructure](map-cloud-formation-infrastructure.md)
+
+## Important Notes
+
+* **AWS Services Supported**: Harness supports first-class CloudFormation provisioning for AWS-based infrastructures:
+	+ SSH
+	+ AMI/Auto Scaling Group
+	+ ECS
+	+ Lambda
+* **Deployment Strategies Supported**: For most deployments, Harness Infrastructure Provisioners are only supported in Canary and Multi-Service types. For AMI/ASG and ECS deployments, Infrastructure Provisioners are also supported in Blue/Green deployments.
+* **Control stack deployment wait time:**
+	Currently, this feature is behind the Feature Flag `CLOUDFORMATION_SKIP_WAIT_FOR_RESOURCES`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+	+ By default, Harness waits for 30 seconds after a successful stack deployment to ensure that resources have come up.
+	+ You can remove this wait time by enabling the Feature Flag `CLOUDFORMATION_SKIP_WAIT_FOR_RESOURCES`.
+	+ If `CLOUDFORMATION_SKIP_WAIT_FOR_RESOURCES` is enabled and you still want to have a waiting condition, use the [AWS CreationPolicy attribute](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-creationpolicy.html).
+
+## Where can I use CloudFormation Create Stack?
+
+CloudFormation Create Stack can be used to provision infrastructure or for ad hoc provisioning.
+
+When used in the Pre-deployment steps of a Workflow, the CloudFormation Create Stack step provisions infrastructure that can be rolled back if the Workflow fails.
+
+When used outside of the Pre-deployment steps of a Workflow, the CloudFormation Create Stack step does not participate in Workflow rollback. Only use the CloudFormation Create Stack step outside of the Pre-deployment steps of a Workflow for ad hoc provisioning.
+
+To delete the ad hoc provisioned infrastructure in the case of a Workflow failure, add the CloudFormation Delete Stack step to the Workflow **Rollback Steps** section. See [Remove Provisioned Infra with CloudFormation Delete Stack](cloudformation-delete-stack.md).
+
+## Visual Summary
+
+This topic describes steps 4 through 6 in the Harness CloudFormation Provisioning implementation process:
+
+![](./static/provision-cloudformation-create-stack-12.png)
+
+For step 1, see [Add CloudFormation Templates](add-cloud-formation-templates.md). For step 2, see [Map CloudFormation Infrastructure](map-cloud-formation-infrastructure.md).
+
+Here is an illustration using a deployment:
+
+![](./static/provision-cloudformation-create-stack-13.png)
+
+1. The **CloudFormation Create Stack** step executes pre-deployment to build the infrastructure.
+2. The **Infrastructure Definition** is used to select the provisioned nodes.
+3. The app is **installed** on the provisioned node.
+
+## Step 1: Add Environment to Workflow
+
+To use a CloudFormation Provisioner in your Workflow, do the following:
+
+1. In your Harness Application, click **Workflows**.
+2. Click **Add Workflow**. The Workflow dialog appears.
+3. Enter a name and description for the Workflow.
+4. In **Workflow Type**, select **Canary**.
+:::note
+For most deployments, Harness Infrastructure Provisioners are only supported in Canary and Multi-Service types. For AMI/ASG and ECS deployments, Infrastructure Provisioners are also supported in Blue/Green deployments.
+:::
+5.
In **Environment**, select the Environment that has the CloudFormation Provisioner set up in its Infrastructure Definitions.
+6. Click **SUBMIT**. The new Workflow is created.
+
+By default, the Workflow includes a **Pre-deployment Steps** section. This is where you will add a step that uses your CloudFormation Provisioner.
+
+## Step 2: Add CloudFormation Create Stack Step to Pre-deployment Steps
+
+In this step, you will use the CloudFormation Create Stack step to select the same CloudFormation Infrastructure Provisioner you used in the Workflow Infrastructure Definition.
+
+The CloudFormation Create Stack step will provision using the template in the CloudFormation Infrastructure Provisioner.
+
+The CloudFormation Create Stack step is essentially the same as the [`aws cloudformation create-stack`](https://docs.aws.amazon.com/cli/latest/reference/cloudformation/create-stack.html) command.
+
+The CloudFormation Create Stack step provisions your target infrastructure, and so it is added to the **Pre-deployment Steps** in the Canary Workflow.
+
+To add the CloudFormation Create Stack step, do the following:
+
+1. In your Workflow, in **Pre-deployment Steps**, click **Add Step**.
+2. Select **CloudFormation Create Stack**, and click **Next**.
+3. In **Provisioner**, select the same Harness CloudFormation Infrastructure Provisioner you used in the Infrastructure Definition of this Workflow.
+4. In **AWS Cloud Provider**, typically, you will select the same Cloud Provider you used when setting up the Infrastructure Definition used by this Workflow.
+:::note
+You need to select an AWS Cloud Provider even if the CloudFormation Infrastructure Provisioner you selected uses a manually-entered template body. Harness needs access to the AWS API for CloudFormation via the credentials in the AWS Cloud Provider. Ensure that the AWS Cloud Provider has the credentials described in [Set Up Your Harness Account for CloudFormation](cloud-formation-account-setup.md).
+:::
+5.
In **Region**, select the region where you will be provisioning your resources.
+You can use a [Harness variable expression](https://docs.harness.io/article/9dvxcegm90-variables) in the Region setting, such as a [Workflow variable](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). This allows you to select the AWS region for the provisioning when you deploy your Workflow.
+:::note
+Currently, expressions in the Region setting are in Beta and behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Feature Flags can only be removed for Harness Professional and Essentials editions. Once the feature is released to a general audience, it is available for Trial and Community Editions.
+:::
+6. To name your stack, select **Use Custom Stack Name** and enter a name for your stack. If you do not select this option, Harness will automatically generate a unique name for your stack prefixed with `HarnessStack` and the ID of your Harness Environment, such as `HarnessStack-7HklGe0N6AvviJmZ`.
+	If you plan on using the [CloudFormation Delete Stack](cloudformation-delete-stack.md) step later in this Workflow, it is a good idea to name your stack.
+7. In **Role ARN**, enter the Amazon Resource Name (ARN) of an AWS IAM role that CloudFormation assumes to create the stack. If you don't specify a value, Harness uses the credentials you provided via the **AWS Cloud Provider**. This allows you to tune the step for provisioning a specific AWS resource. For example, if you will only provision AWS S3, then you can use a role that is limited to S3.
+	You can also use [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables) in **Role ARN**. For example, you can create a Service or Workflow variable and then enter its expression in **Role ARN**, such as `${serviceVariables.roleARN}` or `${workflow.variables.roleArn}`.
+8.
To acknowledge the capabilities in the CloudFormation template, enable **Specify Capabilities**.
+	This acknowledges that the template contains certain capabilities (for example, `CAPABILITY_AUTO_EXPAND`), giving AWS CloudFormation the specified capabilities before it creates the stack. This is the same as using the `--capabilities` option in the `aws cloudformation create-stack` CLI command. See [create-stack](https://docs.aws.amazon.com/cli/latest/reference/cloudformation/create-stack.html).
+	In **Capabilities**, select one or more of the capabilities from the spec.
+9. To add CloudFormation Tags, enable **Add CloudFormation Tags**.
+
+	Enter the tags in JSON format only (lowercase is required):
+
+	```
+	[{
+	  "key": "string",
+	  "value": "string"
+	},{
+	  "key": "string",
+	  "value": "string"
+	}]
+	```
+
+	The tags you add here are applied to all of the resources in the stack. AWS has a limit of 50 unique tags for each stack.
+
+	You can use Harness variable expressions in the keys and values. See [Built-in Variables List](https://docs.harness.io/article/aza65y4af6-built-in-variables-list) and [Set Workflow Variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template).
+10. In **Skip based on Stack Status**, you can add the stack states that will not prevent provisioning.
+:::note
+Harness checks whether the stack is in the ROLLBACK\_COMPLETE state before the deployment. If it is, Harness deletes the stack and then triggers the deployment.
+:::
+	![](./static/provision-cloudformation-create-stack-14.png)
+11. In **Timeout**, enter how long Harness should wait for a successful CloudFormation Provisioner setup before failing the Workflow.
+12. Click **Next**. The **Input Values** settings appear.
+
+## Option 1: Enter Input Values from Parameter Files
+
+You can use CloudFormation parameter files to specify input parameters for the stack.
+
+This is the same as using the AWS CloudFormation CLI `create-stack` option `--parameters` with a JSON parameters file:
+
+```
+aws cloudformation create-stack --stack-name startmyinstance
+--template-body file:///some/local/path/templates/startmyinstance.json
+--parameters file:///some/local/path/params/startmyinstance-parameters.json
+```
+
+Where the JSON file contains these parameters:
+
+```
+[
+  {
+    "ParameterKey": "KeyPairName",
+    "ParameterValue": "MyKey"
+  },
+  {
+    "ParameterKey": "InstanceType",
+    "ParameterValue": "m1.micro"
+  }
+]
+```
+
+### Use a CloudFormation Parameter File
+
+1. In **Input Values**, select **Use CloudFormation Template Parameters files**.
+2. In **Path to Parameters.json**, enter the path to the parameter file.
+
+### Source Types
+
+Parameter files can be used with Git repo and AWS S3 source types. See [Add CloudFormation Templates](add-cloud-formation-templates.md).
+
+### Git-based Parameter Files
+
+Enter the full path to the file.
+
+For Git-based parameter files, the path entered is relative to the **URL** setting of the Source Repo Provider used by the CloudFormation Provisioner.
+
+For example, suppose the CloudFormation Provisioner you select in **Provisioner** uses a Source Repo Provider with a **URL** setting of `https://github.com/account-name/cf-files`.
+
+In the **cf-files** repo folder there is a file named **parameters.json**. So, in **Path to Parameters.json**, you would simply enter **parameters.json**.
+
+### Encrypted Text Secrets
+
+You can use Harness encrypted text secrets in **Path to Parameters.json**. See [Use Encrypted Text Secrets](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets).
+
+![](./static/provision-cloudformation-create-stack-15.png)
+
+### Multiple Parameter Files
+
+You can enter paths to single and multiple files.
Separate multiple files using commas:
+
+```
+https://my-bucket.s3.amazonaws.com/parameters1.json,https://my-bucket.s3.amazonaws.com/parameters3.json
+```
+
+### Workflow Variable Expressions in Paths
+
+You can use Harness Workflow variables in **Path to Parameters.json**.
+
+![](./static/provision-cloudformation-create-stack-16.png)
+
+When the Workflow is deployed, by itself, in a Pipeline, or in a Trigger, you will provide values for the Workflow variables. This allows you to templatize the path.
+
+See [Set Workflow Variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) and [Templatize a Workflow](https://docs.harness.io/article/bov41f5b7o-templatize-a-workflow-new-template).
+
+### Workflow Variable Expressions in Files
+
+You can use Harness built-in and Workflow variables in the parameter values inside the parameter file. Harness will replace the variables when it executes the **Pre-deployment Steps** section.
+
+For example:
+
+```
+[
+  {
+    "ParameterKey": "KeyPairName",
+    "ParameterValue": "${workflow.variables.KeyPairNameValue}"
+  },
+  {
+    "ParameterKey": "InstanceType",
+    "ParameterValue": "${workflow.variables.InstanceTypeValue}"
+  }
+]
+```
+
+### Use Parameters Files and Inline Values Together
+
+You can use **Use CloudFormation Template Parameters files** and **Inline Values** together. Inline Values override parameter file values.
+
+## Option 2: Enter Inline Input Values
+
+The Input Values are automatically populated with the same variables from the CloudFormation Infrastructure Provisioner **Variables** section, as described in [Add CloudFormation Templates](add-cloud-formation-templates.md).
+
+Enter or select a value for each variable in **Input Values**. For encrypted text values, select an Encrypted Text secret from Harness Secrets Management.
+
+![](./static/provision-cloudformation-create-stack-17.png)
+
+For more information, see [Use Encrypted Text Secrets](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets).
+
+Click **Submit**. The **CloudFormation Create Stack** step is added to your Workflow.
+
+![](./static/provision-cloudformation-create-stack-18.png)
+
+Now your Workflow is set up to provision an infrastructure using your CloudFormation template in the CloudFormation Infrastructure Provisioner, and then deploy to the provisioned infrastructure.
+
+## Step 3: Add Infrastructure Definition to Phases
+
+Now that the Workflow **Pre-deployment** section has your CloudFormation Create Stack step added, you need to add the target Infrastructure Definition where the Workflow will deploy.
+
+This is the same Infrastructure Definition where you mapped your CloudFormation Infrastructure Provisioner outputs, as described in [Map CloudFormation Infrastructure](map-cloud-formation-infrastructure.md).
+
+For Canary Workflows, Infrastructure Definitions are added in Phases, in the **Deployment Phases** section.
+
+In the **Deployment Phases** section, click **Add Phase**. The Workflow Phase settings appear.
+
+1. In **Service**, select the Harness Service to deploy.
+2. In **Infrastructure Definition**, select the target Infrastructure Definition where the Workflow will deploy. This is the same Infrastructure Definition where you mapped your CloudFormation Infrastructure Provisioner outputs, as described in [Map CloudFormation Infrastructure](map-cloud-formation-infrastructure.md).
+Here is an example:
+
+	![](./static/provision-cloudformation-create-stack-19.png)
+
+3. Click **Submit**. Use the same Infrastructure Definition for the remaining phases in your Canary Workflow.
+
+Once you are done, your Workflow is ready to deploy.
+
+## Deployment Rollback
+
+If you have successfully deployed CloudFormation resources and on the next deployment there is an error that initiates a rollback, Harness will roll back the provisioned infrastructure to the previous, successful version of the CloudFormation state.
+
+Harness will not increment the serial in the state, but perform a hard rollback to the exact version of the state provided.
+
+Harness determines what to roll back using a combination of the following Harness entities:
+
+`CloudFormation Infrastructure Provisioner + Environment`
+
+If you have templated these settings (using Workflow variables), Harness uses the values it obtains at runtime when it evaluates the template variables.
+
+## Next Steps
+
+* The variables you use to map CloudFormation template outputs in an Infrastructure Definition can also be used in other Workflow commands. See [Using CloudFormation Outputs in Workflow Steps](using-cloudformation-outputs-in-workflow-steps.md).
+* If you want to delete the stack as part of a Workflow, see [Remove Provisioned Infra with CloudFormation Delete Stack](cloudformation-delete-stack.md).
+ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-20.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-20.png new file mode 100644 index 00000000000..1e8b3563fc8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-20.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-21.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-21.png new file mode 100644 index 00000000000..c6ca141ca8e Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-21.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-22.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-22.png new file mode 100644 index 00000000000..dfff6d69b4a Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-22.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-23.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-23.png new file mode 100644 index 00000000000..c49f1d4a5b0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-23.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-24.png 
b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-24.png new file mode 100644 index 00000000000..85726dba5e4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-24.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-25.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-25.png new file mode 100644 index 00000000000..506d77290a6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-25.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-26.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-26.png new file mode 100644 index 00000000000..8e105a8c0ce Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-26.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-27.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-27.png new file mode 100644 index 00000000000..b1c271933a2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/add-cloud-formation-templates-27.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/cloud-formation-account-setup-00.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/cloud-formation-account-setup-00.png new file mode 100644 index 
00000000000..e49d1ef25c0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/cloud-formation-account-setup-00.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/cloud-formation-account-setup-01.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/cloud-formation-account-setup-01.png new file mode 100644 index 00000000000..301fca74c26 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/cloud-formation-account-setup-01.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/cloud-formation-account-setup-02.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/cloud-formation-account-setup-02.png new file mode 100644 index 00000000000..7a0362dbf6c Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/cloud-formation-account-setup-02.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/cloud-formation-account-setup-03.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/cloud-formation-account-setup-03.png new file mode 100644 index 00000000000..e7a9922a722 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/cloud-formation-account-setup-03.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/map-cloud-formation-infrastructure-05.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/map-cloud-formation-infrastructure-05.png new file mode 100644 index 00000000000..1e8b3563fc8 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/map-cloud-formation-infrastructure-05.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/map-cloud-formation-infrastructure-06.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/map-cloud-formation-infrastructure-06.png new file mode 100644 index 00000000000..657db28dbf2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/map-cloud-formation-infrastructure-06.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/map-cloud-formation-infrastructure-07.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/map-cloud-formation-infrastructure-07.png new file mode 100644 index 00000000000..3bce8d008d2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/map-cloud-formation-infrastructure-07.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/map-cloud-formation-infrastructure-08.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/map-cloud-formation-infrastructure-08.png new file mode 100644 index 00000000000..6a169fcbf48 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/map-cloud-formation-infrastructure-08.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/map-cloud-formation-infrastructure-09.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/map-cloud-formation-infrastructure-09.png new file mode 100644 index 00000000000..da463071548 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/map-cloud-formation-infrastructure-09.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/map-cloud-formation-infrastructure-10.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/map-cloud-formation-infrastructure-10.png new file mode 100644 index 00000000000..7e904c3aede Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/map-cloud-formation-infrastructure-10.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/map-cloud-formation-infrastructure-11.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/map-cloud-formation-infrastructure-11.png new file mode 100644 index 00000000000..caf7e60c934 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/map-cloud-formation-infrastructure-11.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-12.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-12.png new file mode 100644 index 00000000000..1e8b3563fc8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-12.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-13.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-13.png new file mode 100644 index 00000000000..5823604a348 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-13.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-14.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-14.png new file mode 100644 index 00000000000..e06f8929352 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-14.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-15.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-15.png new file mode 100644 index 00000000000..65cce923e14 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-15.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-16.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-16.png new file mode 100644 index 00000000000..179c3f82eaa Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-16.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-17.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-17.png new file mode 100644 index 00000000000..67cc62228f2 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-17.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-18.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-18.png new file mode 100644 index 00000000000..a2193edd0ca Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-18.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-19.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-19.png new file mode 100644 index 00000000000..76d3a6c8292 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/provision-cloudformation-create-stack-19.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/using-cloudformation-outputs-in-workflow-steps-04.png b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/using-cloudformation-outputs-in-workflow-steps-04.png new file mode 100644 index 00000000000..657db28dbf2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/static/using-cloudformation-outputs-in-workflow-steps-04.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/using-cloudformation-outputs-in-workflow-steps.md b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/using-cloudformation-outputs-in-workflow-steps.md new file mode 100644 index 00000000000..f32d6db0b64 --- /dev/null +++ 
b/docs/first-gen/continuous-delivery/aws-deployments/cloudformation-category/using-cloudformation-outputs-in-workflow-steps.md @@ -0,0 +1,65 @@
---
title: Using CloudFormation Outputs in Workflow Steps
description: Use CloudFormation output expressions in other Workflow steps.
# sidebar_position: 2
helpdocs_topic_id: ez8bgluqg5
helpdocs_category_id: hupik7gwhc
helpdocs_is_private: false
helpdocs_is_published: true
---

This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/vynj4hxt98).

The CloudFormation output variables you use to map CloudFormation template outputs in an Infrastructure Definition can also be output in other Workflow commands.

For example, if you use `${cloudformation.Region}` to map a region output to the AWS region in an Infrastructure Definition, you can add a Shell Script step in your Workflow and use `echo ${cloudformation.Region}` to print the value.
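The resolution Harness performs can be pictured as a plain string substitution of stack output values into the script body. A toy sketch, not Harness internals; the `resolve_expressions` helper and the `outputs` dict are hypothetical:

```python
import re

def resolve_expressions(template: str, outputs: dict) -> str:
    """Replace ${cloudformation.<OutputName>} expressions with stack output values."""
    def sub(match: re.Match) -> str:
        name = match.group(1)
        if name not in outputs:
            raise KeyError(f"No CloudFormation output named '{name}'")
        return str(outputs[name])
    return re.sub(r"\$\{cloudformation\.([A-Za-z0-9_]+)\}", sub, template)

# Hypothetical stack outputs and a Shell Script step body.
outputs = {"Region": "us-west-1", "AutoScalingGroup": "my-asg"}
script = "echo ${cloudformation.Region}"
print(resolve_expressions(script, outputs))  # echo us-west-1
```

Harness resolves these expressions before the Shell Script step runs; the sketch only illustrates the mapping from output name to value.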
In this topic:

* [Before You Begin](using-cloudformation-outputs-in-workflow-steps.md#before-you-begin)
* [Visual Summary](#visual_summary)
* [Step 1: Add A Workflow Step](#step_1_add_a_workflow_step)
* [Step 2: Enter the Output Variable Expression](#step_2_enter_the_output_variable_expression)

### Before You Begin

* [CloudFormation Provisioning with Harness](../../concepts-cd/deployment-types/cloud-formation-provisioning-with-harness.md)
* [Set Up Your Harness Account for CloudFormation](cloud-formation-account-setup.md)
* [Add CloudFormation Templates](add-cloud-formation-templates.md)
* [Map CloudFormation Infrastructure](map-cloud-formation-infrastructure.md)
* [Provision using CloudFormation Create Stack](provision-cloudformation-create-stack.md)

### Visual Summary

When you use a Harness CloudFormation Infrastructure Provisioner to map template outputs to Infrastructure Definition settings, you create variable expressions and use them as parameters.

In the following example, we show:

* Required outputs.
* The outputs used for the optional Target Group and Application Load Balancer.
* The stage Target Group and Application Load Balancer used for Blue/Green deployments.

![](./static/using-cloudformation-outputs-in-workflow-steps-04.png)

As you can see, you map the CloudFormation template outputs using this syntax, where `exact_name` is the name of the output:

```
${cloudformation.exact_name}
```

Once these variable expressions are defined as Infrastructure Definition parameters, and used by the CloudFormation Create Stack step in a Workflow, they can be used elsewhere in the Workflow.

### Step 1: Add A Workflow Step

This topic assumes you have a Workflow that uses an Infrastructure Definition that is dynamically mapped to a Harness CloudFormation Infrastructure Provisioner, and a CloudFormation Create Stack step in the Workflow that provisions that infrastructure.
+ +For details, see [Map CloudFormation Infrastructure](map-cloud-formation-infrastructure.md). + +Add a Workflow step where you want to use the CloudFormation template output value. Typically, this is a [Shell Script](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) step. + +### Step 2: Enter the Output Variable Expression + +You can use any variable expression that is already used in the Infrastructure Definition in the Workflow settings. + +For Canary Workflows, the Infrastructure Definition is added in the Phase settings. Therefore, you can only use the output variable expression within the Phase. + +For example, let's say you use `${cloudformation.AutoScalingGroup}` to map an ASG output to the ASG in an Infrastructure Definition. You can add a Shell Script step in your Workflow and use `echo ${cloudformation.AutoScalingGroup}` to print the value. + diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/_category_.json b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/_category_.json new file mode 100644 index 00000000000..c733d7fab0c --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/_category_.json @@ -0,0 +1 @@ +{"label": "AWS ECS Deployments", "position": 30, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "AWS ECS Deployments"}, "customProps": { "helpdocs_category_id": "df9vj316ec"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/deploy-multiple-containers-in-a-single-ecs-workflow.md b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/deploy-multiple-containers-in-a-single-ecs-workflow.md new file mode 100644 index 00000000000..6a48c9a95e3 --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/deploy-multiple-containers-in-a-single-ecs-workflow.md @@ -0,0 +1,453 @@ +--- +title: Deploy 
Multiple ECS Sidecar Containers
description: Deploy multiple containers and images using a single Harness ECS Service and Workflow.
sidebar_position: 1000
helpdocs_topic_id: 2eyw6epug0
helpdocs_category_id: df9vj316ec
helpdocs_is_private: false
helpdocs_is_published: true
---

You can deploy sidecar containers using a single Harness ECS Service and Workflow.

In the Harness Service for ECS, in addition to the spec for the Main Container used by Harness, you simply add container specs for however many sidecar containers you need.

Harness deploys all containers and images as defined in the specs.

### Before You Begin

This topic assumes you have read or performed the following:

* [AWS ECS Quickstart](https://docs.harness.io/article/j39azkrevm-aws-ecs-deployments)
* [AWS ECS Deployments Overview](../../concepts-cd/deployment-types/aws-ecs-deployments-overview.md)
* [ECS Workflows](ecs-workflows.md)
* [ECS Blue/Green Workflows](ecs-blue-green-workflows.md)
* AWS ECS [container task definition parameters](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definitions) (from AWS).

### Review: ECS Sidecar Containers

AWS ECS sidecar containers are common. They move some of the responsibility of a service out into a containerized module deployed alongside the core application container.

Put simply, they improve performance by freeing your application container from various CPU-intensive tasks.

Examples include a telemetry sidecar container that must start before, and shut down after, the other containers in a task, and an initialization container that must complete its work before the other containers in the task can start.

Here is a blog post from AWS explaining another sidecar use: [Deploying an NGINX Reverse Proxy Sidecar Container on Amazon ECS](https://aws.amazon.com/blogs/compute/nginx-reverse-proxy-sidecar-container-on-amazon-ecs/).
+ +### Review: New ARN and Resource ID Format Must be Enabled + +When deploying sidecar containers, Harness uses an AWS tag to distinguish the Main Container. The Main Container is the container used by Harness for the image Harness deploys. Sidecar containers are used for additional images. + +AWS ECS requires that their new ARN and resource ID format be enabled to add tags to the ECS service. + +If you have not opted into the new ECS ARN and resource ID format before you attempt deployment, you might receive the following deployment error: + +`InvalidParameterException: The new ARN and resource ID format must be enabled to add tags to the service. Opt in to the new format and try again.` + +To solve this issue, opt into the new format and try again. For more information, see  [Migrating your Amazon ECS deployment to the new ARN and resource ID format](https://aws.amazon.com/blogs/compute/migrating-your-amazon-ecs-deployment-to-the-new-arn-and-resource-id-format-2/) from AWS. + +### Review: Main Container for ECS Deployments + +The Main Container is the container used by Harness for deployments. Its spec is defined in the Harness Service. + +You can add sidecar containers, as described in this topic, but you must always include the Main Container. + +The Main Container is identified using the following mandatory placeholders: + +* `${CONTAINER_NAME}` — At deployment runtime, this placeholder is replaced with a container name based on the image name. Such as `harness_todolist-sample_9`. Each time you deploy the container, its numeric suffix is increased (`_9`). +* `${DOCKER_IMAGE_NAME}` — At deployment runtime, this placeholder is replaced with the Docker image name and tag. Such as `harness/todolist-sample:9`. + +The Main Container task spec must have a container definition using the placeholders. + +### Step 1: Add Sidecar Container Specs + +ECS container specs are added in Harness Services. 
The **Deployment Type** for the Services must be **Amazon ECS Container Service (ECS)**:

![](./static/deploy-multiple-containers-in-a-single-ecs-workflow-01.png)

1. In the Harness ECS Service, in **Deployment Specification**, click **Container Specification**. The **ECS - Container Command Definition** settings appear.
   ![](./static/deploy-multiple-containers-in-a-single-ecs-workflow-02.png)
   The simple interface is for adding a single EC2 container spec. For Fargate, sidecar containers, or granular settings, use **Advanced Settings**.
2. Click **Advanced Settings**.

Here is where you will enter the sidecar container task definitions.

The first definition is for the Main Container, and uses the Harness placeholders for container name and image:

```
{
  "containerDefinitions" : [ {
    "name" : "${CONTAINER_NAME}",
    "image" : "${DOCKER_IMAGE_NAME}",
    "cpu" : 1,
    "memory" : 1000,
...
```

Now let's add a second container spec for a sidecar container.

For this example, we'll simply copy the default container spec but use the suffix `_Sidecar` for the sidecar container name.

```
...
{
  "name" : "${CONTAINER_NAME}_Sidecar",
  "image" : "${DOCKER_IMAGE_NAME}",
  "memory" : "512",
  "portMappings" : [ {
    "containerPort" : 85,
    "protocol" : "tcp"
  } ]
} ],
...
```

Here is what the full specs look like:

```
{
  "containerDefinitions" : [ {
    "name" : "${CONTAINER_NAME}",
    "image" : "${DOCKER_IMAGE_NAME}",
    "links" : [ ],
    "portMappings" : [ {
      "containerPort" : 80,
      "protocol" : "tcp"
    } ],
    "memory" : "512",
    "entryPoint" : [ ],
    "command" : [ ],
    "environment" : [ ],
    "mountPoints" : [ ],
    "volumesFrom" : [ ],
    "dependsOn" : [ ],
    "dnsServers" : [ ],
    "dnsSearchDomains" : [ ],
    "extraHosts" : [ ],
    "dockerSecurityOptions" : [ ],
    "ulimits" : [ ],
    "systemControls" : [ ],
    "resourceRequirements" : [ ]
  },
  {
    "name" : "${CONTAINER_NAME}_Sidecar",
    "image" : "${DOCKER_IMAGE_NAME}",
    "memory" : "512",
    "portMappings" : [ {
      "containerPort" : 85,
      "protocol" : "tcp"
    } ]
  } ],
  "executionRoleArn" : "${EXECUTION_ROLE}",
  "volumes" : [ ],
  "requiresAttributes" : [ ],
  "placementConstraints" : [ ],
  "compatibilities" : [ ],
  "requiresCompatibilities" : [ ],
  "cpu" : "512",
  "memory" : "1024",
  "inferenceAccelerators" : [ ]
}
```

Using `${CONTAINER_NAME}_Sidecar` isn't something you would do in production. It's just a simple way to try out the feature yourself. We include an advanced example later in this topic. Once the Service is deployed, you will see both containers in the ECS console:

![](./static/deploy-multiple-containers-in-a-single-ecs-workflow-03.png)

You can use specs that deploy the Main Container and sidecars.
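Since the Advanced Settings payload is plain JSON, a spec like the one above can also be assembled or sanity-checked programmatically before you paste it in. A minimal sketch, assuming nothing about Harness itself; `add_sidecar` is a hypothetical helper:

```python
import json

def add_sidecar(task_def: dict, name: str, image: str, memory: int, container_port: int) -> dict:
    """Append a sidecar container definition to an ECS task definition dict."""
    task_def["containerDefinitions"].append({
        "name": name,
        "image": image,
        "memory": memory,
        "portMappings": [{"containerPort": container_port, "protocol": "tcp"}],
    })
    return task_def

# The Main Container keeps the mandatory Harness placeholders.
task_def = {
    "containerDefinitions": [
        {"name": "${CONTAINER_NAME}", "image": "${DOCKER_IMAGE_NAME}", "memory": 1000}
    ]
}
add_sidecar(task_def, "${CONTAINER_NAME}_Sidecar", "${DOCKER_IMAGE_NAME}", 512, 85)
print(json.dumps(task_def, indent=2))
```

The printed JSON can then be pasted into **Advanced Settings** alongside the task-level fields (`executionRoleArn`, `cpu`, `memory`, and so on).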
Here's an advanced example that deploys the Main Container, Nginx, and Tomcat:

```
{
  "containerDefinitions" : [ {
    "name" : "${CONTAINER_NAME}",
    "image" : "${DOCKER_IMAGE_NAME}",
    "memory" : 1024,
    "links" : [ ],
    "portMappings" : [ ],
    "entryPoint" : [ ],
    "command" : [ ],
    "environment" : [ ],
    "mountPoints" : [ ],
    "volumesFrom" : [ ],
    "dependsOn" : [ ],
    "dnsServers" : [ ],
    "dnsSearchDomains" : [ ],
    "extraHosts" : [ ],
    "dockerSecurityOptions" : [ ],
    "ulimits" : [ ],
    "systemControls" : [ ],
    "resourceRequirements" : [ ]
  },
  {
    "name": "nginx",
    "image": "nginx:latest",
    "memory": 256,
    "essential": true,
    "portMappings": [ {
      "containerPort": 8181,
      "protocol": "tcp"
    } ]
  },
  {
    "essential": true,
    "name": "tomcat-webserver",
    "image": "tomcat",
    "memory": 512,
    "portMappings": [ {
      "hostPort": 91,
      "containerPort": 9191,
      "protocol": "tcp"
    } ]
  } ],
  "executionRoleArn" : "${EXECUTION_ROLE}",
  "volumes" : [ ],
  "requiresAttributes" : [ ],
  "placementConstraints" : [ ],
  "compatibilities" : [ ],
  "requiresCompatibilities" : [ ],
  "inferenceAccelerators" : [ ],
  "cpu": 1024
}
```

You can see that this example uses the [default public Docker Hub setting for the container image](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definition_image). You can also use a local repo.

Harness does not pull these images. They are pulled by ECS. You can see all three containers in the ECS console:

![](./static/deploy-multiple-containers-in-a-single-ecs-workflow-04.png)

That's all you have to do to deploy sidecar containers. Use the Harness Service with a Harness Basic, Canary, or Blue/Green ECS Workflow and all of the containers are deployed.
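As a rough illustration of the placeholder substitution described earlier (producing names like `harness_todolist-sample_9` and `harness/todolist-sample:9`), the replacement could be modeled as below. The `resolve_main_container` helper is hypothetical; Harness performs this internally at deployment runtime:

```python
def resolve_main_container(spec: str, image_name: str, tag: str, revision: int) -> str:
    """Illustrative stand-in for Harness's runtime placeholder substitution."""
    # Container name is derived from the image name plus a numeric suffix.
    container_name = f"{image_name.replace('/', '_').replace(':', '_')}_{revision}"
    return (spec
            .replace("${CONTAINER_NAME}", container_name)
            .replace("${DOCKER_IMAGE_NAME}", f"{image_name}:{tag}"))

spec = '"name" : "${CONTAINER_NAME}", "image" : "${DOCKER_IMAGE_NAME}"'
print(resolve_main_container(spec, "harness/todolist-sample", "9", 9))
# "name" : "harness_todolist-sample_9", "image" : "harness/todolist-sample:9"
```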
### Step 2: Identify Main Container by Tag in ECS Console

In a Harness ECS deployment, the container that points to the main artifact being deployed is called the Main Container.

In the container spec, the `${CONTAINER_NAME}` and `${DOCKER_IMAGE_NAME}` placeholders identify the Main Container. At deployment runtime, the placeholders are replaced with names generated using the artifact name.

When you add verification steps to the **Verify** section of your Harness Workflow, Harness performs verification on your Main Container only. The sidecar containers are not verified using Harness [Continuous Verification](https://docs.harness.io/article/ina58fap5y-what-is-cv).

The Main Container is identified by Harness using the AWS Tag **HARNESS\_DEPLOYED\_MAIN\_CONTAINER**.

The `key:value` for the tag is `HARNESS_DEPLOYED_MAIN_CONTAINER:`.

You can see this tag in the ECS console.

Locate the ECS service you deployed, and then click its **Task definition**.

![](./static/deploy-multiple-containers-in-a-single-ecs-workflow-05.png)

In the task definition, in the **Builder** tab, in **Container Definitions**, you can see the containers that were deployed:

![](./static/deploy-multiple-containers-in-a-single-ecs-workflow-06.png)

In the task definition **Tags** tab, you can see the Main Container tag that displays the name of the Main Container:

![](./static/deploy-multiple-containers-in-a-single-ecs-workflow-07.png)

Do not edit or remove this tag. You will see the same Main Container tag when you deploy multiple sidecar containers:

![](./static/deploy-multiple-containers-in-a-single-ecs-workflow-08.png)

### Option: Using Workflow Variables in Container Specs

In the container spec, the `${CONTAINER_NAME}` and `${DOCKER_IMAGE_NAME}` placeholders identify the Main Container. These placeholders must be present.
+ +For the sidecar specs, you can hardcode the name and image values, or you can use Harness [Workflow variable expressions](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). + +When you use Workflow variable expressions, you provide the values for the spec when the Workflow is deployed. + +This is one way to template the ECS sidecar container specs. + +Let's look at a Harness ECS Workflow with Workflow variables for two sidecar containers: + +![](./static/deploy-multiple-containers-in-a-single-ecs-workflow-09.png) + +Now let's look at how these are used in the sidecar container specs in a Harness Service: + + +``` +... +{ + "name": "${workflow.variables.Sidecar1Name}", + "image": "${workflow.variables.Sidecar1Image}", + "memory": 256, + "essential": true, + "portMappings": [ + { + "containerPort": 8181, + "protocol": "tcp" + } + ] +}, +{ + "essential": true, + "name": "${workflow.variables.Sidecar2Name}", + "image": "${workflow.variables.Sidecar2Image}", + "memory": 512, + "portMappings": [ + { + "hostPort": 91, + "containerPort": 9191, + "protocol": "tcp" + } + ] +} +... +``` +When the Workflow is deployed, you are prompted to provide values for the Workflow variables used in the Service's container specs: + +![](./static/deploy-multiple-containers-in-a-single-ecs-workflow-10.png) + +During deployment, the values you provided for the Workflow variables replace the Workflow variable expressions in the Service's container specs. + +### Notes + +The following notes discuss important related information. + +#### Steady State + +Harness deploys and verifies steady state at the ECS task level, not the container level. + +But if the task is running in a steady state, then its containers are also in a steady state. + +If a single container fails, the task fails. 
#### Display Host and Container Information

You can use [Harness built-in variable expressions](https://docs.harness.io/article/9dvxcegm90-variables) and a [Shell Script](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) step in your Workflow to display useful information about the deployed containers and hosts.

Here is an example:

```
echo instance.hostName: ${instance.hostName}
echo instance.host.hostName: ${instance.host.hostName}
echo instance.host.ip: ${instance.host.ip}
echo instance.EcsContainerDetails.dockerId: ${instance.EcsContainerDetails.dockerId}
echo instance.EcsContainerDetails.completeDockerId: ${instance.EcsContainerDetails.completeDockerId}
echo ec2Instance.privateIpAddress: ${instance.host.ec2Instance.privateIpAddress}
```

### Next Steps

* [Harness built-in AWS ECS variable expressions](https://docs.harness.io/article/9dvxcegm90-variables#aws_ecs).

diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-blue-green-workflows.md b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-blue-green-workflows.md new file mode 100644 index 00000000000..289fdcd816c --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-blue-green-workflows.md @@ -0,0 +1,527 @@
---
title: 6 - ECS Blue/Green Workflows
description: Learn different ways to create Blue/Green ECS deployments.
sidebar_position: 700
helpdocs_topic_id: 7qtpb12dv1
helpdocs_category_id: df9vj316ec
helpdocs_is_private: false
helpdocs_is_published: true
---

This topic describes different methods for creating ECS Blue/Green Workflows.

For Canary and Basic Workflows, see [ECS Workflows](ecs-workflows.md).
### Overview

There are two types of ECS Blue/Green deployments in Harness:

* **Elastic Load Balancer (ALB and NLB)** - Using two Target Groups in the ELB, each with its own listener, traffic between the stage and production environments is swapped each time a new service is deployed and verified. Application Load Balancer (ALB) and Network Load Balancer (NLB) are supported.
* **Route 53 DNS** - Using an AWS Service Discovery namespace containing two service discovery services, and a Route 53 zone that hosts CNAME (alias) records for each service, Harness swaps traffic between the two service discovery services. The swap is achieved using [Weighted Routing](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-weighted), where Harness assigns each CNAME record a relative weight that corresponds with how much traffic to send to each resource.

There are no changes required to your Harness Service or Environment when setting up ECS Blue/Green Workflows.

In this section, we will cover the setup for both Blue/Green deployment methods.

### Review: Permissions

To create and deploy an ECS Workflow, you must belong to a Harness User Group with the following Account Permissions enabled:

* `Workflow Update`
* `Workflow Create`

See [Managing Users and Groups (RBAC)](https://docs.harness.io/article/ven0bvulsj-users-and-permissions).

### ECS Blue/Green Using ELB

With ELB configured for ECS Blue/Green deployment, you have old and new versions of your service running behind the load balancer. Your ELB uses two listeners, Prod and Stage, each forwarding to a different target group where ECS services are run. Blue/Green deployments are achieved by swapping listeners between the target groups, always pointing the Prod listener to the target group running the latest version.

In Harness, you identify which listeners are the Prod and Stage listeners.
When Harness deploys, it uses the target group for the Stage listener (for example, **target1**) to test the deployment, verifies the deployment, and then swaps the Prod listener to use that target group. Next, the Stage listener now uses the old target group (**target2**). + +When a new version of the service is deployed, the Stage listener and its target group (**target2**) are first used, then, after verification, the swap happens and the Prod listener forwards to **target2** and the Stage listener now forwards to **target1**. + +To use ELB for Blue/Green deployment, you must have the following set up in AWS: + +* **ELB** **Load Balancer** - An application load balancer must be set up in your AWS VPC. The VPC used by the ELB must have two subnets, each in a separate availability zone, which the ELB will use. +* **Two Listeners** - A listener checks for connection requests from clients, using the protocol and port that you configure, and forwards requests to one or more target groups, based on the rules that you define. Your load balancer must have two listeners set up: One listener for the production traffic (Prod) that points to one target group, and one listener for the stage traffic (Stage) that points to another target group. + +You do not need to register instances for the target groups. Harness will perform that step during deployment. + +For more information on ELB Application Load Balancers, see [What Is an Application Load Balancer?](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) from AWS. + +Application Load Balancer (ALB) and Network Load Balancer (NLB) are supported. 
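The swap described above can be modeled in a few lines. This is a toy model of the listener state, not Harness internals; the target-group names follow the doc's **target1**/**target2** example:

```python
# Toy model of the ELB listener state Harness manages during Blue/Green.
# Initially the Prod listener forwards to target2 and the Stage listener to target1.
listeners = {"Prod": "target2", "Stage": "target1"}

def deploy(listeners: dict) -> dict:
    """Deploy to the Stage target group, verify, then swap Prod and Stage."""
    staged = listeners["Stage"]  # new version is deployed and verified here
    return {"Prod": staged, "Stage": listeners["Prod"]}

listeners = deploy(listeners)
print(listeners)  # {'Prod': 'target1', 'Stage': 'target2'}
listeners = deploy(listeners)
print(listeners)  # {'Prod': 'target2', 'Stage': 'target1'}
```

Each deployment flips the two target groups, so the Prod listener always ends up pointing at the group running the latest verified version.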
+ +#### Ports Used in Blue/Green Using ELB + +There are three places where ports are configured in this deployment: + +* Harness ECS Service **Container Specification** - You will specify ports in the **Port Mappings** in the Container Specification.![](./static/ecs-blue-green-workflows-50.png) + +The port number used here must also be used in the ELB Target Groups you use for Blue/Green. +* **Target Group** - You will create two target groups, and Harness will swap them to perform Blue/Green. When you create a target group, you will specify the same port number as the **Port Mappings** in the Container Specification in Service:![](./static/ecs-blue-green-workflows-51.png) + +Both target groups must use the same port number, which is also the same number as the **Port Mappings** in the Container Specification in Service. +* **ELB Listener** - In your ELB, you create a listener for each target group. Listeners also use port numbers, but these are simply entry points for the ELB. For example, one listener uses port 80, and the other listener uses 8080. + +If the port number used in the **Port Mappings** in the Container Specification in Service does not match the port number used in the target groups, you will see this error: + + +``` +Error: No container definition has port mapping that matches the target port: 80 for target group: +arn:aws:elasticloadbalancing:us-west-1:4xxxxxxx5317:targetgroup/target1/ac96xxxxxx1d16 +``` +Simply correct the port numbers and rerun the deployment. + +#### Set Up AWS for Blue/Green Using ELB + +To set up AWS for Blue/Green using ELB and Harness, do the following: + +1. Ensure you have a Harness Delegate installed on an instance in the same VPC where your ECS cluster and load balancer are installed. +2. In the AWS EC2 console, click **Target Groups**.![](./static/ecs-blue-green-workflows-52.png) +3. In Target Groups, click **Create target group**. +4. Give the target group a name, such as **target1**, and port **8080**. +5. 
Select the VPC where your ECS cluster instances will be hosted, and click **Create**.
6. Create a second target group using a new name, such as **target2**, use the same port number, **8080**, and the same VPC as the first target.

   It is important that you use the same port numbers for both target groups. When you are done, the target configuration will look something like this:

   ![](./static/ecs-blue-green-workflows-53.png)

   Now that your targets are created, you can create the load balancer that will switch between the targets.

7. Create an Application Load Balancer. In the EC2 Console, click **Load Balancers**.

   ![](./static/ecs-blue-green-workflows-54.png)

8. Click **Create Load Balancer**, and then under **Application Load Balancer**, click **Create**.

   ![](./static/ecs-blue-green-workflows-55.png)

You do not need to add listeners at this point. We will do that after the load balancer is created.

Ensure that the VPC you select for the load balancer has two subnets, each in a separate availability zone, like the following:

![](./static/ecs-blue-green-workflows-56.png)

Once your load balancer is created, you can add its Prod and Stage listeners.

1. In your load balancer, click its **Listeners** tab to add the targets you created as listeners.
   ![](./static/ecs-blue-green-workflows-57.png)
2. Click **Add Listener**.
3. In the **Protocol : port** section, enter the port number for your first target, port **80**. Listeners do not need to use the same port numbers as their target groups.
4. In **Default action**, click **Add action**, and select **Forward to**, and then select your target.
   ![](./static/ecs-blue-green-workflows-58.png)
5. Click **Save**.
6. Repeat this process to add a listener using the other target you created, using a port number such as **8080**. When you are done you will have two listeners:

![](./static/ecs-blue-green-workflows-59.png)

Your AWS ELB setup is complete.
Now you can set up your Harness Workflow.

#### Blue/Green Workflow with ELB

To set up a Blue/Green deployment using ELB in Harness, do the following:

1. In Harness, in your Application, click **Workflows**, and then click **Add Workflow**. The **Workflow** dialog appears.
2. Enter the following options to select a Blue/Green Deployment using ELB:
   * **Name** - Enter the name of the Workflow, such as **ECS BG ELB**.
   * **Description** - Enter a description to provide context for the Workflow.
   * **Workflow Type** - Select **Blue/Green Deployment**.
   * **Environment** - Select the Environment where the ECS Service Infrastructure you want to use is configured.
   * **Service** - Select the ECS Service you created for your Application.
   * **Infrastructure Definition** - Select the Infrastructure Definition where you want to deploy your ECS Service.
3. When you select the Infrastructure Definition, the **Blue Green Switch** field appears.
4. In **Blue Green Switch**, select **Elastic Load Balancer (ELB)**. When you are done the dialog will look something like this:
   ![](./static/ecs-blue-green-workflows-60.png)
5. Click **SUBMIT**. The ECS Blue Green Workflow appears. The following image shows the default steps.

![](./static/ecs-blue-green-workflows-61.png)

The following section describes how to configure the default steps.

#### ECS Blue Green Load Balancer Setup

The ECS Blue Green Load Balancer Setup step is where you specify the load balancer and target listeners for Harness to use when deploying.

Click **ECS Blue Green Load Balancer Setup** to open it. It has the following fields:

* **ECS Service Name** - Enter a name for the ECS service that will be deployed. You will see this name in your cluster once the service is deployed.
* **Desired Instance Count** - Specify the number of instances to deploy. The first time you run this Workflow, there are no instances of the service running.
You can set a number in **Max Instances** or **Fixed Instances Count**. After this Workflow has been deployed successfully, you can set a number in **Fixed Instances Count** only. +* **Elastic Load Balancer** - Click here and select the AWS load balancer you added. Harness uses the Delegate to locate the load balancers and list them in Elastic Load Balancer. If you do not see your load balancer, ensure that the Delegate is in the same VPC. Once the load balancer is selected, Harness will populate the Prod and Stage Listener drop-downs. +* **Prod Listener** - Select the ELB listener that you want to use as the Prod Listener. +* **Stage Listener** - Select the ELB listener that you want to use as the Stage Listener. +* **Listener Rules** — If you are using [Listener Rules](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#listener-rules) in your target groups, you can select them in **Production Listener Rule ARN** and **Stage Listener Rule ARN**. + + If you do not select a listener rule, Harness uses the Default rule. You do not need to select the Default rule. + + Default rules don't have [conditions](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#rule-condition-types), but other rules do. If you select other rules, ensure the traffic that will use the rules matches the conditions you have set in the rules. + + For example, if you have a path condition on a rule to enable [path-based routing](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/tutorial-load-balancer-routing.html), ensure that traffic uses that path. + +The following image shows how the AWS load balancer and listeners map to the ECS Blue Green Load Balancer Setup settings: + +![](./static/ecs-blue-green-workflows-62.png) + +* **IAM Role** - You can leave this field blank as this setting isn't often necessary with Blue/Green ECS deployments. 
You can select the IAM role to use when using the ELB. The role must have the [AmazonEC2ContainerServiceRole](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_IAM_role.html) policy. +* **Target Container Name** and **Target Port** - You can leave these fields blank. They are used if the container specification has multiple container definitions, which is not common. When you deploy your ECS service with Harness, Harness uses the container name and port from the **Service Specification** in the Harness Service. If you choose to use these fields, note that as an ECS requirement Target Container cannot be empty if Target Port is defined. +* **Service Steady State Wait Timeout** - Enter how many minutes Harness should wait for the ECS service to reach steady state. You cannot use Harness variable expressions in this setting. They are supported in Basic and Canary Workflow types, when using Replica Scheduling. +* **AWS Auto Scalar Configuration** - For more information, see [AWS Auto Scaling with ECS](ecs-workflows.md#aws-auto-scaling-with-ecs). + +When you are finished, click **SUBMIT**. You now have defined the load balancer and target listeners for Harness to use when executing the Blue/Green deployment. + +Here's what the **ECS Blue Green Load Balancer Setup** step looks like in a Deployment of the Workflow. 
![](./static/ecs-blue-green-workflows-63.png)

Here's the output from the step, where the old version of the service is deleted, and the load balancer is set up to use the new version via target group **target1**:

```
Cluster Name: docs-ecs
Docker Image Name: harness/todolist-sample:4
Container Name: harness_todolist-sample_4
Creating task definition ecs__blue__green__elb with container image harness/todolist-sample:4
**Deleting Old Service** {Green Version}: ecs__blue__green__elb__15
Deletion successful
Setting load balancer to service
Creating ECS service ecs__blue__green__elb__18 in cluster docs-ecs
Checking for Auto-Scalar config for existing services
No Auto-scalar config found for existing services
Load Balancer Name: docs-example
Target Group ARN: arn:aws:elasticloadbalancing:us-west-1:44xxxxxxx17:targetgroup/**target1**/6998xxxxxfbfe
```

#### Upgrade Containers

The Upgrade Containers step adds new ECS service instances.

![](./static/ecs-blue-green-workflows-64.png)

In **Desired Instances**, set the number or percentage of ECS service instances to use for this stage.

The value in **Desired Instances** relates to the number of ECS service instances set in the **Setup Load Balancer** dialog. For example, if you entered 2 as the **Desired Instance Count** in **Setup Load Balancer** and then enter 50 Percent in **Upgrade Containers**, that means Harness will deploy 1 ECS service instance.

**Use Expressions:** You can use [Harness Service, Environment Override, and Workflow](https://docs.harness.io/article/9dvxcegm90-variables) variable expressions in **Desired Instances** by selecting **Use Expression** and then entering the expression, like `${workflow.variables.DesiredInstances}`. When you run the Workflow, you can provide a value for the variable.

Here's what the **Upgrade Containers** step looks like in a Deployment of the Workflow.
+ +![](./static/ecs-blue-green-workflows-65.png) + +Here's the output from the step, where the desired count is updated to 2, and 2 targets are registered in **target1**. + + +``` +Resize service [ecs__blue__green__elb__18] in cluster [docs-ecs] from 0 to 2 instances +Waiting for service: ecs__blue__green__elb__18 to reflect updated desired count: 2 +Current service desired count return from aws for Service: ecs__blue__green__elb__18 is: 2 +Service update request successfully submitted. +Waiting for pending tasks to finish. 0/2 running ... +Waiting for pending tasks to finish. 0/2 running ... +AWS Event: (service ecs__blue__green__elb__18) has reached a steady state. +Waiting for pending tasks to finish. 2/2 running ... +AWS Event: (service ecs__blue__green__elb__18) **has started 2 tasks**: + (task 897f0145edb440daae1952e7a9f6d3f6) (task a97da5fbda594f6aa8bdf82296572bf1). +Waiting for service to be in steady state... +AWS Event: (service ecs__blue__green__elb__18) **registered 2 targets** + in (target-group arn:aws:elasticloadbalancing:us-west-1:448640225317:targetgroup/**target1**/6998b12a548efbfe) +AWS Event: (service ecs__blue__green__elb__18) has reached a steady state. +Service has reached a steady state +No Autoscalar config provided. + +Container IDs: + 10.0.0.132 (new) + 10.0.0.132 (new) + +Completed operation +---------- + +Service [ecs__blue__green__elb__17] in cluster [docs-ecs] stays at 2 instances +No Autoscalar config provided. +Completed operation +---------- +``` +#### Upgrade Containers and Rollback Containers Steps are Dependent + +In order for rollback to add ECS Auto Scaling to the previous, successful service, you must have both the **Upgrade Containers** and **Rollback Containers** steps in the same Phase. + +![](./static/ecs-blue-green-workflows-66.png) + +Since ECS Auto Scaling is added by the **Upgrade Containers** step, if you delete **Upgrade Containers**, then **Rollback Containers** has no ECS Auto Scaling to roll back to. 
+ +If you want to remove ECS Auto Scaling from a Phase, delete both the **Upgrade Containers** and **Rollback Containers** steps. The Phase will no longer perform ECS Auto Scaling during deployment or rollback. + +#### Swap Target Groups + +The **Swap Target Groups** step performs the Blue/Green route swap once the deployment is verified. That is why **Swap Target Groups** comes after the **Verify Service** section in the Workflow. + +![](./static/ecs-blue-green-workflows-67.png) + +When you deploy, Harness will use the target group for the **Stage Listener** in the **Setup Load Balancer** step for deployment. After verifying the success of the deployment, the **Swap Target Groups** step simply swaps the target groups between the listeners. Now, the target group with the latest version receives production traffic. The target group with the old version receives the stage traffic. + +The following image shows two ECS deployments. In the first deployment, the service uses the **target1** target group, and in the second deployment, the service uses the **target2** target group. + +![](./static/ecs-blue-green-workflows-68.png) + +**Downsize Older Service:** choose whether to downsize the previous version of the service. + +If you enable this option, the previous service is downsized to 0. The service is downsized, but not deleted. If the older service needs to be brought back up again, it is still available. + +**Delay:** use this setting to reduce incidents where non-idle connections are sent to the old service before ELB terminates the connection. This helps you ensure that all traffic has migrated to the new service before Harness begins shutting down the old service. + +Currently, the **Delay** feature is behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Feature Flags can only be removed for Harness Professional and Essentials editions. 
Once the feature is released to a general audience, it's available for Trial and Community Editions. + +See [New features added to Harness](https://changelog.harness.io/?categories=fix,improvement,new) and [Features behind Feature Flags](https://changelog.harness.io/?categories=early-access) (Early Access) for Feature Flag information. + +Here's what the **Swap Target Groups** step looks like in a Deployment of the Workflow. + +![](./static/ecs-blue-green-workflows-69.png) + +Here's the output from the step, where the Prod listener is now pointed to **target1** and the Stage listener is pointed to **target2**. + + +``` +Updating ELB **Prod Listener** to Forward requests to Target group associated with new Service, TargetGroup: + arn:aws:elasticloadbalancing:us-west-1:44xxxx17:targetgroup/**target1**/69xxxxxxxefbfe +Updating ELB **Stage Listener** to Forward requests to Target group associated with new Service, TargetGroup: + arn:aws:elasticloadbalancing:us-west-1:44xxxx17:targetgroup/**target2**/04xxxxxxxda71e3 +Successfully update Prod and Stage Listeners +Updating service: [ecs__blue__green__elb__18] with tag: [BG_VERSION:BLUE] +Tag update successful +Updating service: [ecs__blue__green__elb__17] with tag: [BG_VERSION:GREEN] +Tag update successful +Downsizing Green Service: ecs__blue__green__elb__17 +Waiting for service: ecs__blue__green__elb__17 to reflect updated desired count: 0 +Current service desired count return from aws for Service: ecs__blue__green__elb__17 is: 0 +Waiting: [30] seconds for the downsize to complete ECS services to synchronize +``` +You can also see the service count for the old version downsized to 0. 
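As the log above shows, the route swap itself is just an exchange of the two listeners' target groups. The following Python sketch models that exchange; it is illustrative only, and the listener names and target group values are assumptions, not Harness code:

```python
# Illustrative model of the Swap Target Groups step (not Harness code).
# Before the swap, the Stage listener points at the target group holding
# the new service version; the Prod listener still points at the old one.
def swap_target_groups(listeners):
    """Return a copy of the listener map with Prod and Stage target groups exchanged."""
    swapped = dict(listeners)
    swapped["Prod"], swapped["Stage"] = listeners["Stage"], listeners["Prod"]
    return swapped

before = {"Prod": "target2", "Stage": "target1"}  # target1 holds the new version
after = swap_target_groups(before)
print(after)  # → {'Prod': 'target1', 'Stage': 'target2'}
```

After the swap, production traffic reaches the new version through **target1**, while the old version stays reachable on the stage listener until it is downsized.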
+ +If you look at the services in the AWS ECS console, you can see the `BLUE` and `GREEN` tags added to them to indicate their status using the `BG_VERSION` key: + +![](./static/ecs-blue-green-workflows-70.png) + +### ECS Blue/Green Using DNS + +Using AWS Route 53 DNS, you can swap production traffic from an older version of a service to the newer version of the service. In this architecture, both services (Blue and Green) have a Service Discovery Service associated with them. This associates the services with URLs in a hosted DNS zone that was created when the namespace of the Service Discovery Services was created. + +To use DNS for Blue/Green deployment, you must have the following set up in AWS: + +* **Service Discovery namespace** - A Service Discovery namespace containing two Service Discovery services. This is added in AWS Cloud Map. +* **Route 53 zone** - An Amazon Route 53 zone to host the CNAME (alias) records Harness will register and use to swap traffic between the two Service Discovery services. When you create a namespace in AWS Cloud Map, a zone is created automatically; however, Harness cannot modify this zone due to AWS restrictions. Consequently, you need to create a separate DNS zone where Harness can register CNAME records. + +Harness will register the CNAME records in the zone when you first deploy your Workflow. You simply provide Harness with the name to use in the CNAME records. + +Let's look at an example AWS setup. Here is the namespace **bg-namespace** created in AWS Cloud Map: + +![](./static/ecs-blue-green-workflows-71.png) + +When you create the namespace, AWS creates a Route 53 DNS zone for the namespace automatically, containing the NS and SOA records for the namespace. 
In our example, the namespace is **bg-namespace**: + +![](./static/ecs-blue-green-workflows-72.png) + +Harness is not able to modify this zone due to AWS restrictions, and so you need to add another zone where Harness can register CNAME records and manage their weights for routing. In our example, we will create another zone named **bg-namespace\_upper**: + +![](./static/ecs-blue-green-workflows-73.png) + +When you are done, Route 53 will have two zones: the zone automatically created by AWS Cloud Map (**bg-namespace**) and the zone you created manually (**bg-namespace\_upper**): + +![](./static/ecs-blue-green-workflows-74.png) + +Next, you need to create the two services in the namespace. You can do this via AWS CLI or the AWS Cloud Map console. Below are examples using the AWS Cloud Map console. + +#### Set up AWS for Blue/Green Using DNS + +To create the two new services, in AWS Cloud Map, in your namespace, click **Create service**. + +1. For **Service name**, enter a name such as **service1**. +2. In **DNS configuration**, select **Weighted routing**, and in **Record type**, select **SRV**. +3. Click **Create service**. The new service is added to the namespace. +4. Repeat these steps to add a second service, **service2**. When you are finished, the AWS Cloud Map page for the namespace will look something like this: + +![](./static/ecs-blue-green-workflows-75.png) + +Now that your AWS setup is complete, you can create your Blue/Green Deployment Workflow in Harness. + +#### Blue/Green Workflow with DNS + +To set up a Blue/Green Workflow using DNS in Harness, do the following: + +1. In Harness, in your Application, click **Workflows**, and then click **Add Workflow**. The **Workflow** dialog appears. +2. Enter the following options to select a Blue/Green Deployment using DNS: + * **Name** - Enter the name of the Workflow, such as **ECS BG DNS**. + * **Description** - Enter a description to provide context for the Workflow. 
+ * **Workflow Type** - Select **Blue/Green Deployment**. + * **Environment** - Select the Environment where the ECS Service Infrastructure you want to use is configured. + * **Service** - Select the ECS Service you created for your Application. + * **Infrastructure Definition** - Select the Infrastructure Definition where you want to deploy your ECS Service. +3. When you select the Infrastructure Definition, the **Blue Green Switch** field appears. +4. In **Blue Green Switch**, select **Domain Name System (DNS)**. When you are done, the dialog will look something like this: ![](./static/ecs-blue-green-workflows-76.png) +5. Click **SUBMIT**. The ECS Blue Green Workflow appears. The following image shows the default steps. + +![](./static/ecs-blue-green-workflows-77.png) + +The following sections describe the default steps. + +#### ECS Blue Green Route 53 Setup + +The ECS Blue Green Route 53 Setup step is where you will specify the namespace, services, and hosted zone information needed by Harness to register the CNAME records for your services. + +![](./static/ecs-blue-green-workflows-78.png) + +The **ECS Blue Green Route 53 Setup** step has the following settings. + +* **ECS Service Name** - Enter a name for the ECS Service that will be deployed to AWS, or use the default values provided by Harness. +* **Desired Instance Count** - Specify the number of service instances to deploy, using **Same as already running instances** or **Fixed**. +* **Service Discovery** - This section is where you will specify the AWS namespace, service, and zone information. +* **Specification - 1** and **Specification - 2** - These are the JSON specifications for the services you created in your namespace. You will need to enter the JSON description for each service, like this: + + +``` +{ + "registryArn": "arn:aws:servicediscovery:us-east-1:52516162:service/srv-xxxxxxxxx", + "containerName": "${CONTAINER_NAME}", + "containerPort": 8080 +} +``` +You can use `containerPort` or `port`. 
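If you template these specifications, a small script can fill in the values and produce the JSON for you. This is a hedged sketch only; the ARN, container name, and helper function are hypothetical, not part of Harness:

```python
import json

def build_spec(registry_arn, container_name, port, use_container_port=True):
    """Build a Service Discovery specification like the JSON shown above."""
    spec = {"registryArn": registry_arn, "containerName": container_name}
    # ECS accepts either containerPort or port in a service registry entry.
    spec["containerPort" if use_container_port else "port"] = port
    return json.dumps(spec, indent=2)

# Hypothetical values for illustration; copy the real Service ID from AWS Cloud Map.
print(build_spec(
    "arn:aws:servicediscovery:us-east-1:123456789012:service/srv-example",
    "my-container",
    8080,
))
```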
+ +To obtain the `registryArn`, look at the **Service ID** in AWS Cloud Map: + +![](./static/ecs-blue-green-workflows-79.png) + +Copy the **Service ID** for each service and enter them into the JSON for **Specification - 1** and **Specification - 2**, like this: + +![](./static/ecs-blue-green-workflows-80.png) + +* **Alias (canonical) Name** - The name for the alias that you want redirected using the CNAME records. Typically, this is your app name, like **myapp**. You must also include the zone name. In our example, we are using **bg-namespace\_upper** as the zone name, and so, in **Alias (canonical) Name**, we enter `myapp.bg-namespace_upper.`. Note the dot at the end of the name entered. +* **Zone Hosting Alias** - The name of the zone hosting the CNAME record. + +The following image shows the **Alias (canonical) Name** and **Zone Hosting Alias** settings and their corresponding DNS records. In this example, the CNAME records are already registered, but when you first deploy this will not be the case and Harness will register the records. + +![](./static/ecs-blue-green-workflows-81.png) + +* **IAM Role** - You can leave this field blank as this setting isn't often necessary with Blue/Green ECS deployments. You can select the IAM role to use when using the ELB. The role must have the [AmazonEC2ContainerServiceRole](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_IAM_role.html) policy. +* **Target Container Name** and **Target Port** - You can leave these fields blank. They are used if the container specification has multiple container definitions, which is not common. When you deploy your ECS service with Harness, Harness uses the container name and port from the **Service Specification** in the Harness Service. If you choose to use these fields, note that, as an ECS requirement, Target Container cannot be empty if Target Port is defined. 
+* **AWS Auto Scalar Configuration** - For more information, see [AWS Auto Scaling with ECS](ecs-workflows.md#aws-auto-scaling-with-ecs). + +Here's what the **Setup Route 53** step looks like in a Deployment of the Workflow. + +![](./static/ecs-blue-green-workflows-82.png) + +Here's the output from the step, where the current service ID is listed, and the service ID you provided in the **Specification - 1** field is used to create the new service. + + +``` +Cluster Name: docs-ecs +Docker Image Name: harness/todolist-sample:4 +Container Name: harness_todolist-sample_4 +Creating task definition ecs_bg_dns with container image harness/todolist-sample:4 +Current ECS service uses: [arn:aws:servicediscovery:us-west-1:448640225317:service/**srv-c53l4mh5xym45wtm**] +Using: [arn:aws:servicediscovery:us-west-1:448640225317:service/**srv-ytqooonrmzj63r76**] for new service. +Creating ECS service ecs_bg_dns__8 in cluster docs-ecs +Checking for Auto-Scalar config for existing services +No Auto-scalar config found for existing services +``` +#### Upgrade Containers + +The Upgrade Containers step adds new ECS service instances. + +![](./static/ecs-blue-green-workflows-83.png) + +In **Desired Instances**, set the number or percentage of ECS service instances to use for this stage. + +The value in **Desired Instances** relates to the number of ECS service instances set in the **Setup Route 53** dialog. For example, if you entered 2 as the **Desired Instance Count** in **Setup Route 53** and then enter 50 Percent in **Upgrade Containers**, that means Harness will deploy 1 ECS service instance. + +Here's what the **Upgrade Containers** step looks like in a Deployment of the Workflow. + +![](./static/ecs-blue-green-workflows-84.png) + +Here's the output from the step, where the service count is increased to 2. 
+ + +``` +Resize service [ecs_bg_dns__8] in cluster [docs-ecs] from 0 to 2 instances +Waiting for service: ecs_bg_dns__8 to reflect updated desired count: 2 +Current service desired count return from aws for Service: ecs_bg_dns__8 is: 2 +Service update request successfully submitted. +Waiting for pending tasks to finish. 0/2 running ... +Waiting for pending tasks to finish. 0/2 running ... +AWS Event: (service ecs_bg_dns__8) has started 2 tasks: (task b870fbd5f86342e6a2bc94600598fa25) (task 90f926fdf9c44b8ea35d0ad0474013d6). +Waiting for pending tasks to finish. 0/2 running ... +Waiting for pending tasks to finish. 2/2 running ... +Waiting for service to be in steady state... +AWS Event: (service ecs_bg_dns__8) has reached a steady state. +Service has reached a steady state +No Autoscalar config provided. + +Container IDs: + 10.0.0.132 (new) + 10.0.0.132 (new) + +Completed operation +---------- + +Service [ecs_bg_dns__7] in cluster [docs-ecs] stays at 2 instances +No Autoscalar config provided. +Completed operation +---------- + +``` +#### Upgrade Containers and Rollback Containers Steps are Dependent + +In order for rollback to add ECS Auto Scaling to the previous, successful service, you must have both the **Upgrade Containers** and **Rollback Containers** steps in the same Phase. + +![](./static/ecs-blue-green-workflows-85.png) + +Since ECS Auto Scaling is added by the **Upgrade Containers** step, if you delete **Upgrade Containers**, then **Rollback Containers** has no ECS Auto Scaling to roll back to. + +If you want to remove ECS Auto Scaling from a Phase, delete both the **Upgrade Containers** and **Rollback Containers** steps. The Phase will no longer perform ECS Auto Scaling during deployment or rollback. + +#### Change Route 53 Weights + +A weight value determines the proportion of DNS queries that Route 53 responds to using the current record. 
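Concretely, the probability that Route 53 answers a query with a given record is that record's weight divided by the sum of all weights for the name. A minimal illustration (plain Python, not AWS code; the service names are placeholders):

```python
def routing_proportions(weights):
    """Map each record name to weight / total, i.e. its share of DNS answers."""
    total = sum(weights.values())
    return {name: weight / total for name, weight in weights.items()}

# The step's defaults: 100 for the new service, 0 for the old one,
# so every DNS query resolves to the new service.
print(routing_proportions({"service2": 100, "service1": 0}))
# → {'service2': 1.0, 'service1': 0.0}
```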
The **Change Route 53 Weights** step is configured with two weights to apply to the CNAME records Harness registers. + +By default, the weights are **100** for the new service and **0** for the old service. A weight of **0** disables routing to a resource using this CNAME record. This performs a complete redirect to the new service each time a new service is deployed. + +![](./static/ecs-blue-green-workflows-86.png) + +Here's what the Change Route 53 Weights step looks like in a Deployment of the Workflow. + +![](./static/ecs-blue-green-workflows-87.png) + +The Details section displays the service names and weights. + +Here's the output from the step, where the CNAME records are registered with the zone you added, and the weights are applied. Tags for the ECS services are also added to identify that they are BLUE or GREEN. The service for the old version is downsized to 0 but not deleted. + + +``` +Upserting parent record: [myapp.bg-namespace_upper.] with CNAME records: [service2.bg-namesapce:100] and [service1.bg-namesapce:0] +Swapping ECS tags Blue and Green +Updating service: [ecs_bg_dns__8] with tag: [BG_VERSION:BLUE] +Tag update successful +Updating service: [ecs_bg_dns__7] with tag: [BG_VERSION:GREEN] +Tag update successful +Downsizing old service if needed +Downsizing Green Service: ecs_bg_dns__7 +Waiting for service: ecs_bg_dns__7 to reflect updated desired count: 0 +Current service desired count return from aws for Service: ecs_bg_dns__7 is: 0 +Waiting: [30] seconds for the downsize to complete ECS services to synchronize +``` +In the Route 53 console, you can see the result of the CNAME weights in the zone you created, in the **Weight** column. Note that the **Set ID** column also lists **Harness-Green** or **Harness-Blue**. 
+ +![](./static/ecs-blue-green-workflows-88.png) + +You can also see the Blue/Green tag in the ECS console, in the **Tags** tab for the service: + +![](./static/ecs-blue-green-workflows-89.png) + +### Rollbacks + +See [ECS Rollbacks](https://docs.harness.io/article/d7rnemtfuz-ecs-rollback). + +### Post-Production Rollback + +Harness also supports post-production rollback for cases where you want to recover from a deployment that succeeded on technical criteria, but that you want to undo for other reasons. + +See [Rollback Production Deployments](https://docs.harness.io/article/2f36rsbrve-post-deployment-rollback). + +### Next Step + +* [ECS Workflows](ecs-workflows.md) +* [ECS Setup in YAML](ecs-setup-in-yaml.md) +* [ECS Troubleshooting](ecs-troubleshooting.md) +* [Pipelines](https://docs.harness.io/article/zc1u96u6uj-pipeline-configuration) +* [Triggers](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2) + diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-connectors-and-providers-setup.md b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-connectors-and-providers-setup.md new file mode 100644 index 00000000000..8818d99b7dd --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-connectors-and-providers-setup.md @@ -0,0 +1,50 @@ +--- +title: 2 - ECS Connectors and Providers Setup +description: Describes how to connect Harness to your artifact repository and to your target AWS ECS cluster. +sidebar_position: 300 +helpdocs_topic_id: gpu36fl1y0 +helpdocs_category_id: df9vj316ec +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes how to set up Harness to connect to your artifact repository and to the target AWS ECS cluster. + +* **Artifact Repository** - To connect Harness to your artifact repository, you set up an Artifact Server in Harness. 
+* **AWS ECS Cluster** - To connect Harness to your ECS environment, you need to set up an AWS Cloud Provider in Harness. The AWS Cloud Provider you set up below uses Delegate Selectors as described in [Harness ECS Delegate](harness-ecs-delegate.md). + +If your artifacts are in AWS along with your target ECS cluster, you can simply add a Harness AWS Cloud Provider and use it for all ECS deployment connections. + +### Add an Artifact Server + +Harness integrates with many different types of repositories and artifact providers. We call these Artifact Servers, and they help you pull your artifacts into your Harness Applications. + +Add an Artifact Server for your artifact repository to your Harness account as described in [Add Artifact Servers](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server). + +Later, when you set up a Harness Service, you will use the Artifact Server to select the artifact you want to deploy: + +![](./static/ecs-connectors-and-providers-setup-00.png) + +If you are using Amazon Elastic Container Registry (ECR) for your artifacts, you can simply add an AWS Cloud Provider to manage your artifact and AWS deployment environment connections. Setting up an AWS Cloud Provider is described below. + +### Add an AWS Cloud Provider + +Harness Cloud Providers represent the infrastructure of your applications, such as your ECS cluster. In this section, we will cover how to add an AWS Cloud Provider that uses the IAM role of the Harness ECS Delegate by using the Delegate Selectors. + +Adding a Delegate Selector to your Delegate was discussed earlier in [Harness ECS Delegate](harness-ecs-delegate.md). + +1. In **Harness**, click **Setup**. +2. Click **Cloud Providers**. The **Cloud Providers** page appears. +3. Click **Add Cloud Provider**. The **Cloud Provider** dialog appears. +4. In **Type**, select **Amazon Web Services**. +5. In **Display Name**, enter a name for the Cloud Provider, such as **aws-ecs**. +6. 
Select the **Assume IAM Role on Delegate** option. +7. In **Delegate Selector**, enter the Selector you gave the ECS Delegate listed in the **Harness Installations** page. +8. Click **SUBMIT**. The Cloud Provider is added. + +For more information about setting up an AWS Cloud Provider, see [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers). + +### Next Step + +Now that you have an Artifact Server and AWS Cloud Provider, you can create your Harness Application and define your ECS service in its Harness Service: + +* [3 - ECS Services](ecs-services.md) + diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-deployments-overview.md b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-deployments-overview.md new file mode 100644 index 00000000000..d8cd7947472 --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-deployments-overview.md @@ -0,0 +1,86 @@ +--- +title: ECS Deployments Overview +description: An overview of ECS components and deployment steps. +sidebar_position: 100 +helpdocs_topic_id: 08whoizbps +helpdocs_category_id: df9vj316ec +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This guide explains how to use Amazon Elastic Container Service (ECS) with Harness. + +New to using ECS with Harness? See [AWS ECS Quickstart](https://docs.harness.io/article/j39azkrevm-aws-ecs-deployments). + +In this guide, we will set up Harness for ECS, create a Harness Application, and deploy a public Docker image from Docker Hub to an existing ECS cluster using Harness. This deployment scenario is very common, and a walkthrough of all the steps involved will help you set up this scenario in Harness for your own microservices and apps. 
+ +Walk through this guide in the following order: + +* [Component Overview](ecs-deployments-overview.md#component-overview) +* [Prerequisites](ecs-deployments-overview.md#prerequisites) +* [Deployment Overview](ecs-deployments-overview.md#deployment-overview) +* [1 - Harness ECS Delegate](harness-ecs-delegate.md) +* [2 - ECS Connectors and Providers Setup](ecs-connectors-and-providers-setup.md) +* [3 - ECS Services](ecs-services.md) +* [4 - ECS Environments](ecs-environments.md) +* [5 - ECS Basic and Canary Workflows](ecs-workflows.md) +* [6 - ECS Blue/Green Workflows](ecs-blue-green-workflows.md) +* [7 - ECS Setup in YAML](ecs-setup-in-yaml.md) +* [8 - ECS Troubleshooting](ecs-troubleshooting.md) + +### Component Overview + +The following table lists the ECS components and where they are set up in Harness, as well as the related Harness components that perform ECS deployment operations. For detailed explanations of ECS, see the [ECS Developer Guide](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) from Amazon. + + + +| **Component** | **Description** | **Harness Location** | +| --- | --- | --- | +| Harness Delegate | A software service you run in the same VPC as the ECS cluster to enable Harness to perform deployment operations. The Delegate does not need root privileges, and it only makes an outbound HTTPS connection to the Harness platform. | This guide will describe how to set up the Harness Delegate for ECS deployment. See [Harness ECS Delegate](harness-ecs-delegate.md). | +| Harness Cloud Provider | A Cloud Provider is a logical representation of your AWS infrastructure. Typically, a Cloud Provider is mapped to an AWS account, Kubernetes cluster, Google service account, Azure subscription, or a data center. | This guide will describe how to set up the AWS Cloud Provider for ECS deployment. For general information, see [ECS Connectors and Providers Setup](ecs-connectors-and-providers-setup.md).
 | +| ECS Task Definition | Describes the Docker containers to run (CPU, memory, environment variables, ports, etc.) and represents your application. | Specified in the Harness Service, in Container Specification. | +| ECS Task | Instance of a Task Definition. Multiple Tasks can be created by one Task Definition, as demand requires. | | +| ECS Service | Defines the minimum and maximum Tasks from one Task Definition to run at any given time, autoscaling, and load balancing. | This is specified in the Harness Service, in Service Specification. | +| ECS Cluster | A Cluster is a group of ECS Container Instances where you run your service tasks in order for them to be accessible. The container management service handles the cluster across one or more ECS Container Instance(s), including the scheduling, maintaining, and scaling requests to these instances. | ECS Clusters are selected in two Harness components:<br/>• The AWS Cloud Provider, via the IAM role for Delegate option.<br/>• The Harness Application Environment, where you select the AWS Cloud Provider and your ECS cluster name. | +| Launch Types | There are two types:<br/>• Fargate - Run containers without having to manage servers or clusters of Amazon EC2 instances.<br/>• EC2 - Run containers on a cluster of Amazon EC2 instances that you manage. | You specify the launch type to use when adding a Service Infrastructure to a Harness Environment. | +| Replica Scheduling Strategy | Places and maintains the desired number of tasks across your cluster. | This is specified in the Harness Service, in Service Specification. | +| Daemon Scheduling Strategy | As of July 2018, ECS has a daemon scheduling strategy that deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. With a daemon strategy, a task is deployed onto each instance in a cluster to provide common supporting functionality. | This is specified in the Harness Service, in Service Specification. | +| awsvpc Network Mode | Provides each task with its own elastic network interface. Fargate task definitions require the awsvpc network mode. | | +| Service Discovery | An ECS service can use ECS Service Discovery to manage HTTP and DNS namespaces for ECS services via the AWS Cloud Map API actions. | This is specified in the Harness Service, in Service Specification. | +| Auto Scaling | Auto Scaling adjusts the ECS desired count up or down in response to CloudWatch alarms. | This is specified in the Harness Workflow ECS Service Setup command. | + +### Prerequisites + +* One or more existing ECS clusters: + + You will need an ECS cluster to deploy your ECS services using Harness. + + If you use a Harness ECS Delegate (recommended), you will need an ECS cluster for the Delegate. The steps for setting up an ECS Delegate are in [Harness ECS Delegate](harness-ecs-delegate.md). +* If you want to run a Harness Shell Script Delegate on an EC2 instance in the same VPC as the ECS cluster, ensure it meets the Harness [Delegate Requirements](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#delegate_requirements). +* IAM Role for the Harness Cloud Provider connection to AWS. 
The policies are listed in [ECS (Existing Cluster)](https://docs.harness.io/article/whwnovprrb-infrastructure-providers#ecs_existing_cluster) and also described in this document. + +### Deployment Overview + +This guide takes you through setting up ECS Deployment using the following steps: + +1. Install and run the Harness ECS Delegate in an ECS cluster in your VPC. +2. Add an AWS Cloud Provider that uses the IAM role of the Harness ECS Delegate. You can also create a Cloud Provider that uses another AWS account with the required ECS permissions, but using the Delegate is the easiest method. +3. Create a Harness Application for ECS. +4. Add a Harness Service. We will cover the following ECS features when we add a service: + 1. Replica Strategy. + 2. Daemon Strategy. + 3. awsvpc Network Mode. + 4. Service Discovery. +5. Add an Environment and ECS Service Infrastructure. +6. Add a Workflow: + * Canary Deployment with Replica Scheduling. + * Basic Deployment with Daemon Scheduling. + * Blue/Green Workflow. +7. Deploy an ECS Workflow. + +### Rollbacks + +See [ECS Rollbacks](https://docs.harness.io/article/d7rnemtfuz-ecs-rollback). + +### Next Step + +[1 - Harness ECS Delegate](harness-ecs-delegate.md) + diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-environments.md b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-environments.md new file mode 100644 index 00000000000..68d5a41eb5d --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-environments.md @@ -0,0 +1,84 @@ +--- +title: 4 - ECS Environments +description: Create a Harness Environment to identify the target AWS VPC for your ECS deployment. +sidebar_position: 500 +helpdocs_topic_id: yasp1dt3h5 +helpdocs_category_id: df9vj316ec +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Creating a Harness Environment for ECS is a simple process. First, you specify the ECS Deployment Type. 
Then, you specify the ECS cluster where you want to deploy the ECS Task and Service that you defined in your Harness Services. + + +### Create an Environment + +To create the Harness Environment for ECS, do the following: + +1. In your Harness Application, click **Environments**. +2. Click **Add Environment**. The **Environment** dialog appears. +3. In the **Environment** dialog, enter a name (such as **Stage**), select the **Non-Production** type (you can add your production environment later), and click **Submit**. The new Environment appears. + + +### Define the Infrastructure + +Next, you define one or more Infrastructure Definitions for the Environment. + +For ECS, an Infrastructure Definition specifies the ECS cluster, launch type, and related VPC information. + +To create an Infrastructure Definition: + +1. On your Environment page, click **Add Infrastructure Definition**. + + ![](./static/ecs-environments-91.png) + + The **Infrastructure Definition** dialog appears. +2. Enter a **Name** that will identify this Infrastructure Definition when you [add it to a Workflow](ecs-workflows.md). +3. In **Cloud Provider Type**, select **Amazon Web Services**. +4. In **Deployment Type**, select **Amazon Elastic Container Service (ECS)**. This expands the **Infrastructure Definition** dialog to look something like this: +5. Select **Use Already Provisioned Infrastructure**, and follow the [Define a Provisioned Infrastructure](#define_provisioned_infrastructure) steps below. + +Harness supports Terraform, CloudFormation, and custom Shell Script provisioning. 
Learn about each option here:

* [Terraform Provisioner](../../terraform-category/terrform-provisioner.md)
* [CloudFormation Provisioner](../cloudformation-category/cloud-formation-provisioner.md)
* [Shell Script Provisioner](https://docs.harness.io/article/1m3p7phdqo-shell-script-provisioner)

If you are using a configured Harness [Infrastructure Provisioner](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner), instead select **Map Dynamically Provisioned Infrastructure**, select the Harness Infrastructure Provisioner you set up, and click **Submit**.

##### Define a Provisioned Infrastructure

Harness supports Terraform, CloudFormation, and custom Shell Script provisioning. Learn about each option here:

* [Terraform Provisioner](../../terraform-category/terrform-provisioner.md)
* [CloudFormation Provisioner](../cloudformation-category/cloud-formation-provisioner.md)
* [Shell Script Provisioner](https://docs.harness.io/article/1m3p7phdqo-shell-script-provisioner)

To fill out the **Infrastructure Definition** dialog's lower section:

1. In **Cloud Provider**, select the AWS Cloud Provider you set up for your ECS deployment.
2. In **Region**, select the AWS region where your ECS cluster is located.
3. In **Cluster Name**, select the ECS cluster where Harness will deploy the Task Definition and Service defined in the [Harness Service](ecs-services.md) you will use with this Environment.
4. In **Launch Type**, select either **Fargate Launch Type** or **EC2 Instances**. The only difference when configuring these launch types is that **Fargate Launch Type** requires that you specify the **Target Execution Role** in the next field.

   If you are using an ECS [Capacity provider strategy](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_definition_parameters.html#sd-capacityproviderstrategy), see [Capacity Providers](ecs-environments.md#capacity-providers).
5. In **VPC**, select the VPC where the ECS Cluster is located. 
6. In **Security Groups**, select the AWS security group(s) that you want to use when creating instances. (This drop-down lists Security Groups by Group ID. You can locate the Group ID in the **ECS Dashboard** under **Security Groups**.)
7. In **Subnets**, select the VPC subnet(s) where the EC2 instances will be located.
8. Select **Assign Public IP** if you want external public IP addresses assigned to the deployed container tasks.
9. Enable **Scope to Specific Services**, and use the adjacent drop-down to select the [Harness Service](ecs-services.md) you've configured for ECS.

   Scoping is a recommended step: it makes this Infrastructure Definition available to any Workflow or Phase that uses this Service.
10. Click **Submit**. The new Infrastructure Definition is added to your Environment. You will select this Environment and Infrastructure Definition when you create your Harness [Workflow](ecs-workflows.md).

#### Capacity Providers

If you are using a [Capacity provider strategy](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_definition_parameters.html#sd-capacityproviderstrategy) (that is, `capacityProviderStrategy` is used in your ECS Service definition), select one of the following launch types:

* For `FARGATE` or `FARGATE_SPOT` strategies, select **Fargate Launch Type**.
* For the Auto Scaling group strategy, select **EC2 Instances**. 
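For reference, a Capacity provider strategy appears in the ECS Service definition like the sketch below. The provider names, base, and weights are illustrative values, not settings from this guide:

```
"capacityProviderStrategy": [
  {
    "capacityProvider": "FARGATE",
    "base": 1,
    "weight": 1
  },
  {
    "capacityProvider": "FARGATE_SPOT",
    "weight": 4
  }
]
```

Because both providers in this sketch are Fargate-based, the Infrastructure Definition for such a service would use **Fargate Launch Type**.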
+ +### Next Step + +Now that you have an [ECS Service](ecs-services.md) and [Environment](ecs-environments.md) set up, you can create an ECS Workflow: + +* [5 - ECS Workflows](ecs-workflows.md) +* [6 - ECS Blue/Green Workflows](ecs-blue-green-workflows.md) + diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-services.md b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-services.md new file mode 100644 index 00000000000..8024d9f12f0 --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-services.md @@ -0,0 +1,552 @@ +--- +title: 3 - ECS Services +description: Create a Harness Service for your artifacts and ECS container and service specifications. +sidebar_position: 400 +helpdocs_topic_id: riu73ehy2m +helpdocs_category_id: df9vj316ec +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic covers setting up a Harness Application and Service for an ECS Deployment, including the ECS task and service definitions for various scenarios: + +* [Create the Harness ECS Application](ecs-services.md#create-the-harness-ecs-application) +* [Add a Harness ECS Service](ecs-services.md#add-a-harness-ecs-service) + + [Task Definition](#task_definition) + + [awsvpc Network Mode](ecs-services.md#awsvpc-network-mode) + + [Service Definition](#service_definition) + + [Replica Strategy](ecs-services.md#replica-strategy) + + [Daemon Strategy](ecs-services.md#daemon-strategy) + + [Service Discovery](ecs-services.md#service-discovery) + + [Using Private Docker Registry Authentication](ecs-services.md#using-private-docker-registry-authentication) +* [Review: Task Definitions and Amazon ECS Service Quotas](#review_task_definitions_and_amazon_ecs_service_quotas) +* [Next Step](ecs-services.md#next-step) + +### Create the Harness ECS Application + +The Harness Application represents a logical group of the ECS setup and release process, including the ECS service and task definitions, ECS 
cluster environment, and deployment workflow steps particular to each service you are deploying. For more information on Harness Applications, see [Application Checklist](https://docs.harness.io/article/bucothemly-application-configuration).

To create a new Application, do the following:

1. In **Harness**, click **Setup**, and then click **Add Application**. The **Application** dialog appears.
2. Enter the name for your application, such as **ECS Demo Application**, and click **SUBMIT**. Your new Application appears.
3. Click the Application name. The Application entities appear.

#### ECS and Infrastructure Provisioners

You can add a Harness Infrastructure Provisioner for CloudFormation or Terraform to your Harness Application and use the Infrastructure Provisioner to define the ECS infrastructure on the fly.

For more information, see [Infrastructure Provisioners Overview](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner).

### Add a Harness ECS Service

A Harness Service represents your microservice as the artifact source (for example, a Docker image), ECS task and service definitions, and any runtime variables used for deployment. You define where the artifact comes from, and you define the container and service specs for the ECS cluster. In addition, you can use configuration variables and files for the service.

Harness Services are different from ECS services. Where a Harness Service describes your microservice, an ECS service is a specified number of task definition instances run and maintained simultaneously in an Amazon ECS cluster. For a detailed description of ECS services, see [Services](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html) from AWS.

In this guide, we will cover how the following common ECS features are implemented in Harness Services:

* Replica Strategy.
* Daemon Strategy.
* awsvpc Network Mode.
* Service Discovery. 
Configurations for these features are also discussed in the Harness Environment and Workflows, later in this guide. For the Harness Service artifact example, we will use a Docker image publicly hosted on Docker Hub.

To create a Harness Service for ECS, do the following:

1. In **Harness**, click **Setup**. The list of Applications appears.
2. Click the name of the ECS Application you created. The Application appears.
3. Click **Services**. The **Services** page appears.
4. Click **Add Service**. The **Service** dialog appears.

   ![](./static/ecs-services-39.png)
5. In **Name**, enter a name for the service.
6. In **Deployment Type**, select **Amazon Elastic Container Service (ECS)**.
7. Click **SUBMIT**. The new service is listed.

   Next, we will add the artifact source for the service, a sample app publicly hosted on Docker Hub.
8. Click **Add Artifact Source**, and click **Docker Registry**. The **Artifact Source** dialog appears.

   ![](./static/ecs-services-40.png)
9. In **Source Server**, select the Harness Artifact Server for the Docker Registry. For information on setting up a Harness Artifact Server, see [Add Artifact Servers](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server).
10. In **Docker Image Name**, enter the name of the image.
11. Click **SUBMIT**. The Artifact Source is added.

Next, we will define the ECS task definition and service definition.

ECS private registry authentication for tasks using AWS Secrets Manager enables you to store your credentials securely and then reference them in your container definition. For information on using private registry authentication for tasks, see [Private Registry Authentication for Tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/private-auth.html) from AWS. 
#### Task Definition

If you are not very familiar with task and service definitions, see these examples from AWS: [Task Definitions for Amazon ECS](https://github.com/aws-samples/aws-containers-task-definitions), [Example Task Definitions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/example_task_definitions.html).

You specify the ECS Task Definition in the Harness Service.

You can specify it inline, as described below, or using remote Git files, as described in [Use Remote ECS Task and Service Definitions in Git Repos](use-ecs-task-and-service-definitions-in-git-repos.md).

To specify the ECS Task Definition, do the following:

1. In the Harness Service, in **Deployment Specification**, expand **ECS** (if necessary). The **Task Definition** appears.
2. Click **Task Definition**. The **ECS Container Command Definition** settings appear.

   ![](./static/ecs-services-41.png)

   The simplified ECS Container Command Definition settings are for EC2 ECS clusters only. For **Fargate** (or advanced EC2) clusters, click **Advanced Settings** and use the JSON, as described below. Advanced Settings is required for Fargate because you must use the `${EXECUTION_ROLE}` placeholder, described below.

   You can specify the Task Definition using the fields in the dialog, or click **Advanced Settings** to add or edit the JSON.

   ![](./static/ecs-services-42.png)

For a description of all available Task Definition parameters, see [Task Definition Parameters](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RegisterTaskDefinition.html) from AWS.

The Task Definition JSON uses the following placeholders.

| **Placeholder** | **Description** |
| --- | --- |
| `${DOCKER_IMAGE_NAME}` | Required. This placeholder is used with the image label in the JSON: `"image" : "${DOCKER_IMAGE_NAME}"`. The placeholder is replaced with the Docker image name and tag at runtime. 
| +| `${CONTAINER_NAME}` | This placeholder is used with the name label in the JSON:`"name" : "${CONTAINER_NAME}"`The `${CONTAINER_NAME}` placeholder references the Docker image you added in the Service **Artifact Source**.The placeholder is replaced with a container name based on the **Artifact Source** Docker image name at runtime.You don't have to use this placeholder. You can hardcode the image name in the Task Definition. In this case, any **Artifact Source** Docker image is ignored. | +| `${EXECUTION_ROLE}` | Required for Fargate. This placeholder is used with the `executionRoleArn` label in the JSON.`"executionRoleArn" : "${EXECUTION_ROLE}"`At deployment runtime, the `${EXECUTION_ROLE}` placeholder is replaced with the ARN of the **Target Execution Role** used by the Infrastructure Definition of the Workflow deploying this Harness Service.You can also replace the `${EXECUTION_ROLE}` placeholder with another ARN manually in the Container Definition in the Service. This will override the **Target Execution Role** used by the Infrastructure Definition.Replacing the `${EXECUTION_ROLE}` placeholder manually is usually only done when using a private repo.In most cases, you can simply leave the placeholder as is.For more information, see [Amazon ECS Task Execution IAM Role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html) from AWS. | + +If you have an existing Task Definition, you can paste it into the JSON. 
You can obtain the Task Definition from the ECS console: + +![](./static/ecs-services-43.png) + +You can also obtain the Task Definition using the AWS CLI ([describe-task-definition](https://docs.aws.amazon.com/cli/latest/reference/ecs/describe-task-definition.html)): + +`aws ecs describe-task-definition --task-definition ecsTaskDefinitionName` + +Ensure that the required placeholders `${DOCKER_IMAGE_NAME}` and `${EXECUTION_ROLE}` (for Fargate) are used.For some example Task Definitions, see [Example Task Definitions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/example_task_definitions.html) from AWS. + +Once Harness deploys the ECS application, you can see the placeholders replaced in the Task Definition JSON: + + +``` +... +     "volumesFrom": [], + +      "image": "registry.hub.docker.com/library/nginx:stable-perl",      ... + +      "name": "library_nginx_stable-perl" + +    } +``` +For Fargate, you will see the `executionRoleArn` placeholder replaced: + + +``` +{ + +  "ipcMode": null, + +  "executionRoleArn": "arn:aws:iam::4XXX0225XX7:role/ecsTaskExecutionRole", + +  "containerDefinitions": [ + +    { + +... +``` +##### Launch Types and Infrastructure Definitions + +By definition, EC2 and Fargate support different Task Definition settings. Consequently, if you add launch type-specific settings to the Task Definition in the Harness Service, you must select the corresponding **Launch Type** in the Harness Infrastructure Definition used by the Harness Workflow that deploys that Service. + +For example, the Fargate launch type Task Definition supports CPU and Memory settings: + + +``` +{ + "containerDefinitions": [ + { +... 
+ } + ], + "cpu": "256", + "executionRoleArn": "arn:aws:iam::012345678910:role/ecsTaskExecutionRole", + "family": "fargate-task-definition", + "memory": "512", + "networkMode": "awsvpc", + "runtimePlatform": { + "operatingSystemFamily": "LINUX" + }, + "requiresCompatibilities": [ + "FARGATE" + ] +} +``` +When you select the Infrastructure Definition, you must also select **Fargate Launch Type** for the **Launch Type**. + +![](./static/ecs-services-44.png) + +If you select EC2, Harness will ignore the CPU and Memory settings in your Task Definition. + +If you are not very familiar with task and service definitions, see these examples from AWS: [Task Definitions for Amazon ECS](https://github.com/aws-samples/aws-containers-task-definitions), [Example Task Definitions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/example_task_definitions.html). + +##### Tags Support + +Currently, this feature is behind the Feature Flag `ECS_REGISTER_TASK_DEFINITION_TAGS`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Harness will remove Feature Flags for Harness Professional and Essentials editions. Once the feature is released to a general audience, it's available for Trial and Community Editions.You can add ECS tags to your task definition just as you would in the AWS console or CLI. + +You can use Harness [Service](https://docs.harness.io/article/q78p7rpx9u-add-service-level-config-variables) and [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) in both keys and values. + +For example: + + +``` +... + "cpu" : "128", + "memory" : "256", + "tags" : [ { + "key": "4713abcd", + "value": "þþÿÿ" + }, + { + "key": "6422abcd", + "value": "þþÿÿ" + }, + { + "key": "7592abcd", + "value": "þþÿÿ" + }, + { + "key": "${serviceVariable.foo}", + "value": "${serviceVariable.baz}" + } +], + "inferenceAccelerators" : [ ] +} +... 
+``` +When the Harness Service is deployed and the ECS task definition is registered, you will see the tags in AWS: + +![](./static/ecs-services-45.png) + +Tags must meet the ECS requirements. See [Tag restrictions](https://docs.aws.amazon.com/AmazonECS/latest/userguide/ecs-using-tags.html#tag-restrictions) from AWS. + +#### awsvpc Network Mode + +When configuring the Task Definition via the Harness Service **Task Definition**, you can set the **awsvpc** network mode by simply adding the `networkMode` parameter. For details Network Mode, see [networkMode](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#network_mode) in the AWS docs. + +##### Example for awsvpc Network Mode + +The following example shows the `networkMode` parameter with the **awsvpc** value. + + +``` +... + +  "networkMode" : "awsvpc" + +} +``` +When you look at the Task Definition created by Harness, you can see the **awsvpc** network mode at the bottom of the definition JSON: + + +``` + ... + + "pidMode": null, + +  "requiresCompatibilities": [], + +  "networkMode": "awsvpc", + +  "cpu": null, + +  "revision": 2, + +  "status": "ACTIVE", + +  "volumes": [] + +} +``` +Task definitions that use the **awsvpc** network mode use the **AWSServiceRoleForECS** service-linked role, which is created for you automatically. For more information, see [Using Service-Linked Roles for Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using-service-linked-roles.html) from AWS. + +#### Service Definition + +You can specify the ECS service configuration in the Harness Service **Service Definition**. + +You can specify it inline, as described below, or using remote Git files, as described in [Use Remote ECS Task and Service Definitions in Git Repos](use-ecs-task-and-service-definitions-in-git-repos.md). + +To specify the service configuration, do the following: + +1. 
In the Harness Service, in **Deployment Specification**, expand **ECS** (if necessary). The **Service Definition** appears.

By default, the **Service Definition** uses a **Replica** strategy.

If you have an existing service and you want to use its JSON in **Service Definition**, you can enter the JSON in **Service Definition**. You can enter any parameter that is specified by the aws ecs [create-service](https://docs.aws.amazon.com/cli/latest/reference/ecs/create-service.html) command.

You can obtain the JSON using the AWS CLI command [describe-services](https://docs.aws.amazon.com/cli/latest/reference/ecs/describe-services.html):

`aws ecs describe-services --cluster clusterName --service ecsServiceName`

For information on all ECS Service definition parameters, see [Service Definition Parameters](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_definition_parameters.html) from AWS.

The following sections describe how to configure the Service Definition for different ECS features.

If you add networking settings ([Network configuration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_definition_parameters.html#sd-networkconfiguration)) to the specification, they will be overwritten at deployment runtime by the network settings you define for the target ECS cluster in the Harness Infrastructure Definition.

##### Tags Support

Currently, this feature is behind the Feature Flag `ECS_REGISTER_TASK_DEFINITION_TAGS`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Harness will remove Feature Flags for Harness Professional and Essentials editions. Once the feature is released to a general audience, it's available for Trial and Community Editions.

You can add ECS tags to your service definition just as you would in the AWS console or CLI. 
+ +You can use Harness [Service](https://docs.harness.io/article/q78p7rpx9u-add-service-level-config-variables) and [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) in both keys and values. + +For example: + + +``` +... +{ +"placementConstraints":[ ], +"placementStrategy":[ ], +"healthCheckGracePeriodSeconds":null, +"tags":[{ + "key": "doc", + "value": "test" + }], +"schedulingStrategy":"REPLICA", +"propagateTags": "TASK_DEFINITION" +} +... +``` +When the Harness Service is deployed and the ECS service is registered, you will see the tags in AWS: + +![](./static/ecs-services-46.png) + +Tags must meet the ECS requirements. See [Tag restrictions](https://docs.aws.amazon.com/AmazonECS/latest/userguide/ecs-using-tags.html#tag-restrictions) from AWS. + +##### Capacity Provider Strategy Support + +You can use a [Capacity provider strategy](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_definition_parameters.html#sd-capacityproviderstrategy) in your ECS Service definition via the `capacityProviderStrategy` parameter. + +Later, when you define a [Harness Infrastructure Definition](ecs-environments.md) for the deployment of this ECS Service, you will select one of the following launch types: + +* For `FARGATE` or `FARGATE_SPOT` strategies, select **Fargate Launch Type**. +* For the Auto Scaling group strategy, select `EC2 Instances`. + +See [ECS Environments](ecs-environments.md). + +#### Replica Strategy + +You specify the Replica strategy using the `schedulingStrategy` parameter. By default, when you create a Harness Service using the Docker Image type, the **Service Definition** will generate the JSON for the Replica strategy. There are no changes that you need to make. 
##### Example Service Definition for Replica Strategy

The following example is the default JSON generated for the **Service Definition**, setting the scheduling strategy to Replica:

```
{
"placementConstraints":[ ],
"placementStrategy":[ ],
"healthCheckGracePeriodSeconds":null,
"schedulingStrategy":"REPLICA"
}
```

#### Daemon Strategy

You specify the Daemon strategy using the `schedulingStrategy` parameter. By default, when you create a Harness Service using the Docker Image type, the Service Definition will generate the JSON for the Replica strategy. To set a Daemon strategy, you simply need to change the `schedulingStrategy` parameter to **DAEMON**.

##### Example Service Definition for Daemon Strategy

Here is an example of how to specify the Daemon scheduling strategy in **Service Definition**:

```
{
"placementConstraints":[ ],
"placementStrategy":[ ],
"healthCheckGracePeriodSeconds":null,
"schedulingStrategy":"DAEMON"
}
```

#### Service Discovery

Harness does not create an ECS Service Discovery Service, but Harness registers the ECS services it creates with the Service Discovery Service.

If you have configured Service Discovery for an ECS service, Harness can deploy to that service, registering its DNS SRV records as needed. During rollback, or if an ECS task fails, ECS manages the DNS resolution, replacing A records, etc.

For a detailed description of Service Discovery concepts, see [Service Discovery](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-discovery.html) from AWS. If you are new to Service Discovery, see [Tutorial: Creating a Service Using Service Discovery](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service-discovery.html) from AWS. 
+ +Here is what the ECS Service Discovery configuration looks like in AWS: + +![](./static/ecs-services-47.png) + +When you create the Service Discovery Service in ECS, you will specify a namespace and ECS will generate the DNS records (SRV and A records) for the ECS namespace in AWS Route 53. DNS Queries for the namespace are resolved by Route 53 and traffic is routed to the instances supporting the ECS cluster. + +To specify the Service Discovery Service in the Harness **Service Definition**, add the `serviceRegistries` parameter to the Harness **Service Definition**. The `serviceRegistries` parameter is defined like this: + + +``` +"serviceRegistries": [ + +  { + +    "registryArn": "arn:aws:servicediscovery:us-east-1:00000000:service/srv-jwyz7x4igkxckqno", + +    "containerName": "${CONTAINER_NAME}", + +    "containerPort": ${serviceVariable.containerPort} + +    # "port": + +  } + +], +``` +In this example, the Harness variable `${serviceVariable.containerPort}` is used for the `containerPort` value. You can simply enter the port number instead, such as **8080**. The `${serviceVariable.containerPort}` variable is created in the **Config Variables** section of the Service as **containerPort**, and referenced as **${serviceVariable.containerPort}** in the **Service Definition**. Using a **Config Variable** allows you to override the variable value when configuring the Harness Workflow that deploys the Service. For more information, see [Workflow Phases](https://docs.harness.io/article/m220i1tnia-workflow-configuration#workflow_phases). + +The following list describes the fields and values needed for a Service Discovery Service in the Harness **Service Definition**: + +* `registryArn` - The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is Amazon Route 53 Auto Naming. 
To obtain the `registryArn` value, use the [aws ecs describe-services](https://docs.aws.amazon.com/cli/latest/reference/ecs/describe-services.html?highlight=registryarn) command.
* `containerName` - The container name value to be used for your Service Discovery Service, already specified in the task definition in **Task Definition**. Typically, you simply use the variable `${CONTAINER_NAME}`. Harness verifies that the container name is specified in **Task Definition**.
* `containerPort` - The port value to be used for your Service Discovery Service.

You can override Service variables in the Harness Environment and Workflow. For more information, see [Override a Service Configuration](https://docs.harness.io/article/n39w05njjv-environment-configuration#override_a_service_configuration) and [Workflow Phases](https://docs.harness.io/article/m220i1tnia-workflow-configuration#workflow_phases).

##### Which Service Parameters Do I Use?

Here are the guidelines for when to use the different service parameters in `serviceRegistries`:

* If the task definition for your service task uses the awsvpc network mode *and* a SRV DNS record is used, you must specify either a) `containerName` and `containerPort` or b) just `port`, *but not both*.
* If you use SRV DNS records, but *not* the awsvpc network mode, a `containerName` and `containerPort` combination is required.
* If you use the awsvpc network mode *only* (no SRV record), you do *not* need `containerName` and `containerPort`. The `port` field is used only if both the awsvpc network mode *and* SRV records are used, so it is not needed in this case either. 
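As a sketch of the first guideline's case (b), that is, awsvpc network mode with an SRV record using just `port`, the `serviceRegistries` entry might look like this (the ARN and port value are illustrative):

```
"serviceRegistries": [
  {
    "registryArn": "arn:aws:servicediscovery:us-east-1:000000000000:service/srv-example",
    "port": 8080
  }
],
```

Note that in this form neither `containerName` nor `containerPort` is supplied.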
+ +Here is an example where the service task does not use the awsvpc network mode but a SRV DNS record is used: + + +``` +"serviceRegistries": [ + + { + + "registryArn": "arn:aws:servicediscovery:us-east-1:000000000000:service/srv-jwyz2x4igkxckqno", + + "containerName": "${CONTAINER_NAME}", + + "containerPort": ${serviceVariable.containerPort} + + } + +], +``` +The value for `containerName` is `${CONTAINER_NAME}`. This maps to the name field in the **Task Definition**, itself replaced with a container name based on the image name: + + +``` +{ + +  "containerDefinitions" : [ { + +    "name" : "${CONTAINER_NAME}", + +    "image" : "${DOCKER_IMAGE_NAME}", + +... +``` +The value for containerPort is 8080. + + +``` +    "portMappings" : [ { + +      "containerPort" : 8080, + +      "protocol" : "tcp" + +    } ], +``` +You can use Harness Environment variables to override the Service variables used in the Service Definition, thereby using the same Harness Service in multiple deployment environments. For more information, see [Override a Service Configuration](https://docs.harness.io/article/n39w05njjv-environment-configuration#override_a_service_configuration).#### Using Private Docker Registry Authentication + +In the Harness Service, you can add the `RepositoryCredentials` property type in the **Task Definition** to specify the repository credentials for private registry authentication. + +This process has the following steps: + +1. Add the `secretsmanager:GetSecretValue` policy to the ECS task execution role. Here is the policy: + + + ``` + { + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "kms:Decrypt", + "secretsmanager:GetSecretValue" + ], + "Resource": [ + "arn:aws:secretsmanager:region:aws_account_id:secret:secret_name", + "arn:aws:kms:region:aws_account_id:key:key_id" + ] + } + ] + } + ``` + The action `kms:Decrypt` is required only if your key uses a custom KMS key and not the default key. 
The ARN for your custom key should be added as a resource. For more information, and details about ECS platform versions that support this feature, see [Private Registry Authentication for Tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/private-auth.html) from AWS.
2. Add the [RepositoryCredentials](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecs-taskdefinition-repositorycredentials.html) property type to the Harness Service as a part of the Task Definition in **Task Definition**, like this:

   ```
   "containerDefinitions": [
     {
       "name" : "${CONTAINER_NAME}",
       "image" : "${DOCKER_IMAGE_NAME}",
       ...
       "repositoryCredentials": {
         "credentialsParameter": "arn:aws:secretsmanager:region:aws_account_id:secret:secret_name"
       }
       ...
     }
   ]
   ```

3. In addition to specifying the `repositoryCredentials`, you must also specify the Task execution role in the **Task Definition** using the `executionRoleArn` property. This role authorizes Amazon ECS to pull private images for your task. For more information, see [Private Registry Authentication Required IAM Permissions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/private-auth.html#private-auth-iam). For example:

   `"executionRoleArn" : "arn:aws:iam::448000000317:role/ecsTaskExecutionRole",`

The Task execution role is specified when the Task Definition is created in ECS, or in AWS IAM (see [Amazon ECS Task Execution Role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html)). If you are creating the ECS Task Definition for the first time using Harness, create the role in IAM, and then add it in `executionRoleArn` in **Task Definition**.

### Link Task and Service Definitions in Git Repos

You can use your Git repo for the task and service definition files and Harness will use them at runtime. 
You must use either inline or remote task and service definitions. You cannot use inline and remote together.

To use remote task and service definitions:

1. Ensure you have set up a Harness Source Repo Provider connection to your Git repo. See [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers).
2. In your Harness ECS Service, in **Deployment Specification**, click more options (︙), and then click **Link Remote Definitions**.

### Review: Task Definitions and Amazon ECS Service Quotas

This section discusses the impact Harness ECS deployments have on Amazon ECS service quotas.

Once created, an ECS Task Definition cannot be updated, as it is immutable. As discussed in AWS [Updating a task definition](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-task-definition.html), to update a Task Definition you need to create a revision.

In every new Harness ECS deployment, even if the task definition does not change, Harness must modify the task definition `image` property. For example, `"image": "registry.hub.docker.com/library/nginx:mainline"`.

Harness must make this change so it can deploy the new version of the artifact.

As there is no way to update the existing Task Definition, the only way to make the change is to create a new version of the Task Definition.

AWS has a limit on **Revisions per task definition family** of 1 million, as covered in AWS [Amazon ECS service quotas](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html). This limit is from AWS and Harness cannot change it.

Harness deployments can cause this limit to be reached, especially if a Task Definition family is shared by test, stage, and production deployments.

If the limit is reached, the ECS service name will have to be changed. 
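To get a sense of how quickly revisions accumulate against this quota, a rough back-of-the-envelope calculation helps. The deployment rates below are illustrative assumptions, not figures from this guide:

```python
# Rough estimate of how long a shared task definition family takes to reach
# the "Revisions per task definition family" quota.
# The deployment rates here are illustrative assumptions.
REVISION_LIMIT = 1_000_000    # AWS quota: revisions per task definition family

deploys_per_day = 50          # hypothetical deployments per environment per day
environments = 3              # e.g. a family shared by test, stage, and production

revisions_per_day = deploys_per_day * environments           # each deploy registers a new revision
years_to_limit = REVISION_LIMIT / (revisions_per_day * 365)

print(f"{years_to_limit:.1f} years to reach the quota")
```

Even at this fairly aggressive shared rate, the quota takes years to exhaust; the risk is with long-lived families shared across many environments and pipelines, which is why the warning above matters.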
+ +### Next Step + +Now that you have set up your [Harness Artifact Server and Cloud Provider](ecs-connectors-and-providers-setup.md) and [ECS Service](ecs-services.md), you can create the Harness Environment to identify your target ECS cluster: + +* [4 - ECS Environments](ecs-environments.md) + diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-setup-in-yaml.md b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-setup-in-yaml.md new file mode 100644 index 00000000000..8af290621d2 --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-setup-in-yaml.md @@ -0,0 +1,100 @@ +--- +title: 7 - ECS Setup in YAML +description: Learn how to set up and manage Harness ECS deployments using YAML. +sidebar_position: 800 +helpdocs_topic_id: 5229btw1mq +helpdocs_category_id: df9vj316ec +helpdocs_is_private: false +helpdocs_is_published: true +--- + +All of the Harness configuration steps in this guide can be performed using code instead of the Harness user interface. You can view or edit the YAML for any Harness configuration by clicking the **YAML>** button on any page. + +![](./static/ecs-setup-in-yaml-48.png) + +When you click the button, the Harness code editor appears. + +![](./static/ecs-setup-in-yaml-49.png) + +For example, here is the YAML for the Daemon Scheduler, Basic Workflow we set up in this guide. 
+ + +``` +harnessApiVersion: '1.0' +type: BASIC +envName: stage-ecs +failureStrategies: +- executionScope: WORKFLOW + failureTypes: + - APPLICATION_ERROR + repairActionCode: ROLLBACK_WORKFLOW + retryCount: 0 +notificationRules: +- conditions: + - FAILED + executionScope: WORKFLOW + notificationGroupAsExpression: false + notificationGroups: + - Account Administrator +phases: +- type: ECS + computeProviderName: aws-ecs + daemonSet: false + infraMappingName: example -AWS_ECS--Amazon Web Services- aws-ecs- us-west-1 + name: Phase 1 + phaseSteps: + - type: CONTAINER_SETUP + name: Setup Container + steps: + - type: ECS_DAEMON_SERVICE_SETUP + name: ECS Daemon Service Setup + stepsInParallel: false + - type: VERIFY_SERVICE + name: Verify Service + stepsInParallel: false + - type: WRAP_UP + name: Wrap Up + stepsInParallel: false + provisionNodes: false + serviceName: Default_Daemon + statefulSet: false +rollbackPhases: +- type: ECS + computeProviderName: aws-ecs + daemonSet: false + infraMappingName: example -AWS_ECS--Amazon Web Services- aws-ecs- us-west-1 + name: Rollback Phase 1 + phaseNameForRollback: Phase 1 + phaseSteps: + - type: CONTAINER_SETUP + name: Setup Container + phaseStepNameForRollback: Setup Container + statusForRollback: SUCCESS + steps: + - type: ECS_SERVICE_SETUP_ROLLBACK + name: Rollback Containers + stepsInParallel: false + - type: VERIFY_SERVICE + name: Verify Service + phaseStepNameForRollback: Deploy Containers + statusForRollback: SUCCESS + stepsInParallel: false + - type: WRAP_UP + name: Wrap Up + stepsInParallel: false + provisionNodes: false + serviceName: Default_Daemon + statefulSet: false +templatized: false + +``` +For more information, see [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code) and [Harness GitOps](https://docs.harness.io/article/khbt0yhctx-harness-git-ops). + +The above example is a simple one. 
If you are using more steps, like a [Terraform Infrastructure Provisioner](../../terraform-category/terraform-provisioner-step.md) step, there will be additional labels and values. + +### Review: Do Not Use Multiple ECS Setup Steps + +The ECS Service Setup is added to a Harness ECS Workflow automatically when you create the Workflow. + +Your Basic Workflow or Canary Workflow Phase should only use one ECS Setup step. If you use multiple ECS Setup steps, the last step overrides all previous steps, rendering them useless. + diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-troubleshooting.md b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-troubleshooting.md new file mode 100644 index 00000000000..66484cd0f30 --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-troubleshooting.md @@ -0,0 +1,107 @@ +--- +title: 8 - ECS Troubleshooting +description: General troubleshooting steps for ECS deployments. +sidebar_position: 900 +helpdocs_topic_id: rdk1j5s32z +helpdocs_category_id: df9vj316ec +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The following errors might occur when setting up and deploying ECS in Harness: + +* [Rate Exceeded](ecs-troubleshooting.md#rate-exceeded) +* [New ARN and Resource ID Format Must be Enabled](ecs-troubleshooting.md#new-arn-and-resource-id-format-must-be-enabled) +* [Unable to Place a Task Because no Container Instance met all of its Requirements](ecs-troubleshooting.md#unable-to-place-a-task-because-no-container-instance-met-all-of-its-requirements) +* [Cannot Pull Container Image](ecs-troubleshooting.md#cannot-pull-container-image) +* [Invalid CPU or Memory Value Specified](ecs-troubleshooting.md#invalid-cpu-or-memory-value-specified) +* [ClientException: Fargate requires that 'cpu' be defined at the task level](ecs-troubleshooting.md#client-exception-fargate-requires-that-cpu-be-defined-at-the-task-level) +* [ClientException: The 
'memory' setting for container is greater than for the task](ecs-troubleshooting.md#client-exception-the-memory-setting-for-container-is-greater-than-for-the-task) +* [AmazonElasticLoadBalancingException: Rate exceeded](ecs-troubleshooting.md#amazon-elastic-load-balancing-exception-rate-exceeded) + +For information on ECS troubleshooting, see [Amazon ECS Troubleshooting](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/troubleshooting.html) from AWS. + +### Rate Exceeded + +A common issue with AWS deployments is exceeding an AWS rate limit for some AWS component, such as ECS clusters per region or maximum number of scaling policies per Auto Scaling Groups. + +For steps to increase any AWS limits, see [AWS Service Limits](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) from AWS. + +### New ARN and Resource ID Format Must be Enabled + +Harness uses tags for Blue/Green deployment, but ECS requires the new ARN and resource ID format to be enabled in order to add tags to the ECS service. + +If you have not opted into the new ECS ARN and resource ID format before you attempt Blue/Green deployment, you might receive the following error: + +`InvalidParameterException: The new ARN and resource ID format must be enabled to add tags to the service. Opt in to the new format and try again.` + +To solve this issue, opt into the new format and try again. For more information, see [Migrating your Amazon ECS deployment to the new ARN and resource ID format](https://aws.amazon.com/blogs/compute/migrating-your-amazon-ecs-deployment-to-the-new-arn-and-resource-id-format-2/) from AWS. 
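One quick way to tell whether a service is already on the new format is to inspect its ARN: the new ECS service ARN includes the cluster name in the resource path, while the old one does not. A small illustrative check (the helper name and sample account/service values are made up for the sketch):

```python
# Heuristic check for the new ECS service ARN format.
# Old format:  arn:aws:ecs:region:account:service/<service-name>
# New format:  arn:aws:ecs:region:account:service/<cluster-name>/<service-name>
# Illustrative sketch only.

def uses_new_arn_format(service_arn: str) -> bool:
    # The resource part is everything after the fifth colon,
    # e.g. "service/my-cluster/my-service".
    resource = service_arn.split(":", 5)[-1]
    return resource.count("/") == 2

old = "arn:aws:ecs:us-west-1:111122223333:service/my-service"
new = "arn:aws:ecs:us-west-1:111122223333:service/my-cluster/my-service"
print(uses_new_arn_format(old))  # False
print(uses_new_arn_format(new))  # True
```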
+ +### Unable to Place a Task Because no Container Instance met all of its Requirements + +The Upgrade Containers step might show the following message: + +`(service service-name) was unable to place a task because no container instance met all of its requirements.` + +Review the CPU requirements in both the task size and container definition parameters of the task definition. + +See [Service Event Messages](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-event-messages.html#service-event-messages-list) from AWS. + +### Cannot Pull Container Image + +You might see Docker errors indicating that when creating a task, the container image specified could not be retrieved. + +![](./static/ecs-troubleshooting-90.png) + +See [Cannot Pull Container Image Error](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_cannot_pull_image.html) from AWS. + +### Invalid CPU or Memory Value Specified + +See the required settings in [Invalid CPU or Memory Value Specified](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-cpu-memory-error.html) from AWS. + +### ClientException: Fargate requires that 'cpu' be defined at the task level + +Ensure that you add the CPU and Memory settings in the Harness Service Container Specification section—for example: + + +``` +"cpu" : "1", + +"memory" : "512" +``` +### ClientException: The 'memory' setting for container is greater than for the task + +In the Harness Service **Container Specification** JSON, there are two settings for memory. The memory setting for the container must not be greater than the memory setting for the task: + + +``` +{ + +  "containerDefinitions" : [ { + +    "name" : "${CONTAINER_NAME}", + +    "image" : "${DOCKER_IMAGE_NAME}", + +    "memory" : 512, + +    ... + +  } ], + +  "executionRoleArn" : "${EXECUTION_ROLE}", + +  ... 
+ +  "cpu" : "1", + +  "memory" : "512", + +  "networkMode" : "awsvpc" + +} +``` +### AmazonElasticLoadBalancingException: Rate exceeded + +You might receive this error as a result of AWS Load Balancer rate limiting. For more information, see [Limits for Your Application Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-limits.html) from AWS. + diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-workflows.md b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-workflows.md new file mode 100644 index 00000000000..a81222695e8 --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/ecs-workflows.md @@ -0,0 +1,413 @@ +--- +title: 5 - ECS Basic and Canary Workflows +description: Create a Workflow to deploy your ECS services. +sidebar_position: 600 +helpdocs_topic_id: oinivtywnl +helpdocs_category_id: df9vj316ec +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes how to set up Harness Basic and Canary Workflows for ECS deployments. + +### Overview + +Workflows are the deployment steps for your ECS services, including deployment types such as Canary and Blue/Green. Workflows can involve a few steps or multiple phases, each composed of several steps. + +This topic covers Canary and Basic ECS Workflows. For ECS Blue/Green Workflows, see [ECS Blue/Green Workflows](ecs-blue-green-workflows.md). + +Harness ECS Workflows differ according to the ECS Service Scheduler (Replica or Daemon) used by the Harness Service in the Workflow. In this section, we will set up one Workflow for each scheduler, Replica and Daemon: + +* **Workflow 1: Replica Scheduling using Canary Deployment.** + + Phase 1 - 50% Deployment: + - Setup Container: ECS Service Setup. + - Deploy Containers: Upgrade Containers. + + Phase 2 - 100% Deployment: + - Deploy Containers: Upgrade Containers. 
+* **Workflow 2: Daemon Scheduling using Basic Deployment.** + + ECS Daemon Service Setup Step. + +### Review: Permissions + +To create and deploy an ECS Workflow, you must belong to a Harness User Group with the following Account Permissions enabled: + +* `Workflow Update` +* `Workflow Create` + +See [Managing Users and Groups (RBAC)](https://docs.harness.io/article/ven0bvulsj-users-and-permissions). + +### Review: Do Not Use Multiple ECS Setup Steps + +The ECS Service Setup is added to a Harness ECS Workflow automatically when you create the Workflow. + +Your Basic Workflow or Canary Workflow Phase should only use one ECS Setup step. If you use multiple ECS Setup steps, the last step overrides all previous steps, rendering them useless. + +### Replica Scheduling using Canary Deployment + +In this procedure, we will create a Workflow to deploy a Harness Service configured with a Replica Scheduling Strategy, as described in [Replica Strategy](ecs-services.md#replica-strategy). + +To create a Workflow using a Service configured with a Replica Scheduling Strategy, do the following: + +1. In your Harness Application, click **Workflows**. The **Workflows** page appears. +2. Click **Add Workflow**. The **Workflow** dialog appears.![](./static/ecs-workflows-14.png) +3. Complete the following fields. + 1. **Name** - Give the Workflow a name that describes its deployment goals, such as **ECS Replica Strategy**. + 2. **Description** - Provide details about the Workflow so other users understand its deployment goals. + 3. **Workflow Type** - Select **Canary Deployment**. + 4. **Environment** - Select the Environment you created for ECS. This is the Environment containing an [Infrastructure Definition](https://docs.harness.io/article/v3l3wqovbe-infrastructure-definitions) for the Harness Service you are deploying with this Workflow. You will select the Service and the Infrastructure Definition when you set up the Canary deployment's stages. + 5. Click **SUBMIT**. 
The new Workflow is displayed. + + ![](./static/ecs-workflows-15.png) + + Next, you will add two phases for the Canary deployment. The first phase will set up your ECS service and then upgrade ECS service instances to 50% of the available ECS service instances. +4. In the Workflow, in **Deployment Phases**, click **Add Phase**. The **Workflow Phase** dialog appears. + +If you are using Infrastructure Definitions, the dialog will look like this:![](./static/ecs-workflows-16.png) +5. Complete the following fields. + 1. **Service** - Select the Harness Service that uses the Replica Strategy. + 2. **Infrastructure Definition** — This list is populated using the Environment you selected when creating the Workflow. Select the Infrastructure Definition that describes the cluster where you will deploy the Amazon ECS service defined in the Harness Service. + 3. **Service Variable Overrides** — If the Harness Service uses variables that you want to override for this Workflow phase, such as those described in [Service Discovery](ecs-services.md#service-discovery), you can override the variable values here. +6. Click **SUBMIT**. The new **Phase 1** page appears.![](./static/ecs-workflows-17.png) +7. Click **ECS Service Setup**. The **ECS Service Setup** dialog appears. + + ![](./static/ecs-workflows-18.png) + +8. Complete the following fields. + 1. **ECS Service Name** - By default, the ECS service will be named using a concatenation of the Harness Application, Service, and Environment names. You can change the name here using text or a variable. Enter **${** in the field to see a list of all of the variables available.![](./static/ecs-workflows-19.png) + 2. **Same as already running instances** - This field displays the number of desired *ECS service instances* for this stage. By default, the ECS service will be set up using 2 ECS service instances even if the field contains **0**. During deployment, only one old version of the application will be kept. 
If more than one old version exists, Harness will reduce their instance counts to 0. + 3. **Fixed** - Click this option to fix the specific number of ECS service instances to use for this stage. The **Fixed Instances Count** field will appear, where you can enter the value. + 4. **Resize Strategy** - Specify how you want the new ECS service instances added and downsized. + 5. **Service Steady State Wait Timeout** - Specify how many minutes Harness should wait for the ECS service instances to reach Steady State before failing the setup. The default is 10 minutes. If you use an expression for this setting and it fails or evaluates to null, 10 minutes is used. This setting supports Harness variable expressions in Basic and Canary Workflows. They are not supported in Blue/Green Workflows or the ECS Run Task and ECS Daemon Service Setup steps. See [What is a Harness Variable Expression?](https://docs.harness.io/article/9dvxcegm90-variables) and [Set Workflow Variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). + 6. **AWS Auto Scaler Configuration** - See [AWS Auto Scaling with ECS](#aws_auto_scaling_with_ecs). + 7. **Use Load Balancer** - See [Using ELB Load Balancers During Deployment](#using_elb_load_balancers_during_deployment). + 8. Close or Submit the **ECS Service Setup** dialog to return to the **Phase 1** page. + +:::note +To obtain the name of the currently deployed ECS service (from the **ECS Service Setup** step), you can use the Harness variable `${ECS__Service__Setup.serviceName}`. You might want to use the name in additional Workflow steps. +::: + +9. Click **Upgrade Containers**. The **Upgrade Containers** dialog appears. +2. In **Desired Instances**, set the number or percentage of ECS service instances to use for this stage. As this is Phase 1 of a Canary deployment, enter **50 Percent**. 
+ + ![](./static/ecs-workflows-20.png) + + The value in **Desired Instances** relates to the number of ECS service instances set in the **ECS Service Setup** dialog. For example, if you entered **2** as the **Fixed Instances Count** in **ECS Service Setup** and then enter **50 Percent** in **Upgrade Containers**, that means, for this phase, Harness will deploy **1** ECS service instance. The timeout for the **Upgrade Containers** step is inherited from the preceding **ECS Service Setup** step. + + **Use Expressions:** You can use [Harness Service, Environment Override, and Workflow](https://docs.harness.io/article/9dvxcegm90-variables) variable expressions in **Desired Instances** by selecting **Use Expression** and then entering the expression, like `${workflow.variables.DesiredInstances}`. When you run the Workflow, you can provide a value for the variable. + + +3. Click **SUBMIT**. +4. Click the name of the Workflow in the breadcrumb links to return to the **Workflow** page and add the second Phase of this Canary deployment. + + ![](./static/ecs-workflows-21.png) + +5. To add **Phase 2**, click **Add Phase**. +6. In the **Workflow Phase** dialog, complete the following fields. + 1. **Service** - Select the same Harness Service that uses the Replica Strategy. + 2. **Infrastructure Definition** — Select the Infrastructure Definition that describes the cluster where you will deploy the Amazon ECS service defined in the Harness Service. + 3. **Service Variable Overrides** - If the Harness Service uses variables that you want to override for this Workflow phase, such as those described in [Service Discovery](#service_discovery), you can override the variable values here. +7. Click **SUBMIT**. The **Phase 2** page appears. + + ![](./static/ecs-workflows-22.png) + + As this is the second phase in the Canary deployment, it will only run if Phase 1 deployed successfully. Let's upgrade the number of containers to 100%. +8. Click **Upgrade Containers**. 
The **Upgrade Containers** dialog appears.![](./static/ecs-workflows-23.png) +9. In **Desired Instances**, enter **100**, choose **Percent**, and click **SUBMIT**. This will deploy the full count of ECS service instances. + +The Workflow is complete. You can run the Workflow to deploy the ECS service with the Replica strategy to your ECS cluster. + +### Daemon Scheduling using Basic Deployment + +In this procedure, we will create a Workflow to deploy a Harness Service configured with a Daemon Scheduling Strategy, as described in [Daemon Strategy](ecs-services.md#daemon-strategy). + +To deploy a Harness Service configured with a Daemon Scheduling Strategy, do the following: + +1. In your Harness Application, click **Workflows**. The **Workflows** page appears. +2. Click **Add Workflow**. The **Workflow** dialog appears, in one of the following formats. + We will be creating a Basic Deployment Workflow using the Harness Service configured with a Daemon Scheduling Strategy. +3. Complete the following fields. + 1. **Name** - Give the Workflow a name that describes its deployment goals, such as **ECS Daemon Strategy**. + 2. **Description** - Provide details about the Workflow so other users understand its deployment goals. + 3. **Workflow Type** - Select **Basic Deployment**. + 4. **Environment** - Select the Environment you created for ECS. This is the Environment containing an [Infrastructure Definition](https://docs.harness.io/article/v3l3wqovbe-infrastructure-definitions) for the Harness Service you are deploying with this Workflow. You will select the Service and Infrastructure Definition when you set up the Basic deployment. +4. Click **SUBMIT**. The new Workflow is displayed. + + ![](./static/ecs-workflows-24.png) + + This Workflow will simply set up the ECS service using a Daemon strategy. +5. Click the **ECS Daemon Service Setup** step. The **ECS Daemon Service Setup** dialog appears. + ![](./static/ecs-workflows-25.png) +6. Complete the following fields. + 1. 
**ECS Service Name** - The step will create the ECS service using the names of the Harness Application, Service, and Environment. + 2. **Service Steady State Wait Timeout** - Specify how many minutes Harness should wait for the instances to reach Steady State before failing the setup. + You cannot use Harness variable expressions in this setting in the ECS Daemon Service Setup step. You can use them in the Replica scheduling scenario. + 3. Click **SUBMIT**. + +The Workflow is complete. You can run the Workflow to deploy the ECS service with the Daemon strategy to your ECS cluster. + +### Using ELB Load Balancers During Deployment + +Currently, single ELB support is available in production, but the **multiple ELBs** feature is behind the Feature Flag `ECS_MULTI_LBS`. See [Multiple Load Balancers](#multiple_load_balancers) below. + +Harness can use one or more AWS Elastic Load Balancers (ALB and NLB only) for your Amazon ECS service to distribute traffic evenly across the tasks in your service. + +When you set up an ELB configuration in Harness, you specify the target group for the ECS service. + +When each task for your ECS service is started, the container and port combination specified in the **Service Specification** in the Harness Service is registered with that target group and traffic is routed from the load balancer to that task. + +For information about using ELB and ECS, see [Service Load Balancing](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html) from AWS. + +In a Harness Workflow, the **ECS Service Setup** and **ECS Daemon Service Setup** steps allow you to use ELBs when deploying your ECS service. + +To use ELBs in the **ECS Service Setup** or **ECS Daemon Service Setup** steps, do the following: + +1. In the **ECS Service Setup** or **ECS Daemon Service Setup** step, in **AWS LoadBalancer Configuration**, click **Add**. The ELB settings appear. + ![](./static/ecs-workflows-26.png) +2. Complete the following ELB settings. 
+ 1. **IAM Role** - The role must have the [AmazonEC2ContainerServiceRole](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_IAM_role.html) policy. + 2. **Elastic Load Balancer** - Select the ELB that you want to use. The list is populated using the Infrastructure Definition in the Workflow setup. Once you select an ELB, Harness will fetch the list of target groups. + 3. **Target Group** - Select the target group for the load balancer. You associate a target group with an ECS service. Each target group is used to route requests to one or more registered targets. + 4. **Target Container Name** and **Target Port** - You can leave these fields blank. They are used if the container specification has multiple container definitions, which is not common. When you deploy your ECS service with Harness, Harness uses the container name and port from the **Service Specification** in the Harness Service. If you choose to use these fields, note that as an ECS requirement, Target Container cannot be empty if Target Port is defined. + In **Target Container Name**, you can also use the `${CONTAINER_NAME}` parameter used in the Harness ECS Service spec. +3. Click **SUBMIT**. + +The ELB configuration is set. When Harness deploys the ECS service, traffic will be routed from the load balancer to the service task. + +You can use Harness [Service Config variables](https://docs.harness.io/article/q78p7rpx9u-add-service-level-config-variables), [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template), and [CloudFormation Outputs](../cloudformation-category/using-cloudformation-outputs-in-workflow-steps.md) for all of these settings. + +#### Multiple Load Balancers + +Currently, this feature is behind the Feature Flag `ECS_MULTI_LBS`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. + +To use multiple Load Balancers, simply click **Add** in **AWS LoadBalancer Configuration**. 
+ +Complete the settings as described above. + +The IAM role selected in **IAM Role** is used for all Load Balancers. + +You can use Harness [Service Config variables](https://docs.harness.io/article/q78p7rpx9u-add-service-level-config-variables), [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template), and [CloudFormation Outputs](../cloudformation-category/using-cloudformation-outputs-in-workflow-steps.md) for all of these settings. + +### AWS Auto Scaling with ECS + +For details on how Harness applies ECS Auto Scaling, see [ECS Auto Scaling](https://docs.harness.io/article/28ehkmqy3v-ecs-auto-scaling). + +The ECS service(s) you deploy with Harness can be configured to use AWS Service Auto Scaling to adjust the desired ECS service count up or down in response to CloudWatch alarms. For more information on using Auto Scaling with ECS, see [Target Tracking Scaling Policies](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-autoscaling-targettracking.html) from AWS. + +This is what the AWS Auto Scaling setting looks like in the ECS console: + +![](./static/ecs-workflows-27.png) + +In Harness, you configure Auto Scaling in the **ECS Service Setup** step of a Workflow (for example, Canary Deployment). + +ECS Auto Scaling is performed using the **Upgrade Containers** step. If you delete this step from a Phase, no ECS Auto Scaling is performed in that Phase. You can add the step to a previous Phase, but you must also add its corresponding **Rollback Containers** step. See [Upgrade Containers and Rollback Containers Steps are Dependent](#upgrade_containers_and_rollback_containers_steps_are_dependent). + +There are two AWS Auto Scaling resource types that must be set up in Harness to use Auto Scaling with ECS: + +* **Scalable Target** - Specifies a resource that AWS Application Auto Scaling can scale. 
For more information, see [ScalableTarget](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-applicationautoscaling-scalabletarget.html) from AWS. +* **Scalable Policy** - Defines a scaling policy that Application Auto Scaling uses to adjust your application resources. For more information, see [ScalingPolicy](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-applicationautoscaling-scalingpolicy.html) from AWS. + +Before you set up Auto Scaling for the ECS service in Harness, you need to obtain the JSON for the Scalable Target and Scalable Policy resources from AWS. + +The JSON format used in the **Auto Scaler Configurations** settings should match the AWS standards as described in [ScalableTarget](https://docs.aws.amazon.com/autoscaling/application/APIReference/API_ScalableTarget.html) and [ScalablePolicy](https://docs.aws.amazon.com/autoscaling/application/APIReference/API_ScalingPolicy.html). + +To obtain the Scalable Target, connect to an EC2 instance in your VPC and enter the following: + +`aws application-autoscaling describe-scalable-targets --service-namespace ecs` + +For more information, see [describe-scalable-targets](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/describe-scalable-targets.html) from AWS. + +To obtain the Scalable Policy, enter the following: + +`aws application-autoscaling describe-scaling-policies --service-namespace ecs` + +For more information, see [describe-scaling-policies](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/describe-scaling-policies.html) from AWS. 
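Before pasting the JSON returned by the `describe-*` commands above into Harness, it can be worth sanity-checking a few fields. The sketch below is an illustrative pre-flight check; the function name and the specific checks are assumptions for the sketch, not part of Harness:

```python
# Illustrative pre-flight check on Scalable Target / Scaling Policy JSON.
# Field names follow the AWS Application Auto Scaling API; the checks
# themselves are assumptions for this sketch.
import json

def validate_autoscaler_config(target_json: str, policy_json: str) -> list:
    target = json.loads(target_json)
    policy = json.loads(policy_json)
    errors = []
    # Capacity bounds must be ordered.
    if target.get("MinCapacity", 0) > target.get("MaxCapacity", 0):
        errors.append("MinCapacity exceeds MaxCapacity")
    # Target and policy must describe the same scalable resource.
    for key in ("ServiceNamespace", "ScalableDimension"):
        if target.get(key) != policy.get(key):
            errors.append(f"{key} differs between target and policy")
    return errors

target = '{"ServiceNamespace": "ecs", "ScalableDimension": "ecs:service:DesiredCount", "MinCapacity": 2, "MaxCapacity": 5}'
policy = '{"ServiceNamespace": "ecs", "ScalableDimension": "ecs:service:DesiredCount", "PolicyName": "P1"}'
print(validate_autoscaler_config(target, policy))  # []
```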
+ +To create the Scalable Target and Scalable Policy resources, see the [register-scalable-target](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/register-scalable-target.html) and [put-scaling-policy](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/put-scaling-policy.html) commands from AWS. + +To set up Auto Scaling for the ECS service in the Harness Workflow, do the following: + +1. In a Workflow with the **ECS Service Setup** step, open the **ECS Service Setup** step. +2. In **Auto Scaler Configurations**, the Auto Scaling property fields appear. + ![](./static/ecs-workflows-28.png) +3. In **Scalable Target**, paste the JSON for the property. This should follow the [AWS ScalableTarget JSON format](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-applicationautoscaling-scalabletarget.html). + + For example: + + ``` + { + + "ServiceNamespace": "ecs", + + "ScalableDimension": "ecs:service:DesiredCount", + + "MinCapacity": 2, + + "MaxCapacity": 5, + + "RoleARN": "arn:aws:iam::448XXXXXXX7:role/aws-service-role/ecs.application-autoscaling.amazonaws.com/AWSServiceRoleForApplicationAutoScaling_ECSService" + + } + ``` + +4. In **Scaling Policy**, paste the JSON for the property. This should follow the [AWS ScalingPolicy JSON format](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-applicationautoscaling-scalingpolicy.html). 
+ + For example: + + ``` + { + + "ScalableDimension": "ecs:service:DesiredCount", + + "ServiceNamespace": "ecs", + + "PolicyName": "P1", + + "PolicyType": "TargetTrackingScaling", + + "TargetTrackingScalingPolicyConfiguration": { + + "TargetValue": 60.0, + + "PredefinedMetricSpecification": { + + "PredefinedMetricType": "ECSServiceAverageCPUUtilization" + + }, + + "ScaleOutCooldown": 300, + +    "ScaleInCooldown": 300 + +  } + + } + ``` + +When Harness deploys your ECS service, it will register the service with an AWS Auto Scaling Group to apply the scaling policy, scaling out (and in) using CloudWatch target tracking. + +To obtain the name of the Auto Scaling Group created by Harness, use the Harness variable `${ami.newAsgName}`. For example, you could add a Shell Script command to your Workflow that contains the command `echo ${ami.newAsgName}`. + +### Upgrade Containers and Rollback Containers Steps are Dependent + +In order for rollback to add ECS Auto Scaling to the previous, successful service, you must have both the **Upgrade Containers** and **Rollback Containers** steps in the same Phase. + +![](./static/ecs-workflows-29.png) + +Since ECS Auto Scaling is added by the **Upgrade Containers** step, if you delete **Upgrade Containers**, then **Rollback Containers** has no ECS Auto Scaling to roll back to. + +If you want to remove ECS Auto Scaling from a Phase, delete both the **Upgrade Containers** and **Rollback Containers** steps. The Phase will no longer perform ECS Auto Scaling during deployment or rollback. + +### ECS Steady State Check Command + +You can use the **ECS Steady State Check** command in an ECS Workflow to check for the steady state of a service you have deployed using a method other than the default **ECS Service Setup** or **ECS Daemon Service Setup** commands, such as a **Shell Script** command. + +The **ECS Steady State Check** command may be added to the **Deploy Containers** section of a Workflow. 
The **ECS Steady State Check** command dialog looks like this: + +![](./static/ecs-workflows-30.png) + +In **ECS Service Name**, enter the name of the ECS service you are deploying. + +In **Timeout**, enter how long Harness should wait for Steady State to be reached before failing the deployment. The default is **600000** ms (10 minutes). + +### Rollback Command + +When you create a Workflow for an ECS service, a Rollback command is automatically added to the **Rollback Steps** section of the Workflow (and its Phases). + +If Harness needs to roll back and restore the ECS setup to its previous working version, or if you interrupt the deployment to roll it back manually, the first step is to roll back the ECS services. + +When a rollback occurs, Harness rolls back all Workflow phases in the reverse order they were deployed. This is true for ECS services deployed to EC2 or Fargate clusters. + +See [ECS Rollbacks](https://docs.harness.io/article/d7rnemtfuz-ecs-rollback). + +### Deploy ECS Workflows + +Once your ECS Workflow is complete, you can deploy it to your ECS cluster. For more information about deploying Workflows, see [Deploy a Workflow](https://docs.harness.io/article/m220i1tnia-workflow-configuration#deploy_a_workflow). + +Let’s look at the deployment of an ECS Workflow that deploys an ECS service using the Replica Strategy as part of a Canary deployment. Here is what the completed **Phase 1** looks like in Harness. + +![](./static/ecs-workflows-31.png) + +The **ECS Service Setup** step displays the steps executed by the Harness Delegate installed in the same AWS VPC as the ECS cluster named **example**. 
Here’s the output with comments explaining the deployment:
+
+
+```
+INFO   2019-01-07 13:32:54    Begin execution of command: Setup ECS Service
+
+# Cluster named example is selected
+INFO   2019-01-07 13:32:54    Cluster Name: example
+
+# Artifact source is identified
+INFO   2019-01-07 13:32:54    Docker Image Name: library/nginx:stable-perl
+
+# Container name is created
+INFO   2019-01-07 13:32:54    Container Name: library_nginx_stable-perl
+
+# Task definition is created using the Harness Service Container Specification
+INFO   2019-01-07 13:32:54    Creating task definition ECS__Example__Default_Replica__stage__ecs with container image library/nginx:stable-perl
+
+INFO   2019-01-07 13:32:54    Creating ECS service ECS__Example__Default_Replica__stage__ecs__4 in cluster example
+
+INFO   2019-01-07 13:32:54
+
+INFO   2019-01-07 13:32:54    Cleaning versions with no tasks
+
+# No Auto Scaling configured for this deployment
+INFO   2019-01-07 13:32:54    Checking for Auto-Scalar config for existing services
+
+INFO   2019-01-07 13:32:54    No Auto-scalar config found for existing services
+
+# Deployment success
+INFO   2019-01-07 13:32:54    Command execution finished with status SUCCESS
+
+```
+Next, the **Upgrade Containers** step updates the service's desired count:
+
+
+```
+INFO   2019-01-07 13:32:56    Begin execution of command: Resize ECS Service
+
+# Resize instances to 50% of 2 (1 instance)
+INFO   2019-01-07 13:32:56    Resize service [ECS__Example__Default_Replica__stage__ecs__4] in cluster [example] from 0 to 1 instances
+
+INFO   2019-01-07 13:32:56    Waiting for service: ECS__Example__Default_Replica__stage__ecs__4 to reflect updated desired count: 1
+
+# Resize complete
+INFO   2019-01-07 13:32:56    Current service desired count return from aws for Service: ECS__Example__Default_Replica__stage__ecs__4 is: 1
+
+INFO   2019-01-07 13:32:56    Service update request successfully submitted.
+
+INFO   2019-01-07 13:32:56    Waiting for pending tasks to finish. 0/1 running ...
+
+INFO   2019-01-07 13:33:37    Waiting for service to be in steady state...
+
+# Service reached steady state
+INFO   2019-01-07 13:33:37    Service has reached a steady state
+
+# Pull dockerId
+INFO   2019-01-07 13:33:37    Fetching container meta data from http://10.0.0.53:51678/v1/tasks
+
+INFO   2019-01-07 13:33:37    Successfully fetched dockerId
+
+INFO   2019-01-07 13:33:37
+
+INFO   2019-01-07 13:33:37    Container IDs:
+
+INFO   2019-01-07 13:33:37      6452d4b0ef39 - 10.0.0.53 (new)
+
+INFO   2019-01-07 13:33:37
+
+INFO   2019-01-07 13:33:37    Completed operation
+
+INFO   2019-01-07 13:33:37    ----------
+
+# Deployment success
+INFO   2019-01-07 13:33:48    Command execution finished with status SUCCESS
+```
+This example is for Phase 1 of a Canary deployment where 50% of 2 services are deployed. Once the Canary deployment is complete and both services are deployed, you can see the deployed services in the AWS ECS console:
+
+![](./static/ecs-workflows-32.png)
+
+### Post-Production Rollback
+
+Harness also supports post-production rollback for cases where you want to recover from a deployment that succeeded on technical criteria, but that you want to undo for other reasons.
+
+See [Rollback Production Deployments](https://docs.harness.io/article/2f36rsbrve-post-deployment-rollback). 
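As a footnote to the Canary example above, the phase resize arithmetic can be sketched as follows. The round-up choice is an assumption for illustration; the log above only shows that 50% of 2 became 1 instance:

```python
import math

def phase_instance_count(total_desired, percent):
    """Instances a canary phase should run, assuming round-up
    semantics (so a 50% phase of 3 instances still runs 2)."""
    return math.ceil(total_desired * percent / 100)

print(phase_instance_count(2, 50))   # Phase 1 of the example -> 1
print(phase_instance_count(2, 100))  # final phase -> 2
```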
+
+### Next Step
+
+Now that you have completed a successful deployment, explore some of the other ECS and general Harness topics:
+
+* [ECS Blue/Green Workflows](ecs-blue-green-workflows.md)
+* [ECS Setup in YAML](ecs-setup-in-yaml.md)
+* [ECS Troubleshooting](ecs-troubleshooting.md)
+* [Pipelines](https://docs.harness.io/article/zc1u96u6uj-pipeline-configuration)
+* [Triggers](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2)
+
diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/harness-ecs-delegate.md b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/harness-ecs-delegate.md
new file mode 100644
index 00000000000..e88f7f0003e
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/harness-ecs-delegate.md
@@ -0,0 +1,347 @@
+---
+title: 1 - Harness ECS Delegate
+description: Set up a Harness Delegate for ECS deployments.
+sidebar_position: 200
+helpdocs_topic_id: wrm6hpyrjl
+helpdocs_category_id: df9vj316ec
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+The Harness ECS Delegate is software you install in your environment that connects to Harness Manager and performs Continuous Delivery tasks.
+
+
+This topic shows you how to install the Harness ECS Delegate in an ECS cluster as an ECS service to enable the Delegate to connect to your AWS resources.
+
+
+
+### Set up an ECS Delegate
+
+
+You can run the ECS Delegate in the same subnet as the target ECS cluster, which is often the easiest way to manage the Delegate. But this is not a requirement. The ECS cluster for the Harness ECS Delegate does not need to run in the same VPC as the target ECS cluster where you will deploy your ECS services.
+
+
+#### Requirements for ECS Delegate
+
+
+The Harness ECS Delegate runs as an ECS service in an ECS cluster. The ECS setup used to host the ECS Delegate must have the following:
+
+
+* ECS Cluster. 
+
+* ECS Cluster must have 8GB memory for each ECS Delegate service added (m5ad.xlarge minimum).
+* AWS IAM Role containing the required policies. The AWS policies are described in detail in [ECS (Existing Cluster)](https://docs.harness.io/article/whwnovprrb-infrastructure-providers#ecs_existing_cluster). For information on adding an IAM role to the ECS Delegate task definition, see [Trust Relationships and Roles](#trust_relationships_and_roles).
+
+
+You can also use a Shell Script Delegate on an EC2 instance that assumes the same role as your ECS cluster. In this case, update the trust relationship of the IAM role so that the EC2 instances can assume the role. You can set this up in the **Trust relationships** tab of the IAM role:
+
+
+
+
+![](./static/harness-ecs-delegate-11.png)
+
+The Harness ECS Delegate requires an IAM role and policies to execute its deployment tasks (API calls, etc.). For ECS clusters, there are two IAM roles to consider:
+
+
+* The IAM role assigned to the cluster. When you set up an ECS cluster you select an IAM role for the cluster.
+* The IAM role assigned to the Task Definition.
+
+
+For this tutorial, we will apply all of the roles and policies needed by the ECS Delegate to the IAM role assigned to the cluster.
+
+
+The IAM roles and policies applied to the cluster are inherited by the EC2 instances hosting the cluster. Consequently, in a production deployment, you might elect to add the roles and policies to the Task Definition for the Harness ECS Delegate instead. For more information on the best practices with ECS roles, and on adding the IAM role and policy, see [Amazon ECS Container Instance IAM Role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance_IAM_role.html) from AWS. 
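The trust relationship mentioned above follows the standard AWS trust-policy shape; the sketch below builds one that lets EC2 instances assume the role (shown as a Python dict purely for illustration, you would normally paste the JSON into the **Trust relationships** tab):

```python
import json

# Standard trust policy allowing EC2 instances to assume the role,
# matching the Trust relationships tab shown above.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```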
+
+
+Ensure that you add the IAM roles and policies to your ECS cluster when you create it. You cannot add IAM roles to an existing ECS cluster, but you can add policies to whatever role is already assigned to an existing ECS cluster.
+**Need a Cluster?** If you do not have an AWS account, you can use an [AWS Free Tier account](https://aws.amazon.com/free/) and create an ECS cluster by following the steps in [ECS Getting Started](https://console.aws.amazon.com/ecs/home?region=us-east-1#/getStarted) from Amazon. If you do have an AWS account and you want to evaluate Harness with ECS, you can simply create a new ECS cluster in your AWS account.
+
+
+Though it is not a requirement, as a best practice, install the Delegate in the same VPC as the AWS resources Harness will use. You can even choose to install the Delegate in the same subnet. Installing the Delegate in the same VPC will help you to avoid cross-VPC networking complexities.
+
+#### Set up ECS Delegate in AWS
+
+
+In most cases, you will want to run the ECS Delegate with the default launch type, EC2. You can also run it as a Fargate launch type, but this requires some additional steps.
+
+
+There are two specs in the Harness ECS Task Spec download, **ecs-task-spec.json** and **service-spec-for-awsvpc-mode.json**. For both EC2 and awsvpc/Fargate, use the **ecs-task-spec.json** spec to create the default task definition named **harness-delegate-task-spec**.
+
+
+For EC2, you simply reference the definition name when using the `aws ecs create-service` command.
+
+
+For awsvpc network mode and Fargate, use the **service-spec-for-awsvpc-mode.json** service spec when using the `aws ecs create-service` command and it will reference the **harness-delegate-task-spec** task definition.
+
+
+The following procedure describes how to set up the ECS Delegate for the common EC2 launch type scenario:
+
+
+1. Download the ECS Delegate Task Spec.
+	1. 
In **Harness Manager**, click **Setup**, and then click **Harness Delegates**.
+	2. Click **Download Delegate**, and click the copy icon next to **ECS Task Spec**. The **Delegate Setup** dialog appears.
+	
+	   ![](./static/harness-ecs-delegate-12.png)
+	
+	3. In **Delegate Group Name**, enter the name for your Delegate. When you add more ECS Delegates in the future, you can add to this group. All Delegates in this group use the same Task Definition, and share the same Delegate settings, including Selectors. When you change a Selector, it will apply to all Delegates running under that Group.
+	4. In **Profile**, select a Profile for the Delegate. The default is named **Primary**. For more information, see [Delegate Profiles](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#delegate_profiles).
+	5. Select **Use AWS VPC Mode** if you want to run the ECS Delegate task with a [FARGATE launch type](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_types.html).
+	6. In **Hostname**, enter a hostname for the ECS Delegate. If you do not enter a hostname, ECS will use your Docker container ID as the hostname for the ECS Delegate. If you provide a hostname, ECS uses it.
+	7. Click **Download**. The ECS Task Spec is downloaded. Next, you will use the aws CLI to register the ECS Task Spec and then create the ECS service for the ECS Delegate.
+2. Register the ECS Task Spec in AWS.
+	1. Open a Terminal and navigate to the folder where you downloaded the ECS Task Spec: `$ cd /Users/johnsmith/Desktop/delegates/ECS`
+	2. Extract the ECS Task Spec download.
+	 `$ tar -zxvf harness-delegate-ecs.tar.gz`
+	3. Navigate to the extracted folder: `cd harness-delegate-ecs`.
+	4. Log into AWS using your AWS Access Key ID and AWS Secret Key ID.
+	
+	  `$ aws configure`
+	
+	  `AWS Access Key ID [****************LPAA]: XXXXXXX`
+	
+	  `AWS Secret Access Key [****************4z52]: XXXXXXX`
+	
+	5. Register the ECS task definition using the Harness ECS Task Spec. 
+
+	  `$ aws ecs register-task-definition --cli-input-json file://ecs-task-spec.json`
+
+	  The JSON for the task is output.
+
+	6. View the completed task.
+
+	  `$ aws ecs list-task-definitions`
+
+	  The `taskDefinitionArns` is output.
+
+3. Obtain the name of the ECS cluster where you want to create the ECS service. The cluster must have a minimum of 8GB of memory (m5ad.xlarge minimum).
+
+4. Create the ECS service for the ECS Delegate.
+
+	1. Create the ECS service using the task definition, providing the service name in `--service-name`, cluster name in `--cluster`, and the desired number of tasks in `--desired-count`. The cluster will need a minimum of 8GB of memory per task.
+
+	  `$ aws ecs create-service --service-name ecs-example --task-definition harness-delegate-task-spec --cluster default --desired-count 1`
+
+	  The output will display the JSON for the new service.
+
+	  `{`
+
+	  `"service": {`
+
+	  `"status": "ACTIVE",`
+
+	  `"serviceRegistries": [],`
+
+	  `"pendingCount": 0,`
+
+	  `"launchType": "EC2",`
+
+	  `"schedulingStrategy": "REPLICA",`
+
+	  `"loadBalancers": [],`
+
+	  `"placementConstraints": [],`
+
+	  `"createdAt": 1551222417.28,`
+
+	  `"desiredCount": 1,`
+
+	  `"serviceName": "ecs-delegate",...`
+
+	2. View the new service.
+
+	  `$ aws ecs list-services --cluster default`
+
+	  The output will display the new service:
+
+	  `{`
+
+	  `"serviceArns": [`
+
+	  `"arn:aws:ecs:us-west-1:023826572170:service/ecs-delegate",`
+
+	  `...`
+
+	  `}`
+
+	3. Wait 5 to 10 minutes for ECS to allocate resources for the service.
+5. View the new ECS Delegate in Harness Manager.
+	1. In **Harness Manager**, open the **Installations** page. When the ECS Delegate connects to the Harness Manager, it is listed with a status of **Connected**:
+	
+	   ![](./static/harness-ecs-delegate-13.png)
+	
+	   Congratulations! You are done installing and running the ECS Delegate.
+	   The following steps simply show you how to use a Selector name to identify this Delegate when making a connection to AWS. 
You simply instruct Harness to connect to AWS using the same IAM role as the Delegate via its Selector name. + +6. Once the Delegate is listed in Harness Manager, assign a Selector to the Delegate. + 1. Next to the **Selectors** label in the Delegate listing, click **Edit**. + 2. In the **Edit Selector** dialog, enter a Selector name, for example, **ecs-delegate**, and press **Enter**. Click **SUBMIT**. The Selector is listed. +7. Use the ECS Delegate for a Cloud provider connection. + 1. In **Harness Manager**, click **Setup**. + 2. Click **Cloud Providers**. The **Cloud Providers** page appears. + 3. Click **Add Cloud Provider**. The **Cloud Provider** dialog appears. + 4. In **Type**, select **AWS**. + 5. In **Display Name**, enter a name for the Cloud Provider, such as **aws-ecs**. + 6. Enable the **Assume IAM Role on Delegate** option. + 7. In **Delegate Selector**, click to select the Selector you gave the Delegate. + 8. Click **SUBMIT**. The Cloud Provider is added. + +Later, when you create a Harness Service or an Infrastructure Definition in a Harness Environment, you will select this Cloud Provider and Harness will use the connection to obtain ECS cluster and networking information. + + +#### Fargate Specs and Delegates + + +For awsvpc network mode and Fargate, use the **service-spec-for-awsvpc-mode.json** service spec when using the `aws ecs create-service` command and it will reference the **harness-delegate-task-spec** task definition. + + +For the Fargate launch type, you must update the ECS task spec with the following + [standard Fargate settings](https://aws.amazon.com/blogs/compute/migrating-your-amazon-ecs-containers-to-aws-fargate/): + + +* Set `requiresCompatibilities` to `FARGATE` instead of `EC2`. +* Add `executionRoleArn` to the ECS task spec. This is needed by FARGATE launch types. + + +Here is an example: + + +``` +{ + "containerDefinitions": [ + { + ... 
+
+	}
+  ],
+  "requiresCompatibilities": [
+    "FARGATE"
+  ],
+  "executionRoleArn": "arn:aws:iam:::role/ecsTaskExecutionRole",
+  "memory": "6144",
+  "networkMode": "awsvpc",
+  "cpu": "1024",
+  "family": "harness-delegate-task-spec"
+}
+```
+
+##### Multiple Delegates on Delegates Page
+
+
+For ECS Delegates using the EC2 launch type, Harness uses the hostname in the Task Spec as the Delegate name.
+
+
+For ECS Delegates using the Fargate launch type, or when using awsvpc mode, Harness cannot use the hostname in the Task Spec because AWS does not allow it. Harness uses the container ID instead (the Docker container ID and not the Amazon ECS container ID for the container).
+
+
+Consequently, every time a new task is run, a new Delegate is registered in Harness. This can result in multiple Delegates on the Harness Manager **Harness Delegates** page.
+
+
+#### Multiple ECS Delegates
+
+
+You can add multiple ECS Delegates using the following methods:
+
+
+* **Add more tasks to the ECS service** - If you installed the ECS Delegate as an ECS service, you can update the number of tasks in the ECS service. Find the task in the ECS console and click **Run more like this**. New Delegates will be created and will have the same Delegate Group in Harness Manager.
+* **Add more services** - Create a new ECS service, in the same or another cluster, using the same ECS task definition.
+* **Add more tasks** - If you are running the ECS Delegate as an individual task, and not as an ECS service, you can create more ECS tasks using the same ECS task definition. It does not matter if you use the same ECS cluster.
+
+
+#### ECS Delegate Options
+
+
+You can use the ECS Delegate Task Spec to set up the ECS Delegate in one of two ways:
+
+
+* **Recommended** - Create ECS services using the task definition created from the ECS Delegate Task Spec. An ECS service will spin up a new ECS Delegate task if any ECS Delegate task goes down, thus maintaining a persistent ECS Delegate. 
+
+* Create and run individual ECS tasks using the task definition created from the ECS Delegate Task Spec.
+
+
+#### Network Modes
+
+
+When you download the ECS Delegate Task Spec, you can select **awsvpc** as the network mode. When you create the service using the ECS Delegate Task Spec, use the service-spec-for-awsvpc-mode.json file:
+
+
+```
+aws ecs create-service --cli-input-json file://<$PATH>/service-spec-for-awsvpc-mode.json
+```
+
+The ECS console will request network configuration info when you run the Delegate task or service, including subnets, security groups, and public IP (for Fargate launch type).
+
+
+#### Change ECS Delegate Defaults
+
+
+To change CPU, memory, port mappings, or hostname, edit the default values in the **ecs-task-spec.json** file. You can also change any other JSON fields as needed.
+
+
+#### Trust Relationships and Roles
+
+
+If you have an IAM role that you want an ECS task to use, you need to add a trust relationship using a **taskRoleArn** definition in the ECS task definition. Consequently, if you have an IAM role that you want the ECS Delegate to use, you need to add a **taskRoleArn** definition in the ECS Delegate task definition.
+
+
+The taskRoleArn is the resource ARN of an IAM role that grants containers in the task permission to call AWS APIs on your behalf.
+
+
+By default, the ECS Delegate task definition does not use a taskRoleArn, but uses the cluster-level IAM role that was used to create the existing cluster.
+
+
+Here is an example of an ECS Delegate task definition with the taskRoleArn added before the container definition:
+
+
+```
+{
+  "ipcMode": null,
+  "executionRoleArn": null,
+  "taskRoleArn": "arn:aws:iam::123456789012:role/my-task-role",
+  "containerDefinitions": [
+    {
+      "dnsSearchDomains": null,
+      "logConfiguration": null,
+      "entryPoint": null,
+      "portMappings": [
+        {
+... 
+
+```
+
+For more information, see [Modifying a Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_manage_modify.html) from AWS.
+
+### Add a Delegate Selector
+
+
+When Harness makes a connection to your ECS cluster via its Delegates, it will select the best Delegate according to its history and [other factors](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#how_does_harness_manager_pick_delegates). To ensure a specific Delegate is used by a Harness entity, you can scope the Delegate as explained in [Delegate Scope](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#delegate_scope), or you can add Selectors to Delegates and then reference the Selectors in commands and configurations.
+
+
+For this guide, we will use a Delegate Selector. Later, when you add an AWS Cloud Provider to your Harness account, you will use the Delegate Selector you added to ensure the Cloud Provider uses that Delegate.
+
+
+For steps on using a Delegate Selector with your ECS Delegate, see the steps in [Set up ECS Delegate in AWS](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#set_up_ecs_delegate_in_aws).
+
+
+### Next Step
+
+
+* [2 - ECS Connectors and Providers Setup](ecs-connectors-and-providers-setup.md)
+
+
diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/run-an-ecs-task.md b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/run-an-ecs-task.md
new file mode 100644
index 00000000000..ee64f9d59c0
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/run-an-ecs-task.md
@@ -0,0 +1,264 @@
+---
+title: Run an ECS Task
+description: In addition to deploying tasks as part of your standard ECS deployment, you can use the ECS Run Task step to run individual tasks separately as a step in your ECS Workflow. 
The ECS Run Task step is…
+sidebar_position: 1100
+helpdocs_topic_id: jr8rhn5bk5
+helpdocs_category_id: df9vj316ec
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+In addition to deploying tasks as part of your [standard ECS deployment](https://docs.harness.io/article/j39azkrevm-aws-ecs-deployments), you can use the ECS Run Task step to run individual tasks separately as a step in your ECS Workflow.
+
+The ECS Run Task step is available in all ECS Workflow types.
+
+For more information, see [Running tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_run_task.html) from AWS.
+
+An example of when you run a task separately is a one-time or periodic batch job that does not need to keep running or restart when it finishes.
+
+### Before You Begin
+
+* [AWS ECS Quickstart](https://docs.harness.io/article/j39azkrevm-aws-ecs-deployments)
+* [ECS How-tos](https://docs.harness.io/category/aws-ecs-deployments)
+* [Deploy Multiple ECS Sidecar Containers](deploy-multiple-containers-in-a-single-ecs-workflow.md)
+
+### Supported Platforms and Technologies
+
+See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms).
+
+### Limitations
+
+In the ECS Run Task Workflow step's **Inline** text area, you cannot enter multiple task definitions. You can enter multiple task definitions using the **Remote** option, described in this topic.
+
+### Review: Running ECS Tasks
+
+The ECS Run Task step is independent of the Harness Service, but it inherits configurations like ECS Launch Type, Cluster, etc., from the Infrastructure Definition in the Workflow.
+
+The ECS Run Task step is the same as using the [run-task command](https://docs.aws.amazon.com/cli/latest/reference/ecs/run-task.html) in the AWS ECS CLI.
+
+The ECS Run Task step has two stages:
+
+1. Harness registers the task you define in the Workflow, and verifies the registration.
+2. 
Harness triggers the task, and determines if it was triggered successfully.
+
+The output in the Workflow deployment looks something like this:
+
+
+```
+Creating a task to fetch files from git.
+SuccessFully Downloaded files from Git!
+1 tasks were found in task family nginx, 1 are stopped and 0 are running
+Registering task definition with family => nginx
+Task with family name nginx is registered => arn:aws:ecs:us-east-1:1234567891011:task-definition/nginx:12
+Triggering ECS run task arn:aws:ecs:us-east-1:1234567891011:task-definition/nginx:12 in cluster Q-FUNCTIONAL-TESTS-DO-NOT-DELETE
+1 Tasks were triggered sucessfully and 0 failures were recieved.
+Task => arn:aws:ecs:us-east-1:1234567891011:task/Q-FUNCTIONAL-TESTS-DO-NOT-DELETE/e04a7c7428414a69abf5bf2f801a12ce succeeded
+Skipping the steady state check as expected.
+```
+If you are new to ECS task scheduling and running tasks manually, review the following topics from AWS:
+
+* [Scheduling Amazon ECS tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/scheduling_tasks.html)
+* [Running tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_run_task.html)
+* [RunTask API](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html)
+
+### Step 1: Add ECS Run Task to Workflow
+
+This step assumes you have an existing Harness ECS Workflow. If you have not created one, see the [AWS ECS Quickstart](https://docs.harness.io/article/j39azkrevm-aws-ecs-deployments) and [AWS ECS Deployments](https://docs.harness.io/category/aws-ecs-deployments) how-tos.
+
+1. In your ECS Workflow, in the **Set up Container** section, click **Add Step**.
+2. Select **ECS Run Task**.
+3. In the ECS Run Task settings, enter a name.
+
+### Step 2: ECS Task Family Name
+
+1. Enter a family name.
+
+When Harness registers the task definition, it will use this family name. 
+ +The first task definition that is registered into a particular family is given a revision of 1, and any task definitions registered after that are given a sequential revision number. + +If the task definition you enter later uses the `family` parameter, the value provided in that parameter will override the family name you enter in the **ECS Run Task** step. + +### Option 1: Add Inline Task Definition + +The Task Definition must follow the syntax described by AWS in [RegisterTaskDefinition](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RegisterTaskDefinition.html). + +1. In **Add Task Definition**, click **Inline**. +2. Enter the task definition. + +For example, here is a task definition from the [AWS sample repo](https://github.com/aws-samples/aws-containers-task-definitions/blob/master/nginx/nginx_ec2.json): + + +``` +{ + "requiresCompatibilities": [ + "EC2" + ], + "containerDefinitions": [ + { + "name": "nginx", + "image": "nginx:latest", + "memory": 256, + "cpu": 256, + "essential": true, + "portMappings": [ + { + "containerPort": 80, + "protocol": "tcp" + } + ], + "logConfiguration": { + "logDriver": "awslogs", + "options": { + "awslogs-group": "awslogs-nginx-ecs", + "awslogs-region": "us-east-1", + "awslogs-stream-prefix": "nginx" + } + } + } + ], + "volumes": [], + "networkMode": "bridge", + "placementConstraints": [], + "family": "nginx" +} +``` +If you have an existing Task Definition, you can paste it into the JSON. 
You can obtain the Task Definition from the ECS console:
+
+![](./static/run-an-ecs-task-36.png)
+
+You can also obtain the Task Definition using the AWS CLI ([describe-task-definition](https://docs.aws.amazon.com/cli/latest/reference/ecs/describe-task-definition.html)):
+
+`aws ecs describe-task-definition --task-definition ecsTaskDefinitionName`
+
+The task definitions support Harness [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) and any other [Harness variables](https://docs.harness.io/article/9dvxcegm90-variables) available at the point when the ECS Run Task step is executed.
+
+### Option 2: Add Remote Task Definition
+
+The Task Definition must follow the syntax described by AWS in [RegisterTaskDefinition](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RegisterTaskDefinition.html).
+
+1. In **Add Task Definition**, click **Remote**.
+2. In **Source Repository**, select the Harness Source Repo Provider you added. See [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers).
+3. In **Commit ID**, select **Latest from Branch** or **Specific Commit ID**.
+4. In **Branch/Commit ID** (required), enter the branch or commit ID for the remote repo.
+5. In **File Path**, enter the repo path to the task definition file.
+For example, if the repo you set up in your Source Repo Provider is **https://github.com/aws-samples/aws-containers-task-definitions**, and the file containing your task definition is at the path **aws-samples/aws-containers-task-definitions/nginx/nginx\_ec2.json**, you would enter **nginx/nginx\_ec2.json**. 
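The File Path rule above (the path is relative to the repo root, not the full checkout path) can be sketched with a hypothetical helper; the function name and splitting strategy are illustrative only:

```python
def repo_relative_path(full_path, repo_name):
    """Return the portion of full_path after the repo root directory.
    Hypothetical helper illustrating the File Path rule above."""
    parts = full_path.split("/")
    if repo_name not in parts:
        raise ValueError(f"{repo_name} not found in {full_path}")
    return "/".join(parts[parts.index(repo_name) + 1:])

print(repo_relative_path(
    "aws-samples/aws-containers-task-definitions/nginx/nginx_ec2.json",
    "aws-containers-task-definitions"))  # -> nginx/nginx_ec2.json
```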
When you deploy the Workflow, the output of the ECS Run Task step shows the git fetch:
+
+
+```
+Fetching manifest files from git for Service
+Git connector Url: https://github.com/wings-software/quality-archive.git
+Branch: master
+
+Fetching following Files :
+- ECS/Task_Definitions/Nginx_EC2/nginx_ec2.json
+
+Successfully fetched following files:
+- ECS/Task_Definitions/Nginx_EC2/nginx_ec2.json
+
+Done.
+```
+The task definitions support Harness [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) and any other [Harness variables](https://docs.harness.io/article/9dvxcegm90-variables) available at the point when the ECS Run Task step is executed.
+
+#### Multiple Task Definitions
+
+In **File Path**, you can enter multiple task definitions, separated by commas:
+
+
+```
+ECS/Task_Definitions/Task_Definitions_EC2/nginx_ec2.json,ECS/Task_Definitions/Task_Definitions_EC2/wildfly_ec2.json
+```
+### Option 3: Skip Steady State Check
+
+If you select this option, Harness does not check whether the task reached steady state after it was triggered.
+
+If you do not select this option, Harness polls the ECS task to verify that it was triggered successfully.
+
+### Step 3: Set Timeout
+
+Enter a timeout for the step. Keep in mind the nature of your ECS task and whether it will take a long time to run.
+
+If you did not select Skip Steady State Check, and you have a brief timeout, Harness might check for steady state before your task is completed. This will result in a failure.
+
+You cannot use Harness variable expressions in this setting. They are supported in Basic and Canary Workflow ECS Service Setup steps when using Replica Scheduling.
+
+### Step 4: Deploy Workflow
+
+When you deploy the Workflow, the ECS Run Task step shows a successful deployment:
+
+![](./static/run-an-ecs-task-37.png)
+
+Here is an example of the output from a deployed ECS Run Task step:
+
+
+```
+Creating a task to fetch files from git. 
+
+SuccessFully Downloaded files from Git!
+1 tasks were found in task family nginx, 1 are stopped and 0 are running
+Registering task definition with family => nginx
+Task with family name nginx is registered => arn:aws:ecs:us-east-1:12345678910:task-definition/nginx:12
+Triggering ECS run task arn:aws:ecs:us-east-1:12345678910:task-definition/nginx:12 in cluster Q-FUNCTIONAL-TESTS-DO-NOT-DELETE
+1 Tasks were triggered sucessfully and 0 failures were recieved.
+Task => arn:aws:ecs:us-east-1:12345678910:task/Q-FUNCTIONAL-TESTS-DO-NOT-DELETE/e04a7c7428414a69abf5bf2f801a12ce succeeded
+Skipping the steady state check as expected.
+```
+### Review: Rollbacks
+
+ECS returns [exit codes](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_Container.html#ECS-Type-Container-exitCode) from the container. These are standard [Docker exit status codes](https://docs.docker.com/engine/reference/run/#exit-status). The exit code 0 means success. A non-zero exit code indicates failure.
+
+Harness checks these codes as part of deployment to determine success or failure.
+
+If the ECS Run Task step fails, Harness rolls back the Workflow according to its [Failure Strategy](https://docs.harness.io/article/vfp0ksdzg3-define-workflow-failure-strategy-new-template).
+
+Once a rollback occurs, the resources created by the ECS Run Task step still need to be explicitly cleaned up.
+
+You can delete the resources created by adding a [Shell Script step](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) in **Rollback Steps**. For example, using the AWS ECS CLI [delete-service](https://docs.aws.amazon.com/cli/latest/reference/ecs/delete-service.html) command.
+
+If you want to execute AWS CLI commands, ensure that the Delegate host has the AWS CLI installed via a Delegate Profile. See [Common Delegate Profile Scripts](https://docs.harness.io/article/nxhlbmbgkj-common-delegate-profile-scripts). 
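The exit-code convention described above can be expressed directly; this sketch is illustrative and assumes a task's result is the list of its containers' exit codes:

```python
def task_succeeded(exit_codes):
    """A task counts as successful only if every container exited 0,
    per the standard Docker exit-status convention noted above."""
    return all(code == 0 for code in exit_codes)

assert task_succeeded([0])           # single container, clean exit
assert not task_succeeded([0, 137])  # one container exited non-zero
```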
+
+The **Rollback Containers** step in **Rollback Steps** only applies to the core service deployed by the Workflow. If a Workflow containing only an ECS Run Task step fails, the **Rollback Containers** step is skipped.
+
+![](./static/run-an-ecs-task-38.png)
+
+### Review: Tags Support
+
+Currently, this feature is behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Harness will remove Feature Flags for Harness Professional and Essentials editions. Once the feature is released to a general audience, it's available for Trial and Community Editions.
+
+You can add ECS tags to your task definition just as you would in the AWS console or CLI.
+
+You can use Harness [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) in both keys and values.
+
+For example:
+
+
+```
+...
+  "cpu" : "128",
+  "memory" : "256",
+  "tags" : [ {
+    "key": "4713abcd",
+    "value": "þþÿÿ"
+  },
+  {
+    "key": "6422abcd",
+    "value": "þþÿÿ"
+  },
+  {
+    "key": "7592abcd",
+    "value": "þþÿÿ"
+  },
+  {
+    "key": "${workflow.variables.foo}",
+    "value": "${workflow.variables.bar}"
+  }
+],
+  "inferenceAccelerators" : [ ]
+}
+...
+```
+When the ECS task definition is registered, you will see the tags in AWS.
+
+Tags must meet the ECS requirements. See [tags](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RegisterTaskDefinition.html#API_RegisterTaskDefinition_RequestSyntax) in RegisterTaskDefinition from AWS.
+
+### Configure As Code
+
+To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. 
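You can also sanity-check a tag list before registering the task definition. The limits used here (up to 50 tags, keys up to 128 characters, values up to 256) are the documented AWS tag restrictions; the helper itself is illustrative, not part of Harness:

```python
def validate_ecs_tags(tags, max_tags=50, key_limit=128, value_limit=256):
    """Raise ValueError if a tag list breaches the ECS tag restrictions."""
    if len(tags) > max_tags:
        raise ValueError(f"too many tags: {len(tags)} > {max_tags}")
    for tag in tags:
        if not tag.get("key") or len(tag["key"]) > key_limit:
            raise ValueError(f"bad key: {tag!r}")
        if len(tag.get("value", "")) > value_limit:
            raise ValueError(f"bad value: {tag!r}")

validate_ecs_tags([{"key": "team", "value": "payments"}])  # passes silently
```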
+ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/_fargate.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/_fargate.png new file mode 100644 index 00000000000..552cd692b05 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/_fargate.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-01.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-01.png new file mode 100644 index 00000000000..39ff6acf250 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-01.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-02.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-02.png new file mode 100644 index 00000000000..88904e61ddb Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-02.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-03.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-03.png new file mode 100644 index 00000000000..23212b3fc4c Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-03.png differ diff --git 
a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-04.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-04.png new file mode 100644 index 00000000000..15a1d50a5e6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-04.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-05.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-05.png new file mode 100644 index 00000000000..f8ba00a5516 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-05.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-06.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-06.png new file mode 100644 index 00000000000..23212b3fc4c Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-06.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-07.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-07.png new file mode 100644 index 00000000000..9859be2804c Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-07.png differ diff --git 
a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-08.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-08.png new file mode 100644 index 00000000000..1564ea4f370 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-08.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-09.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-09.png new file mode 100644 index 00000000000..0d5e20d9114 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-09.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-10.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-10.png new file mode 100644 index 00000000000..ef9ce9981c9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/deploy-multiple-containers-in-a-single-ecs-workflow-10.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-50.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-50.png new file mode 100644 index 00000000000..7e41fe2e812 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-50.png differ diff --git 
a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-51.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-51.png new file mode 100644 index 00000000000..2a9fa652581 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-51.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-52.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-52.png new file mode 100644 index 00000000000..c3fc824cca9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-52.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-53.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-53.png new file mode 100644 index 00000000000..fef280d7086 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-53.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-54.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-54.png new file mode 100644 index 00000000000..adddf0bdbeb Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-54.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-55.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-55.png new file mode 100644 index 00000000000..3e5bd25543a Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-55.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-56.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-56.png new file mode 100644 index 00000000000..510253062fc Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-56.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-57.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-57.png new file mode 100644 index 00000000000..b7cdb277938 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-57.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-58.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-58.png new file mode 100644 index 00000000000..b43cc5dae05 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-58.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-59.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-59.png new file mode 100644 index 00000000000..be570492e0a Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-59.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-60.png 
b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-60.png new file mode 100644 index 00000000000..3427314e24c Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-60.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-61.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-61.png new file mode 100644 index 00000000000..eb7c20336a4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-61.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-62.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-62.png new file mode 100644 index 00000000000..3a1393bf569 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-62.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-63.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-63.png new file mode 100644 index 00000000000..341946e9d51 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-63.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-64.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-64.png new file mode 100644 index 00000000000..d5b7813156b Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-64.png differ diff --git 
a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-65.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-65.png new file mode 100644 index 00000000000..abfbf0082d9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-65.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-66.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-66.png new file mode 100644 index 00000000000..5df7fa03544 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-66.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-67.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-67.png new file mode 100644 index 00000000000..4bda639f5fe Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-67.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-68.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-68.png new file mode 100644 index 00000000000..094294fd7e1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-68.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-69.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-69.png new file mode 100644 index 00000000000..46c9aa3ba1c Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-69.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-70.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-70.png new file mode 100644 index 00000000000..024de24c0ee Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-70.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-71.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-71.png new file mode 100644 index 00000000000..acd2730bfa3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-71.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-72.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-72.png new file mode 100644 index 00000000000..8a14bc723a3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-72.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-73.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-73.png new file mode 100644 index 00000000000..b7648353e9d Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-73.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-74.png 
b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-74.png new file mode 100644 index 00000000000..4a47c012bb9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-74.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-75.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-75.png new file mode 100644 index 00000000000..1a787340a2a Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-75.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-76.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-76.png new file mode 100644 index 00000000000..d038a4d3984 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-76.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-77.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-77.png new file mode 100644 index 00000000000..5c86989811f Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-77.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-78.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-78.png new file mode 100644 index 00000000000..e1fe872dad3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-78.png differ diff --git 
a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-79.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-79.png new file mode 100644 index 00000000000..3dd82f635af Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-79.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-80.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-80.png new file mode 100644 index 00000000000..a463f416d68 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-80.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-81.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-81.png new file mode 100644 index 00000000000..305124d1304 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-81.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-82.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-82.png new file mode 100644 index 00000000000..c3fa55795d7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-82.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-83.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-83.png new file mode 100644 index 00000000000..11883ba1a76 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-83.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-84.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-84.png new file mode 100644 index 00000000000..d5a7a981aa2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-84.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-85.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-85.png new file mode 100644 index 00000000000..5df7fa03544 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-85.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-86.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-86.png new file mode 100644 index 00000000000..87529515fce Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-86.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-87.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-87.png new file mode 100644 index 00000000000..1768f94c4b3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-87.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-88.png 
b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-88.png new file mode 100644 index 00000000000..bb5eac36eaf Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-88.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-89.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-89.png new file mode 100644 index 00000000000..d5db9047934 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-blue-green-workflows-89.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-connectors-and-providers-setup-00.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-connectors-and-providers-setup-00.png new file mode 100644 index 00000000000..44b6bcac7af Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-connectors-and-providers-setup-00.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-environments-91.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-environments-91.png new file mode 100644 index 00000000000..50fe523d46b Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-environments-91.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-39.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-39.png new file mode 100644 index 00000000000..24ba4925802 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-39.png differ diff --git 
a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-40.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-40.png new file mode 100644 index 00000000000..29670a2b91d Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-40.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-41.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-41.png new file mode 100644 index 00000000000..a699da1ecfa Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-41.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-42.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-42.png new file mode 100644 index 00000000000..b375620354e Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-42.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-43.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-43.png new file mode 100644 index 00000000000..0624091af07 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-43.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-44.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-44.png new file mode 100644 index 00000000000..135d6f2b666 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-44.png differ diff --git 
a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-45.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-45.png new file mode 100644 index 00000000000..0992d4a31b3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-45.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-46.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-46.png new file mode 100644 index 00000000000..4813a8653d5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-46.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-47.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-47.png new file mode 100644 index 00000000000..b56c4352bde Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-services-47.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-setup-in-yaml-48.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-setup-in-yaml-48.png new file mode 100644 index 00000000000..e7f019052aa Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-setup-in-yaml-48.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-setup-in-yaml-49.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-setup-in-yaml-49.png new file mode 100644 index 00000000000..53477ca91e9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-setup-in-yaml-49.png differ diff --git 
a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-troubleshooting-90.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-troubleshooting-90.png new file mode 100644 index 00000000000..6a1958cff46 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-troubleshooting-90.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-14.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-14.png new file mode 100644 index 00000000000..0f3bea128c8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-14.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-15.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-15.png new file mode 100644 index 00000000000..02b727945d0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-15.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-16.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-16.png new file mode 100644 index 00000000000..c3e0c13a942 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-16.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-17.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-17.png new file mode 100644 index 00000000000..1940f6c2798 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-17.png differ diff --git 
a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-18.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-18.png new file mode 100644 index 00000000000..dd4f473edb3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-18.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-19.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-19.png new file mode 100644 index 00000000000..0e2244ce91e Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-19.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-20.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-20.png new file mode 100644 index 00000000000..941579634b2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-20.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-21.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-21.png new file mode 100644 index 00000000000..40e3f92e46a Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-21.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-22.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-22.png new file mode 100644 index 00000000000..6c6536d095b Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-22.png differ diff --git 
a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-23.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-23.png new file mode 100644 index 00000000000..05816b11acf Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-23.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-24.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-24.png new file mode 100644 index 00000000000..23ae34f4b3b Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-24.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-25.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-25.png new file mode 100644 index 00000000000..95a876c4409 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-25.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-26.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-26.png new file mode 100644 index 00000000000..d9bf2d7dcc8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-26.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-27.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-27.png new file mode 100644 index 00000000000..107b0dbfc33 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-27.png differ diff --git 
a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-28.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-28.png new file mode 100644 index 00000000000..e6ac5b82d26 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-28.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-29.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-29.png new file mode 100644 index 00000000000..5df7fa03544 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-29.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-30.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-30.png new file mode 100644 index 00000000000..b3677a8e191 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-30.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-31.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-31.png new file mode 100644 index 00000000000..5ad072905fc Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-31.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-32.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-32.png new file mode 100644 index 00000000000..be0e99e405f Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/ecs-workflows-32.png differ diff --git 
a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/harness-ecs-delegate-11.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/harness-ecs-delegate-11.png new file mode 100644 index 00000000000..d49c98abb64 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/harness-ecs-delegate-11.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/harness-ecs-delegate-12.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/harness-ecs-delegate-12.png new file mode 100644 index 00000000000..d5c04733ff0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/harness-ecs-delegate-12.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/harness-ecs-delegate-13.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/harness-ecs-delegate-13.png new file mode 100644 index 00000000000..e63567a14b2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/harness-ecs-delegate-13.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/run-an-ecs-task-36.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/run-an-ecs-task-36.png new file mode 100644 index 00000000000..0624091af07 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/run-an-ecs-task-36.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/run-an-ecs-task-37.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/run-an-ecs-task-37.png new file mode 100644 index 00000000000..8c29756706f Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/run-an-ecs-task-37.png differ diff --git 
a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/run-an-ecs-task-38.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/run-an-ecs-task-38.png new file mode 100644 index 00000000000..8d754acc05d Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/run-an-ecs-task-38.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/use-ecs-task-and-service-definitions-in-git-repos-33.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/use-ecs-task-and-service-definitions-in-git-repos-33.png new file mode 100644 index 00000000000..92455e51734 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/use-ecs-task-and-service-definitions-in-git-repos-33.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/use-ecs-task-and-service-definitions-in-git-repos-34.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/use-ecs-task-and-service-definitions-in-git-repos-34.png new file mode 100644 index 00000000000..98aa69aa790 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/use-ecs-task-and-service-definitions-in-git-repos-34.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/use-ecs-task-and-service-definitions-in-git-repos-35.png b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/use-ecs-task-and-service-definitions-in-git-repos-35.png new file mode 100644 index 00000000000..47b0c012cb3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/static/use-ecs-task-and-service-definitions-in-git-repos-35.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/use-ecs-task-and-service-definitions-in-git-repos.md 
b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/use-ecs-task-and-service-definitions-in-git-repos.md new file mode 100644 index 00000000000..da16a34988a --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/ecs-deployment/use-ecs-task-and-service-definitions-in-git-repos.md @@ -0,0 +1,137 @@ +--- +title: Use Remote ECS Task and Service Definitions in Git Repos +description: As an alternative to entering your ECS task and/or service definitions inline, you can use your Git repo for task and/or service definition JSON files. At deployment runtime, Harness will pull these… +sidebar_position: 1200 +helpdocs_topic_id: oy6sxbgqvc +helpdocs_category_id: df9vj316ec +helpdocs_is_private: false +helpdocs_is_published: true +--- + +As an alternative to entering your [ECS task and/or service definitions inline](ecs-services.md), you can use your Git repo for task and/or service definition JSON files. At deployment runtime, Harness will pull these files and use them to create your containers and services. + +This remote definition support enables you to leverage the build tooling and scripts you currently use to update the definitions in your repos. + +You can also use a Git repo for your entire Harness Application, and sync it unidirectionally or bidirectionally. For more information, see [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code). There is no conflict between the Git repo used for remote definition files and the Git repo used for the entire Harness Application. + +### Before You Begin + +* [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers) +* [AWS ECS Quickstart](https://docs.harness.io/article/j39azkrevm-aws-ecs-deployments) + +### Limitations + +* For EC2, the required `${DOCKER_IMAGE_NAME}` placeholder must be in your task definition. See [Review: Task Definition Placeholders](#review_task_definition_placeholders) below. 
+* For Fargate, the required `${EXECUTION_ROLE}` placeholder must be in your task definition. See [Review: Task Definition Placeholders](#review_task_definition_placeholders) below. +* You can use remote files for the task definition and the service definitions, or you can use a remote task definition and inline service specification. +* You cannot use an inline task definition and remote service specification. +* Remote files must be in JSON format. +* Remote files must be formatted to meet ECS JSON formatting standards. See [task definition](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html) and [service definition](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_definition_parameters.html) parameters from AWS. +* Remote definition files are supported for Git repos only. AWS S3 buckets will be supported in the near future. + +### Supported Platforms and Technologies + +See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). + +### Step 1: Link Harness to Your Repo + +Add a Harness Source Repo Provider to connect Harness to the repo where your ECS definitions are located. + +See [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). + +### Step 2: Link Remote Definitions + +1. In your Harness ECS Service, in **Deployment Specification**, click more options (**︙**), and then select **Link Remote Definitions**. +The ECS Task Definitions settings appear. +2. In **Source Repository**, select the Harness Source Repo Provider you added. +3. In **Commit ID**, select **Latest from Branch** or **Specific Commit ID**. +4. In **Branch/Commit ID** (required), enter the branch or commit ID for the remote repo. +5. In **File Folder Path to Task Definition**, enter the repo folder path to the task definition. 
+ For example, if the repo you set up in your Source Repo Provider is **https://github.com/aws-samples/aws-containers-task-definitions**, and the folder containing your task definition is **nginx**, you would enter **nginx**. +6. If you want to enter an inline service definition, select **Use Inline Service Definition**. +7. To link to a remote service definition in the repo configured in your **Source Repository**, in **File Folder Path to Service Definition**, enter the repo folder path to the service definition. +8. Click **Submit**. + +### Review: Task Definition Placeholders + +The ECS task definition JSON uses the following placeholders. + +* [`${DOCKER_IMAGE_NAME}`](#docker_image_name) +* [`${CONTAINER_NAME}`](#container_name) +* [`${EXECUTION_ROLE}`](#execution_role) + +Ensure that the required placeholders `${DOCKER_IMAGE_NAME}` and `${EXECUTION_ROLE}` (for Fargate) are used. + +#### `${DOCKER_IMAGE_NAME}` + +**Required.** This placeholder is used with the `image` label in the JSON: `"image" : "${DOCKER_IMAGE_NAME}"`. The placeholder is replaced with the Docker image name and tag at runtime. + +``` ... "volumesFrom": [], "image": "registry.hub.docker.com/library/nginx:stable-perl", ... "name": "library_nginx_stable-perl" } ``` + +#### `${CONTAINER_NAME}` + +This placeholder is used with the `name` label in the JSON: `"name" : "${CONTAINER_NAME}"`. The placeholder is replaced with a container name based on the Docker image name at runtime. + +#### `${EXECUTION_ROLE}` + +**Required for Fargate.** This placeholder is used with the `executionRoleArn` label in the JSON. + +`"executionRoleArn" : "${EXECUTION_ROLE}"` + +At deployment runtime, the `${EXECUTION_ROLE}` placeholder is replaced with the ARN of the **Target Execution Role** used by the Infrastructure Definition of the Workflow deploying this Harness Service. 
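Pulling the placeholders together, a minimal remote task definition for Fargate might look like the following sketch. The family name, CPU/memory values, and port mapping are illustrative assumptions; only the placeholder usage comes from the sections above.

```json
{
  "family": "example-task",
  "executionRoleArn": "${EXECUTION_ROLE}",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "${CONTAINER_NAME}",
      "image": "${DOCKER_IMAGE_NAME}",
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
    }
  ]
}
```

At runtime, Harness swaps in the image name and tag, the container name, and the Target Execution Role ARN before registering the task definition.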
+ +![](./static/_fargate.png) + +You can also replace the `${EXECUTION_ROLE}` placeholder with another ARN manually in the Container Definition in the Service. This will override the **Target Execution Role** used by the Infrastructure Definition. + +Replacing the `${EXECUTION_ROLE}` placeholder manually is usually only done when using a private repo. + +In most cases, you can simply leave the placeholder as is. For more information, see [Amazon ECS Task Execution IAM Role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html) from AWS. + +### Option 1: Using Variables for Remote Definition Paths + +You can use Service Configuration variables in the paths for the remote definition files. + +This enables you to change the paths when you deploy the Service, and to override them at the Harness Environment level. + +In the Harness Service, in **Config Variables**, click **Add Variable**. + +In **Config Variable**, enter a name, such as **task\_path**, and enter a path in **Value**. + +In **ECS Task Definitions**, in **File Folder Path to Task Definition**, enter the variable expression, such as `${serviceVariable.task_path}`: + +![](./static/use-ecs-task-and-service-definitions-in-git-repos-33.png) + +You can also use Config Variables for values in your remote definitions, but this can be more complicated to manage. + +### Option 2: Override Remote Paths in Environments + +If you have used Service Config Variables in the Task Definitions settings, you can override these values at the Harness Environment level. + +See [Override a Service Configuration in an Environment](https://docs.harness.io/article/4m2kst307m-override-service-files-and-variables-in-environments) for details. 
+ +Basically, you select the Service Config variable and provide a new value: + +![](./static/use-ecs-task-and-service-definitions-in-git-repos-34.png) + +### Option 3: Override Remote Paths in Workflows + +To override a path in a Workflow, you can use a [Workflow variable](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) in the Harness ECS Service's **Task Definition** settings. + +First, you create a Workflow variable in the Workflow that will deploy the Harness ECS Service that uses a remote task definition. For example, `${workflow.variables.new_path}`. + +Next, in the Harness ECS Service's **Task Definition** settings, you add the Workflow variable expression in the **File Folder Path to Task Definition** setting. + +![](./static/use-ecs-task-and-service-definitions-in-git-repos-35.png) + +When you deploy the Workflow (independently or in a Pipeline), you are prompted to provide a value for the Workflow variable. + +You can also pass in a Workflow variable value using a Trigger or between Workflows in a Pipeline. See [Passing Variables into Workflows and Pipelines from Triggers](https://docs.harness.io/article/revc37vl0f-passing-variable-into-workflows) and [Pass Variables between Workflows](https://docs.harness.io/article/gkmgrz9shh-how-to-pass-variables-between-workflows). + +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. 
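To make the override behavior above concrete, here is a small shell sketch of how a path expression like `${serviceVariable.task_path}` resolves at deploy time. This is conceptual only — Harness performs this substitution internally; the variable name and value are taken from the examples above.

```shell
# Conceptual sketch of Harness expression resolution (not Harness code).
expr='${serviceVariable.task_path}'   # value of "File Folder Path to Task Definition"
task_path='nginx'                     # Service Config Variable, possibly overridden per Environment or Workflow

# Substitute the expression with the effective variable value.
resolved=$(printf '%s' "$expr" | sed "s|\${serviceVariable\.task_path}|$task_path|")
echo "Pulling task definition from repo folder: $resolved"
```

Whichever level supplies the value last (Service, Environment override, or Workflow variable) wins, so the same Service can pull definitions from different folders per deployment.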
+ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/1-delegate-and-connectors-for-lambda.md b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/1-delegate-and-connectors-for-lambda.md new file mode 100644 index 00000000000..c17efccfdd1 --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/1-delegate-and-connectors-for-lambda.md @@ -0,0 +1,138 @@ +--- +title: Connect to AWS for Lambda Deployments +description: Set up the Delegate and AWS Cloud Provider for Lambda. +# sidebar_position: 2 +helpdocs_topic_id: lo9taq0pze +helpdocs_category_id: 3pyb3kmkbs +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/5fnx4hgwsa). This topic sets up the Harness Delegate, Artifact Server, and Cloud Provider for your Lambda Deployment. + +### Before You Begin + +* See [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts). + +### Step 1: Create Roles and Policies + +Create an IAM role to support Harness Lambda deployments. + +You will apply this role to the host that runs the Harness Delegate you install later, or to the AWS account you use to connect with Harness. + +The AWS [IAM Policy Simulator](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_testing-policies.html) is a useful tool for evaluating policies and access. + +##### IAM Read Access + +Ensure that the IAM role assigned to the Delegate host has the **IAMReadOnlyAccess** (arn:aws:iam::aws:policy/IAMReadOnlyAccess) policy attached. The policy provides read-only access to IAM for the Delegate so that it can confirm that it has other required policies. 
+ +##### Amazon S3 + +The Lambda function metadata is pulled from an AWS S3 bucket and therefore the Delegate needs the **AmazonS3ReadOnlyAccess** (arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess) policy. + +##### EC2 and ECS + +The Delegate might be a Shell Script Delegate installed on an EC2 instance or an ECS Delegate installed on an ECS cluster. The required policies for the Delegate are described here: + +* [ECS (Existing Cluster)](https://docs.harness.io/article/whwnovprrb-cloud-providers#ecs_existing_cluster) - **AmazonEC2ContainerServiceforEC2Role** and a custom Harness policy. +* [AWS EC2](https://docs.harness.io/article/whwnovprrb-cloud-providers#aws_ec2) - **AmazonEC2FullAccess**. + +##### AWS Lambda Policies + +For the Delegate to perform operations with Lambda, it requires an IAM role with the following policies: + +* **AWSLambda\_FullAccess (previously AWSLambdaFullAccess)** (arn:aws:iam::aws:policy/AWSLambda\_FullAccess) +* **AWSLambdaRole** (arn:aws:iam::aws:policy/service-role/AWSLambdaRole) + +The IAM role attached to your EC2 Delegate host must have the **AWSLambdaRole** (arn:aws:iam::aws:policy/service-role/AWSLambdaRole) policy attached. The policy contains the `lambda:InvokeFunction` needed for Lambda deployments: + + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "lambda:InvokeFunction" + ], + "Resource": [ + "*" + ] + } + ] +} +``` +Attach the AWSLambdaRole (arn:aws:iam::aws:policy/service-role/AWSLambdaRole) policy to the IAM role for the Delegate host in EC2 or ECS. + +For more information, see [Identity-based IAM Policies for AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/access-control-identity-based.html) from AWS. 
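Attaching the policies described above from the CLI can be sketched as follows. The role name is an assumption — substitute the IAM role attached to your Delegate host. To stay safe, the script only prints the `aws iam attach-role-policy` commands so you can review them before running.

```shell
ROLE_NAME="harness-delegate-role"  # assumption: the IAM role on your Delegate host

# Print an attach command for each policy named above; pipe to `sh` to execute.
for POLICY_ARN in \
  "arn:aws:iam::aws:policy/IAMReadOnlyAccess" \
  "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess" \
  "arn:aws:iam::aws:policy/AWSLambda_FullAccess" \
  "arn:aws:iam::aws:policy/service-role/AWSLambdaRole"; do
  echo "aws iam attach-role-policy --role-name $ROLE_NAME --policy-arn $POLICY_ARN"
done
```

Note that `AWSLambdaRole` lives under the `service-role/` path prefix, unlike the other managed policies.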
+ +##### Summary + +If the IAM role assigned to the Delegate has the following policies, you will encounter no related issues: + +* Shell Script Delegate on EC2 instance policies or ECS Delegate policies +* IAMReadOnlyAccess +* AWSLambdaRole +* AWSLambda\_FullAccess (previously AWSLambdaFullAccess) + +##### Policy Requirements for Serverless Dashboard + +To see your Lambda invocations on the [Serverless Dashboard](https://docs.harness.io/article/vlj9xbj315-serverless-functions-dashboard), the [Execution Role](https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html) for the Lambda function must have the following policies: + +* AmazonEC2FullAccess +* AWSLambda\_FullAccess (previously AWSLambdaFullAccess) +* AWSLambdaVPCAccessExecutionRole +* AWSLambdaRole +* CloudWatchReadOnlyAccess + +The Function Invocations are updated every 10 minutes. + +### Step 2: Install the Harness Delegate + +The Harness Delegate runs in your AWS VPC and executes all deployment steps, such as artifact collection and commands. The Delegate makes outbound HTTPS connections to the Harness Manager only. + +The simplest method is to install a Harness Shell Script or ECS Delegate in the same AWS VPC as your Lambda functions and then set up the Harness AWS Cloud Provider to use the same IAM credentials as the installed Delegate. This is described in [Add the Cloud Provider](1-delegate-and-connectors-for-lambda.md#add-the-cloud-provider) below. + +For steps on installing a Delegate in your VPC, see [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation). + +#### Delegate Selector + +To ensure the IAM role applied to the Delegate you installed in the AWS VPC is used by your AWS Cloud Provider, you add Selectors to the Delegate and reference the Selector in the AWS Cloud Provider. 
+ +![](./static/1-delegate-and-connectors-for-lambda-00.png) + +For steps on adding Selectors, see [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation). + +### Step 3: Add an AWS Cloud Provider + +In this section, we will add a Harness AWS Cloud Provider to your Harness account to connect to AWS S3, Lambda, and the VPC. You can use a single AWS Cloud Provider or separate ones for these connections, but using a single AWS Cloud Provider is easiest. + +As Harness provides first-class support for [CloudWatch](https://docs.harness.io/article/q6ti811nck-cloud-watch-verification-overview), you can also use the same AWS Cloud Provider for your CloudWatch connection. + +#### Permissions + +The AWS Cloud Provider in this example will assume the IAM Role associated with the Delegate you installed in your VPC. If you choose to use an AWS user account for the connection, apply to its IAM role the same policies described in [IAM Roles](1-delegate-and-connectors-for-lambda.md#iam-roles) above. + +#### Add the Cloud Provider + +For the AWS Cloud Provider in Harness, you can specify an AWS account or assume the IAM role used by the installed Harness Delegate (recommended). + +![](./static/1-delegate-and-connectors-for-lambda-01.png) + +##### AWS Cloud Provider + +To set up an AWS Cloud Provider, do the following: + +1. In the Harness Manager, click **Setup**, and then click **Cloud Providers**. +2. Click **Add Cloud Provider**. The **Cloud Provider** dialog appears. +3. In **Type**, select **Amazon Web Services**. +4. In **Display Name**, enter the name that you will use to refer to this Cloud Provider when setting up your Harness Application, such as **AWS Cloud**. You will use the name when setting up Harness Environments, Service Infrastructures, and other settings. +5. In **Credentials**, select **Assume IAM Role on Delegate** (recommended) or **Enter AWS Access Keys manually**. 
If you selected **Enter AWS Access Keys manually**, enter your access key ID and secret access key. +If you selected **Assume IAM Role on Delegate**, in **Delegate Selector**, select the Selector that you added to the Delegate installed in your VPC. + +### Next Steps + +* [Add Lambda Functions](2-service-for-lambda.md) +* [Define your Lambda Target Infrastructure](3-lambda-environments.md) + diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/2-service-for-lambda.md b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/2-service-for-lambda.md new file mode 100644 index 00000000000..9ac0f4dd956 --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/2-service-for-lambda.md @@ -0,0 +1,129 @@ +--- +title: Add Lambda Functions +description: Create a Harness Service to define your Lambda functions. +# sidebar_position: 2 +helpdocs_topic_id: qp8hk4nzbo +helpdocs_category_id: 3pyb3kmkbs +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/5fnx4hgwsa). This topic describes how to create a Harness Application and add a Service that uses a function file, runtime, and handler information to define the Lambda function to deploy. 
+ +In this topic: + +* [Before You Begin](#before_you_begin) +* [Review: Artifact Source Support](#review_artifact_source_support) +* [Step 1: Create a Harness Lambda Service](#step_1_create_a_harness_lambda_service) +* [Step 2: Add Lambda Functions](#step_2_add_lambda_functions) +* [Step 3: Lambda Function Specification](#step_3_lambda_function_specification) +* [Option: Lambda Environment Variables using Service Config Variables](#option_lambda_environment_variables_using_service_config_variables) +* [Next Steps](#next_steps) + +### Before You Begin + +* [Connect to AWS for Lambda Deployments](1-delegate-and-connectors-for-lambda.md) + +### Review: Artifact Source Support + +Harness supports the following artifact sources with Lambda: + +* [Jenkins](https://docs.harness.io/article/qa7lewndxq-add-jenkins-artifact-servers) +* [Artifactory](https://docs.harness.io/article/nj3p1t7v3x-add-artifactory-servers) +* [AWS S3](1-delegate-and-connectors-for-lambda.md) +* [Nexus](https://docs.harness.io/article/rdhndux2ab-nexus-artifact-sources) +* [Custom Artifact Source](https://docs.harness.io/article/jizsp5tsms-custom-artifact-source) + +### Step 1: Create a Harness Lambda Service + +To add the Lambda Service, do the following: + +1. In your new Application, click **Services**. The **Services** page appears. +2. In the **Services** page, click **Add Service**. The **Service** dialog appears. + ![](./static/2-service-for-lambda-16.png) +3. In **Name**, enter a name for your Service, such as **aws-lambda**. You will use this name to select this Service when you set up a Harness Environment and Workflow. +4. In **Description**, enter a description for your Service. +5. In **Deployment Type**, select **AWS Lambda**. +6. Click **SUBMIT**. The new Service is displayed. + +![](./static/2-service-for-lambda-17.png) + +### Step 2: Add Lambda Functions + +An Artifact Source in a Lambda Service is the Lambda function file you want to deploy. 
The Artifact Source uses the AWS Cloud Provider you set up for your Harness account, as described in [Delegate and Connectors for Lambda](1-delegate-and-connectors-for-lambda.md). + +To add an Artifact Source to this Service, do the following: + +1. In your Lambda Service, click **Add Artifact Source**, and then click **Amazon S3**. For information on using a Custom Artifact Source, see [Custom Artifact Source](https://docs.harness.io/article/jizsp5tsms-custom-artifact-source). + + The **Amazon S3 Artifact Source** dialog appears. + + ![](./static/2-service-for-lambda-18.png) + +2. In **Cloud Provider**, select the AWS Cloud Provider you set up in [Delegate and Connectors for Lambda](1-delegate-and-connectors-for-lambda.md). +3. In **Bucket**, select the S3 bucket containing the Lambda function zip file you want. +4. In **Artifact Path**, select the Lambda function zip file containing your functions. Here is how your S3 bucket and file relate to the Artifact Source dialog: + + ![](./static/2-service-for-lambda-19.png) + + The **Meta-data Only** option is selected by default. Harness will not copy the actual zip file. During runtime, Harness passes the metadata to Lambda where it is used to obtain the file. + +5. Click **SUBMIT**. The Lambda function file is added as an Artifact Source. + + ![](./static/2-service-for-lambda-20.png) + +### Step 3: Lambda Function Specification + +In **Lambda Function Specification**, you provide details about the Lambda functions in the zip file in Artifact Source. + +Click **Lambda Function Specification**. The **AWS Lambda Function Specifications** dialog appears. + +The details you provide are very similar to the options in the AWS CLI `aws lambda create-function` command. For more information, see [create-function](https://docs.aws.amazon.com/cli/latest/reference/lambda/create-function.html) from AWS. 
+ +![](./static/2-service-for-lambda-21.png) + +Some of the options are specified in Harness Environments and Workflows to help you reuse the Service with multiple Environments and Workflows. + +By default, the **AWS Lambda Function Specifications** dialog displays a function. If you have multiple Lambda functions in the zip file in Artifact Source, click **Add Function** and provide details for each function. + +For each function in the **Functions** section, enter the following function information: + +* **Runtime** - The [Lambda runtime](https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html) that executes your function. This is the runtime for all functions in this spec. AWS can change its runtime version support. For example, AWS no longer supports **nodejs6.10**. +* **Function Name** - The name of your function. This name will become part of the function ARN in Lambda. +Harness uses default variables for the name that include your Harness Application, Service, and Environment names (`${app.name}_${service.name}_${env.name}`). If you use these, you need to append a unique suffix to each function name, for example `${app.name}_${service.name}_${env.name}_my-function`. Or you can replace the entire name. +* **Handler** - The method that the runtime executes when your function is invoked. The format for this value varies per language. See [Programming Model](https://docs.aws.amazon.com/lambda/latest/dg/programming-model-v2.html) for more information. + +For example, let's look at a Node.js function in a file named **index.js**: + + +``` +exports.handler = async function(event, context) { + console.log("EVENT: \n" + JSON.stringify(event, null, 2)) + return context.logStreamName +} +``` +The value of the **Handler** setting is the file name (**index**) and the name of the exported handler module, separated by a dot. In our example, the handler is **index.handler**. This indicates the handler module that's exported by index.js. 
+ +* **Memory Size** - The amount of memory available to the function during execution. Choose an amount [between 128 MB and 3,008 MB](https://docs.aws.amazon.com/lambda/latest/dg/limits.html) in 64 MB increments. +* **Execution Timeout** - The amount of time that Lambda allows a function to run before stopping it. The default is 3 seconds. The maximum allowed value is 900 seconds. There are two Execution Timeout settings: a default setting and a function-specific setting. + +When you are done, the **AWS Lambda Function Specifications** dialog will look something like this: + +![](./static/2-service-for-lambda-22.png) + +Click **Submit**. Your function is added to the Service. + +### Option: Lambda Environment Variables using Service Config Variables + +You can use [Config Variables](https://docs.harness.io/article/q78p7rpx9u-add-service-level-config-variables) in your Service to create [Lambda Environment Variables](https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html). + +Encrypted Config Variables will appear as plaintext Environment Variables in Lambda. + +When you deploy your function, Harness replaces any existing Environment variables with the variables you added as Service Config Variables. 
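For reference, the spec fields above map onto flags of the AWS CLI's `aws lambda create-function` command. The sketch below only prints the command so nothing is executed; the function name, bucket, key, and role ARN are illustrative assumptions, not values from your account.

```shell
MEMORY_SIZE=128   # MB: between 128 and 3008, in 64 MB increments
TIMEOUT=3         # seconds: default 3, maximum 900

# Print the equivalent create-function call (review it, then run it yourself).
echo "aws lambda create-function \
  --function-name MyApp_aws-lambda_dev_my-function \
  --runtime nodejs16.x \
  --handler index.handler \
  --memory-size $MEMORY_SIZE \
  --timeout $TIMEOUT \
  --code S3Bucket=my-bucket,S3Key=function.zip \
  --role arn:aws:iam::123456789012:role/lambda-execution-role"
```

The `--code S3Bucket=...,S3Key=...` form mirrors the **Meta-data Only** behavior described earlier: Lambda fetches the zip from S3 rather than receiving the file directly.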
+ +### Next Steps + +* [Define your Lambda Target Infrastructure](3-lambda-environments.md) +* [Create a Basic Lambda Deployment](4-lambda-workflows-and-deployments.md) +* [Troubleshooting AWS Lambda Deployments](https://docs.harness.io/article/g9o2g5jbye-troubleshooting-harness#aws_lambda) + diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/3-lambda-environments.md b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/3-lambda-environments.md new file mode 100644 index 00000000000..9824aa7013f --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/3-lambda-environments.md @@ -0,0 +1,119 @@ +--- +title: Define Your Lambda Target Infrastructure +description: Add a Harness Environment that describes your AWS Lambda computing service. +# sidebar_position: 2 +helpdocs_topic_id: 45dm9z3m2h +helpdocs_category_id: 3pyb3kmkbs +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/5fnx4hgwsa). Once you've added a Lambda Service to your Application, you can define Environments where your Service can be deployed. Within an Environment, you specify the following in an Infrastructure Definition: + +* The Lambda Service that contains your functions zip file and functions specs. (Set up in [Add Lambda Functions](2-service-for-lambda.md).) +* A deployment type. In this case, **Lambda**. +* The AWS Cloud Provider you set up in [Connect to AWS for Lambda Deployments](1-delegate-and-connectors-for-lambda.md). + +An Environment can be a Dev, QA, Production, or other Environment. You can deploy one or many Services to each Environment by creating an Infrastructure Definition in the Environment for each Service. 
+ +In this topic: + +* [Before You Begin](#before_you_begin) +* [Step 1: Create an Environment](#step_1_create_an_environment) +* [Step 2: Define the Lambda Infrastructure](#step_2_define_the_lambda_infrastructure) +* [Option: Provision the Lambda Infrastructure](#option_provision_the_lambda_infrastructure) +* [Option: Override Service Settings](#option_override_service_settings) +* [Next Steps](#next_steps) + +### Before You Begin + +* [Connect to AWS for Lambda Deployments](1-delegate-and-connectors-for-lambda.md) +* [Add Lambda Functions](2-service-for-lambda.md) + +### Step 1: Create an Environment + +The following procedure creates an Environment for the Lambda Service type, as set up in [Add Lambda Functions](2-service-for-lambda.md). + +1. In your Harness Application, click **Environments**. The **Environments** page appears. +2. Click **Add Environment**. The **Environment** dialog appears. +3. In **Name**, enter a name that describes the deployment environment, for example, **Lambda**. +4. In **Environment Type**, select **Non-Production**. +5. Click **SUBMIT**. The new **Environment** page appears. + ![](./static/3-lambda-environments-23.png) + +### Step 2: Define the Lambda Infrastructure + +You must define one or more Infrastructure Definitions for the Environment. + +A Harness Infrastructure Definition defines the AWS VPC, subnets, and security groups to use for the Lambda deployment. + +To add the Infrastructure Definition: + +1. In the Harness Environment, click **Add Infrastructure Definition**. + ![](./static/3-lambda-environments-24.png) + The **Infrastructure Definition** dialog appears. + ![](./static/3-lambda-environments-25.png) +2. Enter a **Name** that will identify this Infrastructure Definition when you [add it to a Workflow](4-lambda-workflows-and-deployments.md). +3. In **Cloud Provider Type**, select **Amazon Web Services**. +4. In **Deployment Type**, select **AWS Lambda**. 
This expands the **Infrastructure Definition** dialog to look something like this: + ![](./static/3-lambda-environments-26.png) +5. Select **Use Already Provisioned Infrastructure**, and follow the [Define a Provisioned Infrastructure](#define_provisioned_infrastructure) steps below. + +If you are using a configured Harness [Infrastructure Provisioner](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner), instead select **Map Dynamically Provisioned Infrastructure**, and then select the provisioner. The settings below are for **Use Already Provisioned Infrastructure**. + +#### Define a Provisioned Infrastructure + +The **Infrastructure Definition** dialog's lower section defines settings similar to the `‑‑role` and `‑‑vpc-config` options in the `aws lambda create-function` command. For example: + + +``` +$ aws lambda create-function --function-name ExampleApp-aws-lambda-Lambda-my-function \ +--runtime nodejs8.10 --handler index.handler --zip-file fileb://lambda/function.zip \ +--role execution-role-arn \ +--vpc-config SubnetIds=comma-separated-vpc-subnet-ids,SecurityGroupIds=comma-separated-security-group-ids +``` + +To fill out the **Infrastructure Definition** dialog's lower section: + +1. In **Cloud Provider**, select the AWS Cloud Provider you added in [Connect to AWS for Lambda Deployments](1-delegate-and-connectors-for-lambda.md). + +:::note +After your **Cloud Provider** selection, the remaining drop-down lists take a few seconds to populate. Later, some fields will again take a few seconds to repopulate based on your selections in other fields. +::: + +2. In **IAM Role**, select the [IAM role](https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html) that AWS Lambda assumes when it executes your function. +3. In **Region**, select the AWS region where your function will be used. +4. In **VPC**, to connect your function to a VPC to access private resources during execution, select the VPC. 
If you do not select a VPC, then the function executes in "*non-VPC*" mode. + +:::note +Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional, VPC-specific configuration information that includes private subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC. + +For more information and guidelines, see [Configuring a Lambda Function to Access Resources in an Amazon VPC](https://docs.aws.amazon.com/lambda/latest/dg/vpc.html) or [Configuring a Lambda function to access resources in a VPC](https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html) from AWS. +::: + +5. In **Subnets**, select the subnet IDs for the subnets (within the VPC) where the Lambda function will access resources. AWS recommends that you choose at least two subnets for Lambda to run your functions in high availability mode. +6. In **Security Groups**, select the security group ID(s) for the Lambda function. When you set a VPC for your function to access, your Lambda function loses default Internet access. If you require external Internet access for your function, make sure that your security group allows outbound connections, and that your VPC has a NAT gateway. +7. Enable **Scope to Specific Services**, and use the adjacent drop-down to select the Harness Lambda Service you created in [Add Lambda Functions](2-service-for-lambda.md). + +:::note +Scoping is a recommended step, to make this Infrastructure Definition available to any Workflow or Phase that uses your Lambda Service. +::: + + When you are done, the dialog will look something like this: + + ![](./static/3-lambda-environments-27.png) + +8. Click **Submit**. The new Infrastructure Definition is added to the Harness environment. 
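Once a deployment has used this Infrastructure Definition, you can confirm that the role, subnets, and security groups were applied. A sketch using the AWS CLI (assumes AWS credentials; the function name follows this guide's example):

```shell
# Requires AWS credentials; the function name mirrors this guide's example.
aws lambda get-function-configuration \
  --function-name ExampleApp-aws-lambda-Lambda-my-function \
  --query '{Role: Role, VpcConfig: VpcConfig}'
```

The `--query` option is a JMESPath filter, so the output is trimmed to just the execution role ARN and the VPC, subnet, and security group IDs you selected above.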
+ +### Option: Provision the Lambda Infrastructure + +With Harness, you can use a CloudFormation template to provision the Lambda infrastructure. For more information, see [Map an AWS Lambda Infrastructure](../cloudformation-category/map-cloud-formation-infrastructure.md#option-3-map-an-aws-lambda-infrastructure). + +### Option: Override Service Settings + +Your Environment can override Service Config Variables, Config Files, and other settings. This enables you to maintain a Service's native settings, but change them when the Service is used with this Environment. + +For more information, see [Override a Service Configuration](https://docs.harness.io/article/n39w05njjv-environment-configuration#override_a_service_configuration). + +### Next Steps + +* [Create a Basic Lambda Deployment](4-lambda-workflows-and-deployments.md) +* [Troubleshoot AWS Lambda Deployments](https://docs.harness.io/article/g9o2g5jbye-troubleshooting-harness#aws_lambda) + diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/4-lambda-workflows-and-deployments.md b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/4-lambda-workflows-and-deployments.md new file mode 100644 index 00000000000..2c9f365cae4 --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/4-lambda-workflows-and-deployments.md @@ -0,0 +1,224 @@ +--- +title: Create a Basic Lambda Deployment +description: Create and deploy a Basic Workflow for Lambda. +# sidebar_position: 2 +helpdocs_topic_id: 491a6etr7a +helpdocs_category_id: 3pyb3kmkbs +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/5fnx4hgwsa). + +By default, Harness Basic Workflows for Lambda have two steps: + +* **AWS Lambda** - This step deploys the function and also sets the Lambda aliases and tags for the function. 
+* **Rollback AWS Lambda** - If a deployment fails, this step uses aliases to roll back to the last successful version of a Lambda function. + +In this topic: + +* [Before You Begin](#before_you_begin) +* [Step 1: Create the Lambda Workflow](#step_1_create_the_lambda_workflow) +* [Step 2: Configure Lambda Aliases and Tags](#step_2_configure_lambda_aliases_and_tags) +* [Review: Rollback AWS Lambda Step](#review_rollback_aws_lambda_step) +* [Example: Lambda Workflow Deployment](#example_lambda_workflow_deployment) +* [Next Steps](#next_steps) + +### Before You Begin + +* [Connect to AWS for Lambda Deployments](1-delegate-and-connectors-for-lambda.md) +* [Add Lambda Functions](2-service-for-lambda.md) +* [Define your Lambda Target Infrastructure](3-lambda-environments.md) + +### Step 1: Create the Lambda Workflow + +To create a Basic Workflow for Lambda, do the following: + +1. In your Application, click **Workflows**. +2. Click **Add Workflow**. The **Workflow** dialog appears. + ![](./static/4-lambda-workflows-and-deployments-02.png) +3. In **Name**, enter a name for your Workflow, such as **Lambda Basic**. +4. In **Workflow Type**, select **Basic Deployment**. +5. In **Environment**, select the Environment you created for your Lambda deployment in [Define Your Lambda Target Infrastructure](3-lambda-environments.md). +6. In **Service**, select the Lambda Service you created in [Add Lambda Functions](2-service-for-lambda.md). +7. Select the Infrastructure Definition you created in [Define your Lambda Target Infrastructure](3-lambda-environments.md). +8. Click **SUBMIT**. The new Basic Workflow is created and pre-configured with the **AWS Lambda** step. + +### Step 2: Configure Lambda Aliases and Tags + +When you deploy the Workflow, the AWS Lambda step creates the Lambda functions defined in the Service you attached to the Workflow. 
This is the equivalent of the [aws lambda create-function](https://docs.aws.amazon.com/cli/latest/reference/lambda/create-function.html) API command. + +The next time you run the Workflow, manually or as the result of a [Trigger](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2), the AWS Lambda step updates the Lambda functions. This is the equivalent of the [aws lambda update-function-configuration](https://docs.aws.amazon.com/cli/latest/reference/lambda/update-function-configuration.html) API command. + +In the Workflow, click the **AWS Lambda** step. The **AWS Lambda** dialog appears. + +![](./static/4-lambda-workflows-and-deployments-03.png) + +The dialog provides settings for Lambda Aliases and Tags. + +#### Versioning with Aliases + +This topic assumes that you are familiar with [Lambda versioning](https://docs.aws.amazon.com/lambda/latest/dg/versioning-intro.html). + +Published Lambda functions are immutable objects (they cannot be changed), and are versioned, with the latest always published as `$LATEST`. Each version is made up of both code and configuration settings. Once the code and configuration are published, the function becomes immutable. + +Continuous delivery on Lambda requires that Harness manage the versioning (via aliases) and rollbacks. Since each new version is immutably pushed to `$LATEST`, rolling back to a previous version becomes complicated. + +Harness solves this complexity by keeping track of the aliases required to recreate the function, the code, and the configuration. An alias is a pointer to one or two versions. + +Harness handles the burden of managing the code and configuration in order to properly version, tag, or recreate the previous version, thus allowing for fully-automated rollbacks based on prescriptive failure strategies. 
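The version-and-alias bookkeeping that Harness automates corresponds roughly to these Lambda CLI calls (the function name, alias name, and version numbers are hypothetical examples; the commands require AWS credentials):

```shell
# Freeze the current $LATEST code and configuration as an immutable version.
aws lambda publish-version --function-name my-function

# Point an alias at that version; moving the alias later is how a promotion
# or rollback happens without touching the versions themselves.
aws lambda create-alias --function-name my-function --name Dev --function-version 1
aws lambda update-alias --function-name my-function --name Dev --function-version 2
```

Clients that invoke the function through the alias ARN always reach whichever version the alias currently points to, which is what makes alias-based rollback safe.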
The AWS Lambda step in the Workflow applies the alias just like you would using the AWS Lambda console: + +![](./static/4-lambda-workflows-and-deployments-04.png) + +By default, Harness names the alias with the name of the Environment by using the built-in Harness variable **${env.name}**. You can replace this with whatever alias you want, or use other built-in Harness variables by entering **$** to see which variables are available. + +![](./static/4-lambda-workflows-and-deployments-05.png) + +Once the Workflow is deployed and a Lambda function has been versioned using the alias in the **AWS Lambda** step, you can see the versioning in the AWS Lambda console: + +![](./static/4-lambda-workflows-and-deployments-06.png) + +#### Tags + +Tags are key-value pairs that you attach to AWS resources to organize them. For Lambda functions, tags simplify the process of tracking the frequency and cost of each function invocation. + +You can set the tags for your Lambda functions in the **AWS Lambda** step and, once deployed, you can see the tags in the AWS Lambda console: + +![](./static/4-lambda-workflows-and-deployments-07.png) + +##### Existing Tags are Replaced During Deployment + +When you deploy a new version of your function, Harness replaces any existing tags with the tags you added in your Workflow. If you leave the tags empty on a subsequent deployment, the tags are replaced with empty values. + +### Review: Rollback AWS Lambda Step + +In the Basic Workflow, you can see the **Rollback AWS Lambda** step. + +![](./static/4-lambda-workflows-and-deployments-08.png) + +This step initiates rollback if the AWS Lambda step fails, or if a step elsewhere in the Workflow fails. + +![](./static/4-lambda-workflows-and-deployments-09.png) + +The best way to see what the Rollback AWS Lambda step does is to look at a log for a rollback. 
+ +**Lambda rollbacks are a little unusual:** instead of rolling back to a previous, successful version, Harness takes that previous, successful version and creates a new version. The new version is deployed as the "rollback", and both the previous, successful version and the new version have the same Sha256. Let's look at an example. + +In the following scenario, the previous, successful version of the function was **version 2**. When Harness fails to publish **version 3** (we added an HTTP call that intentionally failed), it publishes the previous version as a new version and names it **version 4**. Version 3 is never deployed. + +First, Harness gets the function configuration and VPC settings from the last successful version: + + +``` +Begin command execution. +Deploying Lambda with following configuration +Function Name: ExampleApp-aws-lambda-Lambda-my-function +S3 Bucket: harness-example +Bucket Key: lambda/function.zip +Function handler: index.handler +Function runtime: nodejs8.10 +Function memory: 128 +Function execution timeout: 3 +IAM role ARN: arn:aws:iam::00000000000:role/service-role/TestAwsLamdaRole +VPC: vpc-00a7e8ea4fd1ffd9d +Subnet: [subnet-0c945c814c09c9aed, subnet-05788710b1b06b6b1] +Security Groups: sg-05e7b8b9cad94b393 +``` +Next, Harness updates and publishes the previous version as version 4: + + +``` +Function: [ExampleApp-aws-lambda-Lambda-my-function] exists. Update and Publish + +Existing Lambda Function Code Sha256: [U+zi3X2Fu+ojXZzd58XXXXXXXXXXB05evN2U=]. + +New Lambda function code Sha256: [U+zi3X2Fu+ojXZzd58MIKDKXXXXXXXXXXAB05evN2U=] + +Function code didn't change. 
Skip function code update + +Updating function configuration + +Function configuration updated successfully + +Publishing new version + +Published new version: [4] + +Published function ARN: [arn:aws:lambda:us-east-1:00000000000:function:ExampleApp-aws-lambda-Lambda-my-function:4] + +Untagging existing tags from the function: [arn:aws:lambda:us-east-1:00000000000:function:ExampleApp-aws-lambda-Lambda-my-function] + +Executing tagging for function: [arn:aws:lambda:us-east-1:00000000000:function:ExampleApp-aws-lambda-Lambda-my-function] + +Successfully deployed lambda function: [ExampleApp-aws-lambda-Lambda-my-function] + +================= +Successfully completed AWS Lambda Deploy step +``` +As you can see, the rollback succeeded and version 4 is published. + +### Example: Lambda Workflow Deployment + +Now that the Basic Workflow for Lambda is set up, you can click **Deploy** in the Workflow to deploy the Lambda functions in the Harness Service to your AWS Lambda environment. + +![](./static/4-lambda-workflows-and-deployments-10.png) + +In **Start New Deployment**, in **Build / Version**, select the zip file in the S3 bucket you set up as an Artifact Source for your Harness Lambda Service: + +![](./static/4-lambda-workflows-and-deployments-11.png) + +Click **SUBMIT**. The Workflow is deployed. + +![](./static/4-lambda-workflows-and-deployments-12.png) + +To see the completed deployment, log into your AWS Lambda console. The Lambda function is listed: + +![](./static/4-lambda-workflows-and-deployments-13.png) + +You can also log into AWS and use the [aws lambda get-function](https://docs.aws.amazon.com/cli/latest/reference/lambda/get-function.html) command to view the function: + + +``` +$ aws lambda get-function --function-name ExampleApp-aws-lambda-Lambda-my-function +{ + "Code": { + "RepositoryType": "S3", + "Location": "https://prod-04-2014-tasks.s3.amazonaws.com/snapshots/..." 
+ }, + "Configuration": { + "TracingConfig": { + "Mode": "PassThrough" + }, + "Version": "$LATEST", + "CodeSha256": "U+zi3X2Fu+ojXZzd58MIKDK56UaVASDA0KAB05evN2U=", + "FunctionName": "ExampleApp-aws-lambda-Lambda-my-function", + "VpcConfig": { + "SubnetIds": [ + "subnet-05788710b1b06b6b1", + "subnet-0c945c814c09c9aed" + ], + "VpcId": "vpc-00a7e8ea4fd1ffd9d", + "SecurityGroupIds": [ + "sg-05e7b8b9cad94b393" + ] + }, + "MemorySize": 128, + "RevisionId": "4c3d4cfd-f72b-4f4c-9c0a-031d9cfe9e46", + "CodeSize": 761, + "FunctionArn": "arn:aws:lambda:us-east-1:00000000000:function:ExampleApp-aws-lambda-Lambda-my-function", + "Handler": "index.handler", + "Role": "arn:aws:iam::00000000000:role/service-role/TestAwsLamdaRole", + "Timeout": 3, + "LastModified": "2019-06-28T22:43:32.241+0000", + "Runtime": "nodejs8.10", + "Description": "" + }, + "Tags": { + "Name": "docFunction" + } +} +``` +### Next Steps + +* [Troubleshoot AWS Lambda Deployments](https://docs.harness.io/article/g9o2g5jbye-troubleshooting-harness#aws_lambda) + diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/_category_.json b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/_category_.json new file mode 100644 index 00000000000..aa4fac02bda --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/_category_.json @@ -0,0 +1 @@ +{"label": "AWS Lambda Deployments", "position": 40, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "AWS Lambda Deployments"}, "customProps": { "helpdocs_category_id": "3pyb3kmkbs"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/aws-lambda-overview.md b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/aws-lambda-overview.md new file mode 100644 index 00000000000..7677eb7529a --- /dev/null +++ 
b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/aws-lambda-overview.md @@ -0,0 +1,49 @@ +--- +title: AWS Lambda Deployment Summary +description: Overview of deploying functions to AWS Lambda using Harness. +# sidebar_position: 2 +helpdocs_topic_id: px87zx7lbd +helpdocs_category_id: 3pyb3kmkbs +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/5fnx4hgwsa). + +Setting up a Lambda deployment is as simple as adding your function zip file, configuring function compute settings, and adding aliases and tags. Harness takes care of the rest of the deployment, making it consistent, reusable, and safe with automatic rollback. + +![](./static/aws-lambda-overview-14.png) + +For a general overview of how Harness works, see [Harness Architecture](https://docs.harness.io/article/de9t8iiynt-harness-architecture) and [Application Components](https://docs.harness.io/article/bucothemly-application-configuration). + +Basically, the Harness setup for Lambda is akin to using the AWS CLI [aws lambda](https://docs.aws.amazon.com/cli/latest/reference/lambda/index.html#cli-aws-lambda) [create-function](https://docs.aws.amazon.com/cli/latest/reference/lambda/create-function.html), [update-function-code](https://docs.aws.amazon.com/cli/latest/reference/lambda/update-function-code.html), and [update-function-configuration](https://docs.aws.amazon.com/cli/latest/reference/lambda/update-function-configuration.html) commands, as well as the many other commands that are needed. + +The benefit with Harness is that you can set up your Lambda deployment once, with no scripting, and then have your Lambda functions deployed automatically as they are updated in your AWS S3 bucket. You can even templatize the deployment Environment and Workflow for use by other DevOps engineers and developers in your team. 
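For example, the per-release CLI work that Harness replaces looks something like this (the bucket, key, and function name mirror the examples used elsewhere in this guide; the command requires AWS credentials):

```shell
# Pull the new artifact from S3 into the function and publish an
# immutable version in one step.
aws lambda update-function-code \
  --function-name ExampleApp-aws-lambda-Lambda-my-function \
  --s3-bucket harness-example \
  --s3-key lambda/function.zip \
  --publish
```

Harness runs the equivalent of this, plus the configuration, alias, and tag updates, every time the artifact in your S3 bucket changes.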
+ +Furthermore, Harness manages Lambda function versioning to perform rollback when needed. + +The following list describes the major steps we will cover in this guide: + +1. **Delegate** - Install the Harness Shell Script or ECS **Delegate** in your AWS VPC. +2. **AWS Cloud Provider** - Add the AWS Cloud Provider. This is a connection to your AWS account. The AWS Cloud Provider can use your user account or the IAM role assigned to the Delegate host. +The AWS Cloud Provider is used to connect Harness to your Lambda deployment environment and to Amazon S3 to obtain your Lambda code files. +3. **Harness Application** - Create the Harness Application for your Lambda CD pipeline. The Harness Application represents your Lambda code and functional spec, their deployment pipelines, and all the building blocks for those pipelines. Harness represents your Lambda deployment using a logical group of one or more entities: Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CD. +4. **Harness Service** - Create the Harness Service using the Lambda type. + 1. Set up your Lambda code artifact source, Lambda Function Specification, and any config variables and files. +5. **Harness Environment** - Create the Harness Environment containing the Service Infrastructure definition of your AWS deployment environment, and any overrides of Service settings. +6. **Harness Workflow** - Create the Basic deployment Harness Workflow. This Workflow will deploy the Service (your Lambda code) to the Environment (your AWS Lambda functions for the region). +You can also add your Lambda Aliases and Tags as part of the Workflow. +7. **Deploy** the Workflow. +8. Advanced options not covered in this guide: + 1. **Harness Pipeline** - Create a Harness Pipeline for your deployment, including Workflows and Approval steps. 
Typically, Harness customers will deploy Lambda Pipelines with a Workflow for Dev, QA, Stage, etc.: + ![](./static/aws-lambda-overview-15.png) + This example doesn't show Approval steps between Pipeline stages, which are also common. For more information, see our [Pipelines](https://docs.harness.io/article/zc1u96u6uj-pipeline-configuration) and [Approvals](https://docs.harness.io/article/0ajz35u2hy-approvals) topics. + 2. **Harness Trigger** - Create a Harness Trigger to automatically deploy your Workflows or Pipeline according to your criteria. Typically, customers use a Trigger to execute a Lambda Pipeline using the Trigger's **On New Artifact** condition. Each time the Lambda artifact, such as a zip file, is updated in the artifact repository (AWS S3), the Pipeline is executed and the new Lambda function is deployed. + For more information, see [Triggers](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2). + 3. **Harness Infrastructure Provisioners** - Create Harness Infrastructure Provisioners, such as CloudFormation and Terraform, for your deployment environments. For more information, see [Infrastructure Provisioners](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner). + 4. Continuous Verification: + 1. **Deployment Verification** - Once you have successfully deployed, you can add your APM and logging apps as Verification Providers, and then add Verify Steps to your Workflows. Harness will use its machine learning to find anomalies in your deployments. For more information, see [Continuous Verification](https://docs.harness.io/article/myw4h9u05l-verification-providers-list). + 2. **24/7 Service Guard** - Monitor your live applications, catching problems that surface minutes or hours following deployment. For more information, see [24/7 Service Guard](https://docs.harness.io/article/dajt54pyxd-24-7-service-guard-overview). 
+ +Harness fully integrates with **AWS CloudWatch** to apply Harness machine learning to CloudWatch's monitoring and operational data. See [CloudWatch Verification](https://docs.harness.io/article/q6ti811nck-cloud-watch-verification-overview). + +### Next Steps + +* [Troubleshooting AWS Lambda Deployments](https://docs.harness.io/article/g9o2g5jbye-troubleshooting-harness#aws_lambda) + diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/lambda-deployment-overview.md b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/lambda-deployment-overview.md new file mode 100644 index 00000000000..0e6b69d77d7 --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/lambda-deployment-overview.md @@ -0,0 +1,20 @@ +--- +title: AWS Lambda How-tos +description: Overview of deploying functions to AWS Lambda using Harness. +# sidebar_position: 2 +helpdocs_topic_id: z24n8ut61d +helpdocs_category_id: 3pyb3kmkbs +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/5fnx4hgwsa). + +Harness has first-class support for AWS Lambda deployments, enabling you to deploy your functions without having to worry about compute constraints or complexity. + +See the How-tos for connecting AWS Lambda and creating AWS Lambda deployments. 
+ +* [Connect to AWS for Lambda Deployments](1-delegate-and-connectors-for-lambda.md) +* [Add Lambda Functions](2-service-for-lambda.md) +* [Define your Lambda Target Infrastructure](3-lambda-environments.md) +* [Create a Basic Lambda Deployment](4-lambda-workflows-and-deployments.md) +* [View Lambda Deployments in the Serverless Functions Dashboard](view-lamba-deployments-in-the-serverless-functions-dashboard.md) + diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/1-delegate-and-connectors-for-lambda-00.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/1-delegate-and-connectors-for-lambda-00.png new file mode 100644 index 00000000000..68c41817997 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/1-delegate-and-connectors-for-lambda-00.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/1-delegate-and-connectors-for-lambda-01.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/1-delegate-and-connectors-for-lambda-01.png new file mode 100644 index 00000000000..82f04bec0be Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/1-delegate-and-connectors-for-lambda-01.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/2-service-for-lambda-16.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/2-service-for-lambda-16.png new file mode 100644 index 00000000000..deb29635805 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/2-service-for-lambda-16.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/2-service-for-lambda-17.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/2-service-for-lambda-17.png new 
file mode 100644 index 00000000000..4badb9277e4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/2-service-for-lambda-17.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/2-service-for-lambda-18.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/2-service-for-lambda-18.png new file mode 100644 index 00000000000..516a8d9b1c8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/2-service-for-lambda-18.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/2-service-for-lambda-19.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/2-service-for-lambda-19.png new file mode 100644 index 00000000000..4afef3b91b6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/2-service-for-lambda-19.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/2-service-for-lambda-20.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/2-service-for-lambda-20.png new file mode 100644 index 00000000000..8fb132e2ce2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/2-service-for-lambda-20.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/2-service-for-lambda-21.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/2-service-for-lambda-21.png new file mode 100644 index 00000000000..ff0eb1b24d8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/2-service-for-lambda-21.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/2-service-for-lambda-22.png 
b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/2-service-for-lambda-22.png new file mode 100644 index 00000000000..4277e896323 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/2-service-for-lambda-22.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/3-lambda-environments-23.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/3-lambda-environments-23.png new file mode 100644 index 00000000000..755588fd823 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/3-lambda-environments-23.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/3-lambda-environments-24.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/3-lambda-environments-24.png new file mode 100644 index 00000000000..50fe523d46b Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/3-lambda-environments-24.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/3-lambda-environments-25.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/3-lambda-environments-25.png new file mode 100644 index 00000000000..a9e0d9983b1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/3-lambda-environments-25.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/3-lambda-environments-26.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/3-lambda-environments-26.png new file mode 100644 index 00000000000..b4361f49a13 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/3-lambda-environments-26.png differ diff --git 
a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/3-lambda-environments-27.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/3-lambda-environments-27.png new file mode 100644 index 00000000000..7e7b521ced3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/3-lambda-environments-27.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-02.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-02.png new file mode 100644 index 00000000000..0f3bea128c8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-02.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-03.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-03.png new file mode 100644 index 00000000000..e684c4592cf Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-03.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-04.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-04.png new file mode 100644 index 00000000000..b08f3ac036b Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-04.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-05.png 
b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-05.png new file mode 100644 index 00000000000..693601ec60d Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-05.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-06.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-06.png new file mode 100644 index 00000000000..8fc03377c0a Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-06.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-07.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-07.png new file mode 100644 index 00000000000..1ed8eb46f35 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-07.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-08.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-08.png new file mode 100644 index 00000000000..8dc3bedb2a7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-08.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-09.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-09.png new file mode 100644 index 
00000000000..4f21d8fc829 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-09.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-10.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-10.png new file mode 100644 index 00000000000..bfd0a21893b Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-10.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-11.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-11.png new file mode 100644 index 00000000000..bd87dc69b9e Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-11.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-12.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-12.png new file mode 100644 index 00000000000..9e190bb344a Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-12.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-13.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-13.png new file mode 100644 index 00000000000..5c6a1bb1001 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/4-lambda-workflows-and-deployments-13.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/aws-lambda-overview-14.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/aws-lambda-overview-14.png new file mode 100644 index 00000000000..bb063980bc7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/aws-lambda-overview-14.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/aws-lambda-overview-15.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/aws-lambda-overview-15.png new file mode 100644 index 00000000000..fe622089fc4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/aws-lambda-overview-15.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/view-lamba-deployments-in-the-serverless-functions-dashboard-28.png b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/view-lamba-deployments-in-the-serverless-functions-dashboard-28.png new file mode 100644 index 00000000000..f4c4271ee9e Binary files /dev/null and b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/static/view-lamba-deployments-in-the-serverless-functions-dashboard-28.png differ diff --git a/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/view-lamba-deployments-in-the-serverless-functions-dashboard.md b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/view-lamba-deployments-in-the-serverless-functions-dashboard.md new file mode 100644 index 00000000000..f36343a853f --- /dev/null +++ b/docs/first-gen/continuous-delivery/aws-deployments/lambda-deployments/view-lamba-deployments-in-the-serverless-functions-dashboard.md @@ -0,0 +1,51 
@@ +--- +title: View Lambda Deployments in the Serverless Functions Dashboard +description: This content is for Harness FirstGen. Switch to NextGen. Add the required policies to the Execution Role for the Lambda function to view your Lambda functions in the Serverless Functions Dashboard. I… +# sidebar_position: 2 +helpdocs_topic_id: idal1erfiv +helpdocs_category_id: 3pyb3kmkbs +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/5fnx4hgwsa). Add the required policies to the Execution Role for the Lambda function to view your Lambda functions in the Serverless Functions Dashboard. + +In this topic: + +* [Before You Begin](#before_you_begin) +* [Step 1: Add Policies to Lambda Execution Role](#step_1_add_policies_to_lambda_execution_role) +* [Step 2: View Lambda Deployments in the Serverless Functions Dashboard](#step_2_view_lambda_deployments_in_the_serverless_functions_dashboard) +* [Next Steps](#next_steps) + +### Before You Begin + +* [Connect to AWS for Lambda Deployments](1-delegate-and-connectors-for-lambda.md) +* [Add Lambda Functions](2-service-for-lambda.md) +* [Define your Lambda Target Infrastructure](3-lambda-environments.md) + +### Step 1: Add Policies to Lambda Execution Role + +To see your Lambda invocations on the Serverless Dashboard, the [Execution Role](https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html) for the Lambda function must have the following policies: + +* AmazonEC2FullAccess +* AWSLambda\_FullAccess (previously AWSLambdaFullAccess) +* AWSLambdaVPCAccessExecutionRole +* AWSLambdaRole +* CloudWatchReadOnlyAccess + +The Function Invocations are updated every 10 minutes. + +### Step 2: View Lambda Deployments in the Serverless Functions Dashboard + +Harness Manager's Serverless Functions Dashboard offers views of your Lambda deployment data.
+ +Here is an individual Lambda deployment and how it is displayed on the Serverless Functions dashboard: + +![](./static/view-lamba-deployments-in-the-serverless-functions-dashboard-28.png) + +See [Serverless Functions Dashboard](https://docs.harness.io/article/vlj9xbj315-serverless-functions-dashboard). + +### Next Steps + +* [Troubleshooting AWS Lambda Deployments](https://docs.harness.io/article/g9o2g5jbye-troubleshooting-harness#aws_lambda) + diff --git a/docs/first-gen/continuous-delivery/azure-deployments/_category_.json b/docs/first-gen/continuous-delivery/azure-deployments/_category_.json new file mode 100644 index 00000000000..e999dde2e2a --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/_category_.json @@ -0,0 +1 @@ +{"label": "Azure Deployments and Provisioning", "position": 30, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Azure Deployments and Provisioning"}, "customProps": { "helpdocs_category_id": "gk062j3isk", "helpdocs_parent_category_id": "1qtels4t8p"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/1-harness-account-setup.md b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/1-harness-account-setup.md new file mode 100644 index 00000000000..4a3fe9f99fd --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/1-harness-account-setup.md @@ -0,0 +1,176 @@ +--- +title: 1 - Harness Account Setup for Azure ACR to AKS +description: Set up the Harness Delegate, Artifact Server, and Cloud Provider for Azure deployments. +sidebar_position: 20 +helpdocs_topic_id: z75kx7sur5 +helpdocs_category_id: mkyr84ulx3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). 
Switch to [NextGen](https://docs.harness.io/article/m7nkbph0ac). This topic describes how to set up your Harness account settings to support an Azure deployment. + +### Limitations + +Harness uses the Azure SDK, among other methods, and the Azure SDK does not support authenticated proxies. Consequently, you cannot use Azure connections for artifacts, machine images, or other resources that require proxy authentication. This is a known limitation of the Azure SDK and its Java environment properties, not a Harness limitation. + +### Permissions and Roles + +This section discusses the permissions and roles needed for the Harness connections to Azure. The setup steps for the Harness Delegate and Cloud Providers that use these roles are provided in their respective sections. + +The Azure permissions and roles required for each of the Harness connections to Azure are as follows: + +* **Harness Kubernetes Delegate** - The Harness Kubernetes Delegate is installed in the AKS cluster where you plan to deploy. You simply need to log into your AKS cluster and install it. No additional role is required. + + The Harness Kubernetes Delegate install file will create a pod in the cluster and make an outbound connection to the Harness Manager. No Azure permissions are required. + + The minimum Delegate resource requirements in the AKS cluster are 8GB RAM and 6GB Disk Space. Your AKS cluster will need enough resources to run the Delegate and your app. For the example in this guide, we created a cluster with 4 cores and 16GB of total memory. + + For information about Harness Delegates, see [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation), [Delegate Server Requirements](https://docs.harness.io/article/70zh6cbrhg-harness-requirements), and [Delegate Connection Requirements](https://docs.harness.io/article/11hjhpatqz-connectivity-and-permissions-requirements).
+* **Harness Kubernetes Cluster Cloud Provider** - You will use the credentials of the Harness Kubernetes Delegate you installed in AKS for the Kubernetes Cluster Cloud Provider. No Azure permissions are required. + +:::note +If you choose to use the Harness Azure Cloud Provider to connect to AKS, then you must assign the AKS **Owner** role to an Azure App Registration. The Client ID (Application ID), Tenant ID (also called the Directory ID), and Key for that Azure App Registration are then used to set up the Harness Azure Cloud Provider. +::: + +* **Harness Azure Cloud Provider** - The Azure Cloud Provider connects to the ACR container. The Azure Cloud Provider requires the following App Registration information: Client ID (Application ID), Tenant ID (also called the Directory ID), and a Key. The Azure App you use for this connection must have the **Reader** role on the ACR container you want to use. + +:::note +**Why two connections to Azure?** When you create an AKS cluster, Azure also creates a service principal to support cluster operability with other Azure resources. You can use this auto-generated service principal for authentication with an ACR registry. If you can use this method, then only the Kubernetes Cloud Provider is needed. In this guide, we create separate connections for AKS and ACR because, in some instances, you might not be able to assign the required role to the auto-generated AKS service principal granting it access to ACR. + +For more information, see [Authenticate with Azure Container Registry from Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-auth-aks) from Azure. +::: + +To register an App and assign it a role in an ACR container, the App must exist within the scope of an Azure resource group. The resource group includes the resources that you want to manage as a group, and is typically set up by your Azure account administrator. It is a common Azure management scope.
For more information, see [Deploy resources with Resource Manager templates and Azure portal](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-template-deploy-portal) from Azure. + +To set up the ACR container with the Azure App and **Reader** role, do the following: + +1. In the ACR container, click **Access control (IAM)**.![](./static/1-harness-account-setup-16.png) +2. Click **Add a role assignment**.![](./static/1-harness-account-setup-17.png) +3. In **Role**, enter **Reader**. +4. In **Assign access to**, select **Azure AD user, group, or service principal**. +5. In **Select**, enter the name of the Azure App that you will use to connect Harness. In this example, the App is named **doc-app**. +6. Click the name of the App. When you are finished, the settings will look something like this:![](./static/1-harness-account-setup-18.png) +7. Click **Save**. + +When you add the Azure Cloud Provider later in this guide, you will use the Client ID (Application ID), Tenant ID (also called the Directory ID), and a Key from that App to set up the Azure Cloud Provider. Harness will use the Reader role attached to the Azure App to connect to your ACR container. + +### Delegate Setup + +The simplest method for connecting Harness to AKS is to install the Harness Kubernetes Delegate in your AKS cluster and then set up the Harness Kubernetes Cluster Cloud Provider to use the same credentials as the Delegate. + +Here is a quick summary of the steps for installing the Kubernetes Delegate in your AKS cluster: + +1. Download the Harness Kubernetes Delegate. + 1. In Harness, click **Setup**. + 2. Click **Harness Delegates**. + 3. Click **Download Delegate**, and then click **Kubernetes YAML**. + The **Delegate Setup** dialog appears.![](./static/1-harness-account-setup-19.png) + 4. In **Name**, enter a name for your Delegate, for example **harness-sample-k8s-delegate**.
You will use this name later when selecting this Delegate in the **Kubernetes Cluster Cloud Provider** dialog. + 5. In **Profile**, select a Profile for the Delegate. The default is named **Primary**. + 6. Click **Download**. The Kubernetes file is downloaded to your computer. + 7. In a Terminal, navigate to the folder where the Kubernetes file was downloaded and extract the YAML file: + + `$ tar -zxvf harness-delegate-kubernetes.tar.gz` + + 8. Navigate into the folder that was extracted: + + `$ cd harness-delegate-kubernetes` + + The Kubernetes Delegate YAML file is ready to be installed in your AKS cluster. + +2. Install the Harness Kubernetes Delegate in the AKS Kubernetes cluster. The easiest way to install the Delegate is to use the Azure CLI locally. + + 1. In the same Terminal you used to extract the Kubernetes Delegate YAML file, log into your Azure Subscription: + + `$ az login -u <username> -p <password>` + + 2. Connect to the AKS cluster where you plan to deploy: + + `$ az aks install-cli` + + `$ az aks get-credentials --resource-group <resource-group-name> --name myHarnessCluster` + + 3. Install the Harness Kubernetes Delegate: + + `kubectl apply -f harness-delegate.yaml` + + You will see the following output: + + `namespace/harness-delegate created` + `clusterrolebinding.rbac.authorization.k8s.io/harness-delegate-cluster-admin created` + `secret/harness-sample-k8s-delegate-proxy created` + `statefulset.apps/harness-sample-k8s-delegate-vkjrqz created` + + 4. Verify that the Delegate pod is running: + + `kubectl get pods -n harness-delegate` + + The output shows the status of the pod: + + `harness-sample-k8s-delegate-vkjrqz-0 1/1 Running 0 57s` + +3. View the Delegate in Harness. In Harness, view the **Harness Delegates** page. Once the Delegate is installed, the Delegate is listed in the Installations page in a few moments.
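Taken together, the install steps above amount to the following shell session. The username, password, and resource group are placeholders for your own values; the cluster name follows the example used in this guide:

```
# Log in and point kubectl at the target AKS cluster
az login -u <username> -p <password>
az aks install-cli
az aks get-credentials --resource-group <resource-group-name> --name myHarnessCluster

# Unpack the downloaded Delegate YAML, apply it, and check the pod
tar -zxvf harness-delegate-kubernetes.tar.gz
cd harness-delegate-kubernetes
kubectl apply -f harness-delegate.yaml
kubectl get pods -n harness-delegate
```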
+ +![](./static/1-harness-account-setup-20.png) + +### Cloud Providers Setup + +In this section, we will add a Harness Kubernetes Cluster Cloud Provider and an Azure Cloud Provider to your account. + +#### Kubernetes Cluster Cloud Provider + +When a Kubernetes cluster is created, you specify the authentication methods for the cluster. For a Kubernetes Cluster Cloud Provider in Harness, you can use these methods to enable Harness to connect to the cluster as a Cloud Provider, or you can simply use the Harness Kubernetes Delegate installed in the cluster. + +For this guide, we will set up the Kubernetes Cluster Cloud Provider using the Delegate we installed earlier as the authentication method. + +##### Add Kubernetes Cluster Cloud Provider + +To set up the Kubernetes Cluster Cloud Provider, do the following: + +1. In Harness, click **Setup**. +2. In **Setup**, click **Cloud Providers**. +3. Click **Add Cloud Provider**. The **Cloud Provider** dialog appears. +4. In **Type**, select **Kubernetes Cluster**. The Cloud Provider dialog changes to display the Kubernetes Cluster settings. +5. In **Display Name**, enter a name for the Cloud Provider, such as **Harness Sample K8s Cloud Provider**. You will use this name when setting up the Infrastructure Definition settings in Harness later. +6. Select **Inherit from selected Delegate**. +7. In **Delegate Name**, select the name of the Delegate you installed in your cluster earlier. When you are finished, the dialog will look something like this:![](./static/1-harness-account-setup-21.png) +8. Click **TEST** to verify the settings, and then click **SUBMIT**. The Kubernetes Cloud Provider is added. + +![](./static/1-harness-account-setup-22.png) + +#### Azure Cloud Provider + +The Azure Cloud Provider connects to the ACR container. The Azure Cloud Provider requires the following App Registration information: Client ID (Application ID), Tenant ID (also called the Directory ID), and a Key.
The Azure App you use for this connection must have the **Reader** role on the ACR container you want to use. + +##### Add Azure Cloud Provider + +To set up the Azure Cloud Provider, do the following: + +1. In Harness, click **Setup**. +2. In **Setup**, click **Cloud Providers**. +3. Click **Add Cloud Provider**. The **Cloud Provider** dialog appears. +4. In **Type**, select **Microsoft Azure**. The Cloud Provider dialog changes to display the Microsoft Azure settings. +5. In **Display Name**, enter a name for the Cloud Provider, such as **azure**. You will use this name when setting up the **Artifact Source** settings in Harness later. +6. In **Client ID**, enter the **Client/Application ID** for the Azure app registration you are using. It is found in the Azure Active Directory **App registrations**. For more information, see [Quickstart: Register an app with the Azure Active Directory v1.0 endpoint](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v1-add-azure-ad-app) from Microsoft. + + To access resources in your Azure subscription, you must assign the Azure App registration using this Client ID to a role in that subscription. Later, when you set up an Artifact Source in a Harness Service, you will select a subscription. If the Azure App registration using this Client ID is not assigned a role in a subscription, no subscriptions will be available. For more information, see [Assign the application to a role](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#assign-the-application-to-a-role) from Microsoft. + +7. In **Tenant ID**, enter the Tenant ID of the Azure Active Directory in which you created your application. + + This is also called the **Directory ID**. For more information, see [Get tenant ID](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal#get-tenant-id) from Azure. + +8.
In **Key**, enter the authentication key for your application. + + This is found in **Azure Active Directory**, **App Registrations**. Double-click the App name. Click **Settings**, and then click **Keys**. + + You cannot view existing key values, but you can create a new key. For more information, see [Get application ID and authentication key](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal#get-application-id-and-authentication-key) from Azure. Azure has previewed a new App Registrations blade that displays keys in the **Certificates & secrets** tab, under **Client secrets**. + +9. When you are finished, the Azure Cloud Provider dialog will look something like this:![](./static/1-harness-account-setup-23.png) +10. Click **TEST** to verify the settings, and then click **SUBMIT**. The Azure Cloud Provider is added. + + ![](./static/1-harness-account-setup-24.png) + +You're all connected! Now you can start using Harness to set up CD. + +### Next Step + +* [2 - Service and Azure Artifact Source](2-service-and-artifact-source.md) + diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/2-service-and-artifact-source.md b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/2-service-and-artifact-source.md new file mode 100644 index 00000000000..3089d4ee31b --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/2-service-and-artifact-source.md @@ -0,0 +1,171 @@ +--- +title: 2 - Harness Service Setup for Azure ACR and AKS +description: Set up the Harness Kubernetes Service and Artifact Source for an Azure deployment. +sidebar_position: 30 +helpdocs_topic_id: jjd1wrre7g +helpdocs_category_id: mkyr84ulx3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md).
Switch to [NextGen](https://docs.harness.io/article/m7nkbph0ac). This topic describes how to set up the Harness Kubernetes Service and Artifact Source for an Azure deployment: + +* [Application Setup](2-service-and-artifact-source.md#application-setup) +* [Harness Service Setup](2-service-and-artifact-source.md#harness-service-setup) +* [Next Step](2-service-and-artifact-source.md#next-step) + +### Application Setup + +The following procedure creates a Harness Application for an AKS Kubernetes deployment using an ACR repository. + +An Application in Harness represents a logical group of one or more entities, including Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CI/CD. For more information, see [Application Components](https://docs.harness.io/article/bucothemly-application-configuration). + +To create the Harness Application, do the following: + +1. In **Harness**, click **Setup**. +2. Click **Add Application**. The **Application** dialog appears. +3. Give your Application a name that describes your microservice or app. For the purposes of this guide, we use the name **ACR-to-AKS**. +4. Click **SUBMIT**. The new Application is added. +5. Click the Application name to open the Application. The Application entities are displayed. + +![](./static/2-service-and-artifact-source-09.png) + +### Harness Service Setup + +There are different types of Harness Services for different deployment platforms. The Kubernetes type includes Kubernetes-specific settings. + +To add the Kubernetes Service, do the following: + +1. In your new Application, click **Services**. The **Services** page appears. +2. In the **Services** page, click **Add Service**. The **Service** dialog appears.![](./static/2-service-and-artifact-source-10.png) +3. In **Name**, enter a name for your Service, such as **Todolist-ACR**. +4. In **Description**, enter a description for your service. +5.
In **Deployment Type**, select **Kubernetes**. +6. Click the **Enable Kubernetes V2** checkbox. This setting configures the Service with the latest Harness Kubernetes Service settings. +7. Click **SUBMIT**. The new Service is displayed. + +Next, we will walk through how to set up the Kubernetes manifest file and use the Service features. + +#### Add ACR Artifact Source + +An Artifact Source in a Service is the microservice or application artifact you want to deploy. + +For this Azure deployment, the Artifact Source uses the Azure Cloud Provider you set up for your Harness account to connect to ACR (as described in [Azure Cloud Provider](1-harness-account-setup.md#azure-cloud-provider)), and selects a Todo List sample app Docker image as the artifact. + +To add an Artifact Source to this Service, do the following: + +1. In the Service, click **Add Artifact Source**, and select **Azure Container Registry**. The **Artifact Source** dialog appears.![](./static/2-service-and-artifact-source-11.png) +2. Configure the following fields and click **SUBMIT**. +* **Cloud Provider** - Select the Azure Cloud Provider we set up earlier. +* **Subscription** - Select the Subscription set up in your ACR container registry. To locate the Subscription in ACR, click **Overview**, and see **Subscription**.![](./static/2-service-and-artifact-source-12.png) +* **Azure Registry Name** - Select the registry you want to use. +* **Repository Name** - Select the repository containing the Docker image you want to use. + +When you are finished, the Artifact Source dialog will look something like this: + +![](./static/2-service-and-artifact-source-13.png) + +You can add multiple Artifact Sources to a Service and view the build history for each one by clicking **Artifact History**. 
+ +![](./static/2-service-and-artifact-source-14.png) + +#### Add Manifests + +The **Manifests** section of a Service contains the configuration files that describe the desired state of your application in terms of Kubernetes object descriptions. + +![](./static/2-service-and-artifact-source-15.png) + +##### What Can I Add in Manifests? + +You can add any Kubernetes configuration files, formatted in YAML, such as object descriptions, in one or more files. + +You can use Go templating and Harness built-in variables in combination in your Manifest files. For information about the features of **Manifests**, see [Define Kubernetes Manifests](../../kubernetes-deployments/define-kubernetes-manifests.md). + +For this guide, we will use the default manifests, with one important change for ACR: we will edit the Kubernetes **imagePullSecret** setting. + +##### Pull an Image from a Private ACR Registry + +To pull the image from the private ACR registry, Harness accesses that registry using the credentials set up in the Harness [Artifact Server](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server), but your AKS cluster might not have the needed permissions. To solve this problem, the default values.yaml file contains `dockercfg: ${artifact.source.dockerconfig}`. + +1. In your Harness Kubernetes Service, in **Manifests**, click **values.yaml**. +2. Verify that the `dockercfg` key exists, and uses the `${artifact.source.dockerconfig}` expression to obtain the credentials: + ``` + dockercfg: ${artifact.source.dockerconfig} + ``` +3. Click the **deployment.yaml** file. +4.
Verify that the Secret object is inside an `if` block using `dockercfg` and the `{{.Values.dockercfg}}` value: + + ``` + {{- if .Values.dockercfg}} + apiVersion: v1 + kind: Secret + metadata: + name: {{.Values.name}}-dockercfg + annotations: + harness.io/skip-versioning: "true" + data: + .dockercfg: {{.Values.dockercfg}} + type: kubernetes.io/dockercfg + --- + {{- end}} + ``` +With these requirements met, the cluster imports the credentials from the Docker credentials file in the artifact. + +That's it. In your AKS cluster at deployment runtime, Kubernetes will use the dockercfg credentials to obtain the Docker image from ACR. + +When you create an AKS cluster, Azure also creates a service principal to support cluster operability with other Azure resources. You can use this auto-generated service principal for authentication with an ACR registry. If you can use this method, then only the Kubernetes Cloud Provider is needed and the `createImagePullSecret` setting can be left as `false`. In this guide, we create separate connections for AKS and ACR because, in some instances, you might not be able to assign the required role to the auto-generated AKS service principal granting it access to ACR. For more information, see [Authenticate with Azure Container Registry from Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-auth-aks) from Azure. Now we can set up the deployment Environment to tell Harness where to deploy the Docker image. + +##### Notes + +* When you are using a public repo, the `dockercfg: ${artifact.source.dockerconfig}` in values.yaml is ignored by Harness. You do not need to remove it. +* If you want to use a private repo and no imagePullSecret, then set `dockercfg` to empty in values.yaml.
+* **Legacy imagePullSecret Method**: Previously, Harness used a `createImagePullSecret` value in values.yaml that could be set to `true` or `false`, and `dockercfg: ${artifact.source.dockerconfig}` to obtain the credentials. If `createImagePullSecret` was set to `true`, the following default Secret object in deployment.yaml would be used: + + +``` +{{- if .Values.createImagePullSecret}} +apiVersion: v1 +kind: Secret +metadata: + name: {{.Values.name}}-dockercfg + annotations: + harness.io/skip-versioning: "true" +data: + .dockercfg: {{.Values.dockercfg}} +type: kubernetes.io/dockercfg +--- +{{- end}} +``` +This legacy method is still supported for existing Services that use it, but the current method of using the default values.yaml and deployment.yaml files is recommended. + +#### Namespace Variable + +Before we set up the deployment Environment, let's look at one more interesting setting. Click **values.yaml** and locate the `namespace` setting: + + +``` +namespace: ${infra.kubernetes.namespace} +``` +Next, click the **namespace.yaml** file to see the variable referenced in values.yaml: + + +``` +{{- if .Values.createNamespace}} +apiVersion: v1 +kind: Namespace +metadata: + name: {{.Values.namespace}} +{{- end}} +``` +The `${infra.kubernetes.namespace}` variable is a Harness built-in variable that references the Kubernetes cluster namespace value entered in the Harness Environment, which you will create later. + +The `${infra.kubernetes.namespace}` variable lets you enter any value in the Environment **Namespace** setting and, at runtime, the Kubernetes Namespace manifest uses that name to create a namespace. + +#### Config Variables and Files + +This guide does not use many of the other available Service settings. For information on the Config Variables and Files settings, see [Configuration Variables and Files](https://docs.harness.io/article/eb3kfl8uls-service-configuration#configuration_variables_and_files). 
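+For example, if you enter `example` in the Environment **Namespace** setting, at runtime the namespace.yaml manifest above renders as: + +``` +apiVersion: v1 +kind: Namespace +metadata: + name: example +``` +This is the same rendered Namespace object you will see later in the deployment's Initialize logs. 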
+ +### Next Step + +* [3 - Azure Environment](3-azure-environment.md) + diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/3-azure-environment.md b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/3-azure-environment.md new file mode 100644 index 00000000000..4b13d37e999 --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/3-azure-environment.md @@ -0,0 +1,60 @@ +--- +title: 3 - Define Your AKS Target Infrastructure +description: Define the target deployment environment for your application. +sidebar_position: 40 +helpdocs_topic_id: 7qsyj7wvpq +helpdocs_category_id: mkyr84ulx3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/m7nkbph0ac).Once you've added a Service to your Application, you can define Environments where your Service can be deployed. + +* [Create a New Harness Environment](3-azure-environment.md#create-a-new-harness-environment) +* [Add an Infrastructure Definition](3-azure-environment.md#add-an-infrastructure-definition) +* [Next Step](3-azure-environment.md#next-step) + +### Create a New Harness Environment + +In an Environment, you specify the following: + +* A Harness Service, such as the Service with a Docker image artifact you configured. +* A Cloud Provider, such as the Kubernetes Cluster Cloud Provider that you added in [Cloud Providers Setup](1-harness-account-setup.md#cloud-providers-setup). + +An Environment can be a Dev, QA, Production, or other environment. You can deploy one or many Services to each Environment. + +The following procedure creates an Environment for the Service we set up earlier. + +1. In your Harness Application, click **Environments**. The **Environments** page appears. +2. Click **Add Environment**. The **Environment** dialog appears. +3. 
In **Name**, enter a name that describes the deployment environment, for example, **AKS**. +4. In **Environment Type**, select **Non-Production**. +5. Click **SUBMIT**. The new **Environment** page appears. + +### Add an Infrastructure Definition + +[Infrastructure Definitions](https://docs.harness.io/article/v3l3wqovbe-infrastructure-definitions) specify the target deployment infrastructure for your Harness Services, and the specific infrastructure details for the deployment, like VPC settings. + +To add the Infrastructure Definition, do the following: + +1. In the Harness Environment, click **Add Infrastructure Definition**. The **Infrastructure Definition** dialog appears. +2. In **Name**, enter the name you will use to select this Infrastructure Definition when you create a Workflow. +3. In **Cloud Provider Type**, select **Kubernetes Cluster**. +4. In **Deployment Type**, select **Kubernetes**. +5. Click **Use Already Provisioned Infrastructure**. If you were using a Harness [Infrastructure Provisioner](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner), you would select **Map Dynamically Provisioned Infrastructure**. +6. In **Cloud Provider**, select the Cloud Provider you added earlier. Ensure that you select the **Kubernetes Cluster Cloud Provider** you set up for the AKS connection and not the Azure Cloud Provider you set up for the ACR connection. +7. In **Namespace**, enter the name of the cluster namespace you want to use. As we noted in [Namespace Variable](2-service-and-artifact-source.md#namespace-variable), you can enter any value here and the Service will use it in its Namespace manifest to create the namespace at runtime. +8. In **Release Name**, you can see the expression Harness uses for tracking a release. The release name must be unique across the cluster. The Harness-generated unique identifier `${infra.kubernetes.infraId}` can be used as a suffix to ensure a unique release name. +9. 
In **Scope to specific Services**, select the Harness Service you created earlier. +10. Click **Submit**. + +That is all you have to do to set up the deployment Environment in Harness. + +Now that you have the Service and Environment set up, you can create the deployment Workflow in Harness. + +Your Environment can overwrite Service Config Variables, Config Files, and other settings. This enables a Service to keep its settings but have them changed when used with this Environment. For example, you might have a single Service but an Environment for QA and an Environment for Production, and you want to overwrite the values.yaml setting in the Service depending on the Environment. We don't overwrite any Service variables in this guide. For more information, see [Override Service Settings](https://docs.harness.io/article/2gffsizl8u-kubernetes-environments#override_service_settings) in the [Kubernetes Deployments](../../kubernetes-deployments/kubernetes-deployments-overview.md) doc. + +### Next Step + +* [4 - Azure Workflows and Deployments](4-azure-workflows-and-deployments.md) + diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/4-azure-workflows-and-deployments.md b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/4-azure-workflows-and-deployments.md new file mode 100644 index 00000000000..fd621646b87 --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/4-azure-workflows-and-deployments.md @@ -0,0 +1,273 @@ +--- +title: 4 - Azure ACR to AKS Workflows and Deployments +description: Create a Rolling Update Workflow in Harness for AKS. +sidebar_position: 50 +helpdocs_topic_id: x87732ti68 +helpdocs_category_id: mkyr84ulx3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). 
Switch to [NextGen](https://docs.harness.io/article/m7nkbph0ac). This section will walk you through creating a Kubernetes Workflow in Harness and what each Workflow step's deployment logs include: + +* [Workflow Setup](4-azure-workflows-and-deployments.md#workflow-setup) + + [Initialize](4-azure-workflows-and-deployments.md#initialize) + + [Prepare](4-azure-workflows-and-deployments.md#prepare) + + [Apply](4-azure-workflows-and-deployments.md#apply) + + [Wait for Steady State](4-azure-workflows-and-deployments.md#wait-for-steady-state) + + [Wrap Up](4-azure-workflows-and-deployments.md#wrap-up) +* [AKS Workflow Deployment](4-azure-workflows-and-deployments.md#aks-workflow-deployment) +* [Next Step](4-azure-workflows-and-deployments.md#next-step) + +### Workflow Setup + +In this guide, the Workflow performs a simple Rolling Deployment, which is a Kubernetes Rolling Update. For a detailed explanation, see [Performing a Rolling Update](https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/) from Kubernetes. + +For information on other Workflow types, see [Kubernetes Deployments](https://docs.harness.io/category/kubernetes-deployments). To create a Rolling Workflow for Kubernetes, do the following: + +1. In your Application, click **Workflows**. +2. Click **Add Workflow**. The **Workflow** dialog appears. +3. In **Name**, enter a name for your Workflow, such as **Todo List AKS**. +4. In **Workflow Type**, select **Rolling Deployment**. +5. In **Environment**, select the Environment you created for your Kubernetes deployment. +6. In **Infrastructure Definition**, select the Infrastructure Definition you created earlier. If the Infrastructure Definition does not appear, ensure that you added the Service to the Infrastructure Definition **Scope to specific Services** setting. ![](./static/4-azure-workflows-and-deployments-00.png) +7. Click **SUBMIT**. The new Rolling Workflow is pre-configured. 
+ +![](./static/4-azure-workflows-and-deployments-01.png) + +As you can see, there is a Rollout Deployment step set up automatically. That's all the Workflow setup required. The Workflow is ready to deploy. When it is deployed, it will look like this: + +![](./static/4-azure-workflows-and-deployments-02.png) + +You can see each section of the Rollout Deployment listed on the right. To see what that Rollout Deployment step does at runtime, let's look at the logs for each section. + +#### Initialize + +The Initialize step renders the Kubernetes object manifests in the correct order and validates them. + + +``` +Initializing.. + + +Manifests [Post template rendering] : + +--- + +apiVersion: v1 +kind: Namespace +metadata: + name: example +--- +apiVersion: "v1" +kind: "Secret" +metadata: + annotations: + harness.io/skip-versioning: "true" + finalizers: [] + labels: {} + name: "harness-example-dockercfg" + ownerReferences: [] +data: + .dockercfg: "***" +stringData: {} +type: "kubernetes.io/dockercfg" +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: harness-example-config +data: + key: value +--- +apiVersion: v1 +kind: Service +metadata: + name: harness-example-svc +spec: + type: LoadBalancer + ports: + - port: 80 + targetPort: 80 + protocol: TCP + selector: + app: harness-example +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: harness-example-deployment +spec: + replicas: 1 + selector: + matchLabels: + app: harness-example + template: + metadata: + labels: + app: harness-example + spec: + imagePullSecrets: + - name: harness-example-dockercfg + containers: + - name: harness-example + image: harnessexample.azurecr.io/todolist-sample:latest + envFrom: + - configMapRef: + name: harness-example-config + + +Validating manifests with Dry Run + +kubectl --kubeconfig=config apply --filename=manifests-dry-run.yaml --dry-run +namespace/example configured (dry run) +secret/harness-example-dockercfg created (dry run) +configmap/harness-example-config created 
(dry run) +service/harness-example-svc configured (dry run) +deployment.apps/harness-example-deployment configured (dry run) + +Done. +``` +Note the `imagePullSecrets` setting. Harness used the Go templating in the Service to fully form the correct YAML for Kubernetes. + +#### Prepare + +The Prepare section identifies the resources used and versions those that require it for release history. Every Harness deployment creates a new release with an incrementally increasing number. Release history is stored in the Kubernetes cluster in a ConfigMap. This ConfigMap is essential for release tracking, versioning, and rollback. + +For more information, see [Releases and Versioning](https://docs.harness.io/article/ttn8acijrz-versioning-and-annotations#releases_and_versioning). + + +``` +Manifests processed. Found following resources: + +Kind Name Versioned +Namespace example false +Secret harness-example-dockercfg false +ConfigMap harness-example-config true +Service harness-example-svc false +Deployment harness-example-deployment false + + +Current release number is: 3 + +No previous successful release found. + +Cleaning up older and failed releases + +kubectl --kubeconfig=config delete ConfigMap/harness-example-config-2 + +configmap "harness-example-config-2" deleted + +Managed Workload is: Deployment/harness-example-deployment + +Versioning resources. + +Done +``` +#### Apply + +The Apply section deploys the manifests from the Service **Manifests** section as one file. + + +``` +kubectl --kubeconfig=config apply --filename=manifests.yaml --record + +namespace/example unchanged +secret/harness-example-dockercfg created +configmap/harness-example-config-3 created +service/harness-example-svc unchanged +deployment.apps/harness-example-deployment configured + +Done +``` +#### Wait for Steady State + +The Wait for Steady State section shows the containers and pods rolled out. 
+ + +``` +kubectl --kubeconfig=config get events --output=custom-columns=KIND:involvedObject.kind,NAME:.involvedObject.name,MESSAGE:.message,REASON:.reason --watch-only + +kubectl --kubeconfig=config rollout status Deployment/harness-example-deployment --watch=true + + +Status : Waiting for deployment "harness-example-deployment" rollout to finish: 1 old replicas are pending termination... +Event : Pod harness-example-deployment-cfdb66bf4-qw5g9 pulling image "harnessexample.azurecr.io/todolist-sample:latest" Pulling +Event : Pod harness-example-deployment-cfdb66bf4-qw5g9 Successfully pulled image "harnessexample.azurecr.io/todolist-sample:latest" Pulled +Event : Pod harness-example-deployment-cfdb66bf4-qw5g9 Created container Created +Event : Pod harness-example-deployment-cfdb66bf4-qw5g9 Started container Started +Event : Deployment harness-example-deployment Scaled down replica set harness-example-deployment-6b8794c59 to 0 ScalingReplicaSet + +Status : Waiting for deployment "harness-example-deployment" rollout to finish: 1 old replicas are pending termination... +Event : ReplicaSet harness-example-deployment-6b8794c59 Deleted pod: harness-example-deployment-6b8794c59-2z99v SuccessfulDelete + +Status : Waiting for deployment "harness-example-deployment" rollout to finish: 1 old replicas are pending termination... + +Status : deployment "harness-example-deployment" successfully rolled out + +Done. +``` +#### Wrap Up + +The Wrap Up section shows the Rolling Update strategy used. Here is a sample: + + +``` +... +Name: harness-example-deployment +Namespace: example +CreationTimestamp: Wed, 06 Mar 2019 20:16:30 +0000 +Labels: +Annotations: deployment.kubernetes.io/revision: 3 + kubectl.kubernetes.io/last-applied-configuration: + {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --kubeconfig=config --f... 
+ kubernetes.io/change-cause: kubectl apply --kubeconfig=config --filename=manifests.yaml --record=true +Selector: app=harness-example +Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable +StrategyType: RollingUpdate +MinReadySeconds: 0 +RollingUpdateStrategy: 25% max unavailable, 25% max surge +... +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal ScalingReplicaSet 25m deployment-controller Scaled up replica set harness-example-deployment-86c6d74db8 to 1 + Normal ScalingReplicaSet 14m deployment-controller Scaled up replica set harness-example-deployment-6b8794c59 to 1 + Normal ScalingReplicaSet 4s deployment-controller Scaled down replica set harness-example-deployment-86c6d74db8 to 0 + Normal ScalingReplicaSet 4s deployment-controller Scaled up replica set harness-example-deployment-cfdb66bf4 to 1 + Normal ScalingReplicaSet 1s deployment-controller Scaled down replica set harness-example-deployment-6b8794c59 to 0 + +Done. +``` +### AKS Workflow Deployment + +Now that the setup is complete, you can click **Deploy** in the Workflow to deploy the artifact to your cluster. + +![](./static/4-azure-workflows-and-deployments-03.png) + +Next, select the artifact build version and click **SUBMIT**. + +![](./static/4-azure-workflows-and-deployments-04.png) + +The Workflow is deployed. + +![](./static/4-azure-workflows-and-deployments-05.png) + +To see the completed deployment, log into your Azure AKS cluster, click **Insights**, and then click **Controllers**. 
+ +![](./static/4-azure-workflows-and-deployments-06.png) + +If you are using an older AKS cluster, you might have to enable Insights. The container details show the Docker image deployed: + +![](./static/4-azure-workflows-and-deployments-07.png) + +You can also launch the Kubernetes dashboard to see the results: + +![](./static/4-azure-workflows-and-deployments-08.png) + +To view the Kubernetes dashboard, in your AKS cluster, click **Overview**, click **Kubernetes Dashboard**, and then follow the CLI steps. + +### Next Step + +* [5 - Azure Troubleshooting](5-azure-troubleshooting.md) + diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/5-azure-troubleshooting.md b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/5-azure-troubleshooting.md new file mode 100644 index 00000000000..10fc721f960 --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/5-azure-troubleshooting.md @@ -0,0 +1,37 @@ +--- +title: 5 - Azure ACR to AKS Troubleshooting +description: General troubleshooting steps for Azure AKS deployments. +sidebar_position: 60 +helpdocs_topic_id: mesbafbntm +helpdocs_category_id: mkyr84ulx3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/m7nkbph0ac). The following troubleshooting steps should help address common issues. 
+ +#### Failed to pull image + +Kubernetes might fail to pull the Docker image set up in your Service: + + +``` +Event : Pod harness-example-deployment-6b8794c59-2z99v Error: ErrImagePull Failed +Event : Pod harness-example-deployment-6b8794c59-2z99v Failed to pull image +"harnessexample.azurecr.io/todolist-sample:latest": rpc error: code = Unknown desc = Error response from daemon: +Get https://harnessexample.azurecr.io/v2/todolist-sample/manifests/latest: unauthorized: authentication required Failed +``` +This is caused by the `createImagePullSecret` setting being set to `false` in the values.yaml file in Service **Manifests**. + +To fix this, set the `createImagePullSecret` setting to `true`, as described in [Modify ImagePullSecret](2-service-and-artifact-source.md#modify-image-pull-secret): + + +``` +createImagePullSecret: true +``` +### Next Steps + +* [Kubernetes Deployments](https://docs.harness.io/category/kubernetes-deployments) +* [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management) +* [Continuous Verification](https://docs.harness.io/article/myw4h9u05l-verification-providers-list) + diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/_category_.json b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/_category_.json new file mode 100644 index 00000000000..10518d5e9fc --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/_category_.json @@ -0,0 +1 @@ +{"label": "Azure ACR to AKS", "position": 30, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Azure ACR to AKS"}, "customProps": { "helpdocs_category_id": "mkyr84ulx3"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/azure-deployments-overview.md b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/azure-deployments-overview.md new file mode 100644 index 00000000000..ee70556afae --- 
/dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/azure-deployments-overview.md @@ -0,0 +1,89 @@ +--- +title: Azure ACR to AKS Deployments Summary +description: Overview of deploying a Docker image in ACR to an AKS cluster. +sidebar_position: 10 +helpdocs_topic_id: kiuft72fr5 +helpdocs_category_id: mkyr84ulx3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/m7nkbph0ac). This guide will walk you through deploying a Docker image in an Azure Container Registry (ACR) repo to an Azure Kubernetes Service (AKS) cluster. This scenario is very popular, and a walkthrough of all the steps involved will help you set it up in Harness for your own microservices and apps. + +### Deployment Summary + +For a general overview of how Harness works, see [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts). For a vendor-agnostic, Harness Docker-to-Kubernetes deployment, see our [Kubernetes Quickstart](https://docs.harness.io/article/7in9z2boh6-kubernetes-quickstart) doc. + +| | | +| --- | --- | +| **Azure deployment in Harness Manager** | **The same deployment in Kubernetes Dashboard** | +| ![](./static/_azure-deploy-in-harness-mgr.png) | ![](./static/_azure-deploy-in-k8s.png) | + +#### Deployment Preview + +The following list describes the major steps we will cover in this guide: + +1. Install the Harness Kubernetes **Delegate** in an AKS Kubernetes cluster. +2. Add **Cloud Providers**. We will create two Harness Cloud Providers: + + 1. **Kubernetes Cloud Provider** - This is a connection to your AKS Kubernetes cluster using the Harness Delegate installed in that cluster. + + 2. **Azure Cloud Provider** - This is a connection to your Azure account to access ACR. 
+ + For other artifact repositories, a Harness Artifact Server connection is used. For Azure, Harness uses a Cloud Provider connection. + +:::note +**Why two connections to Azure?** When you create an AKS cluster, Azure also creates a service principal to support cluster operability with other Azure resources. You can use this auto-generated service principal for authentication with an ACR registry. If you can use this method, then only the Kubernetes Cloud Provider is needed. + +In this guide, we create separate connections for AKS and ACR because, in some instances, you might not be able to assign the required role to the auto-generated AKS service principal granting it access to ACR. For more information, see [Authenticate with Azure Container Registry from Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-auth-aks) from Azure. +::: + +3. Create the Harness **Application** for your Azure CD pipeline. + + The Harness Application represents a group of microservices, their deployment pipelines, and all the building blocks for those pipelines. Harness represents your microservice using a logical group of one or more entities: Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CI/CD. + +4. Create the Harness **Service** using the Kubernetes type. + + 1. Set up your Kubernetes manifests and any config variables and files. + + 2. Set the **ImagePullSecrets** setting to **true**. This will enable Kubernetes in AKS to pull the Docker image from ACR. + +5. Create the Harness **Environment** containing the Infrastructure Definition of your AKS cluster, and any overrides. +6. Create the Kubernetes deployment Harness **Workflow**. +7. **Deploy** the Workflow to AKS. The deployment will pull the Docker image from ACR at runtime. +8. Advanced options not covered in this guide: + + 1. 
Create a Harness **Pipeline** for your deployment, including Workflows and Approval steps. For more information, see [Pipelines](https://docs.harness.io/article/zc1u96u6uj-pipeline-configuration). + + 2. Create a Harness **Trigger** to automatically deploy your Workflows or Pipeline according to your criteria. For more information, see [Triggers](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2). + + 3. Create Harness **Infrastructure Provisioners** for your deployment environments. For more information, see [Infrastructure Provisioners](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner). + +#### What Are We Going to Do? + +This guide walks you through deploying a Docker image from Azure ACR to Azure AKS using Harness. Basically, the Harness deployment does the following: + +* **Docker Image** - Pull Docker image from Azure ACR. +* **Kubernetes Cluster** - Deploy the Docker image to a Kubernetes cluster in Azure AKS in a Kubernetes Rolling Deployment. + +#### What Are We Not Going to Do? + +This is a brief guide that covers the basics of deploying ACR artifacts to AKS. It does not cover the following: + +* Basics of Docker, Kubernetes, ACR, or AKS. For great documentation on these platforms, see [Azure Container Registry Documentation](https://docs.microsoft.com/en-us/azure/container-registry/) and [Azure Kubernetes Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/) from Azure. +* Azure basics. This guide assumes you are familiar with Azure Resource Manager, its terminology and components, such as Resource Groups. 
For more information, see [Azure Resource Manager overview](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview), [Resource Groups](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview#resource-groups), and [Deploy resources with Resource Manager templates and Azure portal](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-template-deploy-portal) from Azure. +* Using all of the artifact types Harness supports. We will focus on Docker images, as they are the most popular type. + +### Before You Begin + +The following are required: + +* **ACR repository** - An Azure account with an ACR repository you can connect to Harness. +* **AKS Kubernetes cluster** - An AKS Kubernetes cluster running in your Azure environment. + +We will walk you through the process of setting up Harness with connections to ACR and AKS. + +### Next Step + +* [1 - Harness Account Setup for Azure](1-harness-account-setup.md) + diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-16.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-16.png new file mode 100644 index 00000000000..c9240207f09 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-16.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-17.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-17.png new file mode 100644 index 00000000000..bf0a1c99444 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-17.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-18.png 
b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-18.png new file mode 100644 index 00000000000..1a00a54dd91 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-18.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-19.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-19.png new file mode 100644 index 00000000000..58bfb9d9211 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-19.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-20.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-20.png new file mode 100644 index 00000000000..043513f5ddc Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-20.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-21.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-21.png new file mode 100644 index 00000000000..82c155a6dd3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-21.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-22.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-22.png new file mode 100644 index 00000000000..acf78d4f747 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-22.png differ diff --git 
a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-23.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-23.png new file mode 100644 index 00000000000..fc5e8c2f4a4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-23.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-24.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-24.png new file mode 100644 index 00000000000..d74c382fa7e Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/1-harness-account-setup-24.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/2-service-and-artifact-source-09.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/2-service-and-artifact-source-09.png new file mode 100644 index 00000000000..61b98010dfa Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/2-service-and-artifact-source-09.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/2-service-and-artifact-source-10.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/2-service-and-artifact-source-10.png new file mode 100644 index 00000000000..6ac1eca51ca Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/2-service-and-artifact-source-10.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/2-service-and-artifact-source-11.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/2-service-and-artifact-source-11.png new file mode 100644 index 00000000000..c4892a614ad Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/2-service-and-artifact-source-11.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/2-service-and-artifact-source-12.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/2-service-and-artifact-source-12.png new file mode 100644 index 00000000000..10fbdf99305 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/2-service-and-artifact-source-12.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/2-service-and-artifact-source-13.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/2-service-and-artifact-source-13.png new file mode 100644 index 00000000000..f0625f38cae Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/2-service-and-artifact-source-13.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/2-service-and-artifact-source-14.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/2-service-and-artifact-source-14.png new file mode 100644 index 00000000000..5b9cbd776c3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/2-service-and-artifact-source-14.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/2-service-and-artifact-source-15.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/2-service-and-artifact-source-15.png new file mode 100644 index 00000000000..1d3e8b88620 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/2-service-and-artifact-source-15.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-00.png 
b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-00.png new file mode 100644 index 00000000000..5901541c144 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-00.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-01.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-01.png new file mode 100644 index 00000000000..023fff31678 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-01.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-02.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-02.png new file mode 100644 index 00000000000..4a5fcca0740 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-02.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-03.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-03.png new file mode 100644 index 00000000000..bfd0a21893b Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-03.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-04.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-04.png new file mode 100644 index 00000000000..540a6f10e97 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-04.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-05.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-05.png new file mode 100644 index 00000000000..99b0c6837ca Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-05.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-06.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-06.png new file mode 100644 index 00000000000..400e2ada086 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-06.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-07.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-07.png new file mode 100644 index 00000000000..6411a4e82d4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-07.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-08.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-08.png new file mode 100644 index 00000000000..cd297ab30cb Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/4-azure-workflows-and-deployments-08.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/_azure-deploy-in-harness-mgr.png 
b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/_azure-deploy-in-harness-mgr.png new file mode 100644 index 00000000000..becdb70fe50 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/_azure-deploy-in-harness-mgr.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/_azure-deploy-in-k8s.png b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/_azure-deploy-in-k8s.png new file mode 100644 index 00000000000..cb617f750c1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/aks-howtos/static/_azure-deploy-in-k8s.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/_category_.json b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/_category_.json new file mode 100644 index 00000000000..6aebac9f084 --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/_category_.json @@ -0,0 +1 @@ +{"label": "Azure ARM Provisioning", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Azure ARM Provisioning"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "3i7h1lzlt2"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/add-azure-arm-templates.md b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/add-azure-arm-templates.md new file mode 100644 index 00000000000..1c41306a1f7 --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/add-azure-arm-templates.md @@ -0,0 +1,109 @@ +--- +title: Add Azure ARM Templates to Harness +description: This topic describes how to add your Azure ARM templates to Harness using Harness Infrastructure Provisioners. 
This involves providing the Git repo location of the ARM template and setting its scope… +sidebar_position: 300 +helpdocs_topic_id: naj2d3ra4n +helpdocs_category_id: 3i7h1lzlt2 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes how to add your Azure ARM templates to Harness using Harness Infrastructure Provisioners. This involves providing the Git repo location of the ARM template and setting its scope in Harness. + +Once you've added the template as an Infrastructure Provisioner, you can do the following: + +* **Target the Azure infrastructure:** use the Infrastructure Provisioner in a Harness Infrastructure Definition. Next, you add this Infrastructure Definition to a Workflow to define the ARM template's resources as the target infrastructure for the deployment. See [Target an Azure ARM Provisioned Infrastructure](target-azure-arm-or-blueprint-provisioned-infrastructure.md). +* **Provision the Azure infrastructure:** use the Infrastructure Provisioner in a Harness Workflow to provision the Azure resources. This will run your ARM template and create its Azure resources. These resources could be the target infrastructure for a deployment from the Infrastructure Definition or simply other Azure resources. See [Provision using a Harness ARM Infrastructure Provisioner](provision-using-the-arm-blueprint-create-resource-step.md). 
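Adding the template involves setting its scope in Harness, and (as Step 3 explains) the scope is identified by the template's `$schema` URL. A rough sketch of that mapping, as a hypothetical helper that is not part of Harness:

```python
# Hypothetical helper (not a Harness API): infer a template's Harness
# scope setting from its "$schema" URL, per the mapping in Step 3.
import json

SCOPE_SCHEMAS = {
    "deploymentTemplate.json": "Resource group",
    "subscriptionDeploymentTemplate.json": "Subscription",
    "managementGroupDeploymentTemplate.json": "Management group",
    "tenantDeploymentTemplate.json": "Tenant",
}

def template_scope(template_json: str) -> str:
    # e.g. "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#"
    schema = json.loads(template_json)["$schema"]
    filename = schema.rstrip("#").rsplit("/", 1)[-1]
    return SCOPE_SCHEMAS[filename]

print(template_scope(
    '{"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#"}'
))  # -> Resource group
```

For instance, a template whose `$schema` ends in `subscriptionDeploymentTemplate.json#` should be added with the Subscription scope.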
+ +In this topic: + +* [Before You Begin](#before_you_begin) +* [Visual Summary](#visual_summary) +* [Supported Platforms and Technologies](#supported_platforms_and_technologies) +* [Step 1: Add Harness Delegate](#step_1_add_harness_delegate) +* [Step 2: Add Source Repo Provider](#step_2_add_source_repo_provider) +* [Step 3: Add the Infrastructure Provisioner](#step_3_add_the_infrastructure_provisioner) +* [Configure As Code](#configure_as_code) + +### Before You Begin + +* Get an overview of provisioning with ARM in [Azure ARM and Blueprint Provisioning with Harness](../../concepts-cd/deployment-types/azure-arm-and-blueprint-provision-with-harness.md). +* [Set Up Your Harness Account for Azure ARM](set-up-your-harness-account-for-azure-arm.md) +* [Infrastructure Provisioners Overview](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner) + +### Visual Summary + +The following video shows you how to add an ARM template from [Azure's ARM templates GitHub account](https://github.com/Azure/azure-quickstart-templates) to Harness as a Harness Infrastructure Provisioner. + +### Supported Platforms and Technologies + +See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). + +### Step 1: Add Harness Delegate + +Make sure you have set up a Harness Delegate as described in [Set Up Your Harness Account for Azure ARM](set-up-your-harness-account-for-azure-arm.md). + +The Delegate must be able to connect to your Git provider to add the ARM template, and to pull it at deployment runtime. + +### Step 2: Add Source Repo Provider + +Harness Source Repo Providers connect your Harness account with your Git platform accounts. + +For Azure ARM templates, you add a Harness Source Repo Provider and connect it to the Git repo for your ARM templates. + +For steps on setting up a Source Repo Provider, see [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). 
+ +For example, here is an Azure ARM template created by Azure and hosted in Harness' Docs repo at `https://github.com/wings-software/harness-docs/101-vm-simple-windows/azuredeploy.json`. + +Here is a Harness Source Repo Provider that uses the URL of the repo and the master branch: + +![](./static/add-azure-arm-templates-00.png)Next, you use this Source Repo Provider as the source of your Harness Infrastructure Provisioner. + +### Step 3: Add the Infrastructure Provisioner + +In your Harness Application, click **Infrastructure Provisioners**. + +Click **Add Infrastructure Provisioner**, and then click **ARM Template**. + +In **Azure Resource Type**, click **ARM**. + +Enter a name and description. + +In **Scope**, enter the scope for the template. The schema link in the template identifies the template scope: + +* Resource group: `"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#"` +* Subscription: `"$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#"` +* Management group: `"$schema": "https://schema.management.azure.com/schemas/2019-08-01/managementGroupDeploymentTemplate.json#"` +* Tenant: `"$schema": "https://schema.management.azure.com/schemas/2019-08-01/tenantDeploymentTemplate.json#"` + +In **Source Type**, select **Git Repository** or **Template Body**. + +If you select **Template Body**, paste the template in **Template Body**, and click **Submit**. + +If you are using templates from a third party, ensure that the templates are formatted correctly. If you select **Git Repository**, select the Harness Source Repo Provider you set up to connect Harness to your Git repo. + +In **Commit**, enter the branch name or commit ID for the repo. + +In **File Path**, enter the path to the template JSON file in the Git repo. You don't need to enter the repo name, as that is set up in the Harness Source Repo Provider. 
+ +Let's look at an example: + +* I have a Harness Source Repo Provider for the repo `https://github.com/wings-software/harness-docs`. +* My template is located at `https://github.com/wings-software/harness-docs/blob/main/101-vm-simple-windows/azuredeploy.json`. +* In **File Path**, I enter `101-vm-simple-windows/azuredeploy.json`. + +![](./static/add-azure-arm-templates-01.png)Click **Submit**. + +The Infrastructure Provisioner is added. + +Now you can do the following: + +* **Target the Azure infrastructure:** use the Infrastructure Provisioner in a Harness Infrastructure Definition. Next, you add this Infrastructure Definition to a Workflow to define the ARM template's resources as the target infrastructure for the deployment. See [Target an Azure ARM Provisioned Infrastructure](target-azure-arm-or-blueprint-provisioned-infrastructure.md). +* **Provision the Azure infrastructure:** use the Infrastructure Provisioner in a Harness Workflow to provision the Azure resources. This will run your ARM template and create its Azure resources. These resources could be the target infrastructure for a deployment or simply other Azure resources. See [Provision using a Harness ARM Infrastructure Provisioner](provision-using-the-arm-blueprint-create-resource-step.md). + +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the YAML editor button. 
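Tying the File Path example in Step 3 together: the File Path is just the template's blob URL with the Source Repo Provider's repo URL and branch segment removed. A quick sketch of that relationship (a hypothetical helper, not a Harness API):

```python
# Hypothetical helper: derive the Harness File Path setting from a GitHub
# blob URL and the repo URL configured in the Source Repo Provider.
def file_path(blob_url: str, repo_url: str, branch: str) -> str:
    prefix = f"{repo_url}/blob/{branch}/"
    assert blob_url.startswith(prefix), "blob URL must live in the configured repo/branch"
    return blob_url[len(prefix):]

print(file_path(
    "https://github.com/wings-software/harness-docs/blob/main/101-vm-simple-windows/azuredeploy.json",
    "https://github.com/wings-software/harness-docs",
    "main",
))  # -> 101-vm-simple-windows/azuredeploy.json
```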
+ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/azure-arm-and-blueprint-how-tos.md b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/azure-arm-and-blueprint-how-tos.md new file mode 100644 index 00000000000..a2875f13122 --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/azure-arm-and-blueprint-how-tos.md @@ -0,0 +1,54 @@ +--- +title: Azure Resource Management (ARM) How-tos +description: Harness has first-class support for Azure Resource Manager (ARM) templates as an infrastructure provisioner. You can use ARM templates to provision the deployment target environment in Azure, or to s… +sidebar_position: 100 +helpdocs_topic_id: qhnnq1mks3 +helpdocs_category_id: 3i7h1lzlt2 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness has first-class support for [Azure Resource Manager (ARM) templates](https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/overview) as an infrastructure provisioner. + +You can use ARM templates to provision the deployment target environment in Azure, or to simply provision any Azure infrastructure. + +See the following ARM How-tos: + +* [Set Up Your Harness Account for Azure ARM](set-up-your-harness-account-for-azure-arm.md) +* [Add Azure ARM Templates to Harness](add-azure-arm-templates.md) +* [Provision and Deploy to ARM Provisioned Infrastructure](target-azure-arm-or-blueprint-provisioned-infrastructure.md) +* [Provision Resources using a Harness ARM Infrastructure Provisioner](provision-using-the-arm-blueprint-create-resource-step.md) +* [Use Azure ARM Template Outputs in Workflow Steps](use-azure-arm-and-blueprint-parameters-in-workflow-steps.md) +* [Azure ARM Rollbacks](azure-arm-rollbacks.md) + +For a conceptual overview of provisioning with ARM and Blueprints, including videos, see [Azure ARM and Blueprint Provisioning with Harness](../../concepts-cd/deployment-types/azure-arm-and-blueprint-provision-with-harness.md). 
+ +### Limitations + +* Harness supports the following scope types: + + Tenant + + Management Group + + Subscription + + Resource Group +* Harness ARM Template provisioning is supported in Canary and Multi-Service Workflows, and Blue/Green Workflows that [deploy Azure Web Apps](../azure-webapp-category/azure-web-app-deployments-overview.md). +If you want to provision infrastructure without deploying any resources to the provisioned infrastructure, use a Canary Workflow and an **ARM/Blueprint Create Resource** step in its **Pre-deployment Steps** and omit any further phases and steps. +* **Azure Web App deployment targets only:** you can use ARM templates with Harness to provision any Azure resources, but deployment target provisioning is limited to Azure Web App deployments. +A deployment target is defined in the Infrastructure Definition used by a Workflow (or Workflow Phase). In an Infrastructure Definition that uses the Microsoft Azure **Cloud Provider Type**, you will only see **Map Dynamically Provisioned Infrastructure** if you select **Azure Web Application** in **Deployment Type**. +See [Provision and Deploy to ARM Provisioned Infrastructure](target-azure-arm-or-blueprint-provisioned-infrastructure.md). +* Incremental mode is supported for all Scope types (Subscription, Resource group, Management group, Tenant) and Complete mode is supported for Resource group only. +* ARM templates must be in JSON. Bicep isn't supported. +* Rollback is supported for the Resource group scope only. See [Azure ARM Rollbacks](azure-arm-rollbacks.md). + +### Azure Roles Required + +See **Azure Resource Management (ARM)** in [Add Microsoft Azure Cloud Provider](https://docs.harness.io/article/4n3595l6in-add-microsoft-azure-cloud-provider). + +### Harness Permissions Required + +To set up a Harness ARM Provisioner, your Harness User account must belong to a User Group with the following Application Permissions: + +* **Permission Type:** `Provisioners`. 
+* **Application:** one or more Applications. +* **Filter:** `All Provisioners`. +* **Action:** `Create, Read, Update, Delete`. + diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/azure-arm-rollbacks.md b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/azure-arm-rollbacks.md new file mode 100644 index 00000000000..78c54c28c2b --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/azure-arm-rollbacks.md @@ -0,0 +1,55 @@ +--- +title: Azure ARM Rollbacks +description: Harness generates a template of the existing resource group and saves it before starting ARM Deployment. It uses this for rollback. +# sidebar_position: 2 +helpdocs_topic_id: 06mkvd27tu +helpdocs_category_id: 3i7h1lzlt2 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes Harness Azure ARM rollbacks. + + +### Limitations + +* **Only Resource Group scope rollback is supported:** rollbacks are supported for Azure ARM provisioning using the **Resource Group** scope only. +When you add your ARM template to Harness as an Infrastructure Provisioner, you specify its scope. See [Add Azure ARM Templates to Harness](add-azure-arm-templates.md). +If you choose a scope other than **Resource Group**, any Workflow using that Infrastructure Provisioner will not rollback provisioned Azure resources if it fails. +* **Storage account not supported for resource group rollback:** if the ARM template used to create a resource group has a storage account (`Microsoft.Storage/storageAccounts`), then rollback fails for that storage account. +This is because the template generated from Azure is not valid. During rollback you might see an error like this: + + +``` +[Resource - [Microsoft.Storage/storageAccounts/fileServices - storagewfvariables1234/default], +failed due to - [{error={code=InvalidXmlDocument,message=XML specified is not syntactically valid. 
+RequestId:0000-001a-0064-000-103dcd000000 Time:2021-03-03T12:25:02.5619016Z}}]  +``` +### Rollback Summary + +When you add the **ARM/Blueprint Create Resource** step to a Workflow, Harness adds ARM Rollback functionality automatically. No ARM Rollback step appears in the Workflow, but it will appear in the deployment in **Deployments** if there is a rollback. + +When running a Harness Workflow that performs provisioning using an ARM template, Harness generates a template of the existing resource group and saves it before starting ARM Deployment. + +You can see Harness saving the template in the **Execute ARM Deployment** section of the **ARM/Blueprint Create Resource** step: + + +``` +Starting template validation +Saving existing template for resource group - [harness-arm-test] +Starting ARM Deployment at Resource Group scope ... +Resource Group - [harness-arm-test] +Mode - [INCREMENTAL] +Deployment Name - [harness_558_1616014910588] +ARM Deployment request send successfully +``` +During rollback, this template is used to restore the resource group to its state before the deployment started. You can see the rollback in the **Execute ARM Deployment** of the **ARM Rollback** step: + + +``` +Starting ARM Rollback at Resource Group scope ... 
+Resource Group - [anil-harness-arm-test] +Mode - [COMPLETE] +Deployment Name - [harness_rollback_367_1616019421845] +ARM Rollback request send successfully +``` diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/provision-using-the-arm-blueprint-create-resource-step.md b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/provision-using-the-arm-blueprint-create-resource-step.md new file mode 100644 index 00000000000..cbc14849f8c --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/provision-using-the-arm-blueprint-create-resource-step.md @@ -0,0 +1,239 @@ +--- +title: Provision Resources using a Harness ARM Infrastructure Provisioner +description: You can provision Azure resources using ARM templates in your Harness Workflows. +# sidebar_position: 2 +helpdocs_topic_id: qlvrdq7uv6 +helpdocs_category_id: 3i7h1lzlt2 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can provision Azure resources using ARM templates in your Harness Workflows. Harness can provision the resources on their own or as part of a Workflow performing other deployment steps. + +You can also use Azure ARM templates to provision the target infrastructure for some Azure deployments. Harness provisions the infrastructure and then deploys to it in the same Workflow. For steps on this process, see [Provision and Deploy to ARM Provisioned Infrastructure](target-azure-arm-or-blueprint-provisioned-infrastructure.md). + +Currently, only [Azure Web App deployments](../azure-webapp-category/azure-web-app-deployments-overview.md) are supported for target infrastructure provisioning. 
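The **ARM/Blueprint Create Resource** step configured in this topic runs in one of Azure's two deployment modes, Incremental or Complete. A toy sketch of the difference, using resource names to stand in for real Azure resources (illustrative only; Azure operates on actual resources, not name lists):

```python
# Illustrative sketch of Azure's two ARM deployment modes.
def resources_after_deploy(existing, template, mode):
    if mode == "INCREMENTAL":
        # Incremental: resources absent from the template are left untouched.
        return sorted(set(existing) | set(template))
    if mode == "COMPLETE":
        # Complete: resources absent from the template are deleted.
        return sorted(set(template))
    raise ValueError(f"unknown mode: {mode}")

print(resources_after_deploy({"vm1", "disk1"}, {"vm1", "vnet1"}, "INCREMENTAL"))
# -> ['disk1', 'vm1', 'vnet1']
print(resources_after_deploy({"vm1", "disk1"}, {"vm1", "vnet1"}, "COMPLETE"))
# -> ['vm1', 'vnet1']
```

This is why Complete mode (supported for the Resource group scope only) deserves extra care: anything in the resource group that the template doesn't describe is removed.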
+ + +### Before You Begin + +* [Set Up Your Harness Account for Azure ARM](set-up-your-harness-account-for-azure-arm.md) +* [Add Azure ARM Templates to Harness](add-azure-arm-templates.md) +* For a conceptual overview of provisioning with ARM and Blueprints, see [Azure ARM and Blueprint Provisioning with Harness](../../concepts-cd/deployment-types/azure-arm-and-blueprint-provision-with-harness.md). + +### Visual Summary + +Here's a short video showing how to provision Azure infrastructure using ARM and Harness: + + + + + +You can use Azure ARM templates in Harness to provision any resources. + +1. **ARM Infrastructure Provisioner**: add your Azure ARM template as a Harness Infrastructure Provisioner. +2. **Workflow Provisioner Step**: there are a few ways to use Workflows to provision: + 1. Create a Canary Workflow or Multi-Service Workflow and add an **ARM/Blueprint Create Resource** step in its **Pre-deployment Steps** to provision the resources you need. You can use the Workflow to deploy anything else, or just omit any further phases and steps. + 2. Create a Canary or Blue/Green Workflow that deploys a Harness Service of the Azure Web App type. Add an **ARM/Blueprint Create Resource** step to its **Provision Infrastructure** section. +3. **Deploy:** the Workflow will provision the resource according to your ARM template. + +When you run the Workflow, it can provision the resources without deploying anything else. + +![](./static/provision-using-the-arm-blueprint-create-resource-step-02.png) + +### Limitations + +* See [Azure Resource Management (ARM) How-tos](azure-arm-and-blueprint-how-tos.md). + +### Step 1: Add the Infrastructure Provisioner + +A Harness Infrastructure Provisioner connects Harness to the Git repo where your ARM template is located. + +To set up a Harness Infrastructure Provisioner for an ARM template, follow the steps in [Add Azure ARM Templates to Harness](add-azure-arm-templates.md). 
+ +### Step 2: Add ARM/Blueprint Create Resource Step to Workflow + +Canary, Multi-Service, and Blue/Green Workflow types contain a pre-deployment section where you can provision the infrastructure using your Harness Infrastructure Provisioner. + +Let's look at a Canary Workflow. + +In a Canary Workflow, in **Pre-deployment Steps**, click **Add Step**. + +Click **ARM/Blueprint Create Resource** and then click **Next**. + +In **Overview**, in **Provisioner**, select the Infrastructure Provisioner for your ARM template. + +In **Azure Cloud Provider**, enter the Cloud Provider for Harness to use when connecting to Azure and provisioning with the template. + +The Azure service account used with the Cloud Provider must have the Azure permissions needed to provision the resources in your template. See **Azure Resource Management (ARM)** in [Add Microsoft Azure Cloud Provider](https://docs.harness.io/article/4n3595l6in-add-microsoft-azure-cloud-provider). In **Subscription**, select the Azure subscription for the provisioned resources. + +In **Resource Group**, select the resource group for the provisioned resources. + +In **Mode**, select **Incremental** or **Complete**. This is the same as entering the `--mode` parameter in the `az deployment group create` command. + +Incremental mode is supported for all Scope types (Subscription, Resource group, Management group, Tenant) and Complete mode is supported for Resource group only. For more information, see [Azure Resource Manager deployment modes](https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deployment-modes) from Azure. + +In **Timeout**, enter at least 20m. Provisioning Azure resources can take time. + +Click **Next**. Now you can specify template parameters. + +### Step 3: Specify Template Parameters + +In **Parameters**, you enter or link to your template parameters. + +In **Source Type**, select **Inline** or **Remote**. 
+ +If you select **Inline**, enter the parameters in **Type/Paste JSON Configuration**. + +If you select **Remote**, in **Git Repository**, select the Harness Source Repo Provider that connects to the repo where your parameters file is located. + +For more information on the Source Repo Provider, see [Set Up Your Harness Account for Azure ARM](set-up-your-harness-account-for-azure-arm.md). + +You can specify the repo branch or commit ID and the path to the parameters JSON file. Always include the filename. + +#### Review: Parameters JSON Format + +Harness accepts ARM template parameters in a specific JSON format. + +Typically, a parameters JSON file includes the `$schema` key to specify the location of the JSON schema file, and the `contentVersion` to specify the version of the template: + + +``` +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "adminUsername": { + "value": "johnsmith" + }, + "adminPassword": { + "value": "m2y&oD7k5$eE" + }, + "dnsLabelPrefix": { + "value": "genunique" + } + } +} +``` +When you use parameters text or files with Harness, you must remove the `$schema` and `contentVersion` keys. + +Harness provisioning requires you to remove these keys due to limitations in the Azure Java SDK and REST APIs. Only the parameter object key:value pairs are allowed. + +Using the example above, the parameters would be provided like this in Harness: + + +``` +{ + "adminUsername": { + "value": "johnsmith" + }, + "adminPassword": { + "value": "m2y&oD7k5$eE" + }, + "dnsLabelPrefix": { + "value": "genunique" + } +} +``` +This format must be used whether the parameters are added using a remote file or inline. + +Click **Submit**. + +The **ARM/Blueprint Create Resource** step is added to the Workflow. + +You can now add the remaining steps for your deployment. + +At runtime, the **ARM/Blueprint Create Resource** step will provision the resources. 
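Because Harness only accepts the parameter key:value pairs, a standard parameters file can be converted by dropping the `$schema` and `contentVersion` keys and keeping the `parameters` object alone. A minimal sketch (a hypothetical helper, not part of Harness):

```python
# Hypothetical helper: convert a standard ARM parameters file into the
# format Harness accepts (the "parameters" object only).
import json

def to_harness_parameters(parameters_file_text: str) -> str:
    doc = json.loads(parameters_file_text)
    # Standard files nest values under "parameters"; keep just that object,
    # discarding $schema and contentVersion.
    params = doc.get("parameters", doc)
    return json.dumps(params, indent=2)

standard = '''{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {"dnsLabelPrefix": {"value": "genunique"}}
}'''
print(to_harness_parameters(standard))
```

The printed result is the `dnsLabelPrefix` object on its own, matching the Harness-format example above.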
+ +### Option: Use Harness Variables in Template and Parameters + +You can use Harness Workflow and built-in variables in your ARM template and parameters. + +At runtime, Harness will replace the variables with the values you or Harness supplies, and then provision using the template and parameters. + +Here is an example of an Azure Web App template using Workflow variables: + + +``` +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "count": { + "type": "int", + "defaultValue": ${workflow.variables.count} + }, +... +``` +Here is an example of parameters using Workflow variables: + + +``` +{ + "webAppName": { + "value": "${workflow.variables.webAppParam}" + }, + "publicIPAddresses_name": { + "value": "my-publicIp" + } +} +``` +See [Set Workflow Variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) and [Built-in Variables List](https://docs.harness.io/article/aza65y4af6-built-in-variables-list). + +### Option: Use Template Outputs in Workflow Steps + +You can use the `${arm.}` expression to reference the template outputs relevant to Azure Web App Workflow steps. + +For details on Web App deployments, see [Azure Web App Deployments Overview](../azure-webapp-category/azure-web-app-deployments-overview.md). + +Let's look at an example of the **Slot Deployment** step in a Web App [Canary Workflow deployment](../azure-webapp-category/create-an-azure-web-app-canary-deployment.md). + +Normally, you would select or enter the App Service, Deployment, and Target Slots for the Web App deployment. 
+ +![](./static/provision-using-the-arm-blueprint-create-resource-step-03.png)When provisioning, you enter the `${arm.}` expression for each setting, mapping the outputs to the step's settings: + +![](./static/provision-using-the-arm-blueprint-create-resource-step-04.png)At runtime, Harness will substitute the output values, which in this case are taken from a parameters file, and use them for the **Slot Deployment** step. + +### Step 4: Deploy the Workflow + +Here is an example of a Blue/Green Azure Web App Workflow deployment that uses the Infrastructure Provisioner in its Infrastructure Definition and **ARM/Blueprint Create Resource** step: + +![](./static/provision-using-the-arm-blueprint-create-resource-step-05.png)In the **ARM/Blueprint Create Resource** step's **Execute ARM Deployment** section, you can see the ARM deployment: + + +``` +Starting template validation +Saving existing template for resource group - [anil-harness-arm-test] +Starting ARM Deployment at Resource Group scope ... 
+Resource Group - [anil-harness-arm-test] +Mode - [INCREMENTAL] +Deployment Name - [harness_533_1615316689992] +ARM Deployment request send successfully +``` +In the **ARM Deployment Steady state** section, you can see the deployment reach steady state: + + +``` +Deployment Status for - [harness_533_1615316689992] is [Running] +Deployment Status for - [harness_533_1615316689992] is [Running] +Deployment Status for - [harness_533_1615316689992] is [Running] +Deployment Status for - [harness_533_1615316689992] is [Running] +Deployment Status for - [harness_533_1615316689992] is [Succeeded] + +Microsoft.Web/sites/slots - anil-dynamic-provisioner-webApp/staging :: [Succeeded] +Microsoft.Web/sites - anil-dynamic-provisioner-webApp :: [Succeeded] +Microsoft.Web/serverfarms - anil-dynamic-provisioner-webApp-ServicePlan :: [Succeeded] + +ARM Deployment - [harness_533_1615316689992] completed successfully +``` +In the **Slot Deployment** step, you will see that the values provided for the template outputs mapped to that step are used. + +Now you have provisioned the Web App target infrastructure and deployed to it using a single Workflow. + +For information on rollback, see [Azure ARM Rollbacks](azure-arm-rollbacks.md). + +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. + diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/set-up-your-harness-account-for-azure-arm.md b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/set-up-your-harness-account-for-azure-arm.md new file mode 100644 index 00000000000..0f8c98bd0bf --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/set-up-your-harness-account-for-azure-arm.md @@ -0,0 +1,76 @@ +--- +title: Set Up Your Harness Account for Azure ARM +description: Set up Harness account components for ARM provisioning. 
+sidebar_position: 200 +helpdocs_topic_id: l3do0np70h +helpdocs_category_id: 3i7h1lzlt2 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The first step in integrating your ARM templates into Harness is setting up the necessary Harness account components: Delegates, Cloud Providers, and Source Repo Providers. + +This topic describes how to set up these components for ARM. + +Once you set your account for ARM, you can begin integrating your ARM templates. See [Add Azure ARM Templates to Harness](add-azure-arm-templates.md). + +In this topic: + +* [Before You Begin](set-up-your-harness-account-for-azure-arm.md#before-you-begin) +* [Limitations](set-up-your-harness-account-for-azure-arm.md#limitations) +* [Step 1: Install a Harness Delegate](set-up-your-harness-account-for-azure-arm.md#step-1-install-a-harness-delegate) +* [Step 2: Set Up the Azure Cloud Provider](set-up-your-harness-account-for-azure-arm.md#step-2-set-up-the-azure-cloud-provider) +* [Step 3: Set Up Source Repo Provider](set-up-your-harness-account-for-azure-arm.md#step-3-set-up-source-repo-provider) +* [Option: Set Up the Harness Artifact Server](set-up-your-harness-account-for-azure-arm.md#option-set-up-the-harness-artifact-server) + +### Before You Begin + +* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) +* Get an overview of provisioning with ARM in [Azure ARM and Blueprint Provisioning with Harness](../../concepts-cd/deployment-types/azure-arm-and-blueprint-provision-with-harness.md). +* [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) +* [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers) +* [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers) + +### Limitations + +See [Azure Resource Management (ARM) How-tos](azure-arm-and-blueprint-how-tos.md). 
+ +### Step 1: Install a Harness Delegate + +A Harness Delegate performs the ARM provisioning defined in your ARM templates. When installing the Delegate for your ARM provisioning, consider the following: + +* Install the Delegate where it can connect to the target infrastructure. + + If you are using ARM to provision the target infrastructure for an Azure Web App deployment, make sure this Delegate is in, or can connect to, the resource group for your Azure Web App. +* The Delegate must also be able to connect to your template repo. The Delegate will pull the templates at deployment runtime. +* All Harness Delegate types can use ARM. + +To install a Delegate, follow the steps in [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation). Once you install the Delegate and it registers with Harness, you'll see it on the Harness Delegates page. + +### Step 2: Set Up the Azure Cloud Provider + +A Harness Azure Cloud Provider connects to your Azure subscription using your Client ID and Tenant ID. + +Follow the steps in [Add Microsoft Azure Cloud Provider](https://docs.harness.io/article/4n3595l6in-add-microsoft-azure-cloud-provider) to connect Harness to Azure. + +The Azure service account for the Azure Cloud Provider will need the roles required for the Azure resources you are provisioning. + +See **Azure Resource Management (ARM)** in [Add Microsoft Azure Cloud Provider](https://docs.harness.io/article/4n3595l6in-add-microsoft-azure-cloud-provider). + +### Step 3: Set Up Source Repo Provider + +Harness pulls Azure ARM templates from a Git repo, such as GitHub. + +Add a Harness Source Repo Provider to connect Harness to the Git repo for your templates. + +You can also add your templates and parameters inline in Harness. In this case, you do not need a Source Repo Provider. + +See [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). 
+ +### Option: Set Up the Harness Artifact Server + +If you are performing ARM provisioning as part of an [Azure Web App deployment](../azure-webapp-category/azure-web-app-deployments-overview.md), add a Harness Artifact Server to connect Harness to your Docker artifact server, such as a Docker Registry or Artifactory. + +See [Connect to Azure and Artifact Repo for Your Web App Deployments](../azure-webapp-category/connect-to-azure-for-web-app-deployments.md). + +If you store the artifact Docker image in Azure Container Registry, then you can use the Azure Cloud Provider you set up and skip the Artifact Server setup. If you are simply going to provision resources without deploying anything, you don't need to set up a Harness Artifact Server. + diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/add-azure-arm-templates-00.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/add-azure-arm-templates-00.png new file mode 100644 index 00000000000..05675a1fcf5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/add-azure-arm-templates-00.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/add-azure-arm-templates-01.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/add-azure-arm-templates-01.png new file mode 100644 index 00000000000..4ff1b725aa5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/add-azure-arm-templates-01.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/provision-using-the-arm-blueprint-create-resource-step-02.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/provision-using-the-arm-blueprint-create-resource-step-02.png new file mode 100644 index 00000000000..fef3ba904ac Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/provision-using-the-arm-blueprint-create-resource-step-02.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/provision-using-the-arm-blueprint-create-resource-step-03.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/provision-using-the-arm-blueprint-create-resource-step-03.png new file mode 100644 index 00000000000..ea91d02a291 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/provision-using-the-arm-blueprint-create-resource-step-03.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/provision-using-the-arm-blueprint-create-resource-step-04.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/provision-using-the-arm-blueprint-create-resource-step-04.png new file mode 100644 index 00000000000..3be9ee5690b Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/provision-using-the-arm-blueprint-create-resource-step-04.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/provision-using-the-arm-blueprint-create-resource-step-05.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/provision-using-the-arm-blueprint-create-resource-step-05.png new file mode 100644 index 00000000000..d861ec162e1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/provision-using-the-arm-blueprint-create-resource-step-05.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/target-azure-arm-or-blueprint-provisioned-infrastructure-06.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/target-azure-arm-or-blueprint-provisioned-infrastructure-06.png new file mode 100644 index 00000000000..939e9df8280 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/target-azure-arm-or-blueprint-provisioned-infrastructure-06.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/target-azure-arm-or-blueprint-provisioned-infrastructure-07.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/target-azure-arm-or-blueprint-provisioned-infrastructure-07.png new file mode 100644 index 00000000000..5c97b18f6fb Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/target-azure-arm-or-blueprint-provisioned-infrastructure-07.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/target-azure-arm-or-blueprint-provisioned-infrastructure-08.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/target-azure-arm-or-blueprint-provisioned-infrastructure-08.png new file mode 100644 index 00000000000..2f07f5a348c Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/target-azure-arm-or-blueprint-provisioned-infrastructure-08.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/target-azure-arm-or-blueprint-provisioned-infrastructure-09.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/target-azure-arm-or-blueprint-provisioned-infrastructure-09.png new file mode 100644 index 00000000000..3d428e73f2f Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/target-azure-arm-or-blueprint-provisioned-infrastructure-09.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/target-azure-arm-or-blueprint-provisioned-infrastructure-10.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/target-azure-arm-or-blueprint-provisioned-infrastructure-10.png new file mode 100644 index 00000000000..ea91d02a291 Binary 
files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/target-azure-arm-or-blueprint-provisioned-infrastructure-10.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/target-azure-arm-or-blueprint-provisioned-infrastructure-11.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/target-azure-arm-or-blueprint-provisioned-infrastructure-11.png new file mode 100644 index 00000000000..3be9ee5690b Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/target-azure-arm-or-blueprint-provisioned-infrastructure-11.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/target-azure-arm-or-blueprint-provisioned-infrastructure-12.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/target-azure-arm-or-blueprint-provisioned-infrastructure-12.png new file mode 100644 index 00000000000..d861ec162e1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/target-azure-arm-or-blueprint-provisioned-infrastructure-12.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/use-azure-arm-and-blueprint-parameters-in-workflow-steps-13.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/use-azure-arm-and-blueprint-parameters-in-workflow-steps-13.png new file mode 100644 index 00000000000..ea91d02a291 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/use-azure-arm-and-blueprint-parameters-in-workflow-steps-13.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/use-azure-arm-and-blueprint-parameters-in-workflow-steps-14.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/use-azure-arm-and-blueprint-parameters-in-workflow-steps-14.png new file mode 100644 index 
00000000000..3be9ee5690b Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/static/use-azure-arm-and-blueprint-parameters-in-workflow-steps-14.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/target-azure-arm-or-blueprint-provisioned-infrastructure.md b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/target-azure-arm-or-blueprint-provisioned-infrastructure.md new file mode 100644 index 00000000000..f7f49936a78 --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/target-azure-arm-or-blueprint-provisioned-infrastructure.md @@ -0,0 +1,415 @@ +--- +title: Provision and Deploy to ARM Provisioned Infrastructure +description: Use Azure ARM templates to provision the target infrastructure for some Azure deployments. +# sidebar_position: 2 +helpdocs_topic_id: idqiy49prl +helpdocs_category_id: 3i7h1lzlt2 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can use Azure ARM templates to provision the target infrastructure for some Azure deployments. Harness provisions the infrastructure and then deploys to it in the same Workflow. + +Currently, only [Azure Web App deployments](../azure-webapp-category/azure-web-app-deployments-overview.md) are supported for target infrastructure provisioning. For steps on using ARM templates to provision non-target infrastructure and resources, see [Provision Resources using a Harness ARM Infrastructure Provisioner](provision-using-the-arm-blueprint-create-resource-step.md). + + +### Before You Begin + +* [Set Up Your Harness Account for Azure ARM](set-up-your-harness-account-for-azure-arm.md) +* [Add Azure ARM Templates to Harness](add-azure-arm-templates.md) +* For a conceptual overview of provisioning with ARM and Blueprints, see [Azure ARM and Blueprint Provisioning with Harness](../../concepts-cd/deployment-types/azure-arm-and-blueprint-provision-with-harness.md). 
+ +### Limitations + +* See [Azure Resource Management (ARM) How-tos](azure-arm-and-blueprint-how-tos.md). + +### Visual Summary + +Here's a short video showing how to provision and deploy to the same Azure infrastructure using ARM and Harness: + + + + + +Here's a diagram of how you use your Azure ARM templates in Harness to provision infra and then deploy to it: + +![](./static/target-azure-arm-or-blueprint-provisioned-infrastructure-06.png) + +1. **ARM Infrastructure Provisioner**: add your Azure ARM template as a Harness Infrastructure Provisioner. You add it by connecting to the Git repo for the ARM template. You also set the scope (Tenant, etc). You can also enter the ARM template inline without connecting to a Git repo. +2. **​Infrastructure Definition**: define a Harness Infrastructure Definition that maps your ARM outputs to the required Harness settings (Resource Group). +3. **Workflow Setup:** when you create your Workflow, you select the Infrastructure Definition you created, identifying it as the target infrastructure for the deployment. +4. **Workflow Provisioner Step:** in the Workflow, you add an **ARM/Blueprint Create Resource** step that uses the ARM Infrastructure Provisioner you set up. The Workflow will build the infrastructure according to your ARM template. You can also add ARM template parameter values here. +5. **Pre-deployment**: the pre-deployment steps are executed and provision the infrastructure using the **ARM/Blueprint Create Resource** step. +6. **Deployment:** the Workflow deploys to the provisioned infrastructure defined as its target Infrastructure Definition. + +### Supported Platforms and Technologies + +See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). + +### Step 1: Add the Infrastructure Provisioner + +A Harness Infrastructure Provisioner connects Harness to the Git repo where your ARM template is located. 
+ +To set up a Harness Infrastructure Provisioner for an ARM template, follow the steps in [Add Azure ARM Templates to Harness](add-azure-arm-templates.md). + +### Step 2: Create Infrastructure Definition + +Add a new Environment as described in [Add an Environment](https://docs.harness.io/article/n39w05njjv-environment-configuration). + +Click **Add Infrastructure Definition**. + +Name the Infrastructure Definition. + +In **Cloud Provider Type**, select **Microsoft Azure**. + +In **Deployment Type**, select **Azure Web Application**. + +The **Map Dynamically Provisioned Infrastructure** option appears. This option is only available if you select **Azure Web Application** in **Deployment Type**. + +### Step 3: Select the Infrastructure Provisioner + +In Provisioner, select the Harness Infrastructure Provisioner you set up for your ARM template. + +Harness will use this Infrastructure Provisioner to locate the outputs you map to its **Resource Group** setting. + +See [Add Azure ARM Templates to Harness](add-azure-arm-templates.md). + +### Step 4: Select the Cloud Provider and Subscription + +In **Cloud Provider**, select the Harness Cloud Provider the Workflow will use to connect to the provisioned infrastructure. + +Typically, this is the same Cloud Provider used later when you add the **ARM/Blueprint Create Resource** step in the Workflow. + +In **Subscription**, enter the Azure Subscription where the ARM template infrastructure will be provisioned. + +### Step 5: Map ARM Outputs in Infrastructure Definition + +The purpose of the **Map Dynamically Provisioned Infrastructure** option is to map ARM template outputs to the settings Harness needs to provision the infrastructure. + +At runtime, Harness will pull the values for the settings from your ARM template. + +Ensure that the ARM template you added in the Infrastructure Provisioner you selected in Provisioner includes an output for Resource Group. 
+ +For example, here are the outputs from an ARM template to provision Azure Web Apps: + + +``` +... +"outputs": { + "webApp": { + "type": "string", + "value": "[parameters('siteName')]" + }, + "slot": { + "type": "string", + "value": "[parameters('deploymentSlot')]" + }, + "resourceGroup": { + "type": "string", + "value": "harness-arm-test" + } + } +... +``` +You can see the `resourceGroup` output. You can reference that output, or any output, using the expression `${arm.<output name>}`. + +For example, to reference `resourceGroup` you can use `${arm.resourceGroup}`. + +In **Resource Group**, enter `${arm.resourceGroup}`. The value in the output is used at runtime. This is the same as providing the resource group in the `az deployment group create` command. + +Click **Submit**. + +The Infrastructure Provisioner is now defined as an Infrastructure Definition. + +You can now use this Infrastructure Definition in a Workflow as the target infrastructure. + +### Step 6: Select Infrastructure Definition in Workflow + +When you create the Harness Workflow that will deploy to the infrastructure in your ARM template, you will select the Infrastructure Definition you created using the ARM template's Infrastructure Provisioner. + +In a Canary or Multi-Service Workflow, you add the Infrastructure Definition in the Phase settings. + +![](./static/target-azure-arm-or-blueprint-provisioned-infrastructure-07.png) + +In a Blue/Green Workflow, you add the Infrastructure Definition in the Workflow settings. + +![](./static/target-azure-arm-or-blueprint-provisioned-infrastructure-08.png) + +Now that the Infrastructure Definition is set up as the target infrastructure for the Workflow, you can add a step to the Workflow to run the Infrastructure Provisioner and create that target infrastructure. 
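To make the output mapping concrete, here is a minimal sketch of how an expression such as `${arm.resourceGroup}` can be resolved against a template's outputs at runtime. This is an illustration only, not Harness's actual implementation; the output values are placeholders:

```python
import re

# Outputs section of a deployed ARM template (illustrative values).
arm_outputs = {
    "webApp": {"type": "string", "value": "myWebApp"},
    "slot": {"type": "string", "value": "staging"},
    "resourceGroup": {"type": "string", "value": "harness-arm-test"},
}

def resolve(text: str, outputs: dict) -> str:
    """Replace each ${arm.<output name>} expression with that output's value."""
    def substitute(match: re.Match) -> str:
        return str(outputs[match.group(1)]["value"])
    return re.sub(r"\$\{arm\.(\w+)\}", substitute, text)

print(resolve("${arm.resourceGroup}", arm_outputs))            # harness-arm-test
print(resolve("deploy to ${arm.webApp}/${arm.slot}", arm_outputs))
```

The same substitution idea applies to any setting that accepts an `${arm.<output name>}` expression, such as the **Resource Group** field above.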
+ +### Step 7: Add ARM/Blueprint Create Resource Step to Workflow + +Canary, Multi-Service, and Blue/Green Workflow types contain a pre-deployment section where you can provision the target infrastructure using your Harness Infrastructure Provisioner. + +Let's look at a Blue/Green Workflow. + +In a Blue/Green Workflow, in **Provision infrastructure**, click **Add Step**. + +Click **ARM/Blueprint Create Resource** and then click **Next**. + +In **Overview**, enter the same settings you used in the Infrastructure Definition. + +In **Resource Group**, select the same resource group that you used in the `resourceGroup` output in your template. + +The following image shows how the settings in the Infrastructure Definition map to the settings in the **ARM/Blueprint Create Resource** step. + +![](./static/target-azure-arm-or-blueprint-provisioned-infrastructure-09.png)In **Mode**, select **Incremental** or **Complete**. This is the same as entering the `--mode` parameter in the `az deployment group create` command. + +For more information, see [Azure Resource Manager deployment modes](https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deployment-modes) from Azure. + +In **Timeout**, enter at least 20m. Provisioning Azure resources can take time. + +Click **Next**. + +### Step 8: Specify Template Parameters + +In **Parameters**, you enter or link to your template parameters. + +In **Source Type**, select **Inline** or **Remote**. + +If you select **Inline**, enter the parameters in **Type/Paste JSON Configuration**. + +If you select **Remote**, in **Git Repository**, select the Harness Source Repo Provider that connects to the repo where your parameters file is located. + +You can specify the repo branch or commit ID and the path to the parameters JSON file. Always include the filename. + +#### Review: Parameters JSON Format + +Harness accepts ARM template parameters in a specific JSON format. 
+ +Typically, a parameters JSON file includes the `$schema` key to specify the location of the JSON schema file, and the `contentVersion` to specify the version of the template: + + +``` +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "adminUsername": { + "value": "johnsmith" + }, + "adminPassword": { + "value": "m2y&oD7k5$eE" + }, + "dnsLabelPrefix": { + "value": "genunique" + } + } +} +``` +When you use parameters text or files with Harness, you must remove the `$schema` and `contentVersion` keys. + +Harness provisioning requires that you remove these keys due to limitations in the Azure Java SDK and REST APIs. Only the parameter object key:value pairs are allowed. + +Using the example above, the parameters would be provided like this in Harness: + + +``` +{ + "adminUsername": { + "value": "johnsmith" + }, + "adminPassword": { + "value": "m2y&oD7k5$eE" + }, + "dnsLabelPrefix": { + "value": "genunique" + } +} +``` +This format must be used whether the parameters are added using a remote file or inline. + +Click **Submit**. + +The **ARM/Blueprint Create Resource** step is added to the Workflow. + +You can now add the remaining steps for your deployment to the infrastructure the **ARM/Blueprint Create Resource** step will provision. + +### Step 9: Use Template Outputs in Workflow Steps + +When you added the Infrastructure Provisioner to the Infrastructure Definition, you used the `${arm.resourceGroup}` expression to reference the resource group output in the ARM template. + +In the Azure Web App steps, you can use the `${arm.<output name>}` expression to reference the other outputs relevant to the Web App Workflow steps. + +For details on Web App deployments, see [Azure Web App Deployments Overview](../azure-webapp-category/azure-web-app-deployments-overview.md). 
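The parameter-stripping step described in the Review section above can be sketched as a small helper. This is a hypothetical convenience script, not part of Harness; it converts a standard ARM parameters file into the format Harness accepts by keeping only the parameter key:value pairs:

```python
import json

# A standard ARM parameters file, as you might export it from Azure
# (values here are placeholders).
raw = """{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "siteName": {"value": "myWebApp"},
    "deploymentSlot": {"value": "staging"}
  }
}"""

def to_harness_parameters(doc: str) -> str:
    """Drop $schema and contentVersion, returning only the parameters object."""
    return json.dumps(json.loads(doc)["parameters"], indent=2)

print(to_harness_parameters(raw))
```

The resulting JSON can be pasted inline in **Type/Paste JSON Configuration** or committed to the repo referenced by your Source Repo Provider.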
+ +Let's look at an example of the **Slot Setup** in a Web App [Blue/Green Workflow deployment](../azure-webapp-category/create-an-azure-web-app-blue-green-deployment.md#step-3-slot-deployment-step). + +Normally, you would select or enter the App Service, Deployment, and Target Slots for the Web App deployment. + +![](./static/target-azure-arm-or-blueprint-provisioned-infrastructure-10.png) + +When provisioning, you enter the `${arm.<output name>}` expression for each setting, mapping the outputs to the step's settings: + +![](./static/target-azure-arm-or-blueprint-provisioned-infrastructure-11.png) + +At runtime, Harness will substitute the output values, which in this case are taken from a parameters file, and use them for the **Slot Setup** step. + +If the Azure Web App Workflow uses an Infrastructure Definition that uses an Infrastructure Provisioner (such as an ARM Infrastructure Provisioner), then the **Slot Setup** step must use template outputs in its settings. The **Slot Setup** step uses the Infrastructure Definition settings to pull App Service and slot information from Azure. If the Infrastructure Definition uses an Infrastructure Provisioner, then Harness cannot obtain this information until runtime. + +### Step 10: Deploy the Workflow + +Here is an example of a Blue/Green Azure Web App Workflow deployment that uses the Infrastructure Provisioner in its Infrastructure Definition and **ARM/Blueprint Create Resource** step: + +![](./static/target-azure-arm-or-blueprint-provisioned-infrastructure-12.png)In the **ARM/Blueprint Create Resource** step's **Execute ARM Deployment** section, you can see the ARM deployment: + + +``` +Starting template validation +Saving existing template for resource group - [anil-harness-arm-test] +Starting ARM Deployment at Resource Group scope ... 
+Resource Group - [anil-harness-arm-test] +Mode - [INCREMENTAL] +Deployment Name - [harness_533_1615316689992] +ARM Deployment request send successfully +``` +In the **ARM Deployment Steady state** section you can see the deployment reach steady state: + + +``` +Deployment Status for - [harness_533_1615316689992] is [Running] +Deployment Status for - [harness_533_1615316689992] is [Running] +Deployment Status for - [harness_533_1615316689992] is [Running] +Deployment Status for - [harness_533_1615316689992] is [Running] +Deployment Status for - [harness_533_1615316689992] is [Succeeded] + +Microsoft.Web/sites/slots - anil-dynamic-provisioner-webApp/staging :: [Succeeded] +Microsoft.Web/sites - anil-dynamic-provisioner-webApp :: [Succeeded] +Microsoft.Web/serverfarms - anil-dynamic-provisioner-webApp-ServicePlan :: [Succeeded] + +ARM Deployment - [harness_533_1615316689992] completed successfully +``` +In the **Slot Setup** step, you will see that the values provided for the template outputs mapped to that step are used. + +Now you have provisioned the Web App target infrastructure and deployed to it using a single Workflow. + +For information on rollback, see [Azure ARM Rollbacks](azure-arm-rollbacks.md). + +### Sample ARM Template and Parameters + +Here is a sample ARM template for creating Azure Web App deployments. You will need to update the outputs for your environment. + + +``` +{ + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "siteName": { + "type": "string", + "metadata": { + "description": "The name of the web app that you wish to create." + } + }, + "deploymentSlot": { + "type": "string", + "metadata": { + "description": "The name of the deployment slot that you wish to create." 
+ } + } + }, + "variables": { + "servicePlanName": "[concat(parameters('siteName'), '-ServicePlan')]" + }, + "resources": [ + { + "apiVersion": "2016-09-01", + "type": "Microsoft.Web/serverfarms", + "kind": "linux", + "name": "[variables('servicePlanName')]", + "location": "[resourceGroup().location]", + "properties": { + "name": "[variables('servicePlanName')]", + "reserved": true, + "numberOfWorkers": "1" + }, + "dependsOn": [], + "sku": { + "Tier": "Standard", + "Name": "S1" + } + }, + { + "apiVersion": "2016-08-01", + "type": "Microsoft.Web/sites", + "name": "[parameters('siteName')]", + "location": "[resourceGroup().location]", + "properties": { + "siteConfig": { + "name": "[parameters('siteName')]", + "appSettings": [ + { + "name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE", + "value": "false" + } + ], + "linuxFxVersion": "DOCKER|nginx:alpine" + }, + "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('servicePlanName'))]" + }, + "dependsOn": [ + "[resourceId('Microsoft.Web/serverfarms', variables('servicePlanName'))]" + ] + }, + { + "apiVersion": "2020-06-01", + "type": "Microsoft.Web/sites/slots", + "name": "[concat(parameters('siteName'), '/', parameters('deploymentSlot'))]", + "kind": "app", + "location": "[resourceGroup().location]", + "comments": "This specifies the web app slots.", + "tags": { + "displayName": "WebAppSlots" + }, + "properties": { + "siteConfig": { + "name": "[parameters('siteName')]", + "appSettings": [ + { + "name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE", + "value": "false" + } + ], + "linuxFxVersion": "DOCKER|nginx:alpine" + }, + "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('servicePlanName'))]" + }, + "dependsOn": [ + "[resourceId('Microsoft.Web/Sites', parameters('siteName'))]" + ] + } + ], + "outputs": { + "webApp": { + "type": "string", + "value": "[parameters('siteName')]" + }, + "slot": { + "type": "string", + "value": "[parameters('deploymentSlot')]" + }, + "resourceGroup": { + "type": 
"string", + "value": "MyResourceGroup" + } + } +} +``` +Here is the parameters file for the template. You will need to update the values for your environment. + + +``` +{ + "siteName": { + "value": "myWebApp" + }, + "deploymentSlot": { + "value": "staging" + } +} +``` +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. + diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/use-azure-arm-and-blueprint-parameters-in-workflow-steps.md b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/use-azure-arm-and-blueprint-parameters-in-workflow-steps.md new file mode 100644 index 00000000000..047b29e1d3b --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-arm/use-azure-arm-and-blueprint-parameters-in-workflow-steps.md @@ -0,0 +1,164 @@ +--- +title: Use Azure ARM Template Outputs in Workflow Steps +description: You can use the template outputs in some Workflow step settings, or simply echo their values. +# sidebar_position: 2 +helpdocs_topic_id: 69fgm2e21d +helpdocs_category_id: 3i7h1lzlt2 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +When you use an ARM template in a Harness Workflow to provision Azure resources, you can use the template outputs in some Workflow step settings, or simply echo their values. + +This topic describes how to reference template outputs, how they are used in Azure Web App Workflow steps, and how to echo their values. 
+ +In this topic: + +* [Before You Begin](use-azure-arm-and-blueprint-parameters-in-workflow-steps.md#before-you-begin) +* [Limitations](use-azure-arm-and-blueprint-parameters-in-workflow-steps.md#limitations) +* [Step 1: Use an ARM Template in a Workflow](use-azure-arm-and-blueprint-parameters-in-workflow-steps.md#step-1-use-an-arm-template-in-a-workflow) +* [Review: Referencing ARM Template Outputs](use-azure-arm-and-blueprint-parameters-in-workflow-steps.md#review-referencing-arm-template-outputs) +* [Option 1: Use Outputs in the Slot Setup Step](use-azure-arm-and-blueprint-parameters-in-workflow-steps.md#option-1-use-outputs-in-the-slot-setup-step) +* [Option 2: Echo Template Outputs](use-azure-arm-and-blueprint-parameters-in-workflow-steps.md#option-2-echo-template-outputs) +* [Configure As Code](use-azure-arm-and-blueprint-parameters-in-workflow-steps.md#configure-as-code) + +### Before You Begin + +* [Set Up Your Harness Account for Azure ARM](set-up-your-harness-account-for-azure-arm.md) +* [Add Azure ARM Templates to Harness](add-azure-arm-templates.md) +* [Provision and Deploy to ARM Provisioned Infrastructure](target-azure-arm-or-blueprint-provisioned-infrastructure.md) +* [Provision Resources using a Harness ARM Infrastructure Provisioner](provision-using-the-arm-blueprint-create-resource-step.md) +* For a conceptual overview of provisioning with ARM and Blueprints, see [Azure ARM and Blueprint Provisioning with Harness](../../concepts-cd/deployment-types/azure-arm-and-blueprint-provision-with-harness.md). + +### Limitations + +* See [Azure Resource Management (ARM) How-tos](azure-arm-and-blueprint-how-tos.md). + +### Step 1: Use an ARM Template in a Workflow + +Once you have added an ARM template as a Harness Infrastructure Provisioner, you can use the Infrastructure Provisioner in a Workflow in the following ways: + +* **Target the Azure infrastructure:** use the Infrastructure Provisioner in a Harness Infrastructure Definition. 
Next, you add this Infrastructure Definition to a Workflow to define the ARM template's resources as the target infrastructure for the deployment. +See [Target an Azure ARM Provisioned Infrastructure](target-azure-arm-or-blueprint-provisioned-infrastructure.md). +* **Provision the Azure infrastructure:** use the Infrastructure Provisioner in a Harness Workflow to provision the Azure resources. This will run your ARM template and create its Azure resources. These resources could be the target infrastructure for a deployment from the Infrastructure Definition or simply other Azure resources. +See [Provision using a Harness ARM Infrastructure Provisioner](provision-using-the-arm-blueprint-create-resource-step.md). + +### Review: Referencing ARM Template Outputs + +You reference ARM template outputs in Workflow steps using the format `${arm.<output name>}`. First, you need to use the Infrastructure Provisioner that links to your ARM template. + +You use the Harness ARM Infrastructure Provisioner in a Workflow in the **ARM/Blueprint Create Resource** step. + +Once you have set up the **ARM/Blueprint Create Resource** step, you can reference the template's outputs. + +Ensure that the ARM template you added in the Infrastructure Provisioner you selected in **Provisioner** includes outputs. + +For example, here are the outputs from an ARM template to provision Azure Web Apps: + + +``` +... +"outputs": { + "webApp": { + "type": "string", + "value": "[parameters('siteName')]" + }, + "slot": { + "type": "string", + "value": "[parameters('deploymentSlot')]" + }, + "resourceGroup": { + "type": "string", + "value": "harness-arm-test" + } + } +... +``` +You can see the `resourceGroup` output. You can reference that output, or any output, using the expression `${arm.<output name>}`. + +For example, to reference `resourceGroup` you can use `${arm.resourceGroup}`. + +At runtime, Harness will pull the values for the settings from your ARM template. 
+
+### Option 1: Use Outputs in the Slot Setup Step
+
+Using outputs in the Slot Setup step as part of an Azure Web App deployment is described in detail in [Provision and Deploy to ARM Provisioned Infrastructure](target-azure-arm-or-blueprint-provisioned-infrastructure.md). If the Azure Web App Workflow uses an Infrastructure Definition that uses an Infrastructure Provisioner (such as an ARM Infrastructure Provisioner), then the **Slot Setup** step must use template outputs in its settings.
+
+The **Slot Setup** step uses the Infrastructure Definition settings to pull App Service and slot information from Azure. If the Infrastructure Definition uses an Infrastructure Provisioner, then Harness cannot obtain this information until runtime.
+
+For details on Web App deployments, see [Azure Web App Deployments Overview](../azure-webapp-category/azure-web-app-deployments-overview.md).
+
+Let's look at an example of the **Slot Setup** in a Web App [Blue/Green Workflow deployment](../azure-webapp-category/create-an-azure-web-app-blue-green-deployment.md#step-3-slot-deployment-step).
+
+Normally, you would select or enter the App Service, Deployment, and Target Slots for the Web App deployment.
+
+![](./static/use-azure-arm-and-blueprint-parameters-in-workflow-steps-13.png)When provisioning, you enter the `${arm.<output name>}` expression for each setting, mapping the outputs to the step's settings:
+
+![](./static/use-azure-arm-and-blueprint-parameters-in-workflow-steps-14.png)At runtime, Harness will substitute the output values, which in this case are taken from a parameters file, and use them for the **Slot Setup** step.
+
+### Option 2: Echo Template Outputs
+
+You can reference outputs in a Workflow step by simply echoing them. 
+ +For example, here are some outputs from a template: + + +``` +"outputs": { + "storageAccount": { + "type": "string", + "value": "anilstoragetestharness" + }, + "ipAddresses": { + "type": "array", + "copy": { + "count": "[parameters('count')]", + "input": "[reference(concat('nic-', copyIndex())).ipConfigurations[0].properties.privateIPAddress]" + } + }, + "publicIp": { + "type": "string", + "value": "[parameters('publicIPAddresses_name')]" + }, + "appPlan": { + "type": "string", + "value": "[variables('appServicePlanName')]" + }, + "webApp": { + "type": "string", + "value": "[variables('webAppPortalName')]" + }, + "slot": { + "type": "string", + "value": "[concat(variables('webAppPortalName'), '/testSlot')]" + } + } +``` +Here is a bash script for a Shell Script step that references the outputs: + + +``` +echo "***************** ARM output *****************" +echo "Storage account:" ${arm.storageAccount} +echo "Public Ip:" ${arm.publicIp} +echo "App Service Plan:" ${arm.appPlan} +echo "Web App Name:" ${arm.webApp} +echo "Web App Slot:" ${arm.slot} +``` +Here is the [Shell Script step](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) log from the deployment showing the echo of the outputs: + + +``` +Executing command ... +***************** ARM output ***************** +Storage account: anilstoragetestharness +Public Ip: anil-publicIp +App Service Plan: AppServicePlan-anil-paramWebapp-git +Web App Name: anil-paramWebapp-git-webapp +Web App Slot: anil-paramWebapp-git-webapp/testSlot +Command completed with ExitCode (0) +``` +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. 
+ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-blueprint-provisioning/_category_.json b/docs/first-gen/continuous-delivery/azure-deployments/azure-blueprint-provisioning/_category_.json new file mode 100644 index 00000000000..ea3b99a44e7 --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-blueprint-provisioning/_category_.json @@ -0,0 +1 @@ +{"label": "Azure Blueprint Provisioning", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Azure Blueprint Provisioning"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "4on0a5avqo"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-blueprint-provisioning/add-azure-blueprints-to-harness.md b/docs/first-gen/continuous-delivery/azure-deployments/azure-blueprint-provisioning/add-azure-blueprints-to-harness.md new file mode 100644 index 00000000000..4c8b8551cd8 --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-blueprint-provisioning/add-azure-blueprints-to-harness.md @@ -0,0 +1,129 @@ +--- +title: Add Azure Blueprints to Harness +description: Add your Azure Blueprint definitions to Harness using Harness Infrastructure Provisioners. +# sidebar_position: 2 +helpdocs_topic_id: u4dej7jsix +helpdocs_category_id: 4on0a5avqo +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes how to add your Azure Blueprint definitions to Harness using Harness Infrastructure Provisioners. This involves providing the Git repo location of the Blueprint definitions and setting its scope in Harness. + +Once you've added the definition as an Infrastructure Provisioner, you can use the Infrastructure Provisioner in a Harness Workflow to provision the Azure resources. Harness will run your Blueprint definition and create its Azure resources. 
+
+In this topic:
+
+* [Before You Begin](add-azure-blueprints-to-harness.md#before-you-begin)
+* [Limitations](add-azure-blueprints-to-harness.md#limitations)
+* [Visual Summary](add-azure-blueprints-to-harness.md#visual-summary)
+* [Supported Platforms and Technologies](add-azure-blueprints-to-harness.md#supported-platforms-and-technologies)
+* [Step 1: Add Harness Delegate](add-azure-blueprints-to-harness.md#step-1-add-harness-delegate)
+* [Step 2: Add Source Repo Provider](add-azure-blueprints-to-harness.md#step-2-add-source-repo-provider)
+* [Step 3: Add the Infrastructure Provisioner](add-azure-blueprints-to-harness.md#step-3-add-the-infrastructure-provisioner)
+* [Next Step](add-azure-blueprints-to-harness.md#next-step)
+* [Configure As Code](add-azure-blueprints-to-harness.md#configure-as-code)
+
+### Before You Begin
+
+* Get an overview of provisioning with ARM in [Azure ARM and Blueprint Provisioning with Harness](../../concepts-cd/deployment-types/azure-arm-and-blueprint-provision-with-harness.md).
+* [Set Up Your Harness Account for Azure Blueprint](set-up-harness-for-azure-blueprint.md)
+
+### Limitations
+
+* See [Azure Blueprint How-tos](azure-blueprint-how-tos.md).
+* Unlike other provisioners supported by Harness, Azure Blueprint definitions cannot be added to Infrastructure Definitions. Blueprint definitions cannot be used as deployment targets. You can simply use them in a Workflow to provision resources.
+You can use ARM templates to provision deployment target environments. See [Provision and Deploy to ARM Provisioned Infrastructure](../azure-arm/target-azure-arm-or-blueprint-provisioned-infrastructure.md).
+
+### Visual Summary
+
+The following video shows you how to add an ARM template from [Azure's ARM templates GitHub account](https://github.com/Azure/azure-quickstart-templates) to Harness as a Harness Infrastructure Provisioner. 
+
+### Supported Platforms and Technologies
+
+See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms).
+
+### Step 1: Add Harness Delegate
+
+Make sure you have set up a Harness Delegate as described in [Set Up Your Harness Account for Azure Blueprint](set-up-harness-for-azure-blueprint.md).
+
+The Delegate must be able to connect to your Git provider to add the Blueprint folder, and to pull its package at deployment runtime.
+
+### Step 2: Add Source Repo Provider
+
+Harness Source Repo Providers connect your Harness account with your Git platform accounts.
+
+For Azure Blueprint, you add a Harness Source Repo Provider and connect it to the Git repo for your definitions.
+
+For steps on setting up a Source Repo Provider, see [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers).
+
+Next, you use this Source Repo Provider as the source of your Harness Infrastructure Provisioner.
+
+### Step 3: Add the Infrastructure Provisioner
+
+In your Harness Application, click **Infrastructure Provisioners**.
+
+Click **Add Infrastructure Provisioner**, and then click **ARM Template**. Blueprints are also managed under this Infrastructure Provisioner type.
+
+In **Azure Resource Type**, select **Blueprint**.
+
+In **Scope**, enter the scope for the definition. The `targetScope` in the blueprint identifies its scope.
+
+Your assign.json file must have a `scope` property (`properties.scope`). The `scope` is the target subscription of the Blueprint assignment (format: `/subscriptions/{subscriptionId}`). For management group level assignments, the property is required. 
+
+For example:
+
+
+```
+{
+  "identity": {
+    "type": "SystemAssigned"
+  },
+  "location": "westus2",
+  "properties": {
+    "blueprintId": "/providers/Microsoft.Management/managementGroups/HarnessARMTest/providers/Microsoft.Blueprint/blueprints/101-boilerplate-mng/versions/v2",
+    "resourceGroups": {
+      "SingleRG": {
+        "name": "mng-001",
+        "location": "eastus"
+      }
+    },
+    "locks": {
+      "mode": "none"
+    },
+    "parameters": {
+      "principalIds": {
+        "value": "0000000-0000-0000-0000-0000000000"
+      },
+      "genericBlueprintParameter": {
+        "value": "test"
+      }
+    },
+    "scope": "/subscriptions/0000000-0000-0000-0000-0000000000"
+  }
+}
+```
+In **Source Type**, **Git Repository** is selected. It is currently the only option.
+
+If you select **Git Repository**, select the Harness Source Repo Provider you set up to connect Harness to your Git repo.
+
+In **Commit**, enter the branch name or commit ID for the repo.
+
+In **File Path**, enter the path to the definition folder in the Git repo. You don't need to enter the repo name, as that is set up in the Harness Source Repo Provider.
+
+Click **Submit**.
+
+The Infrastructure Provisioner is added.
+
+Now you can use the Infrastructure Provisioner in a Harness Workflow to provision the Azure resources. Harness will run your Blueprint definition and create its Azure resources.
+
+### Next Step
+
+* [Provision using Azure Blueprint Definitions](provision-using-azure-blueprint-definitions.md)
+
+### Configure As Code
+
+To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. 
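Because Harness reads the assign.json file at deployment runtime, a malformed `scope` only surfaces during the Workflow. You can sanity-check the file before pushing it to the repo. This is a minimal sketch, not a Harness feature; it assumes `jq` is installed, and the sample file stands in for your real assign.json.

```shell
#!/bin/sh
# Hypothetical pre-flight check (not part of Harness): verify assign.json sets
# properties.scope in the /subscriptions/{subscriptionId} form required for
# management group level assignments. The sample file stands in for your own.
cat > assign.json <<'EOF'
{
  "identity": { "type": "SystemAssigned" },
  "location": "westus2",
  "properties": {
    "scope": "/subscriptions/0000000-0000-0000-0000-0000000000"
  }
}
EOF

scope=$(jq -r '.properties.scope // empty' assign.json)
case "$scope" in
  /subscriptions/*) echo "OK: properties.scope is $scope" ;;
  *) echo "ERROR: properties.scope must be /subscriptions/{subscriptionId}" >&2; exit 1 ;;
esac
```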
+
diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-blueprint-provisioning/azure-blueprint-how-tos.md b/docs/first-gen/continuous-delivery/azure-deployments/azure-blueprint-provisioning/azure-blueprint-how-tos.md
new file mode 100644
index 00000000000..ee63a4619c7
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-blueprint-provisioning/azure-blueprint-how-tos.md
@@ -0,0 +1,100 @@
+---
+title: Azure Blueprint How-tos
+description: Harness has first-class support for Azure Blueprints as an infrastructure provisioner. Harness takes a Blueprint definition, publishes it using the version specified in assign.json file, and creates…
+sidebar_position: 100
+helpdocs_topic_id: sq1xy00oaa
+helpdocs_category_id: 4on0a5avqo
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness has first-class support for Azure Blueprints as an infrastructure provisioner.
+
+Harness takes a Blueprint definition, publishes it using the version specified in the assign.json file, and creates the assignment.
+
+See the following Blueprint How-tos:
+
+* [Set Up Your Harness Account for Azure Blueprint](set-up-harness-for-azure-blueprint.md)
+* [Add Azure Blueprints to Harness](add-azure-blueprints-to-harness.md)
+* [Provision using Azure Blueprint Definitions](provision-using-azure-blueprint-definitions.md)
+
+For a conceptual overview of provisioning with ARM and Blueprints, including videos, see [Azure ARM and Blueprint Provisioning with Harness](../../concepts-cd/deployment-types/azure-arm-and-blueprint-provision-with-harness.md).
+
+### Limitations
+
+* Rollback is not supported for Azure Blueprints.
+* All information about the assignment must be provided in the assign.json file.
+* "Unassignment" and updating existing Blueprint assignments are not supported.
+* Azure Blueprint outputs cannot be referenced as variables in Harness. 
This limitation is the result of there being no outputs available when the Blueprint assignment is done; Azure does not support them. Consequently, Azure Blueprint cannot be used for mapping dynamically provisioned infrastructure while creating Harness Infrastructure Definitions.
+* Before reviewing JSON requirements and limitations, let's review the key parameters:
+	+ `identity`: Blueprint uses a managed identity to deploy the artifacts specified by the Blueprint definition. It can use the `SystemAssigned` or `UserAssigned` value.
+	+ `location`: Blueprint uses a managed identity to deploy resources, which requires a location.
+	+ `blueprintId`: for example: `/providers/Microsoft.Management/managementGroups/HarnessARMTest/providers/Microsoft.Blueprint/blueprints/101-boilerplate-mng/versions/v1`. Here is a description of each part:
+	+ `/providers/Microsoft.Management/managementGroups/HarnessARMTest`: the Blueprint definition scope, including the management group name, `HarnessARMTest`.
+	+ `/providers/Microsoft.Blueprint/blueprints/`: default Blueprint provider information.
+	+ `101-boilerplate-mng`: Blueprint definition name.
+	+ `v1`: new Blueprint version number.
+	+ `locks`: determines whether users, groups, and service principals with permissions can modify and delete resources deployed by the Blueprint service principal.
+	+ `parameters`: list of dynamic parameters that are applied to resources during deployment. See [Creating dynamic blueprints through parameters](https://docs.microsoft.com/en-us/azure/governance/blueprints/concepts/parameters) from Azure.
+	+ `scope`: the subscription where the Blueprint definition will be assigned.
+* Your assign.json file must have a `scope` property (`properties.scope`) for management group level assignments. The `scope` is the target subscription of the Blueprint assignment (format: `/subscriptions/{subscriptionId}`). For management group level assignments, the property is required. 
+For example:
+```
+{
+  "identity": {
+    "type": "SystemAssigned"
+  },
+  "location": "westus2",
+  "properties": {
+    "blueprintId": "/providers/Microsoft.Management/managementGroups/HarnessARMTest/providers/Microsoft.Blueprint/blueprints/101-boilerplate-mng/versions/v2",
+    "resourceGroups": {
+      "SingleRG": {
+        "name": "mng-001",
+        "location": "eastus"
+      }
+    },
+    "locks": {
+      "mode": "none"
+    },
+    "parameters": {
+      "principalIds": {
+        "value": "0000000-0000-0000-0000-0000000000"
+      },
+      "genericBlueprintParameter": {
+        "value": "test"
+      }
+    },
+    "scope": "/subscriptions/0000000-0000-0000-0000-0000000000"
+  }
+}
+```
+
+The `scope` property is required for deployment using management groups. If the Blueprint definition begins with `/providers/Microsoft.Management/managementGroups/{managementGroupName}`, the definition will be created at the management group scope, but the assignment is done at the subscription in `scope`.
+* The subscription provided in the `scope` property must be a descendant of the management group provided in the `blueprintId`.
+* `blueprintId` must follow the pattern `/{resourceScope}/providers/Microsoft.Blueprint/blueprints/{blueprintName}/versions/{versionId}`. If not, an exception is thrown.
+* If the Blueprint definition is created and published at the management group scope, Harness only supports assignment to one subscription during deployment. Harness doesn't support assignment to multiple subscriptions. You can only state one subscription as the value of the `scope` property in the assign.json file.
+* The assignment name is generated automatically.
+* The artifact name is taken from the `name` property in artifact.json. If `name` doesn’t exist in artifact.json, the file name is used. The artifact name is important because of its use with the `dependsOn` property. See [Understand the deployment sequence in Azure Blueprints](https://docs.microsoft.com/en-us/azure/governance/blueprints/concepts/sequencing-order) from Azure. 
+* If the blueprint.json file contains the `name` property and that name is not the same as the `name` provided in the assign.json file, this error message is shown:
+```
+Not match blueprint name found in blueprint json file with properties.blueprintId property in assign json file.
+Found name in blueprint json: boilerplate-101, and properties.blueprintId: boilerplate-201"
+```
+* If `identity` has the `SystemAssigned` value, then the Azure service principal used for the Harness Azure Cloud Provider must have the **Owner** role in the subscription where the assignment will be created. If the service principal uses a user-assigned identity, then you are responsible for managing the rights and lifecycle of the user-managed identity.
+* During deployment, each Workflow checks whether the version number specified in the assign.json file already exists on Azure.
+If the version already exists, only a new assignment will be created.
+If the version does not exist, the deployment process (creating or updating the Blueprint definition and artifacts, publishing, and assignment) starts.
+
+### Azure Roles Required
+
+See **Azure Blueprint** in [Add Microsoft Azure Cloud Provider](https://docs.harness.io/article/4n3595l6in-add-microsoft-azure-cloud-provider).
+
+### Harness Permissions Required
+
+To set up a Harness Blueprint Provisioner, your Harness User account must belong to a User Group with the following Application Permissions:
+
+* **Permission Type:** `Provisioners`.
+* **Application:** one or more Applications.
+* **Filter:** `All Provisioners`.
+* **Action:** `Create, Read, Update, Delete`. 
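Several of the limitations above constrain the shape of assign.json values. The `blueprintId` pattern requirement, for instance, can be mirrored locally before deployment. This is a hypothetical helper, not part of Harness, assuming a POSIX shell with `grep -E`:

```shell
#!/bin/sh
# Hypothetical helper (not part of Harness): verify a blueprintId follows
# /{resourceScope}/providers/Microsoft.Blueprint/blueprints/{blueprintName}/versions/{versionId}
check_blueprint_id() {
  printf '%s\n' "$1" |
    grep -Eq '^/.+/providers/Microsoft\.Blueprint/blueprints/[^/]+/versions/[^/]+$'
}

id="/providers/Microsoft.Management/managementGroups/HarnessARMTest/providers/Microsoft.Blueprint/blueprints/101-boilerplate-mng/versions/v2"
if check_blueprint_id "$id"; then
  echo "blueprintId matches the required pattern"
else
  echo "blueprintId does not match the required pattern" >&2
fi
```

Running a check like this in CI catches a malformed `blueprintId` before Harness throws the exception at deployment time.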
+ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-blueprint-provisioning/provision-using-azure-blueprint-definitions.md b/docs/first-gen/continuous-delivery/azure-deployments/azure-blueprint-provisioning/provision-using-azure-blueprint-definitions.md new file mode 100644 index 00000000000..125f539a8b8 --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-blueprint-provisioning/provision-using-azure-blueprint-definitions.md @@ -0,0 +1,167 @@ +--- +title: Provision using Azure Blueprint Definitions +description: You can provision Azure resources using Blueprint definitions in your Harness Workflows. +# sidebar_position: 2 +helpdocs_topic_id: vhit72svmc +helpdocs_category_id: 4on0a5avqo +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can provision Azure resources using Blueprint definitions in your Harness Workflows. Harness can provision the resources by themselves or as part of a Workflow performing other deployment steps. + +Harness takes a Blueprint definition [you set up as a Harness Infrastructure Provisioner](add-azure-blueprints-to-harness.md), publishes it using the version specified in assign.json file and creates the assignment. + +### Before You Begin + +* [Set Up Your Harness Account for Azure Blueprint](set-up-harness-for-azure-blueprint.md) +* [Add Azure Blueprints to Harness](add-azure-blueprints-to-harness.md) +* For a conceptual overview of provisioning with ARM and Blueprints, see [Azure ARM and Blueprint Provisioning with Harness](../../concepts-cd/deployment-types/azure-arm-and-blueprint-provision-with-harness.md). + +### Visual Summary + +Here's a short video showing how to provision Azure infrastructure using Blueprint definitions and Harness: + + + + + +You can use Azure Blueprint definitions in Harness to provision any resources. + +1. **ARM Infrastructure Provisioner**: add your Azure Blueprint definition as a Harness Infrastructure Provisioner. +2. 
**Workflow Provisioner Step**: create a Canary Workflow or Multi-Service Workflow and add an **ARM/Blueprint Create Resource** step in its **Pre-deployment Steps** to provision the resources you need. You can use the Workflow to deploy anything else, or just omit any further phases and steps. +3. **Deploy:** the Workflow will provision the resource according to your Blueprint definitions. + +When you run the Workflow, it can provision the resources without deploying anything else. + +![](./static/provision-using-azure-blueprint-definitions-00.png) + +### Limitations + +* See [Azure Resource Management (ARM) How-tos](../azure-arm/azure-arm-and-blueprint-how-tos.md). +* Unlike other provisioners supported by Harness, Azure Blueprint definitions cannot be added to Infrastructure Definitions. Blueprint definitions cannot be used as deployment targets in Harness Workflows. You can simply use them in a Workflow to provision resources. +You can use ARM templates to provision deployment target environments. See [Provision and Deploy to ARM Provisioned Infrastructure](../azure-arm/target-azure-arm-or-blueprint-provisioned-infrastructure.md). + +### Step 1: Add the Infrastructure Provisioner + +A Harness Infrastructure Provisioner connects Harness to the Git repo where your Blueprint definition is located. + +To set up a Harness Infrastructure Provisioner for a Blueprint, follow the steps in [Add Azure Blueprints to Harness](add-azure-blueprints-to-harness.md). + +### Step 2: Add ARM/Blueprint Create Resource Step to Workflow + +Canary, Multi-Service, and Blue/Green Workflow types contain a pre-deployment section where you can provision infrastructure using your Harness Infrastructure Provisioner. + +Let's look at a Canary Workflow. + +In a Canary Workflow, in **Pre-deployment Steps**, click **Add Step**. + +Click **ARM/Blueprint Create Resource** and then click **Next**. + +In **Overview**, in **Provisioner**, select the Infrastructure Provisioner for your Blueprint. 
+
+In **Azure Cloud Provider**, enter the Cloud Provider for Harness to use when connecting to Azure and provisioning with the Blueprint.
+
+The Azure service account used with the Cloud Provider must have the Azure permissions needed to provision the resources in your Blueprint. See **Azure Blueprint** in [Add Microsoft Azure Cloud Provider](https://docs.harness.io/article/4n3595l6in-add-microsoft-azure-cloud-provider).
+
+In **Timeout**, enter at least 20m. Provisioning Azure resources can take time.
+
+Click **Submit**.
+
+### Step 3: Deploy the Workflow
+
+Here's an example of a Canary Workflow deployment that uses the ARM/Blueprint Create Resource step:
+
+![](./static/provision-using-azure-blueprint-definitions-01.png)In the **ARM/Blueprint Create Resource** step's **Execute Blueprint Deployment** section, you can see the Blueprint assignment created:
+
+
+```
+Starting Blueprint deployment
+- Blueprint Name: [101-boilerplate-mng]
+- Assignment Name: [Assignment-101-boilerplate-mng-1615468935700]
+- Version ID: [v2]
+
+Start getting exiting blueprint definition at scope
+- Scope: [/providers/Microsoft.Management/managementGroups/HarnessARMTest]
+- Blueprint Name: [101-boilerplate-mng]
+Found blueprint definition at requested scope
+Start getting already published blueprint with version - [v2]
+Found already published blueprint with display name - [null], continue assignment
+Start granting the rights to Azure Blueprints service principal
+Assignment is using system-assigned managed identity. 
Owner rights need to be assign to Azure Blueprints service principal +Start getting Azure Blueprints Service Principal details at scope - [/subscriptions/0000000-0000-0000-0000-0000000000] +Azure Blueprints Service Principal details successfully obtained - Azure Blueprints SP Object ID: [992cad90-3eb5-4d04-9ac2-3a73d16efd65] +Start creating role assignment for Azure Blueprints Service Principal on subscription +- Role Assignment Name: [333513df-e63d-4dc2-9894-29d3bb2a73be] +- Built In Role: [Owner] +- Azure Blueprints SP Object ID: [992cad90-3eb5-4d04-9ac2-3a73d16efd65] +- Subscription Id: [0000000-0000-0000-0000-0000000000] +Role assignment successfully created +- Principal ID: [992cad90-3eb5-4d04-9ac2-3a73d16efd65] +- Role Definition ID: [/subscriptions/0000000-0000-0000-0000-0000000000/providers/Microsoft.Authorization/roleDefinitions/8e3af657-a8ff-443c-a75c-2fe8c4bcb635] +- Scope: [/subscriptions/0000000-0000-0000-0000-0000000000] + +Start creating assignment +- Scope: [/subscriptions/0000000-0000-0000-0000-0000000000] +- Assignment Name: [Assignment-101-boilerplate-mng-1615468935700] +Blueprint assignment request sent successfully + +Blueprint is assigned successfully. 
+``` +In the **Blueprint Deployment Steady state** section you can see the deployment reach steady state: + + +``` +Deployment Status for - [Assignment-101-boilerplate-mng-1615468935700] is [creating] +Deployment Status for - [Assignment-101-boilerplate-mng-1615468935700] is [creating] +Deployment Status for - [Assignment-101-boilerplate-mng-1615468935700] is [waiting] +Deployment Status for - [Assignment-101-boilerplate-mng-1615468935700] is [waiting] +Deployment Status for - [Assignment-101-boilerplate-mng-1615468935700] is [waiting] +Deployment Status for - [Assignment-101-boilerplate-mng-1615468935700] is [deploying] +Deployment Status for - [Assignment-101-boilerplate-mng-1615468935700] is [succeeded] + +Deployment Jobs: + +- Job Id: [DeploymentJob:b71b6336:2D49f4:2D42e9:2Db4ae:2Db2a25dc6e991] +- Job Kind: [system] +- Job State: [succeeded] +- Job Created Resource IDs: [/subscriptions/0000000-0000-0000-0000-0000000000/providers/Microsoft.Authorization/roleAssignments/3ce1a35a-9c05-4ad2-c74a-a076e4fbd79a] +- Job Result Error: [] + +- Job Id: [DeploymentJob:32907ca6:2D7416:2D454e:2Db6ea:2Dd42c1c68290b] +- Job Kind: [azureResource] +- Job State: [succeeded] +- Job Created Resource IDs: [/subscriptions/0000000-0000-0000-0000-0000000000/providers/Microsoft.Authorization/roleAssignments/9a0111c8-f675-0c97-6805-4b43332bd0ae] +- Job Result Error: [] + +- Job Id: [DeploymentJob:ea8b2d3e:2Df7ce:2D4960:2D913f:2D67015c335de9] +- Job Kind: [azureResource] +- Job State: [succeeded] +- Job Created Resource IDs: [/subscriptions/0000000-0000-0000-0000-0000000000/resourceGroups/mng-001] +- Job Result Error: [] + +- Job Id: [DeploymentJob:e0827aea:2Df93f:2D4763:2D90a2:2D0ff88e8329cd] +- Job Kind: [azureResource] +- Job State: [succeeded] +- Job Created Resource IDs: [/subscriptions/0000000-0000-0000-0000-0000000000/resourcegroups/mng-001/providers/Microsoft.Authorization/policyAssignments/cc9921013c83e1da9876c1d0ca898a33a4ef0a94fba55a900e6ee41da3373387] +- Job Result Error: [] + 
+- Job Id: [DeploymentJob:4bfd4524:2Dc4c4:2D45cb:2D9d28:2D37e0b6552ecf] +- Job Kind: [azureResource] +- Job State: [succeeded] +- Job Created Resource IDs: [] +- Job Result Error: [] + +- Job Id: [DeploymentJob:3b954a5f:2D2bb6:2D40ee:2D9939:2D4b1bf8f4b1b1] +- Job Kind: [system] +- Job State: [succeeded] +- Job Created Resource IDs: [] +- Job Result Error: [] + +Blueprint Deployment - [Assignment-101-boilerplate-mng-1615468935700] completed successfully +``` +Now you've provisioned the Azure infrastructure and deployed to it using a single Workflow. + +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the YAML editor button. + diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-blueprint-provisioning/set-up-harness-for-azure-blueprint.md b/docs/first-gen/continuous-delivery/azure-deployments/azure-blueprint-provisioning/set-up-harness-for-azure-blueprint.md new file mode 100644 index 00000000000..c2ee85caf38 --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-blueprint-provisioning/set-up-harness-for-azure-blueprint.md @@ -0,0 +1,62 @@ +--- +title: Set Up Your Harness Account for Azure Blueprint +description: The first step in integrating your Blueprint definitions into Harness is setting up the necessary Harness account components -- Delegates, Cloud Providers, and Source Repo Providers. This topic describ… +sidebar_position: 200 +helpdocs_topic_id: 0qc8f6w5qn +helpdocs_category_id: 4on0a5avqo +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The first step in integrating your Blueprint definitions into Harness is setting up the necessary Harness account components: Delegates, Cloud Providers, and Source Repo Providers. + +This topic describes how to set up these components for Blueprint. + +Once you set your account for Blueprint, you can begin integrating your Blueprint definitions. 
See [Add Azure Blueprints to Harness](add-azure-blueprints-to-harness.md).
+
+
+### Before You Begin
+
+* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts)
+* Get an overview of provisioning with Blueprint in [Azure ARM and Blueprint Provisioning with Harness](../../concepts-cd/deployment-types/azure-arm-and-blueprint-provision-with-harness.md).
+* [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation)
+* [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers)
+* [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers)
+
+### Limitations
+
+See [Azure Blueprint How-tos](azure-blueprint-how-tos.md).
+
+### Step 1: Install a Harness Delegate
+
+A Harness Delegate performs the Blueprint provisioning in your Blueprint definitions. When installing the Delegate for your Blueprint provisioning, consider the following:
+
+* The Delegate must also be able to connect to your definition repo. The Delegate will pull the definition at deployment runtime.
+* All Harness Delegate types can use Blueprint.
+
+To install a Delegate, follow the steps in [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation). Once you install the Delegate and it registers with Harness, you'll see it on the Harness Delegates page.
+
+### Step 2: Set Up the Azure Cloud Provider
+
+A Harness Azure Cloud Provider connects to your Azure subscription using your Client ID and Tenant ID.
+
+Follow the steps in [Add Microsoft Azure Cloud Provider](https://docs.harness.io/article/4n3595l6in-add-microsoft-azure-cloud-provider) to connect Harness to Azure.
+
+The Azure service account for the Azure Cloud Provider will need the roles required for the Azure resources you are provisioning. 
+ +See **Azure Blueprint** in [Add Microsoft Azure Cloud Provider](https://docs.harness.io/article/4n3595l6in-add-microsoft-azure-cloud-provider). + +### Step 3: Set Up Source Repo Provider + +Harness pulls Azure Blueprint definitions from a Git repo, such as GitHub. + +Add a Harness Source Repo Provider to connect Harness to the Git account/repo for your blueprints. + +If Source Repo Provider connects to a Git account, you can specify the repo in each Infrastructure Provisioner that uses that Source Repo Provider. + +See [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). + +### Next Step + +[Add Azure Blueprints to Harness](add-azure-blueprints-to-harness.md) + diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-blueprint-provisioning/static/provision-using-azure-blueprint-definitions-00.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-blueprint-provisioning/static/provision-using-azure-blueprint-definitions-00.png new file mode 100644 index 00000000000..fef3ba904ac Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-blueprint-provisioning/static/provision-using-azure-blueprint-definitions-00.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-blueprint-provisioning/static/provision-using-azure-blueprint-definitions-01.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-blueprint-provisioning/static/provision-using-azure-blueprint-definitions-01.png new file mode 100644 index 00000000000..c36150ff64b Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-blueprint-provisioning/static/provision-using-azure-blueprint-definitions-01.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/_category_.json b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/_category_.json new file mode 100644 index 
00000000000..d78792ed281 --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/_category_.json @@ -0,0 +1 @@ +{"label": "Azure Web Apps", "position": 20, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Azure Web Apps"}, "customProps": { "helpdocs_category_id": "mfdyp6tf0v"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/add-a-non-containerized-artifacts-for-azure-web-app-deployment.md b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/add-a-non-containerized-artifacts-for-azure-web-app-deployment.md new file mode 100644 index 00000000000..4e1a45e0059 --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/add-a-non-containerized-artifacts-for-azure-web-app-deployment.md @@ -0,0 +1,142 @@ +--- +title: Add Non-Containerized Artifacts for Azure Web App Deployment +description: Add non-containerized artifacts for Azure Web App deployments. +# sidebar_position: 2 +helpdocs_topic_id: rflkjqxod2 +helpdocs_category_id: mfdyp6tf0v +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the Feature Flag `AZURE_WEBAPP`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Setting up a Harness Azure Web App deployment is a simple process. You add your artifact's repo and settings to a Harness Service, and then add any of the Web App's application settings and connection strings. + +This repo and these settings are used when Harness deploys your Web App. + +This topic covers adding a non-containerized artifact. For steps on adding a Docker image for Web App Deployment, see [Add Your Docker Image for Azure Web App Deployment](add-your-docker-image-for-azure-web-app-deployment.md). 
 + +### Before You Begin + +* [Azure Web App Deployments Overview](azure-web-app-deployments-overview.md) +* Make sure that you have connected Harness to your Azure subscription as described in [Connect to Azure and Artifact Repo for Your Web App Deployments](connect-to-azure-for-web-app-deployments.md). + +### Supported Platforms and Technologies + +See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). + +### Limitations + +The Harness Azure Web Application Service type supports the following repos: + +* **Docker Registry:** see [Add Docker Registry Artifact Servers](https://docs.harness.io/article/tdj2ghkqb0-add-docker-registry-artifact-servers). +* **Artifactory:** see [Add Artifactory Servers](https://docs.harness.io/article/nj3p1t7v3x-add-artifactory-servers). +* **Amazon S3:** see [Add Amazon Web Services (AWS) Cloud Provider](https://docs.harness.io/article/wt1gnigme7-add-amazon-web-services-cloud-provider). +* **Jenkins:** see [Add Jenkins Artifact Servers](https://docs.harness.io/article/qa7lewndxq-add-jenkins-artifact-servers). +* **Azure Artifact:** see [Add an Azure DevOps Artifact Source](https://docs.harness.io/article/rbfjmko1og-add-an-azure-dev-ops-artifact-source). + + You can use Maven and NuGet. If you choose the Maven package type, you can also use ZIP or WAR. If you use ZIP or WAR, then select ZIP or WAR as the type in your Harness Service Artifact Type. + +Harness supports JAR files from the following repos: + +* Artifactory +* Amazon S3 +* Azure Artifact + +Harness zips the JAR, deploys it, unzips it, and then installs the app. + +### Step 1: Create the Harness Service + +The Harness Service represents your Azure Web App. + +You identify the artifact for the app, configuration settings, and any secrets and configuration variables. + +In your Harness Application, click **Services**. + +Click **Add Service**. The Service settings appear. + +Enter a name for your Service. 
Typically, this is the same name as the Web App. + +In **Deployment Type**, select **Azure Web Application**. + +In **Artifact Type**, select a non-containerized type. + +![](./static/add-a-non-containerized-artifacts-for-azure-web-app-deployment-24.png) + +Click **Submit**. The Service is created. + +### Step 2: Add the Artifact + +You will add the same artifact you use in your Web App. + +Ensure you have set up a Harness Artifact Server or Cloud Provider (for Azure Container Registry or AWS S3) that connects to the image's repo. See [Connect to Azure and Artifact Repo for Your Web App Deployments](connect-to-azure-for-web-app-deployments.md). + +In the Harness Service, click **Add Artifact Source**. + +Select the Artifact Server type. + +Fill out the Artifact Source settings. + +For **Jenkins**, you will select the job and artifact. Harness will run the job and obtain the artifact metadata needed to pull the artifact at deployment runtime. For details on configuring the supported Artifact Source types, see [Service Types and Artifact Sources](https://docs.harness.io/article/qluiky79j8-service-types-and-artifact-sources). + +Here's an example of an Artifactory Artifact Source used to pull a WAR file: + +![](./static/add-a-non-containerized-artifacts-for-azure-web-app-deployment-25.png) + +When you are done, click **Submit**. + +Next, click **Artifact History** to see the artifacts and builds Harness pulls from the repo. + +### Option: Startup Script + +You can use **Script** to add a startup script for your app. + +See [What are the expected values for the Startup File section when I configure the runtime stack?](https://docs.microsoft.com/en-us/azure/app-service/faq-app-service-linux#what-are-the-expected-values-for-the-startup-file-section-when-i-configure-the-runtime-stack-) from Azure. + +### Option: App Service Configuration + +In Azure App Service, app settings are variables passed as environment variables to the application code. 
 + +See [Configure an App Service app in the Azure portal](https://docs.microsoft.com/en-us/azure/app-service/configure-common) from Azure. + +You can set these using the Azure CLI: + + +``` +az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings DB_HOST="myownserver.mysql.database.azure.com" +``` +Or via the portal: + +![](./static/add-a-non-containerized-artifacts-for-azure-web-app-deployment-26.png) + +You can also set **Application settings** and **Connection strings** in the Harness Service. + +Here's an example of setting **Application settings** in the Harness Service: + +![](./static/add-a-non-containerized-artifacts-for-azure-web-app-deployment-27.png) + +This is the same as setting them in the Azure portal, in **Configuration**, **Application Settings**, **Advanced edit**. + +#### Important Note + +* If you add App Service Configuration settings in the Harness Service, you must include a **name** (`"name":`), and the name must be unique. This is the same requirement in Azure App Services. + +#### Using Secrets and Variables Settings + +You can use Harness secrets and Service or Workflow variables in the **Application settings** and **Connection strings** in the Harness Service. + +These settings use JSON, so ensure that you use quotes around the variable or secret reference: + + +``` + { + "name": "PASSWORD", + "value": "${secrets.getValue('secret_key')}", + "slotSetting": false + }, +``` +### Next Step + +* [Define Your Azure Web App Infrastructure](define-your-azure-web-app-infrastructure.md) + +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the YAML editor button. 
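For reference, **Connection strings** in the Azure portal's advanced edit use a JSON shape similar to the application settings shown earlier, with an additional `type` field. The name, value, and `SQLAzure` type below are hypothetical examples, not values from this deployment:

```
  {
    "name": "MyDbConnection",
    "value": "Server=tcp:<server-name>.database.windows.net;Database=<db-name>;",
    "type": "SQLAzure",
    "slotSetting": false
  },
```

As with application settings, each entry needs a unique `name`, and you can reference Harness secrets in the `value` field.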
 + diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/add-your-docker-image-for-azure-web-app-deployment.md b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/add-your-docker-image-for-azure-web-app-deployment.md new file mode 100644 index 00000000000..f8bd74b5b8e --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/add-your-docker-image-for-azure-web-app-deployment.md @@ -0,0 +1,147 @@ +--- +title: Add Your Docker Image for Azure Web App Deployment +description: Add Docker Images for Azure Web App deployments. +# sidebar_position: 2 +helpdocs_topic_id: 8s766bhiec +helpdocs_category_id: mfdyp6tf0v +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the Feature Flag `AZURE_WEBAPP`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Setting up a Harness Azure Web App deployment is a simple process. You add your Web App's Docker repo and image settings to a Harness Service, and then add any of the Web App's application settings and connection strings. + +This Docker image and these settings are used when Harness deploys your Web App. + +This topic covers adding a Docker image. For steps on adding a non-containerized artifact for Web App Deployment, see [Add Non-Containerized Artifacts for Azure Web App Deployment](add-a-non-containerized-artifacts-for-azure-web-app-deployment.md). + +### Before You Begin + +* [Azure Web App Deployments Overview](azure-web-app-deployments-overview.md) +* Make sure that you have connected Harness to your Azure subscription as described in [Connect to Azure and Artifact Repo for Your Web App Deployments](connect-to-azure-for-web-app-deployments.md). 
 + +### Visual Summary + + + + + +### Supported Platforms and Technologies + +See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). + +### Limitations + +The Harness Azure Web Application Service type supports the following repos: + +* **Docker Registry:** see [Add Docker Registry Artifact Servers](https://docs.harness.io/article/tdj2ghkqb0-add-docker-registry-artifact-servers). +* **Artifactory:** see [Add Artifactory Servers](https://docs.harness.io/article/nj3p1t7v3x-add-artifactory-servers). +* **Azure Container Registry:** see [Add Microsoft Azure Cloud Provider](https://docs.harness.io/article/4n3595l6in-add-microsoft-azure-cloud-provider). +* **Amazon S3:** see [Add Amazon Web Services (AWS) Cloud Provider](https://docs.harness.io/article/wt1gnigme7-add-amazon-web-services-cloud-provider). +* **Jenkins:** see [Add Jenkins Artifact Servers](https://docs.harness.io/article/qa7lewndxq-add-jenkins-artifact-servers). + +### Step 1: Create the Harness Service + +The Harness Service represents your Azure Web App. + +You identify the Docker image artifact for the app, configuration settings, and any secrets and configuration variables. + +In your Harness Application, click **Services**. + +Click **Add Service**. The Service settings appear. + +Enter a name for your Service. Typically, this is the same name as the Web App. + +In **Deployment Type**, select **Azure Web Application**. + +Click **Submit**. The Service is created. + +### Step 2: Add the Docker Image Artifact + +You will add the same Docker image you use in your Web App. + +Ensure you have set up a Harness Artifact Server or Azure Cloud Provider that connects to the image's repo. See [Connect to Azure and Artifact Repo for Your Web App Deployments](connect-to-azure-for-web-app-deployments.md). + +In the Harness Service, click **Add Artifact Source**. + +Select the Artifact Server type. + +Fill out the Artifact Source settings. 
 + +For details on configuring the supported Artifact Source types, see [Add a Docker Image](https://docs.harness.io/article/gxv9gj6khz-add-a-docker-image-service). + +The settings for the Harness Artifact Server and Artifact Source mirror the container settings in your Azure Web App. + +Here's the Harness [Docker Registry Artifact Server](https://docs.harness.io/article/tdj2ghkqb0-add-docker-registry-artifact-servers): + +![](./static/add-your-docker-image-for-azure-web-app-deployment-12.png) + +The above example uses a public repo, and it requires no username or password. + +In the Harness Service, the Artifact Source uses this Artifact Server and points to the Docker Image Name: + +![](./static/add-your-docker-image-for-azure-web-app-deployment-13.png) + +The above example uses a [publicly available Docker image from Harness](https://hub.docker.com/r/harness/todolist-sample/tags?page=1&ordering=last_updated). You might want to use that the first time you set up an Azure Web App deployment. When you are done, click **Submit**. + +Next, click **Artifact History** to see the artifacts and builds Harness pulls from the repo. + +### Option: Startup Script + +You can use **Script** to add a startup script for your app. + +See [What are the expected values for the Startup File section when I configure the runtime stack?](https://docs.microsoft.com/en-us/azure/app-service/faq-app-service-linux#what-are-the-expected-values-for-the-startup-file-section-when-i-configure-the-runtime-stack-) from Azure. + +### Option: App Service Configuration + +In Azure App Service, app settings are variables passed as environment variables to the application code. + +See [Configure an App Service app in the Azure portal](https://docs.microsoft.com/en-us/azure/app-service/configure-common) from Azure. 
 + +You can set these using the Azure CLI: + + +``` +az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings DB_HOST="myownserver.mysql.database.azure.com" +``` +Or via the portal: + +![](./static/add-your-docker-image-for-azure-web-app-deployment-14.png) + +You can also set **Application settings** and **Connection strings** in the Harness Service. + +Here's an example of setting **Application settings** in the Harness Service: + +![](./static/add-your-docker-image-for-azure-web-app-deployment-15.png) + +This is the same as setting them in the Azure portal **Advanced edit**. + +![](./static/add-your-docker-image-for-azure-web-app-deployment-16.png) + +#### Important Notes + +* If you add App Service Configuration settings in the Harness Service, you must include a **name** (`"name":`), and the name must be unique. This is the same requirement in Azure App Services. +* Do not set Docker settings in the Harness Service **App Service Configuration**. Harness will override these using the Docker settings in the Harness Artifact Server and Artifact Source. + +#### Using Secrets and Variables Settings + +You can use Harness secrets and Service or Workflow variables in the **Application settings** and **Connection strings** in the Harness Service. + +These settings use JSON, so ensure that you use quotes around the variable or secret reference: + + +``` + { + "name": "PASSWORD", + "value": "${secrets.getValue('secret_key')}", + "slotSetting": false + }, +``` +### Next Step + +* [Define Your Azure Web App Infrastructure](define-your-azure-web-app-infrastructure.md) + +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the YAML editor button. 
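After a deployment, one way to double-check which container settings actually landed on a slot is the Azure CLI; the resource group, app, and `stage` slot names below are placeholders for your own values:

```
az webapp config container show \
  --resource-group <group-name> \
  --name <app-name> \
  --slot stage
```

The output includes settings such as `DOCKER_REGISTRY_SERVER_URL` and the image name, which you can compare against the Artifact Server and Artifact Source in your Harness Service.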
 + diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/azure-web-app-deployment-rollback.md b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/azure-web-app-deployment-rollback.md new file mode 100644 index 00000000000..ea0d24368c9 --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/azure-web-app-deployment-rollback.md @@ -0,0 +1,111 @@ +--- +title: Azure Web App Deployment Rollback +description: How Harness rolls back failed Azure Web App deployments. +# sidebar_position: 2 +helpdocs_topic_id: b922byhcn4 +helpdocs_category_id: mfdyp6tf0v +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the Feature Flag `AZURE_WEBAPP`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Harness performs rollback on failed [Azure Web App deployments](azure-web-app-deployments-overview.md). + +### Limitations + +#### Rollback Limitations for both Azure Container and Non-Containerized Rollbacks + +For non-containerized artifact deployments, see [Add Non-Containerized Artifacts for Azure Web App Deployment](add-a-non-containerized-artifacts-for-azure-web-app-deployment.md). + +* **Harness Post-Deployment Rollback:** to see if Post-Production Rollback is supported for Azure Web App deployments, see [Rollback Production Deployments](https://docs.harness.io/article/2f36rsbrve-post-deployment-rollback). +* Rollback only restores the state of the **stage** slot. If a step that follows the **Swap Slot** step fails, such as a failed Approval or Shell Script step, Harness only rolls back the **stage** slot. The target slot is not changed. 
 + +#### Rollback Limitations for Non-Containerized Rollbacks + +Rollback for Non-Containerized artifact deployments is not supported for the first two deployments because the necessary artifact details are not available to perform a rollback. + +#### Streaming Logs Limitations for both Azure Container and Non-Containerized Deployments + +You might face timeout issues as a result of limitations with streaming Web App slot deployment logs. For example, you might see `java.net.SocketTimeoutException: timeout` or other socket errors from the Azure SDK client. + +Harness is working with the Azure team on a resolution (see [issue 27221](https://github.com/Azure/azure-sdk-for-java/issues/27221)). At this time, you can use a Harness [HTTP step](https://docs.harness.io/article/m8ksas9f71-using-the-http-command) to verify that the slot is up and ready. + +### Rollback Summary + +For [Azure Web App deployments](azure-web-app-deployments-overview.md), Harness saves the previous Docker or non-containerized app details that were running on the slot. + +If an Azure Web App deployment fails, Harness rolls back by redeploying the previous instance. + +### Slot Rollback + +Harness initially deploys to the source (staging) slot and then swaps slots with the target (production) slot. + +Because the source slot is where Harness deploys the new Web App version, Harness rolls back the app version in the source slot only. + +Harness does not roll back the app version in the target slot. + +### Traffic Rollback + +Harness returns all traffic to the previous, pre-deployment percentages. + +If the pre-deployment traffic was arranged with the source slot at 20% and the target slot at 80%, rollback will return network traffic to these percentages. + +### Rollback Example for Non-Containerized Rollbacks + +Here's an example of a rollback. 
+ +**Update Slot Configuration Settings**: + +![](./static/azure-web-app-deployment-rollback-10.png) + +**Deploy to Slot**: + +![](./static/azure-web-app-deployment-rollback-11.png) + +### Rollback Logs + +Here's the log activity from a rollback with the timestamps removed: + + +``` +Sending request for stopping deployment slot - [stage] +Operation - [Stop Slot] was success +Request sent successfully + +Start updating Container settings for slot - [stage] +Start cleaning existing container settings + +Current state for deployment slot is - [Stopped] +Deployment slot stopped successfully + +Start updating application configurations for slot - [stage] +Deployment slot configuration updated successfully + +Existing container settings deleted successfully +Start cleaning existing image settings + +Existing image settings deleted successfully +Start updating Container settings: +[[DOCKER_REGISTRY_SERVER_URL]] + +Container settings updated successfully +Start updating container image and tag: +[library/nginx:1.19-alpine-perl], web app hosting OS [LINUX] + +Image and tag updated successfully for slot [stage] +Deployment slot container settings updated successfully + +Sending request for starting deployment slot - [stage] +Operation - [Start Slot] was success +Request sent successfully + +Sending request to shift [0.00] traffic to deployment slot: [stage] + +Current state for deployment slot is - [Running] +Deployment slot started successfully + +Traffic percentage updated successfully + +The following task - [SLOT_ROLLBACK] completed successfully +``` +### See Also + +* [Resume Pipeline Deployments](../../concepts-cd/deployments-overview/resume-a-pipeline-deployment.md) + diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/azure-web-app-deployments-overview.md b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/azure-web-app-deployments-overview.md new file mode 100644 index 00000000000..ba50914a7b0 --- 
/dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/azure-web-app-deployments-overview.md @@ -0,0 +1,95 @@ +--- +title: Azure Web App Deployments Overview +description: Currently, this feature is behind the Feature Flag AZURE_WEBAPP. Contact Harness Support to enable the feature. Azure Web Apps use deployment slots to host different versions of your app. You can the… +# sidebar_position: 2 +helpdocs_topic_id: lluikqw7q7 +helpdocs_category_id: mfdyp6tf0v +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the Feature Flag `AZURE_WEBAPP`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Azure Web Apps use deployment slots to host different versions of your app. You can then swap these deployment slots without causing any downtime for your users. + +Harness deploys your Azure Web Apps using their existing deployment slots. Using Harness, you can incrementally increase traffic to your source slot, and then perform a standard slot swap. + +For detailed instructions on deploying an Azure Web App using Harness, see the following how-tos. They are listed in the order they are commonly performed. 
 + +* [Connect to Azure for Web App Deployments](connect-to-azure-for-web-app-deployments.md) +* [Add Your Docker Image for Azure Web App Deployment](add-your-docker-image-for-azure-web-app-deployment.md) +* [Add Non-Containerized Artifacts for Azure Web App Deployment](add-a-non-containerized-artifacts-for-azure-web-app-deployment.md) +* [Define Your Azure Web App Infrastructure](define-your-azure-web-app-infrastructure.md) +* [Create an Azure Web App Blue/Green Deployment](create-an-azure-web-app-blue-green-deployment.md) +* [Create an Azure Web App Canary Deployment](create-an-azure-web-app-canary-deployment.md) + +Basic deployments are also supported. The following topic covers related deployment concepts: + +* [Azure Web App Deployment Rollback](azure-web-app-deployment-rollback.md) + +### Before You Begin + +Before learning about Harness Azure Web App deployments, you should have an understanding of [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts). + +### Limitations + +* Harness deploys Docker images and non-containerized artifacts for Azure Web Apps. To see what's supported, see the **Artifact Type** options when you create a Harness Service:![](./static/azure-web-app-deployments-overview-00.png) +* Harness uses the Azure SDK, among other methods, and the Azure SDK does not support authenticated proxies. Consequently, you cannot use Azure connections for artifacts, machine images, and so on, that require proxy authentication. This is a known Azure limitation with Java environment properties and their SDK, not a Harness limitation. + +#### Azure Limitations + +* App Service on Linux isn't supported on the [Shared](https://azure.microsoft.com/pricing/details/app-service/plans/) pricing tier. +* You can't mix Windows and Linux apps in the same App Service plan. +* Within the same resource group, you can't mix Windows and Linux apps in the same region. 
 + +See [Limitations](https://docs.microsoft.com/en-us/azure/app-service/overview#limitations) from Azure. + +### What Does Harness Need Before You Start? + +To deploy an Azure Web App using Harness, you only need the following: + +* **An existing Azure Web App using a Docker image or non-containerized artifact:** you can create one in minutes in the Azure portal. The **Basics** and **Docker** tabs in the Web App creation wizard cover the required settings. +* **A Docker image or non-containerized artifact:** this is the same image or artifact you used when you created the Azure Web App. +* Azure account connection information. +* **App Service Plan:** the name of the Azure App Service configured for your existing Web App.![](./static/azure-web-app-deployments-overview-01.png) +* **Two or more running deployment slots for production and staging:** the slots created for your existing Azure Web App:![](./static/azure-web-app-deployments-overview-02.png) + +### What Does Harness Deploy? + +Harness deploys the Docker image or non-containerized artifact you select for your Web App to a source deployment slot. + +You can then increase traffic to the source slot. Once you have determined that the new image and slot are working, you can swap the source and target slots. + +### What Operating Systems are Supported? + +Linux and Windows are both supported. + +Use a different resource group for each OS: Linux App Service Plans should not share a resource group with Windows App Service Plans. This is an Azure limitation. + +See [Limitations](https://docs.microsoft.com/en-us/azure/app-service/overview#limitations) from Azure. + +### What Does a Harness Web App Deployment Involve? + +The following table describes the major steps of a Harness Web App deployment: + + + +| **Step** | **Name** | **Links** | +| --- | --- | --- | +| 1 | Install a Harness Delegate that can connect to your target Azure region. 
| [Connect to Azure for Web App Deployments](connect-to-azure-for-web-app-deployments.md) | +| 2 | Add the Docker image or non-containerized artifact Harness will use for the Web App. | [Add Your Docker Image for Azure Web App Deployment](add-your-docker-image-for-azure-web-app-deployment.md)<br />[Add Non-Containerized Artifacts for Azure Web App Deployment](add-a-non-containerized-artifacts-for-azure-web-app-deployment.md) | +| 3 | Select the Subscription and Resource Group to use when Harness deploys a new Web App version. | [Define Your Azure Web App Infrastructure](define-your-azure-web-app-infrastructure.md) | +| 4 | Create a Harness Workflow to perform the deployment. | [Create an Azure Web App Blue/Green Deployment](create-an-azure-web-app-blue-green-deployment.md)<br />[Create an Azure Web App Canary Deployment](create-an-azure-web-app-canary-deployment.md)<br />Basic deployments are also supported. | + +### Azure Web App In Harness Services Dashboard + +The Harness Services dashboard shows the new version of the Azure Web App that you deployed, regardless of what slot it is deployed in. + +![](./static/azure-web-app-deployments-overview-03.png) \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/connect-to-azure-for-web-app-deployments.md b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/connect-to-azure-for-web-app-deployments.md new file mode 100644 index 00000000000..fb7c6776643 --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/connect-to-azure-for-web-app-deployments.md @@ -0,0 +1,92 @@ +--- +title: Connect to Azure and Artifact Repo for Your Web App Deployments +description: Currently, this feature is behind the Feature Flag AZURE_WEBAPP. Contact Harness Support to enable the feature. You connect Harness to your Azure account to deploy Azure Web Apps. 
You make the connec… +# sidebar_position: 2 +helpdocs_topic_id: e9k7ngaqiu +helpdocs_category_id: mfdyp6tf0v +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the Feature Flag `AZURE_WEBAPP`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. You connect Harness to your Azure account to deploy Azure Web Apps. You make the connection using a Harness Cloud Provider. + +You deploy your Web App using a Docker image or non-containerized artifact. You connect to your image or artifact's repo using a Harness Artifact Server or Cloud Provider (for AWS S3 or Azure ACR). + +In this topic: + +* [Before You Begin](#before_you_begin) +* [Supported Platforms and Technologies](#supported_platforms_and_technologies) +* [Review: Azure Connection Options](#review_azure_connection_options) +* [Step 1: Install a Harness Delegate](#step_1_install_a_harness_delegate) +* [Step 2: Set Up the Azure Cloud Provider](#step_2_set_up_the_azure_cloud_provider) +* [Step 3: Set Up the Harness Artifact Server](#step_3_set_up_the_harness_artifact_server) +* [Next Steps](#next_steps) +* [See Also](#see_also) + +### Before You Begin + +* [Azure Web App Deployments Overview](azure-web-app-deployments-overview.md) +* [Harness Delegate Overview](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) +* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) + +### Supported Platforms and Technologies + +See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). + +### Review: Azure Connection Options + +As covered in [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts), you need to install a Harness Delegate in your infrastructure before setting up your Harness deployment. 
 + +There are several types of Delegates you can use for an Azure App Service deployment, described in [Delegate Installation Overview](https://docs.harness.io/article/igftn7rrtg-delegate-installation-overview). + +Shell Script, Docker, Kubernetes, and Helm Delegates are all options. + +### Step 1: Install a Harness Delegate + +Follow the installation steps for the Harness Delegate you want to install. See [Delegate Installation Overview](https://docs.harness.io/article/igftn7rrtg-delegate-installation-overview) for the available options. + +Make sure this Delegate is in, or can connect to, the resource group for your Azure Web App. + +### Step 2: Set Up the Azure Cloud Provider + +A Harness Azure Cloud Provider connects to your Azure subscription using your Client ID and Tenant ID. + +Follow the steps in [Add Microsoft Azure Cloud Provider](https://docs.harness.io/article/4n3595l6in-add-microsoft-azure-cloud-provider) to connect Harness to Azure. + +That's all the setup you need to connect Harness to your account and start your deployment. + +If you store the Docker image in Azure Container Registry, then you can use the Azure Cloud Provider you set up and skip the next step. + +### Step 3: Set Up the Harness Artifact Server + +If you store the Docker image in Azure Container Registry, then you can use the Azure Cloud Provider you set up and skip the Artifact Server setup. A Harness Azure Web App deployment uses a Docker image or non-containerized artifact. You connect Harness to the same repo you use in your Web App in Azure. You make this connection using a Harness Artifact Server or Cloud Provider. + +The Harness Azure Web Application Service type supports the following repos: + +* **Docker Registry:** see [Add Docker Registry Artifact Servers](https://docs.harness.io/article/tdj2ghkqb0-add-docker-registry-artifact-servers). +* **Artifactory:** see [Add Artifactory Servers](https://docs.harness.io/article/nj3p1t7v3x-add-artifactory-servers). 
 +* **Azure Container Registry:** see [Add Microsoft Azure Cloud Provider](https://docs.harness.io/article/4n3595l6in-add-microsoft-azure-cloud-provider). You can use the Azure Cloud Provider you set up in the previous step. +* **Amazon S3:** see [Add Amazon Web Services (AWS) Cloud Provider](https://docs.harness.io/article/wt1gnigme7-add-amazon-web-services-cloud-provider). +* **Jenkins:** see [Add Jenkins Artifact Servers](https://docs.harness.io/article/qa7lewndxq-add-jenkins-artifact-servers). + +For example, consider the Docker Hub settings in an Azure Web App. Here's the corresponding Harness [Docker Registry Artifact Server](https://docs.harness.io/article/tdj2ghkqb0-add-docker-registry-artifact-servers): + +![](./static/connect-to-azure-for-web-app-deployments-22.png) + +The above example uses a public repo, and it requires no username or password. + +Later, in the Harness Service, you'll add an Artifact Source that uses this Artifact Server and points to the Docker Image Name: + +![](./static/connect-to-azure-for-web-app-deployments-23.png) + +The above example uses a [publicly available Docker image from Harness](https://hub.docker.com/r/harness/todolist-sample/tags?page=1&ordering=last_updated). You might want to use that the first time you set up an Azure Web App deployment. + +### Next Steps + +* [Add Your Docker Image for Azure Web App Deployment](add-your-docker-image-for-azure-web-app-deployment.md) +* [Add Non-Containerized Artifacts for Azure Web App Deployment](add-a-non-containerized-artifacts-for-azure-web-app-deployment.md) + +### See Also + +* [Run a custom container in Azure](https://docs.microsoft.com/en-us/azure/app-service/quickstart-custom-container?pivots=container-linux) from Azure. 
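If the Azure Cloud Provider fails to authenticate, one way to sanity-check the Client ID, Tenant ID, and client secret outside of Harness is to log in with them directly using the Azure CLI. All values below are placeholders for your own credentials:

```
az login --service-principal \
  --username <client-id> \
  --password <client-secret> \
  --tenant <tenant-id>
```

If this login succeeds, the same values should work in the Cloud Provider settings; if it fails, the problem is with the service principal itself rather than with Harness.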
+
diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/create-an-azure-web-app-blue-green-deployment.md b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/create-an-azure-web-app-blue-green-deployment.md
new file mode 100644
index 00000000000..b2e1516507e
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/create-an-azure-web-app-blue-green-deployment.md
@@ -0,0 +1,182 @@
+---
+title: Create an Azure Web App Blue/Green Deployment
+description: Before You Begin. Visual Summary. Supported Platforms and Technologies. Review -- Slot Requirements. Step 1 -- Create the Blue/Green Workflow. Step 2 -- Slot Setup Step. Option -- Use Variable Expressions in…
+# sidebar_position: 2
+helpdocs_topic_id: qpfddekbax
+helpdocs_category_id: mfdyp6tf0v
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Currently, this feature is behind the Feature Flag `AZURE_WEBAPP`. Contact [Harness Support](https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=support@harness.io) to enable the feature.
+
+A Harness Azure Web App Blue/Green deployment swaps traffic from one deployment slot to another.
+
+If you are new to Azure Web App deployment slot swapping, see [What happens during a swap](https://docs.microsoft.com/en-us/azure/app-service/deploy-staging-slots#what-happens-during-a-swap) from Azure.
+
+If you want to shift traffic incrementally as part of the deployment, see [Create an Azure Web App Canary Deployment](create-an-azure-web-app-canary-deployment.md).
+
+### Before You Begin
+
+Make sure you have read the following:
+
+* [Azure Web App Deployments Overview](azure-web-app-deployments-overview.md)
+* Make sure that you have connected Harness to your Azure subscription as described in [Connect to Azure and Artifact Repo for Your Web App Deployments](connect-to-azure-for-web-app-deployments.md).
+* [Add Your Docker Image for Azure Web App Deployment](add-your-docker-image-for-azure-web-app-deployment.md) +* [Add Non-Containerized Artifacts for Azure Web App Deployment](add-a-non-containerized-artifacts-for-azure-web-app-deployment.md) +* [Define Your Azure Web App Infrastructure](define-your-azure-web-app-infrastructure.md) +* [Azure Web App Deployment Rollback](azure-web-app-deployment-rollback.md) + +### Visual Summary + +The following short video walks you through a Harness Azure Web App Blue/Green Workflow setup. + + + + + +### Supported Platforms and Technologies + +See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). + +### Step 1: Collect Azure Web App Information + +The Harness Workflow will use the existing Deployment slots from your Azure Web App. + +In the Azure portal, click your Web App, and then click **Deployment slots**. You can see the Deployment slots for your Web App. + +Click **Swap**. You can see the Source and Target slots. + +![](./static/create-an-azure-web-app-blue-green-deployment-06.png) + +You'll use these slot names in your Harness Workflow. + +Don't click the **Swap** button. Click **Close**. + +### Step 2: Create the Blue/Green Workflow + +In your Harness Application, click **Workflows**, and then click **Add Workflow**. + +Enter the following settings and click **Submit**. + +* **Name:** the name for this Workflow. +* **Workflow Type:** select **Blue/Green Deployment**. +* **Environment:** select the Harness Environment you added in [Define Your Azure Web App Infrastructure](define-your-azure-web-app-infrastructure.md). +* **Service:** select the Service you set up in [Add Your Docker Image for Azure Web App Deployment](add-your-docker-image-for-azure-web-app-deployment.md) or [Add Non-Containerized Artifacts for Azure Web App Deployment](add-a-non-containerized-artifacts-for-azure-web-app-deployment.md). 
+* **Infrastructure Definition:** select the Infrastructure Definition you created in [Define Your Azure Web App Infrastructure](define-your-azure-web-app-infrastructure.md).
+
+The new Workflow is created.
+
+### Step 3: Slot Deployment Step
+
+The Slot Deployment step is where you select the Web App and the source and target deployment slots for the deployment.
+
+Open the **Slot Deployment** step.
+
+Enter the following settings and click **Submit**.
+
+* **Name:** enter a name for the step.
+* **App Service:** select the Azure Web App for deployment. Harness pulls the list of Web Apps using the credentials of the Azure Cloud Provider you selected in the Workflow's Infrastructure Definition.
+* **Deployment Slot:** select the Source slot for the deployment. This slot is where Harness deploys the new Web App version. Make sure the slot you select is running. Harness shows all slots regardless of their status.
+* **Target Slot:** select the Target slot for the deployment. This slot is where Harness will swap the App content and configuration elements during the **Swap Slot** step. Make sure the slot you select is running. Harness shows all slots regardless of their status.
+* **Slot Steady State Timeout:** enter a minimum of **30m**. The slot deployment relies on Azure and can take time.
+
+When you're done, the step will look like this:
+
+![](./static/create-an-azure-web-app-blue-green-deployment-07.png)
+
+### Option: Use Variable Expressions in Settings
+
+You can use built-in Harness or custom Workflow variable expressions in the **Slot Deployment** step. See [Set Workflow Variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template).
+
+Variables are often used for templating the Workflow. See [Create Pipeline Templates](https://docs.harness.io/article/60j7391eyy-templatize-pipelines).
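+
+For example, if you templatize the **Deployment Slot** setting, the step can reference a Workflow variable instead of a fixed slot name (the variable name `deploymentSlot` here is hypothetical):
+
+
+```
+${workflow.variables.deploymentSlot}
+```
+At deployment time, Harness resolves the expression to the value supplied for that variable.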
+ +### Option: Add a Health Check after Slot Deployment + +In the Workflow **Verify Service** section, add a health check to ensure that the Docker container or non-containerized app is running correctly. + +The Slot Deployment step is considered successful once the slot is in a running state. + +A running state does not ensure that your new app is accessible. It can take some time for new content to become available on Azure. + +Also, the slot deployment might succeed but the Docker container or non-containerized artifact could be corrupted. + +A health check after Slot Deployment can ensure a successful deployment. + +### Step 4: Swap Slot Step + +The final step in the Workflow is Swap Slot. This step performs the Web App deployment slot swap. It's like doing a swap in the Azure portal or via the Azure CLI: + + +``` +az webapp deployment slot swap -n "web app name" -g "resource group name" -s "source slot name" --target-slot "target slot" +``` +Here's an example of the swap in the deployment logs: + + +``` +Sending request for swapping source slot: [stage] with target slot: [production] +Operation name : [Apply Web App Slot Configuration] +Status : [Succeeded] +Description : [Applied configuration settings from slot 'Production' to a site with deployment id 'anil-DemoWebApp' in slot 'stage'] +Operation name : [Microsoft.Web/sites/slots/StartSlotWarmup/action] +Status : [Succeeded] +Description : [Initial state for slot swap operation is (Source slot: 'stage', DeploymentId:'anil-DemoWebApp') (TargetSlot: 'production', DeploymentId:'anil-demowebapp__f3c3')'. Operation:93559f4a-d5c1-4ad5-beb5-a43bbc0578f2] +Operation name : [Microsoft.Web/sites/slots/StartSlotWarmup/action] +Status : [Succeeded] +Description : [Initial state for slot swap operation is (Source slot: 'stage', DeploymentId:'anil-DemoWebApp') (TargetSlot: 'production', DeploymentId:'anil-demowebapp__f3c3')'. 
Operation:93559f4a-d5c1-4ad5-beb5-a43bbc0578f2] +Operation name : [Microsoft.Web/sites/slots/StartSlotWarmup/action] +Status : [Succeeded] +Description : [Initial state for slot swap operation is (Source slot: 'stage', DeploymentId:'anil-DemoWebApp') (TargetSlot: 'production', DeploymentId:'anil-demowebapp__f3c3')'. Operation:93559f4a-d5c1-4ad5-beb5-a43bbc0578f2] +Operation name : [Microsoft.Web/sites/slots/StartSlotWarmup/action] +Status : [Succeeded] +Description : [Initial state for slot swap operation is (Source slot: 'stage', DeploymentId:'anil-DemoWebApp') (TargetSlot: 'production', DeploymentId:'anil-demowebapp__f3c3')'. Operation:93559f4a-d5c1-4ad5-beb5-a43bbc0578f2] +Operation name : [Microsoft.Web/sites/slots/EndSlotWarmup/action] +Status : [Succeeded] +Description : [Finished warming of site with deploymentId 'anil-DemoWebApp'] +Operation name : [Microsoft.Web/sites/slots/EndSlotWarmup/action] +Status : [Succeeded] +Description : [Finished warming of site with deploymentId 'anil-DemoWebApp'] +Operation name : [Microsoft.Web/sites/slots/EndSlotWarmup/action] +Status : [Succeeded] +Description : [Finished warming of site with deploymentId 'anil-DemoWebApp'] +Operation name : [Microsoft.Web/sites/slots/SlotSwap/action] +Status : [Succeeded] +Description : [Finished swapping site. New state is (Slot: 'stage', DeploymentId:'anil-demowebapp__f3c3'), (Slot: 'Production', DeploymentId:'anil-DemoWebApp')'. Operation:93559f4a-d5c1-4ad5-beb5-a43bbc0578f2] +Swapping slots done successfully +Operation - [Swap Slots] was success +Swapping request returned successfully +``` +The Workflow is complete. You can now deploy. + +### Step 5: Deploy the Workflow + +Click **Deploy**, select an artifact, and then click **Submit**. + +The Workflow deploys: + +![](./static/create-an-azure-web-app-blue-green-deployment-08.png) + +You can see the swap succeeded in the logs: + + +``` +... +Description : [Finished swapping site. 
New state is (Slot: 'stage', DeploymentId:'anil-demowebapp__f3c3'), (Slot: 'Production', DeploymentId:'anil-DemoWebApp')'. Operation:93559f4a-d5c1-4ad5-beb5-a43bbc0578f2]
+...
+```
+And the same information is displayed in the Azure portal Activity log:
+
+![](./static/create-an-azure-web-app-blue-green-deployment-09.png)
+
+### Option: Templatize the Workflow
+
+See [Create Pipeline Templates](https://docs.harness.io/article/60j7391eyy-templatize-pipelines).
+
+### Configure As Code
+
+To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the YAML editor button.
+
+### See Also
+
+* [Azure Web App Deployment Rollback](azure-web-app-deployment-rollback.md)
+
diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/create-an-azure-web-app-canary-deployment.md b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/create-an-azure-web-app-canary-deployment.md
new file mode 100644
index 00000000000..61d31ded7b5
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/create-an-azure-web-app-canary-deployment.md
@@ -0,0 +1,223 @@
+---
+title: Create an Azure Web App Canary Deployment
+description: Before You Begin. Visual Summary. Supported Platforms and Technologies. Step 1 -- Create the Canary Workflow. Name. Workflow Type. Environment. Submit. Step 2 -- Create Phase 1 Step 3 -- Slot Setup Step. O…
+# sidebar_position: 2
+helpdocs_topic_id: x0etkdg62q
+helpdocs_category_id: mfdyp6tf0v
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Currently, this feature is behind the Feature Flag `AZURE_WEBAPP`. Contact [Harness Support](https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=support@harness.io) to enable the feature.
+
+A Harness Azure Web App Canary deployment shifts traffic from one deployment slot to another incrementally.
+
+First, you select the deployment (live) and target (stage) slots to use. Next, you add Traffic Shift steps to incrementally increase traffic to the target slot.
+
+Finally, you swap entirely to the target slot, making it the deployment slot for this release. Azure swaps the Virtual IP addresses and URLs of the deployment and target slots.
+
+You can perform a Web App Canary deployment using a single or multi-phase Workflow. In either method, make sure the **Swap Slot** step is in the final phase of the Workflow.
+
+### Before You Begin
+
+Make sure you have read the following:
+
+* [Azure Web App Deployments Overview](azure-web-app-deployments-overview.md)
+* Make sure that you have connected Harness to your Azure subscription as described in [Connect to Azure and Artifact Repo for Your Web App Deployments](connect-to-azure-for-web-app-deployments.md).
+* [Add Your Docker Image for Azure Web App Deployment](add-your-docker-image-for-azure-web-app-deployment.md)
+* [Add Non-Containerized Artifacts for Azure Web App Deployment](add-a-non-containerized-artifacts-for-azure-web-app-deployment.md)
+* [Define Your Azure Web App Infrastructure](define-your-azure-web-app-infrastructure.md)
+* [Azure Web App Deployment Rollback](azure-web-app-deployment-rollback.md)
+
+### Visual Summary
+
+The following short video walks you through a Harness Azure Web App Canary Workflow setup.
+
+
+
+### Supported Platforms and Technologies
+
+See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms).
+
+### Step 1: Collect Azure Web App Information
+
+The Harness Workflow will use the existing Deployment slots from your Azure Web App.
+
+In the Azure portal, click your Web App, and then click **Deployment slots**. You can see the Deployment slots for your Web App.
+
+Click **Swap**. You can see the Source and Target slots.
+
+![](./static/create-an-azure-web-app-canary-deployment-17.png)
+
+You'll use these slot names in your Harness Workflow.
+
+Don't click the **Swap** button. Click **Close**.
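+
+If you prefer the CLI to the portal, you can list the same deployment slots with the Azure CLI (an illustrative command; substitute your own Web App and resource group names):
+
+
+```
+az webapp deployment slot list -n "web app name" -g "resource group name"
+```
+Note that the production slot is not itself a slot resource, so it does not appear in this list; it is referenced by name (for example, `production`) in the swap step.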
+
+### Step 2: Create the Canary Workflow
+
+Next, we'll create a single-phase Canary Workflow.
+
+You can perform a Web App Canary deployment using a single or multi-phase Workflow. In either method, make sure the **Swap Slot** step is in the final phase of the Workflow.
+
+In your Harness Application, click **Workflows**, and then click **Add Workflow**.
+
+Enter the following settings and click **Submit**.
+
+* **Name:** the name for this Workflow.
+* **Workflow Type:** select **Canary Deployment**.
+* **Environment:** select the Harness Environment you added in [Define Your Azure Web App Infrastructure](define-your-azure-web-app-infrastructure.md).
+
+### Step 3: Create Phase 1
+
+In your new Workflow, click **Add Phase**.
+
+In Workflow Phase, enter the following settings and click **Submit**.
+
+* **Service:** select the Harness Service you set up in [Add Your Docker Image for Azure Web App Deployment](add-your-docker-image-for-azure-web-app-deployment.md) or [Add Non-Containerized Artifacts for Azure Web App Deployment](add-a-non-containerized-artifacts-for-azure-web-app-deployment.md).
+* **Infrastructure Definition:** select the Web App Infrastructure Definition you set up in [Define Your Azure Web App Infrastructure](define-your-azure-web-app-infrastructure.md).
+
+Harness generates the steps needed for the phase.
+
+### Step 4: Slot Deployment Step
+
+The Slot Deployment step is where you select the Web App and the source and target deployment slots for the deployment.
+
+Open the **Slot Deployment** step.
+
+Enter the following settings and click **Submit**.
+
+* **Name:** enter a name for the step.
+* **App Service:** select the Azure Web App for deployment. Harness pulls the list of Web Apps using the credentials of the Azure Cloud Provider you selected in the phase's Infrastructure Definition.
+* **Deployment Slot:** select the Source slot for the deployment. This slot is where Harness deploys the new Web App version. Make sure the slot you select is running. Harness shows all slots regardless of their status.
+* **Target Slot:** select the Target slot for the deployment. This slot is where Harness will swap the App content and configuration elements during the **Swap Slot** step. Make sure the slot you select is running. Harness shows all slots regardless of their status.
+* **Slot Steady State Timeout:** enter a minimum of **30m**. The slot deployment relies on Azure and can take some time.
+
+When you are done, the step will look similar to this:
+
+![](./static/create-an-azure-web-app-canary-deployment-18.png)
+
+### Option: Use Variable Expressions in Settings
+
+You can use built-in Harness or custom Workflow variable expressions in the **Slot Deployment** step. See [Set Workflow Variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template).
+
+Variables are often used for templating the Workflow. See [Create Pipeline Templates](https://docs.harness.io/article/60j7391eyy-templatize-pipelines).
+
+### Option: Add a Health Check after Slot Setup
+
+In the Workflow **Verify Service** section, add a health check to ensure that the Docker container or non-containerized app is running correctly.
+
+The Slot Deployment step is considered successful once the slot is in a running state.
+
+A running state does not ensure that your new app is accessible. It can take some time for new content to become available on Azure.
+
+Also, the slot deployment might succeed but the Docker container or non-containerized artifact could be corrupted.
+
+A health check after Slot Deployment can ensure a successful deployment.
+
+### Step 5: Traffic % Step
+
+The **Traffic %** step shifts network traffic to the new Web App version in the deployment (source) slot.
+
+**Traffic % steps are** **not** **cumulative.** Each step sets the absolute percentage routed to the slot: if you set 25% in one step and 25% in the next, only 25% of traffic is routed, not 50%.
+
+Open the **Traffic %** step.
+
+In **Traffic Percentage**, enter a number (without the % character).
+
+Click **Submit**.
+
+You can use multiple **Traffic %** steps to incrementally increase traffic. In between each Traffic % step, you can add a health check and/or Approval step. Here is an example:
+
+![](./static/create-an-azure-web-app-canary-deployment-19.png)
+
+The Script in this example is:
+
+
+```
+curl -Is <your web app URL> | head -n 1
+```
+The Script output is:
+
+
+```
+INFO 2021-02-02 12:01:05 Executing command ...
+INFO 2021-02-02 12:01:17 HTTP/1.1 200 OK
+INFO 2021-02-02 12:01:17 Command completed with ExitCode (0)
+```
+### Step 6: Swap Slot
+
+You can perform a Web App Canary deployment using a single or multi-phase Workflow. In either method, make sure the **Swap Slot** step is in the final phase of the Workflow.
+
+The final step in the phase is Swap Slot. This step performs the Web App deployment slot swap. It is similar to doing a swap in the Azure portal or via the Azure CLI:
+
+
+```
+az webapp deployment slot swap -n "web app name" -g "resource group name" -s "source slot name" --target-slot "target slot"
+```
+Here is an example of the swap in the deployment logs:
+
+
+```
+Sending request for swapping source slot: [stage] with target slot: [production]
+Operation name : [Apply Web App Slot Configuration]
+Status : [Succeeded]
+Description : [Applied configuration settings from slot 'Production' to a site with deployment id 'anil-demowebapp__f3c3' in slot 'stage']
+Operation name : [Microsoft.Web/sites/slots/StartSlotWarmup/action]
+Status : [Succeeded]
+Description : [Initial state for slot swap operation is (Source slot: 'stage', DeploymentId:'anil-demowebapp__f3c3') (TargetSlot: 'production', DeploymentId:'anil-DemoWebApp')'. Operation:db1d5ed2-edba-471d-a8f2-d0421cdbe43f]
+Operation name : [Microsoft.Web/sites/slots/StartSlotWarmup/action]
+Status : [Succeeded]
+Description : [Initial state for slot swap operation is (Source slot: 'stage', DeploymentId:'anil-demowebapp__f3c3') (TargetSlot: 'production', DeploymentId:'anil-DemoWebApp')'. Operation:db1d5ed2-edba-471d-a8f2-d0421cdbe43f]
+Operation name : [Microsoft.Web/sites/slots/EndSlotWarmup/action]
+Status : [Succeeded]
+Description : [Finished warming of site with deploymentId 'anil-demowebapp__f3c3']
+Operation name : [Microsoft.Web/sites/slots/EndSlotWarmup/action]
+Status : [Succeeded]
+Description : [Finished warming of site with deploymentId 'anil-demowebapp__f3c3']
+Operation name : [Microsoft.Web/sites/slots/EndSlotWarmup/action]
+Status : [Succeeded]
+Description : [Finished warming of site with deploymentId 'anil-demowebapp__f3c3']
+Operation - [Swap Slots] was success
+Swapping request returned successfully
+Operation name : [Microsoft.Web/sites/slots/EndSlotWarmup/action]
+Status : [Succeeded]
+Description : [Finished warming of site with deploymentId 'anil-demowebapp__f3c3']
+Operation name : [Microsoft.Web/sites/slots/SlotSwap/action]
+Status : [Succeeded]
+Description : [Finished swapping site. New state is (Slot: 'stage', DeploymentId:'anil-DemoWebApp'), (Slot: 'Production', DeploymentId:'anil-demowebapp__f3c3')'. Operation:db1d5ed2-edba-471d-a8f2-d0421cdbe43f]
+Swapping slots done successfully
+```
+The Workflow phase is complete. You can now deploy.
+
+### Review: Artifact Check Step
+
+When you navigate back to the main Workflow view, you will see that an Artifact Check step has been added. Harness adds this step automatically to ensure that the deployment does not proceed unless the artifact can be obtained.
+
+### Step 7: Deploy the Workflow
+
+Click **Deploy**, select an artifact, and then click **Submit**.
+
+The Workflow deploys:
+
+![](./static/create-an-azure-web-app-canary-deployment-20.png)
+
+You can see the swap succeeded in the logs:
+
+
+```
+...
+Description : [Finished swapping site. New state is (Slot: 'stage', DeploymentId:'anil-DemoWebApp'), (Slot: 'Production', DeploymentId:'anil-demowebapp__f3c3')'. Operation:db1d5ed2-edba-471d-a8f2-d0421cdbe43f]
+Swapping slots done successfully
+```
+And the same information is displayed in the Azure portal Activity log:
+
+![](./static/create-an-azure-web-app-canary-deployment-21.png)
+
+### Option: Templatize the Workflow
+
+See [Create Pipeline Templates](https://docs.harness.io/article/60j7391eyy-templatize-pipelines).
+
+### Configure As Code
+
+To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the YAML editor button.
+
+### See Also
+
+* [Azure Web App Deployment Rollback](azure-web-app-deployment-rollback.md)
+
diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/define-your-azure-web-app-infrastructure.md b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/define-your-azure-web-app-infrastructure.md
new file mode 100644
index 00000000000..acec834a9bf
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/define-your-azure-web-app-infrastructure.md
@@ -0,0 +1,104 @@
+---
+title: Define Your Azure Web App Infrastructure
+description: Currently, this feature is behind the Feature Flag AZURE_WEBAPP. Contact Harness Support to enable the feature. The target Azure environment for your Harness Web App deployment is defined in a Harnes…
+# sidebar_position: 2
+helpdocs_topic_id: 2n35dber6l
+helpdocs_category_id: mfdyp6tf0v
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Currently, this feature is behind the Feature Flag `AZURE_WEBAPP`. Contact [Harness Support](https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=support@harness.io) to enable the feature.
+
+The target Azure environment for your Harness Web App deployment is defined in a Harness Environment's Infrastructure Definition.
+
+You simply select the Web App's Subscription and Resource Group.
+
+You can also provision the infrastructure and Web App as part of your Workflow.
+ +### Before You Begin + +* [Azure Web App Deployments Overview](azure-web-app-deployments-overview.md) +* Make sure that you have connected Harness to your Azure subscription as described in [Connect to Azure and Artifact Repo for Your Web App Deployments](connect-to-azure-for-web-app-deployments.md). +* [Add Your Docker Image for Azure Web App Deployment](add-your-docker-image-for-azure-web-app-deployment.md) +* [Add Non-Containerized Artifacts for Azure Web App Deployment](add-a-non-containerized-artifacts-for-azure-web-app-deployment.md) + +### Visual Summary + +### Supported Platforms and Technologies + +See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). + +### Step 1: Create an Environment + +In your Harness Application, click **Environments**. + +Click **Add Environment**. + +Enter a name and select an **Environment Type** for your Environment, and click **Submit**. + +The Environment Type here doesn't relate to the production or non-production slots of your Web App. + +A Harness Environment Type is simply a way to organize your work environments. + +### Step 2: Create an Infrastructure Definition + +In the new Environment, click **Add Infrastructure Definition**. + +Enter the following settings: + +#### Name + +Provide a name for the Infrastructure Definition. You'll select this name when defining target infrastructures for Workflows and their Phases. + +#### Cloud Provider Type + +Select **Microsoft Azure**. + +#### Deployment Type + +Select **Azure Web Application**. + +#### Cloud Provider + +Select the Azure Cloud Provider you set up for your Azure Web App deployment. The Cloud Provider determines which subscriptions and resource groups appear in the other settings. + +See [Connect to Azure and Artifact Repo for Your Web App Deployments](connect-to-azure-for-web-app-deployments.md). + +#### Subscription + +Select the Azure subscription used by your Web App. 
+
+The subscription is located in the Web App **Overview** section of the Azure portal.
+
+![](./static/define-your-azure-web-app-infrastructure-04.png)
+
+#### Resource Group
+
+Select the resource group used by your Web App.
+
+The resource group is located in the Web App **Overview** section of the Azure portal.
+
+![](./static/define-your-azure-web-app-infrastructure-05.png)
+
+Within the same resource group, you can't mix Windows and Linux apps in the same region. See [Limitations](https://docs.microsoft.com/en-us/azure/app-service/overview#limitations) from Azure.
+
+### Option: Scope to Specific Services
+
+The **Scope to specific Services** setting in the Infrastructure Definition enables you to scope this Infrastructure Definition to specific Harness Services.
+
+See [Add an Infrastructure Definition](https://docs.harness.io/article/v3l3wqovbe-infrastructure-definitions).
+
+### Option: Dynamically Provision Infrastructure
+
+You can use Terraform scripts to provision the target Azure environment and Web App for your Harness Web App deployment.
+
+See [Map an Azure Web App](../../terraform-category/mapgcp-kube-terraform-infra.md#option-7-map-an-azure-web-app).
+
+### Next Steps
+
+* [Create an Azure Web App Canary Deployment](create-an-azure-web-app-canary-deployment.md)
+* [Create an Azure Web App Blue/Green Deployment](create-an-azure-web-app-blue-green-deployment.md)
+
+### Configure As Code
+
+To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the YAML editor button.
+ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/_container-settings.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/_container-settings.png new file mode 100644 index 00000000000..ed40d219543 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/_container-settings.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/_deployment-center.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/_deployment-center.png new file mode 100644 index 00000000000..472662a5dd5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/_deployment-center.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-a-non-containerized-artifacts-for-azure-web-app-deployment-24.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-a-non-containerized-artifacts-for-azure-web-app-deployment-24.png new file mode 100644 index 00000000000..f4547cc0e2c Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-a-non-containerized-artifacts-for-azure-web-app-deployment-24.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-a-non-containerized-artifacts-for-azure-web-app-deployment-25.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-a-non-containerized-artifacts-for-azure-web-app-deployment-25.png new file mode 100644 index 00000000000..32568ef342c Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-a-non-containerized-artifacts-for-azure-web-app-deployment-25.png differ diff --git 
a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-a-non-containerized-artifacts-for-azure-web-app-deployment-26.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-a-non-containerized-artifacts-for-azure-web-app-deployment-26.png new file mode 100644 index 00000000000..4a7a80b5c71 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-a-non-containerized-artifacts-for-azure-web-app-deployment-26.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-a-non-containerized-artifacts-for-azure-web-app-deployment-27.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-a-non-containerized-artifacts-for-azure-web-app-deployment-27.png new file mode 100644 index 00000000000..d421e331cbe Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-a-non-containerized-artifacts-for-azure-web-app-deployment-27.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-your-docker-image-for-azure-web-app-deployment-12.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-your-docker-image-for-azure-web-app-deployment-12.png new file mode 100644 index 00000000000..b6311969acb Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-your-docker-image-for-azure-web-app-deployment-12.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-your-docker-image-for-azure-web-app-deployment-13.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-your-docker-image-for-azure-web-app-deployment-13.png new file mode 100644 index 00000000000..066f9011022 Binary files /dev/null 
and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-your-docker-image-for-azure-web-app-deployment-13.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-your-docker-image-for-azure-web-app-deployment-14.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-your-docker-image-for-azure-web-app-deployment-14.png new file mode 100644 index 00000000000..b2caf1648b2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-your-docker-image-for-azure-web-app-deployment-14.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-your-docker-image-for-azure-web-app-deployment-15.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-your-docker-image-for-azure-web-app-deployment-15.png new file mode 100644 index 00000000000..b6c829939dd Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-your-docker-image-for-azure-web-app-deployment-15.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-your-docker-image-for-azure-web-app-deployment-16.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-your-docker-image-for-azure-web-app-deployment-16.png new file mode 100644 index 00000000000..06c05da44ba Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/add-your-docker-image-for-azure-web-app-deployment-16.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/azure-web-app-deployment-rollback-10.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/azure-web-app-deployment-rollback-10.png new file mode 
100644 index 00000000000..bed6a6ee33a Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/azure-web-app-deployment-rollback-10.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/azure-web-app-deployment-rollback-11.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/azure-web-app-deployment-rollback-11.png new file mode 100644 index 00000000000..ff2e24b2b33 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/azure-web-app-deployment-rollback-11.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/azure-web-app-deployments-overview-00.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/azure-web-app-deployments-overview-00.png new file mode 100644 index 00000000000..5d653a8b0d5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/azure-web-app-deployments-overview-00.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/azure-web-app-deployments-overview-01.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/azure-web-app-deployments-overview-01.png new file mode 100644 index 00000000000..454cea1de31 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/azure-web-app-deployments-overview-01.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/azure-web-app-deployments-overview-02.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/azure-web-app-deployments-overview-02.png new file mode 100644 index 00000000000..ffdfe12b51e Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/azure-web-app-deployments-overview-02.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/azure-web-app-deployments-overview-03.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/azure-web-app-deployments-overview-03.png new file mode 100644 index 00000000000..8f4f6c50466 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/azure-web-app-deployments-overview-03.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/connect-to-azure-for-web-app-deployments-22.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/connect-to-azure-for-web-app-deployments-22.png new file mode 100644 index 00000000000..b6311969acb Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/connect-to-azure-for-web-app-deployments-22.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/connect-to-azure-for-web-app-deployments-23.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/connect-to-azure-for-web-app-deployments-23.png new file mode 100644 index 00000000000..066f9011022 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/connect-to-azure-for-web-app-deployments-23.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-blue-green-deployment-06.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-blue-green-deployment-06.png new file mode 100644 index 00000000000..ca1ab86c5b0 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-blue-green-deployment-06.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-blue-green-deployment-07.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-blue-green-deployment-07.png new file mode 100644 index 00000000000..ea91d02a291 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-blue-green-deployment-07.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-blue-green-deployment-08.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-blue-green-deployment-08.png new file mode 100644 index 00000000000..9e35c212158 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-blue-green-deployment-08.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-blue-green-deployment-09.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-blue-green-deployment-09.png new file mode 100644 index 00000000000..054b4f00786 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-blue-green-deployment-09.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-canary-deployment-17.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-canary-deployment-17.png new file mode 100644 index 00000000000..ca1ab86c5b0 
Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-canary-deployment-17.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-canary-deployment-18.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-canary-deployment-18.png new file mode 100644 index 00000000000..ea91d02a291 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-canary-deployment-18.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-canary-deployment-19.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-canary-deployment-19.png new file mode 100644 index 00000000000..3520c011337 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-canary-deployment-19.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-canary-deployment-20.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-canary-deployment-20.png new file mode 100644 index 00000000000..14e1b60c096 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-canary-deployment-20.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-canary-deployment-21.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-canary-deployment-21.png new file mode 100644 index 00000000000..4b31d7b81cb Binary files 
/dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/create-an-azure-web-app-canary-deployment-21.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/define-your-azure-web-app-infrastructure-04.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/define-your-azure-web-app-infrastructure-04.png new file mode 100644 index 00000000000..4f8e6c1a5cd Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/define-your-azure-web-app-infrastructure-04.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/define-your-azure-web-app-infrastructure-05.png b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/define-your-azure-web-app-infrastructure-05.png new file mode 100644 index 00000000000..c14e74c4318 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/azure-webapp-category/static/define-your-azure-web-app-infrastructure-05.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/static/add-azure-arm-templates-00.png b/docs/first-gen/continuous-delivery/azure-deployments/static/add-azure-arm-templates-00.png new file mode 100644 index 00000000000..05675a1fcf5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/static/add-azure-arm-templates-00.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/static/add-azure-arm-templates-01.png b/docs/first-gen/continuous-delivery/azure-deployments/static/add-azure-arm-templates-01.png new file mode 100644 index 00000000000..4ff1b725aa5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/static/add-azure-arm-templates-01.png differ diff --git 
a/docs/first-gen/continuous-delivery/azure-deployments/static/provision-using-the-arm-blueprint-create-resource-step-02.png b/docs/first-gen/continuous-delivery/azure-deployments/static/provision-using-the-arm-blueprint-create-resource-step-02.png new file mode 100644 index 00000000000..fef3ba904ac Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/static/provision-using-the-arm-blueprint-create-resource-step-02.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/static/provision-using-the-arm-blueprint-create-resource-step-03.png b/docs/first-gen/continuous-delivery/azure-deployments/static/provision-using-the-arm-blueprint-create-resource-step-03.png new file mode 100644 index 00000000000..ea91d02a291 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/static/provision-using-the-arm-blueprint-create-resource-step-03.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/static/provision-using-the-arm-blueprint-create-resource-step-04.png b/docs/first-gen/continuous-delivery/azure-deployments/static/provision-using-the-arm-blueprint-create-resource-step-04.png new file mode 100644 index 00000000000..3be9ee5690b Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/static/provision-using-the-arm-blueprint-create-resource-step-04.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/static/provision-using-the-arm-blueprint-create-resource-step-05.png b/docs/first-gen/continuous-delivery/azure-deployments/static/provision-using-the-arm-blueprint-create-resource-step-05.png new file mode 100644 index 00000000000..d861ec162e1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/static/provision-using-the-arm-blueprint-create-resource-step-05.png differ diff --git 
a/docs/first-gen/continuous-delivery/azure-deployments/static/target-azure-arm-or-blueprint-provisioned-infrastructure-06.png b/docs/first-gen/continuous-delivery/azure-deployments/static/target-azure-arm-or-blueprint-provisioned-infrastructure-06.png new file mode 100644 index 00000000000..939e9df8280 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/static/target-azure-arm-or-blueprint-provisioned-infrastructure-06.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/static/target-azure-arm-or-blueprint-provisioned-infrastructure-07.png b/docs/first-gen/continuous-delivery/azure-deployments/static/target-azure-arm-or-blueprint-provisioned-infrastructure-07.png new file mode 100644 index 00000000000..5c97b18f6fb Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/static/target-azure-arm-or-blueprint-provisioned-infrastructure-07.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/static/target-azure-arm-or-blueprint-provisioned-infrastructure-08.png b/docs/first-gen/continuous-delivery/azure-deployments/static/target-azure-arm-or-blueprint-provisioned-infrastructure-08.png new file mode 100644 index 00000000000..2f07f5a348c Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/static/target-azure-arm-or-blueprint-provisioned-infrastructure-08.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/static/target-azure-arm-or-blueprint-provisioned-infrastructure-09.png b/docs/first-gen/continuous-delivery/azure-deployments/static/target-azure-arm-or-blueprint-provisioned-infrastructure-09.png new file mode 100644 index 00000000000..3d428e73f2f Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/static/target-azure-arm-or-blueprint-provisioned-infrastructure-09.png differ diff --git 
a/docs/first-gen/continuous-delivery/azure-deployments/static/target-azure-arm-or-blueprint-provisioned-infrastructure-10.png b/docs/first-gen/continuous-delivery/azure-deployments/static/target-azure-arm-or-blueprint-provisioned-infrastructure-10.png new file mode 100644 index 00000000000..ea91d02a291 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/static/target-azure-arm-or-blueprint-provisioned-infrastructure-10.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/static/target-azure-arm-or-blueprint-provisioned-infrastructure-11.png b/docs/first-gen/continuous-delivery/azure-deployments/static/target-azure-arm-or-blueprint-provisioned-infrastructure-11.png new file mode 100644 index 00000000000..3be9ee5690b Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/static/target-azure-arm-or-blueprint-provisioned-infrastructure-11.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/static/target-azure-arm-or-blueprint-provisioned-infrastructure-12.png b/docs/first-gen/continuous-delivery/azure-deployments/static/target-azure-arm-or-blueprint-provisioned-infrastructure-12.png new file mode 100644 index 00000000000..d861ec162e1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/static/target-azure-arm-or-blueprint-provisioned-infrastructure-12.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/static/use-azure-arm-and-blueprint-parameters-in-workflow-steps-13.png b/docs/first-gen/continuous-delivery/azure-deployments/static/use-azure-arm-and-blueprint-parameters-in-workflow-steps-13.png new file mode 100644 index 00000000000..ea91d02a291 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/static/use-azure-arm-and-blueprint-parameters-in-workflow-steps-13.png differ diff --git 
a/docs/first-gen/continuous-delivery/azure-deployments/static/use-azure-arm-and-blueprint-parameters-in-workflow-steps-14.png b/docs/first-gen/continuous-delivery/azure-deployments/static/use-azure-arm-and-blueprint-parameters-in-workflow-steps-14.png new file mode 100644 index 00000000000..3be9ee5690b Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/static/use-azure-arm-and-blueprint-parameters-in-workflow-steps-14.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/_category_.json b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/_category_.json new file mode 100644 index 00000000000..b172df06f4f --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/_category_.json @@ -0,0 +1 @@ +{"label": "Azure Virtual Machine Scale Sets (VMSS)", "position": 10, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Azure Virtual Machine Scale Sets (VMSS)"}, "customProps": { "helpdocs_category_id": "4o8zim2tfr"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/add-your-azure-vm-image-for-deployment.md b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/add-your-azure-vm-image-for-deployment.md new file mode 100644 index 00000000000..2ad0ef95b44 --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/add-your-azure-vm-image-for-deployment.md @@ -0,0 +1,99 @@ +--- +title: Add Your Azure VM Image for Deployment +description: Currently, this feature is behind the Feature Flag AZURE_VMSS. Contact Harness Support to enable the feature.. 
During an Azure virtual machine scale set (VMSS) deployment, Harness creates a new VMSS… +# sidebar_position: 2 +helpdocs_topic_id: c43hmoj6ic +helpdocs_category_id: 4o8zim2tfr +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the Feature Flag `AZURE_VMSS`. Contact [Harness Support](https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=support@harness.io) to enable the feature. During an Azure virtual machine scale set (VMSS) deployment, Harness creates a new VMSS using a VM image definition from your Shared Image Gallery. + +In a Harness Service of the **Azure Virtual Machine Scale Set** deployment type, you select the image definition for Harness to use. + +In this topic: + +* [Before You Begin](#before_you_begin) +* [Supported Platforms and Technologies](#supported_platforms_and_technologies) +* [Step 1: Ensure You Have an Image Definition](#step_1_ensure_you_have_an_image_definition) +* [Step 2: Create the Harness VMSS Service](#step_2_create_the_harness_vmss_service) +* [Step 3: Add the Image Definition Artifact Source](#step_3_add_the_image_definition_artifact_source) +* [Next Steps](#next_steps) + +### Before You Begin + +* [Azure Virtual Machine Scale Set Deployments Overview](azure-virtual-machine-scale-set-deployments.md) +* [Connect to Azure for VMSS Deployments](connect-to-your-azure-vmss.md) +* [Harness Delegate Overview](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) +* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) + +Ensure you have connected Harness to your Azure subscription as described in [Connect to Azure for VMSS Deployments](connect-to-your-azure-vmss.md). + +### Supported Platforms and Technologies + +See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). + +### Step 1: Ensure You Have an Image Definition + +Azure image definitions are simple to create. 
For steps on setting one up for the first time, see [Create an Azure Shared Image Gallery using the portal](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/shared-images-portal) or [Tutorial: Create a custom image of an Azure VM with the Azure CLI](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/tutorial-custom-images) from Azure. + +Open your Shared Image Gallery and note the following information: + +* Subscription +* Resource group +* Image gallery name +* Image definition name + +You can see this information in the Shared Image Gallery: + +![](./static/add-your-azure-vm-image-for-deployment-18.png) + +You can also see this information in the gallery **Properties**. + +### Step 2: Create the Harness VMSS Service + +1. In Harness, create or open an Application. See [Create an Application](https://docs.harness.io/article/bucothemly-application-configuration). +2. Select **Services**, and then click **Add Service**. +3. In the **Add Service** settings, name your Service. +4. In **Deployment Type**, select **Azure Virtual Machine Scale Set**. +5. Click **Submit**. + +The new Service is created. + +The Service only requires the image definition you want to use when creating your new VMSS. + +### Step 3: Add the Image Definition Artifact Source + +1. In the Harness Service, click **Add Artifact Source**. +2. In the Artifact Source settings, in **Cloud Provider**, select the Azure Cloud Provider you added in [Connect to Your Azure VMSS](connect-to-your-azure-vmss.md). +3. In **Subscription**, select the subscription used in your image definition. +4. In **Resource Group**, select the resource group used in your image definition. +5. In **Image Gallery**, select the image gallery containing your image definition. +6. In **Image Definition**, select the image definition to use when creating your new VMSS. +7. Click **Submit**. + +The image definition information is added. 
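If you want to verify the gallery details outside Harness, the Azure CLI can list them. This is a sketch only: the resource group and gallery names are placeholders, and it assumes the `az` CLI is installed and you have signed in with `az login`.

```shell
# List the Shared Image Galleries in a resource group (placeholder names).
az sig list --resource-group MyResourceGroup --output table

# List the image definitions in a gallery; these are the names you select
# in the Harness Artifact Source settings.
az sig image-definition list \
  --resource-group MyResourceGroup \
  --gallery-name MyImageGallery \
  --output table
```

The subscription, resource group, image gallery, and image definition shown should match the values you selected in the Artifact Source above.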
+ +At deployment runtime, Harness will use this image definition to create the VMs in the VMSS. + +If Harness cannot obtain this information, verify that the Client ID used in the Azure Cloud Provider has permissions to read the image gallery. + +If you delete an image, it might still appear here until Harness cleans up deleted images, which it does every 2 hours. + +### Next Steps + +* [Define Your Azure VMSS Target Infrastructure](define-your-azure-vmss-target-infrastructure.md) + +### See Also + +See the following docs from Azure: + +* [Troubleshooting shared image galleries in Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/troubleshooting-shared-images) +* [Shared Image Gallery overview](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/shared-image-galleries) +* [Azure virtual machine scale sets FAQs](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-faq) + +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. + diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/azure-virtual-machine-scale-set-deployments.md b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/azure-virtual-machine-scale-set-deployments.md new file mode 100644 index 00000000000..07ad7975ece --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/azure-virtual-machine-scale-set-deployments.md @@ -0,0 +1,65 @@ +--- +title: Azure Virtual Machine Scale Set Deployments Overview +description: Currently, this feature is behind the Feature Flag AZURE_VMSS. Contact Harness Support to enable the feature. 
To deploy an Azure virtual machine scale set (VMSS) using Harness, you only need to provi… +# sidebar_position: 2 +helpdocs_topic_id: 1h0723zsvm +helpdocs_category_id: 4o8zim2tfr +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the Feature Flag `AZURE_VMSS`. Contact [Harness Support](https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=support@harness.io) to enable the feature. To deploy an Azure virtual machine scale set (VMSS) using Harness, you only need to provide two things: an instance image and a base VMSS template. + +Harness creates a new VMSS from the base VMSS template and adds instances using the instance image you provided. + +For detailed instructions on deploying a VMSS using Harness, see the following how-tos. They are listed in the order they are commonly performed. + +* [Connect to Azure for VMSS Deployments](connect-to-your-azure-vmss.md) +* [Add Your Azure VM Image for Deployment](add-your-azure-vm-image-for-deployment.md) +* [Define Your Azure VMSS Target Infrastructure](define-your-azure-vmss-target-infrastructure.md) +* [Create an Azure VMSS Basic Deployment](create-an-azure-vmss-basic-deployment.md) +* [Create an Azure VMSS Canary Deployment](create-an-azure-vmss-canary-deployment.md) +* [Create an Azure VMSS Blue/Green Deployment](create-an-azure-vmss-blue-green-deployment.md) + +Harness uses tagging and naming for versioning. See [Azure VMSS Versioning and Naming](azure-vmss-versioning-and-naming.md). + +### Before You Begin + +Before learning about Harness VMSS deployments, you should have an understanding of [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts). + +### Limitations + +Harness uses the Azure SDK, among other methods, and the Azure SDK does not support authenticated proxies. Consequently, you cannot use Azure connections for artifacts, machine images, and so on, that require proxy authentication. This is an Azure limitation, not a Harness limitation. 
This is a known Azure limitation involving Java environment properties and the Azure SDK. + +### What Does Harness Need Before You Start? + +A Harness VMSS deployment requires the following: + +* A working Azure VM instance image that Harness will use to create your instances. +* A working VMSS that Harness will use as a template for the new VMSS(s) it creates. +* An Azure VM to host the Harness Delegate that will perform the deployment tasks. +* An Azure subscription you will use to connect Harness to your Azure platform. The subscription must have a Reader role at minimum. This role is only used by the Harness Delegate when it uses the Azure APIs to discover target VMs. +* An SSH key for Harness to set up on the new VMSS instances. This enables users to log into the new instances. + +### What Does Harness Deploy? + +Harness takes the instance image and base VMSS you provide, creates a new VMSS, and populates it with instances using the image. You can specify the desired, min, and max instances for the new VMSS, the resize strategy, and other settings in Harness. + +### What Operating Systems are Supported? + +Linux and Windows VMSS deployments are supported. + +### What Does a Harness VMSS Deployment Involve? + +The following table describes the major steps of a Harness VMSS deployment: + + + +| **Step** | **Name** | **Links** | +| --- | --- | --- | +| 1 | Install a Harness Delegate on a VM in your target Azure subnet. | [Connect to Azure for VMSS Deployments](connect-to-your-azure-vmss.md) | +| 2 | Add the VM instance image Harness will use for creating new instances in the new VMSS. | [Add Your Azure VM Image for Deployment](add-your-azure-vm-image-for-deployment.md) | +| 3 | Select an existing VMSS to use as a template when Harness creates a new VMSS. | [Define Your Azure VMSS Target Infrastructure](define-your-azure-vmss-target-infrastructure.md) | +| 4 | Create a Harness Workflow to perform the deployment. | Select the [deployment strategy](../../concepts-cd/deployment-types/deployment-concepts-and-strategies.md) you want to perform:<br />• [Create an Azure VMSS Basic Deployment](create-an-azure-vmss-basic-deployment.md)<br />• [Create an Azure VMSS Canary Deployment](create-an-azure-vmss-canary-deployment.md)<br />• [Create an Azure VMSS Blue/Green Deployment](create-an-azure-vmss-blue-green-deployment.md) | + diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/azure-vmss-versioning-and-naming.md b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/azure-vmss-versioning-and-naming.md new file mode 100644 index 00000000000..378a3d40c91 --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/azure-vmss-versioning-and-naming.md @@ -0,0 +1,61 @@ +--- +title: Azure VMSS Versioning and Naming +description: Currently, this feature is behind the Feature Flag AZURE_VMSS. Contact Harness Support to enable the feature. In this topic, we cover how Harness names, tags, and versions the VMSS and instances you… +# sidebar_position: 2 +helpdocs_topic_id: w67zx6mv87 +helpdocs_category_id: 4o8zim2tfr +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the Feature Flag `AZURE_VMSS`. Contact [Harness Support](https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=support@harness.io) to enable the feature. In this topic, we cover how Harness names, tags, and versions the VMSS and instances you deploy. 
+ +In this topic: + +* [Before You Begin](#before_you_begin) +* [VMSS and Instance Names](#vmss_and_instance_names) +* [Harness Revision Tags](#harness_revision_tags) + +### Before You Begin + +* [Create an Azure VMSS Basic Deployment](create-an-azure-vmss-basic-deployment.md) +* [Create an Azure VMSS Canary Deployment](create-an-azure-vmss-canary-deployment.md) +* [Create an Azure VMSS Blue/Green Deployment](create-an-azure-vmss-blue-green-deployment.md) +* [Azure Virtual Machine Scale Set Deployments Overview](azure-virtual-machine-scale-set-deployments.md) +* [Define Your Azure VMSS Target Infrastructure](define-your-azure-vmss-target-infrastructure.md) +* [Add Your Azure VM Image for Deployment](add-your-azure-vm-image-for-deployment.md) +* [Connect to Your Azure VMSS](connect-to-your-azure-vmss.md) +* [Harness Delegate Overview](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) +* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) + +### VMSS and Instance Names + +The VMSS and its new instances created by Harness are named using the VMSS name you entered in the **Azure Virtual Machine Scale Set Setup** Workflow step, and given an incremental suffix. + +For example, if the VMSS is named `doc__basic`, the first instance is named `doc__basic__1`, and the second `doc__basic__2`. + +Each subsequent deployment using the same Harness Infrastructure Definition will increment the suffix on the name of the deployed VMSS, regardless of the VMSS name. + +For example, here are three VMSS deployments: + +![](./static/azure-vmss-versioning-and-naming-13.png) + +The first two `doc__basic` deployments are Basic Workflows and the `doc__canary` VMSS is a Canary Workflow. The `doc__canary` VMSS has the suffix `__3` because it used the same Infrastructure Definition as the `doc__basic` Workflows. + +### Harness Revision Tags + +Harness adds three Azure tags to each VMSS it deploys. These tags are used for revision tracking. 
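The incremental naming described in **VMSS and Instance Names** can be sketched as a tiny helper. This is hypothetical and for illustration only: Harness tracks the revision counter internally, per Infrastructure Definition.

```shell
# Hypothetical sketch: one revision counter per Infrastructure Definition,
# shared by every Workflow that deploys through it.
revision=0
next_vmss_name() {
  revision=$((revision + 1))
  printf '%s__%d\n' "$1" "$revision"
}

next_vmss_name doc__basic    # doc__basic__1
next_vmss_name doc__basic    # doc__basic__2
# A Canary Workflow on the same Infrastructure Definition continues the count:
next_vmss_name doc__canary   # doc__canary__3
```

Note how `doc__canary` gets the suffix `__3`: the counter belongs to the Infrastructure Definition, not to the VMSS name.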
+ +Do not delete these tags. You can see the tags on the VMSS: + +![](./static/azure-vmss-versioning-and-naming-14.png) + +The tags are: + +* `HARNESS_REVISION` — The unique revision number of the VMSS, with an incremental suffix. +* `Name` — The name of the VMSS, with an incremental suffix. +* `Created` — The timestamp of the VMSS creation. + +With each deployment of a VMSS using the same Harness Infrastructure Definition, the suffixes of the `HARNESS_REVISION` and `Name` tags are incremented: + +![](./static/azure-vmss-versioning-and-naming-15.png) \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/connect-to-your-azure-vmss.md b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/connect-to-your-azure-vmss.md new file mode 100644 index 00000000000..4b7d597c467 --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/connect-to-your-azure-vmss.md @@ -0,0 +1,69 @@ +--- +title: Connect to Azure for VMSS Deployments +description: Currently, this feature is behind the Feature Flag AZURE_VMSS. Contact Harness Support to enable the feature. You connect Harness to your Azure account to deploy virtual machine scale sets using a H… +# sidebar_position: 2 +helpdocs_topic_id: d5hob1zuip +helpdocs_category_id: 4o8zim2tfr +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the Feature Flag `AZURE_VMSS`. Contact [Harness Support](https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=support@harness.io) to enable the feature. You connect Harness to your Azure account to deploy [virtual machine scale sets](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/overview) using a Harness Azure Cloud Provider. 
+ +In this topic: + +* [Before You Begin](#before_you_begin) +* [Supported Platforms and Technologies](#supported_platforms_and_technologies) +* [Review: Azure Connection Options](#review_azure_connection_options) +* [Step 1: Install a Harness Delegate](#step_1_install_a_harness_delegate) +* [Step 2: Set Up the Azure Cloud Provider](#step_2_set_up_the_azure_cloud_provider) +* [Next Steps](#next_steps) + +### Before You Begin + +* [Azure Virtual Machine Scale Set Deployments Overview](azure-virtual-machine-scale-set-deployments.md) +* [Harness Delegate Overview](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) +* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) + +### Supported Platforms and Technologies + +See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). + +### Review: Azure Connection Options + +As covered in [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts), you need to install a Harness Delegate in your target infrastructure before setting up your Harness deployment. + +There are several types of Delegates you can use for a virtual machine scale set deployment, described in [Delegate Installation Overview](https://docs.harness.io/article/igftn7rrtg-delegate-installation-overview). + +Shell Script, Docker, Kubernetes, and Helm Delegates are all options. + +The simplest option for most users is to install the Harness Shell Script Delegate on a VM in the same resource group, virtual network, and subnet where your virtual machine scale set will be deployed. + +### Step 1: Install a Harness Delegate + +Follow the installation steps for the Harness Delegate you want to install. See [Delegate Installation Overview](https://docs.harness.io/article/igftn7rrtg-delegate-installation-overview) for the available options. 
+
+Ensure this Delegate is in or can connect to the resource group, virtual network, and subnet where your virtual machine scale set will be deployed.
+
+### Step 2: Set Up the Azure Cloud Provider
+
+A Harness Azure Cloud Provider connects to your Azure account using your Client ID and Tenant ID.
+
+Follow the steps in [Add Microsoft Azure Cloud Provider](https://docs.harness.io/article/4n3595l6in-add-microsoft-azure-cloud-provider) to connect Harness to Azure.
+
+That's all the setup you need to connect Harness to your account and start your virtual machine scale set deployment.
+
+A virtual machine scale set deployment uses an Azure Shared Image Gallery and image. Access to those resources uses the same Azure Cloud Provider.
+
+### Next Steps
+
+* [Add Your Azure VM Image for Deployment](add-your-azure-vm-image-for-deployment.md)
+
+### See Also
+
+See the following docs from Azure:
+
+* [Troubleshooting shared image galleries in Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/troubleshooting-shared-images)
+* [Shared Image Gallery overview](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/shared-image-galleries)
+* [Azure virtual machine scale sets FAQs](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-faq)
+
diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/create-an-azure-vmss-basic-deployment.md b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/create-an-azure-vmss-basic-deployment.md
new file mode 100644
index 00000000000..ba09398b33b
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/create-an-azure-vmss-basic-deployment.md
@@ -0,0 +1,277 @@
+---
+title: Create an Azure VMSS Basic Deployment
+description: Currently, this feature is behind the Feature Flag AZURE_VMSS. Contact Harness Support to enable the feature.. 
A Basic virtual machine scale set (VMSS) deployment sets up a new VMSS using the image y… +# sidebar_position: 2 +helpdocs_topic_id: 74htogyjad +helpdocs_category_id: 4o8zim2tfr +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the Feature Flag `AZURE_VMSS`. Contact [Harness Support](https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=support@harness.io) to enable the feature. A Basic virtual machine scale set (VMSS) deployment sets up a new VMSS using the image you supplied in [Add Your Azure VM Image for Deployment](add-your-azure-vm-image-for-deployment.md) and the base VMSS template you selected in [Define Your Azure VMSS Target Infrastructure](define-your-azure-vmss-target-infrastructure.md). + +You specify the range of instances you want for the new VMSS and then the percentage or count for the actual deployment. + +For other deployment strategies, see [Create an Azure VMSS Canary Deployment](create-an-azure-vmss-canary-deployment.md), and [Create an Azure VMSS Blue/Green Deployment](create-an-azure-vmss-blue-green-deployment.md). + + +### Before You Begin + +* [Azure Virtual Machine Scale Set Deployments Overview](azure-virtual-machine-scale-set-deployments.md) +* [Define Your Azure VMSS Target Infrastructure](define-your-azure-vmss-target-infrastructure.md) +* [Add Your Azure VM Image for Deployment](add-your-azure-vm-image-for-deployment.md) +* [Connect to Azure for VMSS Deployments](connect-to-your-azure-vmss.md) +* [Harness Delegate Overview](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) +* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) + +### Supported Platforms and Technologies + +See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). + +### Step 1: Create the Basic Workflow + +In your Harness Application, click **Workflows**, and then click **Add Workflow**. 
+ +Enter the new Workflow's settings. + +#### Name + +Enter a name for the Workflow. You will use this name to locate the Workflow in Deployments and to add it to [Pipelines](https://docs.harness.io/article/zc1u96u6uj-pipeline-configuration). + +#### Workflow Type + +Select **Basic**. See [Deployment Concepts and Strategies](../../concepts-cd/deployment-types/deployment-concepts-and-strategies.md). + +For other deployment strategies, see [Create an Azure VMSS Canary Deployment](create-an-azure-vmss-canary-deployment.md), and [Create an Azure VMSS Blue/Green Deployment](create-an-azure-vmss-blue-green-deployment.md). + +#### Environment + +Select the Environment you created in [Define Your Azure VMSS Target Infrastructure](define-your-azure-vmss-target-infrastructure.md). + +#### Service + +Select the Service you created in [Add Your Azure VM Image for Deployment](add-your-azure-vm-image-for-deployment.md). + +#### Infrastructure Definition + +Select the Infrastructure Definition you created in [Define Your Azure VMSS Target Infrastructure](define-your-azure-vmss-target-infrastructure.md). + +#### Submit + +When you are done, click **Submit**. + +The steps for the Basic Workflow VMSS deployment are generated automatically. + +Next, we'll take a look at each step's settings and how you can change them. + +### Step 2: Azure Virtual Machine Scale Set Setup + +The Azure Virtual Machine Scale Set Setup step is where you specify the default settings for the new VMSS. + +In particular, you specify the min, max, and desired number of instances for the new VMSS. + +These correspond to the **Instance limits** settings in **Auto created scale condition** in VMSS: + +![](./static/create-an-azure-vmss-basic-deployment-16.png) + +Later, in the **Upgrade Virtual Machine Scale Set** step, you will upgrade the number of instances by a percentage or count of the desired instances. + +#### Name + +Enter a name for the Workflow step. 
+
+#### Virtual Machine Scale Set Name
+
+Enter a name for the new VMSS. This is the name that will appear in the **Virtual machine scale sets** blade in Azure.
+
+Hyphens in the names are converted to double underscores in Azure. For example, if you enter `doc-basic` the name in Azure will be `doc__basic`. The first time you deploy, the name of the new VMSS is given the suffix `__1`. Each time you deploy a new VMSS using the same Harness Infrastructure Definition, the suffix is incremented, such as `__2`.
+
+You can use the default name, which is a concatenation of the names of your Application, Service, and Environment: `${app.name}_${service.name}_${env.name}`.
+
+For information on naming and versioning, see [Azure VMSS Versioning and Naming](azure-vmss-versioning-and-naming.md).
+
+#### Instances
+
+Select **Fixed** or **Same as already running Default Instances**.
+
+For **Same as already running Default instances**, Harness determines if there is a previous VMSS deployment for the same Infrastructure Definition. If one is present, Harness takes the number of instances from there. If there is no previous deployment, Harness uses the default of 6.
+
+If there is more than one scaling policy attached to the already running, previously deployed VMSS, Harness uses the policy named **Auto created scale condition** or **Profile1**.
+
+**Fixed** allows you to set the min, max, and desired number of instances for the new VMSS.
+
+##### No Autoscaling Policy Attached to Base VMSS
+
+When deploying a new VMSS from a base VMSS with no autoscaling policies (manual scaling), the following occurs:
+
+* The first deployment will create a default of six instances.
+* The next deployment's instance count will be the same as the previous number of running instances.
+* This new VMSS will still have no autoscaling policy attached since the base VMSS has none. Only the instance number will change depending on the number of running instances of the previous deployment. 
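For illustration only, this fallback behavior can be sketched as follows. The helper name is hypothetical, not Harness code:

```
def resolve_instance_count(previous_running_instances=None, default_count=6):
    """Instance count for a new VMSS when the base scale set has no
    autoscaling policy attached (manual scaling).

    previous_running_instances is the running count of the last
    Harness-managed VMSS for the same Infrastructure Definition, or
    None if this is the first deployment.
    """
    if previous_running_instances is None:
        # First deployment: fall back to the default of 6 instances.
        return default_count
    # Later deployments reuse the previous running instance count.
    return previous_running_instances
```

For example, a first deployment resolves to 6 instances, while a follow-up deployment after 3 instances were running resolves to 3.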
+ +#### Maximum Instances + +Specify maximum instance count. + +#### Minimum Instances + +Specify minimum instance count. + +#### Desired Instances + +Specify the desired instance count. This is the same as default instance count in VMSS: + + +> In case there is a problem reading the resource metrics and the current capacity is below the default capacity, then to ensure the availability of the resource, Autoscale will scale out to the default. + + +> If the current capacity is already higher than the default capacity, Autoscale will not scale in. + +#### Resize Strategy + +Select whether you want Harness to resize the new VMSS instances first, or after it has downsized the old instances. + +#### Auto Scaling Steady State Timeout + +Enter how long you want Harness to wait for this step to finish. If the step's execution exceeds this timeout, Harness fails the deployment. + +### Option: Use Variable Expressions in Settings + +You can use [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables), such as [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template), in certain step settings. + +When you deploy the Workflow, alone, in a Pipeline, or by a [Trigger](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2), you will be prompted to provide values for the variables. + +To see if a Workflow variable can be used in a setting, enter `$` or `${workflow.variables` and see the available expressions. + +### Step 3: Upgrade Virtual Machine Scale Set + +Use the Upgrade Virtual Machine Scale Set step to set the desired instances for the new VMSS. + +You can select a percentage or count. + +This is the same as the **Scale mode** settings in **Auto created scale condition** in VMSS: + +![](./static/create-an-azure-vmss-basic-deployment-17.png) + +#### Name + +Enter a name for the Workflow step. + +#### Desired Instances + +Set the number of instances that the VMSS will attempt to deploy and maintain. 
+ +* If you select **Count**, enter the actual number of instances. +* If you select **Percent**, enter a percentage of the available capacity. + +Your setting cannot exceed your **Maximum Instances** setting in the Workflow's preceding **Azure Virtual Machine Scale Set Setup** step. + +This setting corresponds to the **Maximum** setting in **Instance limits** in VMSS. + +You can use [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables), such as [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template), in this setting. + +### Step 4: Deploy + +1. When you have set up your Workflow, click **Deploy**. +2. In **Artifacts**, select the image you supplied in [Add Your Azure VM Image for Deployment](add-your-azure-vm-image-for-deployment.md). +3. Click **Submit**. + +#### Azure Virtual Machine Scale Set Setup + +In **Azure Virtual Machine Scale Set Setup**, you can see Harness look for a VMSS with the same name. + + +``` +Starting Azure Virtual Machine Scale Set Setup +Getting all Harness managed Virtual Machine Scale Sets +Found [0] Harness managed Virtual Machine Scale Sets +New revision of Virtual Machine Scale Set: [1] +New Virtual Machine Scale Set will be created with name: [doc__basic__1] +Using user defined input min: [1], max: [2] and desired: [1] for deployment +``` +If it finds a VMSS with the same name, it prepares to create a new VMSS and increment its suffix by 1. Here is the second deployment of doc\_\_basic. You can see a new name `doc__basic__2`. 
+
+
+```
+Starting Azure Virtual Machine Scale Set Setup
+Getting all Harness managed Virtual Machine Scale Sets
+Found [1] Harness managed Virtual Machine Scale Sets
+Getting the most recent active Virtual Machine Scale Set with non zero capacity
+Found most recent active Virtual Machine Scale Set: [doc__basic__1]
+New revision of Virtual Machine Scale Set: [2]
+New Virtual Machine Scale Set will be created with name: [doc__basic__2]
+Using user defined input min: [1], max: [2] and desired: [1] for deployment
+```
+If Harness finds an old VMSS with a non-zero capacity, it will downscale it to 0 as part of the Resize Strategy.
+
+Next, you can see Harness create the new VMSS using the base VMSS template you selected in the Infrastructure Definition and the image you selected in the Service:
+
+
+```
+Start getting gallery image references id [/subscriptions/1234567891011/resourceGroups/devVMSSResourceGroup/providers/Microsoft.Compute/galleries/devVMSSGallery/images/devVMLinuxDefinition/versions/1.0.0]
+Using gallery image id [/subscriptions/1234567891011/resourceGroups/devVMSSResourceGroup/providers/Microsoft.Compute/galleries/devVMSSGallery/images/devVMLinuxDefinition/versions/1.0.0], publisher [Harness], offer [Harness-Offer],sku [Harness-SKU], osState [Generalized]
+Getting base Virtual Machine Scale Set [devScaleSet]
+Creating new Virtual Machine Scale Set: [doc__basic__1]
+New Virtual Machine Scale Set: [doc__basic__1] created successfully
+```
+#### Upgrade Virtual Machine Scale Set
+
+In Upgrade Virtual Machine Scale Set, you can see the new VMSS upscaled to its desired instances:
+
+
+```
+Starting Azure VMSS Deploy
+Clearing scaling policy for scale set: [doc__basic__1]
+Set VMSS: [doc__basic__1] desired capacity to [1]
+
+Successfully set desired capacity
+
+Checking the status of VMSS: [doc__basic__1] VM instances
+
+Virtual machine instance: [doc__basic__1_1] provisioning state: [Creating]
+Virtual machine instance: [doc__basic__1_0] provisioning state: 
[Updating] + +Received success response from Azure for VMSS: [doc__basic__1] update capacity + +Virtual machine instance: [doc__basic__1_0] provisioning state: [Updating] +Virtual machine instance: [doc__basic__1_0] provisioning state: [Updating] +Virtual machine instance: [doc__basic__1_0] provisioning state: [Provisioning succeeded] + +All the VM instances of VMSS: [doc__basic__1] are provisioned successfully +Attaching scaling policy to VMSS: [doc__basic__1] as number of Virtual Machine instances has reached to desired capacity + +Total number of new instances deployed for Scale Set: [doc__basic__1] is [1] +Total number of instances of old Scale Set: [] is [0] + +No deployment error. Execution success +No scale set found with the name = [], hence skipping +No scale set found with the name = [], hence skipping +``` +Lastly, Harness downscales the previous version. In the following example, we have deployed `doc__basic__2` and so `doc__basic__1` is downscaled: + + +``` +Clearing scaling policy for scale set: [doc__basic__1] +Set VMSS: [doc__basic__1] desired capacity to [0] +Successfully set desired capacity + +Checking the status of VMSS: [doc__basic__1] VM instances +Virtual machine instance: [doc__basic__1_0] provisioning state: [Provisioning succeeded] +Virtual machine instance: [doc__basic__1_0] provisioning state: [Deleting] +Virtual machine instance: [doc__basic__1_0] provisioning state: [Deleting] +All the VM instances of VMSS: [doc__basic__1] are deleted successfully +Not attaching scaling policy to VMSS: [doc__basic__1] while down sizing it +``` +Congratulations. Your deployment was successful. + +For information on naming and versioning, see [Azure VMSS Versioning and Naming](azure-vmss-versioning-and-naming.md). + +### Option: Templatize the Workflow + +You can parameterize the Workflow's settings to turn it into a template. When it is deployed, values are provided for the parameters. 
+ +See [Templatize a Workflow](https://docs.harness.io/article/bov41f5b7o-templatize-a-workflow-new-template). + +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. + diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/create-an-azure-vmss-blue-green-deployment.md b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/create-an-azure-vmss-blue-green-deployment.md new file mode 100644 index 00000000000..d22d38ec0ee --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/create-an-azure-vmss-blue-green-deployment.md @@ -0,0 +1,291 @@ +--- +title: Create an Azure VMSS Blue/Green Deployment +description: Currently, this feature is behind the Feature Flag AZURE_VMSS. Contact Harness Support to enable the feature.. A Blue/Green virtual machine scale set (VMSS) deployment uses a load balancer with two b… +# sidebar_position: 2 +helpdocs_topic_id: 9op1u6dgks +helpdocs_category_id: 4o8zim2tfr +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the Feature Flag `AZURE_VMSS`. Contact [Harness Support](https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=support@harness.io) to enable the feature. A [Blue/Green](../../concepts-cd/deployment-types/deployment-concepts-and-strategies.md) virtual machine scale set (VMSS) deployment uses a load balancer with two backend pools: one production pool and one stage pool. You identify the pools during Workflow setup. + +When you deploy the Blue/Green Workflow, it sets up a new VMSS using the image you supplied in [Add Your Azure VM Image for Deployment](add-your-azure-vm-image-for-deployment.md) and the base VMSS template you selected in [Define Your Azure VMSS Target Infrastructure](define-your-azure-vmss-target-infrastructure.md). + +At first, the Workflow uses the stage pool. 
Once the deployment using the stage pool is successful, the Workflow detaches the stage pool and attaches (swaps) the production pool to the new VMSS. + +For other deployment strategies, see [Create an Azure VMSS Basic Deployment](create-an-azure-vmss-basic-deployment.md), and [Create an Azure VMSS Canary Deployment](create-an-azure-vmss-canary-deployment.md). + +### Before You Begin + +* [Azure Virtual Machine Scale Set Deployments Overview](azure-virtual-machine-scale-set-deployments.md) +* [Define Your Azure VMSS Target Infrastructure](define-your-azure-vmss-target-infrastructure.md) +* [Add Your Azure VM Image for Deployment](add-your-azure-vm-image-for-deployment.md) +* [Connect to Azure for VMSS Deployments](connect-to-your-azure-vmss.md) +* [Harness Delegate Overview](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) +* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) + +### Visual Summary + +Here is the Azure load balancer with two pools set up: + +![](./static/create-an-azure-vmss-blue-green-deployment-00.png) + +Here is a successful Blue/Green VMSS deployment, showing the swap from the stage to prod pool: + +![](./static/create-an-azure-vmss-blue-green-deployment-01.png) + +Here is the final, deployed VMSS with its prod pool: + +![](./static/create-an-azure-vmss-blue-green-deployment-02.png) + +### Supported Platforms and Technologies + +See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). + +### Review: Load Balancer Requirements + +A Harness Blue/Green VMSS deployment requires an Azure load balancer with two backend pools. + +![](./static/create-an-azure-vmss-blue-green-deployment-03.png) + +The load balancer distributes inbound flows that arrive at the load balancer's front end to the stage and production backend pool instances. + +The backend pool instances are instances in the virtual machine scale set Harness creates in the deployment. 
+ +See [What is Azure Load Balancer?](https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview) from Azure. + +### Step 1: Create the Blue/Green Workflow + +In your Harness Application, click **Workflows**, and then click **Add Workflow**. + +Enter the new Workflow's settings. + +#### Name + +Enter a name for the Workflow. You will use this name to locate the Workflow in Deployments and to add it to [Pipelines](https://docs.harness.io/article/zc1u96u6uj-pipeline-configuration). + +#### Workflow Type + +Select **Blue/Green**. See [Deployment Concepts and Strategies](../../concepts-cd/deployment-types/deployment-concepts-and-strategies.md). + +For other deployment strategies, see [Create an Azure VMSS Basic Deployment](create-an-azure-vmss-basic-deployment.md), and [Create an Azure VMSS Canary Deployment](create-an-azure-vmss-canary-deployment.md). + +#### Environment + +Select the Environment you created in [Define Your Azure VMSS Target Infrastructure](define-your-azure-vmss-target-infrastructure.md). + +#### Service + +Select the Service you created in [Add Your Azure VM Image for Deployment](add-your-azure-vm-image-for-deployment.md). + +#### Infrastructure Definition + +Select the Infrastructure Definition you created in [Define Your Azure VMSS Target Infrastructure](define-your-azure-vmss-target-infrastructure.md). + +#### Submit + +When you are done, click **Submit**. + +The steps for the Blue/Green Workflow VMSS deployment are generated automatically. + +Next, we'll take a look at each step's settings and how you can change them. + +### Step 2: Azure Virtual Machine Scale Set Setup + +The Azure Virtual Machine Scale Set Setup step is where you specify the default settings for the new VMSS, the load balancer, and the stage and production backend pools. + +You will define the new VMSS by specifying the min, max, and desired number of instances for the new VMSS. 
+
+These correspond to the **Instance limits** settings in **Auto created scale condition** in VMSS:
+
+![](./static/create-an-azure-vmss-blue-green-deployment-04.png)
+
+Later, in the **Upgrade Virtual Machine Scale Set** step, you will upgrade the number of instances by a percentage or count of the desired instances.
+
+#### Name
+
+Enter a name for the Workflow step.
+
+#### Virtual Machine Scale Set Name
+
+Enter a name for the new VMSS. This is the name that will appear in the **Virtual machine scale sets** blade in Azure.
+
+Hyphens in the names are converted to double underscores in Azure. For example, if you enter `doc-basic` the name in Azure will be `doc__basic`. The first time you deploy, the name of the new VMSS is given the suffix `__1`. Each time you deploy a new VMSS using the same Harness Infrastructure Definition, the suffix is incremented, such as `__2`.
+
+You can use the default name, which is a concatenation of the names of your Application, Service, and Environment: `${app.name}_${service.name}_${env.name}`.
+
+For information on naming and versioning, see [Azure VMSS Versioning and Naming](azure-vmss-versioning-and-naming.md).
+
+#### Instances
+
+Select **Fixed** or **Same as already running Default Instances**.
+
+For **Same as already running Default instances**, Harness determines if there is a previous VMSS deployment for the same Infrastructure Definition. If one is present, Harness takes the number of instances from there. If there is no previous deployment, Harness uses the default of 6.
+
+If there is more than one scaling policy attached to the already running, previously deployed VMSS, Harness uses the policy named **Auto created scale condition** or **Profile1**.
+
+**Fixed** allows you to set the min, max, and desired number of instances for the new VMSS.
+
+#### Maximum Instances
+
+Specify maximum instance count.
+
+#### Minimum Instances
+
+Specify minimum instance count. 
+
+#### Desired Instances
+
+Specify the desired instance count. This is the same as the default instance count in VMSS:
+
+
+> In case there is a problem reading the resource metrics and the current capacity is below the default capacity, then to ensure the availability of the resource, Autoscale will scale out to the default.
+
+
+> If the current capacity is already higher than the default capacity, Autoscale will not scale in.
+
+#### Resize Strategy
+
+Select whether you want Harness to resize the new VMSS instances first, or after it has downsized the old instances.
+
+#### Auto Scaling Steady State Timeout
+
+Enter how long you want Harness to wait for this step to finish. If the step's execution exceeds this timeout, Harness fails the deployment.
+
+#### Azure Load Balancer
+
+Select the Azure Load Balancer to use for your new VMSS. Ensure that this Azure Load Balancer has two backend pools.
+
+Harness queries Azure for the list of Azure Load Balancers using the Harness Azure Cloud Provider you selected in the Infrastructure Definition for this Workflow.
+
+#### Production Backend Pool
+
+Select the backend pool to use for live, production traffic.
+
+#### Stage Backend Pool
+
+Select the backend pool to use for stage traffic.
+
+### Option: Use Variable Expressions in Settings
+
+You can use [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables), such as [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template), in certain step settings.
+
+When you deploy the Workflow, alone, in a Pipeline, or by a [Trigger](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2), you will be prompted to provide values for the variables.
+
+To see if a Workflow variable can be used in a setting, enter `$` or `${workflow.variables` and see the available expressions. 
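As a side note on the Setup step's instance fields above, the Maximum, Minimum, and Desired Instances values follow the usual ordering (Minimum is no greater than Desired, which is no greater than Maximum). For illustration only, that constraint can be expressed as a minimal check; the helper is hypothetical, and Harness performs its own validation:

```
def validate_instance_limits(minimum, desired, maximum):
    """Check that the Setup step's instance fields are ordered:
    Minimum <= Desired <= Maximum.

    Hypothetical check for illustration; not Harness code.
    """
    if not 0 <= minimum <= desired <= maximum:
        raise ValueError(
            f"expected minimum <= desired <= maximum, "
            f"got {minimum}, {desired}, {maximum}"
        )
    return True
```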
+
+### Step 3: Upgrade Virtual Machine Scale Set
+
+Use the Upgrade Virtual Machine Scale Set step to set the desired instances for the new VMSS.
+
+The number of instances will be used for both stage and production traffic.
+
+You can select a percentage or count.
+
+This is the same as the **Scale mode** settings in **Auto created scale condition** in VMSS:
+
+![](./static/create-an-azure-vmss-blue-green-deployment-05.png)
+
+#### Name
+
+Enter a name for the Workflow step.
+
+#### Desired Instances
+
+Set the number of instances that the VMSS will attempt to deploy and maintain.
+
+* If you select **Count**, enter the actual number of instances.
+* If you select **Percent**, enter a percentage of the available capacity.
+
+Your setting cannot exceed your **Maximum Instances** setting in the Workflow's preceding **Azure Virtual Machine Scale Set Setup** step.
+
+This setting corresponds to the **Maximum** setting in **Instance limits** in VMSS.
+
+You can use [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables), such as [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template), in this setting.
+
+### Step 4: Swap Virtual Machine Scale Set Route
+
+The Swap Virtual Machine Scale Set Route step detaches the stage backend pool from the new VMSS, and then attaches the production backend pool to it.
+
+Select **Downsize Old VMSS** if you want Harness to downscale the previously deployed VMSS to 0.
+
+If you still want to maintain the previous VMSS and its instances, do not select this option.
+
+### Step 5: Deploy
+
+Now that the Blue/Green Workflow is complete, you can deploy it to Azure.
+
+1. When you have set up your Workflow, click **Deploy**.
+2. In **Artifacts**, select the image you supplied in [Add Your Azure VM Image for Deployment](add-your-azure-vm-image-for-deployment.md).
+3. Click **Submit**. 
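For illustration only, the pool changes performed by the Swap Virtual Machine Scale Set Route step can be modeled as an ordered sequence. The helper and the action strings are hypothetical, not Harness output:

```
def swap_vmss_route(new_vmss, old_vmss, downsize_old=True):
    """Ordered pool changes for a Blue/Green swap: detach the stage
    pool from the new VMSS, attach the production pool to it, detach
    the production pool from the old VMSS, and optionally downsize
    the old VMSS to 0 instances.
    """
    actions = [
        f"detach {new_vmss} from stage-backend-pool",
        f"attach {new_vmss} to prod-backend-pool",
        f"detach {old_vmss} from prod-backend-pool",
    ]
    if downsize_old:
        actions.append(f"downsize {old_vmss} to 0 instances")
    return actions
```

When **Downsize Old VMSS** is not selected, the final downsize action is skipped and the old VMSS keeps its instances.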
+
+#### Azure Virtual Machine Scale Set Setup
+
+In **Azure Virtual Machine Scale Set Setup**, you can see Harness set up the new VMSS.
+
+![](./static/create-an-azure-vmss-blue-green-deployment-06.png)
+
+#### Upgrade Virtual Machine Scale Set
+
+In **Upgrade Virtual Machine Scale Set**, you can see the new VMSS upscaled to its desired instances.
+
+![](./static/create-an-azure-vmss-blue-green-deployment-07.png)
+
+#### Swap Virtual Machine Scale Set Route
+
+In **Swap Virtual Machine Scale Set Route**, you can see Harness detach the stage pool from the new VMSS and attach the production pool to it.
+
+![](./static/create-an-azure-vmss-blue-green-deployment-08.png)
+
+You can see the swap in the logs:
+
+
+```
+...
+Sending request to attach virtual machine scale set:[doc__blue__green__2] to prod backend pool:[prod-backend-pool]
+Updating virtual machine instance: [doc__blue__green__2_0] for the scale set: [doc__blue__green__2]
+All virtual machine instances updated for the scale set: [doc__blue__green__2]
+Tagging VMSS: [doc__blue__green__2] as [BLUE] deployment
+Tagged successfully VMSS: [doc__blue__green__2]
+Sending request to detach virtual machine scale set:[doc__blue__green__1] from prod backend pool:[prod-backend-pool]
+Updating virtual machine instance: [doc__blue__green__1_1] for the scale set: [doc__blue__green__1]
+All virtual machine instances updated for the scale set: [doc__blue__green__1]
+...
+```
+Congratulations. Your deployment was successful.
+
+### Review: Blue/Green VMSS Tags
+
+Azure tags are used for versioning by Harness, as described in [Azure VMSS Versioning and Naming](azure-vmss-versioning-and-naming.md).
+
+For Blue/Green deployments, an additional tag named `BG_VERSION` is added.
+
+![](./static/create-an-azure-vmss-blue-green-deployment-09.png)
+
+The value for the tag is either BLUE or GREEN. The value alternates with each deployment. 
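For illustration only, the alternation can be sketched with a hypothetical helper; BLUE is assumed here for a first deployment:

```
def next_bg_version(previous_value=None):
    """BG_VERSION value for the next Blue/Green deployment.

    The value alternates between BLUE and GREEN; BLUE is assumed
    when there is no previous deployment.
    """
    return "GREEN" if previous_value == "BLUE" else "BLUE"
```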
+ +In the following example, the new deployment of `doc__blue__green` (`doc__blue__green__2`) is tagged with `BLUE` and the previous version `doc__blue__green__1` is tagged with `GREEN`. + + +``` +Starting Swap Backend pool step during blue green deployment +Sending request to detach virtual machine scale set:[doc__blue__green__2] from stage backend pool:[stage-backend-pool] +Updating virtual machine instance: [doc__blue__green__2_0] for the scale set: [doc__blue__green__2] +All virtual machine instances updated for the scale set: [doc__blue__green__2] +Sending request to attach virtual machine scale set:[doc__blue__green__2] to prod backend pool:[prod-backend-pool] +Updating virtual machine instance: [doc__blue__green__2_0] for the scale set: [doc__blue__green__2] +All virtual machine instances updated for the scale set: [doc__blue__green__2] +Tagging VMSS: [doc__blue__green__2] as [BLUE] deployment +Tagged successfully VMSS: [doc__blue__green__2] +Sending request to detach virtual machine scale set:[doc__blue__green__1] from prod backend pool:[prod-backend-pool] +Updating virtual machine instance: [doc__blue__green__1_1] for the scale set: [doc__blue__green__1] +All virtual machine instances updated for the scale set: [doc__blue__green__1] +Tagging VMSS: [doc__blue__green__1] as [GREEN] deployment +Tagged successfully VMSS: [doc__blue__green__1] +Swap backend pool completed successfully +``` +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. 
+ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/create-an-azure-vmss-canary-deployment.md b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/create-an-azure-vmss-canary-deployment.md new file mode 100644 index 00000000000..0abd50338dc --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/create-an-azure-vmss-canary-deployment.md @@ -0,0 +1,239 @@ +--- +title: Create an Azure VMSS Canary Deployment +description: Currently, this feature is behind the Feature Flag AZURE_VMSS. Contact Harness Support to enable the feature.. A Canary virtual machine scale set (VMSS) deployment sets up a new VMSS using the image… +# sidebar_position: 2 +helpdocs_topic_id: ebq6gwgs5r +helpdocs_category_id: 4o8zim2tfr +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the Feature Flag `AZURE_VMSS`. Contact [Harness Support](https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=support@harness.io) to enable the feature. A Canary virtual machine scale set (VMSS) deployment sets up a new VMSS using the image you supplied in [Add Your Azure VM Image for Deployment](add-your-azure-vm-image-for-deployment.md) and the base VMSS template you selected in [Define Your Azure VMSS Target Infrastructure](define-your-azure-vmss-target-infrastructure.md). + +The Canary Workflow deploys in two phases. The first phase creates the new VMSS and deploys a number/percentage of the desired instances. Once deployment is successful, the second phase deploys 100% of the desired instances. + +For other deployment strategies, see [Create an Azure VMSS Basic Deployment](create-an-azure-vmss-basic-deployment.md), and [Create an Azure VMSS Blue/Green Deployment](create-an-azure-vmss-blue-green-deployment.md). 
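For illustration only, the two-phase rollout can be sketched as follows. The helper name and the 25% phase-1 share are hypothetical examples, not Harness defaults:

```
def canary_phase_targets(desired_instances, phase1_percent=25):
    """Instance targets for the two Canary phases.

    Phase 1 brings up only a share of the desired instances; phase 2
    scales the new VMSS to 100% of them. The 25% phase-1 share is an
    arbitrary example, not a Harness default.
    """
    phase1 = max(1, round(desired_instances * phase1_percent / 100))
    return [phase1, desired_instances]
```

For example, with 4 desired instances, phase 1 would target 1 instance and phase 2 would target all 4.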
+ +### Before You Begin + +* [Azure Virtual Machine Scale Set Deployments Overview](azure-virtual-machine-scale-set-deployments.md) +* [Define Your Azure VMSS Target Infrastructure](define-your-azure-vmss-target-infrastructure.md) +* [Add Your Azure VM Image for Deployment](add-your-azure-vm-image-for-deployment.md) +* [Connect to Azure for VMSS Deployments](connect-to-your-azure-vmss.md) +* [Harness Delegate Overview](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) +* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) + +### Visual Summary + +Here is a successful Canary VMSS deployment, showing both phases: + +![](./static/create-an-azure-vmss-canary-deployment-10.png) + +### Supported Platforms and Technologies + +See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). + +### Step 1: Create the Canary Workflow + +In your Harness Application, click **Workflows**, and then click **Add Workflow**. + +Enter the new Workflow's settings. + +#### Name + +Enter a name for the Workflow. You will use this name to locate the Workflow in Deployments and to add it to [Pipelines](https://docs.harness.io/article/zc1u96u6uj-pipeline-configuration). + +#### Workflow Type + +Select **Canary**. See [Deployment Concepts and Strategies](../../concepts-cd/deployment-types/deployment-concepts-and-strategies.md). + +For other deployment strategies, see [Create an Azure VMSS Basic Deployment](create-an-azure-vmss-basic-deployment.md), and [Create an Azure VMSS Blue/Green Deployment](create-an-azure-vmss-blue-green-deployment.md). + +#### Environment + +Select the Environment you created in [Define Your Azure VMSS Target Infrastructure](define-your-azure-vmss-target-infrastructure.md). + +Later, you will select the Harness Service and target Infrastructure Definition in each phase of the Workflow. + +#### Submit + +When you are done, click **Submit**. 
+ +When you add phases to the Canary Workflow, the deployment steps in the phases are generated automatically. + +Next, we'll take a look at each step's settings and how you can change them. + +### Step 2: Create Phase 1 + +The first phase of the Workflow will set up the new VMSS and scale it to a count/percentage of your desired instances. + +1. In **Deployment Phases**, click **Add Phase**. +2. In Workflow Phase, in **Service**, select the Service you created in [Add Your Azure VM Image for Deployment](add-your-azure-vm-image-for-deployment.md). +3. In **Infrastructure Definition**, select the Infrastructure Definition you created in [Define Your Azure VMSS Target Infrastructure](define-your-azure-vmss-target-infrastructure.md). +4. Click **Submit**. + +The new phase's VMSS steps are added automatically. + +### Step 3: Azure Virtual Machine Scale Set Setup + +The Azure Virtual Machine Scale Set Setup step is where you specify the default settings for the new VMSS. + +In particular, you specify the min, max, and desired number of instances for the new VMSS. + +These correspond to the **Instance limits** settings in **Auto created scale condition** in VMSS: + +![](./static/create-an-azure-vmss-canary-deployment-11.png) + +Later, in the **Upgrade Virtual Machine Scale Set** step, you will upgrade the number of instances by a percentage or count of the desired instances. + +#### Name + +Enter a name for the Workflow step. + +#### Virtual Machine Scale Set Name + +Enter a name for the new VMSS. This is the name that will appear in the **Virtual machine scale sets** blade in Azure. + +Hyphens in the names are converted to double underscores in Azure. For example, if you enter `doc-basic` the name in Azure will be `doc__basic`. The first time you deploy, the name of the new VMSS is given the suffix `__1`. Each time you deploy a new VMSS using the same Harness Infrastructure Definition, the suffix is incremented, such as `__2`. 
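The naming rules above can be sketched in a couple of lines (the helper below is illustrative only, not a Harness API; Harness applies this convention itself):

```python
def azure_vmss_name(harness_name: str, revision: int) -> str:
    """Mirror the Harness naming convention: hyphens become double
    underscores in Azure, and a __N suffix tracks each deployment."""
    return harness_name.replace("-", "__") + f"__{revision}"

# "doc-basic" deployed twice against the same Infrastructure Definition:
print(azure_vmss_name("doc-basic", 1))  # doc__basic__1
print(azure_vmss_name("doc-basic", 2))  # doc__basic__2
```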
+ +You can use the default name, which is a concatenation of the names of your Application, Service, and Environment: `${app.name}_${service.name}_${env.name}`. + +For information on naming and versioning, see [Azure VMSS Versioning and Naming](azure-vmss-versioning-and-naming.md). + +#### Instances + +Select **Fixed** or **Same as already running Default Instances**. + +For **Same as already running Default instances**, Harness determines if there is a previous VMSS deployment for the same Infrastructure Definition. If one is present, Harness takes the number of instances from there. If there is no previous deployment, Harness uses the default of 6. + +If there is more than one scaling policy attached to the already running, previously deployed VMSS, Harness uses the policy named **Auto created scale condition** or **Profile1**. + +**Fixed** allows you to set the min, max, and desired number of instances for the new VMSS. + +#### Maximum Instances + +Specify maximum instance count. + +#### Minimum Instances + +Specify minimum instance count. + +#### Desired Instances + +Specify the desired instance count. This is the same as default instance count in VMSS: + + +> In case there is a problem reading the resource metrics and the current capacity is below the default capacity, then to ensure the availability of the resource, Autoscale will scale out to the default. + + +> If the current capacity is already higher than the default capacity, Autoscale will not scale in. + +#### Resize Strategy + +Select whether you want Harness to resize the new VMSS instances first, or after it has downsized the old instances. + +#### Auto Scaling Steady State Timeout + +Enter how long you want Harness to wait for this step to finish. If the step's execution exceeds this timeout, Harness fails the deployment. 
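The three instance settings above must be mutually consistent, just like the **Instance limits** of an Azure autoscale profile. A minimal sketch of that relationship (hypothetical helper, not part of Harness):

```python
def validate_instance_limits(minimum: int, desired: int, maximum: int) -> None:
    """Sanity-check the Setup step's instance limits: the desired
    (default) count must sit between the minimum and maximum."""
    if not (0 <= minimum <= desired <= maximum):
        raise ValueError(
            f"expected min <= desired <= max, got {minimum}, {desired}, {maximum}"
        )

validate_instance_limits(minimum=1, desired=2, maximum=4)  # a valid Fixed setting
```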
+ +### Option: Use Variable Expressions in Settings + +You can use [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables), such as [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template), in certain step settings. + +When you deploy the Workflow, alone, in a Pipeline, or using a [Trigger](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2), you will be prompted to provide values for the variables. + +To see if a Workflow variable can be used in a setting, enter `$` or `${workflow.variables` and see the available expressions. + +### Step 4: Upgrade Virtual Machine Scale Set + +Use the Upgrade Virtual Machine Scale Set step to set the desired instances for the new VMSS in **phase 1**. + +For a Canary deployment phase 1, this is a subset of the available capacity. + +You can select a percentage or count. + +This is the same as the **Scale mode** settings in **Auto created scale condition** in VMSS: + +![](./static/create-an-azure-vmss-canary-deployment-12.png) + +#### Name + +Enter a name for the Workflow step. + +#### Desired Instances + +Set the number of instances that the VMSS will attempt to deploy and maintain for **phase 1** of the Canary deployment. Typically, this is half or fewer of the available capacity. + +* If you select **Count**, enter the actual number of instances. +* If you select **Percent**, enter a percentage of the available capacity. + +Your setting cannot exceed your **Maximum Instances** setting in the Workflow's preceding **Azure Virtual Machine Scale Set Setup** step. + +This setting corresponds to the **Maximum** setting in **Instance limits** in VMSS. + +You can use [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables), such as [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template), in this setting. 
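As a rough sketch of the count/percent rule above (hypothetical helper; the exact rounding Harness applies is an assumption here), the phase-1 instance count resolves like this:

```python
import math

def phase1_instances(maximum: int, value: int, unit: str = "percent") -> int:
    """Resolve phase-1 Desired Instances from a Percent or Count setting,
    never exceeding the Setup step's Maximum Instances."""
    count = math.ceil(maximum * value / 100) if unit == "percent" else value
    return min(count, maximum)

print(phase1_instances(maximum=4, value=50))               # 2 -> half the capacity
print(phase1_instances(maximum=4, value=6, unit="count"))  # 4 -> capped at Maximum
```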
+ +### Step 5: Create Phase 2 + +Phase 2 of the Canary Workflow runs after phase 1 is successful. Phase 2 will upgrade the VMSS to the available capacity. + +1. In **Deployment Phases**, under **Phase 1**, click **Add Phase**. +2. In Workflow Phase, in **Service**, select the Service you created in [Add Your Azure VM Image for Deployment](add-your-azure-vm-image-for-deployment.md). +3. In **Infrastructure Definition**, select the Infrastructure Definition you created in [Define Your Azure VMSS Target Infrastructure](define-your-azure-vmss-target-infrastructure.md). +4. Click **Submit**. + +The Upgrade Virtual Machine Scale Set step is added to the phase automatically. + +### Step 6: Upgrade Virtual Machine Scale Set + +The settings for the Upgrade Virtual Machine Scale Set step are the same as phase 1. + +In phase 2, you increase the **Desired Instances** percentage/count to the full capacity: 100% or total count. + +That's all you have to do. + +### Step 7: Deploy + +Now that the Canary Workflow is complete, you can deploy it to Azure. + +1. When you have set up your Workflow, click **Deploy**. +2. In **Artifacts**, select the image you supplied in [Add Your Azure VM Image for Deployment](add-your-azure-vm-image-for-deployment.md). +3. Click **Submit**. + +#### Phase 1: Azure Virtual Machine Scale Set Setup + +In **Azure Virtual Machine Scale Set Setup**, you can see Harness set up the new VMSS. + +#### Phase 1: Upgrade Virtual Machine Scale Set + +In **Upgrade Virtual Machine Scale Set**, you can see the new VMSS upscaled to its desired instances. + +#### Phase 2: Upgrade Virtual Machine Scale Set + +In phase 2's **Upgrade Virtual Machine Scale Set** step, you can see the new VMSS upscaled to its full capacity. 
+ + +``` +Checking the status of VMSS: [doc__canary__3] VM instances +Virtual machine instance: [doc__canary__3_1] provisioning state: [Provisioning succeeded] +Virtual machine instance: [doc__canary__3_1] provisioning state: [Provisioning succeeded] +Virtual machine instance: [doc__canary__3_2] provisioning state: [Creating] +Virtual machine instance: [doc__canary__3_3] provisioning state: [Creating] +... +All the VM instances of VMSS: [doc__canary__3] are provisioned successfully +Attaching scaling policy to VMSS: [doc__canary__3] as number of Virtual Machine instances has reached to desired capacity +``` +Congratulations. Your deployment was successful. + +For information on naming and versioning, see [Azure VMSS Versioning and Naming](azure-vmss-versioning-and-naming.md). + +### Option: Templatize the Workflow + +You can parameterize the Workflow's settings to turn it into a template. When it is deployed, values are provided for the parameters. + +See [Templatize a Workflow](https://docs.harness.io/article/bov41f5b7o-templatize-a-workflow-new-template). + +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. + diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/define-your-azure-vmss-target-infrastructure.md b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/define-your-azure-vmss-target-infrastructure.md new file mode 100644 index 00000000000..43a8c3f4774 --- /dev/null +++ b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/define-your-azure-vmss-target-infrastructure.md @@ -0,0 +1,118 @@ +--- +title: Define Your Azure VMSS Target Infrastructure +description: Currently, this feature is behind the Feature Flag AZURE_VMSS. Contact Harness Support to enable the feature. 
The target infrastructure for an Azure virtual machine scale set (VMSS) deployment is a… +# sidebar_position: 2 +helpdocs_topic_id: 2976rmk4kd +helpdocs_category_id: 4o8zim2tfr +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the Feature Flag `AZURE_VMSS`. Contact [Harness Support](https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=support@harness.io) to enable the feature. The target infrastructure for an Azure virtual machine scale set (VMSS) deployment is a base VMSS template you select in the Harness Infrastructure Definition. + +During deployment, this template is used along with the image definition you selected in [Add Your Azure VM Image for Deployment](add-your-azure-vm-image-for-deployment.md) to create a new VMSS. + +You select the VMSS to use as a template and provide the username and password/key to connect to the new VMs. + +Once you have set up the target infrastructure, you select it when you set up your Harness Workflow, described in [Create an Azure VMSS Basic Deployment](create-an-azure-vmss-basic-deployment.md), [Create an Azure VMSS Canary Deployment](create-an-azure-vmss-canary-deployment.md), and [Create an Azure VMSS Blue/Green Deployment](create-an-azure-vmss-blue-green-deployment.md). 
+ +In this topic: + +* [Before You Begin](#before_you_begin) +* [Supported Platforms and Technologies](#supported_platforms_and_technologies) +* [Step 1: Create an Environment](#step_1_create_an_environment) +* [Step 2: Create an Infrastructure Definition](#step_2_create_an_infrastructure_definition) +* [Option 1: Scope to Specific Services](#option_1_scope_to_specific_services) +* [Next Steps](#next_steps) +* [Configure As Code](#configure_as_code) + +### Before You Begin + +* [Azure Virtual Machine Scale Set Deployments Overview](azure-virtual-machine-scale-set-deployments.md) +* [Add Your Azure VM Image for Deployment](add-your-azure-vm-image-for-deployment.md) +* [Connect to Azure for VMSS Deployments](connect-to-your-azure-vmss.md) +* [Harness Delegate Overview](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) +* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) + +### Supported Platforms and Technologies + +See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). + +### Step 1: Create an Environment + +Environments represent one or more of your deployment infrastructures, such as Dev, QA, Stage, Production, etc. Use Environments to organize your target cluster Infrastructure Definitions. + +1. In your Harness Application, click **Environments**. The **Environments** page appears. +2. Click **Add Environment**. The **Environment** settings appear. +3. In **Name**, enter a name that describes this group of target clusters, such as QA, Stage, Prod, etc. +4. In **Environment Type**, select **Non-Production** or **Production**. +5. Click **SUBMIT**. The new **Environment** page appears. +6. Click **Add Infrastructure Definition**. The following sections provide information on setting up Infrastructure Definitions for different target clusters. + +### Step 2: Create an Infrastructure Definition + +1. In your Environment, click **Add Infrastructure Definition**. +2. Enter the following settings. 
+ +#### Name + +Enter a name for the Infrastructure Definition. You will select this name when you set up the Infrastructure Definition in your Workflow. + +#### Cloud Provider Type + +Select **Microsoft Azure**. + +#### Deployment Type + +Select **Azure Virtual Machine Scale Set**. + +#### Cloud Provider + +Select the Azure Cloud Provider you set up in [Connect to Your Azure VMSS](connect-to-your-azure-vmss.md). The Cloud Provider is used to pull the Azure information you need to define the VMSS Harness will create. + +#### Subscription + +Select the subscription to use for the new VMSS. + +#### Resource Group + +Select the resource group to use for the new VMSS. + +#### Virtual Machine Scale Sets + +Select the base VMSS to use as a template when the Harness Workflow creates a new VMSS using the image definition you selected in [Add Your Azure VM Image for Deployment](add-your-azure-vm-image-for-deployment.md). + +#### Username + +This is the username for connecting to the new VMs Harness will create. For example, connections over SSH or RDP. The **Username** setting is populated using the username taken from the base VMSS you selected. You can use the same username or replace it. + +The username may contain letters, numbers, hyphens, and underscores. It may not start with a hyphen or number. Usernames must not include [reserved words](https://docs.microsoft.com/en-us/rest/api/compute/virtualmachines/createorupdate#osprofile). The value must be between 1 and 64 characters long (Linux) or 20 characters (Windows). + +#### Authentication Type + +Select **Password** or **SSH Public Key**. + +For **Password**, enter the password for the new VMs Harness will create. + +For **SSH Public Key**, select an SSH key that you have added to Harness. See [Add SSH Keys](https://docs.harness.io/article/gsp4s7abgc-add-ssh-keys). 
+ +Creating the SSH key in Azure is covered in [Quick steps: Create and use an SSH public-private key pair for Linux VMs in Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/mac-create-ssh-keys) from Azure. + +### Option 1: Scope to Specific Services + +The **Scope to specific Services** setting in the Infrastructure Definition enables you to scope this Infrastructure Definition to specific Harness Services. + +See [Add an Infrastructure Definition](https://docs.harness.io/article/v3l3wqovbe-infrastructure-definitions). + +### Next Steps + +* [Create an Azure VMSS Basic Deployment](create-an-azure-vmss-basic-deployment.md) +* [Create an Azure VMSS Canary Deployment](create-an-azure-vmss-canary-deployment.md) +* [Create an Azure VMSS Blue/Green Deployment](create-an-azure-vmss-blue-green-deployment.md) + +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. 
+ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/add-your-azure-vm-image-for-deployment-18.png b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/add-your-azure-vm-image-for-deployment-18.png new file mode 100644 index 00000000000..81419a0ed3f Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/add-your-azure-vm-image-for-deployment-18.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/azure-vmss-versioning-and-naming-13.png b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/azure-vmss-versioning-and-naming-13.png new file mode 100644 index 00000000000..1f46eb5195d Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/azure-vmss-versioning-and-naming-13.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/azure-vmss-versioning-and-naming-14.png b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/azure-vmss-versioning-and-naming-14.png new file mode 100644 index 00000000000..a9bcbb8d90a Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/azure-vmss-versioning-and-naming-14.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/azure-vmss-versioning-and-naming-15.png b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/azure-vmss-versioning-and-naming-15.png new file mode 100644 index 00000000000..96c58a24396 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/azure-vmss-versioning-and-naming-15.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-basic-deployment-16.png 
b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-basic-deployment-16.png new file mode 100644 index 00000000000..e6a48535514 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-basic-deployment-16.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-basic-deployment-17.png b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-basic-deployment-17.png new file mode 100644 index 00000000000..e6a48535514 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-basic-deployment-17.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-00.png b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-00.png new file mode 100644 index 00000000000..1c470c0cb70 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-00.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-01.png b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-01.png new file mode 100644 index 00000000000..06860c82130 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-01.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-02.png b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-02.png new file mode 100644 
index 00000000000..e6de464ba94 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-02.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-03.png b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-03.png new file mode 100644 index 00000000000..1c470c0cb70 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-03.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-04.png b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-04.png new file mode 100644 index 00000000000..e6a48535514 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-04.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-05.png b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-05.png new file mode 100644 index 00000000000..e6a48535514 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-05.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-06.png b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-06.png new file mode 100644 index 00000000000..c0bc061ef30 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-06.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-07.png b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-07.png new file mode 100644 index 00000000000..fe0a9433a34 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-07.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-08.png b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-08.png new file mode 100644 index 00000000000..be9d732d1cd Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-08.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-09.png b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-09.png new file mode 100644 index 00000000000..7afaad70807 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-blue-green-deployment-09.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-canary-deployment-10.png b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-canary-deployment-10.png new file mode 100644 index 00000000000..8d562871cf0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-canary-deployment-10.png differ diff 
--git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-canary-deployment-11.png b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-canary-deployment-11.png new file mode 100644 index 00000000000..e6a48535514 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-canary-deployment-11.png differ diff --git a/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-canary-deployment-12.png b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-canary-deployment-12.png new file mode 100644 index 00000000000..e6a48535514 Binary files /dev/null and b/docs/first-gen/continuous-delivery/azure-deployments/vmss-howtos/static/create-an-azure-vmss-canary-deployment-12.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/1-harness-accountsetup.md b/docs/first-gen/continuous-delivery/build-deploy/1-harness-accountsetup.md new file mode 100644 index 00000000000..a415e676b3b --- /dev/null +++ b/docs/first-gen/continuous-delivery/build-deploy/1-harness-accountsetup.md @@ -0,0 +1,57 @@ +--- +title: Connect to Your Artifact Build and Deploy Pipeline Platforms +description: Set up the Harness Delegate, Artifact Server, and Cloud Provider for the Pipeline. 
+sidebar_position: 20 +helpdocs_topic_id: xiys9djs0h +helpdocs_category_id: j1q21aler1 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You need to set up the following common Harness account-level components before creating the Harness Application containing your Artifact Build and Deploy Workflows and Pipelines: + +* [Harness Delegate](1-harness-accountsetup.md#step-1-install-a-harness-delegate) +* [Artifact Server](1-harness-accountsetup.md#step-2-create-an-artifact-server) +* [Cloud Provider](1-harness-accountsetup.md#step-3-create-a-cloud-provider) + +In this topic: + +* [Before You Begin](1-harness-accountsetup.md#before-you-begin) +* [Step 1: Install a Harness Delegate](1-harness-accountsetup.md#step-1-install-a-harness-delegate) +* [Step 2: Create an Artifact Server](1-harness-accountsetup.md#step-2-create-an-artifact-server) +* [Step 3: Create a Cloud Provider](1-harness-accountsetup.md#step-3-create-a-cloud-provider) +* [Next Steps](1-harness-accountsetup.md#next-steps) + +### Before You Begin + +* [CI/CD with the Build Workflow](../concepts-cd/deployment-types/ci-cd-with-the-build-workflow.md) +* [Delegate Installation Overview](https://docs.harness.io/article/igftn7rrtg-delegate-installation-overview) + +### Step 1: Install a Harness Delegate + +The Harness Delegate is a service you run in your local network or VPC to connect all of your artifact, infrastructure, collaboration, verification, and other providers with the Harness Manager. + +As explained in [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts), when you set up Harness for the first time, you install a Harness Delegate in your target infrastructure (for example, Kubernetes cluster, ECS cluster, EC2 subnet, Pivotal Cloud Foundry space, etc.). Once the Delegate is installed, you can set up the resources and model your release process. + +The Delegate performs all deployment operations. 
To do so, it needs network connectivity to your artifact server, such as Jenkins, and your cloud deployment environment, such as a Kubernetes cluster or AWS. Also, the roles associated with the Delegate must have the policies needed to perform its operations. + +For detailed information on installing Harness Delegates, see [Manage Harness Delegates](https://docs.harness.io/category/gyd73rp7np-manage-delegates). + +### Step 2: Create an Artifact Server + +After installing the Delegate, create an Artifact Server in Harness to connect to Jenkins or any other artifact server that you use. Provide the necessary credentials when you set up the Artifact Server. + +For detailed information on creating an Artifact Server, see [Add Artifact Servers](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server). + +### Step 3: Create a Cloud Provider + +After installing the Delegate, you also need to create a Cloud Provider in Harness to connect to your target deployment environment. For some Cloud Providers, such as Kubernetes Cluster and AWS, the Cloud Providers can assume the credentials assigned to the Delegate. + +For more information on creating a Cloud Provider, see [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers). + +Now that your Harness account-level components are set up, you can build your Harness Application containing your Artifact Build and Deploy Workflows and Pipelines. 
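Because the Delegate needs network connectivity to your artifact server and deployment environment (Step 1), it can help to verify reachability from the Delegate host before troubleshooting Harness itself. A minimal sketch (the host names below are placeholders, not real endpoints):

```python
import socket

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection from this host (e.g. the Delegate
    host) to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example checks from the Delegate host (placeholder endpoints):
# reachable("jenkins.example.com", 8080)   -> Artifact Server
# reachable("k8s-api.example.com", 6443)   -> target cluster API
```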
+ +### Next Steps + +* [Add Your Build and Deploy Pipeline Artifacts](2-service-and-artifact-source.md) + diff --git a/docs/first-gen/continuous-delivery/build-deploy/2-service-and-artifact-source.md b/docs/first-gen/continuous-delivery/build-deploy/2-service-and-artifact-source.md new file mode 100644 index 00000000000..84ccbcee555 --- /dev/null +++ b/docs/first-gen/continuous-delivery/build-deploy/2-service-and-artifact-source.md @@ -0,0 +1,84 @@ +--- +title: Add Your Build and Deploy Pipeline Artifacts +description: Add a Harness Service and Artifact Source to identify the artifacts you want to build. +sidebar_position: 30 +helpdocs_topic_id: xhh8oi4bkh +helpdocs_category_id: j1q21aler1 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The first step in an Artifact Build and Deploy Pipeline is to create a Harness Service. The Service connects to the Artifact Server you added in [Harness Account Setup](1-harness-accountsetup.md) and identifies the artifact to deploy. The Service also contains the scripts used to install and run the build on the target nodes. + +This Service is used by the **Artifact Collection** command in [Create the Deploy Workflow for Build and Deploy Pipelines](5-deploy-workflow.md), the Environment's [target infrastructure](4-environment.md), and in the setup of the [Deploy Workflow](5-deploy-workflow.md). + +In this document, a Shell Script Service and a Jenkins job artifact are used as an example, but Harness supports all the common [artifact sources](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server) and [custom sources](https://docs.harness.io/article/jizsp5tsms-custom-artifact-source). + +### Before You Begin + +* [CI/CD with the Build Workflow](../concepts-cd/deployment-types/ci-cd-with-the-build-workflow.md) + +### Step 1: Create a Service + +Harness Services represent your microservices/apps. 
You define where the artifacts for those microservices/apps come from, the container specs, configuration variables, and files for those microservices. + +To create the Service, perform the following steps: + +1. In your Harness Application, click **Services**. To create a Harness Application, see [Application Components](https://docs.harness.io/article/bucothemly-application-configuration). +2. Click **Add Service**. The **Add Services** settings appear. +3. In **Name**, enter a name for your Service. In this example, **ToDo List WAR** is used because the ToDo List app is built and packaged in a WAR file. +4. In **Deployment Type**, select **Secure Shell (SSH)**. +5. In **Artifact Type**, select **Web Archive (WAR)**. +6. In **Application Stack**, select **Standard Tomcat 8**. +7. Click **Submit**. + +The new Service is created. There are several installation and start scripts added by default. When your application is deployed, these scripts run on the target host(s). + +For more information about the default scripts in a Secure Shell (SSH) Service, see [Traditional Deployments Overview](../traditional-deployments/traditional-deployments-overview.md). Next, the Artifact Source is added. + +### Step 2: Add an Artifact Source + +The Artifact Source defines where the Artifact Collection step looks for the built artifact during the Build Workflow. Also, the Service is used in the Deploy Workflow to install the artifact on the target host(s). + +To add an Artifact Source, perform the following steps: + +1. In your Service, click **Add Artifact Source**, and select **Jenkins**. The **Jenkins** settings appear. +2. In **Source Server**, select the Jenkins Artifact Server you set up in [Harness Account Setup](1-harness-accountsetup.md). Once you select the Source Server, the **Job Name** field loads all of the Jenkins jobs from the Source Server. +3. In **Job Name**, select the job you want to run to build your artifact. 
Harness also supports [Jenkins Multibranch Pipelines](https://docs.harness.io/article/5fzq9w0pq7-using-the-jenkins-command#multibranch_pipeline_support).
+4. Select the **Meta-data Only** setting. Typically, metadata is sufficient as it contains enough information for the target host(s) to obtain the artifact. Harness stores the metadata and, during runtime, the Harness Delegate passes the metadata to the target host(s), where it is used to obtain the artifact(s) from the source repo. Ensure that the target host has network connectivity to the Artifact Server.
+5. In **Artifact Path**, select the path and name of the artifact. In this example, **target/todolist.war** is used.
+
+When you are done, the **Jenkins Artifact Source** will look something like this:![](./static/2-service-and-artifact-source-10.png)
+6. Click **Submit**.
+
+The Artifact Source is added to the Service.![](./static/2-service-and-artifact-source-11.png)
+
+#### Step: View Artifact History
+
+Next, let's see the artifact history that Harness can pull from Jenkins.
+
+1. Click **Artifact History**.
+2. Click **Manually pull artifact**. The **Manually Select An Artifact** dialog appears.![](./static/2-service-and-artifact-source-12.png)
+3. In **Artifact Stream**, select the artifact source you just added.
+4. In **Artifact**, select the build for which you want to view history.
+5. Click **Submit**.
+6. Click **Artifact History** again. The history for the build you specified is displayed.![](./static/2-service-and-artifact-source-13.png)
+
+   For more information on Service settings, see [Services](https://docs.harness.io/article/eb3kfl8uls-service-configuration).
+
+Now that the Service is complete, we can create the Build Workflow to build the next version of the artifact. 
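With **Meta-data Only** selected, the target host obtains the artifact itself at deploy time by reconstructing a download URL from the stored metadata. The sketch below shows the general shape of that URL for a Jenkins job; the server, job name, build number, and artifact path are all hypothetical placeholders, not values produced by Harness:

```shell
# All values here are hypothetical -- substitute your own Jenkins
# server, job, build number, and artifact path.
JENKINS_URL="https://jenkins.example.com"
JOB_NAME="todolist-build"
BUILD_NUMBER="42"
ARTIFACT_PATH="target/todolist.war"

# Jenkins serves archived build artifacts under .../job/<job>/<build>/artifact/<path>
ARTIFACT_URL="${JENKINS_URL}/job/${JOB_NAME}/${BUILD_NUMBER}/artifact/${ARTIFACT_PATH}"
echo "${ARTIFACT_URL}"
# curl -fsSL -o /tmp/todolist.war "${ARTIFACT_URL}"   # the actual fetch, needs network access
```

This is why the step above notes that the target host must have network connectivity to the Artifact Server: the host, not Harness, performs the download.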
+
+### Use the Same Artifact Build/Tag Across Multiple Workflows in a Pipeline
+
+When using a Build Workflow followed by multiple Workflows in a Pipeline, you can pass the same artifact from the first Build Workflow to the rest of the Workflows in the Pipeline that deploy the same Harness Service.
+
+Each Workflow's execution uses the artifact collected by the last-run Build Workflow in the Pipeline that collected the artifact from the same Service.
+
+In the first Build Workflow's **Artifact Collection** step's **Build/Tag** setting (described in [Create the Deploy Workflow for Build and Deploy Pipelines](5-deploy-workflow.md)), you specify the artifact to collect. For subsequent **Artifact Collection** steps in the same and subsequent Workflows in the Pipeline deploying the same Service, you can leave the **Build/Tag** setting empty, and Harness will use the artifact collected by the last Build Workflow in the Pipeline.
+
+This functionality requires the Feature Flag `SORT_ARTIFACTS_IN_UPDATED_ORDER`.
+
+### Next Step
+
+* [Create the Build Workflow for Build and Deploy Pipelines](3-build-workflow.md)
+
diff --git a/docs/first-gen/continuous-delivery/build-deploy/3-build-workflow.md b/docs/first-gen/continuous-delivery/build-deploy/3-build-workflow.md
new file mode 100644
index 00000000000..7f5d8a5d661
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/build-deploy/3-build-workflow.md
@@ -0,0 +1,130 @@
+---
+title: Create the Build Workflow for Build and Deploy Pipelines
+description: Create the Build Workflow to build the artifact.
+sidebar_position: 40
+helpdocs_topic_id: obqhjaabnl
+helpdocs_category_id: j1q21aler1
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Unlike other Harness Workflows, a Build Workflow doesn't require a deployment environment. It simply runs a build process via Jenkins, Bamboo, or Shell Script, and then saves the artifact to an explicit path. 
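To make the "explicit path" idea concrete, here is a hypothetical stand-in for the last step of such a build process; the directory and file names mirror the ToDo List example but are illustrative only:

```shell
# Hypothetical stand-in for a build job's final step. A real Jenkins or
# Bamboo job would run a build tool here, e.g. `mvn -B clean package`.
mkdir -p target
printf 'stub WAR contents' > target/todolist.war  # artifact at a known, explicit path
echo "artifact ready at target/todolist.war"
```

The important point is only that the build leaves the artifact at a path the Artifact Source can reference (here, `target/todolist.war`).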
+
+In this document, Jenkins is used as an example, but Harness supports all the common [artifact sources](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server) and [custom sources](https://docs.harness.io/article/jizsp5tsms-custom-artifact-source).
+
+### Before You Begin
+
+* [CI/CD with the Build Workflow](../concepts-cd/deployment-types/ci-cd-with-the-build-workflow.md)
+
+### Step: Create a Build
+
+To create a Build Workflow, do the following:
+
+1. In your Harness Application, click **Workflows**, and then click **Add Workflow**. The **Workflow** settings appear.
+2. In **Name**, enter a name for this Build Workflow, such as **Build File**.
+3. In **Workflow Type**, select **Build**. When you are done, the dialog will look something like this.![](./static/3-build-workflow-00.png)
+4. Click **Submit**. The Workflow is created.
+
+By default, only the **Artifact Collection** step is added. Next, we will add the Jenkins step to build the artifact.
+
+### Step: Configure Jenkins
+
+The **Jenkins** step runs the Jenkins job to build the artifact and then passes on an output variable containing the build's environment variables. Use the output variable to configure the **Artifact Collection** step to obtain the newly built artifact (typically, just its metadata).
+
+To learn more about the Jenkins command, see [Using the Jenkins Command](https://docs.harness.io/article/5fzq9w0pq7-using-the-jenkins-command). To add the Jenkins step, do the following:
+
+1. In the **Prepare Steps** section of the Workflow, click **Add Step**.
+2. Select **Jenkins**. The **Jenkins** settings appear.![](./static/3-build-workflow-01.png)
+3. In **Jenkins Server**, select the Jenkins server you added as a Harness Artifact Server in [Harness Account Setup](1-harness-accountsetup.md). This is the server that contains the Jenkins job to build the artifact.
+4. In **Job Name**, select the Jenkins job to build the artifact. 
Next, we will create an output variable to pass build environment variables to the **Artifact Collection** step.
+5. Click **Jenkins Output in the Context**.
+6. In **Variable Name**, enter the name of the output variable. For example, **Jenkins**.
+7. In **Scope**, select **Workflow**. Selecting **Workflow** ensures that there are no conflicts with any variables that share this name outside of the Workflow, such as in another Workflow in the Pipeline.
+
+   When you are done, the dialog will look something like this.
+
+   ![](./static/3-build-workflow-02.png)
+
+8. Click **Submit**. The Jenkins step is added to the Workflow.
+
+### Step: Configure Shell Script
+
+To view the environment parameters contained in the output variable, you can add a Shell Script step and echo the parameters.
+
+1. In the **Prepare Steps** section of the Workflow, click **Add Command**, and then click **Shell Script**. The **Shell Script** settings appear.
+2. In **Script**, enter the following script:
+
+
+```
+echo "buildDisplayName: " ${Jenkins.buildDisplayName}
+
+echo "jobStatus: " ${Jenkins.jobStatus}
+
+echo "buildUrl: " ${Jenkins.buildUrl}
+
+echo "buildFullDisplayName: " ${Jenkins.buildFullDisplayName}
+
+echo "buildNumber: " ${Jenkins.buildNumber}
+
+echo "description: " ${Jenkins.description}
+```
+
+The `${Jenkins.description}` parameter requires the [Description Setter](https://wiki.jenkins.io/display/JENKINS/Description+Setter+Plugin) plugin in Jenkins. When you deploy the Workflow, the Shell Script command will output all the parameters (`${Jenkins.description}` is not shown):
+
+![](./static/3-build-workflow-03.png)
+
+Next, we will add the **Artifact Collection** step and use the `${Jenkins.buildNumber}` parameter to obtain the latest build number.
+
+To learn more about the Shell Script command, see [Using the Shell Script Command](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output). 
+
+You can use these [artifact variable expressions](https://docs.harness.io/article/9dvxcegm90-variables#artifact) in a Shell Script to see the built artifact information (at the end of the Workflow).
+
+### Step: Configure Artifact Collection
+
+The Artifact Collection step uses the `${Jenkins.buildNumber}` parameter to obtain the latest build number, and then pulls the build from the artifact repo. Whether Harness pulls the full artifact or only its metadata depends on the **Meta-data Only** setting in the Service's Artifact Source; typically, only the metadata is pulled.
+
+The Artifact Collection step was added automatically when you created the Build Workflow. You simply need to configure it.
+
+#### Option: Build an Artifact
+
+1. In your Build Workflow, click **Artifact Collection**. The **Artifact Collection** dialog appears.
+2. Enter a name for your Artifact Collection.
+3. Select **Artifact** as the **Source Type** for the Artifact collection.
+
+   ![](./static/3-build-workflow-04.png)
+
+   Pick the Artifact Source you created in your Service under **Artifact Source**.
+
+   You can template the **Artifact Source** setting by clicking the **[T]** button. This will create a [Workflow variable](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). When you deploy the Workflow, you can provide a value for the variable.
+
+   ![](./static/3-build-workflow-05.png)
+
+4. In **Build / Tag**, enter the Jenkins step output variable `${Jenkins.buildNumber}` parameter to provide this step with the build number of the artifact built in the Jenkins step.
+
+   When you are done, the Artifact Collection dialog will look something like this.
+
+   ![](./static/3-build-workflow-06.png)
+
+5. Click **Submit**. The Artifact Collection step is configured.
+
+#### Option: Build a Helm Chart
+
+You can also build Helm Charts with Manifests. To do that, perform the following steps:
+
+1. In your Build Workflow, click **Artifact Collection**. The **Artifact Collection** dialog appears.
+2. 
Enter a name for your Artifact Collection. +3. Select **Manifest** as the **Source Type** for the Manifest collection. +4. Pick the Manifest Source you created in your Service under **Manifest Source.** +5. You can template the **Manifest Source** setting by clicking the **[T]** button. This will create a [Workflow variable](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). When you deploy the Workflow, you can provide a value for the variable. +6. In **Build / Tag**, enter the Jenkins step output variable `${Jenkins.buildNumber}` parameter to provide this step with the build number of the artifact built in the Jenkins step. +When you are done, the Artifact Collection dialog will look something like this. +7. Click **Submit**. The Artifact Collection step is configured. + +You can run the Build Workflow to test its settings before setting up the rest of the Pipeline. In the following example, the Artifact Collection step displays the artifact metadata Harness obtained, including the Build/Tag.  + +![](./static/3-build-workflow-07.png) + +### Next Step + +* [Define Your Build and Deploy Pipeline Target Infrastructure](4-environment.md) + diff --git a/docs/first-gen/continuous-delivery/build-deploy/4-environment.md b/docs/first-gen/continuous-delivery/build-deploy/4-environment.md new file mode 100644 index 00000000000..5d47a961ed1 --- /dev/null +++ b/docs/first-gen/continuous-delivery/build-deploy/4-environment.md @@ -0,0 +1,57 @@ +--- +title: Define Your Build and Deploy Pipeline Target Infrastructure +description: Define the target deployment Environment for your artifacts. +sidebar_position: 50 +helpdocs_topic_id: fav3v3jx3d +helpdocs_category_id: j1q21aler1 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Environments represent one or more of your deployment infrastructures, such as Dev, QA, Stage, Production, etc. 
For Artifact Build and Deploy Pipelines, the Environment specifies the Service containing the new artifact for the Deploy Workflow to use, and the target deployment infrastructure where the new build will be deployed.
+
+In this topic:
+
+* [Before You Begin](4-environment.md#before-you-begin)
+* [Step: Set Up Environment](4-environment.md#step-set-up-environment)
+* [Step: Set Up Infrastructure Definition](4-environment.md#step-set-up-infrastructure-definition)
+* [Next Step](4-environment.md#next-step)
+
+### Before You Begin
+
+* [CI/CD with the Build Workflow](../concepts-cd/deployment-types/ci-cd-with-the-build-workflow.md)
+
+### Step: Set Up Environment
+
+To set up the Environment, do the following:
+
+1. In your Harness Application, click **Environments**, and then click **Add Environment**. The **Environment** settings appear.![](./static/4-environment-08.png)
+2. In **Name**, enter a name for the Environment that identifies it to your colleagues. In this example, **File-Based** is used as the name for the Environment.
+3. In **Environment Type**, select **Production** or **Non-Production**, and click **Submit**. The new Environment is added.
+
+Next, we will add an Infrastructure Definition to provide the Deploy Workflow with the target deployment environment.
+
+### Step: Set Up Infrastructure Definition
+
+Infrastructure Definitions specify the target deployment infrastructure for your Harness Services, and the specific infrastructure details for the deployment, like VPC settings. For Artifact Build and Deploy Pipelines, you will scope the Infrastructure Definition to the Service you added earlier, where you identified the Artifact Source for your artifact. You will also use the Cloud Provider you set up in [Harness Account Setup](1-harness-accountsetup.md) to configure the Infrastructure Definition. Then you will specify the target infrastructure for the deployment. In this example, an AWS EC2 instance is used. 
+
+To set up the Infrastructure Definition, do the following:
+
+1. In **Environment**, click **Add Infrastructure Definition**. The **Infrastructure Definition** settings appear.
+2. In **Display Name**, enter a name for the Infrastructure Definition. In this example, **ToDo List WAR** is used as the name for the Infrastructure Definition.
+3. In **Cloud Provider Type**, select the type of Cloud Provider you used in [Harness Account Setup](1-harness-accountsetup.md).
+4. In **Deployment Type**, select **Secure Shell (SSH)**.
+5. Click **Use Already Provisioned Infrastructure**. If you were using a Harness [Infrastructure Provisioner](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner), you would select **Map Dynamically Provisioned Infrastructure**.
+6. In **Cloud Provider**, select the Cloud Provider you set up in [Harness Account Setup](1-harness-accountsetup.md).
+7. Fill out the remaining infrastructure settings for your target deployment infrastructure.
+8. In **Scope to specific Services**, select the Service you created in [Service and Artifact Source](2-service-and-artifact-source.md).
+
+Here is an example that targets an AWS EC2 instance.![](./static/4-environment-09.png)
+9. Click **Submit**.
+
+The Infrastructure Definition is added. You will select this Infrastructure Definition when you create the Deploy Workflow.
+
+### Next Step
+
+* [Create the Deploy Workflow for Build and Deploy Pipelines](5-deploy-workflow.md)
+
diff --git a/docs/first-gen/continuous-delivery/build-deploy/5-deploy-workflow.md b/docs/first-gen/continuous-delivery/build-deploy/5-deploy-workflow.md
new file mode 100644
index 00000000000..7d0c53f0f4b
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/build-deploy/5-deploy-workflow.md
@@ -0,0 +1,75 @@
+---
+title: Create the Deploy Workflow for Build and Deploy Pipelines
+description: Create the Deploy Workflow to deploy the artifact built by the Build Workflow. 
+sidebar_position: 60
+helpdocs_topic_id: q6rtl33634
+helpdocs_category_id: j1q21aler1
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+The Deploy Workflow obtains the artifact build you created in [Create the Build Workflow for Build and Deploy Pipelines](3-build-workflow.md) using the Service you set up for the Artifact Source. The Deploy Workflow then installs the build on the nodes in the Environment. See the supported [Workflow Types](https://docs.harness.io/article/m220i1tnia-workflow-configuration#workflow_types).
+
+### Before You Begin
+
+* [CI/CD with the Build Workflow](../concepts-cd/deployment-types/ci-cd-with-the-build-workflow.md)
+
+### Step: Deploy Workflow
+
+To set up the Deploy Workflow, do the following:
+
+1. In your Application, click **Workflows**.
+2. In **Workflows**, click **Add Workflow**. The **Workflow** dialog appears.
+3. In **Name**, enter a name for the Deploy Workflow. For example, **Deploy File**.
+4. In **Workflow Type**, select **Basic Deployment**.
+5. In **Environment**, select the Environment you created earlier.
+6. In **Service**, select the Service you created earlier.
+7. In **Infrastructure Definition**, select the Infrastructure Definition you created earlier. If the Infrastructure Definition does not appear, ensure that you added the Service to the Infrastructure Definition's **Scope to specific Services** setting.
+
+When you are done, the dialog will look something like this.![](./static/5-deploy-workflow-21.png)
+8. Click **Submit**. The Deploy Workflow is created.![](./static/5-deploy-workflow-22.png)
+
+Let's look at the two commands created automatically.
+
+#### Configure Select Nodes
+
+The **Select Nodes** command simply selects the nodes in your Infrastructure Definition.
+
+1. Select **Yes** to specify specific hosts in the **Host Name(s)** field. Harness will add whatever hosts it can find using the criteria in your Infrastructure Definition. 
For example, the following image shows how the criteria in your Infrastructure Definition locate an EC2 instance, which is then displayed in the **Node Select**.![](./static/5-deploy-workflow-23.png)
+2. Select **No** to specify the number of **Desired Instances** and the **Instance Unit Type** you want.![](./static/5-deploy-workflow-24.png)
+3. When the **Exclude instances from future phases** setting is selected (recommended), the instance(s) selected by this Node Select step will not be eligible for selection by any future Node Select step. In cases where you want to perform a one-time operation using a node and then deploy to all nodes in later phases, you might want to leave this setting unselected.
+
+#### Configure Install
+
+The Install step installs the artifact onto the nodes you selected.
+
+![](./static/5-deploy-workflow-25.png)The Install step uses the artifact build number obtained in the Artifact Collection step and the Artifact Source in the Harness Service. In essence, the Install step uses the artifact metadata to look up, in the Artifact Source, the build number that Artifact Collection obtained. The logs for the step show the artifact being copied to the target nodes.
+
+
+```
+Begin file transfer harness7cbd2fa8dc9e2d9f634205b288811b27 to ip-10-0-0-87.ec2.internal:/tmp/AC3HcFy1QByir0UGIR09Zg
+
+File successfully transferred
+
+Connecting to ip-10-0-0-87.ec2.internal ....
+
+Connection to ip-10-0-0-87.ec2.internal established
+```
+Next, the Install step runs the scripts in the Harness Service to install and run the artifact in its runtime environment. For details about common script steps, see [Traditional Deployments Overview](../traditional-deployments/traditional-deployments-overview.md).
+
+Now that both Workflows are set up, you can create the Artifact Build and Deploy Pipeline. 
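For a rough sense of what those Service scripts do on each target host, here is an illustrative sketch of the copy-and-stage portion. The paths are hypothetical placeholders; the real scripts are generated by the Harness Service:

```shell
# Illustrative only -- the actual scripts are generated by the Harness Service.
ARTIFACT="/tmp/todolist.war"        # file transferred by the Install step
RUNTIME_PATH="$HOME/todolist"       # staging directory on the target host

touch "$ARTIFACT"                   # stand-in for the transferred build
mkdir -p "$RUNTIME_PATH"
cp "$ARTIFACT" "$RUNTIME_PATH/"     # stage the WAR where the app server expects it
echo "installed todolist.war to $RUNTIME_PATH"
```

The generated scripts additionally stop and restart the application server around this copy; see the Traditional Deployments Overview linked above for the full set.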
+
+### Use the Same Artifact Build/Tag Across Multiple Workflows in a Pipeline
+
+When using a Build Workflow followed by multiple Workflows in a Pipeline, you can pass the same artifact from the first Build Workflow to the rest of the Workflows in the Pipeline that deploy the same Harness Service.
+
+Each Workflow's execution uses the artifact collected by the last-run Build Workflow in the Pipeline that collected the artifact from the same Service.
+
+In the first Build Workflow's **Artifact Collection** step's **Build/Tag** setting, you specify the artifact to collect. For subsequent **Artifact Collection** steps in the same and subsequent Workflows in the Pipeline deploying the same Service, you can leave the **Build/Tag** setting empty, and Harness will use the artifact collected by the last Build Workflow in the Pipeline.
+
+This functionality requires the Feature Flag `SORT_ARTIFACTS_IN_UPDATED_ORDER`.
+
+### Next Step
+
+* [Create the Build and Deploy Pipeline](6-artifact-build-and-deploy-pipelines.md)
+
diff --git a/docs/first-gen/continuous-delivery/build-deploy/6-artifact-build-and-deploy-pipelines.md b/docs/first-gen/continuous-delivery/build-deploy/6-artifact-build-and-deploy-pipelines.md
new file mode 100644
index 00000000000..03dbaac1b88
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/build-deploy/6-artifact-build-and-deploy-pipelines.md
@@ -0,0 +1,73 @@
+---
+title: Create the Build and Deploy Pipeline
+description: Create a Pipeline to run your Build and Deploy Workflows.
+sidebar_position: 70
+helpdocs_topic_id: slkhuejdkw
+helpdocs_category_id: j1q21aler1
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Pipelines define your release process using multiple Workflows and approvals in sequential and/or parallel stages.
+
+An Artifact Build and Deploy Pipeline simply runs your Build Workflow followed by your Deploy Workflow. 
The Deploy Workflow uses the Harness Service you set up to get the new build number. + +![](./static/6-artifact-build-and-deploy-pipelines-14.png) + +### Before You Begin + +* [CI/CD with the Build Workflow](../concepts-cd/deployment-types/ci-cd-with-the-build-workflow.md) + +### Review: Use Build Workflows in a Pipeline + +Here are some important things to remember when using a Build Workflow in a Pipeline: + +* **No Artifact Selection** — If you use a Build Workflow in a Pipeline, you cannot select an artifact when you deploy the Pipeline. A Build Workflow tells Harness you will be building and collecting the artifact for the Pipeline. Harness will use that artifact for the Pipeline deployment. +* **Don't Execute in Parallel** — Do not execute a Build Workflow in a Pipeline in parallel with a Workflow that deploys the artifact built by the Build Workflow. The Build Workflow must complete before another Workflow in the Pipeline can deploy the artifact. +* **Always Put Build Workflow First** — The Build Workflow should always be the first stage in the Pipeline. This enables the rest of the Pipeline to use the artifact it builds and collects. + +### Step: Create the Build and Deploy Pipeline + +To create the Artifact Build and Deploy Pipeline, do the following: + +1. In your Harness Application, click **Pipelines**, and then click **Add Pipeline**. The **Add Pipeline** settings appear. +2. In **Name**, enter a name for your Pipeline, such as **Artifact Build and Deploy**. +3. Click **Submit**. The new Pipeline is created. + +### Step: Add the Build Workflow + +Add the Build Workflow as the first stage in the Pipeline: + +1. In **Pipeline Stages**, click the **plus** button. The **Pipeline Stage** settings appear. + ![](./static/6-artifact-build-and-deploy-pipelines-15.png) +2. In **Step Name**, enter a name for the Build Stage, such as **Build Artifact**. +3. In **Execute Workflow**, select the Build Workflow you created. 
When you are done, it will look something like this:
+ ![](./static/6-artifact-build-and-deploy-pipelines-16.png)
+4. Click **Submit**. The stage is added to the Pipeline.
+5. Use the same steps to add the Deploy Workflow to the Pipeline. When you are done, it will look something like this.
+ ![](./static/6-artifact-build-and-deploy-pipelines-17.png)
+6. Click **Deploy** to run the Pipeline. Note that you do not need to select an artifact build number, as the Deploy Workflow will obtain the latest build.
+7. Click **Submit**.
+
+The Workflows are run in succession. First, the Build Workflow is run. Click the **Artifact Collection** step to see the metadata collected by Harness, including the build number.
+![](./static/6-artifact-build-and-deploy-pipelines-18.png)
+You can see the same build number in Jenkins.
+In this document, a Jenkins job artifact is used as an example, but Harness supports all the common [artifact sources](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server) and [custom sources](https://docs.harness.io/article/jizsp5tsms-custom-artifact-source).
+
+![](./static/6-artifact-build-and-deploy-pipelines-19.png)
+
+Next, the Deploy Workflow is run. Click the **Artifact Check** step to see the same build number that was collected by the **Artifact Collection** step. You can also see the build number next to the **Artifacts** heading.
+
+![](./static/6-artifact-build-and-deploy-pipelines-20.png)
+
+The Pipeline has run successfully. You can now build and deploy artifacts by running a single Pipeline.
+
+### See Also
+
+[Trigger Workflows and Pipelines](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2) — Triggers automate deployments using a variety of conditions, such as Git events, new artifacts, schedules, and the success of other Pipelines.
+
+Now that you have an Artifact Build and Deploy Pipeline, you can create a Harness Trigger that runs the Pipeline in response to a Git push to the source repo. 
The Trigger provides a Webhook URL you can add to your Git repo.
+
+When the push event happens, Git sends an HTTP POST payload to the Webhook's configured URL. The Trigger then executes the Artifact Build and Deploy Pipeline.
+
+Do not use the **On New Artifact** Trigger to trigger a Build and Deploy Pipeline, because you need the Build Workflow to build a new artifact.
\ No newline at end of file
diff --git a/docs/first-gen/continuous-delivery/build-deploy/_category_.json b/docs/first-gen/continuous-delivery/build-deploy/_category_.json
new file mode 100644
index 00000000000..b136ef02a78
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/build-deploy/_category_.json
@@ -0,0 +1 @@
+{"label": "CI/CD: Artifact Build and Deploy Pipelines", "position": 40, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "CI/CD: Artifact Build and Deploy Pipelines"}, "customProps": { "helpdocs_category_id": "j1q21aler1"}}
\ No newline at end of file
diff --git a/docs/first-gen/continuous-delivery/build-deploy/build-and-deploy-pipelines-overview.md b/docs/first-gen/continuous-delivery/build-deploy/build-and-deploy-pipelines-overview.md
new file mode 100644
index 00000000000..59cef266443
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/build-deploy/build-and-deploy-pipelines-overview.md
@@ -0,0 +1,21 @@
+---
+title: Build and Deploy Pipeline How-tos
+description: Overview of how an Artifact Build and Deploy Pipeline builds and deploys a build to a deployment environment. 
+sidebar_position: 10 +helpdocs_topic_id: 181zspq0b6 +helpdocs_category_id: j1q21aler1 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The following How-tos guide you through Build and Deploy Pipeline tasks: + +* [Connect to Your Artifact Build and Deploy Pipeline Platforms](1-harness-accountsetup.md) +* [Add Your Build and Deploy Pipeline Artifacts](2-service-and-artifact-source.md) +* [Create the Build Workflow for Build and Deploy Pipelines](3-build-workflow.md) +* [Define Your Build and Deploy Pipeline Target Infrastructure](4-environment.md) +* [Create the Deploy Workflow for Build and Deploy Pipelines](5-deploy-workflow.md) +* [Create the Build and Deploy Pipeline](6-artifact-build-and-deploy-pipelines.md) + +To see the concept of Harness Build and Deploy Pipeline deployment, see [Artifact Build and Deploy Pipelines Overview](../concepts-cd/deployment-types/artifact-build-and-deploy-pipelines-overview.md). + diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/2-service-and-artifact-source-10.png b/docs/first-gen/continuous-delivery/build-deploy/static/2-service-and-artifact-source-10.png new file mode 100644 index 00000000000..137db6a4c6b Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/2-service-and-artifact-source-10.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/2-service-and-artifact-source-11.png b/docs/first-gen/continuous-delivery/build-deploy/static/2-service-and-artifact-source-11.png new file mode 100644 index 00000000000..8d426ac5fb0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/2-service-and-artifact-source-11.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/2-service-and-artifact-source-12.png b/docs/first-gen/continuous-delivery/build-deploy/static/2-service-and-artifact-source-12.png new file mode 100644 index 00000000000..c167eac18f8 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/build-deploy/static/2-service-and-artifact-source-12.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/2-service-and-artifact-source-13.png b/docs/first-gen/continuous-delivery/build-deploy/static/2-service-and-artifact-source-13.png new file mode 100644 index 00000000000..fd2f61ecd1b Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/2-service-and-artifact-source-13.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-00.png b/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-00.png new file mode 100644 index 00000000000..324c5ffc36d Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-00.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-01.png b/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-01.png new file mode 100644 index 00000000000..199d65fe0c6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-01.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-02.png b/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-02.png new file mode 100644 index 00000000000..5bf913315e9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-02.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-03.png b/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-03.png new file mode 100644 index 00000000000..9d3ad9757f1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-03.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-04.png 
b/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-04.png new file mode 100644 index 00000000000..efe81e1cb6a Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-04.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-05.png b/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-05.png new file mode 100644 index 00000000000..e70456404dd Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-05.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-06.png b/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-06.png new file mode 100644 index 00000000000..a900f48b975 Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-06.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-07.png b/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-07.png new file mode 100644 index 00000000000..0b838b090bb Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/3-build-workflow-07.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/4-environment-08.png b/docs/first-gen/continuous-delivery/build-deploy/static/4-environment-08.png new file mode 100644 index 00000000000..6f5fd674eb6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/4-environment-08.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/4-environment-09.png b/docs/first-gen/continuous-delivery/build-deploy/static/4-environment-09.png new file mode 100644 index 00000000000..f0c14bdb35c Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/4-environment-09.png differ diff --git 
a/docs/first-gen/continuous-delivery/build-deploy/static/5-deploy-workflow-21.png b/docs/first-gen/continuous-delivery/build-deploy/static/5-deploy-workflow-21.png new file mode 100644 index 00000000000..9f593ca0bd2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/5-deploy-workflow-21.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/5-deploy-workflow-22.png b/docs/first-gen/continuous-delivery/build-deploy/static/5-deploy-workflow-22.png new file mode 100644 index 00000000000..529f5cc3b37 Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/5-deploy-workflow-22.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/5-deploy-workflow-23.png b/docs/first-gen/continuous-delivery/build-deploy/static/5-deploy-workflow-23.png new file mode 100644 index 00000000000..854fcc5950a Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/5-deploy-workflow-23.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/5-deploy-workflow-24.png b/docs/first-gen/continuous-delivery/build-deploy/static/5-deploy-workflow-24.png new file mode 100644 index 00000000000..17ab041cac2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/5-deploy-workflow-24.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/5-deploy-workflow-25.png b/docs/first-gen/continuous-delivery/build-deploy/static/5-deploy-workflow-25.png new file mode 100644 index 00000000000..a632eb05478 Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/5-deploy-workflow-25.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/6-artifact-build-and-deploy-pipelines-14.png b/docs/first-gen/continuous-delivery/build-deploy/static/6-artifact-build-and-deploy-pipelines-14.png new file mode 100644 index 00000000000..98c1b04232a Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/build-deploy/static/6-artifact-build-and-deploy-pipelines-14.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/6-artifact-build-and-deploy-pipelines-15.png b/docs/first-gen/continuous-delivery/build-deploy/static/6-artifact-build-and-deploy-pipelines-15.png new file mode 100644 index 00000000000..c7b9013e699 Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/6-artifact-build-and-deploy-pipelines-15.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/6-artifact-build-and-deploy-pipelines-16.png b/docs/first-gen/continuous-delivery/build-deploy/static/6-artifact-build-and-deploy-pipelines-16.png new file mode 100644 index 00000000000..3dd8ec1a885 Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/6-artifact-build-and-deploy-pipelines-16.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/6-artifact-build-and-deploy-pipelines-17.png b/docs/first-gen/continuous-delivery/build-deploy/static/6-artifact-build-and-deploy-pipelines-17.png new file mode 100644 index 00000000000..5e609b1adf4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/6-artifact-build-and-deploy-pipelines-17.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/6-artifact-build-and-deploy-pipelines-18.png b/docs/first-gen/continuous-delivery/build-deploy/static/6-artifact-build-and-deploy-pipelines-18.png new file mode 100644 index 00000000000..5019b1a1b72 Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/6-artifact-build-and-deploy-pipelines-18.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/6-artifact-build-and-deploy-pipelines-19.png b/docs/first-gen/continuous-delivery/build-deploy/static/6-artifact-build-and-deploy-pipelines-19.png new file mode 100644 index 00000000000..b320dd8bd22 Binary 
files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/6-artifact-build-and-deploy-pipelines-19.png differ diff --git a/docs/first-gen/continuous-delivery/build-deploy/static/6-artifact-build-and-deploy-pipelines-20.png b/docs/first-gen/continuous-delivery/build-deploy/static/6-artifact-build-and-deploy-pipelines-20.png new file mode 100644 index 00000000000..4879d473e61 Binary files /dev/null and b/docs/first-gen/continuous-delivery/build-deploy/static/6-artifact-build-and-deploy-pipelines-20.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/_category_.json b/docs/first-gen/continuous-delivery/concepts-cd/_category_.json new file mode 100644 index 00000000000..e61644d96bf --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/_category_.json @@ -0,0 +1,15 @@ +{ + "label": "Continuous Delivery Overview", + "position": 10, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "Continuous Delivery Overview" + }, + "customProps": { + "helpdocs_category_id": "cwefyz0jos", + "helpdocs_parent_category_id": "w51ys7qkag" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/_category_.json b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/_category_.json new file mode 100644 index 00000000000..1b9870873e0 --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/_category_.json @@ -0,0 +1 @@ +{"label": "Deployment Strategies and Integrations", "position": 20, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Deployment Strategies and Integrations"}, "customProps": { "helpdocs_category_id": "vbcmo6ltg7"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/ami-spotinst-elastigroup-deployments-overview.md 
b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/ami-spotinst-elastigroup-deployments-overview.md new file mode 100644 index 00000000000..e72a356e033 --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/ami-spotinst-elastigroup-deployments-overview.md @@ -0,0 +1,86 @@ +--- +title: AMI Spotinst Elastigroup Deployments Overview +description: A summary of Harness AMI Spotinst implementation. +sidebar_position: 50 +helpdocs_topic_id: ighbnk6xg6 +helpdocs_category_id: vbcmo6ltg7 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, Harness integrates with Spotinst only for deployments to AWS (Amazon Web Services) via Elastigroup. This topic describes the concept of a Harness AWS Spotinst Elastigroup deployment by describing the high-level steps involved. + +For detailed instructions on using AWS Spotinst Elastigroup in Harness, see [AMI Spotinst Elastigroup Deployment](../../aws-deployments/ami-deployments/ami-elastigroup.md). + +### Before You Begin + +Before learning about Harness AWS Spotinst Elastigroup deployments, you should have an understanding of [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts). + +### What Does Harness Need Before You Start? + +A Harness AWS Spotinst Elastigroup deployment requires the following: + +* In AWS: + + For an AMI Canary deployment, you must set up a working AMI that Harness will use to create your instances, and at least one [Application Load Balancer](https://docs.aws.amazon.com/en_pv/elasticloadbalancing/latest/application/introduction.html) (ALB) or [Classic Load Balancer](https://docs.aws.amazon.com/en_pv/elasticloadbalancing/latest/classic/introduction.html). (See the [Spotinst documentation](https://docs.spot.io/elastigroup/tools-integrations/aws-load-balancers-elb-alb) for Load Balancer support.)
+ + For an AMI Blue/Green deployment, you must also have: + - A pair of [Target Groups](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html)—typically staging (Stage) and production (Prod)—both with the **instance** target type. + - A Load Balancer with listeners for both your Target Groups' ports. +* In Spotinst, an Elastigroup configuration with at least one Elastigroup cluster that matches your AWS configuration's AMI, VPC, Load Balancer(s), security groups, availability zones, and allowed instance types. + +### What Does Harness Deploy? + +Harness takes the AMI and Elastigroup configuration you provide, creates a new Elastigroup, and populates it with instances using the AMI. You can specify the target, min, and max instances for the new Elastigroup, and other settings in Harness. + +### What Does a Harness AWS Spotinst Elastigroup Deployment Involve? + +The following list describes the major steps of a Harness AWS Spotinst Elastigroup deployment: + + + +| | | | +| --- | --- | --- | +| **Step** | **Name** | **Description and Links** | +| 1 | Install the Harness Shell Script or ECS **Delegate** in your target EC2 subnet. | Typically, the Shell Script or ECS Delegate is installed in the same subnet where you will deploy your application(s). | +| 2 | Add both an **AWS** **Cloud Provider** and a **Spotinst Cloud Provider**. | An AWS Cloud Provider is a connection to your AWS account. The AWS Cloud Provider is used to obtain the AMI Harness will use to create new instances and to deploy the new instances. A Spotinst Cloud Provider is used to connect Harness to Spotinst. | +| 3 | Create the Harness **Application** for your Spotinst deployment. | The Harness Application represents a group of microservices, their deployment pipelines, and all the building blocks for those pipelines.
Harness represents your release process using a logical group of one or more entities: Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CD. | +| 4 | Create the Harness **Service** using the Amazon Machine Image Deployment Type. | Add an AMI as an artifact in a Harness Service, add any AMI User Data, and any config variables and files. | +| 5 | Create the Harness **Environment** and Infrastructure Definition for your deployment, and any overrides. | Using the Harness AWS Cloud Provider and Spotinst Cloud Provider you set up, you can select the Elastigroup configuration as the target environment for your deployment. You can also override any Service settings, such as User Data values. This enables you to use a single Service with multiple Harness Environments. | +| 6 | Create the Basic, Canary, and Blue/Green deployments in Harness **Workflows**. | The Workflow deploys the new AMI instances defined in the Harness Service to the environment in the Harness Infrastructure Definition. | +| 7 | Deploy the Workflow. | Once you've deployed a Workflow, learn how to improve your AWS AMI CD: +* [Workflows](https://docs.harness.io/article/m220i1tnia-workflow-configuration) +* [Triggers](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2) +* [Infrastructure Provisioners Overview](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner) + | + +### How Does Harness Downsize Old Elastigroups? + +Harness upscales and downsizes in two states, setup and deploy. + +* **Setup** — The setup state is when your new Elastigroup is created. +* **Deploy** — The deploy phase(s) is when your new Elastigroup is upscaled to the number of new instances you requested. This is either a fixed setting (Min, Max, Desired) or the same number as the previous Elastigroup. + +Instances are always tied to their Elastigroup.
A new Elastigroup does not take instances from an old Elastigroup. During the setup state: + +* The previous Elastigroup is kept with non-zero instance count (highest revision number, such as **\_7**). Any older Elastigroups are downsized to 0. +* New Elastigroup is created with 0 count. +* For Elastigroups that had 0 instances, Harness keeps 3 old Elastigroups and deletes the rest. + +During deploy phases: + +* New Elastigroup is upscaled to the number of new instances you requested. +* Previous Elastigroup is gradually downsized. In the case of a Canary deployment, the old Elastigroup is downsized in the inverse proportion to the new Elastigroup's upscale. If the new Elastigroup is upscaled 25% in phase 1, the previous Elastigroup is downsized 25%. + +At the end of deployment: + +* New Elastigroup has the number of new instances you requested. In Canary, this is always 100%. +* Previous Elastigroup is downsized to 0. + +#### Rollback + +If rollback occurs, the previous Elastigroup is upscaled to its pre-setup number of instances using new instances. + +### Next Steps + +Read the following topics to build on what you've learned: + +* [AMI Spotinst Elastigroup Deployment](../../aws-deployments/ami-deployments/ami-elastigroup.md) + diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/artifact-build-and-deploy-pipelines-overview.md b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/artifact-build-and-deploy-pipelines-overview.md new file mode 100644 index 00000000000..2ee622ae09d --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/artifact-build-and-deploy-pipelines-overview.md @@ -0,0 +1,70 @@ +--- +title: Artifact Build and Deploy Pipelines Overview +description: A summary of Harness' Build and Deploy Pipeline implementation.
+sidebar_position: 30 +helpdocs_topic_id: 0tphhkfqx8 +helpdocs_category_id: vbcmo6ltg7 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes the concept of a Harness Build and Deploy Pipeline deployment by describing the high-level steps involved. + +For detailed instructions on using build and deploy pipelines in Harness, see: + +* [Artifact Build and Deploy Pipelines How-tos](https://docs.harness.io/category/cicd-artifact-build-and-deploy-pipelines) + +### Before You Begin + +Before learning about Build and Deploy Pipelines, you should have an understanding of [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts). + +### Visual Summary + +An Artifact Build and Deploy pipeline runs a build process, deposits the built artifact (or metadata) in Harness, and deploys the build to a deployment environment. It is a simple but useful deployment, commonly used for traditional file-based and AMI deployments. + +![](./static/artifact-build-and-deploy-pipelines-overview-35.png) + +### What Does Harness Need Before You Start? + +A Harness Build and Deploy Pipeline deployment requires the following: + +* Application artifact from an artifact server, such as Jenkins. This could be a Zip, WAR, or even an AWS AMI. +* Target nodes for the deployment. + +### What Does Harness Deploy? + +Artifact Build and Deploy Pipelines involve two Workflows, Artifact Build and Deploy, executed in succession by a Pipeline. + +1. **Build Workflow** - The Build Workflow connects to your Artifact Server and runs a build, such as a Jenkins job that creates a WAR file or an AMI and deposits it in a repo. Next, the Workflow connects to the repo and pulls the built artifact/metadata into Harness. Now the new artifact is awaiting deployment. +2. **Deploy Workflow** - The Deploy Workflow obtains the new artifact from Harness and deploys it to the target deployment environment. +3.
**Pipeline** - The Pipeline runs the Build Workflow followed by the Deploy Workflow, and the latest artifact is deployed. + +You could simply run each Workflow separately, but by putting them in a Pipeline you do not have to manually select the latest artifact build when you run the Deploy Workflow. The Pipeline builds the artifact in the Build Workflow and then picks it up in the Deploy Workflow automatically. + +### What Does a Harness Build and Deploy Pipeline Deployment Involve? + +To understand what is involved in a Build and Deploy Pipeline, let's look at a file-based example. + +#### File-based Example + +A simple Artifact Build and Deploy deployment for a file-based artifact like a WAR file consists of the following components: + +1. **Service referencing the artifact repo** - Create a Harness Service for the WAR file, including the Artifact Source that points to the artifact repo. Later, in the Build Workflow, when you set up the Artifact Collection command, you will reference this Artifact Source as the location of the artifact repo. +2. **Build Workflow** + 1. **Jenkins command** - Runs the Jenkins job to build the WAR file and push it to the artifact repo (such as Artifactory). You configure an output variable, such as Jenkins, so that the Artifact Collection step can get the new build number and collect the built artifact (typically, just the metadata). You can also use a Shell Script command to run a job and create an output variable. For more information on the Jenkins command, see [Using the Jenkins Command](https://docs.harness.io/article/5fzq9w0pq7-using-the-jenkins-command). + 2. **Artifact Collection command** - Grabs the artifact from the repo using the output variable and build environment variables, and deposits it in Harness. +3. **Environment** - Define the target deployment infrastructure where the Deploy Workflow will deploy the built artifact, such as an AWS VPC. +4. **Deploy Workflow** + 1.
**Select Nodes command** - Specifies how many instances in the Environment to deploy. + 2. **Install command** - Installs the new artifact into the nodes selected by Select Nodes. +5. **Artifact Build and Deploy Pipeline** - A Pipeline that runs the Build Workflow followed by the Deploy Workflow. +6. **Trigger** - A Harness Trigger that executes the Pipeline via a Git Webhook. + +#### AMI Example + +For an AMI Artifact Build and Deploy Pipeline, the only difference from the File-based Example is that the Harness Service is an AMI type and the Deploy Workflow deploys the AMI instances in an Auto Scaling Group. + +### Next Steps + +Read the following topics to build on what you've learned: + +* [Artifact Build and Deploy Pipelines How-tos](https://docs.harness.io/category/cicd-artifact-build-and-deploy-pipelines) + diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/aws-ami-deployments-overview.md b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/aws-ami-deployments-overview.md new file mode 100644 index 00000000000..cb7d1785bbc --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/aws-ami-deployments-overview.md @@ -0,0 +1,101 @@ +--- +title: AWS AMI Deployments Overview +description: A summary of Harness AWS AMI and ASG implementation. +sidebar_position: 40 +helpdocs_topic_id: aedsdsw9cm +helpdocs_category_id: vbcmo6ltg7 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes the concept of a Harness AWS AMI deployment by describing the high-level steps involved. + +For a quick tutorial, see the [AWS AMI Quickstart](https://docs.harness.io/article/wfk9o0tsjb-aws-ami-deployments). + +For detailed instructions on using AWS AMI in Harness, see the [AWS AMI How-tos](https://docs.harness.io/category/aws-ami-deployments). 
+ +### Before You Begin + +Before learning about Harness AWS AMI deployments, you should have an understanding of [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts). + +### What Does Harness Need Before You Start? + +A Harness AWS AMI deployment requires the following: + +* A working AWS AMI that Harness will use to create your instances. +* A working Auto Scaling Group (ASG) that Harness will use as a template for the ASG that Harness will create. The template ASG is referred to as the **base ASG** in Harness documentation. +* An AWS Instance or ECS cluster in which to install a Harness Delegate. +* IAM Role for the Harness Cloud Provider connection to AWS. + +### What Does Harness Deploy? + +Harness takes the AMI and base ASG you provide, creates a new ASG, and populates it with instances using the AMI. You can specify the desired, min, and max instances for the new ASG, resize strategy, and other settings in Harness. + +Harness specifically supports AWS *target* tracking scaling policies. For details, see AWS' [Dynamic Scaling for Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html#as-scaling-types) topic. + +### What Does a Harness AWS AMI Deployment Involve? + +The following list describes the major steps of a Harness AWS AMI deployment: + + + +| | | | +| --- | --- | --- | +| **Step** | **Name** | **Description and Links** | +| 1 | Install the Harness Shell Script or ECS **Delegate** in your target EC2 subnet. | Typically, the Shell Script or ECS Delegate is installed in the same subnet where you will deploy your application(s). This is the same subnet as your base ASG, using the same security group and the same key pair. | +| 2 | Add an **AWS** **Cloud Provider**.
| An AWS Cloud Provider is a connection to your AWS account. The AWS Cloud Provider is used to obtain the AMI Harness will use to create new instances, the base ASG Harness will use as a template, and to deploy the new instances. | +| 3 | Create the Harness **Application** for your AMI CD Pipeline. | The Harness Application represents a group of microservices, their deployment pipelines, and all the building blocks for those pipelines. Harness represents your release process using a logical group of one or more entities: Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CD. | +| 4 | Create the Harness **Service** using the Amazon Machine Image Deployment Type. | Add an AMI as an artifact in a Harness Service, add any AMI User Data, and any config variables and files. | +| 5 | Create the Harness **Environment** and Infrastructure Definition for your deployment, and any overrides. | Using the Harness AWS Cloud Provider you set up, you can select the base ASG and target environment for your deployment. You can also override any Service settings, such as User Data values. This enables you to use a single Service with multiple Harness Environments. | +| 6 | Create the Basic, Canary, and Blue/Green deployments in Harness **Workflows**. | The Workflow deploys the new ASG and AMI instances defined in the Harness Service to the environment in the Harness Infrastructure Definition. | +| 7 | Deploy the Workflow. | Once you've deployed a Workflow, learn how to improve your AWS AMI CD: +* [Workflows](https://docs.harness.io/article/m220i1tnia-workflow-configuration) +* [Triggers](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2) +* [Infrastructure Provisioners Overview](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner) + | + +### How Does Harness Downsize Old ASGs?
+ +Harness identifies the ASGs it deploys using the Harness Infrastructure Definition used to deploy them. During deployments, Harness tags the new ASG with an Infrastructure Definition ID. + +It uses that ID to identify the previous ASG version(s), and downsize them as described below. + +Harness upscales and downsizes in two states, setup and deploy. + +* **Setup** — The setup state is when your new ASG is created. +* **Deploy** — The deploy phase(s) is when your new ASG is upscaled to the number of new instances you requested. This is either a fixed setting (Min, Max, Desired) or the same number as the previous ASG. + +Instances are always tied to their ASG. A new ASG does not take instances from an old ASG. During the setup state: + +* The previous ASG is kept with non-zero instance count. It is identified by its tag containing the Infrastructure Definition ID and the highest revision number, such as **\_7**. Any older ASGs are downsized to 0. +* New ASG is created with 0 count. +* For old ASGs that have 0 instances, Harness keeps the last 3 old ASGs and deletes the rest. + +During deploy phases: + +* New ASG is upscaled to the number of new instances you requested. +* Previous ASG is gradually downsized. In the case of a Canary deployment, the old ASG is downsized in the inverse proportion to the new ASG's upscale. If the new ASG is upscaled 25% in phase 1, the previous ASG is downsized 25%. + +At the end of deployment: + +* New ASG has the number of new instances you requested. In Canary, this is always 100%. +* Previous ASG is downsized to 0. + +#### Rollback + +If rollback occurs, the previous ASG is upscaled to its pre-setup number of instances using new instances. + +#### Don't Want a Previous ASG Downsized? + +As stated earlier, Harness identifies the ASGs it deploys using the Harness Infrastructure Definition used to deploy them. During deployments, Harness tags the new ASG with an Infrastructure Definition ID.
+ +It uses that ID to identify the previous ASG version(s), and downsize them as described above. + +If you do not want a previously deployed ASG to be downsized, then you must use a new Infrastructure Definition for future ASG deployments. A new ASG name is not enough. + +### Next Steps + +Read the following topics to build on what you've learned: + +* [AWS AMI Quickstart](https://docs.harness.io/article/wfk9o0tsjb-aws-ami-deployments) +* [AWS AMI How-tos](https://docs.harness.io/category/aws-ami-deployments) + diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/aws-ecs-deployments-overview.md b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/aws-ecs-deployments-overview.md new file mode 100644 index 00000000000..067f2b14788 --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/aws-ecs-deployments-overview.md @@ -0,0 +1,81 @@ +--- +title: AWS ECS Deployments Overview +description: A summary of Harness AWS ECS implementation. +sidebar_position: 60 +helpdocs_topic_id: 5z2kw34d7x +helpdocs_category_id: vbcmo6ltg7 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes the concept of a Harness ECS deployment by describing the high-level steps involved. + +For a quick tutorial, see [AWS ECS Quickstart](https://docs.harness.io/article/j39azkrevm-aws-ecs-deployments). + +For detailed instructions on using ECS in Harness, see the [AWS ECS How-tos](https://docs.harness.io/category/aws-ecs-deployments). + +### Before You Begin + +Before learning about Harness ECS deployments, you should have an understanding of [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts). + +### What Does Harness Need Before You Start? + +A Harness ECS deployment requires the following: + +1. Artifact: For example, a Docker image of NGINX from Docker Hub. +2. 
One or more existing ECS clusters: + * You will need an ECS cluster to deploy your ECS services using Harness. + * If you use a Harness ECS Delegate (recommended), you will need an ECS cluster for the Delegate. The steps for setting up an ECS Delegate are in [AWS ECS Quickstart](https://docs.harness.io/article/j39azkrevm-aws-ecs-deployments). +3. IAM Role for the Harness Cloud Provider connection to AWS. The policies are listed in [AWS ECS Quickstart](https://docs.harness.io/article/j39azkrevm-aws-ecs-deployments). + +### What Does Harness Deploy? + +Harness takes the artifact, ECS task definition, and service specification you provide, and deploys the artifact as a task in a new ECS service in the target ECS cluster. + +### What Does a Harness ECS Deployment Involve? + +The following list describes the major steps of a Harness ECS deployment: + + + +| | | | +| --- | --- | --- | +| **Step** | **Name** | **Description and Links** | +| 1 | Install the Harness ECS **Delegate** in your ECS cluster. | Typically, the ECS Delegate is installed in the target cluster where you will deploy your application(s). | +| 2 | Add a Harness **Artifact Server**. | For example, a Docker Registry Artifact Server that connects to the Docker registry where your Docker images are located, or the public Docker Hub. | +| 3 | Add an **AWS** **Cloud Provider**. | An AWS Cloud Provider is a connection to your AWS account. The AWS Cloud Provider is used to deploy the ECS services to the ECS cluster. | +| 4 | Create the Harness **Application** for your ECS CD Pipeline. | The Harness Application represents a group of microservices, their deployment pipelines, and all the building blocks for those pipelines. Harness represents your release process using a logical group of one or more entities: Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CD.
| +| 5 | Create the Harness **Service** using the Amazon EC2 Container Services (ECS) Deployment Type. | Add your ECS specs and any config variables and files. You can define specs for the following:
•  Replica Strategy
•  Daemon Strategy
•  awsvpc Network Mode
•  Service Discovery | +| 6 | Create the Harness **Environment** and Infrastructure Definition for your target ECS cluster(s), and any overrides. | Using the Harness Cloud Provider you set up, you can select the target ECS cluster for your deployment. You can also override any Service settings, such as Service Specification values. This enables you to use a single Service with multiple Harness Environments. | +| 7 | Create the Basic, Canary, or Blue/Green deployment Harness **Workflow**. | The Workflow deploys the artifact(s) and ECS services defined in the Harness Service to the cluster in the Harness Infrastructure Definition. | +| 8 | Deploy the Workflow. | Once you've deployed a Workflow, learn how to improve your ECS CD:
•  [Workflows](https://docs.harness.io/article/m220i1tnia-workflow-configuration)
•  [Triggers](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2)
•  [Infrastructure Definitions](https://docs.harness.io/article/v3l3wqovbe-infrastructure-definitions) | + +### Component Overview + +The following table lists the ECS components and where they are set up in Harness, as well as the related Harness components that perform ECS deployment operations. For detailed explanations of ECS, see the [ECS Developer Guide](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) from Amazon. + + + +| | | | +| --- | --- | --- | +| **Component** | **Description** | **Harness Location** | +| Harness Delegate | A software service you run in the same VPC as the ECS cluster to enable Harness to perform deployment operations. The Delegate does not need root privileges, and it only makes an outbound HTTPS connection to the Harness platform. | [AWS ECS Quickstart](https://docs.harness.io/article/j39azkrevm-aws-ecs-deployments) | +| Harness Cloud Provider | A Cloud Provider is a logical representation of your AWS infrastructure. Typically, a Cloud Provider is mapped to an AWS account, Kubernetes cluster, Google service account, Azure subscription, or a data center. | [AWS ECS Quickstart](https://docs.harness.io/article/j39azkrevm-aws-ecs-deployments) | +| ECS Task Definition | Describes the Docker containers to run (CPU, memory, environment variables, ports, etc.) and represents your application. | Specified in the Harness Service, in Container Specification. | +| ECS Task | Instance of a Task Definition. Multiple Tasks can be created by one Task Definition, as demand requires. | | +| ECS Service | Defines the minimum and maximum Tasks from one Task Definition to run at any given time, autoscaling, and load balancing. | This is specified in the Harness Service, in Service Specification. | +| ECS Cluster | A Cluster is a group of ECS Container Instances where you run your service tasks in order for them to be accessible.
The container management service handles the cluster across one or more ECS Container Instance(s), including scheduling, maintenance, and scaling requests to these instances. | ECS Clusters are selected in two Harness components:

•  The AWS Cloud Provider, via the IAM role for Delegate option.

•  The Harness Application Environment, where you select the AWS Cloud Provider and your ECS cluster name. | +| Launch Types | There are two types:

•  Fargate - Run containers without having to manage servers or clusters of Amazon EC2 instances.

•  EC2 - Run containers on a cluster of Amazon EC2 instances that you manage. | You specify the launch type to use when adding a Service Infrastructure to a Harness Environment. | +| Replica Scheduling Strategy | Places and maintains the desired number of tasks across your cluster. | This is specified in the Harness Service, in Service Specification. | +| Daemon Scheduling Strategy | As of July 2018, ECS has a daemon scheduling strategy that deploys exactly one task on each active container instance that meets all of the task placement constraints that you specify in your cluster. With a daemon strategy, a task is deployed onto each instance in a cluster to provide common supporting functionality. | This is specified in the Harness Service, in Service Specification. | +| awsvpc Network Mode | Provides each task with its own elastic network interface. Fargate task definitions require the awsvpc network mode. | | +| Service Discovery | An ECS service can use ECS Service Discovery to manage HTTP and DNS namespaces for ECS services via AWS Cloud Map API actions. | This is specified in the Harness Service, in Service Specification. | +| Auto Scaling | Auto Scaling adjusts the ECS desired count up or down in response to CloudWatch alarms. | This is specified in the Harness Workflow ECS Service Setup command. | + +### Next Steps + +Read the following topics to build on what you've learned: + +* [AWS ECS Quickstart](https://docs.harness.io/article/j39azkrevm-aws-ecs-deployments). +* [AWS ECS How-tos](https://docs.harness.io/category/aws-ecs-deployments). 
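To make the ECS Task Definition component described above concrete, here is a minimal sketch of a task definition in the YAML form accepted by `aws ecs register-task-definition --cli-input-yaml`. The family, image, and port values are placeholders, not values from the Harness docs:

```yaml
# Minimal ECS task definition sketch (illustrative values only).
family: my-web-app
networkMode: awsvpc            # required for the Fargate launch type
requiresCompatibilities:
  - FARGATE
cpu: "256"                     # task-level CPU units
memory: "512"                  # task-level memory (MiB)
containerDefinitions:
  - name: web
    image: nginx:stable        # the container artifact being deployed
    essential: true
    portMappings:
      - containerPort: 80
        protocol: tcp
    environment:
      - name: STAGE
        value: dev
```

In Harness, these settings correspond to the Container Specification in the Harness Service, as noted in the table above.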
+ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/aws-lambda-deployments-overview.md b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/aws-lambda-deployments-overview.md new file mode 100644 index 00000000000..9aa169b3265 --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/aws-lambda-deployments-overview.md @@ -0,0 +1,73 @@ +--- +title: AWS Lambda Deployments Overview +description: A summary of Harness AWS Lambda implementation. +sidebar_position: 70 +helpdocs_topic_id: 96mqftt93v +helpdocs_category_id: vbcmo6ltg7 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/5fnx4hgwsa). This topic describes the concept of a Harness AWS Lambda deployment by describing the high-level steps involved. + +For a quick tutorial, see the [AWS Lambda Quickstart](https://docs.harness.io/article/wy1rjh19ej-aws-lambda-deployments). + +### Before You Begin + +Before learning about Harness AWS Lambda deployments, you should have an understanding of [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts). + +### What Does Harness Need Before You Start? + +A Harness AWS Lambda deployment requires the following: + +* AWS account - An AWS account you can connect to Harness. +* Lambda function file - A function file stored on an artifact server, typically AWS S3. 
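Since a Harness Lambda deployment starts from a function file in S3, it can help to see the shape of the data involved. The sketch below is an illustrative input for the AWS `lambda create-function` call (usable with the AWS CLI's `--cli-input-yaml`); the function, bucket, key, role, and handler names are all placeholders:

```yaml
# Illustrative CreateFunction input (all names are placeholders).
FunctionName: my-function
Runtime: python3.9
Role: arn:aws:iam::123456789012:role/my-lambda-role   # execution role
Handler: app.handler             # file.function entry point
Code:
  S3Bucket: my-artifact-bucket   # artifact server (S3) holding the zip
  S3Key: builds/my-function-25.zip
MemorySize: 128
Timeout: 30
Publish: true                    # publish a new version, enabling rollback
```

Harness gathers the equivalent settings for you — the function file location from the artifact source and the compute settings from the Service — so you do not hand-author this call.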
+ +### Artifact Source Support + +Harness supports the following artifact sources with Lambda: + +* [Jenkins](https://docs.harness.io/article/qa7lewndxq-add-jenkins-artifact-servers) +* [Artifactory](https://docs.harness.io/article/nj3p1t7v3x-add-artifactory-servers) +* [AWS S3](../../aws-deployments/lambda-deployments/1-delegate-and-connectors-for-lambda.md) +* [Nexus](https://docs.harness.io/article/rdhndux2ab-nexus-artifact-sources) +* [Custom Artifact Source](https://docs.harness.io/article/jizsp5tsms-custom-artifact-source) + +### What Does Harness Deploy? + +Setting up a Lambda deployment is as simple as adding your function zip file, configuring function compute settings, and adding aliases and tags. Harness takes care of the rest of the deployment, making it consistent, reusable, and safe with automatic rollback. + +[![](./static/aws-lambda-deployments-overview-24.png)](./static/aws-lambda-deployments-overview-24.png) + +Basically, the Harness setup for Lambda is akin to using the AWS CLI [aws lambda](https://docs.aws.amazon.com/cli/latest/reference/lambda/index.html#cli-aws-lambda) [create-function](https://docs.aws.amazon.com/cli/latest/reference/lambda/create-function.html), [update-function-code](https://docs.aws.amazon.com/cli/latest/reference/lambda/update-function-code.html), and [update-function-configuration](https://docs.aws.amazon.com/cli/latest/reference/lambda/update-function-configuration.html) commands, as well as the many other commands a deployment requires. + +The benefit of Harness is that you can set up your Lambda deployment once, with no scripting, and then have your Lambda functions deployed automatically as they are updated in your AWS S3 bucket. You can even templatize the deployment Environment and Workflow for use by other DevOps engineers and developers on your team. + +Furthermore, Harness manages Lambda function versioning to perform rollback when needed. + +### What Does a Harness AWS Lambda Deployment Involve? 
+ +The following list describes the major steps of a Harness AWS Lambda deployment: + + + +| | | | +| --- | --- | --- | +| **Step** | **Name** | **Description and Links** | +| 1 | Install the Harness Shell Script or ECS **Delegate** in AWS. | Typically, the Shell Script or ECS Delegate is installed in the same AWS VPC as your Lambda functions. When you set up a Harness AWS Cloud Provider, you can use the same IAM credentials as the installed Delegate. The IAM role you assign to the Delegate requires the standard Lambda Permissions. See [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation). See [Lambda Permissions](https://docs.aws.amazon.com/lambda/latest/dg/lambda-permissions.html) from AWS. | +| 2 | Add an **AWS** **Cloud Provider**. | An AWS Cloud Provider is a connection to your AWS account. If you use AWS S3 to store your Lambda function files, the AWS Cloud Provider is used to obtain the Lambda function file from AWS S3. The AWS Cloud Provider is also used to connect to Lambda and deploy your function. When you set up a Harness AWS Cloud Provider, you can use the same IAM credentials as the installed Delegate. See [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers). | +| 3 | Create the Harness **Application** for your Lambda CD Pipeline. | The Harness Application represents a group of microservices, their deployment pipelines, and all the building blocks for those pipelines. Harness represents your release process using a logical group of one or more entities: Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CD. See [Create an Application](https://docs.harness.io/article/bucothemly-application-configuration). | +| 4 | Create the Harness **Service** using the **AWS Lambda** Deployment Type. 
| Add a Lambda function file as an artifact in a Harness Service, and define a function specification and any config variables and files. See [Services for Lambda](../../aws-deployments/lambda-deployments/2-service-for-lambda.md). | +| 5 | Create the Harness **Environment** and Infrastructure Definition for your deployment, and any overrides. | Using the Harness AWS Cloud Provider you set up, you can select the IAM role, region, and other components of the target environment for your deployment. You can also override any Service settings, such as config variables and files. This enables you to use a single Service with multiple Harness Environments. See [Define Your Kubernetes Target Infrastructure](../../kubernetes-deployments/define-your-kubernetes-target-infrastructure.md). | +| 6 | Create the Basic deployment for Lambda in Harness **Workflows**. | The Workflow deploys the Lambda function as defined in the Harness Service to the AWS Lambda environment in the Harness Infrastructure Definition. See [Lambda Workflows and Deployments](../../aws-deployments/lambda-deployments/4-lambda-workflows-and-deployments.md). | +| 7 | Deploy the Workflow. 
| Once you've deployed a Workflow, learn how to improve your AWS Lambda CD:* [Deploy Individual Workflows](https://docs.harness.io/article/5ffpvrohi3-deploy-a-workflow) +* [Triggers](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2) +* [Infrastructure Provisioners Overview](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner) + | + +### Next Steps + +Read the following topics to build on what you've learned: + +* [AWS Lambda Quickstart](https://docs.harness.io/article/wy1rjh19ej-aws-lambda-deployments) tutorial + diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/azure-arm-and-blueprint-provision-with-harness.md b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/azure-arm-and-blueprint-provision-with-harness.md new file mode 100644 index 00000000000..d8f07e345ad --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/azure-arm-and-blueprint-provision-with-harness.md @@ -0,0 +1,108 @@ +--- +title: Azure ARM and Blueprint Provisioning with Harness +description: Harness has first-class support for Azure Resource Manager (ARM) templates and Azure Blueprints as infrastructure provisioners. You can use ARM templates to provision the deployment target environmen… +# sidebar_position: 2 +helpdocs_topic_id: c7bzn7vjwn +helpdocs_category_id: vbcmo6ltg7 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness has first-class support for [Azure Resource Manager (ARM) templates](https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/overview) and Azure [Blueprints](https://docs.microsoft.com/en-us/azure/governance/blueprints/overview) as infrastructure provisioners. + +You can use ARM templates to provision the deployment target environment in Azure, or to simply provision any Azure infrastructure. + +You can use Blueprints to provision Azure resources that adhere to your organization's standards, patterns, and requirements. 
You can package ARM templates, resource groups, policy and role assignments, and much more into a Blueprint. See [this video](https://www.youtube.com/watch?v=cQ9D-d6KkMY) from Microsoft Developer for more details. + +This topic provides a high-level summary of how to use Harness to provision a target environment using ARM, or to simply provision resources using ARM or Blueprint. + +Looking for How-tos? See [Azure Resource Management (ARM) How-tos](https://docs.harness.io/article/qhnnq1mks3-azure-arm-and-blueprint-how-tos). + +### Provision and Deploy to the Same Infrastructure using ARM + +Here's a short video showing how to provision and deploy to the same Azure infrastructure using ARM and Harness: + + + + +Here's a visual summary of how you use your Azure ARM templates in Harness to provision infrastructure and then deploy to it: + + +![](./static/azure-arm-and-blueprint-provision-with-harness-19.png) + +1. **ARM Infrastructure Provisioner**: add your Azure ARM template as a Harness Infrastructure Provisioner. You add it by connecting to the Git repo for the ARM template. You also set the scope (Resource group, Tenant, and so on). You can also enter the ARM template inline without connecting to a Git repo. +2. **Infrastructure Definition**: define a Harness Infrastructure Definition using the Infrastructure Provisioner. This setup identifies the ARM template's resources as a deployment target. +3. **Workflow Setup:** when you create your Workflow, you select the Infrastructure Definition you created, identifying it as the target infrastructure for the Workflow deployment. +4. **Workflow Provisioner Step:** in the Workflow pre-deployment section, you add an **ARM/Blueprint Create Resource** step that uses the ARM Infrastructure Provisioner you set up. The Workflow will build the infrastructure according to your ARM template. +5. **Pre-deployment**: the pre-deployment steps are executed and provision the infrastructure using the **ARM/Blueprint Create Resource** step. +6. 
**Deployment:** the Workflow deploys to the provisioned infrastructure defined as its target Infrastructure Definition. + +### General Provisioning using ARM and Blueprint + +Here's a short video showing how to provision Azure infrastructure using ARM and Harness: + + + + +Here's a short video showing how to provision Azure infrastructure using Blueprint and Harness: + + + + +You can use Azure ARM templates/Blueprint definitions in Harness for general Azure provisioning. + +1. **ARM/Blueprint Infrastructure Provisioner**: add your Azure ARM template or Blueprint definition as a Harness Infrastructure Provisioner. +2. **Workflow Provisioner Step**: create a Workflow and add an **ARM/Blueprint Create Resource** step in its pre-deployment section that uses the Infrastructure Provisioner. You can use the rest of the Workflow to deploy services, or just omit any further phases and steps. +3. **Deploy:** the Workflow will build the Azure resources according to your ARM template/Blueprint definition. + +![](./static/azure-arm-and-blueprint-provision-with-harness-20.png) + +### Limitations + +For ARM, see [Azure Resource Management (ARM) How-tos](https://docs.harness.io/article/qhnnq1mks3-azure-arm-and-blueprint-how-tos). + +### Azure Roles Required + +See **Azure Resource Management (ARM)** in [Add Microsoft Azure Cloud Provider](https://docs.harness.io/article/4n3595l6in-add-microsoft-azure-cloud-provider). + +See **Azure Blueprint** in [Add Microsoft Azure Cloud Provider](https://docs.harness.io/article/4n3595l6in-add-microsoft-azure-cloud-provider). + +### Permissions Summary + +You need to give Harness permissions in your Azure subscription so Harness can provision using ARM/Blueprint. These are the same permissions you'd need to grant Harness for existing static infrastructures. + +As a summary, you'll need to manage the following permissions: + +* **Delegate** - The Harness Delegate will require permissions to create resources in Azure. 
It'll use the credentials you provide in the Harness Azure Cloud Provider. +* **Azure** **Cloud Provider** - The Harness Azure Cloud Provider must have permissions for the resources you are planning to provision using ARM/Blueprint. +See [Add Microsoft Azure Cloud Provider](https://docs.harness.io/article/4n3595l6in-add-microsoft-azure-cloud-provider). +* **Git Repo** - You'll add the Git repo where the ARM templates or Blueprints are located to Harness as a Source Repo Provider. For more information, see  [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). + +#### Harness User Group Permissions Required + +To set up a Harness ARM/Blueprint Provisioner, your Harness User account must belong to a User Group with the following Application Permissions: + +* **Permission Type:** `Provisioners`. +* **Application:** one or more Applications. +* **Filter:** `All Provisioners`. +* **Action:** `Create, Read, Update, Delete`. + +### No Artifact Required + +You don't need to deploy artifacts via Harness Services to use Azure ARM/Blueprint provisioning in a Workflow. + +You can simply set up an Azure ARM/Blueprint Provisioner and use it in a Workflow to provision infrastructure without deploying any artifact. + +### Service Instances (SIs) Consumption + +Harness Service Instances (SIs) aren't consumed and no other licensing is required when a Harness Workflow uses Azure ARM/Blueprint to provision resources. + +When Harness deploys artifacts via Harness Services to the provisioned infrastructure in the same Workflow or Pipeline, SIs licensing is consumed. 
+ +### Next Steps + +* [Azure ARM and Blueprint How-tos](https://docs.harness.io/article/qhnnq1mks3-azure-arm-and-blueprint-how-tos) + diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/azure-kubernetes-service-aks-deployments-overview.md b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/azure-kubernetes-service-aks-deployments-overview.md new file mode 100644 index 00000000000..06a82602989 --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/azure-kubernetes-service-aks-deployments-overview.md @@ -0,0 +1,67 @@ +--- +title: Azure Kubernetes Service (AKS) Deployments Overview +description: A summary of Harness AKS implementation. +# sidebar_position: 2 +helpdocs_topic_id: brwfq82umt +helpdocs_category_id: vbcmo6ltg7 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/m7nkbph0ac). This topic describes the concept of a Harness Azure Kubernetes Service (AKS) deployment by describing the high-level steps involved. + +For a vendor-agnostic Harness Kubernetes deployment, see our [Kubernetes Deployments Overview](kubernetes-overview.md) doc. For a quick tutorial, see the [Kubernetes Quickstart](https://docs.harness.io/article/7in9z2boh6-kubernetes-quickstart). + +For detailed instructions on using AKS in Harness, see the [Azure How-tos](https://docs.harness.io/category/azure-deployments-and-provisioning). + +This guide covers the new **Version 2** features of Harness' Kubernetes implementation for AKS. For Version 1 Kubernetes, see [Harness Kubernetes v1 FAQ](https://docs.harness.io/article/dtu3ud1ok7-kubernetes-and-harness-faq). For Helm deployment features, see [Helm Quickstart](https://docs.harness.io/article/2aaevhygep-helm-quickstart). 
+ +### Before You Begin + +Before learning about Harness Azure Kubernetes Service (AKS) deployments, you should have an understanding of [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts). + +### What Does Harness Need Before You Start? + +A Harness AKS deployment requires the following: + +1. Artifact: For example, a Docker image on Azure ACR. +2. Kubernetes cluster: You will need a target cluster for the Harness Delegate, your application, and your Kubernetes workloads. A Kubernetes Delegate requires at least 8GB RAM, and so your cluster should have enough RAM to host the Delegate and your applications and workloads. + +### What Does Harness Deploy? + +Harness takes the artifacts and Kubernetes manifests you provide and deploys them to the target AKS cluster. You can simply deploy Kubernetes objects via manifests and you can provide manifests using remote sources and Helm charts. + + + +| | | +| --- | --- | +| Azure deployment in Harness Manager | The same deployment in Kubernetes Dashboard | +| | | + +See [What Can I Deploy in Kubernetes?](https://docs.harness.io/article/6ujb3c70fh). + +### What Does a Harness AKS Deployment Involve? + +The following list describes the major steps of a Harness AKS deployment: + + + +| | | | +| --- | --- | --- | +| **Step** | **Name** | **Description and Links** | +| 1 | Install the Harness Kubernetes **Delegate** in your AKS cluster.  | Typically, the Kubernetes Delegate is installed in the target AKS cluster where you will deploy your application(s). | +| 2 | Add a Harness **Artifact Server** or Harness **Azure Cloud Provider**. | The Harness **Artifact Server** is for artifact servers such as a Docker Registry Artifact Server. Harness connects to the Docker registry where your Docker images are located, or the public Docker Hub.If you want to obtain your artifacts from an ACR container, use an **Azure Cloud Provider**. | +| 3 | Add a **Kubernetes Cluster** **Cloud Provider**. 
| A Cloud Provider is a connection to your Kubernetes cluster. You can add a Kubernetes Cluster Cloud Provider (recommended) or a Cloud Provider for the cloud platform where the cluster is hosted, such as an Azure Cloud Provider. A Kubernetes Cluster Cloud Provider will connect to any cluster on any platform. If you use a Kubernetes Cluster Cloud Provider, you can use the Delegate installed in your cluster for authentication. | +| 4 | Create the Harness **Application** for your AKS CD Pipeline. | The Harness Application represents a group of microservices/containers, their deployment pipelines, and all the building blocks for those pipelines. Harness represents your release process using a logical group of one or more entities: Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CD. | +| 5 | Create the Harness **Service** using the **Kubernetes** Deployment Type. | Add your Kubernetes manifests and any config variables and files. You can use remote manifests stored in a source repo or Helm charts in a Helm repo. | +| 6 | Create the Harness **Environment** and Infrastructure Definition for your target AKS cluster, and any overrides. | Using the Harness Cloud Provider you set up, you can select the target AKS cluster and namespace for your deployment. You can also override any Service settings, such as manifest values. This enables you to use a single Service with multiple Harness Environments. | +| 7 | Create the Canary, Blue/Green, or Rollout deployment Harness **Workflow**. | The Workflow deploys the artifact(s) and Kubernetes workloads defined in the Harness Service to the cluster and namespace in the Harness Infrastructure Definition. See [Azure Workflows and Deployments](../../azure-deployments/aks-howtos/4-azure-workflows-and-deployments.md). For additional Workflows, see the vendor-agnostic steps in the following:
•  [Create a Kubernetes Canary Deployment](../../kubernetes-deployments/create-a-kubernetes-canary-deployment.md)
•  [Create a Kubernetes Blue/Green Deployment](../../kubernetes-deployments/create-a-kubernetes-blue-green-deployment.md)
•  [Create a Kubernetes Rolling Deployment](../../kubernetes-deployments/create-a-kubernetes-rolling-deployment.md) | +| 8 | Deploy the Workflow. | Once you've deployed a Workflow, learn how to improve your AKS CD:
•  [Workflows](https://docs.harness.io/article/m220i1tnia-workflow-configuration)
•  [Triggers](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2)
•  [Infrastructure Provisioners Overview](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner) | + +### Next Steps + +Read the following topics to build on what you've learned: + +* [Azure (AKS) How-tos](https://docs.harness.io/category/azure-deployments-and-provisioning) +* [Kubernetes Quickstart](https://docs.harness.io/article/7in9z2boh6-kubernetes-quickstart) + diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/ci-cd-with-the-build-workflow.md b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/ci-cd-with-the-build-workflow.md new file mode 100644 index 00000000000..186f9896969 --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/ci-cd-with-the-build-workflow.md @@ -0,0 +1,93 @@ +--- +title: CI/CD with the Build Workflow +description: Build and collect artifacts for CI/CD. +sidebar_position: 20 +helpdocs_topic_id: wqytbv2bfd +helpdocs_category_id: vbcmo6ltg7 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Unlike other Workflow types, Build Workflows do not perform deployments. Build Workflows build and collect a specific artifact version and pass it on to the subsequent Workflows in the Pipeline: + +1. First it runs the build process and builds the artifact version via Jenkins, Bamboo, Shell Script, or any other CI tools. It receives the new version number from the build process. +2. Next it collects the artifact of that version number from the artifact repository. +3. Finally it passes the new build/version number to the subsequent deployment Workflow in the Pipeline. + +Build Workflows enable you to model your entire CI/CD process in one place: Harness. + +This topic discusses the concept of a Harness CI/CD process using the Build Workflow. Use the [CI/CD: Artifact Build and Deploy Pipelines](https://docs.harness.io/category/cicd-artifact-build-and-deploy-pipelines) guide for a step-by-step walkthrough of a CI/CD deployment. 
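The three-step build-and-collect flow described above can be sketched as a pipeline outline. This is purely illustrative pseudo-configuration — it is not Harness Workflow syntax — and every name and expression in it is invented:

```yaml
# Illustrative outline only — not actual Harness Pipeline syntax.
pipeline: ci-cd-example
stages:
  - name: build                   # Build Workflow acting as the CI stage
    steps:
      - jenkinsJob: build-my-app  # runs the build, returns a build number
      - artifactCollection:       # collects that version from the repo
          source: my-artifact-repo
          buildNo: ${jenkins.buildNumber}   # hypothetical expression
  - name: deploy-dev              # subsequent deployment Workflows pick
    artifact: ${collectedArtifact}          # up the collected version
  - name: deploy-qa
  - name: deploy-prod
```

The only point of the sketch is the ordering: the build stage produces and collects a version, and each later stage consumes it.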
+ +### Without a Build Workflow + +Without a Build Workflow, when you trigger a Harness Pipeline (manually or through an automated Trigger), you must provide the build/version number of the artifact to be deployed. + +![](./static/ci-cd-with-the-build-workflow-26.png) + +This flow works on the assumption that the artifact version already exists in the artifact repository attached to the Harness Service being deployed. + +### Coupling CI and CD with the Build Workflow + +In some deployment scenarios, the artifact version you want to deploy hasn't been built. To build it, a CI job has to run before the Harness Pipeline can deploy. + +Rather than having a decoupled CI and CD process, Harness provides the Build Workflow that acts as a CI job/stage in the [CI/CD Pipeline](../../build-deploy/build-and-deploy-pipelines-overview.md) in Harness. + +#### When Do I Use a Build Workflow? + +When you want to view your entire CI/CD flow in the Harness dashboard, use the Build Workflow as a proxy for the CI stage of the CI/CD pipeline. + +You can trigger a Pipeline in Harness in many ways, such as a code commit in your Git repository. The [Harness Trigger](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2) will initiate a Pipeline in Harness with the Build Workflow as the first stage in the Pipeline. + +### Summary of CI/CD Pipeline + +The [CI/CD: Artifact Build and Deploy Pipelines](https://docs.harness.io/category/cicd-artifact-build-and-deploy-pipelines) guide provides a lengthy walkthrough of a CI/CD deployment, but let's look at a quick summary. + +Here is a simple Harness CI/CD Pipeline: + +![](./static/ci-cd-with-the-build-workflow-27.png) + +Let’s deploy this Pipeline. + +The first thing you will notice is that we are not prompted for a build/version number to deploy. + +If you use a Build Workflow in a Pipeline, you cannot select an artifact when you deploy the Pipeline. 
A Build Workflow tells Harness you will be building the artifact for deployment as part of the Pipeline. Harness will use that artifact for the Pipeline deployment. + +![](./static/ci-cd-with-the-build-workflow-28.png) + +Here's the output of the Build Workflow **Jenkins** step. The Jenkins step triggered a Jenkins job and it was successful. + +![](./static/ci-cd-with-the-build-workflow-29.png) + +The deployment shows the new build number (**25**) and also a URL back to the Jenkins Job. + +The **Jenkins output** step in the Workflow is echoing the value of the new build number: + +![](./static/ci-cd-with-the-build-workflow-30.png) + +The **Artifact Collection** step then collects that artifact and makes it available for deployment in Harness. + +![](./static/ci-cd-with-the-build-workflow-31.png) + +Now that the Build Workflow is done, the Pipeline moves on to the deployment Workflows: Dev, QA, and Prod. + +You can see below that the deployment Workflows automatically pick up the new Build/Version number (**25**) for deployment: + +* Dev: +![](./static/ci-cd-with-the-build-workflow-32.png) +* QA: +![](./static/ci-cd-with-the-build-workflow-33.png) +* Prod: +![](./static/ci-cd-with-the-build-workflow-34.png) + +### Summary + +Harness Build Workflows build and collect a specific artifact version and pass it forward to the subsequent Workflows in the Pipeline. They enable you to model your entire CI/CD process in one place: Harness. + +### Notes + +Build Workflows do not use Harness Services. Consequently, Service variables and Service variable overrides cannot be used in a Build Workflow. + +### Next Steps + +Use the [CI/CD: Artifact Build and Deploy Pipelines](https://docs.harness.io/category/cicd-artifact-build-and-deploy-pipelines) guide to walk through a CI/CD deployment. 
+ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/cloud-formation-provisioning-with-harness.md b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/cloud-formation-provisioning-with-harness.md new file mode 100644 index 00000000000..541ecb9096f --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/cloud-formation-provisioning-with-harness.md @@ -0,0 +1,55 @@ +--- +title: CloudFormation Provisioning with Harness (FirstGen) +description: Use AWS CloudFormation to provision infrastructure as part of your deployment process. +# sidebar_position: 2 +helpdocs_topic_id: qj0ems5hmg +helpdocs_category_id: vbcmo6ltg7 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/vynj4hxt98). Harness lets you use AWS CloudFormation to provision infrastructure as part of your deployment process. Harness can provision any resource that is supported by [CloudFormation](https://aws.amazon.com/cloudformation/). + +In this topic: + +* [Limitations](#limitations) +* [CloudFormation Implementation Summary](#cloud_formation_implementation_summary) +* [Permissions](#permissions) +* [No Artifact Required](#no_artifact_required) +* [Service Instances (SIs) Consumption](#service_instances_s_is_consumption) + +### Limitations + +* Harness CloudFormation integration does not support AWS Serverless Application Model (SAM) templates. Only standard [AWS CloudFormation templates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-whatis-concepts.html#w2ab1b5c15b7) are supported. +* Harness Infrastructure Provisioners are only supported in Canary and Multi-Service deployment types. For AMI/ASG and ECS deployments, Infrastructure Provisioners are also supported in Blue/Green deployments. 
+ +### CloudFormation Implementation Summary + +You use a CloudFormation Infrastructure Provisioner in the following ways: + +1. **CloudFormation Infrastructure Provisioner** — Add a Harness Infrastructure Provisioner as a blueprint for the infrastructure where you will deploy your application. +You add the CloudFormation template by pasting it into the Infrastructure Provisioner, or by connecting to an AWS S3 bucket or Git repo where the CloudFormation templates are kept. +You simply need to map some of the output variables in the template to the required fields in Harness. When Harness deploys your microservice, it will build your infrastructure according to this blueprint. +2. **Infrastructure Definition** — In a Harness Infrastructure Definition, the outputs are mapped as part of the Infrastructure Definition: + ![](./static/cloud-formation-provisioning-with-harness-01.png) + The provisioned environment is now a deployment target environment for a Workflow to use. You can use this Infrastructure Definition in any Workflow where you want to deploy to that provisioned deployment infrastructure. +3. **Workflow Step** — Add a CloudFormation Provisioner step to a Workflow to build the infrastructure according to your CloudFormation Provisioner and its template. + +### Permissions + +The permissions required for Harness to use your provisioner and successfully deploy to the provisioned instances depend on the deployment platform you provision. The permissions are discussed in this topic in the configuration steps where they are applied, but, as a summary, you will need to manage the following permissions: + +* **Delegate** - The Delegate will require permissions according to the deployment platform. It will use the access, secret, and SSH keys you configure in Harness [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management) to perform deployment operations. For ECS Delegates, you can add an IAM role to the ECS Delegate task definition. 
For more information, see [Trust Relationships and Roles](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#trust_relationships_and_roles). +* **Cloud Provider** - The AWS Cloud Provider must have **create** permissions for the resources you are planning to create in the CloudFormation template. For Harness AWS Cloud Providers, you can install the Delegate in your AWS VPC and have the Cloud Provider assume the permissions used by the Delegate. + +The account used for the Cloud Provider will require platform-specific permissions for creating infrastructure. For example, to create EC2 AMI instances the account requires the **AmazonEC2FullAccess** policy. +* **S3 Bucket** - You can use an AWS S3 bucket to point to the provisioner template. The AWS Cloud Provider can also be used to access S3. The IAM role used by the Cloud Provider simply needs the S3 Bucket policy, described in [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers#amazon_s3). +* **Access and Secret Keys** - These are set up in Harness [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management) and then used as input values when you add a CloudFormation Provisioner step to a Workflow. + +### No Artifact Required + +You do not need to deploy artifacts via Harness Services to use CloudFormation provisioning in a Workflow. You can simply set up a CloudFormation Provisioner and use it in a Workflow to provision infrastructure without deploying any artifact. In Harness documentation, we include artifact deployment as it is the ultimate goal of Continuous Delivery. + +### Service Instances (SIs) Consumption + +Harness Service Instances (SIs) are not consumed and no additional licensing is required when a Harness Workflow uses CloudFormation to provision resources. When Harness deploys artifacts via Harness Services to the provisioned infrastructure in the same Workflow or Pipeline, SI licensing is consumed. 
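To make the output-variable mapping described in this topic concrete, here is a minimal, hypothetical CloudFormation template. The parameter, resource, and output names, and the placeholder AMI ID, are illustrative only; your own template defines the outputs you actually map in Harness:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Hypothetical template that provisions one EC2 instance as a deployment target.
Parameters:
  KeyName:
    Type: AWS::EC2::KeyPair::KeyName
    Description: SSH key pair for the provisioned instance.
Resources:
  DeployTarget:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0abcdef1234567890   # placeholder AMI ID, not a real image
      InstanceType: t2.micro
      KeyName: !Ref KeyName
Outputs:
  # Output variables like these are what you map to the required
  # fields in a Harness Infrastructure Definition.
  InstanceId:
    Value: !Ref DeployTarget
  Region:
    Value: !Ref AWS::Region
```

A template like this would be pasted into the Infrastructure Provisioner, or fetched from the S3 bucket or Git repo you connect, and its `Outputs` entries would appear as the variables you map in the Infrastructure Definition.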
+ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/deployment-concepts-and-strategies.md b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/deployment-concepts-and-strategies.md new file mode 100644 index 00000000000..ad6fcd9e27d --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/deployment-concepts-and-strategies.md @@ -0,0 +1,226 @@ +--- +title: Deployment Concepts and Strategies (FirstGen) +description: Quick overview of deployment strategies. +sidebar_position: 10 +helpdocs_topic_id: 325x7awntc +helpdocs_category_id: vbcmo6ltg7 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/0zsf97lo3c).You have likely heard terms like *blue/green* and *canary* when it comes to deploying code and applications into production. These are common deployment strategies, available in Harness as Workflow types, along with many others. + +[![](./static/deployment-concepts-and-strategies-02.png)](./static/deployment-concepts-and-strategies-02.png) + +This topic will explain these strategies to give you an idea of how to approach deployments in Harness, and to help you decide what strategy is best for you. + + +### Build Deployment + +A Build Deployment runs a build process, such as a Jenkins job that creates a WAR file and deposits it in a repo, or builds an AMI in AWS EC2. + +#### When to use Build Deployments + +Typically, you use Build deployments as part of an Artifact Build and Deploy pipeline. + +An Artifact Build and Deploy pipeline runs a build process, deposits the built artifact (or metadata) in the Artifact Source or Harness, and deploys the build to a deployment environment. It is a simple, but useful deployment commonly used for traditional file-based and AMI deployments. 
+ +See  [Build and Deploy Pipelines Overview](../../build-deploy/build-and-deploy-pipelines-overview.md). + +#### Build Workflow for Push Events + +Build Workflows can also be used to build an artifact when its source has been updated. + +For example, you might use a Trigger to execute the Workflow on a Webhook event, such as a Git push event. In this case, the artifact needs to be built before the Workflow can pick it up. + +You simply add a Build Workflow at the beginning of the Pipeline to build the artifact so you always have the latest build. + +See [Triggers](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2) for information on Webhook triggers. + +### Basic Deployment + +With Basic Deployment, all nodes within a single environment are updated at the same time with a single new service/artifact version. + +#### When to use Basic Deployments + +* Your app/service is not business, mission, or revenue critical +* You’re deploying off-hours and no one is using the app/service +* You’re experimenting with deployments and it's okay if the app/service fails + +##### Pros + +* Simple and fast. +* Useful for learning Harness. + +##### Cons + +* Risk, outages, slower rollback. + +Not too long ago, Basic deployment was how developers rolled out applications. Typically, someone in Ops updates the servers at midnight and then you hope all goes well. + +[![](./static/deployment-concepts-and-strategies-04.png)](./static/deployment-concepts-and-strategies-04.png) + +Basic deployments are supported in Harness for a number of platforms as a way for you to experiment with deployments. They are not intended for production deployments because they are not as safe as Canary or Blue/Green deployments. + +### Multi-Service Deployment + +With Multi-Service Deployment, all nodes within a single environment are updated at the same time with *multiple* new services/artifacts. + +#### When to use Multi-Service Deployments + +* When your app has service/version dependencies. 
+* You’re deploying off-hours and no one is using the app/service. + +##### Pros + +* Simple, fast, and with less risk than Basic deployment. + +##### Cons + +* Risk, difficult to test/verify all service dependencies, outages, slow rollback. + +[![](./static/deployment-concepts-and-strategies-06.png)](./static/deployment-concepts-and-strategies-06.png) + + + +### Rolling Deployment + +With a Rolling Deployment, all nodes within a single environment are incrementally updated one-by-one or in N batches (as defined by a window size) with a new service/artifact version. + +#### When to use Rolling Deployments + +* When you need to support both new and old deployments. +* Load balancing scenarios that require reduced downtime. + +One use of Rolling deployments is as the stage following a Canary deployment in a deployment pipeline. For example, in the first stage you can perform a Canary deployment to a QA environment and verify each group of nodes and, once successful, you perform a Rolling deployment to production. + +##### Pros + +* Simple, relatively simple to roll back, less risk than Basic deployment. +* Gradual app rollout with increasing traffic. + +##### Cons + +* Verification gates between nodes are difficult and slow. +* App/DB needs to support both new and old artifacts. Manual checks/verification at each increment could take a long time. +* Lost transactions and logged-off users are also something to take into consideration. + +[![](./static/deployment-concepts-and-strategies-08.png)](./static/deployment-concepts-and-strategies-08.png) + +See  [Kubernetes Rolling Update Workflows](https://docs.harness.io/article/5gouaz9w5r-kubernetes-rolling-update-workflows). + +### Blue/Green Deployment + +With Blue/Green Deployment, two identical environments called **blue** (staging) and **green** (production) run simultaneously with different versions of the service/artifact. + +QA and UAT are typically done on the blue environment. 
When satisfied, traffic is flipped (via a load balancer) from the green environment (current version) to the blue environment (new version). + +You can then decommission the old environment once deployment is successful. + +Some vendors call this a red/black deployment. + +#### When to use Blue/Green Deployments + +* When you want to perform verification in a full production environment. +* When you want zero downtime. + +##### Pros + +* Simple, fast, well understood, and easy to implement: the switch is almost instantaneous. +* Less risk relative to other deployment strategies. +* Rapid rollback (flip traffic back to the old environment). + +##### Cons + +* Replicating a production environment can be complex and expensive (e.g. microservice downstream dependencies). +* QA/UAT test coverage may not identify all anomalies & regressions in the blue environment. +* An outage or SPOF could have wide-scale business impact before rollback kicks in. +* Current transactions and sessions will be lost due to the physical switch from one machine serving the traffic to another one. +* Database compatibility (schema changes, backward compatibility). + +[![](./static/deployment-concepts-and-strategies-10.png)](./static/deployment-concepts-and-strategies-10.png) + +See: + +* [ECS Blue/Green Workflows](../../aws-deployments/ecs-deployment/ecs-blue-green-workflows.md) +* [AMI Blue/Green Deployment](../../aws-deployments/ami-deployments/ami-blue-green.md) +* [Kubernetes Blue/Green Workflows](https://docs.harness.io/article/zim6pw6hd5-blue-green-workflows) +* [Pivotal Cloud Foundry Deployments](../../pcf-deployments/pcf-tutorial-overview.md) + +### Canary Deployment + +With Canary Deployment, all nodes in a single environment are incrementally updated in small phases, with each phase requiring a verification/gate to proceed to the next phase. 
+ +#### When to use Canary Deployments + +When you want to verify whether the new version of the application is working correctly in your production environment. + +This is currently the most common way to deploy apps/services into production. + +**Pros:** + +* Deploy in small phases (e.g. 2%, 10%, 25%, 50%, 75%, 100%). +* Lowest risk relative to all other deployment strategies (reduce business exposure). +* Test in production with real users & use cases. +* Run & compare two service versions side-by-side. +* Cheaper than blue/green, because there is no need to have two production environments. +* Fast and safe rollback. + +**Cons:** + +* Scripting canary deployments can be complex (Harness automates this process). +* Manual verification can take time (Harness automates this process with Continuous Verification). +* Requires monitoring and instrumentation for testing in production (APM, Log, Infra, End User, etc.). +* Database compatibility (schema changes, backward compatibility). + +This is a standard Canary deployment: + +[![](./static/deployment-concepts-and-strategies-12.png)](./static/deployment-concepts-and-strategies-12.png) + +For Kubernetes, Harness does this a little differently. + +In Phase 1, we deploy the canary version to the same group, using separate instances and leaving the production version alone. At the end of Phase 1, we delete the canary version. + +In Phase 2, we do a rolling deployment with the new version and scale down the older version. + +![](./static/deployment-concepts-and-strategies-14.png) + +For examples, see: + +* [AMI Canary Deployment](../../aws-deployments/ami-deployments/ami-canary.md) +* [Create a Kubernetes Canary Deployment](../../kubernetes-deployments/create-a-kubernetes-canary-deployment.md) + +### A/B Testing + +Different versions of the same service/artifact run simultaneously as “experiments” in the same environment (typically production) for a period of time. 
Experiments are either controlled by the deployment of distinct artifacts or through the use of feature flags/toggling and/or A/B testing tools (e.g. Optimizely). + +User traffic is commonly routed to each different version/experiment based on specific rules or user demographics (e.g. location, interests, etc.). Measurements and comparisons are then performed across experiments to see which returned the best result. + +After experiments are concluded, the environment is typically updated with the optimal service version/experiment. + +The biggest difference between A/B testing and the other strategies is that A/B testing deploys many versions of the same service/artifact to an environment with no immediate goal of updating all nodes with a specific version. It’s about testing multiple ideas vs. deploying one specific tested idea. + +#### Pros + +A fast, easy, and cheap way to test new features in production. Lots of tools exist to enable this. + +#### Cons + +* Experiments can sometimes break the app/service/user experience. +* Scripting A/B tests can be complex. +* Database compatibility (schema changes, backward compatibility). + +[![](./static/deployment-concepts-and-strategies-15.png)](./static/deployment-concepts-and-strategies-15.png) + +### Which Deployment Strategy Should I Use? + +It depends entirely on the type of application/service and environment. Most Harness customers are currently using blue/green or canary deployments for mission-critical applications. + +In many cases, customers are migrating from blue/green to canary so they can test in production with minimal business impact. + +You can also combine many of the above deployment strategies into a single strategy. For example, at Harness, we have customers doing multi-service canary deployments. 
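The Kubernetes canary flow described in the Canary Deployment section above can be pictured as two workloads running side by side during Phase 1. The following is a hypothetical manifest fragment for illustration only (the names, labels, and image are made up); Harness generates and manages the actual workloads for you:

```yaml
# Phase 1: a small canary Deployment runs on separate instances
# while the production Deployment is left alone.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-canary        # hypothetical name
spec:
  replicas: 1                # a small slice of total capacity
  selector:
    matchLabels:
      app: my-app
      track: canary
  template:
    metadata:
      labels:
        app: my-app
        track: canary
    spec:
      containers:
        - name: my-app
          image: example/my-app:v2   # new version under test
```

After the canary phase is verified, the canary workload is deleted and Phase 2 performs a rolling update of the production workload to the new version.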
+ +### Next Steps + +* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) + diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/harness-hashi-corp-integrations.md b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/harness-hashi-corp-integrations.md new file mode 100644 index 00000000000..47f1e04f154 --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/harness-hashi-corp-integrations.md @@ -0,0 +1,28 @@ +--- +title: Harness HashiCorp Integrations +description: This content is for Harness FirstGen. Switch to NextGen. Harness integrates with the following HashiCorp offerings -- HashiCorp Terraform. Terraform allows infrastructure to be expressed as code in a s… +# sidebar_position: 2 +helpdocs_topic_id: 97wr4kz3ew +helpdocs_category_id: vbcmo6ltg7 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](../../../../getting-started/supported-platforms-and-technologies.md). Harness integrates with the following HashiCorp offerings: + +### HashiCorp Terraform + +Terraform allows infrastructure to be expressed as code in a simple, human-readable language. + +Harness allows you to use Terraform to provision deployment target infrastructure and any other components supported by Terraform. + +See: [Terraform Infrastructure Provisioner](https://docs.harness.io/category/terraform). + +### HashiCorp Vault + +Vault secures, stores, and tightly controls access to tokens, passwords, certificates, API keys, and other secrets in modern computing. + +Harness allows you to use HashiCorp Vault as your Harness secrets manager, managing all secrets used in your Harness account and deployments. 
+ +See: [Add a HashiCorp Vault Secrets Manager](https://docs.harness.io/article/am3dmoxywy-add-a-hashi-corp-vault-secrets-manager), [Use HashiCorp Vault Secrets Manager API](https://docs.harness.io/article/ehovbje4p1-use-hashi-corp-vault-secrets-manager-api). + diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/helm-deployments-overview.md b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/helm-deployments-overview.md new file mode 100644 index 00000000000..99164101277 --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/helm-deployments-overview.md @@ -0,0 +1,71 @@ +--- +title: Native Helm Deployments Overview +description: A summary of Harness Helm implementation. +# sidebar_position: 2 +helpdocs_topic_id: 583ojfgg49 +helpdocs_category_id: vbcmo6ltg7 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/lbhf2h71at).Harness includes both Kubernetes and Helm deployments, and you can use Helm charts in both. Harness [Kubernetes](../../kubernetes-deployments/kubernetes-deployments-overview.md) integration allows you to use your own Helm chart (remote or local), and Harness executes the Kubernetes API calls to build everything without Helm and Tiller (for Helm v2) needing to be installed in the target cluster. See [Link Resource Files or Helm Charts in Git Repos](../../kubernetes-deployments/link-resource-files-or-helm-charts-in-git-repos.md).This topic describes the concept of a Harness **Native Helm** deployment by describing the high-level steps involved. + +For a quick tutorial on using Helm with a Harness Kubernetes deployment, see the [Helm Quickstart](https://docs.harness.io/article/2aaevhygep-helm-quickstart). + +Harness supports Helm v2 and v3. 
+ +### Before You Begin + +Before learning about Harness Helm-based Kubernetes deployments, you should have an understanding of [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts). + +### Native Helm or Kubernetes Deployments? + +Harness includes both Kubernetes and Helm deployments, and you can use Helm charts in both. Here's the difference: + +* Harness [Kubernetes Deployments](../../kubernetes-deployments/kubernetes-deployments-overview.md) allow you to use your own Kubernetes manifests or a Helm chart (remote or local), and Harness executes the Kubernetes API calls to build everything without Helm and Tiller needing to be installed in the target cluster. +Harness Kubernetes deployments also support all deployment strategies (Canary, Blue/Green, Rolling, etc). +* For Harness [Native Helm Deployments](../../helm-deployment/helm-deployments-overview.md), you must always have Helm and Tiller (for Helm v2) running on one pod in your target cluster. Tiller makes the API calls to Kubernetes in these cases. You can perform a Basic or Rolling deployment strategy only (no Canary or Blue/Green). For Harness Native Helm v3 deployments, you no longer need Tiller, but you are still limited to Basic or Rolling deployments. + + **Versioning:** Harness Kubernetes deployments version all objects, such as ConfigMaps and Secrets. Native Helm does not. + + **Rollback:** Harness Kubernetes deployments will roll back to the last successful version. Native Helm will not. If you did 2 bad Native Helm deployments, the 2nd one will just roll back to the 1st, whereas a Harness Kubernetes deployment would roll back to the last successful version. + +### What Does Harness Need Before You Start? + +A Harness **Native Helm** deployment requires the following: + +* Artifact: For example, a Docker image of NGINX from Docker Hub. +* Kubernetes cluster: You will need a target cluster for the Harness Delegate, your application, and your Kubernetes workloads. 
A Kubernetes Delegate requires at least 8GB RAM, and so your cluster should have enough RAM to host the Delegate and your applications and workloads. +* Helm and Tiller **for Helm v2 only**: Helm and Tiller installed and running on one pod in the cluster. + + **If you are using Helm v3:** You do not need Tiller installed. Tiller is not used in Helm v3. + + When you install and run a new Harness Delegate, [Harness includes Helm 3 support automatically](https://docs.harness.io/article/ymw96mf8wy-use-custom-helm-binaries-on-harness-delegates). +* Helm chart: For example, a Bitnami Helm chart for NGINX from their GitHub repo. + +### What Does Harness Deploy? + +Harness takes the artifacts and Helm chart and version you provide and deploys the artifact to the target Kubernetes cluster. + +### What Does a Harness Helm Deployment Involve? + +The following list describes the major steps of a Harness Helm deployment: + + + +| | | | +| --- | --- | --- | +| **Step** | **Name** | **Description and Links** | +| 1 | Install the Harness Kubernetes or Helm **Delegate**.  | Typically, the Kubernetes or Helm Delegate is installed in the target cluster where you will deploy your application(s). You can also install the Helm Delegate using Rancher. | +| 2 | Add a Harness **Artifact Server**. | Add a Harness **Artifact Server**. For example, a Docker Registry Artifact Server that connects to the Docker registry where your Docker images are located, or the public Docker Hub. | +| 3 | Add a Helm Chart or Source Repository. | Add your Helm chart using a Helm Chart or Source Repository. | +| 4 | Add a **Cloud Provider**. | A Cloud Provider is a connection to your Kubernetes cluster. You can add a Kubernetes Cluster Cloud Provider (recommended) or a Cloud Provider for the cloud platform where the cluster is hosted, such as a Google Cloud Platform Cloud Provider. 
A Kubernetes Cluster Cloud Provider will connect to any cluster on any platform. If you use a Kubernetes Cluster Cloud Provider, you can use the Delegate installed in your cluster for authentication. | +| 5 | Create the Harness **Application** for your Kubernetes CD Pipeline. | The Harness Application represents a group of microservices/apps, their deployment pipelines, and all the building blocks for those pipelines. Harness represents your release process using a logical group of one or more entities: Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CD. | +| 6 | Create the Harness **Service** using the **Native Helm** Deployment Type. | Add your Helm charts and any config variables and files. | +| 7 | Create the Harness **Environment** and Infrastructure Definition for your target Kubernetes clusters, and any overrides. | Using the Harness Cloud Provider you set up, you can select the target Kubernetes cluster and namespace for your deployment. You can also override any Service settings, such as manifest values. This enables you to use a single Service with multiple Harness Environments. | +| 8 | Create the Basic Helm deployment Harness **Workflow**. | The Workflow deploys the artifact(s) and Kubernetes workloads defined in the Harness Service Helm charts to the cluster and namespace in the Harness Infrastructure Definition. | +| 9 | Deploy the Workflow. | Once you've deployed a Workflow, learn how to improve your Kubernetes CD:
•  [Workflows](https://docs.harness.io/article/m220i1tnia-workflow-configuration)
•  [Triggers](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2)
•  [Infrastructure Provisioners Overview](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner) | + +### Next Steps + +Read the following topics to build on what you've learned: + +* [Helm How-tos](https://docs.harness.io/category/native-helm-deployments) +* Blog on Helm support in Harness Kubernetes deployments, [Helm Support for Harness Continuous Delivery](https://harness.io/2019/05/helm-support-for-harness-continuous-delivery/). + diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/iis-net-deployments-overview.md b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/iis-net-deployments-overview.md new file mode 100644 index 00000000000..7f955c2781b --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/iis-net-deployments-overview.md @@ -0,0 +1,69 @@ +--- +title: IIS (.NET) Deployments Overview +description: A summary of Harness IIS (.NET) implementation. +# sidebar_position: 2 +helpdocs_topic_id: bq9938fbk8 +helpdocs_category_id: vbcmo6ltg7 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes the concept of a Harness IIS (.NET) deployment by describing the high-level steps involved. + +For a quick tutorial, see [IIS (.NET) Quickstart](https://docs.harness.io/article/2oo63r9rwb-iis-net-quickstart). + +For detailed instructions on using IIS (.NET) in Harness, see the [IIS (.NET)](https://docs.harness.io/category/iis-(https://docs.harness.io.net)-deployments) How-tos. + +### Before You Begin + +Before learning about Harness IIS (.NET) deployments, you should have an understanding of [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts). + +### Video Summary + +Here is a quick primer on deploying Microsoft IIS .NET applications and Microsoft .NET Core container applications using Harness Continuous Delivery. + + + + + +### What Does Harness Need Before You Start? 
+ +A Harness IIS (.NET) deployment requires the following: + +* Templates: IIS website, application, or virtual directory. Harness automatically creates the Deployment Specifications for these templates, which you can customize. +* Target infrastructure: For example, an AWS region and load balancer or an Azure subscription and resource group. + +It is important to note that a site contains one or more applications, an application contains one or more virtual directories, and a virtual directory maps to a physical directory on a computer. To use Harness to deploy IIS sites, applications, and virtual directories, the IIS structural requirements (site > application > virtual directory) must be established on the Windows instances. + +### What Does Harness Deploy? + +Harness takes the IIS website, application, or virtual directory templates and deployment specifications you provide, and creates an IIS website, application, or virtual directory in your target infrastructure. + +You can create a Harness Pipeline that runs three Workflows to deploy the IIS website, application, or virtual directory in succession. + +### What Does a Harness IIS (.NET) Deployment Involve? + +The following list describes the major steps of a Harness IIS (.NET) deployment: + + + +| | | | +| --- | --- | --- | +| **Step** | **Name** | **Description and Links** | +| 1 | Install the Harness **Delegate** in your target infrastructure, such as an EC2 subnet. | Typically, the Shell Script or ECS Delegate is installed in the same subnet where you will deploy your application(s). For Azure deployments, you can run the Delegate on a Linux VM in your Azure VPC (such as Ubuntu) or simply ensure that the Delegate has network access to resources in your Azure VPC. | +| 2 | Add a Harness **Artifact Server**. 
| Add a connection to the Artifact Server where Harness can pull the IIS website, application, or virtual directory template. If you are using the same Cloud Provider as the Artifact Server, then you can skip this step. | +| 3 | Add a **Cloud Provider**. | A Cloud Provider is a connection to your cloud platform account, such as AWS or Azure. You can also connect to a physical server. For example, an AWS Cloud Provider can be used to connect to S3 and obtain the IIS templates Harness will use to create new website, application, or virtual directory templates. | +| 4 | Create the Harness **Application** for your IIS (.NET) CD Pipeline. | The Harness Application represents a group of microservices/apps, their deployment pipelines, and all the building blocks for those pipelines. Harness represents your release process using a logical group of one or more entities: Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CD. | +| 5 | Create the Harness **Service** using the **Windows Remote Management (WinRM)** Deployment Type. | Add an IIS website, application, or virtual directory template in a Harness Service, revise the Deployment Specification, and add any config variables and files. | +| 6 | Create the Harness **Environment** and Infrastructure Definition for your deployment, and any overrides. | Using the Harness Cloud Provider you set up, you can select the target environment for your deployment. You can also override any Service settings. This enables you to use a single Service with multiple Harness Environments. | +| 7 | Create the Website, Application, and Virtual Directory deployments in Harness Basic **Workflows**. | The Workflow deploys the Website, Application, and Virtual Directory templates defined in the Harness Service to the environment in the Harness Infrastructure Definition. | +| 8 | Deploy the Workflow. 
| Once you've deployed a Workflow, learn how to improve your IIS (.NET) CD:
•  [Workflows](https://docs.harness.io/article/m220i1tnia-workflow-configuration)
•  [Triggers](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2)
•  [Infrastructure Provisioners Overview](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner) | + +### Next Steps + +Read the following topics to build on what you've learned: + +* [IIS (.NET) Quickstart](https://docs.harness.io/article/2oo63r9rwb-iis-net-quickstart) +* [IIS (.NET)](https://docs.harness.io/category/iis-(https://docs.harness.io.net)-deployments) How-tos + diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/kubernetes-overview.md b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/kubernetes-overview.md new file mode 100644 index 00000000000..9f62b690dc9 --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/kubernetes-overview.md @@ -0,0 +1,64 @@ +--- +title: Kubernetes Deployments Overview (FirstGen) +description: Harness Kubernetes deployment high-level steps. +# sidebar_position: 2 +helpdocs_topic_id: wnr5n847b1 +helpdocs_category_id: vbcmo6ltg7 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/u29v1uc1mh).This topic describes the concept of a Harness Kubernetes deployment by describing the high-level steps involved. + +For a quick tutorial, see the [Kubernetes Quickstart](https://docs.harness.io/article/7in9z2boh6-kubernetes-quickstart) tutorial. + +For detailed instructions on using Kubernetes in Harness, see the [Kubernetes How-tos](https://docs.harness.io/category/kubernetes-deployments). + +This guide covers new Harness Kubernetes Deployment **Version 2** features. For **Version 1** Kubernetes and Helm deployment features, see [Harness Kubernetes v1 FAQ](https://docs.harness.io/article/dtu3ud1ok7-kubernetes-and-harness-faq). 
+ +### Before You Begin + +Before learning about Harness Kubernetes deployments, you should have an understanding of [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts). + +### What Does Harness Need Before You Start? + +A Harness Kubernetes deployment requires the following: + +1. Artifact: For example, a Docker image of NGINX from Docker Hub. +2. Kubernetes cluster: You will need a target cluster for your application, the Harness Delegate, and your Kubernetes workloads. A Kubernetes Delegate requires at least 8GB RAM, and so your cluster should have enough RAM to host the Delegate and your applications and workloads. + +### What Does Harness Deploy? + +Harness takes the artifacts and Kubernetes manifests you provide and deploys them to the target Kubernetes cluster. You can simply deploy Kubernetes objects via manifests, and you can provide manifests using remote sources and Helm charts. + +See [What Can I Deploy in Kubernetes?](https://docs.harness.io/article/6ujb3c70fh). + +### What Does a Harness Kubernetes Deployment Involve? + +The following list describes the major steps of a Harness Kubernetes deployment: + + + +| | | | +| --- | --- | --- | +| **Step** | **Name** | **Description and Links** | +| 1 | Install the Harness Kubernetes **Delegate** in your Kubernetes cluster.  | Typically, the Kubernetes Delegate is installed in the target cluster where you will deploy your application(s). See [Connect to Your Target Kubernetes Platform](../../kubernetes-deployments/connect-to-your-target-kubernetes-platform.md). | +| 2 | Add a Harness **Artifact Server**. | Add a Harness **Artifact Server**. For example, a Docker Registry Artifact Server that connects to the Docker registry where your Docker images are located, or the public Docker Hub. See [Add Container Images for Kubernetes Deployments](../../kubernetes-deployments/add-container-images-for-kubernetes-deployments.md). | +| 3 | Add a **Cloud Provider**. 
| A Cloud Provider is a connection to your Kubernetes cluster. You can add a Kubernetes Cluster Cloud Provider (recommended) or a Cloud Provider for the cloud platform where the cluster is hosted, such as a Google Cloud Platform Cloud Provider. A Kubernetes Cluster Cloud Provider will connect to any cluster on any platform. If you use a Kubernetes Cluster Cloud Provider, you can use the Delegate installed in your cluster for authentication. See [Connect to Your Target Kubernetes Platform](../../kubernetes-deployments/connect-to-your-target-kubernetes-platform.md). | +| 4 | Create the Harness **Application** for your Kubernetes CD Pipeline. | The Harness Application represents a group of microservices, their deployment pipelines, and all the building blocks for those pipelines. Harness represents your release process using a logical group of one or more entities: Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CD. See [Application Components](https://docs.harness.io/article/bucothemly-application-configuration). | +| 5 | Create the Harness **Service** using the Kubernetes Deployment Type. | Add your Kubernetes manifests and any config variables and files.
See [Define Kubernetes Manifests](../../kubernetes-deployments/define-kubernetes-manifests.md). | +| 6 | Create the Harness **Environment** and Infrastructure Definition for your target Kubernetes clusters, and any overrides. | Using the Harness Cloud Provider you set up, you can select the target Kubernetes cluster and namespace for your deployment. You can also override any Service settings, such as manifest values. This enables you to use a single Service with multiple Harness Environments.
See [Define Your Kubernetes Target Infrastructure](../../kubernetes-deployments/define-your-kubernetes-target-infrastructure.md). | +| 7 | Create the Canary, Blue/Green, or Rollout deployment Harness **Workflow**. | The Workflow deploys the artifact(s) and Kubernetes workloads defined in the Harness Service to the cluster and namespace in the Harness Infrastructure Definition.
See:
•  [Create a Kubernetes Canary Deployment](../../kubernetes-deployments/create-a-kubernetes-canary-deployment.md)
•  [Create a Kubernetes Rolling Deployment](../../kubernetes-deployments/create-a-kubernetes-rolling-deployment.md)
•  [Create a Kubernetes Blue/Green Deployment](../../kubernetes-deployments/create-a-kubernetes-blue-green-deployment.md) | +| 8 | Deploy the Workflow. | Once you've deployed a Workflow, learn how to improve your Kubernetes CD:
•  [Workflows](https://docs.harness.io/article/m220i1tnia-workflow-configuration)
•  [Triggers](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2)
•  [Provision Kubernetes Infrastructures](../../kubernetes-deployments/provision-kubernetes-infrastructures.md) | + +:::note +In Harness, a workload is a Deployment, StatefulSet, or DaemonSet object deployed and managed to steady state. For Rolling Update deployments, you can deploy multiple managed workloads. For Canary and Blue/Green Workflow deployments, only one managed object may be deployed per Workflow by default. You can deploy additional objects using the **Apply Step**, but it is typically used for deploying Job controllers. See [Deploy Manifests Separately using Apply Step](../../kubernetes-deployments/deploy-manifests-separately-using-apply-step.md). +::: + +### Next Steps + +Read the following topics to build on what you've learned: + +* [Kubernetes Quickstart](https://docs.harness.io/article/7in9z2boh6-kubernetes-quickstart) +* [Kubernetes How-tos](../../kubernetes-deployments/kubernetes-deployments-overview.md) + diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/pivotal-cloud-foundry-deployments-overview.md b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/pivotal-cloud-foundry-deployments-overview.md new file mode 100644 index 00000000000..4fbb8bc43b9 --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/pivotal-cloud-foundry-deployments-overview.md @@ -0,0 +1,58 @@ +--- +title: Tanzu Application Service Deployment Overview +description: A summary of Harness Pivotal implementation. +# sidebar_position: 2 +helpdocs_topic_id: ekaesq5wwg +helpdocs_category_id: vbcmo6ltg7 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes the concept of a Harness Tanzu Application Service (TAS, formerly Pivotal Cloud Foundry) deployment by outlining the high-level steps involved. + +Pivotal Cloud Foundry (PCF) was acquired by VMware and renamed Tanzu Application Service (TAS). 
For a quick tutorial, see the [Tanzu Application Service Quickstart](https://docs.harness.io/article/hy819vmsux-pivotal-cloud-foundry-quickstart). + +For detailed instructions on using TAS in Harness, see the [Tanzu Application Service How-tos](https://docs.harness.io/category/tanzu-application-service-(formerly-pivotal)). + +### Before You Begin + +Before learning about Harness TAS deployments, you should have an understanding of [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts). + +### What Does Harness Need Before You Start? + +A Harness TAS deployment requires the following: + +* Artifact: For example, a Docker image of NGINX from Docker Hub. +* Target TAS Organization and Space for the deployment. + +### What Does Harness Deploy? + +Harness takes the artifacts and TAS specs you provide and deploys them to the target TAS Organization and Space. + +You can use CLI plugins in your deployments. The App Autoscaler plugin has first-class support in Harness, enabling you to ensure app performance and control the cost of running apps. See [Use CLI Plugins in Harness Tanzu Deployments](../../pcf-deployments/use-cli-plugins-in-harness-pcf-deployments.md). + +### What Does a Harness TAS Deployment Involve? + +The following list describes the major steps of a Harness TAS deployment: + + + +| | | | +| --- | --- | --- | +| **Step** | **Name** | **Description and Links** | +| 1 | Install the Harness **Delegate** in your target TAS infrastructure.  | Typically, the Delegate is installed in the target space where you will deploy your application(s). If you are running your TAS Cloud in AWS, you can use a Shell Script Delegate run on an EC2 instance in the same VPC and subnet as your TAS Cloud, or an ECS Delegate run in an ECS cluster in the same VPC. | +| 2 | Add a Harness **Artifact Server**. | 
For example, a Docker Registry Artifact Server that connects to the Docker registry where your Docker images are located, or the public Docker Hub. | +| 3 | Add a **Cloud Provider**. | A Cloud Provider is a connection to your TAS API endpoint URL. For example, **api.run.pivotal.io**. | +| 4 | Create the Harness **Application** for your TAS CD Pipeline. | The Harness Application represents a group of microservices, their deployment pipelines, and all the building blocks for those pipelines. Harness represents your release process using a logical group of one or more entities: Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CD. See [Create an Application](https://docs.harness.io/article/bucothemly-application-configuration). | +| 5 | Create the Harness **Service** using the TAS Deployment Type. | Add your TAS specs and any config variables and files. | +| 6 | Create the Harness **Environment** and Infrastructure Definition for your target TAS org and space, and any overrides. | Using the Harness Cloud Provider you set up, you can select the target TAS org and space for your deployment. You can also override any Service settings, such as manifest values. This enables you to use a single Service with multiple Harness Environments. | +| 7 | Create the Canary, Blue/Green, or Basic deployment Harness **Workflow**. | The Workflow deploys the artifact(s), TAS apps, and routes defined in the Harness Service to the org and space in the Harness Infrastructure Definition. | +| 8 | Deploy the Workflow. | Once you've deployed a Workflow, learn how to improve your TAS CD:
•  [Workflows](https://docs.harness.io/article/m220i1tnia-workflow-configuration)
•  [Triggers](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2)
•  [Infrastructure Provisioners Overview](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner) | + +### Next Steps + +Read the following topics to build on what you've learned: + +* [Tanzu Application Service (TAS) Quickstart](https://docs.harness.io/article/hy819vmsux-pivotal-cloud-foundry-quickstart) +* [TAS How-tos](https://docs.harness.io/category/tanzu-application-service-(formerly-pivotal)) + diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/shell-script-provisioning-with-harness.md b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/shell-script-provisioning-with-harness.md new file mode 100644 index 00000000000..d01e8290c1e --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/shell-script-provisioning-with-harness.md @@ -0,0 +1,122 @@ +--- +title: Shell Script Provisioning with Harness +description: Use a Shell Script Provisioner to provision infrastructure as part of your deployments. +# sidebar_position: 2 +helpdocs_topic_id: drculfgwwn +helpdocs_category_id: vbcmo6ltg7 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness has first-class support for Terraform and AWS CloudFormation provisioners, but to support other provisioners, or your existing shell script implementations, Harness includes the Shell Script Infrastructure Provisioner. + +This is a conceptual overview. For steps on setting up the Shell Script Provisioner, see [Shell Script Provisioner](https://docs.harness.io/article/1m3p7phdqo-shell-script-provisioner). 
In this topic: + +* [Shell Script Provisioner Implementation Summary](#shell_script_provisioner_implementation_summary) +* [Limitations](#limitations) +* [Permissions](#permissions) +* [No Artifact Required](#no_artifact_required) +* [Service Instances (SIs) Consumption](#service_instances_s_is_consumption) + +### Shell Script Provisioner Implementation Summary + +When you set up a Shell Script Provisioner in Harness, you add a shell script that the Harness Delegate uses to query your provisioner for a JSON collection describing your infrastructure (VPCs, DNS names, subnets, etc.). + +Normally, the JSON will exist in your custom provisioner, such as a database, but for this topic, we'll use AWS as an example. + +For example, here is a shell script that pulls EC2 instance information from AWS: + + +``` +apt-get -y install awscli +aws configure set aws_access_key_id $access_key +aws configure set aws_secret_access_key $secret_key +aws configure set region us-east-1 +aws ec2 describe-instances --filters Name=tag:Name,Values=harness-provisioner > "$PROVISIONER_OUTPUT_PATH" +``` +The environment variable `$PROVISIONER_OUTPUT_PATH` is initialized by Harness and stores the JSON collection returned by your script. + +:::note +Currently, Harness supports Bash shell scripts. PowerShell will be added soon. +::: + +This script returns a JSON array describing the instances: + + +``` +{ + "Instances": [ + { + ... + "Status": "online", + "InstanceId": "4d6d1710-ded9-42a1-b08e-b043ad7af1e2", + "SshKeyName": "US-West-2", + "InfrastructureClass": "ec2", + "RootDeviceVolumeId": "vol-d08ec6c1", + "InstanceType": "t1.micro", + "CreatedAt": "2015-02-24T20:52:49+00:00", + "AmiId": "ami-35501205", + "PublicDnsName": "ec2-192-0-2-0.us-west-2.compute.amazonaws.com", + "Hostname": "ip-192-0-2-0", + "Ec2InstanceId": "i-5cd23551", + "SubnetId": "subnet-b8de0ddd", + "SecurityGroupIds": [ + "sg-c4d3f0a1" + ... 
+ }, + ] +} +``` +Next, in a Harness Infrastructure Definition, you map the keys from the JSON host objects to Shell Script Provisioner fields to tell Harness where to obtain the values for your infrastructure settings, such as hostname and subnet. + +![](./static/shell-script-provisioning-with-harness-00.png) + +At runtime, Harness queries your provisioner using your script and stores the returned JSON collection on the Harness Delegate as a file. Harness then uses the JSON key values to define the instructure for your deployment environment as it creates that environment in your target platform. + +Here is a high-level summary of the setup steps involved: + +1. **Delegate and Cloud Provider** - Install a Harness Delegate where it can connect to your infrastructure provisioner and query it for the JSON infrastructure information. Add a Harness Cloud Provider that connects to the platform where the infrastructure will be deployed. +2. **Application and Service** - Create a Harness Application to manage your deployment. Add a Service to your Application. The type of Service you select determines how you map JSON keys in the Shell Script Provisioner **Service Mappings**. For example, an ECS Service will require different mapping settings than a Kubernetes Service. +3. **JSON and Script Prep** - Prepare the JSON file to be retrieved by Harness. Prepare the shell script to pull the JSON to Harness. +4. **Shell Script Provisioner** - Add a Shell Script provisioner to your Application. + 1. Add the shell script to the Shell Script provisioner to query your provisioner and retrieve the JSON infrastructure information. + 2. Add Service Mappings. The mapping method depends on the Service and Deployment Type you select. +5. **Environment** - Add an Environment to your Application that uses the Shell Script Provisioner in its Infrastructure Definition. +6. **Workflow** - Add a Workflow to your Application that applies the Shell Script Provisioner. 
+ +### Limitations + +Shell Script Provisioners are only supported in Canary and Multi-Service deployment types. For AMI/ASG and ECS deployments, Shell Script Provisioners are also supported in Blue/Green deployments. + +### Permissions + +You need to give Harness permissions in your target environment so Harness can provision using your provisioner. These are the same permissions you would need to grant Harness for existing, static infrastructures. + +The permissions required for Harness to use your provisioner and successfully deploy to the provisioned instances depend on the deployment platform you use. + +As a summary, you will need to manage the following permissions: + +* **Delegate** - The Harness Delegate will require permissions according to the deployment platform. It will use the access, secret, and SSH keys you configure in Harness [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management) to perform deployment operations. For ECS Delegates, you can add an IAM role to the ECS Delegate task definition. For more information, see [Trust Relationships and Roles](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#trust_relationships_and_roles). +* **Cloud Provider** - The Harness Cloud Provider must have access permissions for the resources you are planning to create in the provisioner script. For some Harness Cloud Providers, you can use the installed Delegate and have the Cloud Provider assume the permissions used by the Delegate. For others, you can enter cloud platform account information. + :::note + The account used for the Cloud Provider will require platform-specific permissions for creating infrastructure. For example, to create EC2 AMIs the account requires the **AmazonEC2FullAccess** policy. + ::: +* **Git Repo** - You will add the Git repo where the provisioner script is located to Harness as a Source Repo Provider. 
For more information, see [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). +* **Access and Secret Keys** - These are set up in Harness [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management) and then used as variable values when you add a Provisioner step to a Workflow. +* **SSH Key** - In order for the Delegate to copy artifacts to the provisioned instances, it will need an SSH key. You set this up in Harness Secrets Management and then reference it in the Harness Environment Infrastructure Definition. See [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management). +* **Platform Security Groups** - Security groups are associated with EC2 and other cloud platform instances and provide security at the protocol and port access level. You will need to define security groups in your provisioner scripts and ensure that they allow the Delegate to connect to the provisioned instances. + +### No Artifact Required + +You do not need to deploy artifacts via Harness Services to use provisioning in a Workflow. You can simply set up a Shell Script Provisioner and use it in a Workflow to provision infrastructure without deploying any artifact. In Harness documentation, we include artifact deployment as it is the ultimate goal of Continuous Delivery. + +### Service Instances (SIs) Consumption + +Harness Service Instances (SIs) are not consumed and no additional licensing is required when a Harness Workflow uses the provisioner to provision resources. When Harness deploys artifacts via Harness Services to the provisioned infrastructure in the same Workflow or Pipeline, SI licensing is consumed. 
+ +### Next Step + +[Shell Script Provisioner](https://docs.harness.io/article/1m3p7phdqo-shell-script-provisioner) + diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/artifact-build-and-deploy-pipelines-overview-35.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/artifact-build-and-deploy-pipelines-overview-35.png new file mode 100644 index 00000000000..4956af25f75 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/artifact-build-and-deploy-pipelines-overview-35.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/aws-lambda-deployments-overview-24.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/aws-lambda-deployments-overview-24.png new file mode 100644 index 00000000000..bb063980bc7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/aws-lambda-deployments-overview-24.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/aws-lambda-deployments-overview-25.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/aws-lambda-deployments-overview-25.png new file mode 100644 index 00000000000..bb063980bc7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/aws-lambda-deployments-overview-25.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/azure-arm-and-blueprint-provision-with-harness-19.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/azure-arm-and-blueprint-provision-with-harness-19.png new file mode 100644 index 00000000000..939e9df8280 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/azure-arm-and-blueprint-provision-with-harness-19.png differ diff --git 
a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/azure-arm-and-blueprint-provision-with-harness-20.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/azure-arm-and-blueprint-provision-with-harness-20.png new file mode 100644 index 00000000000..fef3ba904ac Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/azure-arm-and-blueprint-provision-with-harness-20.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-26.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-26.png new file mode 100644 index 00000000000..2e494d652e5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-26.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-27.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-27.png new file mode 100644 index 00000000000..a8432c4508a Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-27.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-28.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-28.png new file mode 100644 index 00000000000..b88f3b46de8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-28.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-29.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-29.png new file mode 100644 
index 00000000000..ce05da28867 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-29.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-30.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-30.png new file mode 100644 index 00000000000..6e5825bc2a7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-30.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-31.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-31.png new file mode 100644 index 00000000000..df8e0a9ee82 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-31.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-32.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-32.png new file mode 100644 index 00000000000..c044b58c01c Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-32.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-33.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-33.png new file mode 100644 index 00000000000..a89fc3321b4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-33.png differ diff --git 
a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-34.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-34.png new file mode 100644 index 00000000000..576ef714df0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/ci-cd-with-the-build-workflow-34.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/cloud-formation-provisioning-with-harness-01.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/cloud-formation-provisioning-with-harness-01.png new file mode 100644 index 00000000000..2bebe4a7bd9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/cloud-formation-provisioning-with-harness-01.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-02.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-02.png new file mode 100644 index 00000000000..ccee55130ed Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-02.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-03.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-03.png new file mode 100644 index 00000000000..ccee55130ed Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-03.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-04.png 
b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-04.png new file mode 100644 index 00000000000..eb8b3fa3faf Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-04.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-05.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-05.png new file mode 100644 index 00000000000..eb8b3fa3faf Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-05.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-06.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-06.png new file mode 100644 index 00000000000..59fc843675d Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-06.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-07.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-07.png new file mode 100644 index 00000000000..59fc843675d Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-07.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-08.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-08.png new file mode 100644 index 00000000000..9fe751a0452 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-08.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-09.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-09.png new file mode 100644 index 00000000000..9fe751a0452 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-09.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-10.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-10.png new file mode 100644 index 00000000000..a7e6aa8dd54 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-10.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-11.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-11.png new file mode 100644 index 00000000000..a7e6aa8dd54 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-11.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-12.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-12.png new file mode 100644 index 00000000000..05eae38bd33 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-12.png differ diff --git 
a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-13.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-13.png new file mode 100644 index 00000000000..05eae38bd33 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-13.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-14.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-14.png new file mode 100644 index 00000000000..e260bf1f253 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-14.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-15.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-15.png new file mode 100644 index 00000000000..15ff8904579 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-15.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-16.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-16.png new file mode 100644 index 00000000000..15ff8904579 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/deployment-concepts-and-strategies-16.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/shell-script-provisioning-with-harness-00.png 
b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/shell-script-provisioning-with-harness-00.png new file mode 100644 index 00000000000..0a61457d657 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/shell-script-provisioning-with-harness-00.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/terraform-provisioning-with-harness-21.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/terraform-provisioning-with-harness-21.png new file mode 100644 index 00000000000..d4528f57109 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/terraform-provisioning-with-harness-21.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/terraform-provisioning-with-harness-22.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/terraform-provisioning-with-harness-22.png new file mode 100644 index 00000000000..36d957217f2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/terraform-provisioning-with-harness-22.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/terraform-provisioning-with-harness-23.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/terraform-provisioning-with-harness-23.png new file mode 100644 index 00000000000..36d957217f2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/terraform-provisioning-with-harness-23.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/terragrunt-provisioning-with-harness-36.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/terragrunt-provisioning-with-harness-36.png new file mode 100644 index 00000000000..e729dad08b7 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/terragrunt-provisioning-with-harness-36.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/terragrunt-provisioning-with-harness-37.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/terragrunt-provisioning-with-harness-37.png new file mode 100644 index 00000000000..2977a1e7863 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/terragrunt-provisioning-with-harness-37.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/terragrunt-provisioning-with-harness-38.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/terragrunt-provisioning-with-harness-38.png new file mode 100644 index 00000000000..e69c736d100 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/terragrunt-provisioning-with-harness-38.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/use-templates-17.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/use-templates-17.png new file mode 100644 index 00000000000..d120fd3bc35 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/use-templates-17.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/use-templates-18.png b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/use-templates-18.png new file mode 100644 index 00000000000..0fc0d603956 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/static/use-templates-18.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/terraform-provisioning-with-harness.md b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/terraform-provisioning-with-harness.md 
new file mode 100644 index 00000000000..5f51ea0ea47 --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/terraform-provisioning-with-harness.md @@ -0,0 +1,71 @@ +--- +title: Terraform Provisioning with Harness (FirstGen) +description: Use Terraform to provision infrastructure as part of your deployment process. +# sidebar_position: 2 +helpdocs_topic_id: hh52ews03d +helpdocs_category_id: vbcmo6ltg7 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/w6i5f7cpc9). + +Harness lets you use Terraform to provision infrastructure as part of your deployment process. Harness can provision any resource that is supported by a Terraform [provider or plugin](https://www.terraform.io/docs/configuration/providers.html). + +Looking for How-tos? See [Terraform How-tos](../../terraform-category/terrform-provisioner.md). + +### Terraform Implementation Summary + +Here is a visual summary of how you use your Terraform scripts in Harness to provision infrastructure and then deploy to it: + +![](./static/terraform-provisioning-with-harness-21.png) + +You set up a Terraform Infrastructure Provisioner in the following order: + +1. **Terraform Infrastructure Provisioner** — Add your Terraform scripts as a Harness Terraform Provisioner. You add the provisioner script by connecting to a Git repo where the scripts are kept and setting up any inputs. +2. **Infrastructure Definition** — Select the Terraform Provisioner you set up. Now it can be used in any Workflow where you want to target the provisioned infrastructure. You simply map your script outputs to the required Harness settings: + + [![](./static/terraform-provisioning-with-harness-22.png)](./static/terraform-provisioning-with-harness-22.png) + +3.
**Workflow Setup** — When you create your Workflow, you select the Infrastructure Definition that maps to your script outputs. +4. **Workflow Provisioner Step** — In the Workflow, you add a **Terraform Provisioner** step. The Workflow will build the infrastructure according to your Terraform script. +5. **Pre-deployment** — The pre-deployment steps are executed and provision the infrastructure using the **Terraform Provisioner** step. +6. **Deployment** — The Workflow deploys to the provisioned infrastructure defined in its Infrastructure Definition. + +### Use Terraform for Non-deployment Provisioning + +You can use Terraform in Harness to provision any infrastructure, not just the target infrastructure for the deployment. + +See [Using the Terraform Apply Command](../../terraform-category/using-the-terraform-apply-command.md). + +### Limitations + +* Terraform Infrastructure Provisioners are only supported in Canary and Multi-Service deployment types. For AMI/ASG and ECS deployments, Terraform Infrastructure Provisioners are also supported in Blue/Green deployments. +* The **Terraform Provision** and **Terraform Rollback** commands are available in the **Pre-deployment** section and the **Terraform Destroy** command is available in the **Post-deployment** section. + +### Permissions + +You need to give Harness permissions in your target environment so Harness can provision using Terraform. These are the same permissions you would need to grant Harness for existing, static infrastructures. + +The permissions required for Harness to use your provisioner and successfully deploy to the provisioned instances depend on the deployment platform you use. + +As a summary, you will need to manage the following permissions: + +* **Delegate** - The Harness Delegate will require permissions according to the deployment platform.
It will use the access, secret, and SSH keys you configure in Harness [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management) to perform deployment operations. For ECS Delegates, you can add an IAM role to the ECS Delegate task definition. For more information, see [Trust Relationships and Roles](../../aws-deployments/ecs-deployment/harness-ecs-delegate.md#trust-relationships-and-roles). +* **Cloud Provider** - The Harness Cloud Provider must have access permissions for the resources you are planning to create in the Terraform script. For some Harness Cloud Providers, you can use the installed Delegate and have the Cloud Provider assume the permissions used by the Delegate. For others, you can enter cloud platform account information. + +The account used for the Cloud Provider will require platform-specific permissions for creating infrastructure. For example, to create EC2 AMIs the account requires the **AmazonEC2FullAccess** policy. +* **Git Repo** - You will add the Git repo where the provisioner script is located to Harness as a Source Repo Provider. For more information, see [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). +* **Access and Secret Keys** - These are set up in Harness [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management) and then used as variable values when you add a Provisioner step to a Workflow. +* **SSH Key** - In order for the Delegate to copy artifacts to the provisioned instances, it will need an SSH key. You set this up in Harness Secrets Management and then reference it in the Harness Environment Infrastructure Definition. See [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management). +* **Platform Security Groups** - Security groups are associated with EC2 and other cloud platform instances and provide security at the protocol and port access level.
You will need to define security groups in your provisioner scripts and ensure that they allow the Delegate to connect to the provisioned instances. + +### No Artifact Required + +You do not need to deploy artifacts via Harness Services to use Terraform provisioning in a Workflow. You can simply set up a Terraform Provisioner and use it in a Workflow to provision infrastructure without deploying any artifact. In Harness documentation, we include artifact deployment as it is the ultimate goal of Continuous Delivery. + +### Service Instances (SIs) Consumption + +Harness Service Instances (SIs) are not consumed when a Harness Workflow uses Terraform to provision resources. When Harness deploys artifacts via Harness Services to the provisioned infrastructure in the same Workflow or Pipeline, SIs licensing is consumed. + +### Next Steps + +Get started with [Terraform How-tos](../../terraform-category/terrform-provisioner.md). + diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/terragrunt-provisioning-with-harness.md b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/terragrunt-provisioning-with-harness.md new file mode 100644 index 00000000000..4c1c8800f2b --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/terragrunt-provisioning-with-harness.md @@ -0,0 +1,124 @@ +--- +title: Terragrunt Provisioning with Harness +description: Harness lets you use Terragrunt to provision infrastructure as part of your deployment process. +# sidebar_position: 2 +helpdocs_topic_id: a6onutvbem +helpdocs_category_id: vbcmo6ltg7 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness lets you use [Terragrunt](https://terragrunt.gruntwork.io/) to provision infrastructure as part of your deployment process. Harness can provision any resource that is supported by Terragrunt and the related Terraform [provider or plugin](https://www.terraform.io/docs/configuration/providers.html). 
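+A Terragrunt configuration is a plain HCL file (usually `terragrunt.hcl`) that points at the Terraform module to run and supplies its inputs. The following minimal sketch is hypothetical; the repo URL, module path, and input values are placeholders, not values Harness requires:
+
+```hcl
+# terragrunt.hcl: hypothetical example; the repo URL, ref, and inputs are placeholders
+terraform {
+  # Terragrunt runs Terraform against the module this source points to
+  source = "git::https://example.com/infra-modules.git//ecs-cluster?ref=v1.2.0"
+}
+
+# Values passed through to the module's input variables
+inputs = {
+  cluster_name = "qa-cluster"
+  region       = "us-east-1"
+}
+```
+
+Harness reads files like this from your Git repo when you set up a Terragrunt Infrastructure Provisioner.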
+ +Once Harness provisions the target infrastructure using Terragrunt and Terraform, Harness can deploy to it, all in the same Workflow. + +You can also simply provision non-target infrastructure without deploying to it. + +If you want to use Terraform without Terragrunt, Harness supports that, too. See [Terraform Provisioning with Harness](terraform-provisioning-with-harness.md) and [Terraform How-tos](../../terraform-category/terrform-provisioner.md). + +Looking for How-tos? See [Terragrunt How-tos](../../terragrunt-category/terragrunt-how-tos.md). + +### Terragrunt Target Infrastructure Provisioning + +Here is a visual summary of how you use your Terragrunt and Terraform files with Harness to provision target infrastructure and then deploy to it: + +![](./static/terragrunt-provisioning-with-harness-36.png) + +Here's a six-minute video walkthrough of the process: + +You set up a Terragrunt deployment in the following order: + +1. **Terragrunt Infrastructure Provisioner** — Add your Terragrunt config file(s) (.hcl) as a Harness Terragrunt Provisioner. You add the Terragrunt file(s) by connecting to a Git repo where the files are kept. +2. **Infrastructure Definition** — You use the Terragrunt Infrastructure Provisioner to define a deployment target infrastructure. +In a Harness Infrastructure Definition, you select the Terragrunt Infrastructure Provisioner you set up and map specific Terraform outputs to the required Infrastructure Definition settings. +With Terragrunt, the outputs will be in the Terraform module the Terragrunt config file points to (`source`). +3. **Workflow Setup** — When you create your Workflow, you select the Infrastructure Definition that maps to your outputs. You might add it in the main Workflow settings or in the settings within a Workflow Phase. Either way, the Infrastructure Definition mapped to your Terragrunt/Terraform files is the deployment target for the Workflow. +4.
**Workflow Provisioner Step** — In the Workflow, you add a **Terragrunt Provisioner** pre-deployment step that uses the same Terragrunt Infrastructure Provisioner. The Workflow will build the infrastructure according to your Terragrunt and Terraform files. +5. **Pre-deployment** — During pre-deployment execution, the pre-deployment steps provision the target infrastructure using the **Terragrunt Provisioner** step. +6. **Deployment** — The Workflow deploys your application to the provisioned infrastructure. + +See [Terragrunt How-tos](../../terragrunt-category/terragrunt-how-tos.md). + +### Use Terragrunt for Non-target Provisioning + +You can use Terragrunt in Harness to provision any infrastructure, not just the target infrastructure for the deployment. + +In this use case, you simply add the Terragrunt Provision step to your Workflow and it runs Terragrunt commands to provision non-target resources in your infrastructure. + +You do not need to deploy artifacts via Harness Services to use Terragrunt provisioning in a Workflow. You can simply set up a Terragrunt Provisioner and use it in a Workflow to provision infrastructure without deploying any artifact. + +![](./static/terragrunt-provisioning-with-harness-37.png) + +See [Provision using the Terragrunt Provision Step](../../terragrunt-category/provision-using-the-terragrunt-provision-step.md). + +### Limitations + +* Terragrunt Infrastructure Provisioners are only supported in Canary and Multi-Service deployment types. +* For AMI/ASG and ECS deployments, Terragrunt Infrastructure Provisioners are also supported in Blue/Green deployments. + +### Permissions + +You need to give Harness permissions in your target environment so Harness can provision using Terragrunt and Terraform. These are the same permissions you would need to grant Harness for existing, static infrastructures.
+ +The permissions required for Harness to use your provisioner and successfully deploy to the provisioned instances depend on the deployment platform you use. + +As a summary, you will need to manage the following permissions: + +* **Harness User Groups:** to set up Terragrunt in Harness, your Harness User Groups need CRUD Application permissions for the Harness Application(s) that will use Terragrunt: + + **Provision Type:** Provisioners. + + **Application:** All Applications that you want to use with Terragrunt. + + **Filter:** All Provisioners. + + **Action:** Create, Read, Update, Delete. + See [Managing Users and Groups (RBAC)](https://docs.harness.io/article/ven0bvulsj-users-and-permissions). + ![](./static/terragrunt-provisioning-with-harness-38.png) +* **Delegate**: the Harness Delegate will require permissions according to the deployment platform. It will use any access, secret, and SSH keys you configure in Harness [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management) to perform deployment operations. For ECS Delegates, you can add an IAM role to the ECS Delegate task definition. For more information, see [Trust Relationships and Roles](../../aws-deployments/ecs-deployment/harness-ecs-delegate.md#trust-relationships-and-roles). +* **Cloud Provider**: the Harness Cloud Provider must have access permissions for the resources you are planning to create using Terragrunt and Terraform. For some Harness Cloud Providers, you can use the installed Delegate and have the Cloud Provider assume the permissions used by the Delegate. For others, you can enter cloud platform account information. +:::note +The account used for the Cloud Provider will require platform-specific permissions for creating infrastructure. For example, to create EC2 AMIs the account requires the **AmazonEC2FullAccess** policy.
+::: +* **Git Repo**: you will add the Git repo(s) where the Terragrunt and Terraform files are located to Harness as a Source Repo Provider. For more information, see [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). +* **Access and Secret Keys**: if needed, these are set up in Harness [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management) and then used as variable values when you add a Provisioner step to a Workflow. +* **SSH Key**: in some cases, in order for the Delegate to copy artifacts to the provisioned instances, it will need an SSH key. You set this up in Harness Secrets Management and then reference it in the Harness Environment Infrastructure Definition. See [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management). +* **Platform Security Groups**: security groups are associated with EC2 and other cloud platform instances and provide security at the protocol and port access level. If needed, define security groups in your provisioner scripts and ensure that they allow the Delegate to connect to the provisioned instances. + +### Supported Terraform Versions with Terragrunt + +The following recommendations are from [Terragrunt](https://terragrunt.gruntwork.io/docs/getting-started/supported-terraform-versions/). In practice, as Terragrunt notes, the version compatibility is more relaxed. + +| **Terraform Version** | **Terragrunt Version** | +| --- | --- | +| 0.14.x | >= 0.27.0 | +| 0.13.x | >= 0.25.0 | +| 0.12.x | 0.19.0 - 0.24.4 | +| 0.11.x | 0.14.0 - 0.18.7 | + +### No Artifact Required + +You do not need to deploy artifacts via Harness Services to use Terragrunt provisioning in a Workflow. You can simply set up a Terragrunt Provisioner and use it in a Workflow to provision infrastructure without deploying any artifact. In Harness documentation, we include artifact deployment as it is the ultimate goal of Continuous Delivery.
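+As a complement to the version table above, Terragrunt can enforce compatible tool versions at runtime through constraint attributes in `terragrunt.hcl`. Both attributes below are standard Terragrunt settings; the constraint values themselves are illustrative, a sketch rather than a recommendation:
+
+```hcl
+# terragrunt.hcl: illustrative version pinning
+# Terragrunt fails fast if the local terraform or terragrunt binary
+# falls outside these ranges
+terraform_version_constraint  = ">= 0.13.0, < 0.15.0"
+terragrunt_version_constraint = ">= 0.25.0"
+```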
+ +### Service Instances (SIs) Consumption + +Harness Service Instances (SIs) are not consumed and no additional licensing is required when a Harness Workflow uses Terragrunt to provision resources. + +When Harness deploys artifacts via Harness Services to the provisioned infrastructure in the same Workflow or Pipeline, SI licensing is consumed. + +### Auto-Approve and Force Option Support + +Currently, `auto-approve` option support is behind the feature flag `TG_USE_AUTO_APPROVE_FLAG`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. + +By default, Harness uses the `force` option with `terraform apply -destroy`. + +The `force` option is deprecated in Terraform version 0.15.0 and greater. Consequently, Harness will use the `auto-approve` option if you are using Terraform version 0.15.0 and greater. + +If you are using a Terraform version earlier than Terraform version 0.15.0, Harness will continue to use `force`. + +### Next Steps + +Get started with [Terragrunt How-tos](../../terragrunt-category/terragrunt-how-tos.md). + diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/traditional-deployments-ssh-overview.md b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/traditional-deployments-ssh-overview.md new file mode 100644 index 00000000000..2d1ff64d029 --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/traditional-deployments-ssh-overview.md @@ -0,0 +1,59 @@ +--- +title: Traditional Deployments (SSH) Overview +description: Traditional deployments use application package files and a runtime environment (Tomcat, JBoss) in Harness. For How-tos on Traditional deployments, see Traditional (SSH) Deployments How-tos.
You can… +# sidebar_position: 2 +helpdocs_topic_id: aig5tw1zvo +helpdocs_category_id: vbcmo6ltg7 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Traditional deployments use application package files and a runtime environment (Tomcat, JBoss) in Harness. + +For How-tos on Traditional deployments, see [Traditional (SSH) Deployments How-tos](../../traditional-deployments/traditional-deployments-overview.md). + +You can perform traditional deployments to AWS and Azure, and to any server on any platform via a platform-agnostic [Physical Data Center](https://docs.harness.io/article/stkxmb643f-add-physical-data-center-cloud-provider) connection. In all cases, you simply set up a Harness Infrastructure Definition and target the hosts on the platform. + +These deployments are different from Harness deployments using container orchestration platforms like [Kubernetes](https://docs.harness.io/article/7in9z2boh6-kubernetes-quickstart), [Helm](https://docs.harness.io/article/2aaevhygep-helm-quickstart), [Pivotal](https://docs.harness.io/article/hy819vmsux-pivotal-cloud-foundry-quickstart), [AWS ECS](https://docs.harness.io/article/j39azkrevm-aws-ecs-deployments), and [Azure](../../azure-deployments/aks-howtos/azure-deployments-overview.md). + +For a Build and Deploy Pipeline using a Traditional deployment, see [Artifact Build and Deploy Pipelines Overview](artifact-build-and-deploy-pipelines-overview.md). + +Traditional deployments involve obtaining an application package from an artifact source, such as a WAR file in an AWS S3 bucket, and deploying it to a target host, such as a virtual machine. + +### Supported Packaging Formats + +Harness supports the following traditional deployment packaging formats: WAR, JAR, TAR, RPM, ZIP, Docker, and custom files. + +All of these formats are also supported by other Harness deployment types, such as Kubernetes, Helm, PCF, ECS, etc.
This guide is concerned with traditional deployments outside of the container orchestration platforms. + +### Supported Platforms and Technologies + +See **SSH** in [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). + +### Deployment Summary + +For a general overview of how Harness works, see [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts). + +The following list describes the major steps involved in a Traditional deployment: + +1. Install the Harness Delegate in your target infrastructure. See [Harness Delegate Overview](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation). +2. Add a connection to your artifact server. See [Add Artifactory Servers](https://docs.harness.io/article/nj3p1t7v3x-add-artifactory-servers). +3. Add a connection to your cloud provider. This is a connection to your deployment infrastructure, either physical or hosted in a cloud platform like AWS, GCP, or Azure. See [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers). +4. Create the Harness Application for deploying your application packages. +5. Create the Harness Service using the SSH type. + 1. Add your packaged application file(s) as an Artifact Source. +6. Create the Harness Environment containing the [Infrastructure Definition](https://docs.harness.io/article/n39w05njjv-environment-configuration#add_an_infrastructure_definition) of your deployment infrastructure. +7. Create the Basic Deployment Workflow. +8. Deploy the Workflow. +9. Next steps: + 1. Create a Harness Pipeline for your deployment, including Workflows and Approval steps. For more information, see [Pipelines](https://docs.harness.io/article/zc1u96u6uj-pipeline-configuration) and [Approvals](https://docs.harness.io/article/0ajz35u2hy-approvals). + 2. Create a Harness Trigger to automatically deploy your Workflows or Pipeline according to your criteria.
For more information, see [Triggers](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2). + 3. Create Harness Infrastructure Provisioners for your deployment environments. For more information, see [Infrastructure Provisioners](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner). + +### Next Step + +Traditional Deployments How-tos: + +* [Connect to Your Repos and Target SSH Platforms](../../traditional-deployments/connect-to-your-target-ssh-platform.md) +* [Add Artifacts and App Stacks for Traditional (SSH) Deployments](../../traditional-deployments/add-artifacts-for-ssh-deployments.md) +* [Add Scripts for Traditional (SSH) Deployments](../../traditional-deployments/add-deployment-specs-for-traditional-ssh-deployments.md) +* [Define Your Traditional (SSH) Target Infrastructure](../../traditional-deployments/define-your-traditional-ssh-target-infrastructure.md) +* [Create Default Application Directories and Variables](https://docs.harness.io/article/lgg12f0yry-set-default-application-directories-as-variables) +* [Create a Basic Workflow for Traditional (SSH) Deployments](../../traditional-deployments/create-a-basic-workflow-for-traditional-ssh-deployments.md) + diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/triggers-and-rbac.md b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/triggers-and-rbac.md new file mode 100644 index 00000000000..1e182e280c2 --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/triggers-and-rbac.md @@ -0,0 +1,41 @@ +--- +title: Triggers and RBAC +description: This content is for Harness FirstGen. Switch to NextGen. A Trigger involves multiple settings, including Service, Environment, and Workflow specifications. 
Harness examines these components as you se… +# sidebar_position: 2 +helpdocs_topic_id: su0wpdarqi +helpdocs_category_id: vbcmo6ltg7 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/vz5cq0nfg2). + +A Trigger involves multiple settings, including Service, Environment, and Workflow specifications. Harness examines these components as you set up a Trigger. + +You might be authorized for one component selected in a Trigger, such as a Service, but not another, such as an Environment. In these cases, an error message will alert you to missing authorizations. + +To determine if you are authorized to create Triggers for a particular Environment or other components, review: + +* All the permissions of your Harness User Group. The User Group Application Permissions should include the **Deployments** Permission Type and **Execute Workflow** and/or **Execute Pipeline** Action for the Harness Application(s) with the Triggers you want Users to execute. +* The Usage Scope of the Cloud Provider, and of any other Harness connectors you have set up. + +For further details, see [Managing Users and Groups (RBAC)](https://docs.harness.io/article/ven0bvulsj-users-and-permissions) and [Connectors Overview](https://docs.harness.io/article/a7n7lwsjpk-harness-connectors). + +Below are some errors that can occur. + +#### User does not have "Deployment: execute" permission + +Error messages of the form `User does not have "Deployment: execute" permission` indicate that your user group's **Application Permissions** > **Action** settings do not include **execute** in the scope of the specified Application and/or Environment. To resolve this, see [Application Permissions](https://docs.harness.io/article/ven0bvulsj-users-and-permissions#application_permissions).
+ +#### User not authorized + +The following error message indicates that a non-Administrator has tried to submit a Trigger whose **Workflow Variables: Environment** field is configured with a variable, rather than with a static Environment name: + +`User not authorized: Only members of the Account Administrator user group can create or update  Triggers with parameterized variables` + +Submitting a Pipeline Trigger that includes such a Workflow will generate the same error. + +One resolution is to set the **Environment** field to a static value. But if the **Environment** setting must be dynamic, a member of the Account Administrator user group will need to configure and submit the Trigger. + +### See Also + +* You can use settings to enforce authorization on some Triggers. See [Trigger a Deployment using cURL](https://docs.harness.io/article/mc2lxsas4c-trigger-a-deployment-using-c-url). + diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/use-templates.md b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/use-templates.md new file mode 100644 index 00000000000..2ab81003b7f --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployment-types/use-templates.md @@ -0,0 +1,65 @@ +--- +title: Account and Application Templates +description: Create templates for common commands and scripts, to ensure consistency and save time. +# sidebar_position: 2 +helpdocs_topic_id: ygi6d8epse +helpdocs_category_id: vbcmo6ltg7 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/1re7pz9bj8). + +Harness includes an account-wide Template Library, called the Shared Template Library, and an Application-wide Template Library.
+ +Only members of a Harness User Group with the **Manage Template Library** permission may create, edit, and delete Account and Application-level templates. Members of a User Group with this permission disabled can view and link to templates only. + + +### Before You Begin + +* [Add a Service](https://docs.harness.io/article/eb3kfl8uls-service-configuration) +* [Add a Workflow](https://docs.harness.io/article/m220i1tnia-workflow-configuration) + +### Shared Template Library and Application Template Library + +The Shared Template Library is available from **Setup** and the Application Template Library is available in each Application. + +![](./static/use-templates-17.png) + +Using templates from either source works the same way, and both options are available in Harness components, but Application templates may be used within their Application only. + +For example, when you click **Add Command** in the Service, you see the option to select a template from the Application or Shared Template Library. 
+ +![](./static/use-templates-18.png) + +### Template YAML + +When you look at the code for an Application containing Services or Workflows using linked templates, the YAML for the template information of the command is displayed like this: + + +``` +- type: SHELL_SCRIPT + name: DocExample + properties: + sweepingOutputScope: null + connectionAttributes: null + publishAsVar: false + commandPath: null + scriptType: BASH + host: null + scriptString: echo "Hello" ${name} + timeoutMillis: 600000 + sshKeyRef: null + executeOnDelegate: true + sweepingOutputName: null + tags: '' + templateUri: AccountName/DocExample:latest + templateVariables: + - name: name +``` +### Next Steps + +* [Create an HTTP Workflow Step Template](https://docs.harness.io/article/dv7ajeroou-account-and-application-templates) +* [Create a Shell Script Workflow Step Template](https://docs.harness.io/article/lfqn3t83hd-create-a-shell-script-workflow-step-template) +* [Create a Service Command Template](https://docs.harness.io/article/kbmz9uc7q9-create-a-service-command-template) +* [Add Service Command Templates into Command Units](https://docs.harness.io/article/mfoy0hrw8y-add-service-command-templates-into-command-units) +* [Link Templates to Services and Workflows](https://docs.harness.io/article/xd70p7rmqd-link-templates-to-services-and-workflows) + diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/_category_.json b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/_category_.json new file mode 100644 index 00000000000..243ea375bc0 --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/_category_.json @@ -0,0 +1 @@ +{"label": "General Deployment Features", "position": 10, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "General Deployment Features"}, "customProps": { "helpdocs_category_id": "cwefyz0jos"}} \ No newline at end of file diff --git 
a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/deploy-to-multiple-infrastructures.md b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/deploy-to-multiple-infrastructures.md new file mode 100644 index 00000000000..061a9b5d1fb --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/deploy-to-multiple-infrastructures.md @@ -0,0 +1,146 @@ +--- +title: Deploy a Workflow to Multiple Infrastructures Simultaneously +description: This content is for Harness FirstGen. Switch to NextGen. Most Harness customers deploy the same service to multiple infrastructures, such as infrastructures for different stages of the release proces… +sidebar_position: 70 +helpdocs_topic_id: bc65k2imoi +helpdocs_category_id: cwefyz0jos +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io). + +Most Harness customers deploy the same service to multiple infrastructures, such as infrastructures for different stages of the release process (QA, Prod) or in different regions. However, by default, each Workflow has only one target infrastructure configured, defined as its Infrastructure Definition. + +Having only one infrastructure configured can pose certain challenges: + +* Multiple Workflow deployments for each infrastructure must be managed (tracked, rolled back, and so on). +* More Workflows means more errors can be introduced, and consistency is more difficult to ensure. +* Creating separate Workflows for each infrastructure is time-consuming. + +To solve these challenges, Harness lets you deploy a single Workflow to several infrastructures. First you template the Infrastructure Definition setting, add the Workflow to a Pipeline, and then select multiple infrastructures when you deploy the Pipeline. Harness then runs the same Workflow for each infrastructure, in parallel.
+
+### Before You Begin
+
+* [Templatize a Workflow](https://docs.harness.io/article/bov41f5b7o-templatize-a-workflow-new-template)
+* [Create Pipeline Templates](https://docs.harness.io/article/60j7391eyy-templatize-pipelines)
+
+### Visual Summary
+
+First, the Workflow Infrastructure Definition setting is templated with the `${infrastructure}` expression and then, when deployed, the `infrastructure` Workflow multi-select variable is given multiple infrastructures.
+
+![](./static/deploy-to-multiple-infrastructures-08.png)
+
+Here is an example of a Pipeline deployment Stage using the Workflow that deploys to two Infrastructure Definitions:
+
+![](./static/deploy-to-multiple-infrastructures-09.png)
+
+### Limitations
+
+* Multi-infrastructure deployments can be done using Pipelines only, not individual Workflow deployments.
+* When you template the Infrastructure Definition settings in multiple phases of a Workflow for multi-infrastructure deployments, you must use the same variable in every phase.
+If you have a multi-phase Workflow (such as a two-phase Canary deployment), you must use the same variable for both phases' **Infrastructure Definition** settings.
+* You can only execute multi-infrastructure deployments if your User Group's Application Permissions include the **Execute Pipeline** action on all the infrastructures selected in the Pipeline. By default, all User Groups can deploy to all Infrastructure Definitions. See [Restrict Deployment Access to Specific Environments](restrict-deployment-access-to-specific-environments.md).
+
+### Step 1: Add Multiple Infrastructure Definitions
+
+To deploy to multiple infrastructures, you first need to define multiple Infrastructure Definitions in a Harness Environment.
+
+You must use a single Environment for these Infrastructure Definitions.
You can use this Environment in the Workflow that you want to deploy to these Infrastructure Definitions, or you can template the Environment setting in the Workflow, and then select the Environment when you deploy. Either way, a single Environment is used when you deploy.
+
+Later, when you deploy the Pipeline, you will select from the list of multiple Infrastructure Definitions in the Environment.
+
+See [Add an Infrastructure Definition](https://docs.harness.io/article/v3l3wqovbe-infrastructure-definitions).
+
+### Step 2: Template Workflow Infrastructure Definitions
+
+In order for a Workflow to deploy to multiple infrastructures, you must template (or *templatize*) its Infrastructure Definition setting. This turns the Infrastructure Definition setting into a parameter that can be given a value when you deploy the Pipeline (and its Workflows).
+
+You can only use multi-infrastructure deployments on Workflows with one phase. See [Limitations](#limitations) above.
+
+For steps on templating the Infrastructure Definition setting in a Workflow, see [Templatize a Workflow](https://docs.harness.io/article/bov41f5b7o-templatize-a-workflow-new-template).
+
+### Step 3: Template Pipeline Infrastructure Definitions
+
+You can only deploy a Workflow to multiple infrastructures as part of a Pipeline. You cannot deploy a Workflow to multiple infrastructures by itself.
+
+Add the Workflow that you want to deploy to multiple infrastructures to a Pipeline. See [Create a Pipeline](https://docs.harness.io/article/zc1u96u6uj-pipeline-configuration).
+
+When you add your Workflow to the Pipeline, you must template its Infrastructure Definition setting. If, instead, you select a specific Infrastructure Definition, you cannot select multiple Infrastructure Definitions later when you deploy.
+
+For steps on templating the Infrastructure Definitions in a Pipeline, as well as other Pipeline settings, see [Create Pipeline Templates](https://docs.harness.io/article/60j7391eyy-templatize-pipelines). You simply need to enter a variable expression, such as `${infrastructure}`.
+
+Here is an example where the Workflow Infrastructure Definition setting is templated with the `${infrastructure}` expression and then, when deployed, the `infrastructure` Workflow variable is given multiple infrastructures.
+
+![](./static/deploy-to-multiple-infrastructures-10.png)
+
+### Step 4: Deploy to Multiple Infrastructure Definitions
+
+Now that you have added the Workflow to a Pipeline and templated the Infrastructure Definition setting, you can deploy the Pipeline and select multiple infrastructures.
+
+In your Pipeline, click **Deploy**. If you click **Start New Deployment** in the Continuous Deployment page, select the Pipeline.
+
+In **Workflow Variables**, find the Workflow variable created for the templated Infrastructure Definition.
+
+In **Value**, select the multiple infrastructures where you want to deploy. Click the checkbox next to each Infrastructure Definition.
+
+![](./static/deploy-to-multiple-infrastructures-11.png)
+
+Click **Submit**.
+
+The Workflows that use multiple Infrastructure Definitions are displayed in parentheses to indicate that they are executing in parallel.
+
+Click each Workflow and look at the **Infrastructure** displayed. It will show a different Infrastructure Definition for each Workflow.
+ +![](./static/deploy-to-multiple-infrastructures-12.png) + +### Step 5: Trigger Multiple Infrastructure Workflows and Pipelines + +When you create a Trigger for the Pipeline, you must select the multiple Infrastructure Definitions for the Workflow(s) it contains: + +![](./static/deploy-to-multiple-infrastructures-13.png) + +The Trigger will execute the Pipeline and deploy to both infrastructures just as it did when you deployed the Pipeline on its own. + +See [Trigger Workflows and Pipelines](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2). + +### Option: Use Multiple Infrastructure Definitions in the API + +If you have templated your Workflow and Pipeline **Infrastructure Definition** settings as described above, you can execute the Pipeline in the Harness GraphQL API and provide multiple Infrastructure Definitions. + +You can provide multiple Infrastructure Definitions as a comma-separated list by Infrastructure Definition name: + + +``` +... + { + name: "InfraDefinition_Kubernetes" + variableValue: { + type: NAME + value: "k8s1,k8s2" + } + }, +... +``` +Or by Infrastructure Definition ID: + + +``` +... + { + name: "InfraDefinition_Kubernetes" + variableValue: { + type: ID + value: "oX8gFrDsTLWtYcZq4E8eGg,g6ghpzB3QICjNNbnzYRpIg" + } + } +... +``` +See [Trigger Workflows or Pipelines Using GraphQL API](https://docs.harness.io/article/s3leksekny-trigger-workflow-or-a-pipeline-using-api). + +### Review: Rollback + +When you deploy a Workflow to multiple infrastructures, the Workflow is executed as if it were two Workflows executed in parallel. Harness is actually triggering the same Workflow twice using two different Infrastructure Definitions. + +If either Workflow fails, that particular Workflow is rolled back, not both Workflows. The Failure Strategy for the failed Workflow is executed. The Workflow that succeeded is not rolled back. + +If either Workflow fails, the Pipeline stage fails, naturally, and the Pipeline deployment fails. 
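The GraphQL option above passes the Infrastructure Definitions to the templated variable as one comma-separated string. If you are scripting Pipeline executions, that `variableValue` string can be assembled programmatically. The sketch below only builds the variable-input fragment; the helper name is illustrative and not part of the Harness API:

```python
import json

def infra_variable_input(variable_name, infra_names):
    """Build the variableInput entry that passes several Infrastructure
    Definitions (by NAME, comma-separated) to a templated Workflow variable."""
    return {
        "name": variable_name,
        "variableValue": {"type": "NAME", "value": ",".join(infra_names)},
    }

# Two Infrastructure Definitions, as in the example above.
entry = infra_variable_input("InfraDefinition_Kubernetes", ["k8s1", "k8s2"])
print(json.dumps(entry))  # "value" comes out as "k8s1,k8s2"
```

The same helper works for IDs by swapping `"NAME"` for `"ID"`, matching the second example above.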
+
diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/deployment-logging-limitations.md b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/deployment-logging-limitations.md
new file mode 100644
index 00000000000..a07c83677b8
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/deployment-logging-limitations.md
@@ -0,0 +1,28 @@
+---
+title: Deployment Logging Limitations
+description: This content is for Harness FirstGen. Switch to NextGen. This topic lists the deployment log size, export, and viewing limits. Limitations. Harness deployment logging has the following limitations -- A…
+sidebar_position: 90
+helpdocs_topic_id: h3b4wttuk5
+helpdocs_category_id: cwefyz0jos
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/n06yruxm0d). This topic lists the deployment log size, export, and viewing limits.
+
+### Limitations
+
+Harness deployment logging has the following limitations:
+
+* There is a hard limit of 25MB for logs produced by one Workflow step. Logs beyond this limit are skipped and are not available for download.
+* Harness always saves the final log line that contains the status (Success, Failure, etc.) even if the logs go beyond the limit.
+* If the final log line is itself very large and the logs are already beyond the 25MB limit, Harness shows only the last 10KB of that line.
+
+### Viewing Large Logs
+
+For any completed Workflow displayed in **Deployments**, you can expand the log section. The most recent log information is displayed first. You can scroll to see older log information.
+
+### See Also
+
+[Export Deployment Logs](export-deployment-logs.md)
+
diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/deployments-overview.md b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/deployments-overview.md
new file mode 100644
index 00000000000..ed91c15f187
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/deployments-overview.md
@@ -0,0 +1,57 @@
+---
+title: Deployments Overview
+description: Links to deployment guides for all supported platforms and scenarios.
+sidebar_position: 10
+helpdocs_topic_id: i3n6qr8p5i
+helpdocs_category_id: cwefyz0jos
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/cqgeblt4uh). The deployment guides walk you through setting up a specific deployment using Harness, such as ECS, Kubernetes, and Helm. They are written to provide you with everything you need to learn how to model your CD process in Harness.
+
+### Deployment Guides
+
+Always start with the [Quickstarts](https://docs.harness.io/category/f6rh2cdvx9-first-gen-quickstarts). These will take you from novice to advanced Harness user in a matter of minutes.
+ +The following topics will walk you through how Harness implements common deployments according to platforms and scenarios: + +- [​AMI (Amazon Machine Image)](../../aws-deployments/ami-deployments/ami-deployment.md) +- [​AWS Elastic Container Service (ECS)](../../aws-deployments/ecs-deployment/ecs-deployments-overview.md) +- [AWS Lambda](/docs/category/aws-lambda-deployments) +- [​Azure](/docs/category/azure-deployments-and-provisioning) +- [CI/CD: Artifact Build and Deploy Pipelines](/docs/category/cicd-artifact-build-and-deploy-pipelines) +- [Google Cloud](/docs/category/google-cloud) +- [Native Helm](/docs/category/native-helm-deployments) +- [​IIS (.NET)](../../dotnet-deployments/iis-net-deployment.md) +- [​Kubernetes](/docs/category/kubernetes-deployments) (includes Helm, OpenShift, etc) +- [Pivotal Cloud Foundry](../../pcf-deployments/pcf-tutorial-overview.md) +- [​Traditional Deployments](../../traditional-deployments/traditional-deployments-overview.md) +- [Custom Deployments](/docs/category/custom-deployments) + +Also, other key platforms that help you make your CD powerful and efficient: + +- [Terraform](/docs/category/terraform) +- [CloudFormation](/docs/category/aws-cloudformation) +- [Configuration as Code](https://docs.harness.io/category/2ea2y01kgz-config-as-code) (work exclusively in YAML and sync with your Git repos) +- [Harness GitOps](https://docs.harness.io/category/goyudf2aoh-harness-gitops) + +For topics on general CD modeling in Harness, see [Model Your CD Pipeline](https://docs.harness.io/category/ywqzeje187-setup). + +### Kubernetes or Native Helm? + +Harness includes both Kubernetes and Helm deployments, and you can use Helm charts in both. 
Here's the difference:
+
+- Harness [Kubernetes Deployments](../../kubernetes-deployments/kubernetes-deployments-overview.md) allow you to use your own Kubernetes manifests or a Helm chart (remote or local), and Harness executes the Kubernetes API calls to build everything without Helm and Tiller needing to be installed in the target cluster.
+  Harness Kubernetes deployments also support all deployment strategies (Canary, Blue/Green, Rolling, etc.).
+- For Harness [Native Helm Deployments](../../helm-deployment/helm-deployments-overview.md), you must always have Helm and Tiller (for Helm v2) running on one pod in your target cluster. Tiller makes the API calls to Kubernetes in these cases. You can perform a Basic or Rolling deployment strategy only (no Canary or Blue/Green). For Harness Native Helm v3 deployments, you no longer need Tiller, but you are still limited to Basic or Rolling deployments.
+  - **Versioning:** Harness Kubernetes deployments version all objects, such as ConfigMaps and Secrets. Native Helm does not.
+  - **Rollback:** Harness Kubernetes deployments roll back to the last successful version. Native Helm does not: if you perform two bad Native Helm deployments, the second one simply rolls back to the first, whereas Harness rolls back to the last successful version.
+
+### Harness Videos
+
+Check out the [Harness YouTube channel](https://www.youtube.com/c/Harnessio/videos) for the latest videos.
+
+### Deployment Logging Limitations
+
+There is a hard limit of 25MB for logs produced by one Workflow step. Logs beyond this limit are skipped and are not available for download.
diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/export-deployment-logs.md b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/export-deployment-logs.md
new file mode 100644
index 00000000000..80462b53421
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/export-deployment-logs.md
@@ -0,0 +1,210 @@
+---
+title: Export Deployment Logs
+description: This content is for Harness FirstGen. Switch to NextGen. Large enterprises are highly regulated and auditing the deployments happening in their environments is critical. These audits might take into…
+sidebar_position: 50
+helpdocs_topic_id: pe7vgjs6sv
+helpdocs_category_id: cwefyz0jos
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/0slo2gklsy). Large enterprises are highly regulated, and auditing the deployments happening in their environments is critical. These audits might take into account deployment information across years.
+
+Harness provides advanced auditing in the Harness Manager, as described in [Audit Trail](https://docs.harness.io/article/kihlcbcnll-audit-trail), and deployment information in the **Deployments** page and [dashboards](https://docs.harness.io/article/c3s245o7z8-main-and-services-dashboards), but large enterprises want to save deployment information to their centralized archives for years to come.
+
+Harness includes deployment exports to serve this need. Deployment exports give you control over what is stored and audited, and allow you to save deployment information in your archives.
+
+Harness also provides an API for deployment exports, enabling you to extract logs programmatically.
+
+### Before You Begin
+
+Familiarize yourself with the different ways Harness displays deployment information:
+
+* [Custom Dashboards](https://docs.harness.io/article/rxlbhvwe6q-custom-dashboards)
+* [Main and Services Dashboards](https://docs.harness.io/article/c3s245o7z8-main-and-services-dashboards)
+* [Audit Trail](https://docs.harness.io/article/kihlcbcnll-audit-trail)
+* [Harness API](https://docs.harness.io/article/tm0w6rruqv-harness-api)
+
+### Limitations
+
+* Exports are limited to 25 exports per day.
+* There is a limit of 1000 deployments per export. You can use the Deployments page filtering to control how many deployments are in each log.
+* Anyone with the export file download link from Harness can download the logs. There is no authentication. You can specify the Harness User Group to notify in the Harness Manager and in the API (`userGroupIds`).
+* Export log download links expire 72 hours after they are generated.
+* There is a hard limit of 25MB for logs produced by one Workflow step. Logs beyond this limit are skipped and are not available for download.
+
+### Review: Export Process
+
+There are two options for exporting deployment logs from Harness:
+
+* Harness Manager
+* Harness API
+
+In both options, the process is:
+
+1. Once you request a log export, Harness is notified of your request.
+2. Harness processes your request in the background.
+3. Once Harness is finished:
+   1. If you are using the Harness Manager: Harness sends you an email or Slack message with a link to your log file. You can specify the Harness User Group to notify.
+   2. If you are using the Harness API, the `status` field is updated to `READY`.
+4. You can then download the log archive file.
+ +### Option 1: Export Using the Harness Manager + +In the Harness Manager **Deployments** page, you filter all deployments to get the exact deployments you want to export, specify which Harness User Groups to notify when Harness has the export file, and then export the logs. + +Next, you check email or Slack to get the download link. + +#### Filter Deployments + +In the Harness Manager, filter the deployment logs before exporting. This ensures that you have exactly the deployment logs you want. + +1. In **Deployments**, click the filter button to show the filter settings. + ![](./static/export-deployment-logs-21.png) +2. Use the filter settings to get the exact logs you want to export. Be sure to use the **Filter by** and **Filter by Time** settings to control which deployments you download and their date range. + +Once you have the deployments you want to export, you can begin the export process. + +#### Export Deployment Logs + +1. In the **Deployments** page, click the options button and click **Export Deployment Logs**. + ![](./static/export-deployment-logs-22.png) +2. In **Export Deployment Logs**, specify the Harness User Groups to notify when the log file is ready. + ![](./static/export-deployment-logs-23.png) + The User Groups are notified using their email and/or Slack settings, as described in [Manage User Notifications](https://docs.harness.io/article/kf828e347t-notification-groups). +3. Click **Submit**. + +#### Download Log File + +Check the User Group's email or Slack channels (or other notification tools) for the Export Execution Logs message: + +![](./static/export-deployment-logs-24.png) + +As you can see, the message has a **Download link**. Click that link to download the log archive. + +### Option 2: Export Using the Harness API + +You can use Harness API mutation `exportExecutions` to fetch your deployment logs. It provides all of the same options as the Harness Manager. 
+
+`exportExecutions` includes the following arguments:
+
+
+```
+clientMutationId: String
+
+filters: [ExecutionFilter!]
+  Execution filters
+
+notifyOnlyTriggeringUser: Boolean
+  Notify only the triggering user
+
+userGroupIds: [String!]
+  User group ids
+```
+You will use these arguments to specify the deployment logs you want. You can also use `ExecutionFilter` fields to control which deployments you download and their date range.
+
+Here's an example that downloads an Application's deployments and uses the `startTime` field to filter:
+
+
+```
+mutation {
+  exportExecutions(input: {
+    clientMutationId: "gqahal-test"
+    filters: [
+      {application: {operator: IN, values: ["ekftWF9jQRewZt7FhM7KTw"]}},
+      {startTime: {operator: AFTER, value: 1590969600000}},
+    ]
+    userGroupIds: "HhsqnZRlSCyqTxhVNCqzfA"
+  }) {
+    clientMutationId
+    requestId
+    status
+    totalExecutions
+    triggeredAt
+    downloadLink
+    expiresAt
+    errorMessage
+  }
+}
+```
+The `clientMutationId` option is simply used here to demonstrate the case where multiple clients are making updates.
+
+Note the use of `userGroupIds` to indicate which Harness User Group to notify via its [Notification Settings](https://docs.harness.io/article/kf828e347t-notification-groups). You can get the ID using the API queries `userGroup` or `userGroupByName`, or from the Harness Manager URL when looking at the User Group.
+
+Here is the example response:
+
+
+```
+{
+  "data": {
+    "exportExecutions": {
+      "clientMutationId": "gqahal-test",
+      "requestId": "o63US8waSImWhoEnlDAEcg",
+      "status": "QUEUED",
+      "totalExecutions": 2,
+      "triggeredAt": 1591390258752,
+      "downloadLink": "https://pr.harness.io/export-logs/api/export-executions/download/o63US8waSImWhoEnlDAEcg?accountId=kmpySmUISimoRrJL6NL73w",
+      "expiresAt": 1591660258751,
+      "errorMessage": null
+    }
+  }
+}
+```
+Here is a screenshot showing the request and response together:
+
+![](./static/export-deployment-logs-25.png)
+
+The `downloadLink` field contains the link to your export log archive.
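Because the mutation initially returns `"status": "QUEUED"`, a client normally polls the export's status until it changes before following `downloadLink`. Here is a minimal polling sketch; the `fetch_status` callable is an assumption that stands in for however your client re-checks the `status` field against the API:

```python
import time

def wait_for_export(fetch_status, timeout_s=600, interval_s=10):
    """Poll an export request until its status leaves QUEUED.

    `fetch_status` is any callable returning one of the export statuses
    (EXPIRED / FAILED / QUEUED / READY); it is injected here so the loop
    is testable without network access. Returns the final status, or
    raises TimeoutError if the export is still QUEUED at the deadline.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status != "QUEUED":
            return status  # READY, FAILED, or EXPIRED
        time.sleep(interval_s)
    raise TimeoutError(f"export still QUEUED after {timeout_s}s")

# Simulated sequence of API responses: two polls of QUEUED, then READY.
responses = iter(["QUEUED", "QUEUED", "READY"])
print(wait_for_export(lambda: next(responses), interval_s=0))  # prints READY
```

With a real fetcher you would re-query the export's status through the API and, on `READY`, download the archive from `downloadLink` before it expires.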
+ +Look for the `status` code in the results. It will show `"status": "QUEUED"` until the export is ready. + +There are four statuses: + +* EXPIRED +* FAILED +* QUEUED +* READY + +When the export is ready, it will show `"status": "READY"`. + +When the status code is READY you can download the log archive using the link. + +### Step 1: Examine the Log Files + +Once you have downloaded and extracted the log archive, you will have the folder named **HarnessExecutionsLogs**. + +Inside, you will see separate folders for each deployment: + +![](./static/export-deployment-logs-26.png) + +The folder name shows the deployment timestamp and unique ID. + +Open a folder and you will see that each deployment step has a different log file. + +![](./static/export-deployment-logs-27.png) + +These correspond to the deployment steps and subcommands displayed in the Harness **Deployments** page. + +The README file contains a summary of all log files. + +![](./static/export-deployment-logs-28.png) + +Open a log file and you will see the deployment information for the step subcommands. + +### Step 2: Use the JSON Export File + +In each deployment folder there is a file named **execution.json**. + +The execution.json file contains your entire Pipeline or Workflow structure, all Workflow steps and subcommands, Services, artifacts, Harness Delegates, and so on. + +The execution log file for each subcommand is listed also: + + +``` +... + "subCommands" : [ { + "name" : "Execute", + "type" : "COMMAND", + "status" : "FAILURE", + "executionLogFile" : "Shell Script_Execute_Oc8AGrsXR6KimnFPTN97AgXlt2.log" + } ], +... 
+```
diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/filtering-deployments.md b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/filtering-deployments.md
new file mode 100644
index 00000000000..02d5290e03d
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/filtering-deployments.md
@@ -0,0 +1,65 @@
+---
+title: Filtering Deployments
+description: This content is for Harness FirstGen. Switch to NextGen. You can filter deployments on the Deployments page according to multiple criteria. You can save these filters as a quick way to filter dep…
+sidebar_position: 30
+helpdocs_topic_id: lbiv3zwwm0
+helpdocs_category_id: cwefyz0jos
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/phiv0zaoex). You can filter deployments on the Deployments page according to multiple criteria.
+
+You can save these filters as a quick way to filter deployments in the future.
+
+
+### Before You Begin
+
+* [Deployments Overview](deployments-overview.md)
+* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts)
+
+### Limitations
+
+The **Aborting** status filter is a work in progress and does not work at present.
+
+### Visual Summary
+
+Here is what the Deployments page filter options look like:
+
+![](./static/filtering-deployments-05.png)
+
+As you can see, you can filter according to multiple criteria. Next, you can give your filter a name and save it.
+
+### Option 1: Filter by Deployment Tag
+
+In **Filter by Deployment Tag**, enter one or more Tags from your account's **Tags Management**.
+
+You can enter a Tag name, name:value pair, or evaluated expression.
+ +See [Assign Metadata Using Tags](https://docs.harness.io/article/nrxfix3i58-tags), [Use Expressions in Workflow and Pipeline Tags](https://docs.harness.io/article/285bu842gb-use-expressions-in-workflow-and-pipeline-tags), and [Apply Filters Using Tags](https://docs.harness.io/article/nyxf7g8erd-apply-filters-using-tags). + +### Option 2: Filter by Applications + +In **Filter by Applications**, select the Application entities to filter on. + +First, select an Application. This will populate the remaining settings with the subordinate entities of the Application. + +If you select multiple Applications, then all of the subordinate entities of the Applications are provided. + +### Option 3: Filter by Statuses + +In **Filter by Statuses**, select all of the statuses you want to filter on. + +### Option 4: Filter by Time + +In **Filter by Time**, select a date range for the filter. + +### Step: Save a Filter + +In **Filter Name**, enter a name for the filter and click **Save**. + +![](./static/filtering-deployments-06.png) + +The filter is now available from the **Load Saved Filter**: + +![](./static/filtering-deployments-07.png) \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/publish-pipeline-events-to-an-http-endpoint.md b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/publish-pipeline-events-to-an-http-endpoint.md new file mode 100644 index 00000000000..6e6bf08c582 --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/publish-pipeline-events-to-an-http-endpoint.md @@ -0,0 +1,200 @@ +--- +title: Publish Pipeline Events to an HTTP Endpoint +description: Send key Pipeline deployment events to a URL endpoint as a JSON payload. 
+sidebar_position: 100 +helpdocs_topic_id: scrsak5124 +helpdocs_category_id: cwefyz0jos +helpdocs_is_private: false +helpdocs_is_published: true +--- + +To help you analyze how Pipelines are performing, Harness can send key Pipeline deployment events to a URL endpoint as a JSON payload. Next, you can use other tools to consume and build dashboards for the events. + + +### Before You Begin + +* [Create a Pipeline](https://docs.harness.io/article/zc1u96u6uj-pipeline-configuration) +* [Monitor Deployments in Dashboards](https://docs.harness.io/article/c3s245o7z8-main-and-services-dashboards) + +### Supported Platforms and Technologies + +See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). + +### Review: Events Published + +Currently, Harness publishes the following events: + +* PIPELINE START +* PIPELINE COMPLETE +* PIPELINE PAUSED +* PIPELINE CONTINUED + +Harness will be adding more events soon. + +### Step 1: Add Event Rule + +In your Application, in **Event Rules**, click **Add**. + +![](./static/publish-pipeline-events-to-an-http-endpoint-40.png) + +The **Event Rule Configuration** settings appear. + +![](./static/publish-pipeline-events-to-an-http-endpoint-41.png) + +Enter a name for the event rule. + +### Option: Pipeline or Send Me Everything + +You can select to send specific events for specific Pipelines or all events for all Pipelines. + +#### Pipeline + +In **Pipelines**, select the Pipelines you want events for, or select **All Pipelines**. + +In **Events**, select the event types you want to publish or select **All Events**. + +![](./static/publish-pipeline-events-to-an-http-endpoint-42.png) + +#### Send Me Everything + +This option sends all events that Harness currently publishes. + +When Harness supports more events, these events will also be published. + +### Step 2: Enter the HTTP Endpoint URL + +In **Webhook URL**, enter the HTTP endpoint URL where Harness will publish the events for this rule. 
+
+This is the URL you will use to consume events.
+
+### Option: Add Headers
+
+Add any HTTP headers that you want to send with the events.
+
+### Option: Delegate Selector
+
+By default, Harness will select any available Delegate. You might want to use a specific Delegate because you know it has access to the endpoint URL.
+
+In **Delegate Selector**, select the Selector for the Delegate(s) you want to use.
+
+You add Selectors to Delegates to make sure that they're used to execute the command. For more information, see [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors).
+
+Harness will use Delegates matching the Selectors you add.
+
+If you use one Selector, Harness will use any Delegate that has that Selector.
+
+If you select two Selectors, a Delegate must have both Selectors to be selected. That Delegate might also have other Selectors, but it must have the two you selected.
+
+You can use expressions for Harness built-in variables or Account Default variables in **Delegate Selectors**. When the variable expression is resolved at deployment runtime, it must match an existing Delegate Selector.
+
+### Step 3: Enable the Rule
+
+You can enable and disable a rule using the toggle button.
+
+### Step 4: Test the Event Rule
+
+You can test an event rule whether it is enabled or disabled.
+
+To test the rule, click **Test**.
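To see what arrives at the Webhook URL, whatever listens there just needs to parse the JSON body Harness posts. The sketch below reduces a payload to the fields a dashboard typically needs; the field names follow the sample payloads shown below, but the helper itself is illustrative and not part of Harness:

```python
import json

def summarize_event(payload):
    """Reduce a published Pipeline event payload to dashboard-ready fields.

    `completedAt` is only present on completion events, so duration_ms
    may be None for pipeline_start and similar events.
    """
    data = payload.get("data", {})
    started, completed = data.get("startedAt"), data.get("completedAt")
    return {
        "event": payload.get("eventType"),
        "pipeline": data.get("pipeline", {}).get("name"),
        "duration_ms": (completed - started) if started and completed else None,
    }

# Values taken from the Pipeline Complete example payload below.
sample = {
    "eventType": "pipeline_end",
    "data": {
        "pipeline": {"name": "Dummy Pipeline"},
        "startedAt": 1626951572549,
        "completedAt": 1626951580029,
    },
}
print(json.dumps(summarize_event(sample)))  # duration_ms works out to 7480
```

In practice you would call something like this from the request handler of the HTTP endpoint you registered as the Webhook URL.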
+
+Here is a Pipeline Start event payload example:
+
+
+```
+{
+  "id": "123-456-789-123-456",
+  "version": "v1",
+  "eventType": "pipeline_start",
+  "data": {
+    "application": {
+      "id": "Ze3W2gzaS7aQ-h8MKzJ-fA",
+      "name": "app name"
+    },
+    "pipeline": {
+      "id": "Ze3W2gzaS7aQ-h8MKzJ-fA",
+      "name": "Dummy Pipeline"
+    },
+    "triggeredBy": {
+      "uuid": "lv0euRhKRCyiXWzS7pOg6g",
+      "name": "Admin",
+      "email": "john.doe@harness.io"
+    },
+    "triggeredByType": "USER",
+    "startedAt": 1626951572549,
+    "services": [
+      {
+        "id": "sySeUbIXSE6CIh_aTXCA8g"
+      }
+    ],
+    "environments": [
+      {
+        "id": "ULwtK2C1ROCwPD--1g_uag"
+      }
+    ],
+    "infraDefinitions": [
+      {
+        "id": "foHTHGmMTVSHmojA2PCLCw"
+      }
+    ],
+    "executionId": "XN9OQ3S6Rli2yBYXFwSCBQ"
+  }
+}
+```
+Here is a Pipeline Complete event payload example:
+
+
+```
+{
+  "id": "123-456-789-123-456",
+  "version": "v1",
+  "eventType": "pipeline_end",
+  "data": {
+    "application": {
+      "id": "Ze3W2gzaS7aQ-h8MKzJ-fA",
+      "name": "app name"
+    },
+    "pipeline": {
+      "id": "Ze3W2gzaS7aQ-h8MKzJ-fA",
+      "name": "Dummy Pipeline"
+    },
+    "triggeredBy": {
+      "uuid": "lv0euRhKRCyiXWzS7pOg6g",
+      "name": "Admin",
+      "email": "john.doe@harness.io"
+    },
+    "triggeredByType": "USER",
+    "startedAt": 1626951572549,
+    "services": [
+      {
+        "id": "sySeUbIXSE6CIh_aTXCA8g"
+      }
+    ],
+    "environments": [
+      {
+        "id": "ULwtK2C1ROCwPD--1g_uag"
+      }
+    ],
+    "infraDefinitions": [
+      {
+        "id": "foHTHGmMTVSHmojA2PCLCw"
+      }
+    ],
+    "executionId": "XN9OQ3S6Rli2yBYXFwSCBQ",
+    "completedAt": 1626951580029,
+    "status": "SUCCESS"
+  }
+}
+```
+You can see the various Pipeline event details in both payload examples. The IDs correspond to Harness entities in the Harness Application, such as Services and Environments.
+
+To perform analysis on Pipeline performance, you can use the `startedAt` and `completedAt` timestamps (in milliseconds).
+
+### Review: Event Rules using GraphQL
+
+You can create and query the event rules for an Application using the Harness GraphQL API.
+ +See [Publish Pipeline Events to an HTTP Endpoint using the API](https://docs.harness.io/article/cfrqinjhci-publish-pipeline-events-to-an-http-endpoint-using-the-api). + +### Notes + +* The Event Rules do not appear in the Configure As Code YAML. +* If you delete an Application, its Event Rules are also deleted. + diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/publish-workflow-events-to-an-http-endpoint.md b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/publish-workflow-events-to-an-http-endpoint.md new file mode 100644 index 00000000000..5d19b166dbf --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/publish-workflow-events-to-an-http-endpoint.md @@ -0,0 +1,228 @@ +--- +title: Publish Workflow Events to an HTTP Endpoint +description: Send key Workflow deployment events to a URL endpoint as a JSON payload. +sidebar_position: 110 +helpdocs_topic_id: okinra1xu2 +helpdocs_category_id: cwefyz0jos +helpdocs_is_private: false +helpdocs_is_published: true +--- + +To help you analyze how Workflows are performing, Harness can send key Workflow deployment events to a URL endpoint as a JSON payload. Next, you can use other tools to consume and build dashboards for the events. + +This topic explains how to publish Workflow events. For information on how to publish Pipeline events, see [Publish Pipeline Events to an Http Endpoint](publish-pipeline-events-to-an-http-endpoint.md). + + +### Before You Begin + +* [Create a Workflow](https://docs.harness.io/article/o86qyexcab-tags-how-tos) +* [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms) + +### Review: Events Published + +Currently, Harness publishes the following events: + +* WORKFLOW START +* WORKFLOW COMPLETE +* WORKFLOW PAUSE +* WORKFLOW CONTINUE + +Harness will be adding more events soon. + +### Step 1: Add Event Rule + +In your Application, in **Event Rules**, click **Add**. 
+ +![](./static/publish-workflow-events-to-an-http-endpoint-35.png) + +The **Rule Configuration** settings appear. + +![](./static/publish-workflow-events-to-an-http-endpoint-36.png) + +Enter a name for the event rule. + +#### Option: Send Me Everything + +Select **Send me everything** to send all the events that Harness currently publishes. + +#### Option: Workflow + +Select **Workflow** to send events for specific Workflows. In **Workflows**, select the Workflows you want events for, or select **All Workflows**. + +![](./static/publish-workflow-events-to-an-http-endpoint-37.png) + +#### Option: Pipeline + +Select **Pipeline** to send events for specific Pipelines. For more information, see [Publish Pipeline Events to an HTTP Endpoint](publish-pipeline-events-to-an-http-endpoint.md). + +### Step 2: Select Events + +In **Events**, select the event types you want to publish or select **All Events**. + +![](./static/publish-workflow-events-to-an-http-endpoint-38.png) + +### Step 3: Enter the Webhook URL + +In **Webhook URL**, enter the HTTP endpoint URL where Harness will publish the events for this rule. + +This is the URL you will use to consume events. + +### Step 4: Add Headers + +Add any HTTP headers that you want to send with the events. + +### Step 5: Delegate Selector + +By default, Harness will select any available Delegate. You might want to use a specific Delegate because you know it has access to the endpoint URL. + +In **Delegate Selector**, select the Selector for the Delegate(s) you want to use. + +You add Selectors to Delegates to make sure that they're used to execute the command. For more information, see [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors). + +Harness will use Delegates matching the Selectors you add. + +If you use one Selector, Harness will use any Delegate that has that Selector. 
If you select two Selectors, a Delegate must have both Selectors to be selected. That Delegate might also have other Selectors, but it must have the two you selected.

You can use expressions for Harness built-in variables or Account Default variables in **Delegate Selectors**. When the variable expression is resolved at deployment runtime, it must match an existing Delegate Selector.

Click **Submit**.

### Step 6: Enable the Rule

By default, the Rule you just created is disabled. You can enable the rule using the toggle button.

![](./static/publish-workflow-events-to-an-http-endpoint-39.png)

### Step 7: Test the Event Rule

You can test an event rule to see if the rule is enabled or disabled.

To test the rule, click **Test**.

Here is a Workflow Start event payload example:


```
{
  "id": "86afca34-4cb1-4859-b1c6-11f398f55a4b",
  "eventType": "workflow_start",
  "data": {
    "pipeline": {
      "name": "K8s Prod Pipeline",
      "id": "T4XKvdmXRM-zjyfyMHBlxg"
    },
    "application": {
      "name": "Harness Sample App",
      "id": "V9DxliUiS_SVzmNPkkR5Ow"
    },
    "workflow": {
      "name": "To-Do List K8s Rolling",
      "id": "smcCVuvCT4SCyayxG85U8w"
    },
    "environments": [
      {
        "id": "5Rx44pB4RC-eAt2SB10l_A"
      }
    ],
    "workflowExecution": {
      "id": "Ji0GRNDvSKCaiNmrLT8Fmg"
    },
    "pipelineExecution": {
      "id": "j_V2nAroRbCdy8tmiy6eoA"
    },
    "startedAt": 1644938260133,
    "infraDefinitions": [
      {
        "id": "MfxXXKPdRjysPEE_KZCaHA"
      }
    ],
    "services": [
      {
        "id": "2U9X2Z3YTBmQOvUMaWDy-Q"
      }
    ],
    "triggeredBy": {
      "name": "Admin",
      "uuid": "lv0euRhKRCyiXWzS7pOg6g",
      "email": "admin@harness.io"
    }
  },
  "version": "v1"
}
```

Here is a Workflow Complete event payload example:


```
{
  "id": "0c3eecb7-e9cf-400a-b3b1-be6ce9d2bf9f",
  "eventType": "workflow_end",
  "data": {
    "pipeline": {
      "name": "K8s Prod Pipeline",
      "id": "T4XKvdmXRM-zjyfyMHBlxg"
    },
    "completedAt": 1644938268018,
    "application": {
      "name": "Harness Sample App",
      "id": "V9DxliUiS_SVzmNPkkR5Ow"
    },
    "workflow": {
      "name": "To-Do List K8s Rolling",
      "id": "smcCVuvCT4SCyayxG85U8w"
    },
    "environments": [
      {
        "id": "5Rx44pB4RC-eAt2SB10l_A"
      }
    ],
    "workflowExecution": {
      "id": "Ji0GRNDvSKCaiNmrLT8Fmg"
    },
    "pipelineExecution": {
      "id": "j_V2nAroRbCdy8tmiy6eoA"
    },
    "startedAt": 1644938260133,
    "infraDefinitions": [
      {
        "id": "MfxXXKPdRjysPEE_KZCaHA"
      }
    ],
    "services": [
      {
        "id": "2U9X2Z3YTBmQOvUMaWDy-Q"
      }
    ],
    "triggeredBy": {
      "name": "Admin",
      "uuid": "lv0euRhKRCyiXWzS7pOg6g",
      "email": "admin@harness.io"
    },
    "status": "FAILED"
  },
  "version": "v1"
}
```

You can see various Workflow event details in both payload examples. The Ids correspond to Harness entities in the Harness Application, such as Services and Environments.

To perform analysis on Workflow performance, you can use the `startedAt` and `completedAt` timestamps (in milliseconds).

### Notes

* The Event Rules do not appear in the Configure As Code YAML.
* If you delete an Application, its Event Rules are also deleted.

diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/restrict-deployment-access-to-specific-environments.md b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/restrict-deployment-access-to-specific-environments.md
new file mode 100644
index 00000000000..e37fb26bd52
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/restrict-deployment-access-to-specific-environments.md
@@ -0,0 +1,151 @@
---
title: Restrict Deployment Access to Specific Environments
description: This content is for Harness FirstGen.
Switch to NextGen. By default, all Harness User Group members have full permissions on all Applications. Using Harness RBAC functionality, you can restrict the d…
sidebar_position: 60
helpdocs_topic_id: twlzny81xl
helpdocs_category_id: cwefyz0jos
helpdocs_is_private: false
helpdocs_is_published: true
---

This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io).

By default, all Harness User Group members have full permissions on all Applications.

Using Harness RBAC functionality, you can restrict the deployments a User Group may perform to specific Harness Applications and their subordinate Environments.

Restricting a User Group's deployments to specific Environments enables you to manage which target infrastructures are impacted by your different teams. For example, you can have Dev environments impacted only by Dev teams, and QA environments impacted only by QA teams.

### Before You Begin

Ensure you are familiar with the following Harness features:

* [Managing Users and Groups (RBAC)](https://docs.harness.io/article/ven0bvulsj-users-and-permissions)
* [Create an Application](https://docs.harness.io/article/bucothemly-application-configuration)
* [Add an Environment](https://docs.harness.io/article/n39w05njjv-environment-configuration)

### Visual Summary

In the following image, you can see that the Application Permissions for a Harness User Group are set for a specific Application and three of its Environments:

![](./static/restrict-deployment-access-to-specific-environments-14.png)

Members of this User Group will have permission to execute Workflow and Pipeline deployments for those target Environments only.

### Option: Create or Edit a User Group

Harness User Groups are managed in **Security** > **Access Management** > **User Groups**.

Open a User Group.
You will edit its **Application Permissions** to restrict its members' deployment permissions to specific Application Environments.

For steps on setting up a User Group, see [Managing Users and Groups (RBAC)](https://docs.harness.io/article/ven0bvulsj-users-and-permissions).

### Option: Set Application Permissions

By default, all Harness User Group members have full permissions on all Applications.

![](./static/restrict-deployment-access-to-specific-environments-15.png)

In **Application Permissions**, click the pencil icon to edit the default permissions.

![](./static/restrict-deployment-access-to-specific-environments-16.png)

The Application Permission settings appear. Configure the following settings:

1. In **Permission Type**, enable **Deployments** or any other permissions other than **All Permission Types**.
2. In **Application**, select the Application(s) you want to grant deployment permissions for. Use the search feature if needed.
3. In **Filter**, select the Environments in the Applications you selected. These are the Environments that you want to allow the User Group members to deploy to. Use the search feature if needed.
4. In **Action**, select **Read**, **Execute Workflow**, **Execute Pipeline**, and **Rollback Workflow**.

When you're done, the Application Permission will look something like this:

![](./static/restrict-deployment-access-to-specific-environments-17.png)

Now this User Group's members can only deploy to the Environments you selected.

Add and remove members as needed.

### Option: Set Application Permissions for Execute Workflow

In **Application Permissions**, click the pencil icon to edit the default permissions.

The Application Permission settings appear. Configure the following settings:

1. In **Permission Type**, enable **Deployments** or any other permissions other than **All Permission Types**.
2.
In **Application**, select the Application(s) you want to grant deployment permissions for. Use the search feature if needed.
3. In **Filter**, select the Environments in the Applications you selected. These are the Environments that you want to allow the User Group members to deploy to. Use the search feature if needed.
4. In **Action**, select **Read** and **Execute Workflow**.

User Group members with only **Execute Workflow** permissions cannot **Execute Pipeline** or **Rollback Workflow**.

When you're done, the Application Permission will look something like this:

![](./static/restrict-deployment-access-to-specific-environments-18.png)

### Option: Set Application Permissions for Execute Pipeline

In **Application Permissions**, click the pencil icon to edit the default permissions.

The Application Permission settings appear. Configure the following settings:

1. In **Permission Type**, enable **Deployments** or any other permissions other than **All Permission Types**.
2. In **Application**, select the Application(s) you want to grant deployment permissions for. Use the search feature if needed.
3. In **Filter**, select the Environments in the Applications you selected. These are the Environments that you want to allow the User Group members to deploy to. Use the search feature if needed.
4. In **Action**, select **Read** and **Execute Pipeline**.

User Group members with only **Execute Pipeline** permissions cannot **Execute Workflow** or **Rollback Workflow**.

When you're done, the Application Permission will look something like this:

![](./static/restrict-deployment-access-to-specific-environments-19.png)

### Option: Set Application Permissions for Rollback Workflow

In **Application Permissions**, click the pencil icon to edit the default permissions.

The Application Permission settings appear. Configure the following settings:

1.
In **Permission Type**, enable **Deployments** or any other permissions other than **All Permission Types**.
2. In **Application**, select the Application(s) you want to grant deployment permissions for. Use the search feature if needed.
3. In **Filter**, select the Environments in the Applications you selected. These are the Environments that you want to allow the User Group members to deploy to. Use the search feature if needed.
4. In **Action**, select **Read** and **Rollback Workflow**.

User Group members with only **Rollback Workflow** permissions cannot **Execute Pipeline** or **Execute Workflow**.

When you're done, the Application Permission will look something like this:

![](./static/restrict-deployment-access-to-specific-environments-20.png)

### Deployments Permissions

The following table lists what User Group members can and cannot do based on the Action assigned for the Deployments Permission Type:

| **Permission Type** | **Action** | **Users Can** | **Users Cannot** |
| --- | --- | --- | --- |
| Deployments | Read | Can View Workflow Executions<br/>Can View Pipeline Executions | Cannot Execute/Pause/Resume/Abort/Rollback Workflow Executions<br/>Cannot perform any Manual Intervention Workflow<br/>Cannot Execute/Pause/Resume/Abort Pipeline Executions<br/>Cannot perform any Manual Intervention or Runtime Inputs for Pipelines |
| | Execute Workflow | Can View/Execute/Pause/Resume/Abort Workflow Executions<br/>Can perform Manual Intervention Workflow (except Rollback) | Cannot Rollback Workflow Executions<br/>Cannot Execute/Pause/Resume/Abort Pipeline Executions<br/>Cannot perform Manual Intervention Pipeline |
| | Execute Pipeline | Can View/Execute/Pause/Resume/Abort Pipeline Executions<br/>Can perform Manual Intervention Pipeline (except Rollback) | Cannot Execute/Pause/Resume/Abort/Rollback Workflow Executions<br/>Cannot perform Manual Intervention Workflow (except Rollback) |
| | Rollback Workflow | Can View Workflow Executions<br/>Can View Pipeline Executions<br/>Can Rollback Workflow Executions | Cannot Execute/Pause/Resume/Abort Workflow Executions<br/>Cannot Execute/Pause/Resume/Abort Pipeline Executions<br/>Cannot perform Manual Intervention Workflows<br/>Cannot perform Manual Intervention Pipelines |

### Related

* **Assign Permissions** in [Use Users and Groups API](https://docs.harness.io/article/p9ssx4cv5t-sample-queries-create-users-user-groups-and-assign-permissions)
* [Manage User Notifications](https://docs.harness.io/article/kf828e347t-notification-groups)

diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/resume-a-pipeline-deployment.md b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/resume-a-pipeline-deployment.md
new file mode 100644
index 00000000000..41114509440
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/resume-a-pipeline-deployment.md
@@ -0,0 +1,82 @@
---
title: Resume Pipeline Deployments
description: Describes how to resume Pipeline deployments that fail during execution.
sidebar_position: 80
helpdocs_topic_id: 4dvyslwbun
helpdocs_category_id: cwefyz0jos
helpdocs_is_private: false
helpdocs_is_published: true
---

This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io).

You can resume Pipeline deployment executions that meet the following criteria:

* Failed
* Aborted
* Expired
* Rejected

You cannot resume Successful or Paused executions.

Pipeline deployments might not be completed for many reasons, such as changes to resource access. In these cases, rerunning an entire Pipeline can be costly and time-consuming.

Harness provides an option to resume your Pipeline deployment from any previously executed stage.

The stages after the resumed stage are executed.
Stages preceding the stage you selected are not executed again.


### Before You Begin

* [Pipelines](https://docs.harness.io/article/zc1u96u6uj-pipeline-configuration)
* [Create Pipeline Templates](https://docs.harness.io/article/60j7391eyy-templatize-pipelines)
* [Pipeline Skip Conditions](https://docs.harness.io/article/6kefu7s7ne-skip-conditions)

### Limitations

The following limitations apply when you resume a Pipeline:

* **You cannot change mandatory settings when you resume:** You cannot change the **Start New Deployment** inputs, variables, and Artifacts that were passed when you started your deployment.
* You can resume a Pipeline that failed, expired, aborted, or was rejected.
* The Pipeline and the Workflows used in the Pipeline can't be changed.
* The templatization can't be changed.
* You cannot add any new stage or change any of the existing stages.

**Aborting** and **Rollback** are different. When you abort, Harness does not clean up any deployed resources or roll back to a previous release and infrastructure.

### Review: Permissions

To resume a Pipeline, a Harness User must belong to a User Group that has the following Application Permissions:

* **Permission Type:** Deployments, **Action:** Execute Pipeline
* **Permission Type:** Deployments, **Action:** Execute Pipeline, **Application:** <Application>
* **Permission Type:** Deployments, **Action:** Execute Pipeline, **Environment:** <Environment Type>, **Application:** <Application>, but only to <Environment Type>

### Step: Resume Pipeline

1. From **Continuous Deployment**, go to your **Deployments**.

   ![](./static/resume-a-pipeline-deployment-00.png)

2. Click the failed deployment you want to resume, and then click the **Resume Pipeline** icon.

   ![](./static/resume-a-pipeline-deployment-01.png)

3. In **Resume Pipeline**, select the stage from which you want to resume your Pipeline deployment and click **Resume**.
   ![](./static/resume-a-pipeline-deployment-02.png)

Harness will execute the stage you choose and all the subsequent stages.

#### Multiple Workflow Sets Running in Parallel

The resume capability runs at the stage level. Even if you have multiple Workflow sets running in parallel, they belong to the same stage. In that case, the resume option runs for the whole set, and the full stage gets resumed.

For example, in the following image, **To-Do List K8s Rolling** and **Failing** are set up to execute in parallel. They belong to the same stage, STAGE 4. Even if only one of them fails, both Workflows will rerun when you resume the Pipeline deployment.

![](./static/resume-a-pipeline-deployment-03.png)

### Option: View Execution History

1. To view the execution history, go to the **Deployments** page.
2. Click the history button to view the execution history. It lists the details of the previous executions.

   ![](./static/resume-a-pipeline-deployment-04.png)

You can click a previously failed execution to view its details.

The **Continuous Deployment** page lists only the most recent Pipeline executions that have been resumed.
\ No newline at end of file
diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/deploy-to-multiple-infrastructures-08.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/deploy-to-multiple-infrastructures-08.png
new file mode 100644
index 00000000000..49eeff42dc8
Binary files /dev/null and
\ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/deploy-to-multiple-infrastructures-08.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/deploy-to-multiple-infrastructures-08.png new file mode 100644 index 00000000000..49eeff42dc8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/deploy-to-multiple-infrastructures-08.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/deploy-to-multiple-infrastructures-09.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/deploy-to-multiple-infrastructures-09.png new file mode 100644 index 00000000000..237557fa88a Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/deploy-to-multiple-infrastructures-09.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/deploy-to-multiple-infrastructures-10.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/deploy-to-multiple-infrastructures-10.png new file mode 100644 index 00000000000..49eeff42dc8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/deploy-to-multiple-infrastructures-10.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/deploy-to-multiple-infrastructures-11.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/deploy-to-multiple-infrastructures-11.png new file mode 100644 index 00000000000..b7bc1ca5168 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/deploy-to-multiple-infrastructures-11.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/deploy-to-multiple-infrastructures-12.png 
b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/deploy-to-multiple-infrastructures-12.png new file mode 100644 index 00000000000..f6137437a95 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/deploy-to-multiple-infrastructures-12.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/deploy-to-multiple-infrastructures-13.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/deploy-to-multiple-infrastructures-13.png new file mode 100644 index 00000000000..bfb0fe9bb3b Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/deploy-to-multiple-infrastructures-13.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-21.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-21.png new file mode 100644 index 00000000000..3075f5ca7e0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-21.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-22.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-22.png new file mode 100644 index 00000000000..47d269da306 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-22.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-23.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-23.png new file mode 100644 index 00000000000..c061dfe4710 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-23.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-24.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-24.png new file mode 100644 index 00000000000..6144550b3eb Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-24.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-25.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-25.png new file mode 100644 index 00000000000..c8205ae3183 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-25.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-26.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-26.png new file mode 100644 index 00000000000..9f89dca13b9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-26.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-27.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-27.png new file mode 100644 index 00000000000..da946acca9a Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-27.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-28.png 
b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-28.png new file mode 100644 index 00000000000..d7b886e19dd Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/export-deployment-logs-28.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/filtering-deployments-05.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/filtering-deployments-05.png new file mode 100644 index 00000000000..c0e42a28556 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/filtering-deployments-05.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/filtering-deployments-06.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/filtering-deployments-06.png new file mode 100644 index 00000000000..bca84bb8052 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/filtering-deployments-06.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/filtering-deployments-07.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/filtering-deployments-07.png new file mode 100644 index 00000000000..f99a253780b Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/filtering-deployments-07.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-pipeline-events-to-an-http-endpoint-40.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-pipeline-events-to-an-http-endpoint-40.png new file mode 100644 index 00000000000..e95c2aecacb Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-pipeline-events-to-an-http-endpoint-40.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-pipeline-events-to-an-http-endpoint-41.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-pipeline-events-to-an-http-endpoint-41.png new file mode 100644 index 00000000000..31092518d95 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-pipeline-events-to-an-http-endpoint-41.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-pipeline-events-to-an-http-endpoint-42.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-pipeline-events-to-an-http-endpoint-42.png new file mode 100644 index 00000000000..7bc72a1719e Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-pipeline-events-to-an-http-endpoint-42.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-workflow-events-to-an-http-endpoint-35.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-workflow-events-to-an-http-endpoint-35.png new file mode 100644 index 00000000000..49ec732c4ac Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-workflow-events-to-an-http-endpoint-35.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-workflow-events-to-an-http-endpoint-36.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-workflow-events-to-an-http-endpoint-36.png new file mode 100644 index 00000000000..1d61787ebf3 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-workflow-events-to-an-http-endpoint-36.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-workflow-events-to-an-http-endpoint-37.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-workflow-events-to-an-http-endpoint-37.png new file mode 100644 index 00000000000..6c797aa33a7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-workflow-events-to-an-http-endpoint-37.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-workflow-events-to-an-http-endpoint-38.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-workflow-events-to-an-http-endpoint-38.png new file mode 100644 index 00000000000..46fb14fcc8c Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-workflow-events-to-an-http-endpoint-38.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-workflow-events-to-an-http-endpoint-39.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-workflow-events-to-an-http-endpoint-39.png new file mode 100644 index 00000000000..76137aaa805 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/publish-workflow-events-to-an-http-endpoint-39.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/restrict-deployment-access-to-specific-environments-14.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/restrict-deployment-access-to-specific-environments-14.png new file mode 100644 index 00000000000..a36773d9129 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/restrict-deployment-access-to-specific-environments-14.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/restrict-deployment-access-to-specific-environments-15.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/restrict-deployment-access-to-specific-environments-15.png new file mode 100644 index 00000000000..071745e7632 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/restrict-deployment-access-to-specific-environments-15.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/restrict-deployment-access-to-specific-environments-16.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/restrict-deployment-access-to-specific-environments-16.png new file mode 100644 index 00000000000..50ac17d8795 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/restrict-deployment-access-to-specific-environments-16.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/restrict-deployment-access-to-specific-environments-17.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/restrict-deployment-access-to-specific-environments-17.png new file mode 100644 index 00000000000..7bbfc945973 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/restrict-deployment-access-to-specific-environments-17.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/restrict-deployment-access-to-specific-environments-18.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/restrict-deployment-access-to-specific-environments-18.png new file mode 100644 index 00000000000..97e0e4b0e3b Binary 
files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/restrict-deployment-access-to-specific-environments-18.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/restrict-deployment-access-to-specific-environments-19.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/restrict-deployment-access-to-specific-environments-19.png new file mode 100644 index 00000000000..4784f2fc430 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/restrict-deployment-access-to-specific-environments-19.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/restrict-deployment-access-to-specific-environments-20.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/restrict-deployment-access-to-specific-environments-20.png new file mode 100644 index 00000000000..84876d8d153 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/restrict-deployment-access-to-specific-environments-20.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/resume-a-pipeline-deployment-00.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/resume-a-pipeline-deployment-00.png new file mode 100644 index 00000000000..41dfdf52a71 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/resume-a-pipeline-deployment-00.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/resume-a-pipeline-deployment-01.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/resume-a-pipeline-deployment-01.png new file mode 100644 index 00000000000..6681ae9c376 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/resume-a-pipeline-deployment-01.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/resume-a-pipeline-deployment-02.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/resume-a-pipeline-deployment-02.png new file mode 100644 index 00000000000..e335db71bb4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/resume-a-pipeline-deployment-02.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/resume-a-pipeline-deployment-03.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/resume-a-pipeline-deployment-03.png new file mode 100644 index 00000000000..41dfdf52a71 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/resume-a-pipeline-deployment-03.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/resume-a-pipeline-deployment-04.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/resume-a-pipeline-deployment-04.png new file mode 100644 index 00000000000..ea92eee94bb Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/resume-a-pipeline-deployment-04.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/view-the-delegates-used-in-a-deployment-29.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/view-the-delegates-used-in-a-deployment-29.png new file mode 100644 index 00000000000..9f2bfec7fde Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/view-the-delegates-used-in-a-deployment-29.png differ diff --git 
a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/view-the-delegates-used-in-a-deployment-30.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/view-the-delegates-used-in-a-deployment-30.png new file mode 100644 index 00000000000..5ab0e061652 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/view-the-delegates-used-in-a-deployment-30.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/view-the-delegates-used-in-a-deployment-31.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/view-the-delegates-used-in-a-deployment-31.png new file mode 100644 index 00000000000..3b52b9c0bdb Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/view-the-delegates-used-in-a-deployment-31.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/view-the-delegates-used-in-a-deployment-32.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/view-the-delegates-used-in-a-deployment-32.png new file mode 100644 index 00000000000..7feab18f26e Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/view-the-delegates-used-in-a-deployment-32.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/view-the-delegates-used-in-a-deployment-33.png b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/view-the-delegates-used-in-a-deployment-33.png new file mode 100644 index 00000000000..49d226c7a20 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/view-the-delegates-used-in-a-deployment-33.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/view-the-delegates-used-in-a-deployment-34.png 
b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/view-the-delegates-used-in-a-deployment-34.png new file mode 100644 index 00000000000..33d61b25b09 Binary files /dev/null and b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/static/view-the-delegates-used-in-a-deployment-34.png differ diff --git a/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/view-the-delegates-used-in-a-deployment.md b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/view-the-delegates-used-in-a-deployment.md new file mode 100644 index 00000000000..f513b5c847e --- /dev/null +++ b/docs/first-gen/continuous-delivery/concepts-cd/deployments-overview/view-the-delegates-used-in-a-deployment.md @@ -0,0 +1,89 @@ +--- +title: View the Delegates Used in a Deployment +description: This content is for Harness FirstGen. Switch to NextGen. Each task performed by a Harness deployment is assigned to a Delegate. Knowing which Delegate was used for a task can be useful when diagnosin… +sidebar_position: 40 +helpdocs_topic_id: nf19np91sg +helpdocs_category_id: cwefyz0jos +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/0slo2gklsy).Each task performed by a Harness deployment is assigned to a Delegate. Knowing which Delegate was used for a task can be useful when diagnosing issues, or when planning on infrastructure changes. + +Harness displays which Delegate performed a task in the Deployments page. You simply click on a command in a deployment's graph and select **View Delegate Selection** in its details. + +This topic will walk you through the process. 
+ +### Before You Begin + +* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) +* [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) + +### Visual Summary + +The following illustration shows you how to view Delegate selection for each deployment task: + +![](./static/view-the-delegates-used-in-a-deployment-29.png) + +1. Select a command in the deployment graph. +2. Click the *more options* button (**︙**) in the command's details. +3. Select **View Delegate Selection**. +4. Click the Delegate name to see the Delegate on the **Harness Delegates** page. + +### Review: How Does Harness Pick Delegates? + +When a task is ready to be assigned, the Harness Manager first validates its list of Delegates to see which Delegate should be assigned the task. + +The following information describes how the Harness Manager validates and assigns tasks to a Delegate: + +* **Heartbeats** - Running Delegates send heartbeats to the Harness Manager at 1-minute intervals. If the Manager does not have a heartbeat for a Delegate when a task is ready to be assigned, it will not assign the task to that Delegate. +* **Selectors and Scoping** - For more information, see [Delegate Selectors](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#delegate_selectors) and [Delegate Scope](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#delegate_scope). +* **Whitelisting** - Once a Delegate has been validated for a task, it is whitelisted for that task and will likely be used again for that task. The whitelisting criterion is the URL associated with the task, such as a connection to a cloud platform, repo, or API. A Delegate is whitelisted for all tasks using that URL. The Time-To-Live (TTL) for the whitelisting is 6 hours, and the TTL is reset with each successful task validation.
+* **Blacklisting** - If a Delegate fails to perform a task, that Delegate is blacklisted for that task and will not be tried again. The TTL is 5 minutes. This is true even if there is only one Delegate, and even if the Delegate is selected for that task with a Selector, such as with a Shell Script command in a Workflow. + +### Step 1: Select a Deployment Command + +In Harness, click **Continuous Deployment** to see the Harness **Deployments** page. + +Click on a **deployment**. You can use the filter options to locate specific deployments by Tag, Application name, date range, and so on. + +In the deployment, click a command. The details for the command appear. + +![](./static/view-the-delegates-used-in-a-deployment-30.png) + +### Step 2: Select View Delegate Selection + +In the details pane for the command, click the *more options* button (**︙**) and select **View Delegate Selection**. + +![](./static/view-the-delegates-used-in-a-deployment-31.png) + +### Step 3: View the Delegate Selection for the Command + +The Delegate selection information contains the Delegate(s) used for the command, their assessment (Selected, etc.), details about how the Delegate was used, and the timestamp of the log entry. + +![](./static/view-the-delegates-used-in-a-deployment-32.png) + +The **Assessment** and **Details** sections explain why a Delegate was or wasn't selected. + +#### Delegate selection logs do not apply + +If you see the message `Delegate selection logs do not apply`, it means that the Workflow step does not require a Delegate or that there is no information in the database. + +### Step 4: View the Delegate Status and Settings + +In the **Delegate** column, you will see the Delegate name. You can click the name to go to the Harness Delegates page and see the Delegates. + +![](./static/view-the-delegates-used-in-a-deployment-33.png) + +### Limitations + +Full support for all Workflow steps will be added soon.
A tooltip in **View Delegate Selection** will indicate any currently unsupported Workflow steps. + +### Deployments Prior to Enabling View Delegate Selection + +For deployments that occurred before the View Delegate Selection feature was enabled, you will see the message `There are no records available`: + +![](./static/view-the-delegates-used-in-a-deployment-34.png) + +After View Delegate Selection is enabled (the Feature Flag is removed), only subsequent deployments will display Delegate selection records for supported steps. + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/_category_.json b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/_category_.json new file mode 100644 index 00000000000..cbbd095c18a --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "24/7 Service Guard", + "position": 15, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "24/7 Service Guard" + }, + "customProps": { + "helpdocs_category_id": "kam4aj3pd4" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/custom-thresholds-24-7.md b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/custom-thresholds-24-7.md new file mode 100644 index 00000000000..ecb5dbb7097 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/custom-thresholds-24-7.md @@ -0,0 +1,69 @@ +--- +title: Apply Custom Thresholds to 24/7 Service Guard +description: Define Ignore Hints rules, which instruct Harness to remove certain metrics/value combinations from 24/7 Service Guard analysis. 
+# sidebar_position: 2 +helpdocs_topic_id: 53t35yit8p +helpdocs_category_id: kam4aj3pd4 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Use Custom Thresholds to define **Ignore Hints** rules that instruct Harness to remove certain metrics/value combinations from 24/7 Service Guard analysis. + +### Before You Begin + +* See [24/7 Service Guard Overview](set-up-service-guard.md). +* In your Application's Environment, select **24/7 Service Guard** > **Add Service Verification**. +* Select a verification provider compatible with Custom Thresholds. Harness currently supports Custom Thresholds with [Prometheus](../prometheus-verification/2-24-7-service-guard-for-prometheus.md) and [Custom APMs](../custom-metrics-and-logs-verification/custom-verification-overview.md). + + ![](./static/custom-thresholds-24-7-00.png) + +* Configure at least one Metrics Collection for this verification provider. + + +### Step 1: Invoke Custom Thresholds + +To begin defining one or more Ignore Hints: + +1. In the configuration dialog for your selected 24/7 Service Guard verification provider, click the pencil icon shown below. + + ![](./static/custom-thresholds-24-7-01.png) + +2. In the resulting dialog, click **Add Threshold** to begin defining a rule, as shown below. + + ![](./static/custom-thresholds-24-7-02.png) + + +### Step 2: Define a Rule + +Use the drop-downs to select a desired **Transaction Name** and **Metric Name** from your defined Metrics Collections. + + +### Step 3: Select Criteria + +Select the **Criteria** for this rule, and enter a corresponding **Value**. (Depending on your **Criteria** selection, the **Value** field's label will change to **Less than** and/or **Greater than**.) + +![](./static/custom-thresholds-24-7-03.png) + +Here are the **Criteria** and **Value** options available for the metric you've selected.
+ + + +| Criteria | Value | +| --- | --- | +| Absolute Value | Enter literal values of the selected metric in the **Greater than** and **Less than** fields. Observed values between these two threshold boundaries will be removed from 24/7 Service Guard analysis. | +| Percentage Deviation | Enter a threshold percentage at which to remove the metric from 24/7 Service Guard analysis. Units here are percentages, so entering **Less than:** `3` will instruct Harness to ignore anomalies less than 3% away from the norm. | +| Deviation | This also sets a threshold deviation from the norm. But here, the units are not percentages, but literal values of the selected metric. Observed anomalies **Less than** the threshold you enter will be removed from analysis. | + + +### Step 4: Add Rules and Save + +1. If you want to define additional rules, click **Add Threshold**, then repeat Steps 2–3. +2. Click **Submit** to save your rules and apply them to 24/7 Service Guard verification for this Service. + + +### Next Steps + +* In Harness' Continuous Verification dashboard, the [24/7 Service Guard heat map](set-up-service-guard.md#setup-overview) for this Service and verification provider will show no risk indicators for events that fall within your Ignore Hints. + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/set-up-service-guard.md b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/set-up-service-guard.md new file mode 100644 index 00000000000..6661cacb949 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/set-up-service-guard.md @@ -0,0 +1,124 @@ +--- +title: Set Up 24/7 Service Guard +description: Summarizes how to add supported APMs and logging tools to Harness 24/7 Service Guard, and how to configure alert notifications.
+# sidebar_position: 2 +helpdocs_topic_id: zrwsldxn94 +helpdocs_category_id: kam4aj3pd4 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +To go to the 24/7 Service Guard setup for a specific tool, see its listing in [Continuous Verification](https://docs.harness.io/category/continuous-verification). + +To see the list of all the APM and logging tools Harness supports, see [CV Summary and Provider Support](https://docs.harness.io/article/myw4h9u05l-verification-providers-list). + +For information on analysis strategies and best practices, see [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). + +### Limitations + +By default, you can set up 20 Service Guards in your account. You can request an increase from Harness. + +Harness places this limitation to ensure that your Service Guards can be supported adequately in our SaaS environment. + +Note that a Service Guard is not the same as a Service deployed by Harness. A Service Guard is a combination of a Service and an Environment, and is managed at the Environment level. Consequently, the Service Guard limit is a limit on Service and Environment combinations. + +### Scope Harness Delegates for 24/7 Service Guard + +You can scope Harness Delegates to perform different tasks. + +As a best practice, you should scope one or more Delegates to perform 24/7 Service Guard tasks. + +Scoping ensures that 24/7 Service Guard tasks do not consume resources from other real-time processing, such as deployments and pulling artifacts. + +You scope the Delegate by adding a Delegate Scope that uses the **Service Guard** task. + +Harness recommends you use a Delegate for no more than 40 Continuous Verification setups. + +For more information, see [Scope Delegates to Harness Components and Commands](https://docs.harness.io/article/hw56f9nz7q-scope-delegates-to-harness-components-and-commands).
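The two limits above (20 Service Guards per account by default, and no more than 40 Continuous Verification setups per Delegate) combine into a simple sizing calculation when you plan Delegate capacity. The following sketch is purely illustrative; the function name and structure are our own, not a Harness API:

```python
import math

# Recommended ceiling from this topic: at most 40 Continuous
# Verification setups per scoped Delegate.
MAX_CV_SETUPS_PER_DELEGATE = 40

def delegates_needed(cv_setups: int,
                     per_delegate: int = MAX_CV_SETUPS_PER_DELEGATE) -> int:
    """Return how many scoped Delegates to provision for the given
    number of 24/7 Service Guard / CV setups (illustrative only)."""
    if cv_setups <= 0:
        return 0
    # Round up: a partially loaded Delegate still counts as one.
    return math.ceil(cv_setups / per_delegate)

print(delegates_needed(20))   # the default 20-Service-Guard account limit fits on one Delegate
print(delegates_needed(100))  # a raised limit of 100 setups would need three Delegates
```

If you request a limit increase from Harness, re-run the calculation against the new setup count before deciding how many Delegates to scope to the **Service Guard** task.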
+ +### 24/7 Service Guard is for Production Environments + +Harness enables you to organize your infrastructure into groups called Environments. Environments represent your deployment infrastructures, such as Dev, QA, Stage, Production, etc. + +Harness uses two types of Environments: **Production** and **Non-Production**. + +24/7 Service Guard monitors live applications and is for Production Environments only. + +### Setup Overview + +Here are the high-level steps for setting up 24/7 Service Guard using one or more APM and logging tools: + +1. Connect each of your APM and logging tools to Harness as Verification Providers. Verification Providers contain the APM and logging tool account information Harness will use to access the tools via their APIs. For information on setting up a Verification Provider, see [Add Verification Providers](https://docs.harness.io/article/r6ut6tldy0-verification-providers). +2. Create a Harness Application. The Application will identify the application you want to monitor, will identify the production environment where the application is running, and will allow you to use Harness RBAC to control who can set up 24/7 Service Guard. For more information on setting up a Harness Application, see [Application Checklist](../../model-cd-pipeline/applications/application-configuration.md). +3. Add a Harness Service to your Application. The Service is a logical representation of your production application. You will add a Service for each application you want to monitor with 24/7 Service Guard. +4. Add a Harness Environment to your Application. The Environment represents the production environments for one or more applications. +5. Add a 24/7 Service Guard configuration for each Service in the Environment using a Verification Provider. + +Once 24/7 Service Guard is set up in a Harness Environment, the new configuration is listed according to its Service name (in this example, the Service name **Dev-CV-Todolist**).
+ + ![](./static/set-up-service-guard-04.png) + + In a few minutes, the Continuous Verification dashboard will display the 24/7 Service Guard configuration. + + ![](./static/set-up-service-guard-05.png) + + No deployment is needed to add the 24/7 Service Guard configuration to the dashboard. + +### Alert Notifications + +For each Verification Provider, you can customize the threshold and timing for alert notifications. To do so: + +1. Click the pencil icon to the right of the **Alert Notification** row. ![](./static/set-up-service-guard-06.png) +2. In the resulting **Alert Notification** dialog, select the **Enable Alert Notification** check box. ![](./static/set-up-service-guard-07.png) +3. Adjust the **Alert Threshold** slider to set the minimum severity level at which you want Harness to send alert notifications. + +The slider's scale represents the Overall Risk Level that Harness evaluates, based on data from your Verification Providers, transaction history, and machine-learning models. Harness' alerts are dynamic: over time, they will escalate or decrease, as we observe anomalies, regressions, and other factors. The scale's range corresponds to risk indicators on the dashboard's heat map as shown below. + + ![](./static/set-up-service-guard-08.png) + +4. Select the **Number of Occurrences** after which you want to receive alert notifications from the drop-down list. + +![](./static/set-up-service-guard-09.png) + +By default, the notifications that you configure here will appear under Harness Manager's bell-shaped **Alerts** indicator, and will also be sent to your [Catch-All Notification User Group](https://docs.harness.io/article/kf828e347t-notification-groups#catch_all_notification_rule). However, you can also configure detailed conditions that [route alert notifications to other User Groups](https://docs.harness.io/article/kf828e347t-notification-groups#alert_thresholds).
This dialog includes a link to Harness Manager's corresponding **Notification Settings** controls. + + +#### Suspending (Snoozing) Alerts + +Optionally, you can pause alerts, for example during lightly staffed periods. You'd do so in the **Alert Notification** dialog's **Snooze Alert** section, as follows: + +1. Click in the **From** field to reveal the calendar and clock display for the snooze start time. ![](./static/set-up-service-guard-10.png) +2. After setting the **From** date and time, use the **To** field's similar controls to set the snooze period's ending date and time. +3. Once the whole **Alert Notification** dialog is set to your specifications, click **SUBMIT** to save them. + +### Harness Variables and 24/7 Service Guard + +No Harness variable expressions may be used in 24/7 Service Guard setup. + +Harness variable expressions are evaluated at deployment runtime, and 24/7 Service Guard does not involve deployments. It only monitors live services. + +### Add Workflow Steps + +Once you have set up 24/7 Service Guard in an Environment, you can use the 24/7 Service Guard setup to quickly configure the **Verify Service** step in any Workflow that uses the Environment. + +For example, the following Canary Deployment Workflow uses an Environment with 24/7 Service Guard set up. In **Phase 1** of the Workflow, in **Verify Service**, you can add a Verification Provider. + + ![](./static/set-up-service-guard-11.png) + +1. Under **Verify Service**, click **Add Verification**. +2. In the **Add Command** dialog, under **Verifications**, select a Verification Provider that is also used in the 24/7 Service Guard of the Environment used by this Workflow. For example, **AppDynamics**. + + ![](./static/set-up-service-guard-12.png) + + The **AppDynamics** dialog appears. + + ![](./static/set-up-service-guard-13.png) + +3.
At the top of the dialog, click **Populate from Service Verification**, and then click the name of the 24/7 Service Guard configuration you want to use. + + ![](./static/set-up-service-guard-14.png) + +The dialog is automatically configured with the same settings as the 24/7 Service Guard configuration you selected. + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/custom-thresholds-24-7-00.png b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/custom-thresholds-24-7-00.png new file mode 100644 index 00000000000..85196200b20 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/custom-thresholds-24-7-00.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/custom-thresholds-24-7-01.png b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/custom-thresholds-24-7-01.png new file mode 100644 index 00000000000..613bb6dd65e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/custom-thresholds-24-7-01.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/custom-thresholds-24-7-02.png b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/custom-thresholds-24-7-02.png new file mode 100644 index 00000000000..27387064d5f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/custom-thresholds-24-7-02.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/custom-thresholds-24-7-03.png b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/custom-thresholds-24-7-03.png new file mode 100644 index 00000000000..23ef02092b9 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/custom-thresholds-24-7-03.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-04.png b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-04.png new file mode 100644 index 00000000000..0a264c4b1b3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-04.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-05.png b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-05.png new file mode 100644 index 00000000000..4c4bcfd07f8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-05.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-06.png b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-06.png new file mode 100644 index 00000000000..36420c71e15 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-06.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-07.png b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-07.png new file mode 100644 index 00000000000..c6296ad8e47 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-07.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-08.png b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-08.png new file mode 100644 index 00000000000..f5a5133324d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-08.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-09.png b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-09.png new file mode 100644 index 00000000000..7fbc5c79c27 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-09.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-10.png b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-10.png new file mode 100644 index 00000000000..f7eb7c6e2ea Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-10.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-11.png b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-11.png new file mode 100644 index 00000000000..6c421120202 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-11.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-12.png b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-12.png 
new file mode 100644 index 00000000000..b1c467fc51d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-12.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-13.png b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-13.png new file mode 100644 index 00000000000..ba73949829d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-13.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-14.png b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-14.png new file mode 100644 index 00000000000..b85e3aa8883 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/24-7-service-guard/static/set-up-service-guard-14.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/_category_.json b/docs/first-gen/continuous-delivery/continuous-verification/_category_.json new file mode 100644 index 00000000000..82575045927 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/_category_.json @@ -0,0 +1,15 @@ +{ + "label": "Continuous Verification", + "position": 400, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "Continuous Verification" + }, + "customProps": { + "helpdocs_category_id": "gurgsl2gqt", + "helpdocs_parent_category_id": "1qtels4t8p" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/1-app-dynamics-connection-setup.md 
b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/1-app-dynamics-connection-setup.md new file mode 100644 index 00000000000..a1219bdee46 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/1-app-dynamics-connection-setup.md @@ -0,0 +1,83 @@ +--- +title: Add AppDynamics as a Verification Provider +description: Connect Harness to AppDynamics, and verify the success of your deployments and live microservices. +sidebar_position: 10 +helpdocs_topic_id: cqy0jmm71h +helpdocs_category_id: bpoqe48x7r +helpdocs_is_private: false +helpdocs_is_published: true +--- + +To set up AppDynamics to work with Harness' Continuous Verification features, you must add AppDynamics as a Harness Verification Provider. + + +### Before You Begin + +* Set up a Harness Application, containing a Service and Environment. See [Create an Application](../../model-cd-pipeline/applications/application-configuration.md). +* See the [AppDynamics Verification Overview](../continuous-verification-overview/concepts-cv/app-dynamics-verification-overview.md). +* The AppDynamics account that you use to connect Harness to AppDynamics must have the following [General Permission](https://docs.appdynamics.com/21.9/en/appdynamics-essentials/account-management/tenant-user-management/create-and-manage-custom-roles/application-permissions#ApplicationPermissions-GeneralPermissions): `View`. + + +### Limitations + +Harness does not support [AppDynamics Lite](https://www.appdynamics.com/lite/). If you set up AppDynamics with Harness using an AppDynamics Pro Trial account, and that trial expires, you will be using AppDynamics Lite, which will *not* work with Harness. + + +If you require more flexibility than the standard integration outlined here, you also have the option to [add AppDynamics as a Custom APM](../custom-metrics-and-logs-verification/connect-to-app-dynamics-as-a-custom-apm.md).
+ + +### Step 1: Add Verification Provider + +To begin adding AppDynamics as a Harness Verification Provider: + +1. In Harness, click **Setup**. +2. Click **Connectors**, and then click **Verification Providers**. +3. Click **Add Verification Provider**, and select **AppDynamics**. The **Add AppDynamics Verification Provider** dialog appears. + + ![](./static/1-app-dynamics-connection-setup-13.png) + +### Step 2: Display Name + +Enter a name for this connection. You will use this name when selecting the Verification Provider in Harness Environments and Workflows. + +If you plan to use multiple providers of the same type, ensure that you give each provider a different name. + + +### Step 3: Account Name + +In the **Account Name** field, enter the name of the AppDynamics account you want to use. + +For Harness On-Prem, enter **customer1**. +### Step 4: Controller URL + +In the **Controller URL** field, enter the URL of the AppDynamics controller in the format: + +**http://<Controller\_Host>:<port>/controller** + +For example: + +**https://xxxx.saas.appdynamics.com/controller** + + +### Step 5: User Name and Password + +In the **User Name** and **Encrypted** **Password** fields, enter the credentials to authenticate with the AppDynamics server. + +In **Encrypted** **Password**, select or create a new [Harness Encrypted Text secret](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets). + +Only local AppDynamics users are supported. SAML users are not supported. +### Step 6: Usage Scope + +Usage scope is inherited from the secret used in **Encrypted Password**. + + +### Step 7: Test and Save + +1. When you have set up the dialog, click **Test**. +2. Once the test is successful, click **Submit** to add this Verification Provider.
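+ +If you want to sanity-check the account before clicking **Test**, you can exercise the AppDynamics Controller REST API directly: a GET on the applications endpoint with the credentials you plan to use confirms they can at least list (`View`) applications. The sketch below only builds the request pieces; the function names and example values are hypothetical, while the `user@account` login format and the `/rest/applications` endpoint are standard AppDynamics REST conventions. + +``` +import base64 + +def appdynamics_auth_header(username: str, account: str, password: str) -> str: +    """AppDynamics REST basic auth uses 'user@account' as the login name.""" +    token = base64.b64encode(f"{username}@{account}:{password}".encode()).decode() +    return f"Basic {token}" + +def applications_url(controller_url: str) -> str: +    """Endpoint that lists the applications the credentials can View.""" +    return f"{controller_url.rstrip('/')}/rest/applications?output=JSON" + +# Hypothetical example values: +url = applications_url("https://xxxx.saas.appdynamics.com/controller") +hdr = appdynamics_auth_header("harness-ro", "customer1", "s3cret") +``` + +Send `hdr` as the `Authorization` header on a GET to `url`; an HTTP 200 with a JSON list of applications indicates the credentials and `View` permission are sufficient for the Harness test to pass.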
+ + +### Next Step + +* [Use 24/7 Service Guard with AppDynamics](2-24-7-service-guard-for-app-dynamics.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/2-24-7-service-guard-for-app-dynamics.md b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/2-24-7-service-guard-for-app-dynamics.md new file mode 100644 index 00000000000..c6c2f274b52 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/2-24-7-service-guard-for-app-dynamics.md @@ -0,0 +1,94 @@ +--- +title: Monitor Applications 24/7 with AppDynamics +description: Combined with AppDynamics, Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. +sidebar_position: 20 +helpdocs_topic_id: i6f2irsz9v +helpdocs_category_id: bpoqe48x7r +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Enabling AppDynamics in Harness 24/7 Service Guard helps Harness monitor your live applications, catching problems that surface minutes or hours following deployment. + +### Before You Begin + +* See the [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). +* Set up a Harness Application, containing a Service and Environment. See [Create an Application](../../model-cd-pipeline/applications/application-configuration.md). For 24/7 Service Guard, you do not need to add an Artifact Source to the Service, nor configure its settings. You simply need to create a Service and name it. It will represent your application for 24/7 Service Guard. +* Add AppDynamics monitoring to Harness 24/7 Service Guard in your Environment. See [Add AppDynamics as a Verification Provider](1-app-dynamics-connection-setup.md). + + +### Step 1: Add AppDynamics Verification + +To set up 24/7 Service Guard for AppDynamics, do the following: + +1.
In your Harness Application, click **Environments**. +2. In **Environments**, ensure that you have added an Environment for the Service you added. For steps on adding an Environment, see [Environments](../../model-cd-pipeline/environments/environment-configuration.md). +3. Click the Environment for your Service. Typically, the **Environment Type** is **Production**. +4. In the **Environment** page, locate **24/7 Service Guard**.![](./static/2-24-7-service-guard-for-app-dynamics-01.png) +5. In **24/7 Service Guard**, click **Add Service Verification**, and then click **AppDynamics**.![](./static/2-24-7-service-guard-for-app-dynamics-02.png)The **AppDynamics** dialog appears.![](./static/2-24-7-service-guard-for-app-dynamics-03.png) + + +### Step 2: Display Name + +Enter a name that will identify this service on the **Continuous Verification** dashboard. Use a name that indicates the Environment and monitoring tool, such as **AppDynamics Prod**. + + +### Step 3: Service + +Select the Harness Service to monitor with 24/7 Service Guard. + + +### Step 4: AppDynamics Server + +Select the [AppDynamics Verification Provider](1-app-dynamics-connection-setup.md) to use. + + +### Step 5: Application Name + +Select the Application Name used by the monitoring tool. In **AppDynamics**, the applications are listed in the **Applications** tab. + +![](./static/2-24-7-service-guard-for-app-dynamics-04.png) + +### Step 6: Tier Name + +The **Tier Name** drop-down is populated with tiers from the application you selected. Pick the tier from which you want usage metrics, code exceptions, error conditions, and exit calls. In **AppDynamics**, the tiers are displayed in the **Tiers & Nodes** page. + +![](./static/2-24-7-service-guard-for-app-dynamics-05.png) +### Step 7: Algorithm Sensitivity + +Specify the sensitivity to determine what events are identified as anomalies. 
See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). + + +### Step 8: Enable 24/7 Service Guard + +Select this check box to turn on 24/7 Service Guard for AppDynamics. + +If you simply want to set up 24/7 Service Guard support, but not enable it, leave this check box empty. The dialog will now look something like this: + +![](./static/2-24-7-service-guard-for-app-dynamics-06.png) +### Step 9: Test and Save + +1. Click **Test**. Harness verifies the settings you entered. +2. Click **Submit**. + +AppDynamics is now configured for 24/7 Service Guard. + +![](./static/2-24-7-service-guard-for-app-dynamics-07.png) +### Step 10: Examine Verification Results + +To see the running 24/7 Service Guard analysis, click **Continuous Verification**. + +The 24/7 Service Guard dashboard displays the production verification results. + +![](./static/2-24-7-service-guard-for-app-dynamics-08.png)For information on using this dashboard, see the [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). + + +### Limitations + +For 24/7 Service Guard, the queries you define to collect logs are specific to the Application or Service that you want monitored. (Verification is Application/Service level.) This is unlike Workflows, where verification is performed at the host/node/pod level.
+ + +### Next Step + +* [Verify Deployments with AppDynamics](3-verify-deployments-with-app-dynamics.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/3-verify-deployments-with-app-dynamics.md b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/3-verify-deployments-with-app-dynamics.md new file mode 100644 index 00000000000..a3168544a14 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/3-verify-deployments-with-app-dynamics.md @@ -0,0 +1,192 @@ +--- +title: Verify Deployments with AppDynamics +description: Harness can analyze AppDynamics data to verify, rollback, and improve deployments. +sidebar_position: 30 +helpdocs_topic_id: ehezyvz163 +helpdocs_category_id: bpoqe48x7r +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness can analyze AppDynamics data to verify, rollback, and improve your deployments. To enable this analysis, you must configure AppDynamics as a verification step in a Harness Workflow. + +### Before You Begin + +* Add AppDynamics as a Harness [Verification Provider](1-app-dynamics-connection-setup.md). + + +### Limitations + +Add the AppDynamics Verification Provider to your Workflow only *after* you have run at least one successful Workflow deployment. This enables the AppDynamics integration to obtain the names of the host(s) or container(s) where your service is deployed. + +See [general limitations](https://docs.harness.io/article/9dvxcegm90-variables#limitations) on the Harness variable expressions that are presented as options below. + + +### Step 1: Add Verification Step + +To add an AppDynamics verification step to your Workflow: + +1. In your Workflow, under **Verify Service**, click **Add Verification**. +2. In the resulting **Add Step** settings, select **Performance Monitoring** > **AppDynamics**. + + ![](./static/3-verify-deployments-with-app-dynamics-14.png) + +3. 
Click **Next**. The **Configure** **AppDynamics** settings appear, ready for you to configure in the steps below. + + ![](./static/3-verify-deployments-with-app-dynamics-15.png) + + +### Step 2: AppDynamics Server + +In the **AppDynamics Server** drop-down, select the server you added when you set up your [AppDynamics Verification Provider](1-app-dynamics-connection-setup.md). + +You can also enter variable expressions, such as: `${serviceVariable.appd_QA}`. Do not use hyphens (dashes) in variable expressions; also, see other [limitations on Harness variables](https://docs.harness.io/article/9dvxcegm90-variables#limitations). + +If the **AppDynamics Server** field contains an expression, the **Application Name** and **Tier Name** fields must also use an expression. [Templatized](#templatize) fields cannot take variable expressions, and cannot be edited. Using expressions or templatization will disable the **Test** button. +### Step 3: Application Name + +This field's drop-down list is populated with the applications available on the AppDynamics server you selected. Select an application from the list. In **AppDynamics**, the applications are listed in the **Applications** tab. + +![](./static/3-verify-deployments-with-app-dynamics-16.png) + +You can also enter variable expressions, such as: `${app.name}`. + +Do not use hyphens (dashes) in variable expressions. See other [limitations on Harness variables](https://docs.harness.io/article/9dvxcegm90-variables#limitations). If the **AppDynamics Server** field contains an expression, the **Application Name** and **Tier Name** fields must also use an expression. [Templatized](#templatize) fields cannot take variable expressions, and cannot be edited. Using expressions or templatization will disable the **Test** button. +### Step 4: Tier Name + +The field's drop-down list is populated with tiers from the AppDynamics application you selected.
Pick the tier from which you want usage metrics, code exceptions, error conditions, and exit calls. In **AppDynamics**, the tiers are displayed in the **Tiers & Nodes** page. + +![](./static/3-verify-deployments-with-app-dynamics-17.png) + +You can also enter variable expressions, such as: `${service.name}`. + +Do not use hyphens (dashes) in variable expressions. See other [limitations on Harness variables](https://docs.harness.io/article/9dvxcegm90-variables#limitations). If the **Application Name** field contains an expression, the **Tier Name** field must also use an expression. [Templatized](#templatize) fields cannot take variable expressions, and cannot be edited. Using expressions or templatization will disable the **Test** button. For PCF deployments, application tiers will match tier information from the [application manifest file](https://docs.harness.io/article/3ekpbmpr4e-adding-and-editing-inline-pcf-manifest-files). + + +### Step 5: Expression for Host/Container Name + +Any expression that you enter in this field should resolve to a host/container name in your deployment environment. By default, the expression is `${instance.host.hostName}`. + +For most use cases, you can leave this field empty, to apply the default. However, if you want to add a prefix or suffix, enter an expression as outlined here. If you begin typing an expression into the field, the field provides expression assistance. + +![](./static/3-verify-deployments-with-app-dynamics-18.png) + +For AWS EC2 hostnames, use the expression `${instance.hostName}`. + +#### Do Not Use Reuse Node Name + +When the AppDynamics [Reuse Node Name property](https://docs.appdynamics.com/display/PRO45/Java+Agent+Configuration+Properties#JavaAgentConfigurationProperties-reusenodenameReuseNodeName) is set to true, it reuses node names in AppDynamics. As a result, you don't need to supply a node name, but you do need to provide a node name prefix using -Dappdynamics.agent.reuse.nodeName.prefix.
+ +For verifying Harness deployments, we suggest you refrain from using the Reuse Node Name property unless it is absolutely needed. This property made sense in the past, but with Kubernetes and Docker deployments Harness always uses a unique identifier for the pod/node, and that is sufficient. + +On AppDynamics, the node retention settings of these nodes can be adjusted to delete older instances. + +#### Find Node Names in AppDynamics + +Harness does not support AppDynamics dynamic, or reuse, node names. To find the node names in **AppDynamics**, do the following: + +1. Click **Applications**. +2. Click the name of your application. +3. Click **Application Dashboard**, and then click **Dashboard**. + + ![](./static/3-verify-deployments-with-app-dynamics-19.png) + +4. Change your display to **Application Flow Map**. + + ![](./static/3-verify-deployments-with-app-dynamics-20.png) + +5. Click a node in your Application Flow Map to display its details. + + ![](./static/3-verify-deployments-with-app-dynamics-21.png) + +6. In the details, click the **Nodes** tab. + + ![](./static/3-verify-deployments-with-app-dynamics-22.png) + +#### Expression for AWS EC2 Tags + +In some cases, hosts are provisioned using a dynamic naming format that uses prefixes/suffixes or some other convention that makes it difficult to identify them consistently. + +In these cases, tagging hosts and using EC2 tag names to identify them can be a successful workaround. + +You can use the Harness expression `${aws.tags.find(host.ec2Instance.tags, '[tag_name]')}` to locate the hosts by EC2 tag name. + +For example, `${aws.tags.find(host.ec2Instance.tags, 'Project')}`: + +![](./static/3-verify-deployments-with-app-dynamics-23.png) + +#### Expressions for Tanzu Application Service (formerly PCF) Hosts + +You can use expressions for some CF [Environment variables](https://docs.cloudfoundry.org/devguide/deploy-apps/environment-variable.html#view-env).
For example, you might enter an expression like: + + +``` +${host.pcfElement.displayName}_${host.pcfElement.instanceIndex} +``` +...which could yield something like: `harness-example_1`, where the `displayName` is `harness-example` and `instanceIndex` is `1`. + +![](./static/3-verify-deployments-with-app-dynamics-24.png) + +See [PCF Built-in Variables](https://docs.harness.io/article/ojd73hseby-pcf-built-in-variables). + +When you are setting up the Workflow for the first time, Harness will *not* be able to help you create an expression, because there has not been a host/container deployed yet. This is another reason why Harness recommends adding the **Verify Step** *after* you have done one successful deployment. +### Step 6: Analysis Time Duration + +Use the **Analysis Time Duration** field to set the duration for the verification step. If a verification step exceeds the value, the Workflow's [Failure Strategy](../../model-cd-pipeline/workflows/workflow-configuration.md#failure-strategy) is triggered. + +For example, if the Failure Strategy is **Ignore**, then the verification state is marked **Failed**, but the Workflow execution continues. For details, see [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). + + +### Step 7: Baseline for Risk Analysis + +To select among the options available on this drop-down list, see [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). + + +### Step 8: Algorithm Sensitivity + +Use this drop-down list to specify the sensitivity of the failure criteria. When the criteria are met, the Workflow's **Failure Strategy** is triggered. For details about the options, see [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md).
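+ +The host/container expressions above, such as `${instance.host.hostName}` or `${host.pcfElement.displayName}`, are dotted-path lookups into the deployment context. The resolver below is not Harness's implementation, just an illustrative sketch of how such placeholders expand against nested context data: + +``` +import re +from typing import Any, Mapping + +def resolve(expression: str, context: Mapping[str, Any]) -> str: +    """Expand ${a.b.c} placeholders by walking nested mappings in `context`.""" +    def lookup(match: re.Match) -> str: +        value: Any = context +        for part in match.group(1).split("."): +            value = value[part]  # raises KeyError if the path does not exist +        return str(value) +    return re.sub(r"\$\{([^}]+)\}", lookup, expression) + +ctx = {"host": {"pcfElement": {"displayName": "harness-example", "instanceIndex": 1}}} +resolve("${host.pcfElement.displayName}_${host.pcfElement.instanceIndex}", ctx) +# → "harness-example_1" +``` + +This also shows why an unresolvable expression fails verification setup: if the deployed context has no matching path (for example, before the first successful deployment), there is nothing for the placeholder to expand to.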
+ + +### Step 9: Include Instances from Previous Phases + +If you are using this verification step in a multi-phase deployment, select this checkbox to include instances used in previous phases when collecting data. + +Do not apply this setting to the first phase in a multi-phase deployment. +### Step 10: Execute with Previous Steps + +Select this checkbox to run this verification step in parallel with the previous steps in the Workflow's **Verify Service**. + + +### Step 11: Test and Save + +To finish configuring this AppDynamics verification step: + +1. Click **Test**. Harness verifies the settings you entered. +2. When testing is successful, click **Submit**. This AppDynamics verification step is now configured. + +The **Test** button will be disabled if any of the **AppDynamics Server**, **Application Name**, and/or **Tier Name** fields contain [templatized values](templatize-app-dynamics-verification.md) or variable expressions. This is because Harness can't test the abstract values. As a workaround, you can fill these fields with static values from their drop-down lists, then click **Test** to verify all the static values, and then swap in your intended variable expressions before clicking **Submit**. + + +### Step 12: View Verification Results + +Once you have executed the Workflow, Harness performs the verification you configured and displays the results in the **Deployments** and **Continuous Verification** pages. Verification is executed in real time, quantifying the business impact of every production deployment. + +For a quick overview of the verification UI elements, see [Continuous Verification Tools](https://docs.harness.io/article/xldc13iv1y-meet-harness#continuous_verification_tools). For details about viewing and interpreting verification results, see [Verification Results Overview](../continuous-verification-overview/concepts-cv/deployment-verification-results.md). 
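+ +The `${aws.tags.find(host.ec2Instance.tags, 'Project')}` expression described under **Expression for AWS EC2 Tags** is essentially a key lookup over the instance's tag list. A short illustrative sketch (not Harness code), using the capitalized `Key`/`Value` pair shape that the EC2 DescribeInstances API returns: + +``` +from typing import Optional + +def find_tag(tags: list, name: str) -> Optional[str]: +    """Return the value of the EC2 tag whose Key matches `name`, if any.""" +    for tag in tags: +        if tag.get("Key") == name: +            return tag.get("Value") +    return None + +# Hypothetical tag list, as returned for one EC2 instance: +ec2_tags = [ +    {"Key": "Name", "Value": "web-7f3a"}, +    {"Key": "Project", "Value": "checkout"}, +] +find_tag(ec2_tags, "Project") +# → "checkout" +``` + +Because the lookup is by tag key rather than by hostname, it stays stable even when instance names carry dynamic prefixes or suffixes.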
+ +### Harness Expression Support in CV Settings + +You can use expressions (`${...}`) for [Harness built-in variables](https://docs.harness.io/article/7bpdtvhq92-workflow-variables-expressions) and custom [Service](../../model-cd-pipeline/setup-services/service-configuration.md) and [Workflow](../../model-cd-pipeline/workflows/add-workflow-variables-new-template.md) variables in the settings of Harness Verification Providers. + +Here is an example using Dynatrace, but it applies to all providers. + +![](./static/3-verify-deployments-with-app-dynamics-25.png) + +Expression support lets you template your Workflow verification steps. You can add custom expressions for settings, and then provide values for those settings at deployment runtime. Or you can use Harness built-in variable expressions and Harness will provide values at deployment runtime automatically. + + +### Next Steps + +* [Templatize AppDynamics Verification](templatize-app-dynamics-verification.md) +* [Set AppDynamics Environment Variables](app-dynamics-environment-variables.md) +* [AppDynamics as a Custom APM](../custom-metrics-and-logs-verification/connect-to-app-dynamics-as-a-custom-apm.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/_category_.json b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/_category_.json new file mode 100644 index 00000000000..fdedcdfc74c --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "AppDynamics Verification", + "position": 30, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "AppDynamics Verification" + }, + "customProps": { + "helpdocs_category_id": "bpoqe48x7r" + } +} \ No newline at end of file diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/app-dynamics-environment-variables.md b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/app-dynamics-environment-variables.md new file mode 100644 index 00000000000..31b9799ebfd --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/app-dynamics-environment-variables.md @@ -0,0 +1,48 @@ +--- +title: Set AppDynamics Environment Variables +description: Set environment variables in the Docker Image you deploy, or in the Harness Service that uses this image. +sidebar_position: 50 +helpdocs_topic_id: e1qar9w373 +helpdocs_category_id: bpoqe48x7r +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic covers how to set required environment variables to monitor Java applications in the AppDynamics Controller. + + +### Before You Begin + +* Add AppDynamics as a Harness [Verification Provider](1-app-dynamics-connection-setup.md). +* Add an AppDynamics [verification step](3-verify-deployments-with-app-dynamics.md) to a Workflow. + + +### Step 1: Install the Java Agent + +To monitor Java applications in the AppDynamics Controller, you must install the AppDynamics Java Agent on each server that hosts applications to be monitored. The Java Agent requires that certain environment variables be set. + + +### Option: Set Variables in Artifact + +For a Docker Image artifact, you can include the Java Agent in the Docker Image you deploy, and set these environment variables in the artifact. You can do this using a `controller-info.xml` file, such as [this one located on GitHub](https://github.com/Appdynamics/appdynamics-openshift-quickstart/blob/master/AppServerAgent/conf/controller-info.xml). + + +### Option: Set Variables in Service + +You can also set these variables in the Harness Service that is using the Docker Image. 
Here is an example of a Harness Service containing the environment variables as Config Variables. + +![](./static/app-dynamics-environment-variables-00.png) + +#### Identifying Environment Variables + +For a list of the required environment variables, see [Use Environment Variables for Java Agent Settings](https://docs.appdynamics.com/display/PRO42/Use+Environment+Variables+for+Java+Agent+Settings) from AppDynamics. You might also include the `JAVA_OPTS` variable to add the Java Agent path to `JAVA_OPTS`. + +The Config Variables in the Harness Service can be overwritten by the Harness Environment [Service Overrides](../../model-cd-pipeline/environments/environment-configuration.md#override-a-service-configuration). +### Limitations + +Do not hard-code the node name (`APPDYNAMICS_AGENT_NODE_NAME`) in any environment variables. Doing so will prevent certain deployment features—such as Canary and Blue/Green strategies, and rollback—from executing. +### Next Steps + +* [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code) +* [AppDynamics as a Custom APM](../custom-metrics-and-logs-verification/connect-to-app-dynamics-as-a-custom-apm.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/1-app-dynamics-connection-setup-13.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/1-app-dynamics-connection-setup-13.png new file mode 100644 index 00000000000..9e3fcdf13e6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/1-app-dynamics-connection-setup-13.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-01.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-01.png new file mode 100644 index 
00000000000..a0b9a6cf372 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-01.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-02.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-02.png new file mode 100644 index 00000000000..e201dc2830c Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-02.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-03.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-03.png new file mode 100644 index 00000000000..e1f920db606 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-03.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-04.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-04.png new file mode 100644 index 00000000000..e91bdbe6d00 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-04.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-05.png 
b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-05.png new file mode 100644 index 00000000000..9a13ab6d5b1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-05.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-06.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-06.png new file mode 100644 index 00000000000..6dc8935e9d1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-06.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-07.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-07.png new file mode 100644 index 00000000000..76e817bd29f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-07.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-08.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-08.png new file mode 100644 index 00000000000..731babc60a5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/2-24-7-service-guard-for-app-dynamics-08.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-14.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-14.png new file mode 100644 index 00000000000..5ad34badd4b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-14.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-15.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-15.png new file mode 100644 index 00000000000..33573effbe4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-15.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-16.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-16.png new file mode 100644 index 00000000000..33bbb050208 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-16.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-17.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-17.png new file mode 100644 index 00000000000..5f489bc710a Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-17.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-18.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-18.png new file mode 100644 index 00000000000..17f2a7a588b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-18.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-19.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-19.png new file mode 100644 index 00000000000..8a9249dfceb Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-19.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-20.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-20.png new file mode 100644 index 00000000000..2a59b4f9c3e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-20.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-21.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-21.png new file mode 100644 
index 00000000000..1166c4b0535 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-21.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-22.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-22.png new file mode 100644 index 00000000000..e5a5f84ae35 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-22.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-23.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-23.png new file mode 100644 index 00000000000..a0d97ae5fd0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-23.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-24.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-24.png new file mode 100644 index 00000000000..f628c3698ff Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-24.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-25.png 
b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-25.png new file mode 100644 index 00000000000..655f0aa31bd Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/3-verify-deployments-with-app-dynamics-25.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/app-dynamics-environment-variables-00.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/app-dynamics-environment-variables-00.png new file mode 100644 index 00000000000..9b48a0bad4e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/app-dynamics-environment-variables-00.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/templatize-app-dynamics-verification-09.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/templatize-app-dynamics-verification-09.png new file mode 100644 index 00000000000..429c93457e3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/templatize-app-dynamics-verification-09.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/templatize-app-dynamics-verification-10.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/templatize-app-dynamics-verification-10.png new file mode 100644 index 00000000000..4f96f5371d7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/templatize-app-dynamics-verification-10.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/templatize-app-dynamics-verification-11.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/templatize-app-dynamics-verification-11.png new file mode 100644 index 00000000000..56f14c145b0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/templatize-app-dynamics-verification-11.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/templatize-app-dynamics-verification-12.png b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/templatize-app-dynamics-verification-12.png new file mode 100644 index 00000000000..5a277f17055 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/static/templatize-app-dynamics-verification-12.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/templatize-app-dynamics-verification.md b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/templatize-app-dynamics-verification.md new file mode 100644 index 00000000000..e740c914cb3 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/appdynamics-verification/templatize-app-dynamics-verification.md @@ -0,0 +1,56 @@ +--- +title: Templatize AppDynamics Verification +description: Templatize settings within an AppDynamics verification step, to provide values at your Workflow's or Pipelines' runtime. 
+sidebar_position: 40 +helpdocs_topic_id: yyi1c69jeq +helpdocs_category_id: bpoqe48x7r +helpdocs_is_private: false +helpdocs_is_published: true +--- + +By templatizing certain settings in an AppDynamics verification step, you can use that verification step in a Workflow (and in multiple Pipelines) without having to provide settings until runtime. + +### Before You Begin + +* Add AppDynamics as a Harness [Verification Provider](1-app-dynamics-connection-setup.md). +* Add an AppDynamics [verification step](3-verify-deployments-with-app-dynamics.md) to a Workflow. + + +### Step 1: Templatize Settings + +You templatize settings by clicking the **[T]** icon next to the setting. + +![](./static/templatize-app-dynamics-verification-09.png) + +The settings are replaced by [Workflow variables](../../model-cd-pipeline/workflows/workflow-configuration.md#add-workflow-variables): + +![](./static/templatize-app-dynamics-verification-10.png) + +You will now see them in the **Workflow Variables** section of the Workflow: + +![](./static/templatize-app-dynamics-verification-11.png) +### Step 2: Deploy a Templatized Workflow + +When you deploy the Workflow, **Start New Deployment** prompts you to enter values for templatized settings: + +![](./static/templatize-app-dynamics-verification-12.png) + +You can select the necessary settings and deploy the Workflow. + + +### Option: Trigger Variables + +You can also use a Trigger to pass variables and set Workflow values. For details, see [Passing Variables into Workflows and Pipelines from Triggers](../../model-cd-pipeline/expressions/passing-variable-into-workflows.md). + + +### Limitations + +* When templatized, fields cannot be edited. +* If any of the fields within the **Configure AppDynamics** settings contain templatized values (or variable expressions), the settings' **Test** button is disabled. This is because Harness can't test the abstract values.
As a workaround, you can fill these fields with static values from their drop-down lists, click **Test** to verify all the static values, and then swap in your intended variable expressions before clicking **Submit**. + + +### Next Steps + +* [Set AppDynamics Environment Variables](app-dynamics-environment-variables.md) +* [AppDynamics as a Custom APM](../custom-metrics-and-logs-verification/connect-to-app-dynamics-as-a-custom-apm.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/1-bugsnag-connection-setup.md b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/1-bugsnag-connection-setup.md new file mode 100644 index 00000000000..70bdb8f3387 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/1-bugsnag-connection-setup.md @@ -0,0 +1,74 @@ +--- +title: Connect to Bugsnag +description: Connect Harness to Bugsnag and verify the success of your deployments and live microservices. +sidebar_position: 10 +helpdocs_topic_id: dml2vk0ec3 +helpdocs_category_id: zfre1xei7u +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The first step in using Bugsnag with Harness is to set up a Bugsnag Verification Provider in Harness. + +A Harness Verification Provider is a connection to monitoring tools such as Bugsnag. Once Harness is connected, you can use Harness 24/7 Service Guard and Deployment Verification with your Bugsnag data and analysis. + +### Before You Begin + +* See the [Bugsnag Verification Overview](../continuous-verification-overview/concepts-cv/bugsnag-verification-overview.md). + +### Step 1: Add Bugsnag Verification Provider + +To add Bugsnag as a verification provider, do the following: + +1. Click **Setup**. +2. Click **Connectors**, and then click **Verification Providers**. +3. Click **Add Verification Provider**, and select **Bugsnag**. The **Add Bugsnag Verification Provider** dialog for your provider appears.
+ + ![](./static/1-bugsnag-connection-setup-00.png) + +The **Add Bugsnag Verification Provider** dialog has the following fields. + +### Step 2: Bugsnag URL + +Enter **https://api.bugsnag.com/**. This is the URL for the Bugsnag API. + +### Step 3: Encrypted Auth Token + +For secrets and other sensitive settings, select or create a new [Harness Encrypted Text secret](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets). Enter the **Data Access API** personal auth token for your Bugsnag account. Here is how you get the token in Bugsnag: + +1. In **Bugsnag**, click your account icon in the top-right of any page, and then click **Settings**. + + [![](./static/1-bugsnag-connection-setup-01.png)](./static/1-bugsnag-connection-setup-01.png) + +2. In the **My account** page, next to **Data Access API**, click **Personal auth tokens**. + + [![](./static/1-bugsnag-connection-setup-03.png)](./static/1-bugsnag-connection-setup-03.png) + +3. In **Personal auth tokens**, click **GENERATE NEW TOKEN**. + + [![](./static/1-bugsnag-connection-setup-05.png)](./static/1-bugsnag-connection-setup-05.png) + + The **Generate new auth token** dialog appears. + + [![](./static/1-bugsnag-connection-setup-07.png)](./static/1-bugsnag-connection-setup-07.png) + +4. In **Token description**, enter a name, such as **Harness**, and click **GENERATE**. The auth token is generated. + + [![](./static/1-bugsnag-connection-setup-09.png)](./static/1-bugsnag-connection-setup-09.png) + +5. Click **Copy to clipboard**. +6. In **Harness**, in the **Add Bugsnag Verification Provider** dialog, paste the token in the **Auth Token** field. + +### Step 4: Display Name + +The name for this Bugsnag verification provider connection in Harness. If you will have multiple Bugsnag connections, enter a unique name. You will use this name to select this connection when integrating Bugsnag with the **Verify Steps** of your workflows, described below.
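Before pasting the token into Harness, you can sanity-check how it will be presented to the Bugsnag API. The sketch below is illustrative only: the `/user/organizations` path and the `X-Version` header are assumptions based on Bugsnag's public Data Access API conventions, and `YOUR_AUTH_TOKEN` is a placeholder. It builds an authenticated request without sending it:

```python
import urllib.request

def bugsnag_request(path: str, auth_token: str) -> urllib.request.Request:
    """Build (but do not send) a Bugsnag Data Access API request.

    Illustrative only: the "Authorization: token <value>" scheme and the
    X-Version header follow Bugsnag's documented API conventions.
    """
    return urllib.request.Request(
        url="https://api.bugsnag.com" + path,
        headers={
            "Authorization": f"token {auth_token}",
            "X-Version": "2",  # Data Access API version (assumed)
        },
    )

req = bugsnag_request("/user/organizations", "YOUR_AUTH_TOKEN")
print(req.get_header("Authorization"))  # → token YOUR_AUTH_TOKEN
```

Sending this request with a valid token (for example, via `urllib.request.urlopen`) should return your organizations, which confirms the token works before you store it as a Harness secret.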
+ +### Step 5: Usage Scope + +Usage scope is inherited from the secret used in **Encrypted Auth Token**. + +### Next Steps + +* [Monitor Applications 24/7 with Bugsnag](2-24-7-service-guard-for-bugsnag.md) +* [Verify Deployments with Bugsnag](3-verify-deployments-with-bugsnag.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/2-24-7-service-guard-for-bugsnag.md b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/2-24-7-service-guard-for-bugsnag.md new file mode 100644 index 00000000000..384e3705741 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/2-24-7-service-guard-for-bugsnag.md @@ -0,0 +1,89 @@ +--- +title: Monitor Applications 24/7 with Bugsnag +description: Combined with Bugsnag, Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. +sidebar_position: 20 +helpdocs_topic_id: 4usahbjhp6 +helpdocs_category_id: zfre1xei7u +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. For more information, see [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). + +You can add your Bugsnag monitoring to Harness 24/7 Service Guard in your Harness Application Environment. For a setup overview, see . + +This section assumes you have a Harness Application set up and containing a Service and Environment. For steps on setting up a Harness Application, see [Application Checklist](../../model-cd-pipeline/applications/application-configuration.md). + +### Before You Begin + +* See the [Bugsnag Verification Overview](../continuous-verification-overview/concepts-cv/bugsnag-verification-overview.md). +* See [Connect to Bugsnag](1-bugsnag-connection-setup.md). 
+ +### Step 1: Set Up 24/7 Service Guard for Bugsnag + +To set up 24/7 Service Guard for Bugsnag, do the following: + +1. Ensure that you have added Bugsnag as a Harness Verification Provider, as described in [Connect to Bugsnag](1-bugsnag-connection-setup.md). +2. In your Harness Application, ensure that you have added a Service, as described in [Services](../../model-cd-pipeline/setup-services/service-configuration.md). For 24/7 Service Guard, you do not need to add an Artifact Source to the Service, or configure its settings. You simply need to create a Service and name it. It will represent your application for 24/7 Service Guard. +3. In your Harness Application, click **Environments**. +4. In **Environments**, ensure that you have added an Environment for the Service you added. For steps on adding an Environment, see [Environments](../../model-cd-pipeline/environments/environment-configuration.md). +5. Click the Environment for your Service. Typically, the **Environment Type** is **Production**. +6. In the **Environment** page, locate **24/7 Service Guard**. + + [![](./static/2-24-7-service-guard-for-bugsnag-23.png)](./static/2-24-7-service-guard-for-bugsnag-23.png) + + +7. In **24/7 Service Guard**, click **Add Service Verification**, and then click **Bugsnag**. ![](./static/2-24-7-service-guard-for-bugsnag-25.png) + + The **Bugsnag** dialog appears. + + ![](./static/2-24-7-service-guard-for-bugsnag-26.png) + +Fill out the dialog. The **Bugsnag** dialog has the following fields. + +For 24/7 Service Guard, the queries you define to collect logs are specific to the application or service you want monitored. Verification is application/service level. This is unlike Workflows, where verification is performed at the host/node/pod level. + +### Step 2: Display Name + +The name that will identify this service on the Continuous Verification dashboard. Use a name that indicates the environment and monitoring tool, such as Bugsnag.
+ +### Step 3: Service + +The Harness Service to monitor with 24/7 Service Guard. + +### Step 4: Bugsnag Server + +Select the Bugsnag verification provider you added to Harness, as described above. Harness will immediately use the connection to obtain organization and project information from Bugsnag. + +### Step 5: Organization ID + +Select the **Organization ID** for your Bugsnag account. In Bugsnag, this is the **Organization name** in the account's **Organization** page: + +[![](./static/2-24-7-service-guard-for-bugsnag-27.png)](./static/2-24-7-service-guard-for-bugsnag-27.png) + +### Step 6: Project ID + +Select the Project ID for the Bugsnag project you want to use. In Bugsnag, this is the **Project Name** in the **Projects** page: + +[![](./static/2-24-7-service-guard-for-bugsnag-29.png)](./static/2-24-7-service-guard-for-bugsnag-29.png) + +### Step 7: Release Stage + +Enter the Bugsnag [release stage](https://docs.bugsnag.com/product/releases/#configuring-the-release-stage), if necessary. + +### Step 8: Search Keywords + +The keywords to search, such as `*exception*`. + +### Step 9: Browser Application + +Click the checkbox to have Harness ignore host/node events and focus on the browser events Bugsnag captures. + +### Step 10: Baseline + +Select the baseline time unit for monitoring. For example, if you select **For 4 hours**, Harness will collect the logs for the last 4 hours as the baseline for comparisons with future logs. If you select **Custom Range**, you can enter a **Start Time** and **End Time**.
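To illustrate how a wildcard keyword such as `*exception*` selects log events, here is a minimal glob-matching sketch. Treat it as an analogy only: the log lines are invented, and Harness's actual server-side matching may differ from Python's `fnmatch` semantics.

```python
from fnmatch import fnmatch

# Invented sample log lines for illustration.
log_lines = [
    "2023-01-01 INFO  request served in 12ms",
    "2023-01-01 ERROR NullPointerException in OrderService",
    "2023-01-01 WARN  retrying connection",
]

# Glob-style keyword, as entered in the Search Keywords field.
keyword = "*exception*"

# Lowercase before matching so the comparison is case-insensitive.
matches = [line for line in log_lines if fnmatch(line.lower(), keyword)]
print(matches)
# → ['2023-01-01 ERROR NullPointerException in OrderService']
```

The surrounding `*` wildcards mean the keyword matches anywhere in a log line, so broad patterns like `*exception*` capture any exception-bearing event.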
+ +### Next Steps + +* [Verify Deployments with Bugsnag](3-verify-deployments-with-bugsnag.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/3-verify-deployments-with-bugsnag.md b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/3-verify-deployments-with-bugsnag.md new file mode 100644 index 00000000000..47ea98baf3d --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/3-verify-deployments-with-bugsnag.md @@ -0,0 +1,149 @@ +--- +title: Verify Deployments with Bugsnag +description: Harness can analyze Bugsnag data and analysis to verify, rollback, and improve deployments. +sidebar_position: 30 +helpdocs_topic_id: 2tfwoxl1dj +helpdocs_category_id: zfre1xei7u +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The following procedure describes how to add Bugsnag as a verification step in a Harness workflow. For more information about workflows, see [Add a Workflow](../../model-cd-pipeline/workflows/workflow-configuration.md). + +Once you run a deployment and Bugsnag performs verification, Harness' machine-learning verification analysis will assess the risk level of the deployment. + +To obtain useful comparison data, the verification provider should be added to your workflow **after** you have run at least one successful deployment. + +## Before You Begin + +* See the [Bugsnag Verification Overview](../continuous-verification-overview/concepts-cv/bugsnag-verification-overview.md). +* See [Connect to Bugsnag](1-bugsnag-connection-setup.md). + +## Visual Summary + +Here's an example Bugsnag deployment verification configuration. + +![](./static/3-verify-deployments-with-bugsnag-11.png) + +## Step 1: Set Up the Deployment Verification + +To verify your deployment with Bugsnag, do the following: + +1. 
Ensure that you have added Bugsnag as a verification provider, as described in [Connect to Bugsnag](1-bugsnag-connection-setup.md). +2. In your workflow, under **Verify Service**, click **Add Verification**. +3. In the resulting **Add Step** settings, select **Log Analysis** > **Bugsnag**. + + ![](./static/3-verify-deployments-with-bugsnag-12.png) + +4. Click **Next**. The **Configure Bugsnag** settings appear. + + ![](./static/3-verify-deployments-with-bugsnag-13.png) + +These settings include the following fields. + +## Step 2: Bugsnag Server + +Select the Bugsnag verification provider you added to Harness, as described above. Harness will immediately use the connection to obtain organization and project information from Bugsnag. + +## Step 3: Organization ID + +Select the **Organization ID** for your Bugsnag account. In Bugsnag, this is the **Organization name** in the account's **Organization** page: + +[![](./static/3-verify-deployments-with-bugsnag-14.png)](./static/3-verify-deployments-with-bugsnag-14.png) + +## Step 4: Project ID + +Select the Project ID for the Bugsnag project you want to use. In Bugsnag, this is the **Project Name** in the **Projects** page: + +[![](./static/3-verify-deployments-with-bugsnag-16.png)](./static/3-verify-deployments-with-bugsnag-16.png) + +## Step 5: Release Stage + +Enter the Bugsnag  [release stage](https://docs.bugsnag.com/product/releases/#configuring-the-release-stage), if necessary. + +## Step 6: Browser Application + +Click the checkbox to have Harness ignore host/node deployment events and focus on the browser events Bugsnag captures. + +## Step 7: Expression for Host/Container name + +If you do not enable the **Browser Application** checkbox, Harness will use the host/node/container event data Bugsnag captures. Add a variable that evaluates to the hostname value in the **host** field of event messages. 
For example, in a Bugsnag message in a Harness deployment verification, if you look at an event message, you will see a **hosts** field: + +[![](./static/3-verify-deployments-with-bugsnag-18.png)](./static/3-verify-deployments-with-bugsnag-18.png) + +Next, look in the JSON for the host/container/pod in the deployment environment and identify the label containing the same hostname. The path to that label is what the expression should be in **Expression for Host/Container name**. The default variable is **${instance.host.hostName}**. In most cases, this expression will work. + +For AWS EC2 hostnames, use the expression `${instance.hostName}`. + +## Step 8: Analysis Time Duration + +Set the duration for the verification step. If a verification step exceeds the value, the workflow [Failure Strategy](../../model-cd-pipeline/workflows/workflow-configuration.md#failure-strategy) is triggered. For example, if the Failure Strategy is **Ignore**, then the verification state is marked **Failed** but the workflow execution continues. + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). + +## Step 9: Baseline for Risk Analysis + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). + +## Step 10: Include instances from previous phases + +If you are using this verification step in a multi-phase deployment, select this checkbox to include instances used in previous phases when collecting data. Do not apply this setting to the first phase in a multi-phase deployment. + +## Step 11: Execute with previous steps + +Check this checkbox to run this verification step in parallel with the previous steps in **Verify Service**.
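The `${...}` expressions above resolve a dotted path against deployment context data. Here is a minimal sketch of that kind of resolution. It is illustrative only: the context structure is invented for the example, and Harness resolves these expressions internally with its own engine.

```python
import re

def resolve(expression: str, context: dict) -> str:
    """Replace each ${a.b.c} in the expression with the value found by
    walking the dotted path through nested dicts in the context."""
    def lookup(match: re.Match) -> str:
        value = context
        for part in match.group(1).split("."):
            value = value[part]  # KeyError if the path does not exist
        return str(value)
    return re.sub(r"\$\{([^}]+)\}", lookup, expression)

# Invented context shaped like the default ${instance.host.hostName} path.
context = {"instance": {"host": {"hostName": "ip-10-0-1-23"}}}
print(resolve("${instance.host.hostName}", context))  # → ip-10-0-1-23
```

This is why the expression you enter must mirror the path to the hostname label in the event JSON: the resolver simply walks that path and substitutes whatever it finds.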
+ +## Review: Harness Expression Support in CV Settings + +You can use expressions (`${...}`) for [Harness built-in variables](https://docs.harness.io/article/7bpdtvhq92-workflow-variables-expressions) and custom [Service](../../model-cd-pipeline/setup-services/service-configuration.md) and [Workflow](../../model-cd-pipeline/workflows/add-workflow-variables-new-template.md) variables in the settings of Harness Verification Providers. + +![](./static/3-verify-deployments-with-bugsnag-20.png) + +Expression support lets you template your Workflow verification steps. You can add custom expressions for settings, and then provide values for those settings at deployment runtime. Or you can use Harness built-in variable expressions and Harness will provide values at deployment runtime automatically. + +## Step 12: View Verification Results + +Once you have deployed your workflow (or pipeline) using the Bugsnag verification step, you can automatically verify cloud application and infrastructure performance across your deployment. For more information, see [Add a Workflow](../../model-cd-pipeline/workflows/workflow-configuration.md) and [Add a Pipeline](../../model-cd-pipeline/pipelines/pipeline-configuration.md). + +### Workflow Verification + +To see the results of Harness machine-learning evaluation of your Bugsnag verification, in your workflow or pipeline deployment you can expand the **Verify Service** step and then click the **Bugsnag** step. + +![](./static/3-verify-deployments-with-bugsnag-21.png) + +### Continuous Verification + +You can also see the evaluation in the **Continuous Verification** dashboard. The workflow verification view is for the DevOps user who developed the workflow. The **Continuous Verification** dashboard is where all future deployments are displayed for developers and others interested in deployment analysis. 
+ +![](./static/3-verify-deployments-with-bugsnag-22.png) + +To learn about the verification analysis features, see the following sections. + +#### Deployments + +* **Deployment info:** See the verification analysis for each deployment, with information on its service, environment, pipeline, and workflows. +* **Verification phases and providers:** See the verification phases for each verification provider. Click each provider for logs and analysis. +* **Verification timeline:** See when each deployment and verification was performed. + +#### Transaction Analysis + +* **Execution details:** See the details of verification execution. Total is the total time the verification step took, and Analysis duration is how long the analysis took. +* **Risk level analysis:** Get an overall risk level and view the cluster chart to see events. +* **Transaction-level summary:** See a summary of each transaction with the query string, error values comparison, and a risk analysis summary. + +#### Execution Analysis + +* **Event type:** Filter cluster chart events by Unknown Event, Unexpected Frequency, Anticipated Event, Baseline Event, and Ignore Event. +* **Cluster chart:** View the chart to see how the selected events contrast. Click each event to see its log details. + +#### Event Management + +* **Event-level analysis:** See the threat level for each event captured. +* **Tune event capture:** Remove events from analysis at the service, workflow, execution, or overall level. +* **Event distribution:** Click the chart icon to see an event distribution including the measured data, baseline data, and event frequency.
+ +## Next Steps + +* [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code) +* [Users and Permissions](https://docs.harness.io/article/ven0bvulsj-users-and-permissions) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/_category_.json b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/_category_.json new file mode 100644 index 00000000000..c41f655997d --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "Bugsnag Verification", + "position": 40, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "Bugsnag Verification" + }, + "customProps": { + "helpdocs_category_id": "zfre1xei7u" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-00.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-00.png new file mode 100644 index 00000000000..3f60839a81e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-00.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-01.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-01.png new file mode 100644 index 00000000000..3e089399d71 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-01.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-02.png 
b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-02.png new file mode 100644 index 00000000000..3e089399d71 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-02.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-03.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-03.png new file mode 100644 index 00000000000..183d435a991 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-03.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-04.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-04.png new file mode 100644 index 00000000000..183d435a991 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-04.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-05.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-05.png new file mode 100644 index 00000000000..5b4e09eb566 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-05.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-06.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-06.png new file 
mode 100644 index 00000000000..5b4e09eb566 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-06.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-07.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-07.png new file mode 100644 index 00000000000..c188d162974 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-07.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-08.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-08.png new file mode 100644 index 00000000000..c188d162974 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-08.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-09.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-09.png new file mode 100644 index 00000000000..638882b2252 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-09.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-10.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-10.png new file mode 100644 index 00000000000..638882b2252 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/1-bugsnag-connection-setup-10.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-23.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-23.png new file mode 100644 index 00000000000..a0b9a6cf372 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-23.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-24.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-24.png new file mode 100644 index 00000000000..a0b9a6cf372 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-24.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-25.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-25.png new file mode 100644 index 00000000000..b581285a3ec Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-25.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-26.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-26.png new file mode 100644 index 00000000000..22b74aeb91b Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-26.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-27.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-27.png new file mode 100644 index 00000000000..1215bb96b18 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-27.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-28.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-28.png new file mode 100644 index 00000000000..1215bb96b18 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-28.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-29.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-29.png new file mode 100644 index 00000000000..bbccc025c60 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-29.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-30.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-30.png new file mode 100644 index 00000000000..bbccc025c60 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/2-24-7-service-guard-for-bugsnag-30.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-11.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-11.png new file mode 100644 index 00000000000..1b1ba6be85a Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-11.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-12.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-12.png new file mode 100644 index 00000000000..2c9d007299e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-12.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-13.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-13.png new file mode 100644 index 00000000000..1b1ba6be85a Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-13.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-14.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-14.png new file mode 100644 index 00000000000..1215bb96b18 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-14.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-15.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-15.png new file mode 100644 index 00000000000..1215bb96b18 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-15.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-16.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-16.png new file mode 100644 index 00000000000..bbccc025c60 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-16.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-17.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-17.png new file mode 100644 index 00000000000..bbccc025c60 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-17.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-18.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-18.png new file mode 100644 index 00000000000..60326418d54 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-18.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-19.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-19.png new file mode 100644 index 00000000000..60326418d54 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-19.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-20.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-20.png new file mode 100644 index 00000000000..655f0aa31bd Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-20.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-21.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-21.png new file mode 100644 index 00000000000..28e637ce5b9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-21.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-22.png b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-22.png new file mode 100644 index 00000000000..891c2c19bdf Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/bugsnag-verification/static/3-verify-deployments-with-bugsnag-22.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/2-24-7-service-guard-for-cloud-watch.md b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/2-24-7-service-guard-for-cloud-watch.md new file mode 100644 index 00000000000..3596b954115 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/2-24-7-service-guard-for-cloud-watch.md @@ -0,0 +1,116 @@ +--- +title: Monitor Applications 24/7 with CloudWatch +description: Combined with AWS CloudWatch, Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. +sidebar_position: 30 +helpdocs_topic_id: ngeq6xckpg +helpdocs_category_id: wyuv3zocfk +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. For more information, see [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). + +You can add your CloudWatch monitoring to Harness 24/7 Service Guard in your Harness Application Environment. For a setup overview, see [Connect to CloudWatch](cloud-watch-connection-setup.md). + +This section assumes you have a Harness Application set up and containing a Service and Environment. For steps on setting up a Harness Application, see [Application Checklist](../../model-cd-pipeline/applications/application-configuration.md). You cannot configure CloudWatch 24/7 Service Guard for AWS ALB and AWS EKS. + + +### Before You Begin + +* Set up a Harness Application, containing a Service and Environment. See [Create an Application](../../model-cd-pipeline/applications/application-configuration.md). 
+* See the [CloudWatch Verification Overview](../continuous-verification-overview/concepts-cv/cloud-watch-verification-overview.md). + + +### Visual Summary + +Here's an example of a 24/7 Service Guard configuration for CloudWatch. + +![](./static/2-24-7-service-guard-for-cloud-watch-12.png) + +### Step 1: Set up 24/7 Service Guard for CloudWatch + +1. Ensure that you have added CloudWatch as a Harness Verification Provider, as described in [Connect to CloudWatch](cloud-watch-connection-setup.md). +2. In your Harness Application, ensure that you have added a Service, as described in [Services](../../model-cd-pipeline/setup-services/service-configuration.md). For 24/7 Service Guard, you do not need to add an Artifact Source to the Service, or configure its settings. You simply need to create a Service and name it. It will represent your application for 24/7 Service Guard. +3. In your Harness Application, click **Environments**. +4. In **Environments**, ensure that you have added an Environment for the Service you added. For steps on adding an Environment, see [Environments](../../model-cd-pipeline/environments/environment-configuration.md). +5. Click the Environment for your Service. Typically, the **Environment Type** is **Production**. +6. In the **Environment** page, locate **24/7 Service Guard**. + + ![](./static/2-24-7-service-guard-for-cloud-watch-13.png) + +7. In **24/7 Service Guard**, click **Add Service Verification**, and then click **CloudWatch**. The **CloudWatch** dialog appears. + + ![](./static/2-24-7-service-guard-for-cloud-watch-14.png) + +For 24/7 Service Guard, the queries you define to collect logs are specific to the application or service you want monitored. Verification is application/service level. This is unlike Workflows, where verification is performed at the host/node/pod level. +### Step 2: Display Name + +The name that will identify this service on the **Continuous Verification** dashboard. 
Use a name that indicates the environment and monitoring tool, such as **CloudWatch**. + + +### Step 3: Service + +The Harness Service to monitor with 24/7 Service Guard. + + +### Step 4: CloudWatch Server + +Select the CloudWatch Verification Provider to use. + + +### Step 5: Region + +Select the AWS region where the ECS and/or ELB are located. + + +### Step 6: ELB Metrics + +Click **Add** for each load balancer you want to monitor. For more information, see [Elastic Load Balancing Metrics and Dimensions](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/elb-metricscollected.html) from AWS. + + +### Step 7: ECS Metrics + +The **Cluster** drop-down menu contains the available ECS clusters. The **Metrics** drop-down contains the available metrics. Select the metrics to monitor. + +You can see the available metrics in CloudWatch. + +![](./static/2-24-7-service-guard-for-cloud-watch-15.png) For more information, see [Using Amazon CloudWatch Metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html) from AWS. + + +### Step 8: Lambda + +Select the Lambda function and metrics to monitor. The functions displayed are from the region you selected. Only functions that have been deployed are displayed. + + +### Step 9: Algorithm Sensitivity + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria). + + +### Step 10: Enable 24/7 Service Guard + +Enable this setting to turn on 24/7 Service Guard. If you just want to set up 24/7 Service Guard, but not enable it, leave this setting disabled. + +When you are finished, the dialog will look something like this: + +![](./static/2-24-7-service-guard-for-cloud-watch-16.png) +### Step 11: Verify Your Settings + +1. Click **TEST**. Harness verifies the settings you entered. +2. Click **SUBMIT**. The CloudWatch 24/7 Service Guard is configured.
+ +To see the running 24/7 Service Guard analysis, click **Continuous Verification**. + +![](./static/2-24-7-service-guard-for-cloud-watch-17.png) + +The 24/7 Service Guard dashboard displays the production verification results. + +![](./static/2-24-7-service-guard-for-cloud-watch-18.png) + +For information on using the dashboard, see [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). + + +### Next Steps + +* [Verify Deployments with CloudWatch](3-verify-deployments-with-cloud-watch.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/3-verify-deployments-with-cloud-watch.md b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/3-verify-deployments-with-cloud-watch.md new file mode 100644 index 00000000000..a6efb0935b3 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/3-verify-deployments-with-cloud-watch.md @@ -0,0 +1,183 @@ +--- +title: Verify Deployments with CloudWatch +description: Harness can analyze CloudWatch data to verify, rollback, and improve deployments. +sidebar_position: 10 +helpdocs_topic_id: awerepjwlc +helpdocs_category_id: wyuv3zocfk +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The following procedure describes how to add CloudWatch as a verification step in a Harness workflow. For more information about workflows, see [Add a Workflow](../../model-cd-pipeline/workflows/workflow-configuration.md). + +Once you run a deployment and CloudWatch performs verification, Harness' machine-learning verification analysis will assess the risk level of the deployment.
+ +In order to obtain the names of the host(s), pod(s), or container(s) where your service is deployed, the verification provider should be added to your workflow **after** you have run at least one successful deployment. You cannot configure CloudWatch Deployment Verification for AWS ALB and AWS EKS. + +## Before You Begin + +* Set up a Harness Application, containing a Service and Environment. See [Create an Application](../../model-cd-pipeline/applications/application-configuration.md). +* See the [CloudWatch Verification Overview](../continuous-verification-overview/concepts-cv/cloud-watch-verification-overview.md). + + +## Visual Summary + +Here's an example of a CloudWatch setup for verification. + +![](./static/3-verify-deployments-with-cloud-watch-00.png) + + +## Step 1: Set up the Deployment Verification + +1. Ensure that you have added AWS as a Cloud Provider, as described in [Connect to CloudWatch](cloud-watch-connection-setup.md). +2. In your Workflow, under **Verify Service**, click **Add Step**. +3. In the resulting **Add Step** settings, select **Performance Monitoring** > **CloudWatch**. +4. Click **Next**. The **Configure CloudWatch** settings appear. + + ![](./static/3-verify-deployments-with-cloud-watch-01.png) + + +## Step 2: CloudWatch Server + +Select the AWS Cloud Provider you set up in [Connect to CloudWatch](cloud-watch-connection-setup.md). You can also enter variable expressions, such as: `${serviceVariable.cloudwatch_connector_name}`. + + +## Step 3: Region + +Select the AWS region where the EC2 and/or ELB are located. You can also enter variable expressions, such as: `${env.name}`. + +If the **CloudWatch Server** field contains an expression, the **Region** field must also use an expression. +## Step 4: Lambda + +Select this option for Harness to use CloudWatch monitoring for the Lambda function(s) the Workflow is deploying.
+ +![](./static/3-verify-deployments-with-cloud-watch-02.png) + +You can select **ELB Metrics** (Load Balancers, Metric Names) but they are not required. For information on Lambda metrics, see [AWS Lambda Metrics](https://docs.aws.amazon.com/lambda/latest/dg/monitoring-functions-metrics.html). + +The metrics are Invocations, Errors, Throttles, and Duration. + + +## Step 5: EC2 Metrics + +This drop-down menu contains the available EC2 metrics. Select the metrics to monitor. For more information, see [Using Amazon CloudWatch Metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html) from AWS. + +You can see the available metrics in **CloudWatch**. Click **Metrics**, and then click **All metrics**. + +![](./static/3-verify-deployments-with-cloud-watch-03.png) + +## Step 6: ECS Metrics + +Expand the **ECS Metrics** option and specify the **Cluster** and **Metric Names** for monitoring. + +If you are performing a Canary analysis, ECS metrics measure historical data because there is no host. + +See [Canary Analysis](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#canary-analysis). + + +## Step 7: ELB Metrics + +**ELB Metrics** are available for all of the CloudWatch types. Add each load balancer you want to monitor. For more information, see [Elastic Load Balancing Metrics and Dimensions](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/elb-metricscollected.html) from AWS. + + +## Step 8: Load Balancers + +Select the load balancer to monitor. The list of load balancers is populated according to the AWS cloud provider and region you selected. (Note: For CloudWatch analysis to appear, you must select a load balancer that provides at least 7 days of historical data for comparison.) + + +## Step 9: Metrics Name + +This drop-down menu contains the available ELB metrics. Select the metrics you want to monitor.
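The metric drop-downs described in Steps 5 through 9 are populated from CloudWatch's `ListMetrics` API. If you want to check outside Harness which metric names a namespace exposes, here is a minimal sketch. The parsing helper runs on sample data shaped like a `ListMetrics` response; the commented `boto3` call at the end is an assumption about your own environment and credentials, not part of the Harness setup.

```python
# Sketch: enumerating CloudWatch metric names, as the Harness drop-downs do.

def metric_names(list_metrics_response):
    """Return the sorted, de-duplicated MetricName values from a
    CloudWatch ListMetrics response dict."""
    metrics = list_metrics_response.get("Metrics", [])
    return sorted({m["MetricName"] for m in metrics})

# Sample data for illustration, in the shape ListMetrics returns:
sample = {
    "Metrics": [
        {"Namespace": "AWS/ELB", "MetricName": "RequestCount", "Dimensions": []},
        {"Namespace": "AWS/ELB", "MetricName": "Latency", "Dimensions": []},
        {"Namespace": "AWS/ELB", "MetricName": "RequestCount", "Dimensions": []},
    ]
}

print(metric_names(sample))  # ['Latency', 'RequestCount']

# With AWS credentials configured, the live equivalent would be roughly:
#   import boto3
#   cw = boto3.client("cloudwatch", region_name="us-east-1")
#   print(metric_names(cw.list_metrics(Namespace="AWS/ELB")))
```

If a metric you expect is missing from the drop-down, running the same query directly against CloudWatch is a quick way to confirm whether the gap is in CloudWatch or in the Harness configuration.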
+ + +## Step 10: Expression for Host/Container name + +The expression entered here should resolve to a host/container name in your deployment environment. By default, the expression is `${instance.host.hostName}`. If you begin typing the expression into the field, the field provides expression assistance. + +For AWS EC2 hostnames, use the expression `${instance.hostName}`. + +![](./static/3-verify-deployments-with-cloud-watch-04.png) + +You can also click **Guide from Example** to select available hosts from a drop-down list and test them. + +![](./static/3-verify-deployments-with-cloud-watch-05.png) + + +## Step 11: Analysis Time Duration + +Set the duration for the verification step. If a verification step exceeds the value, the workflow [Failure Strategy](../../model-cd-pipeline/workflows/workflow-configuration.md#failure-strategy) is triggered. For example, if the Failure Strategy is **Ignore**, then the verification state is marked **Failed** but the workflow execution continues. + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria). + + +## Step 12: Baseline for Risk Analysis + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). + +If you are performing a Canary analysis, ECS metrics measure historical data because there is no host. + +See [Canary Analysis](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#canary-analysis). + + +## Step 13: Execute with previous steps + +Check this checkbox to run this verification step in parallel with the previous steps in **Verify Service**. + +Here is an example of a completed CloudWatch verification step. + +![](./static/3-verify-deployments-with-cloud-watch-06.png) + + +## Step 14: Verify Your Configuration + +When you are finished, click **Test** to verify your configuration.
Once the test succeeds, click **Submit**. This adds the CloudWatch verification step to your Workflow. + +![](./static/3-verify-deployments-with-cloud-watch-07.png) + + +## Step 15: View Verification Results + +Once you have deployed your workflow (or pipeline) using the CloudWatch verification step, you can automatically verify cloud application and infrastructure performance across your deployment. For more information, see [Add a Workflow](../../model-cd-pipeline/workflows/workflow-configuration.md) and [Add a Pipeline](../../model-cd-pipeline/pipelines/pipeline-configuration.md). + +#### Workflow Verification + +To see the results of Harness machine-learning evaluation of your CloudWatch verification, in your Workflow or Pipeline deployment you can expand the **Verify Service** step and then click the **CloudWatch** step. + +![](./static/3-verify-deployments-with-cloud-watch-08.png) + +#### Continuous Verification + +You can also see the evaluation in the **Continuous Verification** dashboard. The workflow verification view is for the DevOps user who developed the workflow. The **Continuous Verification** dashboard is where all future deployments are displayed for developers and others interested in deployment analysis. + +![](./static/3-verify-deployments-with-cloud-watch-09.png) + +Clicking a deployment instance opens the deployment details in the **Deployments** page. + +![](./static/3-verify-deployments-with-cloud-watch-10.png) + +To learn about the verification analysis features, see the following sections. + +#### Deployments + +Harness supports metrics from CloudWatch for Lambda, EC2, ECS, and ELB. + +* **Deployment info -** See the verification analysis for each deployment, with information on its service, environment, pipeline, and workflows. +* **Verification phases and providers -** See the verification phases for each verification provider. Click each provider for logs and analysis.
+* **Verification timeline -** See when each deployment and verification was performed. + +#### Web Transaction Analysis + +* **Execution details -** See the details of verification execution. Total is the total time the verification step took, and Analysis duration is how long the analysis took. +* **Risk level analysis -** Get an overall risk level and view the cluster chart to see events. +* **Web Transaction-level summary -** See a summary of each transaction with the query string, error values comparison, and a risk analysis summary. + +## Harness Expression Support in CV Settings + +You can use expressions (`${...}`) for [Harness built-in variables](https://docs.harness.io/article/7bpdtvhq92-workflow-variables-expressions) and custom [Service](../../model-cd-pipeline/setup-services/service-configuration.md) and [Workflow](../../model-cd-pipeline/workflows/add-workflow-variables-new-template.md) variables in the settings of Harness Verification Providers. + +![](./static/3-verify-deployments-with-cloud-watch-11.png) + +Expression support lets you template your Workflow verification steps. You can add custom expressions for settings, and then provide values for those settings at deployment runtime. Or you can use Harness built-in variable expressions and Harness will provide values at deployment runtime automatically. + +## Next Steps + +* [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md).
+ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/_category_.json b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/_category_.json new file mode 100644 index 00000000000..0b2ab42a17e --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "CloudWatch Verification", + "position": 50, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "CloudWatch Verification" + }, + "customProps": { + "helpdocs_category_id": "wyuv3zocfk" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/cloud-watch-connection-setup.md b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/cloud-watch-connection-setup.md new file mode 100644 index 00000000000..985b2ef34c4 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/cloud-watch-connection-setup.md @@ -0,0 +1,68 @@ +--- +title: Connect to CloudWatch +description: Connect Harness to AWS CloudWatch and verify the success of your deployments and live microservices. +sidebar_position: 20 +helpdocs_topic_id: huoann4npq +helpdocs_category_id: wyuv3zocfk +helpdocs_is_private: false +helpdocs_is_published: true +--- + +A Harness Cloud Provider is a connection to AWS and its monitoring tools, such as CloudWatch. Once Harness is connected, you can use Harness 24/7 Service Guard and Deployment Verification with your CloudWatch data and analysis. + +### Before You Begin + +* See the [CloudWatch Verification Overview](../continuous-verification-overview/concepts-cv/cloud-watch-verification-overview.md). + + +### Step 1: Assign the Required AWS Permissions + +Harness requires the IAM user to be able to make API requests to AWS. 
The **User Access Type** required is **Programmatic access**. This enables an access key ID and secret access key for the AWS API, CLI, SDK, and other development tools. For more information, see [Creating an IAM User in Your AWS Account](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) from AWS. + +Here is the CloudWatch policy used for this guide: + + +``` +{ +    "Version": "2012-10-17", +    "Statement": [ +        { +            "Sid": "VisualEditor0", +            "Effect": "Allow", +            "Action": [ +                "cloudwatch:*", +                "cloudtrail:*", +                "logs:*", +                "events:*" +            ], +            "Resource": "*" +        } +    ] +} +``` + +### Step 2: Add AWS Cloud Provider for CloudWatch + +To perform verification with CloudWatch, you must create a Harness Cloud Provider that can read from CloudWatch using your access key ID and secret access key. This Cloud Provider should have the permissions listed above in [AWS Permissions Required](#aws_permissions_required). + +You might have already set up a Workflow using a Harness Delegate installed in your AWS VPC. (For AWS, the Shell Script Delegate and ECS Delegate are most commonly used.) In this case, to add CloudWatch verification, you must now add a Cloud Provider with the above credentials. + +For more information on setting up an AWS Cloud Provider in Harness, see [Installation Example: Amazon Web Services and ECS](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#installation_example_amazon_web_services_and_ecs) and [Amazon Web Services (AWS) Cloud](https://docs.harness.io/article/whwnovprrb-infrastructure-providers#amazon_web_services_aws_cloud). + +Here is a summary of the steps to set up an AWS Cloud Provider in Harness: + +1. Click **Setup**, and then click **Cloud Providers**. +2. Click **Add Cloud Provider**, and then select **Amazon Web Services**. +3. Choose a name for this provider. This is to differentiate AWS providers in Harness. It is not the actual AWS account name. +4.
Select **Assume IAM Role on Delegate** (recommended), or **Enter AWS Access Keys manually**. + 1. If you selected **Assume IAM Role on Delegate**, in **Delegate Selector**, enter the Selector of the Delegate that this Cloud Provider will use for all connections. For information about Selectors, see [Delegate Selectors](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#delegate_selectors). + 2. If you selected **Enter AWS Access Keys manually**, enter your Access Key and select/create a [Harness Encrypted Text secret](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets) in **Secret Key**. For more information, see [Access Keys (Access Key ID and Secret Access Key)](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) from AWS. + +The AWS [IAM Policy Simulator](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_testing-policies.html) is a useful tool for evaluating policies and access. For more details, see [Amazon Web Services (AWS) Cloud](https://docs.harness.io/article/whwnovprrb-infrastructure-providers#amazon_web_services_aws_cloud).
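Before attaching the policy from Step 1, it can help to sanity-check that the JSON is well formed and grants the services you expect. A minimal sketch using only the Python standard library (the policy text below is copied verbatim from Step 1; this is an optional check, not part of the Harness setup):

```python
import json

# The CloudWatch policy from Step 1, verbatim.
policy_text = """
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "cloudwatch:*",
                "cloudtrail:*",
                "logs:*",
                "events:*"
            ],
            "Resource": "*"
        }
    ]
}
"""

# json.loads raises an error if the JSON is malformed.
policy = json.loads(policy_text)
assert policy["Version"] == "2012-10-17"

# Collect the AWS service prefixes the policy grants (e.g. "cloudwatch").
services = sorted(
    action.split(":")[0]
    for stmt in policy["Statement"]
    for action in stmt["Action"]
)
print(services)  # ['cloudtrail', 'cloudwatch', 'events', 'logs']
```

For checking what the attached policy actually allows at runtime, the IAM Policy Simulator mentioned above remains the authoritative tool; this snippet only catches copy-paste and syntax mistakes before you attach the policy.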
+ + +### Next Steps + +* [Monitor Applications 24/7 with CloudWatch](2-24-7-service-guard-for-cloud-watch.md) +* [Verify Deployments with CloudWatch](3-verify-deployments-with-cloud-watch.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/2-24-7-service-guard-for-cloud-watch-12.png b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/2-24-7-service-guard-for-cloud-watch-12.png new file mode 100644 index 00000000000..83734029ff1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/2-24-7-service-guard-for-cloud-watch-12.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/2-24-7-service-guard-for-cloud-watch-13.png b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/2-24-7-service-guard-for-cloud-watch-13.png new file mode 100644 index 00000000000..a0b9a6cf372 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/2-24-7-service-guard-for-cloud-watch-13.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/2-24-7-service-guard-for-cloud-watch-14.png b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/2-24-7-service-guard-for-cloud-watch-14.png new file mode 100644 index 00000000000..ae6fc4482e0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/2-24-7-service-guard-for-cloud-watch-14.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/2-24-7-service-guard-for-cloud-watch-15.png 
b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/2-24-7-service-guard-for-cloud-watch-15.png new file mode 100644 index 00000000000..f375727659e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/2-24-7-service-guard-for-cloud-watch-15.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/2-24-7-service-guard-for-cloud-watch-16.png b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/2-24-7-service-guard-for-cloud-watch-16.png new file mode 100644 index 00000000000..83734029ff1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/2-24-7-service-guard-for-cloud-watch-16.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/2-24-7-service-guard-for-cloud-watch-17.png b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/2-24-7-service-guard-for-cloud-watch-17.png new file mode 100644 index 00000000000..55ee8d6e179 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/2-24-7-service-guard-for-cloud-watch-17.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/2-24-7-service-guard-for-cloud-watch-18.png b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/2-24-7-service-guard-for-cloud-watch-18.png new file mode 100644 index 00000000000..890cbad6833 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/2-24-7-service-guard-for-cloud-watch-18.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-00.png b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-00.png new file mode 100644 index 00000000000..c47d786cb05 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-00.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-01.png b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-01.png new file mode 100644 index 00000000000..c47d786cb05 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-01.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-02.png b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-02.png new file mode 100644 index 00000000000..d87565b511c Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-02.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-03.png b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-03.png new file mode 100644 index 00000000000..23b331e4e8c Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-03.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-04.png b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-04.png new file mode 100644 index 00000000000..17f2a7a588b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-04.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-05.png b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-05.png new file mode 100644 index 00000000000..88a1e133f3f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-05.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-06.png b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-06.png new file mode 100644 index 00000000000..4fd9e2c46ee Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-06.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-07.png b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-07.png new file mode 100644 index 
00000000000..adb6e628ccb Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-07.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-08.png b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-08.png new file mode 100644 index 00000000000..8928e8f5887 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-08.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-09.png b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-09.png new file mode 100644 index 00000000000..b0891135c7d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-09.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-10.png b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-10.png new file mode 100644 index 00000000000..4efad9dbec0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-10.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-11.png 
b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-11.png new file mode 100644 index 00000000000..655f0aa31bd Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/cloud-watch-verification/static/3-verify-deployments-with-cloud-watch-11.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/_category_.json b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/_category_.json new file mode 100644 index 00000000000..7ff25d77df2 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "Continuous Verification Overview", + "position": 10, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "Continuous Verification Overview" + }, + "customProps": { + "helpdocs_category_id": "rfqhfm9od5" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md new file mode 100644 index 00000000000..30f7353cd9a --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md @@ -0,0 +1,93 @@ +--- +title: 24/7 Service Guard Overview +description: Summary of Harness 24/7 Service Guard features. +# sidebar_position: 2 +helpdocs_topic_id: dajt54pyxd +helpdocs_category_id: zxxvl8vahz +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This article introduces Harness 24/7 Service Guard. 
+ +Most enterprises use multiple monitoring and verification tools for each stage of their microservice deployment, and multiple tools for monitoring the live microservice in production. Detecting and investigating regressions and anomalies across these tools consumes a lot of time. For those of you tasked with monitoring microservices, the following image will be familiar. + +![](./static/24-7-service-guard-overview-64.png) + +Harness solves this problem with Harness 24/7 Service Guard. + +![](./static/24-7-service-guard-overview-65.png) + +Harness 24/7 Service Guard: + +* Collects all of your monitoring and verification tools into a single dashboard. +* Applies Harness Continuous Verification's unsupervised machine learning to detect regressions and anomalies across transactions and events. +* Lets you drill down to the individual issue and open it in the related tool. + +Harness 24/7 Service Guard gives DevOps operational visibility across all your monitoring tools in all your production environments. + +![](./static/24-7-service-guard-overview-66.png) + +24/7 Service Guard's automatic anomaly and regression detection allows you to see when end users are impacted—without requiring configuration, thresholds (which you can [optionally add](#alert_notifications)), or rules. + +### Combined with Workflow Verifications + +24/7 Service Guard is an addition to Harness' basic deployment verification functionality, which is described in [Continuous Verification](https://docs.harness.io/article/myw4h9u05l-verification-providers-list). Harness Workflow verification steps verify Harness deployments and the running microservice for the first 15-30 minutes. 24/7 Service Guard provides ongoing detection for your microservices from then on, catching problems that surface minutes or hours following deployment. + +The following image shows how the Continuous Verification dashboard includes both 24/7 Service Guard and Harness Deployments verification.
+ +![](./static/24-7-service-guard-overview-67.png) + +1. 24/7 Service Guard detection. +2. Harness Deployments verification. + +For 24/7 Service Guard, the queries you define to collect logs are specific to the application or service you want monitored. Verification is at the application/service level. This is unlike Workflows, where verification is performed at the host/node/pod level. + +### Machine Learning Overview + +24/7 Service Guard sits on top of all your Application Performance Monitoring (APM), verification, and logging tools. 24/7 Service Guard applies: + +* Predictive machine learning models for short-term behavior: + + Applies deep neural nets to short-term history. + + Detects unusual patterns due to spikes. + + Adapts to drift over deployments. +* Memory models for long-term behavior: + + Learns historical/cyclical trends. + + Quantifies app reliability over Web and business transactions, based on the history of anomalous behavior. + + Quantifies the importance of different Web and business transactions, based on app usage over short- and long-term periods. + +### Video Demonstration + +Here's a 2-minute video that explains Harness 24/7 Service Guard: + + + + + + +### Using the Dashboard + +To use 24/7 Service Guard, click Harness Manager's **Continuous Verification** link. + +![](./static/24-7-service-guard-overview-68.png) + +The Services configured with 24/7 Service Guard appear. In this example, we have two applications: + +![](./static/24-7-service-guard-overview-69.png) + +Let's look at the dashboard in detail. The following image describes the 24/7 Service Guard dashboard for the application. + +![](./static/24-7-service-guard-overview-70.png) + +1. **Monitoring sources:** Verification and metrics providers, such as AppDynamics, etc. For a list of the verification providers supported by Harness, see [Continuous Verification](https://docs.harness.io/article/myw4h9u05l-verification-providers-list). +2. 
**Heat map:** The heat map is generated using the application and the monitoring sources. Each square is a time segment. +3. **Time resolution:** You can go high-level (for example, 30 days) or low-level (12 hours). +4. **Performance regressions:** Red and yellow are used to highlight regressions and anomalies. The colors indicate the Overall Risk Level for the monitoring segment. +5. **Transactions analysis:** Click a square to see the machine-learning details for the monitoring segment. The analysis details show the transactions for the monitoring segment. High-risk transactions are listed first. +6. **Drill-in to find the cause of the regression or anomaly:** When you click the dot for a transaction, you get further details and you can click a link to open the transaction in the monitoring tool. This allows you to go into the monitoring tool and find the root cause of the regression (specific queries, events, etc). + +### Next Step + +* [Set Up 24/7 Service Guard](../../24-7-service-guard/set-up-service-guard.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/_category_.json b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/_category_.json new file mode 100644 index 00000000000..a88a732dba9 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "CV Verifier Overviews", + "position": 10, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "CV Verifier Overviews" + }, + "customProps": { + "helpdocs_category_id": "zxxvl8vahz" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/app-dynamics-verification-overview.md 
b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/app-dynamics-verification-overview.md new file mode 100644 index 00000000000..c793c5258a5 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/app-dynamics-verification-overview.md @@ -0,0 +1,66 @@ +--- +title: AppDynamics Verification Overview +description: Overview of Harness' AppDynamics integration. +# sidebar_position: 2 +helpdocs_topic_id: 2zxfjt67yb +helpdocs_category_id: zxxvl8vahz +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic introduces you to integrating AppDynamics with Harness' Continuous Verification features. Once integrated, you can use AppDynamics to monitor your deployments and production applications using Harness' semi-supervised machine-learning functionality. Sections in this topic: + +* [Integration Overview](#integration_overview) +* [Setup and Verification Preview](#setup_preview) +* [Next Step](#next_step) + +If you are looking for How-tos, see: + +* [Add AppDynamics as a Verification Provider](../../appdynamics-verification/1-app-dynamics-connection-setup.md) +* [Monitor Applications 24/7 with AppDynamics](../../appdynamics-verification/2-24-7-service-guard-for-app-dynamics.md) +* [Verify Deployments with AppDynamics](../../appdynamics-verification/3-verify-deployments-with-app-dynamics.md) +* [Templatize AppDynamics Verification](../../appdynamics-verification/templatize-app-dynamics-verification.md) +* [Set AppDynamics Environment Variables](../../appdynamics-verification/app-dynamics-environment-variables.md) + + +### Integration Overview + +AppDynamics enables you to monitor and manage your entire application-delivery ecosystem, from client requests straight through to your networks, backend databases, and application servers. 
+ +Harness Continuous Verification integrates with AppDynamics to verify your deployments and live production applications using the following Harness features: + +* **24/7 Service Guard** - Monitors your live, production applications. +* **Deployment Verification** - Monitors your application deployments, and performs automatic rollback according to your criteria. + + + +| | | +| --- | --- | +| **Microservices Environment using AppDynamics** | **Harness Verification and Impact Analysis** | +| ![](./static/appd-microservices-environment.png) | ![](./static/appd-harness-verification-and-impact-analysis.png) | + +Harness does not support [AppDynamics Lite](https://www.appdynamics.com/lite/). If you set up AppDynamics with Harness using an AppDynamics Pro Trial account and the trial expires, you will be using AppDynamics Lite and it will not work with Harness. + +If you require more flexibility than the standard integration outlined here, you also have the option to use [AppDynamics as a Custom APM](../../custom-metrics-and-logs-verification/connect-to-app-dynamics-as-a-custom-apm.md). + +### Setup and Verification Preview + +You set up AppDynamics and Harness as follows: + +1. **AppDynamics** – These instructions assume that you are already using AppDynamics to monitor your application. +2. **Harness Application** – Create a Harness Application with a Service and an Environment. We do not cover Application setup in this sequence. See [Create an Application](../../../model-cd-pipeline/applications/application-configuration.md). +3. **Verification Provider Setup** – In Harness, you connect to your AppDynamics account, adding AppDynamics as a [Harness Verification Provider](../../appdynamics-verification/1-app-dynamics-connection-setup.md). + +A Verification Provider is a connection to monitoring tools such as AppDynamics. Once Harness is connected, you can use Harness 24/7 Service Guard and Deployment Verification with your AppDynamics data and analysis. +4. 
**​24/7 Service Guard Setup** – In the Environment, set up 24/7 Service Guard to monitor your live, production application. + +After completing this setup, you'll be able to [verify deployments](deployment-verification-results.md) as follows: + +1. Add a Workflow to your Harness Application, and deploy your microservice or application to your configured Environment. +2. After you have run a successful deployment, you then add verification steps to the Workflow using your Verification Provider. +3. Harness uses semi-supervised machine-learning and AppDynamics analytics to analyze your future deployments, discovering events that might be causing your deployments to fail. You can use this information to set rollback criteria, and to improve your deployments. + + +### Next Step + +* [Add AppDynamics as a Verification Provider](../../appdynamics-verification/1-app-dynamics-connection-setup.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/bugsnag-verification-overview.md b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/bugsnag-verification-overview.md new file mode 100644 index 00000000000..67df9ea5876 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/bugsnag-verification-overview.md @@ -0,0 +1,67 @@ +--- +title: Bugsnag Verification Overview +description: Overview of Harness' Bugsnag integration. +# sidebar_position: 2 +helpdocs_topic_id: ac5piurukt +helpdocs_category_id: zxxvl8vahz +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes how to set up Harness Continuous Verification, and monitor your deployments and production applications using its unsupervised machine-learning functionality, on Bugsnag. 
+ +Bugsnag provides error reporting libraries for every major software platform, detecting and reporting errors in your apps, and capturing diagnostic data for each error. Bugsnag captures your app’s exceptions or events, and groups them into errors according to their root causes. + +### Visual Summary + +Harness Continuous Verification integrates with Bugsnag to verify your deployments and live production applications using the following Harness features: + +* **24/7 Service Guard** - Monitors your live, production applications. +* **Deployment Verification** - Monitors your application deployments, and performs automatic rollback according to your criteria. + +This document describes how to set up these Harness Continuous Verification features, and then monitor your deployments and production applications using its unsupervised machine-learning functionality. + +Bugsnag provides unique browser-focused reporting. See [Browser-Based Benefits](#browser_based_benefits) below. + +| | | +| --- | --- | +| **Reporting with Bugsnag** | **Harness Analysis** | +| ![](./static/bugsnag-left.png) | ![](./static/bugsnag-right.png) | + +### Integration Process Summary + +You set up Bugsnag and Harness in the following way: + +![](./static/bugsnag-verification-overview-05.png) + +1. **Bugsnag** - Monitor your application using Bugsnag. In this article, we assume that you are already using Bugsnag to monitor your application. +2. **Verification Provider Setup** - In Harness, you connect to your Bugsnag account, adding Bugsnag as a **Harness Verification Provider**. +3. **Harness Application** - Create a Harness Application with a Service and an Environment. We do not cover Application setup in this sequence. See [Create an Application](../../../model-cd-pipeline/applications/application-configuration.md). +4. **24/7 Service Guard Setup** - In the Environment, set up 24/7 Service Guard to monitor your live, production application. +5. **Verify Deployments**: + 1. 
Add a Workflow to your Harness Application and deploy your microservice or application to the service infrastructure/[Infrastructure Definition](../../../model-cd-pipeline/environments/environment-configuration.md#add-an-infrastructure-definition) in your Environment. + 2. After you have run a successful deployment, you then add verification steps to the Workflow using your Verification Provider. + 3. Harness uses unsupervised machine-learning and Bugsnag analytics to analyze your future deployments, discovering events that might be causing your deployments to fail. Then you can use this information to set rollback criteria and improve your deployments. + +### Browser-Based Benefits + +Bugsnag is particularly useful for browser-based apps, as it collects browser information as part of its exception and error capture. This can be helpful in determining if a new version of a browser is causing problems for users. Here is an example from Bugsnag: + +![](./static/bugsnag-verification-overview-06.png) + +Once you have deployed your app via Harness, you can add host/node-focused verification to your Harness workflow using another [Verification Provider](https://docs.harness.io/article/myw4h9u05l-verification-providers-list), and use Bugsnag to focus on browser-based issues. Here is an example of a Harness verification where other verification tools have been used to verify host/node issues, and Bugsnag is added as the last verification step to capture browser-based issues: + +![](./static/bugsnag-verification-overview-07.png) + +When you set up Bugsnag as a verification step in a Harness workflow, you can indicate if your app is browser-based. When Harness arrives at the Bugsnag verification step, Harness will ignore deployment host or node information and focus on browser-based data. This browser focus enables you to capture browser issues on their own, after you have already ensured that the deployment host/node environment is running correctly.
+ +Harness can then use this browser data with the machine learning in its Continuous Verification to determine what events are causing errors or have the potential to cause errors in the future. + +For information about advanced browser event capturing in Bugsnag, see the [React integration guide](https://docs.bugsnag.com/platforms/browsers/react/#sending-diagnostic-data) from Bugsnag. + +### Next Steps + +* [Connect to Bugsnag](../../bugsnag-verification/1-bugsnag-connection-setup.md) +* [Monitor Applications 24/7 with Bugsnag](../../bugsnag-verification/2-24-7-service-guard-for-bugsnag.md) +* [Verify Deployments with Bugsnag](../../bugsnag-verification/3-verify-deployments-with-bugsnag.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/cloud-watch-verification-overview.md b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/cloud-watch-verification-overview.md new file mode 100644 index 00000000000..d2a16132b31 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/cloud-watch-verification-overview.md @@ -0,0 +1,62 @@ +--- +title: CloudWatch Verification Overview +description: Overview of Harness' CloudWatch integration. +# sidebar_position: 2 +helpdocs_topic_id: q6ti811nck +helpdocs_category_id: zxxvl8vahz +helpdocs_is_private: false +helpdocs_is_published: true +--- + +[Amazon CloudWatch](https://aws.amazon.com/cloudwatch/) collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications and services that run on AWS, and on-premises servers.
+ +In this topic: + +* [Visual Summary](#visual_summary) +* [Integration Process Summary](#anchor_1) +* [Next Steps](#next_steps) + + +### Visual Summary + +Harness Continuous Verification integrates with CloudWatch to verify your deployments and live production applications using the following Harness features: + +* **24/7 Service Guard** - Monitors your live, production applications. +* **Deployment Verification** - Monitors your application deployments, and performs automatic rollback according to your criteria. + +This document describes how to set up these Harness Continuous Verification features and monitor your deployments and production applications using its unsupervised machine-learning functionality. + +| | | +| --- | --- | +| **Monitoring with CloudWatch** | **Harness Analysis** | +| ![](./static/cloudwatch-left.png) | ![](./static/cloudwatch-right.png) | + +Verification is limited to EC2 instance and ELB-related metrics data. + +### Integration Process Summary + +You set up CloudWatch and Harness in the following way: + +![](./static/cloud-watch-verification-overview-33.png) + +1. **CloudWatch** - Using CloudWatch, you monitor the EC2 instances and ELB used to run your microservice or application. +2. **Cloud Provider** - In Harness, you connect to your AWS account, adding AWS as a [Cloud Provider](https://docs.harness.io/article/whwnovprrb-infrastructure-providers). +3. **Harness Application** - Create a Harness Application with a Service and an Environment. We do not cover Application setup in this sequence. See [Application Components](../../../model-cd-pipeline/applications/application-configuration.md). +4. **24/7 Service Guard Setup** - In the Environment, set up 24/7 Service Guard to monitor your live, production application. +5. **Verify Deployments**: + 1. 
Add a Workflow to your Harness Application and deploy your microservice or application to the service infrastructure/[Infrastructure Definition](../../../model-cd-pipeline/environments/environment-configuration.md#add-an-infrastructure-definition) in your Environment. + 2. After you have run a successful deployment, you then add verification steps to the Workflow using your Verification Provider. + 3. Harness uses unsupervised machine-learning and CloudWatch analytics to analyze your future deployments, discovering events that might be causing your deployments to fail. Then you can use this information to set rollback criteria and improve your deployments. + +For information on setting up CloudWatch to monitor EC2, ECS, and ELB, see [Monitoring Your Instances Using CloudWatch](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch.html) from AWS. When you enable CloudWatch monitoring on EC2, you are prompted with the following dialog. + +![](./static/cloud-watch-verification-overview-34.png) + +Click **Yes, Enable**, and then go to CloudWatch to view metrics. 
+ + +### Next Steps + +* [Connect to CloudWatch](../../cloud-watch-verification/cloud-watch-connection-setup.md) +* [Monitor Applications 24/7 with CloudWatch](../../cloud-watch-verification/2-24-7-service-guard-for-cloud-watch.md) +* [Verify Deployments with CloudWatch](../../cloud-watch-verification/3-verify-deployments-with-cloud-watch.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/continuous-verification-metric-types.md b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/continuous-verification-metric-types.md new file mode 100644 index 00000000000..13252f002da --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/continuous-verification-metric-types.md @@ -0,0 +1,96 @@ +--- +title: Continuous Verification Metric Types +description: Use CV Custom Metrics to monitor risks and anomalies that occur during the deployment verification. +# sidebar_position: 2 +helpdocs_topic_id: 9e14ilkngd +helpdocs_category_id: zxxvl8vahz +helpdocs_is_private: false +helpdocs_is_published: true +--- + +While adding a service verification step in your Workflow, you can select the metrics that you want monitored, specify thresholds, and monitor the anomalies that occur during the deployment verification. + +You can also add your Custom metrics to Harness 24/7 Service Guard in your Harness Application Environment. + +### Before You Begin + +* See [Continuous Verification Overview](https://docs.harness.io/article/ina58fap5y-what-is-cv). +* See [Custom Verification Overview](https://docs.harness.io/article/e87u8c63z4-custom-verification-overview).
+ +### Identifying the Anomalies + +When you select a metric, the previously deployed host data, or baseline, is used as the yardstick for identifying anomalies during verification. While any change can be flagged as an anomaly, the learning engine takes into account the significance of the change and the ratio associated with the existing pattern. Metrics that lie within the default threshold values are excluded from the analysis and always result in low risk. + +#### **Default Delta** + +The default delta is the absolute deviation from the previous value: the absolute value of the current value minus the previous value. If the delta is below the delta threshold, the change is not identified as an anomaly. If it is above the threshold, analysis is run on the data to identify the anomaly. + +The formula for **Default Delta** is represented as follows: + +`allow if abs(y - x) < min_threshold_delta` + +#### **Default Ratio** + +The default ratio is the relative deviation from the previous value. It should ideally be less than the minimal threshold value that you set during the verification configuration. If this value is higher than the threshold value, machine-learning algorithms are run to identify and highlight the anomalies. + +The formula for **Default Ratio** is represented as follows: + +`allow if abs(y - x)/x < min_threshold_ratio` + +Here is a tabular summary of the various metrics, their thresholds, and the allowed delta and ratio computations. The x value indicates the base value from the previous analysis. The y value is the new value derived from the current analysis.
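The two checks can be sketched in a few lines of Python. This is only an illustrative sketch of the formulas above, not Harness' implementation; how the two checks are combined varies by metric type, using the defaults summarized in the table that follows:

```python
def allowed_by_delta(x, y, delta=20.0):
    """Default delta check: allow (skip anomaly analysis) if abs(y - x) < delta.

    x is the base value from the previous analysis; y is the new value.
    """
    return abs(y - x) < delta

def allowed_by_ratio(x, y, ratio=0.2):
    """Default ratio check: allow if abs(y - x) / x < ratio.

    A zero baseline is treated as not allowed, so the data point
    still goes through analysis rather than dividing by zero.
    """
    return x != 0 and abs(y - x) / x < ratio

# Response Time example from this topic: previous value 20, new value 30.
x, y = 20.0, 30.0
print(allowed_by_delta(x, y))   # delta is 10, below the default threshold of 20
print(allowed_by_ratio(x, y))   # ratio is 0.5, above the default threshold of 0.2
```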
+
+
+| **Metric Type** | **Type of Values** | **Deviation Type** | **Default Delta** | **Default Ratio** |
+| --- | --- | --- | --- | --- |
+| Error rate | Web / Business transactions | Higher is bad | 0 | 0 |
+| Response Times | Web / Business transactions | Higher is bad | 20 | .2 |
+| Throughput | Web / Business transactions | Lower is bad | 20 | .2 |
+| Infra | CPU, memory, and so on | Both higher and lower are bad | 20 | .2 |
+| Apdex | Value between 0 and 1 | Lower is bad | 0 | .2 |
+
+If the default thresholds are not relevant to your setup and do not make sense with these formulae, you can set up Custom Thresholds or Fail Fast Thresholds. For more information, see [Apply Custom Thresholds to Deployment Verification](../../tuning-tracking-verification/custom-thresholds.md).
+
+### Error Rate
+
+Error rate indicates the number of errors. There is no threshold associated with this metric. Unlike the other metrics, there is no **Default Delta** or **Default Ratio** against which this metric is measured. If there is a deviation from the previous value, analysis is done on all the data, without any filtering, to find whether there are any anomalies.
+
+For example, if the previous verification cycle had 10 errors, and 2 more errors occur in this cycle, further analysis is done to identify the anomalies.
+
+![](./static/continuous-verification-metric-types-72.png)
+
+In this example, the number of **Errors per Minute** increased from 4 to 10.33, and hence it is flagged as a **High Risk** transaction.
+
+### Response Time
+
+This metric considers the response time of web/business transactions. Usually, a higher value is considered an anomaly.
+
+Response Time indicates the time spent on the transaction from the beginning of a request. Usually, higher response times indicate issues in performance.
+
+For example, if the value of **Response Time** was 20 in the previous run and it increased to 30, it will not be flagged as an anomaly. 
The delta in this case is less than the default delta. Unless the ratio crosses the default ratio and the delta is high, the anomaly is not flagged.
+
+### Throughput
+
+Throughput indicates the number of successful requests per minute to your web server. Throughput values differ between application data and web/browser data, because a single user request may result in multiple requests by the application.
+
+If the number of requests per minute drops, it is considered an anomaly using the **Default Delta** and **Default Ratio** formulae.
+
+### Infra/Infrastructure
+
+This value measures the errors in infrastructure, such as CPU, memory, and HTTP errors.
+
+If the memory usage or CPU usage is low, it is flagged as a High Risk anomaly, because it is an indicator of some other factor that might be underperforming. Unless there is a fundamental, intentional change, a sudden change or reduction in the usage of infrastructure resources is highly unusual. Harness CV uses this anomaly to indicate the need to identify such indirect factors.
+
+![](./static/continuous-verification-metric-types-73.png)
+
+In this example, the CPU utilization value decreased from 255.91 to 95.04. Unless there has been a deliberate change in the code or resources, this is highly unlikely to happen. Hence, it is flagged as a **High Risk** transaction.
+
+### Apdex
+
+The Apdex value is usually between 0 and 1. A lower Apdex score indicates that the performance is not as expected.
+
+Apdex measures user satisfaction with the response time, comparing the measured response time against a specified threshold value. 
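
The Default Delta and Default Ratio checks described above can be sketched in a few lines of Python. This is an illustration of the formulas and the defaults table in this topic, not Harness' actual implementation; in particular, combining the two checks with `or` is an interpretation based on the Response Time example, where a jump from 20 to 30 is allowed because its delta is within the default delta of 20.

```python
# Illustrative sketch of the Default Delta / Default Ratio gating described
# above. Threshold values come from the table in this topic; the "or"
# combination is an interpretation based on the Response Time example.

DEFAULTS = {
    # metric type: (default delta, default ratio)
    "response_time": (20, 0.2),
    "throughput": (20, 0.2),
    "infra": (20, 0.2),
    "apdex": (0, 0.2),
}

def within_default_thresholds(metric_type, previous, current):
    """Return True if the change is small enough to be excluded from analysis."""
    delta_threshold, ratio_threshold = DEFAULTS[metric_type]
    delta = abs(current - previous)                          # abs(y - x)
    ratio = delta / previous if previous else float("inf")   # abs(y - x)/x
    return delta < delta_threshold or ratio < ratio_threshold

# Response Time going from 20 to 30: delta is 10 (< 20), so the change is
# excluded from analysis and treated as low risk.
print(within_default_thresholds("response_time", 20, 30))  # True
```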
+ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/cv-providers.md b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/cv-providers.md new file mode 100644 index 00000000000..ec3a42b3006 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/cv-providers.md @@ -0,0 +1,114 @@ +--- +title: Who Are Harness' Verification Providers? +description: Lists the verification (monitoring) providers that integrate with Harness, identifying the analysis strategies and deployment types that Harness supports for each provider. +sidebar_position: 50 +helpdocs_topic_id: 5vp1f7zt0a +helpdocs_category_id: zxxvl8vahz +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic lists the verification providers that integrate with Harness, and links to further details about each provider. It also covers the types of analysis strategies that Harness supports for each provider. + + +### Verification Providers Supported + +[See the list of providers](https://harness.helpdocs.io/l/en/category/gurgsl2gqt-continuous-verification). + + +### Analysis Support by Provider + +The following table lists which analysis strategies are supported for each Verification Provider. 
+
+
+| **Provider** | **Previous** | **Canary** |
+| --- | --- | --- |
+| AppDynamics | Yes | Yes |
+| NewRelic | Yes | Yes |
+| DynaTrace | Yes | Yes |
+| Prometheus | Yes | Yes |
+| SplunkV2 | Yes | Yes |
+| ELK | Yes | Yes |
+| Sumo | Yes | Yes |
+| Datadog Metrics | Yes | Yes |
+| Datadog Logs | Yes | Yes |
+| CloudWatch | Yes | Yes |
+| Custom Metric Verification | Yes | Yes |
+| Custom Log Verification | Yes | Yes |
+| BugSnag | Yes | No |
+| Stackdriver Metrics | Yes | Yes |
+| Stackdriver Logs | Yes | Yes |
+
+### Deployment Type Support
+
+The following table lists which analysis strategies are supported in each deployment type.
+
+| **Deployment Type** | **Analysis Supported** |
+| --- | --- |
+| Basic | Previous |
+| Canary | Canary |
+| BlueGreen | Previous |
+| Rolling | Previous |
+| Multi-service | No |
+| Build | No |
+| Custom | No |
+
+### Blog Articles
+
+The following blog articles cover a range of Verification Providers and discuss Harness Continuous Verification functionality.
+
+#### Machine Learning and Continuous Delivery
+
+[Can you apply Machine Learning to Continuous Delivery?](http://www.harness.io/blog/how-to-do-continuous-delivery-for-machine-learning-systems)
+
+![](./static/cv-providers-12.jpg)
+
+Interested in turning your deployment canary into a cybernetic living organism? Awesome, this blog is for you.
+
+#### Harness Eliminates False Positives
+
+[How Harness Eliminates False Positives with Neural Nets](http://www.harness.io/blog/eliminate-false-positives-with-neural-nets)
+
+![](./static/cv-providers-13.jpg)
+
+Harness analyzes application log events to help customers detect anomalies and verify their production deployments. To do this, our algorithms have been based on textual similarity and occurrence frequencies. Harness has native integrations for popular log aggregation tools like Splunk, ELK, and Sumo Logic. 
+
+#### Dynatrace
+
+[Harness Extends Continuous Verification To Dynatrace](https://harness.io/2018/02/harness-extends-continuous-verification-dynatrace/)
+
+![](./static/cv-providers-14.jpg)
+
+One of our early customers, Build.com, used to verify production deployments with 5-6 team leads manually analyzing monitoring data and log files. This process took each team lead 60 minutes, and occurred 3 times a week. That’s 1,080 minutes, or 18 hours, of team lead time spent on verification. With Harness, Build.com reduced verification time to just 15 minutes, and also enabled automatic rollback to occur in production.
+
+#### AppDynamics
+
+[Introducing Harness Service Impact Verification for AppDynamics](https://harness.io/2018/05/introducing-harness-service-impact-verification-for-appdynamics/)
+
+![](./static/cv-providers-15.jpg)
+
+AppDynamics announced a new partnership with Harness to help customers embrace continuous delivery and understand the business impact of every application deployment.
+
+#### Prometheus
+
+[Automating Deployment Health Checks with Prometheus and Harness Continuous Delivery](http://www.harness.io/blog/verifying-ci-cd-pipelines-prometheus)
+
+![](./static/cv-providers-16.jpg)
+
+Overview of Harness' integration with Prometheus, the open-source monitoring project.
+
+#### Datadog
+
+[Harness Extends Continuous Verification To Datadog](https://harness.io/2018/05/harness-extends-continuous-verification-datadog/)
+
+![](./static/cv-providers-17.jpg)
+
+Overview of Harness' integration with Datadog APM. 
+
+### Next Up
+
+Next, see how you can interpret the verification results that Harness provides:
+
+* [Verification Results Overview](deployment-verification-results.md)
+
diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md
new file mode 100644
index 00000000000..e7568d7289a
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md
@@ -0,0 +1,271 @@
+---
+title: CV Strategies, Tuning, and Best Practices
+description: Learn about analysis strategies, results tuning, and best practices for Harness Continuous Verification (CV).
+# sidebar_position: 2
+helpdocs_topic_id: 0avzb5255b
+helpdocs_category_id: zxxvl8vahz
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This topic helps you pick the best analysis strategy when setting up Harness Continuous Verification (CV) for deployments, and helps you tune the results using your expertise.
+
+First, learn about the types of analysis strategies, and then learn about best practices and tuning.
+
+### Where Are Analysis Strategies Set Up?
+
+When you set up a verification step in a Harness Workflow, each supported APM lists the available analysis strategies in its **Baseline for Risk Analysis** setting, and the length of time verification should run in its **Analysis Time duration** setting.
+
+![](./static/cv-strategies-and-best-practices-21.png)
+
+These two settings are used to tune the verification Harness performs. They are discussed in detail in this topic. 
+
+### Types of Analysis Strategies
+
+Harness uses these types of analysis strategies, each with a different combination of load (datasets) and granularity:
+
+| **Analysis Strategy** | **Load** | **Granularity** |
+| --- | --- | --- |
+| Previous | Synthetic | Container level |
+| Canary | Real user traffic | Container level |
+
+Each strategy is defined below.
+
+#### Previous Analysis
+
+In Previous Analysis, Harness compares the metrics received for the nodes deployed in each Workflow Phase with metrics received for all the nodes during the previous deployment. Remember that verification steps are used only after you have deployed successfully at least once: In order to verify deployments and find anomalies, Harness needs data from previous deployments.
+
+For example, if Phase 1 deploys app version 1.2 to node A, the metrics received from the APM during this deployment are compared to the metrics for nodes A, B, and C (all the nodes) during the previous deployment (version 1.1). Previous Analysis is best used when you have predictable load, such as in a QA environment.
+
+For Previous Analysis to be effective, the load on the application should be the same across deployments. For example, provide a (synthetic) test load using [Apache JMeter](https://jmeter.apache.org/). If the load varies between deployments, then Previous Analysis is not effective.
+
+##### Baseline for Previous Analysis
+
+How does Harness identify the baseline? As stated earlier, Harness uses the metrics received for all the nodes during the previous deployment.
+
+You can use [Pin as Baseline for Continuous Verification](#pin_as_baseline_for_continuous_verification) to specify a deployment to use as a baseline. 
+
+But if you do not use **Pin as Baseline for Continuous Verification**, Harness uses a combination of the following Harness entities to define what deployment is compared:
+
+* Workflow (the specific Workflow that performed the deployment)
+* Service
+* Environment
+* Infrastructure Definition (the specific Infrastructure Definition used for the specific deployment)
+
+#### Canary Analysis
+
+For Canary Analysis, Harness compares the metrics received for all old app version nodes with the metrics for the new app version nodes. The nodes deployed in each Workflow Phase are compared with metrics received for all of the existing nodes hosting the application.
+
+In the following example, a Prometheus verification step is using Canary Analysis to compare a new node with two previous nodes:
+
+![](./static/cv-strategies-and-best-practices-22.png)
+
+For example, if Phase 1 deploys to 25% of your nodes, the metrics received for the new app version on these nodes are compared with the metrics received for the old app version on the remaining nodes.
+
+The metrics are taken for the period of time defined in **Analysis Time duration**.
+
+Harness supports Canary Analysis only in [Canary deployments](https://docs.harness.io/article/325x7awntc-deployment-concepts-and-strategies#canary_deployment).
+
+##### Canary Analysis without a Host
+
+Most providers have the concept of a host, where you use a host placeholder in the query used by Harness. In cases where the metrics provider does not have this concept (for example, Dynatrace), Canary Analysis performs historical analysis.
+
+For example, if your deployment is from 10-10:15am, Harness will compare it with deployments from 10-10:15am over the last 7 days. That historical data is the control data. 
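
Before moving on to best practices, the comparison sets used by the two strategies can be summarized in a short sketch. The node and version structures below are invented for illustration only; they are not Harness data structures or APIs.

```python
# Sketch of the comparison sets described above. The node/version
# structures here are invented for illustration; they are not Harness APIs.

def previous_analysis_sets(current_phase_nodes, previous_deployment_nodes):
    """Previous Analysis: new-phase nodes vs. ALL nodes from the previous deployment."""
    return current_phase_nodes, previous_deployment_nodes

def canary_analysis_sets(all_nodes, new_version):
    """Canary Analysis: new-version nodes vs. nodes still running the old version."""
    test = [n for n, v in all_nodes.items() if v == new_version]
    control = [n for n, v in all_nodes.items() if v != new_version]
    return test, control

# Canary Phase 1: one node upgraded to v1.2, two nodes still on v1.1.
nodes = {"node-a": "v1.2", "node-b": "v1.1", "node-c": "v1.1"}
print(canary_analysis_sets(nodes, "v1.2"))  # (['node-a'], ['node-b', 'node-c'])
```

Note that in Previous Analysis the control data comes from an earlier deployment, while in Canary Analysis both sets come from the same deployment.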
+ +### Verification Best Practices + +When picking an analysis strategy, there are several factors to consider, such as the type of deployment, in which Phase of the Workflow to add verification, and whether the number of instances/nodes/etc are consistent between deployments.   + +This section provides help on selecting the right analysis strategy for your deployment. + +#### Previous Analysis + +Use the following best practices with Previous Analysis. + +##### Do + +* Use Previous Analysis in deployments where 100% of instances are deployed at once (single-phase deployments): + + Basic deployment. + + Canary deployment with only one phase. + + Blue/Green deployment. + + Rolling deployment. +* Use Previous Analysis if the number of instances deployed remains the same between deployments. +* In log verification, construct queries that selectively target errors. (This trains Continuous Verification to detect new failures.) +* In time-series verification, add signals that are strong indications of service issues: + + For example, a spike in error rates is cause for concern. + + Response times are also good candidates. + + Add CPU usage, memory usage, and similar metrics only if you are concerned about them. + + When configuring deployment verification, collect signals at the Service Instance level. + + When configuring 24/7 Service Guard, collect signals at the Service level. + +##### Don't + +* Don't use Previous Analysis in any phase of a *multiphase* Canary deployment. +* Don't use Previous Analysis when [Kubernetes Horizontal Pod Autoscaler (HPA)](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) is configured for a deployment. +* In log verification, don't construct a generic query—such as one that will match all `info` messages. (Generic queries pull in a huge volume of data, without training Continuous Verification to recognize errors. The result is little signal, amid lots of noise.) 
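
To illustrate the log-query guidance above, here is the difference between a selective query and a generic one. These queries are hypothetical examples; the exact syntax depends on your log provider:

```
# Selective: targets error events, training CV on failure patterns
level:ERROR OR level:FATAL OR "exception"

# Too generic: matches every message, producing noise with little signal
*
```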
+
+#### Canary Analysis
+
+Use the following best practices with Canary Analysis.
+
+##### Do
+
+* Use Canary Analysis in multiphase Canary Workflows only.
+
+##### Don't
+
+* Don't use Canary Analysis if there is only one phase in the Canary Workflow.
+* Don't use Canary Analysis in the last phase of a Canary Workflow, because the final phase deploys to 100% of nodes and so there are no other nodes to compare.
+* Don't use Canary Analysis when deploying 100% of instances at once.
+
+Harness supports Canary Analysis only in [Canary deployments](https://docs.harness.io/article/325x7awntc-deployment-concepts-and-strategies#canary_deployment).
+
+### Analysis Time Duration
+
+This is the number of data points Harness uses. If you enter 10 minutes, Harness will take the first 10 minutes' worth of the log/APM data points and analyze them.
+
+The length of time it takes Harness to analyze the 10 minutes of data points depends on the number of instances being analyzed and the monitoring tool. If you have 1,000 instances, it can take some time to analyze the first 10 minutes of all of their logs/APM data points.
+
+The recommended Analysis Time Duration is 10 minutes for logging providers and 15 minutes for APM and infrastructure providers.
+
+Harness waits 2-3 minutes to allow enough time for the data to be sent to the verification provider before it analyzes the data. This wait time is standard practice with monitoring tools. So, if you set the **Analysis Time Duration** to 10 minutes, the initial 2-3 minute wait is added to it, and the total sample time is about 13 minutes.
+
+#### Wait Before Execution
+
+The **Verify Service** section of the Workflow has a **Wait before execution** setting.
+
+![](./static/cv-strategies-and-best-practices-23.png)
+
+As stated earlier, Harness waits 2-3 minutes before performing analysis to avoid initial noise. Use the **Wait before execution** setting only when your deployed application takes more than 3-4 minutes to reach steady state. 
This helps avoid initial noise, such as CPU spikes, when an application starts.
+
+### Algorithm Sensitivity and Failure Criteria
+
+When adding a verification step to your Workflow, you can use the **Algorithm Sensitivity** setting to define the risk level that will be used as failure criteria during the deployment.
+
+![](./static/cv-strategies-and-best-practices-24.png)
+
+When the criteria are met, the Failure Strategy for the Workflow is executed.
+
+![](./static/cv-strategies-and-best-practices-25.png)
+
+For time-series analysis (APM), the risk level is determined using standard deviations, as follows: 5𝞼 ([sigma](https://watchmaker.uncommons.org/manual/ch03s05.html)) represents high risk, 4𝞼 represents medium risk, and 3𝞼 or below represents low risk.
+
+Harness also takes into account the number of points that deviated: 50%+ is high risk, 25%-50% is medium risk, and 25% or below is low risk.
+
+Harness will normally invoke a Workflow's Failure Strategy when it detects high risk; however, if you have set a verification step's sensitivity to **Very sensitive**, Harness will also invoke the Workflow's Failure Strategy upon detecting medium risk.
+
+Every successful deployment contributes to creating and shaping a healthy baseline that tells Harness what a successful deployment looks like, and what should be flagged as a risk. If a deployment failed due to verification, Harness will not consider any of the metrics produced by that deployment as part of the baseline.
+
+### Tuning Your Verification
+
+When you first start using Harness Continuous Verification, we recommend you examine the results and use the following features to tune your verification using your knowledge of your application and deployment environment:
+
+#### Customize Threshold
+
+In your deployment verification results, you can customize the threshold of each metric/transaction for a Harness Service in a Workflow. 
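
The Algorithm Sensitivity rules above can be sketched as a small classifier. The sigma bands and deviating-point percentages come from this topic; combining the two signals by taking the worse of the two risks, and the sensitivity names, are assumptions made for illustration, not Harness' actual scoring code.

```python
# Illustrative sketch of the risk levels described under "Algorithm
# Sensitivity and Failure Criteria". Taking the worse of the two risks and
# the sensitivity names are assumptions; this is not Harness' scoring code.

LEVELS = {"low": 0, "medium": 1, "high": 2}

def risk_from_sigma(sigma):
    if sigma >= 5:
        return "high"
    if sigma >= 4:
        return "medium"
    return "low"  # 3 sigma or below

def risk_from_deviating_points(fraction):
    if fraction >= 0.5:   # 50%+ of points deviated
        return "high"
    if fraction > 0.25:   # 25%-50%
        return "medium"
    return "low"          # 25% or below

def overall_risk(sigma, fraction_deviating):
    a = risk_from_sigma(sigma)
    b = risk_from_deviating_points(fraction_deviating)
    return a if LEVELS[a] >= LEVELS[b] else b

def fails_verification(risk, sensitivity="moderately_sensitive"):
    # "Very sensitive" also invokes the Failure Strategy on medium risk.
    if sensitivity == "very_sensitive":
        return risk in ("medium", "high")
    return risk == "high"
```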
+
+![](./static/cv-strategies-and-best-practices-26.png)
+
+You can tune each specific metric for each Harness Service to eliminate noise.
+
+The example above helps you refine the response time. This means that if the response time is less than the value entered in **Ignore if [95th Percentile Response Time (ms)] is [less]**, Harness will not mark it as a failure even if it is an anomaly.
+
+Let's say the response time was around 10ms and it went to 20ms. Harness' machine-learning engine will flag it as an anomaly because it jumped 100%. If you add a threshold configured to ignore response times less than 100ms, then Harness will not flag it.
+
+You can adjust the threshold for any metric analysis. The following example shows how you can adjust the min and max of host memory comparisons.
+
+![](./static/cv-strategies-and-best-practices-27.png)
+
+#### 3rd-Party API Call History
+
+You can view each API call and response between Harness and a verification provider by selecting **View 3rd Party API Calls** in the deployment's verification details. 
+
+![](./static/cv-strategies-and-best-practices-28.png)
+
+The **Request** section shows the API call made by Harness and the **Response** section shows what the verification provider returned:
+
+```
+ {"sdkResponseMetadata":{"requestId":"bd678748-f905-46bc-91e1-f17843f87ac2"},"sdkHttpMetadata":{"httpHeaders":{"Content-Length":"988","Content-Type":"text/xml","Date":"Wed, 07 Aug 2019 20:12:06 GMT","x-amzn-RequestId":"bd678748-f905-46bc-91e1-f17843f87ac2"},"httpStatusCode":200},"label":"MemoryUtilization","datapoints":[{"timestamp":1565208600000,"average":10.512906283101966,"unit":"Percent"},{"timestamp":1565208540000,"average":10.50788872163684,"unit":"Percent"},{"timestamp":1565208420000,"average":10.477005777531302,"unit":"Percent"},{"timestamp":1565208480000,"average":10.493672297485643,"unit":"Percent"}]}
+```
+
+The API response details allow you to drill down to see the specific datapoints and the criteria used for comparison. Failures can also be examined in the **Response** section:
+
+![](./static/cv-strategies-and-best-practices-29.png)
+
+Harness' machine-learning engine can process a maximum of 1,000 logs per minute.
+
+#### Event Distribution
+
+You can view the event distribution for each event by clicking the graph icon:
+
+![](./static/cv-strategies-and-best-practices-30.png)
+
+The Event Distribution will show you the measured and baseline data, allowing you to see why the comparison resulted in an anomaly.
+
+#### Pin as Baseline for Continuous Verification
+
+By default in a Previous Analysis strategy, Harness uses the Continuous Verification data of the last successful Workflow execution **with data** as the baseline for the current analysis. This is an automatic setting, but you can select a specific deployment as a new baseline.
+
+If you do not use Pin as Baseline, Harness uses default criteria. See [Baseline for Previous Analysis](#baseline_for_previous_analysis).
+
+Data is never deleted from a workflow if it is set as the baseline. 
Also, the baseline assignment can only be done through the UI, and it has no Git property reference.
+
+To set a specific deployment as the baseline, do the following:
+
+1. In Harness, click **Continuous Deployment**.
+2. Locate the deployment you want to use as the new baseline.
+3. Click the options button, and select **Pin as Baseline for Continuous Verification**. When you are prompted to confirm, click **Pin Baseline**.
+
+![](./static/cv-strategies-and-best-practices-31.png)
+
+If the deployment does not contain verification data, you will see the following error:
+
+`Either there is no workflow execution with verification steps or verification steps haven't been executed for the workflow.`
+
+Once the deployment is pinned as the baseline, you will see an icon on it and an option to unpin it:
+
+![](./static/cv-strategies-and-best-practices-32.png)
+
+### Analysis Support for Providers
+
+The following table lists which analysis strategies are supported for each Verification Provider.
+
+| **Provider** | **Previous** | **Canary** |
+| --- | --- | --- |
+| AppDynamics | Yes | Yes |
+| NewRelic | Yes | Yes |
+| DynaTrace | Yes | Yes |
+| Prometheus | Yes | Yes |
+| SplunkV2 | Yes | Yes |
+| ELK | Yes | Yes |
+| Sumo | Yes | Yes |
+| Datadog Metrics | Yes | Yes |
+| Datadog Logs | Yes | Yes |
+| CloudWatch | Yes | Yes |
+| Custom Metric Verification | Yes | Yes |
+| Custom Log Verification | Yes | Yes |
+| BugSnag | Yes | No |
+| Stackdriver Metrics | Yes | Yes |
+| Stackdriver Logs | Yes | Yes |
+
+### Deployment Type Support
+
+The following table lists which analysis strategies are supported in each deployment type. 
+
+| **Deployment Type** | **Analysis Supported** |
+| --- | --- |
+| Basic | Previous |
+| Canary | Canary |
+| BlueGreen | Previous |
+| Rolling | Previous |
+| Multi-service | No |
+| Build | No |
+| Custom | No |
+
diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/datadog-verification-overview.md b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/datadog-verification-overview.md
new file mode 100644
index 00000000000..64ca51fe685
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/datadog-verification-overview.md
@@ -0,0 +1,61 @@
+---
+title: Datadog Verification Overview
+description: Overview of Harness' Datadog integration.
+# sidebar_position: 2
+helpdocs_topic_id: ong5rbbn49
+helpdocs_category_id: zxxvl8vahz
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This topic describes how to set up Harness Continuous Verification features, and how to monitor your deployments and production applications on Datadog using Harness' unsupervised machine-learning functionality.
+
+In this topic:
+
+* [Visual Summary](#visual_summary)
+* [Integration Process Summary](#integration_process_summary)
+* [Next Steps](#next_steps)
+
+### Visual Summary
+
+Datadog delivers real-time and trending data about application performance by seamlessly aggregating metrics and events across the full DevOps stack. Datadog automatically collects logs from all your services, applications, and platforms.
+
+Harness Continuous Verification integrates with Datadog to verify your deployments and live production applications using the following Harness features:
+
+* **24/7 Service Guard** - Monitors your live, production applications.
+* **Deployment Verification** - Monitors your application deployments, and performs automatic rollback according to your criteria. 
+
+At this time, Datadog **Deployment Verification** is supported for Harness **Kubernetes** and **ECS Service** deployments only. To add deployment verification in Workflows for other Service types, use [Datadog as a Custom APM](../../custom-metrics-and-logs-verification/connect-to-datadog-as-a-custom-apm.md) and your Datadog monitoring. Datadog is fully supported for all Services in **24/7 Service Guard**.
+
+This document describes how to set up these Harness Continuous Verification features and monitor your deployments and production applications using its unsupervised machine-learning functionality.
+
+| **Analysis with Datadog** | **Harness Analysis** |
+| --- | --- |
+| ![](./static/datadog-left.png) | ![](./static/datadog-right.png) |
+
+### Integration Process Summary
+
+You set up Datadog and Harness in the following way:
+
+![](./static/datadog-verification-overview-08.png)
+
+1. **Datadog** - Monitor your application using Datadog. In this article, we assume that you are using Datadog to monitor your application already.
+2. **Verification Provider Setup** - In Harness, you connect Harness to your Datadog account, adding Datadog as a **Harness Verification Provider**.
+3. **Harness Application** - Create a Harness Application with a Service and an Environment. We do not cover Application setup in this sequence. See [Create an Application](../../../model-cd-pipeline/applications/application-configuration.md).
+4. **24/7 Service Guard Setup** - In the Environment, set up 24/7 Service Guard to monitor your live, production application.
+5. **Verify Deployments**:
+	1. Add a Workflow to your Harness Application and deploy your microservice or application to the Service Infrastructure/[Infrastructure Definition](../../../model-cd-pipeline/environments/environment-configuration.md#add-an-infrastructure-definition) in your Environment.
+	2. 
After you have run a successful deployment, you then add verification steps to the Workflow using your Verification Provider. + 3. Harness uses unsupervised machine-learning and Datadog analytics to analyze your future deployments, discovering events that might be causing your deployments to fail. Then you can use this information to set rollback criteria and improve your deployments. + +### Next Steps + +Read the following topics to build on what you've learned: + +* [Connect to Datadog](../../datadog-verification/1-datadog-connection-setup.md) +* [Monitor Applications 24/7 with Datadog Metrics](../../datadog-verification/monitor-applications-24-7-with-datadog-metrics.md) +* [Monitor Applications 24/7 with Datadog Logging](../../datadog-verification/2-24-7-service-guard-for-datadog.md) +* [Verify Deployments with Datadog Logging](../../datadog-verification/3-verify-deployments-with-datadog.md) +* [Verify Deployments with Datadog Metrics](../../datadog-verification/verify-deployments-with-datadog-metrics.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/deployment-verification-results.md b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/deployment-verification-results.md new file mode 100644 index 00000000000..5150cbb6c6e --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/deployment-verification-results.md @@ -0,0 +1,215 @@ +--- +title: Verification Results Overview +description: Learn how Harness provides verification feedback as analysis, data, and summaries. +sidebar_position: 60 +helpdocs_topic_id: 2la30ysdz7 +helpdocs_category_id: zxxvl8vahz +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness provides verification results as analysis, data, and summaries. 
These results help you understand why deployments succeeded or failed, and why live production applications experience regressions and anomalies. + +This topic describes the different ways Harness provides verification results. + +### General Details + +The following diagrams call out some of the main analysis elements. + +Here is an example of the badges and summaries provided in the header. + +![](./static/deployment-verification-results-38.png) + +Here is an example showing a few of the elements for log analysis. + +![](./static/deployment-verification-results-39.png) + +Here is an example showing a few of the elements for metric analysis: + +![](./static/deployment-verification-results-40.png) + +The following sections describe the different verification elements in more detail. + +### Verification Badges + +Verification badges provide a summary of the success and failure of the verification steps at the Pipeline and Workflow levels: + +![](./static/deployment-verification-results-41.png) + +#### Badges Reflect Verification Steps + +The badges do not indicate the success or failure of the Workflow itself, just the verification steps within the Workflow. If a Workflow in the Pipeline fails to reach its verification steps because it fails to deploy, it does not impact the badge color or appear in the verification summary. + +#### Badges Apply to Integrated Providers + +Currently, the badges only apply to verification steps using the providers integrated into Harness (AppDynamics, Splunk, etc). The badges do not reflect the status of other steps in the **Verify** section of a Workflow, such as smoke tests and HTTP health checks. + +#### Badge Colors + +The badge colors indicate the following: + +* **Green:** All verifications passed. If a verification step (AppDynamics, Datadog, etc) passes in Phase 1 of a Workflow, but the same verification is aborted in Phase 2 of a Workflow, a green badge is displayed because the verification step did pass. 
+* **Orange:** A combination of passes, failures, or aborted verifications.
+* **Red:** All verifications failed.
+* **Gray:** All verifications were aborted.
+* **No shield:** There are no verification steps in the Workflow.
+
+The green, red, and gray badges match the colors of the Pipeline/Workflow stages. Only orange is unique to the badges.
+
+### Execution Details
+
+Click each verification step to see its execution details, including when the verification started and ended, the duration of the analysis (a subset of that window, because Harness waits for data), the query used for logs, and the new and previous service instances/nodes used for metrics.
+
+![](./static/deployment-verification-results-42.png)
+
+### Execution Context
+
+You can select **View Execution Context** in the details options to see the expressions used in the Workflow.
+
+![](./static/deployment-verification-results-43.png)
+
+The execution context changes according to the verification step selected. For example, here is the execution context for a log analysis step and a metric analysis step:
+
+![](./static/deployment-verification-results-44.png)
+
+The log analysis shows a [built-in Harness variable](https://docs.harness.io/article/9dvxcegm90-variables) expression, `${host.hostname}`, used as part of the query, and the metric analysis shows the `${host.ec2Instance.instanceId}` expression that identifies the target AWS instance used for gathering metrics.
+
+### Feedback
+
+The feedback features are described in [Harness Verification Feedback Overview](harness-verification-feedback-overview.md).
+
+### Analysis
+
+Deployment verification analysis is displayed in **Continuous Deployment**. Real-time verification analysis is displayed in **Continuous Verification**.
+
+#### Analysis in Continuous Deployments
+
+The machine-learning analysis performed by Harness is displayed differently for metric and log analysis.
+
+##### Metric Time-Series Analysis
+
+Harness performs time-series analysis for Verification Providers that provide metrics, such as AppDynamics. Analysis is performed on each transaction, and the differential between new and previous metrics is displayed.
+
+In the following example, you can see analysis for four different transactions:
+
+![](./static/deployment-verification-results-45.png)
+
+The gray lines represent the previous successful deployment (baseline), and the colored lines (green, orange, red) represent the current deployment.
+
+* **Green:** Low Risk.
+* **Orange:** Medium Risk.
+* **Red:** High Risk.
+
+If there is little deviation, the lines are green. As the deviation increases, the lines go from orange to red, indicating the level of risk.
+
+Hover over each dot in the graph to see the hostname and the data point value:
+
+![](./static/deployment-verification-results-46.png)
+
+Each of the dots under the transaction names is a metric that was analyzed. Hover over a dot to see the metric name and hosts:
+
+![](./static/deployment-verification-results-47.png)
+
+##### Log Analysis
+
+Log analysis is performed by clustering similar log messages and using machine learning to compare them on each deployment.
+
+Harness displays the analysis as a similarity chart, with anomalies displayed both by their color (red) and their deviation from known events (blue).
+
+![](./static/deployment-verification-results-48.png)
+
+The similarity chart is a multidimensional scale projected onto a 2D space, where each letter in each baseline log is compared to every letter in the test logs.
+
+You can click and drag to zoom in on an area of the chart.
+
+![](./static/deployment-verification-results-49.png)
+
+Double-click to zoom out.
+
+Below the chart are the log clusters organized as anomalous or other events.
+
+![](./static/deployment-verification-results-50.png)
+
+You can assign priority to events and file Jira tickets for the events.
For more information, see [Refine Deployment Verification Analysis](../../tuning-tracking-verification/refine-deployment-verification-analysis.md) and [File Jira Tickets on Verification Events](../../tuning-tracking-verification/jira-cv-ticket.md). + +You can view the event distribution chart for each log cluster by clicking the **Event Distribution Chart** button. + +![](./static/deployment-verification-results-51.png) + +The Event Distribution for the log cluster appears, displaying the event count per minute and frequency. + +![](./static/deployment-verification-results-52.png) + +### Execution Logs + +Both metric and log analysis include **Execution Logs**. + +![](./static/deployment-verification-results-53.png) + +**Metric analysis:** For metric analysis, the **Execution Logs** shows the data collection and time series analysis for each minute of the duration you specified when you set up the **Verify** step in your Workflow. + +![](./static/deployment-verification-results-54.png) + +**Log analysis:** For log analysis, the **Execution Logs** shows the log analysis for the duration you specified when you set up the **Verify** step in your Workflow. + +![](./static/deployment-verification-results-55.png) + +### API Logs + +Both metric and log analysis include **API Logs**. + +![](./static/deployment-verification-results-56.png) + +The API request and response for each metric or log query is displayed. 
+
+The **Response Body** can be copied and pasted into another program for debugging purposes:
+
+```
+{
+  "status": "success",
+  "data": {
+    "resultType": "matrix",
+    "result": [
+      {
+        "metric": {
+          "__name__": "container_cpu_usage_seconds_total",
+          "instance": "gke-qa-target-default-pool-ca25b5a9-zwph",
+          "job": "kubernetes-cadvisor"
+        },
+        "values": [
+          [1575509040, "80.471532291"],
+          [1575509100, "83.54884438"]
+        ]
+      }
+    ]
+  }
+}
+```
+
+### Analysis in 24/7 Service Guard
+
+24/7 Service Guard uses a heatmap to summarize the analysis of live production services, and displays deployment verification below the heatmap as colored dots.
+
+![](./static/deployment-verification-results-57.png)
+
+The colored dots are links to the Deployment Verification.
+
+#### Metric Analysis
+
+24/7 Service Guard metric analysis uses the same type of graph as deployment verification. Clicking any square on the heatmap displays the chart.
+
+![](./static/deployment-verification-results-58.png)
+
+For some tools, like AppDynamics, you can click a link next to each transaction/metric pairing that opens the transaction data in the third-party tool. Here the link is for the `/todolist/exeception` transaction and **Average Response Time** metric.
+
+![](./static/deployment-verification-results-59.png)
+
+There is also an option to customize the threshold for the metric. Here is the **Customize Threshold** setting for **Average Response Time**:
+
+![](./static/deployment-verification-results-60.png)
+
+#### Execution Logs
+
+In Harness [24/7 Service Guard](24-7-service-guard-overview.md), Execution Logs is displayed when you click **Execution Logs**:
+
+![](./static/deployment-verification-results-61.png)
+
+#### Logs Analysis
+
+Logs analysis in 24/7 Service Guard uses the same features as [deployment verification logs analysis](deployment-verification-results.md#log-analysis).
+ +![](./static/deployment-verification-results-62.png) + +You can prioritize events, file Jira tickets, and view the Event Distribution, just as in deployment verification. + +### Next Steps + +* [24/7 Service Guard Overview](24-7-service-guard-overview.md) +* [CV Strategies, Tuning, and Best Practices](cv-strategies-and-best-practices.md) +* [File Jira Tickets on Verification Events](../../tuning-tracking-verification/jira-cv-ticket.md) +* [Harness Verification Feedback Overview](harness-verification-feedback-overview.md) +* [Refine 24/7 Service Guard Verification Analysis](../../tuning-tracking-verification/refine-24-7-service-guard-verification-analysis.md) +* [Refine Deployment Verification Analysis](../../tuning-tracking-verification/refine-deployment-verification-analysis.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/dynatrace-verification-overview.md b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/dynatrace-verification-overview.md new file mode 100644 index 00000000000..6bc9778a4e9 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/dynatrace-verification-overview.md @@ -0,0 +1,51 @@ +--- +title: Dynatrace Verification Overview +description: Overview of Harness' Dynatrace integration. +# sidebar_position: 2 +helpdocs_topic_id: r3xtgg0e2k +helpdocs_category_id: zxxvl8vahz +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic provides an overview of how to set up Harness Continuous Verification features, and monitor your deployments and production applications using its unsupervised machine-learning functionality, on Dynatrace. 
+
+In this topic:
+
+* [Visual Summary](#visual_summary)
+* [Integration Process Summary](#integration_process_summary)
+* [Next Steps](#next_steps)
+
+### Visual Summary
+
+Dynatrace provides constant monitoring of your application to manage performance and availability, diagnose performance problems, and allow optimization across your stack. You can add a Dynatrace verification step to your Workflow, and Harness will use Dynatrace to verify the performance and quality of your deployments.
+
+With its Dynatrace integration, Harness can deploy and verify the performance of artifacts instantly in every environment. When a new artifact is deployed, Harness automatically connects to Dynatrace and starts analyzing the application/service performance data to understand the real business impact of each deployment.
+
+Harness applies unsupervised machine learning (Hidden Markov models and Symbolic Aggregate Representation) to understand whether performance deviated for key business transactions, and flags performance regressions accordingly.
+
+| | |
+| --- | --- |
+| **Analysis with Dynatrace** | **Harness Analysis** |
+| ![](./static/dynatrace-left.png) | ![](./static/dynatrace-right.png) |
+
+### Integration Process Summary
+
+You set up Dynatrace and Harness in the following way:
+
+![](./static/dynatrace-verification-overview-37.png)
+
+1. Using Dynatrace, you monitor your microservice or application.
+2. In Harness, you connect to the Dynatrace API, adding Dynatrace as a [Harness Verification Provider](https://docs.harness.io/article/myw4h9u05l-verification-providers-list).
+3. After you have built and run a successful deployment of your microservice or application in Harness, you then add Dynatrace verification steps to your Harness deployment Workflow.
+4. Harness uses Dynatrace to verify your future microservice/application deployments.
+5. 
Harness Continuous Verification uses unsupervised machine-learning to analyze your deployments and Dynatrace analytics/logs, discovering events that might be causing your deployments to fail. Then you can use this information to improve your deployments. + +### Next Steps + +* [Connect to Dynatrace](../../dynatrace-verification/1-dynatrace-connection-setup.md) +* [Monitor Applications 24/7 with Dynatrace](../../dynatrace-verification/2-24-7-service-guard-for-dynatrace.md) +* [Verify Deployments with Dynatrace](../../dynatrace-verification/3-verify-deployments-with-dynatrace.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/elasticsearch-verification-overview.md b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/elasticsearch-verification-overview.md new file mode 100644 index 00000000000..e759a4a762b --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/elasticsearch-verification-overview.md @@ -0,0 +1,56 @@ +--- +title: Elasticsearch Verification Overview +description: Overview of Harness' integration with the Elastic Stack (ELK Stack) for log monitoring. +# sidebar_position: 2 +helpdocs_topic_id: qdajtgsqfj +helpdocs_category_id: zxxvl8vahz +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes how to set up Harness Continuous Verification features, and monitor your deployments and production applications using Harness' unsupervised machine-learning functionality, on the Elastic Stack. 
+
+Harness supports Elastic versions 5.0 to 7.x.
+
+In this topic:
+
+* [Visual Summary](#visual_summary)
+* [Integration Process Summary](#integration_process_summary)
+* [Next Steps](#next_steps)
+
+### Visual Summary
+
+Harness Continuous Verification integrates with ELK to verify your deployments and live production applications using the following Harness features:
+
+* **24/7 Service Guard** - Monitors your live, production applications.
+* **Deployment Verification** - Monitors your application deployments, and performs automatic rollback according to your criteria.
+
+This topic describes how to set up these Harness Continuous Verification features, and how to monitor your deployments and production applications using its unsupervised machine-learning functionality.
+
+| | |
+| --- | --- |
+| **Exceptions with Elasticsearch via Kibana** | **Harness Analysis of Elasticsearch Verification** |
+| ![](./static/elastic-left.png) | ![](./static/elastic-right.png) |
+
+### Integration Process Summary
+
+You set up ELK and Harness in the following way:
+
+![](./static/elasticsearch-verification-overview-10.png)
+
+1. **ELK** - Monitor your application using ELK. In this article, we assume that you are using ELK to monitor your application already.
+2. **Verification Provider Setup** - In Harness, you connect Harness to your ELK account, adding ELK as a **Harness Verification Provider**.
+3. **Harness Application** - Create a Harness Application with a Service and an Environment. We do not cover Application setup in this sequence. See [Application Components](../../../model-cd-pipeline/applications/application-configuration.md).
+4. **24/7 Service Guard Setup** - In the Environment, set up 24/7 Service Guard to monitor your live, production application.
+5. **Verify Deployments**:
+	1. 
Add a Workflow to your Harness Application and deploy your microservice or application to the service infrastructure/[Infrastructure Definition](../../../model-cd-pipeline/environments/environment-configuration.md#add-an-infrastructure-definition) in your Environment. + 2. After you have run a successful deployment, you then add verification steps to the Workflow using your Verification Provider. + 3. Harness uses unsupervised machine-learning and Elasticsearch analytics to analyze your future deployments, discovering events that might be causing your deployments to fail. Then you can use this information to set rollback criteria and improve your deployments. + +### Next Steps + +* [Connect to Elasticsearch (ELK)](../../elk-elasticsearch-verification/1-elasticsearch-connection-setup.md) +* [Monitor Applications 24/7 with Elasticsearch](../../elk-elasticsearch-verification/2-24-7-service-guard-for-elasticsearch.md) +* [Verify Deployments with Elasticsearch](../../elk-elasticsearch-verification/3-verify-deployments-with-elasticsearch.md) +* [Troubleshoot Verification with Elasticsearch](../../elk-elasticsearch-verification/4-troubleshooting-elasticsearch.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/harness-verification-feedback-overview.md b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/harness-verification-feedback-overview.md new file mode 100644 index 00000000000..bba1a231394 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/harness-verification-feedback-overview.md @@ -0,0 +1,48 @@ +--- +title: Harness Verification Feedback Overview +description: A short review of the verification feedback you can apply in Harness Manager. 
+sidebar_position: 70 +helpdocs_topic_id: q1m740uwca +helpdocs_category_id: zxxvl8vahz +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness provides verification analysis for deployments and live production services. For each of the verification events, you can perform various operations to improve analysis and reactions to events. + +For details on refining analysis feedback for deployments and 24/7 Service Guard, see: + +* [Refine 24/7 Service Guard Verification Analysis](../../tuning-tracking-verification/refine-24-7-service-guard-verification-analysis.md) +* [Refine Deployment Verification Analysis](../../tuning-tracking-verification/refine-deployment-verification-analysis.md) +* [Verification Event Classifications](https://docs.harness.io/article/339hy0kbnu-verification-event-classifications) + +### Verification Feedback Summary + +Verification Feedback is at the Harness Service and Environment level. For example, Continuous Verification Feedback for Service A in a **QA** Environment is different from feedback for Service A in a **Prod** or **Dev** Environment. + +Here are the Service and Environment listed in the **Services** and **Environment** headings in the **Deployments** page: + +![](./static/harness-verification-feedback-overview-18.png) + +For each event, Harness provides an event classification that you can change.  + +![](./static/harness-verification-feedback-overview-19.png) + +### Options for Refining Verification Feedback + +You can refine the verification analysis Harness performs on your application's logging data by providing feedback that clarifies verification events. For example, Harness might flag an event as a **Not a Risk** event, but you might like to increase the severity to a **P1**. + +![](./static/harness-verification-feedback-overview-20.png) + +You can update the priority level for an event in a Workflow deployment or in 24/7 Service Guard, and it is applied to events for the Service. 
It is not specific to that Workflow.  + +The feedback you provide for a Service in either the Deployments or Continuous Verification page is automatically visible in both pages.   + +Verification Feedback is available for log analysis only. It is not available for metrics. To refine your metrics, change the settings in your Workflow verification step or 24/7 Service Guard configuration. + +### Next Steps + +* [Refine 24/7 Service Guard Verification Analysis](../../tuning-tracking-verification/refine-24-7-service-guard-verification-analysis.md) +* [Refine Deployment Verification Analysis](../../tuning-tracking-verification/refine-deployment-verification-analysis.md) +* [Verification Event Classifications](https://docs.harness.io/article/339hy0kbnu-verification-event-classifications) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/how-cv.md b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/how-cv.md new file mode 100644 index 00000000000..a2d6ffe9f6d --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/how-cv.md @@ -0,0 +1,62 @@ +--- +title: How Does Harness Perform Continuous Verification? +description: Harness Continuous Verification can apply two analysis strategies to your metrics and logs -- Previous and Canary Analysis. +sidebar_position: 40 +helpdocs_topic_id: 6r02s541an +helpdocs_category_id: zxxvl8vahz +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic covers the analysis strategies that Harness Continuous Verification can apply to your metrics and logs: Previous and Canary Analysis. + + + +### Overview + +As outlined in [What Is Continuous Verification (CV)?](what-is-cv.md), Harness Continuous Verification can consume and analyze performance metrics and/or log data from your choice of providers. 
This topic covers your choice of analysis strategies: Previous Analysis and Canary Analysis. + +![](./static/how-cv-03.png) + + +### Types of Analysis Strategies + +Harness uses two types of analysis strategies, each with a different combination of load (datasets) and granularity: + + + +| | | | +| --- | --- | --- | +| **Analysis Strategy** | **Load** | **Granularity** | +| Previous | Synthetic | Container level | +| Canary | Real user traffic | Container level | + +Each strategy is defined below. + + +### Previous Analysis + +In Previous Analysis, Harness compares the metrics received for the nodes deployed in each Workflow Phase with metrics received for all the nodes during the previous deployment. Remember that verification steps are used only after you have deployed successfully at least once: In order to verify deployments and find anomalies, Harness needs data from previous deployments. + +For example, if Phase 1 deploys app version 1.2 to node A, the metrics received from the APM during this deployment are compared to the metrics for nodes A, B, and C (all the nodes) during the previous deployment (version 1.1). Previous Analysis is best used when you have predictable load, such as in a QA environment. + +For Previous Analysis to be effective, the load on the application should be the same across deployments. For example, provide a (synthetic) test load using [Apache JMeter](https://jmeter.apache.org/). If the load varies between deployments, then Previous Analysis is not effective. +### Canary Analysis + +For Canary Analysis, Harness compares the metrics received for all old app version nodes with the metrics for the new app version nodes. The nodes deployed in each Workflow Phase are compared with metrics received for all of the existing nodes hosting the application. 
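As a simplified illustration of this comparison (this is not Harness' actual machine-learning algorithm; the node names, metric values, and risk threshold below are hypothetical), a canary check amounts to asking how far the new nodes' metrics deviate from the baseline formed by the old nodes:

```python
# Illustrative sketch of a canary comparison: metrics from the node running the
# new app version are compared against metrics from nodes still running the old
# version. Node names, sample values, and the 3-sigma threshold are hypothetical.
from statistics import mean, stdev

# Average response times (ms) sampled over the analysis window.
old_nodes = {"node-b": [102, 98, 105, 101], "node-c": [99, 103, 100, 104]}
new_nodes = {"node-a": [140, 151, 138, 149]}  # canary running the new version

# Pool the old-version samples into a single baseline distribution.
baseline = [v for samples in old_nodes.values() for v in samples]
mu, sigma = mean(baseline), stdev(baseline)

for node, samples in new_nodes.items():
    deviation = (mean(samples) - mu) / sigma  # z-score of the canary's mean
    risk = "high" if abs(deviation) > 3 else "low"
    print(f"{node}: deviation={deviation:.1f} sigma -> {risk} risk")
```

In practice, Harness runs this kind of comparison per transaction and per metric over the Analysis Time duration, rather than on a single pooled value.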
+ +In the following example, a Prometheus verification step is using Canary Analysis to compare a new node with two previous nodes: + +![](./static/how-cv-04.png) + +For example, if Phase 1 deploys to 25% of your nodes, the metrics received for the new app versions on these nodes are compared with metrics received for the old app versions on these nodes. + +The metrics are taken for the period of time defined in **Analysis Time duration**. + +Harness supports Canary Analysis only on [Canary deployments](https://docs.harness.io/article/325x7awntc-deployment-concepts-and-strategies#canary_deployment). +### Next Up + +Next, see our integrations: + +* [Who Are Harness' Verification Providers?](cv-providers.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/instana-verification-overview.md b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/instana-verification-overview.md new file mode 100644 index 00000000000..0daad9e4519 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/instana-verification-overview.md @@ -0,0 +1,60 @@ +--- +title: Instana Verification Overview +description: This guide describes how to set up Harness Continuous Verification features on Instana, and how to use Instana to monitor your deployments and production applications using Harness' unsupervised mach… +# sidebar_position: 2 +helpdocs_topic_id: s9qjvicmod +helpdocs_category_id: zxxvl8vahz +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This guide describes how to set up Harness Continuous Verification features on Instana, and how to use Instana to monitor your deployments and production applications using Harness' unsupervised machine-learning functionality. + +Walk through this guide in the following order: + +1. 
[Instana Connection Setup](../../instana-verification/instana-connection-setup.md) +2. [24/7 Service Guard for Instana](../../instana-verification/instana-service-guard.md) +3. [Verify Deployments with Instana](../../instana-verification/instana-verify-deployments.md) + +### Integration Overview + +Instana provides continuous, full-stack performance observability of all your server and application components, collecting metrics with 1-second data granularity. Instana automatically collects logs from all your services, applications, and platforms. + +Harness Continuous Verification integrates with Instana to verify your deployments and live production applications using the following Harness features: + +* **24/7 Service Guard** - Monitors your live, production applications. +* **Deployment Verification** - Monitors your application deployments, and performs automatic rollback according to your criteria. + + + +| | | +| --- | --- | +| **Instana 24/7 Service Guard** | **Instana Workflow Verification** | +| ![](./static/instana-left.png) | ![](./static/instana-right.png) | + +This guide describes how to set up and use these Harness Continuous Verification monitoring features. + + + +| | | +| --- | --- | +| **Analysis with Instana** | **Harness Analysis** | +| ![](./static/instana2-left.png) | ![](./static/instana2-right.png) | + +### Setup Preview + +You set up Instana and Harness in the following way: + +1. **Instana** – Monitor your application using Instana. In this article, we assume that you are using Instana to monitor your application already. +2. **​Verification Provider Setup** – In Harness, you connect Harness to your Instana account, adding Instana as a **Harness Verification Provider**. +3. **Harness Application** – Create a Harness Application with a Service and an Environment. We do not cover Application setup in this sequence. See [Application Components](../../../model-cd-pipeline/applications/application-configuration.md). +4. 
**​24/7 Service Guard Setup** – In the Environment, set up 24/7 Service Guard to monitor your live, production application. +5. ​**Verify Deployments**: + 1. Add a Workflow to your Harness Application and deploy your microservice or application to the [Infrastructure Definition](../../../model-cd-pipeline/environments/environment-configuration.md#add-an-infrastructure-definition) in your Environment. + 2. After you have run a successful deployment, you next add verification steps to the Workflow using your Verification Provider. + 3. Harness combines semi-supervised machine-learning with Instana metrics to analyze your future deployments—discovering events that might be causing your deployments to fail. Then you can use this information to set rollback criteria and improve your deployments. + +### Next Step + +* [1 – Instana Connection Setup](../../instana-verification/instana-connection-setup.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/new-relic-verification-overview.md b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/new-relic-verification-overview.md new file mode 100644 index 00000000000..eaa99820d7e --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/new-relic-verification-overview.md @@ -0,0 +1,54 @@ +--- +title: New Relic Verification Overview +description: Overview of Harness' New Relic integration. +# sidebar_position: 2 +helpdocs_topic_id: ht3amzjvle +helpdocs_category_id: zxxvl8vahz +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes how to set up Harness' Continuous Verification features, and monitor your deployments and production applications using its unsupervised machine-learning functionality, on New Relic. + + +### Visual Summary + +New Relic delivers real-time and trending data about application performance. 
New Relic can determine whether a performance blocker comes from the app itself, CPU availability, database loads, or another source.
+
+Harness Continuous Verification integrates with New Relic to verify your deployments and live production applications using the following Harness features:
+
+* **24/7 Service Guard** - Monitors your live, production applications.
+* **Deployment Verification** - Monitors your application deployments, and performs automatic rollback according to your criteria.
+
+This document describes how to set up these Harness Continuous Verification features and monitor your deployments and production applications using its unsupervised machine-learning functionality.
+
+Verification analysis is limited to **Web Transactions** only. In **New Relic**, in your application, click **Transactions**, and in **Type**, click **Web**.
+
+| | |
+| --- | --- |
+| **Web Transactions in New Relic** | **Web Transactions analyzed in Harness** |
+| ![](./static/newrelic-left.png) | ![](./static/newrelic-right.png) |
+
+### Integration Process Summary
+
+You set up New Relic and Harness in the following way:
+
+![](./static/new-relic-verification-overview-35.png)
+
+1. **New Relic** - Monitor your application using New Relic. In this article, we assume that you are using New Relic to monitor your application already.
+2. **Verification Provider Setup** - In Harness, you connect Harness to your New Relic account, adding New Relic as a **Harness Verification Provider**.
+3. **Harness Application** - Create a Harness Application with a Service and an Environment. We do not cover Application setup in this sequence. See [Application Checklist](../../../model-cd-pipeline/applications/application-configuration.md).
+4. **24/7 Service Guard Setup** - In the Environment, set up 24/7 Service Guard to monitor your live, production application.
+5. **Verify Deployments**:
+	1. 
Add a Workflow to your Harness Application and deploy your microservice or application to the service infrastructure/[Infrastructure Definition](../../../model-cd-pipeline/environments/environment-configuration.md#add-an-infrastructure-definition) in your Environment. + 2. After you have run a successful deployment, you then add verification steps to the Workflow using your Verification Provider. + 3. Harness uses unsupervised machine-learning and New Relic analytics to analyze your future deployments, discovering events that might be causing your deployments to fail. Then you can use this information to set rollback criteria and improve your deployments. + +### Next Steps + +* [Connect to New Relic](../../new-relic-verification/1-new-relic-connection-setup.md) +* [Monitor Applications 24/7 with New Relic](../../new-relic-verification/2-24-7-service-guard-for-new-relic.md) +* [New Relic Deployment Marker](../../new-relic-verification/3-new-relic-deployment-marker.md) +* [Verify Deployments with New Relic](../../new-relic-verification/4-verify-deployments-with-new-relic.md) +* [Troubleshoot New Relic](../../new-relic-verification/5-troubleshooting-new-relic.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/prometheus-verification-overview.md b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/prometheus-verification-overview.md new file mode 100644 index 00000000000..f5d64b3d3e7 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/prometheus-verification-overview.md @@ -0,0 +1,47 @@ +--- +title: Prometheus Verification Overview +description: Overview of Harness' Prometheus Integration. 
+# sidebar_position: 2
+helpdocs_topic_id: 5uh79dplbj
+helpdocs_category_id: zxxvl8vahz
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This topic describes how to set up Harness' Continuous Verification features, and monitor your deployments and production applications using its unsupervised machine-learning functionality, on Prometheus.
+
+### Visual Summary
+
+Prometheus uses a multi-dimensional data model with time-series data and key/value pairs, along with a flexible query language to leverage this dimensionality. Prometheus records any numeric time series, such as machine-centric monitoring and the monitoring of highly dynamic service-oriented architectures. For microservices, Prometheus support for multi-dimensional data collection and querying is very useful.
+
+Prometheus integrates with Harness to verify the performance of microservices instantly in every environment.
+
+When you use Prometheus with Harness 24/7 Service Guard, or when you deploy a new microservice via Harness, Harness automatically connects to Prometheus and starts analyzing the multi-dimensional data model to understand what exceptions and errors are new or might cause problems for your microservice performance and quality.
+
+Here is an example of a deployment Pipeline Stage verified using Prometheus.
+
+![](./static/prometheus-verification-overview-74.png)
+
+### Integration Process Summary
+
+You set up Prometheus and Harness in the following way:
+
+![](./static/prometheus-verification-overview-75.png)
+
+1. **Prometheus** - Monitor your application using Prometheus. In this article, we assume that you are using Prometheus to monitor your application already.
+2. **Verification Provider Setup** - In Harness, you connect Harness to your Prometheus account, adding Prometheus as a **Harness Verification Provider**.
+3. **Harness Application** - Create a Harness Application with a Service and an Environment. We do not cover Application setup in this sequence. 
See [Application Checklist](../../../model-cd-pipeline/applications/application-configuration.md).
+4. **24/7 Service Guard Setup** - In the Environment, set up 24/7 Service Guard to monitor your live, production application.
+5. **Verify Deployments**:
	1. Add a Workflow to your Harness Application and deploy your microservice or application to the service infrastructure/[Infrastructure Definition](../../../model-cd-pipeline/environments/environment-configuration.md#add-an-infrastructure-definition) in your Environment.
	2. After you have run a successful deployment, you then add verification steps to the Workflow using your Verification Provider.
	3. Harness uses unsupervised machine-learning and Prometheus analytics to analyze your future deployments, discovering events that might be causing your deployments to fail. Then you can use this information to set rollback criteria and improve your deployments.
+
+### Next Steps
+
+Read the following topics to build on what you've learned:
+
+* [Connect to Prometheus](../../prometheus-verification/1-prometheus-connection-setup.md)
+* [Monitor Applications 24/7 with Prometheus](../../prometheus-verification/2-24-7-service-guard-for-prometheus.md)
+* [Verify Deployments with Prometheus](../../prometheus-verification/3-verify-deployments-with-prometheus.md)
+
diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/splunk-verification-overview.md b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/splunk-verification-overview.md
new file mode 100644
index 00000000000..8d53df61c36
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/splunk-verification-overview.md
@@ -0,0 +1,59 @@
+---
+title: Splunk Verification Overview
+description: Overview of Harness' Splunk integration.
+
# sidebar_position: 2
helpdocs_topic_id: dujtd6ek5p
helpdocs_category_id: zxxvl8vahz
helpdocs_is_private: false
helpdocs_is_published: true
---
+
+This topic describes how to set up Harness Continuous Verification features with Splunk, and how to monitor your deployments and production applications using Harness' unsupervised machine-learning functionality.
+
+In this topic:
+
+* [Visual Summary](#visual_summary)
+* [Integration Process Summary](#integration_process_summary)
+* [Next Steps](#next_steps)
+
+### Visual Summary
+
+Splunk Enterprise enables you to search, analyze, and visualize data gathered from your microservices, websites, and apps. After you define the data source, Splunk Enterprise indexes the data stream and parses it into a series of individual events that you can view and search. Splunk provides a REST API with over 200 endpoints. Developers can programmatically index, search, and visualize data in Splunk from any app.
+
+Harness Continuous Verification integrates with Splunk to verify your deployments and live production applications using the following Harness features:
+
+* **24/7 Service Guard** - Monitors your live, production applications.
+* **Deployment Verification** - Monitors your application deployments, and performs automatic rollback according to your criteria.
+
+For example, once you have integrated Splunk with your microservice or app, you can add a Splunk verification step to your Harness Workflows. Harness will then use Splunk to verify the performance and quality of your deployments and apply Harness machine-learning verification analysis to Splunk data.
+
+
+| | |
+| --- | --- |
+| **Verification with Splunk Enterprise** | **Harness Analysis** |
+| ![](./static/splunk-left.png) | ![](./static/splunk-right.png) |
+
+### Integration Process Summary
+
+You set up Splunk and Harness in the following way:
+
+![](./static/splunk-verification-overview-11.png)
+
+1. **Splunk** - Monitor your application using Splunk. In this article, we assume that you are using Splunk to monitor your application already.
+2. **Verification Provider Setup** - In Harness, you connect Harness to your Splunk account, adding Splunk as a **Harness Verification Provider**.
+3. **Harness Application** - Create a Harness Application with a Service and an Environment. We do not cover Application setup in this sequence. See [Create an Application](../../../model-cd-pipeline/applications/application-configuration.md).
+4. **24/7 Service Guard Setup** - In the Environment, set up 24/7 Service Guard to monitor your live, production application.
+5. **Verify Deployments**:
	1. Add a Workflow to your Harness Application and deploy your microservice or application to the service infrastructure/[Infrastructure Definition](../../../model-cd-pipeline/environments/environment-configuration.md#add-an-infrastructure-definition) in your Environment.
	2. After you have run a successful deployment, you then add verification steps to the Workflow using your Verification Provider.
	3. Harness uses unsupervised machine-learning and Splunk analytics to analyze your future deployments, discovering events that might be causing your deployments to fail. Then you can use this information to set rollback criteria and improve your deployments.
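+
+As a sketch of the kind of search involved, a Splunk query that surfaces exception events for the hosts in a deployment might look like the following. The index, sourcetype, and host placeholder are illustrative only; use the fields your Splunk deployment actually indexes:
+
+```
+search index=app_logs sourcetype=service_logs host=<deployment-host> *exception* | head 500
+```
+
+Harness runs the search you configure, scoped to the hosts in each deployment, and feeds the returned events to its machine-learning analysis.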
+ +### Next Steps + +* [Connect to Splunk](../../splunk-verification/1-splunk-connection-setup.md) +* [Monitor Applications 24/7 with Splunk](../../splunk-verification/2-24-7-service-guard-for-splunk.md) +* [Verify Deployments with Splunk](../../splunk-verification/3-verify-deployments-with-splunk.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/stackdriver-and-harness-overview.md b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/stackdriver-and-harness-overview.md new file mode 100644 index 00000000000..10b6d0e5fe7 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/stackdriver-and-harness-overview.md @@ -0,0 +1,53 @@ +--- +title: Google Operations (formerly Stackdriver) Overview +description: Overview of Harness' Stackdriver integration. +# sidebar_position: 2 +helpdocs_topic_id: jn0axefdat +helpdocs_category_id: zxxvl8vahz +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Google Operations (formerly Stackdriver) aggregates metrics, logs, and events from infrastructure, giving developers and operators a rich set of observable signals that speed root-cause analysis and reduce mean time to resolution (MTTR). + + + +### Visual Summary + +Harness Continuous Verification integrates with Google Operations to verify your deployments and live production applications using the following Harness features: + +* **24/7 Service Guard** – Monitors your live, production applications. +* **Deployment Verification** – Monitors your application deployments, and performs automatic rollback according to your criteria. 
+
+| | |
+| --- | --- |
+| **Verification with Stackdriver** | **Harness Analysis** |
+| ![](./static/stackdriver-left.png) | ![](./static/stackdriver-right.png) |
+
+You can read more about Harness and Google Operations integration on the [Harness Blog](http://www.harness.io/blog/stackdriver-automated-canary-deployments).
+
+### Integration Process Summary
+
+You set up Stackdriver and Harness in the following way:
+
+![](./static/stackdriver-and-harness-overview-71.png)
+
+1. **Google Operations** – Monitor your application using Stackdriver. In this article, we assume that you are using Stackdriver to monitor your application already.
+2. **Cloud Provider Setup** – In Harness, you connect Harness to your Google account, adding Google Cloud Platform as a **Harness Cloud Provider**. For more information, see [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers).
+3. **Harness Application** – Create a Harness Application with a Service, Environment, and Workflow. We do not cover Application setup in this sequence. See [Application Components](../../../model-cd-pipeline/applications/application-configuration.md).
+4. **24/7 Service Guard Setup** – In the Environment, set up 24/7 Service Guard to monitor your live, production application.
+5. **Verify Deployments**:
	1. Add a Workflow to your Harness Application and deploy your microservice or application to the service infrastructure/[Infrastructure Definition](../../../model-cd-pipeline/environments/environment-configuration.md#add-an-infrastructure-definition) in your Environment.
	2. After you have run a successful deployment, you then add verification steps to the Workflow using your Verification Provider.
	3. Harness uses unsupervised machine-learning and Stackdriver monitoring to analyze your future deployments, discovering events that might be causing your deployments to fail. Then you can use this information to set rollback criteria and improve your deployments.
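+
+To illustrate the kind of metric data involved, a Google Cloud Monitoring (Stackdriver) filter for a container metric might look like the following. The metric and resource types are standard Google Cloud Monitoring identifiers; the cluster name is a placeholder for illustration:
+
+```
+metric.type="kubernetes.io/container/memory/used_bytes"
+resource.type="k8s_container"
+resource.labels.cluster_name="<your-cluster>"
+```
+
+When you configure Stackdriver verification in Harness, you identify metrics such as these, and Harness compares their time series before and after the deployment to detect anomalies.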
+ + +### Next Steps + +* [Connect to Stackdriver](../../stackdriver-verification/stackdriver-connection-setup.md) +* [Verify Deployments with Stackdriver Logging](../../stackdriver-verification/3-verify-deployments-with-stackdriver.md) +* [Verify Deployments with Stackdriver Metrics](../../stackdriver-verification/verify-deployments-with-stackdriver-metrics.md) +* [Monitor Applications 24/7 with Stackdriver Logging](../../stackdriver-verification/2-24-7-service-guard-for-stackdriver.md) +* [Monitor Applications 24/7 with Stackdriver Metrics](../../stackdriver-verification/monitor-applications-24-7-with-stackdriver-metrics.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/24-7-service-guard-overview-64.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/24-7-service-guard-overview-64.png new file mode 100644 index 00000000000..a8dbc4d2c62 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/24-7-service-guard-overview-64.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/24-7-service-guard-overview-65.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/24-7-service-guard-overview-65.png new file mode 100644 index 00000000000..89ddc6bba94 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/24-7-service-guard-overview-65.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/24-7-service-guard-overview-66.png 
b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/24-7-service-guard-overview-66.png new file mode 100644 index 00000000000..fda33c3129a Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/24-7-service-guard-overview-66.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/24-7-service-guard-overview-67.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/24-7-service-guard-overview-67.png new file mode 100644 index 00000000000..dac03cfae31 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/24-7-service-guard-overview-67.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/24-7-service-guard-overview-68.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/24-7-service-guard-overview-68.png new file mode 100644 index 00000000000..abe065b5b8b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/24-7-service-guard-overview-68.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/24-7-service-guard-overview-69.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/24-7-service-guard-overview-69.png new file mode 100644 index 00000000000..558b2bbe77e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/24-7-service-guard-overview-69.png 
differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/24-7-service-guard-overview-70.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/24-7-service-guard-overview-70.png new file mode 100644 index 00000000000..bf5289eda64 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/24-7-service-guard-overview-70.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/appd-harness-verification-and-impact-analysis.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/appd-harness-verification-and-impact-analysis.png new file mode 100644 index 00000000000..6a3a36dc8f4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/appd-harness-verification-and-impact-analysis.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/appd-microservices-environment.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/appd-microservices-environment.png new file mode 100644 index 00000000000..9f058a5e53c Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/appd-microservices-environment.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/bugsnag-left.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/bugsnag-left.png new file mode 100644 index 00000000000..66b5ef05ea1 
Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/bugsnag-left.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/bugsnag-right.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/bugsnag-right.png new file mode 100644 index 00000000000..d1d75f369c3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/bugsnag-right.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/bugsnag-verification-overview-05.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/bugsnag-verification-overview-05.png new file mode 100644 index 00000000000..612d0b22714 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/bugsnag-verification-overview-05.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/bugsnag-verification-overview-06.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/bugsnag-verification-overview-06.png new file mode 100644 index 00000000000..5838deccabf Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/bugsnag-verification-overview-06.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/bugsnag-verification-overview-07.png 
b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/bugsnag-verification-overview-07.png new file mode 100644 index 00000000000..4b8d2636841 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/bugsnag-verification-overview-07.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cloud-watch-verification-overview-33.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cloud-watch-verification-overview-33.png new file mode 100644 index 00000000000..b9b71e8e65e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cloud-watch-verification-overview-33.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cloud-watch-verification-overview-34.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cloud-watch-verification-overview-34.png new file mode 100644 index 00000000000..75dc737e20a Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cloud-watch-verification-overview-34.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cloudwatch-left.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cloudwatch-left.png new file mode 100644 index 00000000000..cf3445a7635 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cloudwatch-left.png differ 
diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cloudwatch-right.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cloudwatch-right.png new file mode 100644 index 00000000000..e69162cba10 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cloudwatch-right.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/continuous-verification-metric-types-72.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/continuous-verification-metric-types-72.png new file mode 100644 index 00000000000..7261d677672 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/continuous-verification-metric-types-72.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/continuous-verification-metric-types-73.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/continuous-verification-metric-types-73.png new file mode 100644 index 00000000000..dd410ba8cbe Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/continuous-verification-metric-types-73.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-providers-12.jpg b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-providers-12.jpg new file mode 100644 index 00000000000..400dc329e72 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-providers-12.jpg differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-providers-13.jpg b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-providers-13.jpg new file mode 100644 index 00000000000..c6e622f4824 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-providers-13.jpg differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-providers-14.jpg b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-providers-14.jpg new file mode 100644 index 00000000000..7057924828c Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-providers-14.jpg differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-providers-15.jpg b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-providers-15.jpg new file mode 100644 index 00000000000..0dfb59af33b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-providers-15.jpg differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-providers-16.jpg b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-providers-16.jpg new file mode 100644 index 00000000000..426d12cb227 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-providers-16.jpg differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-providers-17.jpg b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-providers-17.jpg new file mode 100644 index 00000000000..a0078c2651a Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-providers-17.jpg differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-21.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-21.png new file mode 100644 index 00000000000..9fd6c33cd14 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-21.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-22.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-22.png new file mode 100644 index 00000000000..e4ab031d807 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-22.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-23.png 
b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-23.png new file mode 100644 index 00000000000..20efb7208a5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-23.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-24.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-24.png new file mode 100644 index 00000000000..5f5c5d6fc2e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-24.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-25.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-25.png new file mode 100644 index 00000000000..c3a905660df Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-25.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-26.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-26.png new file mode 100644 index 00000000000..6d9ccb0b877 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-26.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-27.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-27.png new file mode 100644 index 00000000000..d07e70ffe58 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-27.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-28.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-28.png new file mode 100644 index 00000000000..2bd1eb29bdf Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-28.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-29.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-29.png new file mode 100644 index 00000000000..86412e753aa Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-29.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-30.png 
b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-30.png new file mode 100644 index 00000000000..0b03ac367df Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-30.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-31.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-31.png new file mode 100644 index 00000000000..f2616682678 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-31.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-32.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-32.png new file mode 100644 index 00000000000..267b7a6755b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/cv-strategies-and-best-practices-32.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/datadog-left.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/datadog-left.png new file mode 100644 index 00000000000..177c18634d4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/datadog-left.png differ diff 
--git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/datadog-right.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/datadog-right.png new file mode 100644 index 00000000000..401f3ac43c6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/datadog-right.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/datadog-verification-overview-08.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/datadog-verification-overview-08.png new file mode 100644 index 00000000000..785c60c7e95 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/datadog-verification-overview-08.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-38.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-38.png new file mode 100644 index 00000000000..406a40393e4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-38.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-39.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-39.png new file mode 100644 index 00000000000..755b22de86c Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-39.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-40.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-40.png new file mode 100644 index 00000000000..a03d66d8a24 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-40.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-41.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-41.png new file mode 100644 index 00000000000..a12245165ad Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-41.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-42.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-42.png new file mode 100644 index 00000000000..26b135fb509 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-42.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-43.png 
b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-43.png new file mode 100644 index 00000000000..3aab9557c69 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-43.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-44.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-44.png new file mode 100644 index 00000000000..71e0a40c134 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-44.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-45.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-45.png new file mode 100644 index 00000000000..d13d9bc52c9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-45.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-46.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-46.png new file mode 100644 index 00000000000..ca6a9479eca Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-46.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-47.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-47.png new file mode 100644 index 00000000000..f707ad053b6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-47.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-48.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-48.png new file mode 100644 index 00000000000..4020d465425 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-48.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-49.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-49.png new file mode 100644 index 00000000000..16fd663f727 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-49.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-50.png 
b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-50.png new file mode 100644 index 00000000000..67ca8ab9c7c Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-50.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-51.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-51.png new file mode 100644 index 00000000000..7bf46ae148e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-51.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-52.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-52.png new file mode 100644 index 00000000000..636b5bf250a Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-52.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-53.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-53.png new file mode 100644 index 00000000000..84cb790b742 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-53.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-54.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-54.png new file mode 100644 index 00000000000..50e433c022d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-54.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-55.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-55.png new file mode 100644 index 00000000000..c91bc122e10 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-55.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-56.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-56.png new file mode 100644 index 00000000000..2a8fbcea784 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-56.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-57.png 
b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-57.png new file mode 100644 index 00000000000..bdfcee75d62 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-57.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-58.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-58.png new file mode 100644 index 00000000000..83c3a6c97f5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-58.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-59.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-59.png new file mode 100644 index 00000000000..f852897b406 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-59.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-60.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-60.png new file mode 100644 index 00000000000..7c295d7f2b3 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-60.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-61.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-61.png new file mode 100644 index 00000000000..707e1e9ba2a Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-61.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-62.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-62.png new file mode 100644 index 00000000000..68b952bf203 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/deployment-verification-results-62.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/dynatrace-left.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/dynatrace-left.png new file mode 100644 index 00000000000..a06f3402ee7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/dynatrace-left.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/dynatrace-right.png 
b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/dynatrace-right.png new file mode 100644 index 00000000000..94b67acd8a4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/dynatrace-right.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/dynatrace-verification-overview-37.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/dynatrace-verification-overview-37.png new file mode 100644 index 00000000000..113918194bd Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/dynatrace-verification-overview-37.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/elastic-left.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/elastic-left.png new file mode 100644 index 00000000000..1a8d9a6cc81 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/elastic-left.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/elastic-right.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/elastic-right.png new file mode 100644 index 00000000000..2586fb69029 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/elastic-right.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/elasticsearch-verification-overview-10.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/elasticsearch-verification-overview-10.png new file mode 100644 index 00000000000..8dd5804d891 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/elasticsearch-verification-overview-10.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/google-left.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/google-left.png new file mode 100644 index 00000000000..666cabc34a5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/google-left.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/google-right.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/google-right.png new file mode 100644 index 00000000000..1a95ff62756 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/google-right.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/harness-verification-feedback-overview-18.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/harness-verification-feedback-overview-18.png new file mode 100644 index 00000000000..220cda38fe5 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/harness-verification-feedback-overview-18.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/harness-verification-feedback-overview-19.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/harness-verification-feedback-overview-19.png new file mode 100644 index 00000000000..21ee4742905 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/harness-verification-feedback-overview-19.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/harness-verification-feedback-overview-20.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/harness-verification-feedback-overview-20.png new file mode 100644 index 00000000000..3b59d7706c3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/harness-verification-feedback-overview-20.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/how-cv-03.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/how-cv-03.png new file mode 100644 index 00000000000..16511393903 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/how-cv-03.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/how-cv-04.png 
b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/how-cv-04.png new file mode 100644 index 00000000000..e4ab031d807 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/how-cv-04.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/image.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/image.png new file mode 100644 index 00000000000..9f058a5e53c Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/image.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/instana-left.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/instana-left.png new file mode 100644 index 00000000000..96940bd0eb9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/instana-left.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/instana-right.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/instana-right.png new file mode 100644 index 00000000000..c42c76783f6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/instana-right.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/instana2-left.png 
b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/instana2-left.png new file mode 100644 index 00000000000..c8c37da32fb Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/instana2-left.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/instana2-right.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/instana2-right.png new file mode 100644 index 00000000000..c96e8f34542 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/instana2-right.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/new-relic-verification-overview-35.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/new-relic-verification-overview-35.png new file mode 100644 index 00000000000..83edaa51c5a Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/new-relic-verification-overview-35.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/newrelic-left.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/newrelic-left.png new file mode 100644 index 00000000000..39d16cdc7bd Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/newrelic-left.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/newrelic-right.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/newrelic-right.png new file mode 100644 index 00000000000..dbc95c281ce Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/newrelic-right.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/prometheus-verification-overview-74.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/prometheus-verification-overview-74.png new file mode 100644 index 00000000000..22ead9cc5b4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/prometheus-verification-overview-74.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/prometheus-verification-overview-75.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/prometheus-verification-overview-75.png new file mode 100644 index 00000000000..1f577fddb25 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/prometheus-verification-overview-75.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/splunk-left.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/splunk-left.png new file mode 100644 index 00000000000..007d4148564 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/splunk-left.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/splunk-right.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/splunk-right.png new file mode 100644 index 00000000000..7eb133ffcd0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/splunk-right.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/splunk-verification-overview-11.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/splunk-verification-overview-11.png new file mode 100644 index 00000000000..a5abc7b5ff9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/splunk-verification-overview-11.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/stackdriver-and-harness-overview-71.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/stackdriver-and-harness-overview-71.png new file mode 100644 index 00000000000..939c934f3a2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/stackdriver-and-harness-overview-71.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/stackdriver-left.png 
b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/stackdriver-left.png new file mode 100644 index 00000000000..666cabc34a5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/stackdriver-left.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/stackdriver-right.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/stackdriver-right.png new file mode 100644 index 00000000000..1a95ff62756 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/stackdriver-right.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/sumo-left.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/sumo-left.png new file mode 100644 index 00000000000..7cd57642e8c Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/sumo-left.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/sumo-logic-verification-overview-63.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/sumo-logic-verification-overview-63.png new file mode 100644 index 00000000000..0d2c127ab84 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/sumo-logic-verification-overview-63.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/sumo-right.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/sumo-right.png new file mode 100644 index 00000000000..27d9c88a10e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/sumo-right.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/what-is-cv-36.jpeg b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/what-is-cv-36.jpeg new file mode 100644 index 00000000000..fbaf815d8c8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/what-is-cv-36.jpeg differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/when-verify-00.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/when-verify-00.png new file mode 100644 index 00000000000..6368f9d3fd2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/when-verify-00.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/when-verify-01.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/when-verify-01.png new file mode 100644 index 00000000000..04ff1e232fe Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/when-verify-01.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/when-verify-02.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/when-verify-02.png new file mode 100644 index 00000000000..0d0b807593d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/when-verify-02.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/why-cv-09.png b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/why-cv-09.png new file mode 100644 index 00000000000..7916264d2a4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/static/why-cv-09.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/sumo-logic-verification-overview.md b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/sumo-logic-verification-overview.md new file mode 100644 index 00000000000..b8aba206c88 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/sumo-logic-verification-overview.md @@ -0,0 +1,54 @@ +--- +title: Sumo Logic Verification Overview +description: Overview of Harness' Sumo Logic integration. +# sidebar_position: 2 +helpdocs_topic_id: wb2k4u4kxm +helpdocs_category_id: zxxvl8vahz +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes how to set up Harness Continuous Verification features and monitor your deployments and production applications using its unsupervised machine-learning functionality on Sumo Logic. 
+ + +### Visual Summary + +Using Sumo Logic, you can interact with, and analyze, your data in the cloud in real time. Sumo Logic uses a powerful and intuitive search capability to expedite functions like forensic analysis, troubleshooting, and system health checks. + +Harness Continuous Verification integrates with Sumo Logic to verify your deployments and live production applications, using the following Harness features: + +* **24/7 Service Guard** – Monitors your live, production applications. +* **Deployment Verification** – Monitors your application deployments, and performs automatic rollback according to your criteria. + +This document describes how to set up these Harness Continuous Verification features, and monitor your deployments and production applications, using Harness' unsupervised machine-learning functionality. + + + +| | | +| --- | --- | +| **Search with Sumo Logic** | **Harness Analysis** | +| ![](./static/sumo-left.png) | ![](./static/sumo-right.png) | + +### Integration Process Summary + +You set up Sumo Logic and Harness in the following way: + +![](./static/sumo-logic-verification-overview-63.png) + +1. **Sumo Logic** – Monitor your application using Sumo Logic. In this article, we assume that you are already using Sumo Logic to monitor your application. +2. **Verification Provider Setup** – In Harness, you connect Harness to your Sumo Logic account, adding Sumo Logic as a **Harness Verification Provider**. +3. **Harness Application** – Create a Harness Application with a Service and an Environment. We do not cover Application setup in this sequence. See [Application Checklist](../../../model-cd-pipeline/applications/application-configuration.md). +4. **24/7 Service Guard Setup** – In the Environment, set up 24/7 Service Guard to monitor your live, production application. +5. **Verify Deployments**: + 1. 
Add a Workflow to your Harness Application and deploy your microservice or application to the service infrastructure/[Infrastructure Definition](../../../model-cd-pipeline/environments/environment-configuration.md#add-an-infrastructure-definition) in your Environment. + 2. After you have run a successful deployment, you then add verification steps to the Workflow using your Verification Provider. + 3. Harness uses unsupervised machine-learning and Sumo Logic analytics to analyze your future deployments, discovering events that might be causing your deployments to fail. Then you can use this information to set rollback criteria and improve your deployments. + +### Next Steps + +Read the following topics to build on what you've learned: + +* [Connect to Sumo Logic](../../sumo-logic-verification/1-sumo-logic-connection-setup.md) +* [Monitor Applications 24/7 with Sumo Logic](../../sumo-logic-verification/2-24-7-service-guard-for-sumo-logic.md) +* [Verify Deployments with Sumo Logic](../../sumo-logic-verification/3-verify-deployments-with-sumo-logic.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/what-is-cv.md b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/what-is-cv.md new file mode 100644 index 00000000000..6f1e7777c3e --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/what-is-cv.md @@ -0,0 +1,57 @@ +--- +title: What Is Continuous Verification (CV)? +description: Introduces Harness' Continuous Verification features, which integrate your choice of state-of-the-art APM and log monitoring services. +sidebar_position: 10 +helpdocs_topic_id: ina58fap5y +helpdocs_category_id: zxxvl8vahz +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes Harness' Continuous Verification features. + +### Visual Overview + +In a hurry? 
Here's a [one-minute video summary](https://fast.wistia.com/embed/medias/5sglzgol3u) of how Harness helps you monitor the health of your deployments through a streamlined, comprehensive interface. + + +### Verifying Services + +The more often you deploy software, the more you need to validate the health of newly deployed service instances. You need the ability to rapidly detect regressions or anomalies, and to rapidly roll back failed deployments. + +You have your choice of state-of-the-art APM (application performance monitoring) and logging software to continually measure your deployment data. But before Harness, you needed to connect your data to these multiple systems, and manually monitor each provider for unusual, post-deployment activity. + +Harness' Continuous Verification (CV) approach simplifies verification. First, Harness aggregates monitoring from multiple providers into one dashboard. Second, Harness uses machine learning to identify normal behavior for your applications. This allows Harness to identify and flag anomalies in future deployments, and to perform automatic rollbacks. + +### APM/Time-Series Data + +Application performance monitoring (APM) platforms like AppDynamics continuously measure and aggregate performance metrics across your service's transactions, database calls, third-party API calls, etc. We can mine these metrics to provide an excellent snapshot of the service's current state, and to predict its near-future behavior. + +Harness Continuous Verification uses real-time, semi-supervised machine learning to model and predict your service's behavior. We then apply anomaly-detection techniques to the modeled representation, to predict regressions in behavior or performance. + +### Log Data + +Harness Continuous Verification can also consume data from log providers like Sumo Logic and Elastic/ELK. Using semi-supervised machine learning, Harness analyzes and extracts clusters of log messages, based on textual and contextual similarity. 
This builds a further signature (model) of your service's current state and future behavior. + +Using this learned signature—and real-time comparisons of the current signature to past versions—Harness then predicts service anomalies and regressions, starting at deployment time and extending beyond it. + +#### Queries and Limitations + +Log verification takes in a user-provided search query. Queries should be negative queries that look at errors or exceptions, typically matching no more than 100 to 1,000 errors in a minute. + +Responses come from typical application logs and are 50–100 lines each, although there is a limitation: an overall limit of 1MB per minute. + +### Data Storage + +Harness stores the data it receives in its database. 24/7 Service Guard retention is 30 days. Deployment verifications are available for months (as long as the Workflow is available). + +### Getting Alerts + +Harness Continuous Verification enables you to flexibly configure alerts, and alert thresholds, based on Harness' dynamic analysis of both time-series and log data. + +### Next Up + +Next, take a look at: + +* [Why Perform Continuous Verification?](why-cv.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/when-verify.md b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/when-verify.md new file mode 100644 index 00000000000..763c61f8805 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/when-verify.md @@ -0,0 +1,69 @@ +--- +title: When Does Harness Verify Deployments? +description: Covers Harness' short- and long-term verification features -- Deployment Verification and 24/7 Service Guard. 
+sidebar_position: 30 +helpdocs_topic_id: 95vzen6l4m +helpdocs_category_id: zxxvl8vahz +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic covers Harness' two complementary verification components, Deployment Verification and 24/7 Service Guard. + + +### Short- and Long-Term Verification + +Harness' Continuous Verification enables you to verify your services' health over two timeframes. At deployment time, Harness' **Deployment Verification** feature validates your artifacts, and then validates the individual service instances to which you've deployed. + +![](./static/when-verify-00.png) + +Over the long term, Harness' **24/7 Service Guard** feature continuously validates your service as a whole. + +![](./static/when-verify-01.png) + +You can use both or either of these verification types. Both types use the [Verification Providers](cv-providers.md) you set up in Harness. + + +### Deployment Verification + +Deployment Verification verifies the first 15 minutes of deployments. Deployment verification is set up using a Harness Workflow. + +For an excellent example of Deployment Verification, see the Harness Blog post, [How Build.com Rolls Back Production in 32 Seconds](https://harness.io/customers/case-studies/automated-ci-cd-rollback/). + +![](./static/when-verify-02.png) + +#### Video Webinar + +This video Webinar covers how Harness Continuous Delivery leverages unsupervised machine learning to verify production deployments, based on users' APM and log data. + + + + + +### 24/7 Service Guard + +Harness' 24/7 Service Guard feature verifies your live, production application continuously. You set up 24/7 Service Guard verification in a Harness [Environment](../../../model-cd-pipeline/environments/environment-configuration.md). 
+#### Video Summary + +Here's a 2-minute video summary of how 24/7 Service Guard works: + + + + + +#### 24/7 Service Guard in Depth + +For an introduction to 24/7 Service Guard's use cases and design, see the Harness Blog post, [Harness 24/7 Service Guard Empowers Developers with Total Operational Control](http://www.harness.io/blog/harness-24-7-service-guard). + +For further details about how to use 24/7 Service Guard in combination with other Harness capabilities, see the [24/7 Service Guard Overview](24-7-service-guard-overview.md) topic. + + +### Next Up + +Next, read details about: + +* [How Does Harness Perform Continuous Verification?](how-cv.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/why-cv.md b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/why-cv.md new file mode 100644 index 00000000000..9d587a7cf1a --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/continuous-verification-overview/concepts-cv/why-cv.md @@ -0,0 +1,51 @@ +--- +title: Why Perform Continuous Verification? +description: Compared to static rules, Harness' semi-supervised ML helps you respond faster to dynamic application data. +sidebar_position: 20 +helpdocs_topic_id: trmrvs1egp +helpdocs_category_id: zxxvl8vahz +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic covers the rationale for Harness' Continuous Verification approach, and the benefits of adopting it. + +### Visual Introduction + +Harness Continuous Verification watches your applications over time—learning normal behavior, and alerting you to anomalies. Rather than writing static (and brittle) rules, you focus your effort on correcting and tuning this ongoing learning, which improves its anomaly detection. 
+![](./static/why-cv-09.png) +### Static Rules Versus Dynamic Data + +In traditional application performance monitoring, you either manually watch dashboards, or write rules to define risk. But once you adopt continuous delivery, these approaches won't scale. + +Rules-based alerting relies on static rules, but with continuous delivery, your application data is highly dynamic. Your environment changes at accelerating velocity; the entropy of your system increases; and things break. + + +### Let the Machine Learn + +Ideally, then, performance monitoring under continuous delivery should be configuration-free: users should not need to add rules at all. + +This implies a machine learning–based approach. Over time, unsupervised ML models can analyze data from multiple monitoring providers, and can then predict your system's future behavior, identify anomalies, and respond to those anomalies. + + +### Benefits of Harness' Learning Approach + +Harness' semi-supervised ML takes this approach a step further. You can tune Harness' learning engine to generate alerts targeted to *your* service and environment. Compared to coarse, basic rules, this provides a much more flexible basis for pausing or rolling back deployments. + +Given the requirement for fast failure detection (low mean time to detect, or MTTD), and the vast amount of log data that can be generated by monitoring your services, Harness' semi-supervised ML drastically improves your MTTR (mean time to respond) to failures. + + +### Rich Ecosystem + +Harness supports the vast majority of monitoring providers, bringing them together in one place for combined visibility. + +Our simple and intuitive dashboards include heat-map and time-series views. These interfaces also offer deep linking into your deployment system and your verification providers, for further debugging. 
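To make the "let the machine learn" idea concrete, here is a deliberately tiny sketch of baseline learning and anomaly flagging. This is illustrative Python only, not Harness' actual model (which is far richer and semi-supervised); the metric values and the three-sigma threshold are assumptions for the example.

```python
from statistics import mean, stdev

def learn_baseline(samples):
    """Model "normal" behavior as the mean and spread of past observations."""
    return mean(samples), stdev(samples)

def is_anomaly(value, baseline, tolerance=3.0):
    """Flag values more than `tolerance` standard deviations from normal."""
    mu, sigma = baseline
    return abs(value - mu) > tolerance * sigma

# Hypothetical response times (ms) observed while the service was healthy.
history = [102, 98, 105, 99, 101, 97, 103, 100, 104, 96]
baseline = learn_baseline(history)

print(is_anomaly(101, baseline))  # within the learned normal range: False
print(is_anomaly(480, baseline))  # far outside it, likely a regression: True
```

The point of the sketch is that no rule was written by hand: the threshold adapts to whatever "normal" the data exhibits, which is why the approach scales where static rules do not.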
+ + +### Next Up + +Next, consider: + +* [When Does Harness Verify Deployments?](when-verify.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/_category_.json b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/_category_.json new file mode 100644 index 00000000000..2de27e27f26 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/_category_.json @@ -0,0 +1 @@ +{"label": "Custom Metrics and Logs Verification", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Custom Metrics and Logs Verification"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "ep5nt3dyrb"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/connect-to-app-dynamics-as-a-custom-apm.md b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/connect-to-app-dynamics-as-a-custom-apm.md new file mode 100644 index 00000000000..44b6d5d5d40 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/connect-to-app-dynamics-as-a-custom-apm.md @@ -0,0 +1,68 @@ +--- +title: Connect to AppDynamics as a Custom APM +description: As an alternative to Harness' standard AppDynamics integration , you can use this guide to add AppDynamics to Harness as a custom APM. 
This approach enables you to expand monitoring beyond Business T… +sidebar_position: 80 +helpdocs_topic_id: w7tcb2frp9 +helpdocs_category_id: ep5nt3dyrb +helpdocs_is_private: false +helpdocs_is_published: true +--- + +As an alternative to Harness' [standard AppDynamics integration](../continuous-verification-overview/concepts-cv/app-dynamics-verification-overview.md), you can use this guide to add AppDynamics to Harness as a custom APM. This approach enables you to expand monitoring beyond Business Transactions, to cover specific metrics of interest (for example, JVM data). + + +### Before You Begin + +* See [Custom Verification Overview](custom-verification-overview.md). + +### Step 1: Add AppDynamics as a Custom Verification Provider + +To add a Custom Metrics Provider using AppDynamics, do the following: + +1. In Harness Manager, click **Setup** > **Connectors** > **Verification Providers**. +2. Click **Add Verification Provider**. From the drop-down, select **Custom Verification**. + + This opens the **Metrics Data Provider** settings. + + ![](./static/connect-to-app-dynamics-as-a-custom-apm-61.png) + +3. In **Type**, select **Metrics Data Provider**, as shown above. + +### Step 2: Display Name + +In **Display Name**, give the Verification Provider an arbitrary name. (You will use this name to select this provider in a Workflow.) + +### Step 3: Base URL + +In **Base URL**, enter: `https://.saas.appdynamics.com/controller/` + +Ensure that you include the forward slash at the end of the URL. + +### Step 4: Headers + +In **Headers**, click **Add Headers**, and add the following row: + + + +| | | | +| --- | --- | --- | +| **Key** | **Value** | **Encrypted Value** | +| Authorization | Enter a base 64–encoded version of this string, representing your AppDynamics credentials:`@:`You can also use an Open Authorization (OAuth) token-based authentication. 
Instead of the above credentials combination, enter your token. For more information about generating the token, see [AppDynamics API Clients documentation](https://docs.appdynamics.com/display/PRO45/API+Clients). | Checked | + +### Step 5: Validation Path + +In **Validation Path**, enter `rest/applications?output=json`. + +The settings will now look something like this: + +![](./static/connect-to-app-dynamics-as-a-custom-apm-62.png) + +### Step 6: Test and Submit + +1. Click **Test** to verify your custom verification provider. +2. If the test succeeds, click **Submit** to save the custom verification provider. + +### See Also + +* [Verify Deployments with AppDynamics as a Custom APM](verify-deployments-with-app-dynamics-as-a-custom-apm.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/connect-to-custom-verification-for-custom-logs.md b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/connect-to-custom-verification-for-custom-logs.md new file mode 100644 index 00000000000..a38d86d36ea --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/connect-to-custom-verification-for-custom-logs.md @@ -0,0 +1,67 @@ +--- +title: Connect to Custom Verification for Custom Logs +description: Connect Harness to a Custom Logs Provider to have Harness verify the success of your deployments. +sidebar_position: 20 +helpdocs_topic_id: wya9qgjlrr +helpdocs_category_id: ep5nt3dyrb +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Connect Harness to a Custom Logs Provider to have Harness verify the success of your deployments. Harness will use your tools to verify deployments and use its machine learning features to identify sources of failures. + +### Before You Begin + +* See [Custom Verification Overview](custom-verification-overview.md). 
+ +### Step 1: Add Custom Verification Provider + +To connect a custom logs provider, do the following: + +1. Click **Setup**. +2. Click **Connectors**. +3. Click **Verification Providers**. +4. Click **Add Verification Provider**, and click **Custom Verification**. + +The **Logs Data Provider** dialog appears. In **Type**, select **Custom Logs Provider**. + +![](./static/connect-to-custom-verification-for-custom-logs-58.png) + +### Step 2: Display Name + +In **Display Name**, give the Verification Provider a name. You will use this name to select this provider in a Workflow. + +### Step 3: Base URL + +In **Base URL**, enter the base URL of the REST endpoint where Harness will connect. Often, the URL is the server name followed by the index name, such as `http://server_name/index_name`. + +### Step 4: Parameters + +In **Parameters**, click **Add Parameters**, and add any required parameters. + +### Step 5: Validation Path + +In **Validation Path**, you will define a validation path used by Harness to validate the connection and ensure a Harness Delegate can reach the provider. Harness expects an HTTP 200 response. + +### Step 6: Encrypted Text Secrets in Body + +In some cases, you might need to include a token in the **Body** in **Validation Path**. You can enter your token in **Body**, but to protect your token, you can add it to Harness as an Encrypted Text secret and then reference it in Body using the `${secrets.getValue("secret_name")}` syntax: + +![](./static/connect-to-custom-verification-for-custom-logs-59.png) + +The Encrypted Text secret must have **Scope to Account** enabled or it cannot be used in the Custom Logs Provider. + +If you want to use the same token in your 24/7 Service Guard Custom Logs setup, you must create another Encrypted Text secret for the same token and ensure that **Scope to Account** is **not** enabled. Encrypted Text secrets used in Harness account settings are not shared with Encrypted Text secrets used in Harness Applications. 
This enables you to prevent Application users from accessing account-level secrets. + +### Step 7: Test and Submit + +When you are finished, the dialog will look something like this: + +![](./static/connect-to-custom-verification-for-custom-logs-60.png) + +Click **TEST** to validate the settings and **SUBMIT** to add the Verification Provider. + +### See Also + +* [Connect to Custom Verification for Custom Metrics](connect-to-custom-verification-for-custom-metrics.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/connect-to-custom-verification-for-custom-metrics.md b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/connect-to-custom-verification-for-custom-metrics.md new file mode 100644 index 00000000000..70e3df25e5b --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/connect-to-custom-verification-for-custom-metrics.md @@ -0,0 +1,100 @@ +--- +title: Connect to Custom Verification for Custom Metrics +description: Connect Harness to a Custom Metrics Data Provider to have Harness verify the success of your deployments. +sidebar_position: 30 +helpdocs_topic_id: iocufp9eb2 +helpdocs_category_id: ep5nt3dyrb +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Connect Harness to a Custom Metrics Data Provider to have Harness verify the success of your deployments. Harness will use your tools to verify deployments and use its machine learning features to identify sources of failures. + + +### Before You Begin + +* See [Custom Verification Overview](custom-verification-overview.md). + +### Step 1: Add Custom Verification Provider + +To connect a custom metrics data provider, do the following: + +1. Click **Setup**. +2. Click **Connectors**. +3. Click **Verification Providers**. +4. Click **Add Verification Provider**, and click **Custom Verification**. 
+ +The **Metrics Data Provider** dialog appears. + +![](./static/connect-to-custom-verification-for-custom-metrics-112.png) + +In the **Metrics Data Provider** dialog, you can configure how Harness can query event data via API. + +For example, with New Relic Insights, you are configuring the **Metrics Data Provider** dialog to perform a cURL request like the following: + + +``` +curl -H "Accept: application/json" \ +-H "X-Query-Key: YOUR_QUERY_KEY" \ +"https://insights-api.newrelic.com/v1/accounts/**YOUR\_ACCOUNT\_ID**/query?nrql=**YOUR\_QUERY\_STRING**" +``` +To query event data via API in New Relic Insights, you will need to set up an API key in New Relic. For more information, see [Query Insights event data via API](https://docs.newrelic.com/docs/insights/insights-api/get-data/query-insights-event-data-api) from New Relic. The purpose of the **Metrics Data Provider** dialog is to validate the credentials and validation path you enter and return an HTTP 200 from your metrics provider. + +The **Metrics Data Provider** dialog has the following fields. + +### Step 2: Type + +Select **Metrics Data Provider**. + +### Step 3: Display Name + +The name for this Verification Provider connector in Harness. This is the name you will use to reference this Verification Provider whenever you use it to add a verification step to a Workflow. + +### Step 4: Base URL + +Enter the URL for API requests. For example, in New Relic Insights, you can change the default URL to get the Base URL for the API. + +**Default URL:** https://insights.newrelic.com/accounts/12121212 + +**Base URL for API:** https://**insights-api.newrelic.com/v1**/accounts/12121212 + +### Step 5: Headers + +Add the query headers required by your metrics data provider. For New Relic Insights, do the following: + +1. Click **Add Headers**. +2. In **Key**, enter **X-Query-Key**. For New Relic, an X-Query-Key must contain a valid query key. +3. 
In **Value**, enter the key, or click **Use Secret** and select or create a new [Harness Encrypted Text secret](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets) for the API key you got from New Relic. +4. Click the checkbox under **Encrypted Value** to encrypt the key. +5. Click **Add Headers** again. +6. In **Key**, enter **Accept**. This is for the Content-Type of a query. +7. In **Value**, enter **application/json**. The Content-Type of a query must be application/json. + +### Step 6: Parameters + +Add any request parameters that do not change for every request. + +In **Value**, enter the key, or click **Use Secret** and select or create a new [Harness Encrypted Text secret](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets) for the API key you got from New Relic. + +### Step 7: Validation Path + +Harness expects a JSON response, not simply text. In **Path**, you will define a validation path: enter the query string from your metric provider. + +The resulting URL (**{base\_URL}/{validation\_path}**) is used to validate the connection to the metric provider. + +This query is invoked with the headers and parameters defined here. For example, in New Relic Insights, you can take the query from the **NRQL>** field and add it to the string **query?nrql=**, for example: + + +``` +query?nrql=SELECT%20average%28duration%29%20FROM%20PageView +``` +The field accepts URL encoded or unencoded queries. + +If you select **POST**, the **Body** field appears. Enter a sample JSON body to send as the payload when making the call to the APM provider. The requirements of the JSON body will depend on your APM provider. 
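As a sanity check on the fields above, the sketch below shows how the Base URL, Headers, and Validation Path combine into the validation request (the same request as the earlier cURL example). This is illustrative Python, not Harness internals; the account ID and query-key value are placeholders.

```python
from urllib.parse import quote

def validation_url(base_url, validation_path):
    """Join Base URL and Validation Path into the URL Harness calls
    to validate the connector (an HTTP 200 JSON response is expected)."""
    return base_url.rstrip("/") + "/" + validation_path.lstrip("/")

# Placeholder values mirroring the New Relic Insights example above.
BASE_URL = "https://insights-api.newrelic.com/v1/accounts/12121212"
HEADERS = {
    "X-Query-Key": "YOUR_QUERY_KEY",  # stored as an encrypted value in Harness
    "Accept": "application/json",
}
# The field accepts encoded or unencoded queries; encode here for clarity.
VALIDATION_PATH = "query?nrql=" + quote("SELECT average(duration) FROM PageView")

url = validation_url(BASE_URL, VALIDATION_PATH)
print(url)
# https://insights-api.newrelic.com/v1/accounts/12121212/query?nrql=SELECT%20average%28duration%29%20FROM%20PageView
```

If this composed URL, sent with those headers, does not return HTTP 200 with a JSON body, the connector's **Test** step will fail.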
+ +### See Also + +* [Connect to Custom Verification for Custom Logs](connect-to-custom-verification-for-custom-logs.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/connect-to-datadog-as-a-custom-apm.md b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/connect-to-datadog-as-a-custom-apm.md new file mode 100644 index 00000000000..7e7851cf561 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/connect-to-datadog-as-a-custom-apm.md @@ -0,0 +1,86 @@ +--- +title: Connect to Datadog as a Custom APM +description: Currently, Datadog-Harness integration is for Kubernetes deployments only. To use Datadog with other deployment types, such as ECS, use the following example of how to use the Custom Metrics Provider… +sidebar_position: 65 +helpdocs_topic_id: nh868x8jim +helpdocs_category_id: ep5nt3dyrb +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, Datadog-Harness integration is for Kubernetes deployments only. To use Datadog with other deployment types, such as ECS, use the following example of how to use the Custom Metrics Provider with Datadog. + +### Before You Begin + +* See [Custom Verification Overview](custom-verification-overview.md). + +### Step 1: Add Datadog as a Custom Verification Provider + +To add a Custom Metrics Provider using Datadog, do the following: + +1. In Harness Manager, click **Setup** > **Connectors** > **Verification Providers**. +2. Click **Add Verification Provider**, and click **Custom Verification**. The **Metrics Data Provider** dialog appears. +3. In **Type**, select **Metrics Data Provider**. + +### Step 2: Display Name + +In **Display Name**, give the Verification Provider a name. You will use this name to select this provider in a Workflow. + +### Step 3: Base URL + +In **Base URL**, enter `https://app.datadoghq.com/api/v1/`. 
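Taken together with the parameters and validation path configured in the steps that follow, the Datadog connector resolves to a validation call like the sketch below. This is illustrative Python, not Harness internals; the key values are placeholders for your real Datadog keys.

```python
from urllib.parse import urlencode

def datadog_validation_url(base_url, validation_path, params):
    """Append the connector's parameters to the validation call.
    Harness expects this request to return HTTP 200."""
    sep = "&" if "?" in validation_path else "?"
    return base_url.rstrip("/") + "/" + validation_path + sep + urlencode(params)

# Placeholder keys; the real values are stored encrypted in Harness.
params = {"api_key": "DD_API_KEY", "application_key": "DD_APP_KEY"}

url = datadog_validation_url(
    "https://app.datadoghq.com/api/v1/",
    "metrics?from=1527102292",  # the validation path this guide uses
    params,
)
print(url)
# https://app.datadoghq.com/api/v1/metrics?from=1527102292&api_key=DD_API_KEY&application_key=DD_APP_KEY
```

Unlike the header-based providers, Datadog authenticates these calls with query-string parameters, which is why the keys are added under **Parameters** rather than **Headers**.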
+ +### Step 4: Parameters + +In **Parameters**, click **Add Parameters**, and add the following parameters. + + + +| | | | +| --- | --- | --- | +| **Key** | **Value** | **Encrypted Value** | +| api\_key | Enter the API key. | Checked | +| application\_key | Enter the application key. | Checked | + +If you need help obtaining the API and Application keys, see the following: + +#### API Key + +To create an API key in Datadog, do the following: + +1. In **Datadog**, mouseover **Integrations**, and then click **APIs**. + [![](./static/connect-to-datadog-as-a-custom-apm-38.png)](./static/connect-to-datadog-as-a-custom-apm-38.png) + + The **APIs** page appears. + + [![](./static/connect-to-datadog-as-a-custom-apm-40.png)](./static/connect-to-datadog-as-a-custom-apm-40.png) + +2. In **API Keys**, in **New API key**, enter the name for the new API key, such as **Harness**, and then click **Create API key**. +3. Copy the API key and, in **Harness**, paste it into the **Value** field. + +#### Application Key + +To create an application key in Datadog, do the following: + +1. In **Datadog**, mouseover **Integrations**, and then click **APIs**. The **APIs** page appears. + + [![](./static/connect-to-datadog-as-a-custom-apm-42.png)](./static/connect-to-datadog-as-a-custom-apm-42.png) + + +2. In **Application Keys**, in **New application key**, enter a name for the application key, such as **Harness**, and click **Create Application Key**. +3. Copy the API key and, in **Harness**, paste it into the **Value** field. + +### Step 5: Validation Path + +In **Validation Path**, enter `metrics?from=1527102292`. This is the epoch seconds value used to ensure an HTTP 200 response with the credentials validated. + +When you are finished, the dialog will look something like this: + +[![](./static/connect-to-datadog-as-a-custom-apm-44.png)](./static/connect-to-datadog-as-a-custom-apm-44.png) + +Click **Submit**. 
+ +### See Also + +* [Verify Deployments with Datadog as a Custom APM](verify-deployments-with-datadog-as-a-custom-apm.md). + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/custom-verification-overview.md b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/custom-verification-overview.md new file mode 100644 index 00000000000..3abd2afcec1 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/custom-verification-overview.md @@ -0,0 +1,41 @@ +--- +title: Custom Verification Overview +description: Overview of Harness' integration with custom APM (metrics) and log providers. +sidebar_position: 10 +helpdocs_topic_id: e87u8c63z4 +helpdocs_category_id: ep5nt3dyrb +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness includes first-class support for all of the major APM and logging vendors, but there are cases where a custom APM is needed. + +This topic describes how to set up Harness Continuous Verification features, and monitor your deployments using Harness' unsupervised machine-learning functionality, on Custom APMs. + +### Integration Process Summary + +You set up your Custom Metrics or Logs Provider and Harness in the following way: + +1. Using your Custom Metrics or Logs Provider, you monitor your microservice or application. +2. In Harness, you connect Harness to your Custom Metrics or Logs Provider account, adding the Custom Metrics or Logs Provider as a Harness Verification Provider. +3. After you have run a successful deployment of your microservice or application in Harness, you then add an Verification step(s) to your Harness deployment Workflow. +4. Harness uses your Custom Metrics or Logs Provider to verify your future microservice/application deployments. +5. 
Harness Continuous Verification uses unsupervised machine-learning to analyze your deployments and Custom Metrics or Logs Provider analytics, discovering events that might be causing your deployments to fail. Then you can use this information to improve your deployments. + +### Limitations + +Harness does not support **Azure Log Analytics** with Custom Verification at this time. We plan to support it in the near future as a first class integration. + +### Next Steps + +* [Connect to Custom Verification for Custom Metrics](connect-to-custom-verification-for-custom-metrics.md) +* [Connect to Custom Verification for Custom Logs](connect-to-custom-verification-for-custom-logs.md) +* [Monitor Applications 24/7 with Custom Metrics](monitor-applications-24-7-with-custom-metrics.md) +* [Monitor Applications 24/7 with Custom Logs](monitor-applications-24-7-with-custom-logs.md) +* [Verify Deployments with Custom Metrics](verify-deployments-with-custom-metrics.md) +* [Verify Deployments with Custom Logs](verify-deployments-with-custom-logs.md) +* [Connect to Datadog as a Custom APM](connect-to-datadog-as-a-custom-apm.md) +* [Verify Deployments with Datadog as a Custom APM](verify-deployments-with-datadog-as-a-custom-apm.md) +* [Connect to AppDynamics as a Custom APM](connect-to-app-dynamics-as-a-custom-apm.md) +* [Verify Deployments with AppDynamics as a Custom APM](verify-deployments-with-app-dynamics-as-a-custom-apm.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/monitor-applications-24-7-with-custom-logs.md b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/monitor-applications-24-7-with-custom-logs.md new file mode 100644 index 00000000000..aa06601f817 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/monitor-applications-24-7-with-custom-logs.md @@ -0,0 +1,149 @@ +--- +title: Monitor 
Applications 24/7 with Custom Logs +description: Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. For more information, see 24/7 Service Guard Overview. While Harness… +sidebar_position: 40 +helpdocs_topic_id: dse21dgveu +helpdocs_category_id: ep5nt3dyrb +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. For more information, see  [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). + +While Harness supports [all of the common logging tools](https://docs.harness.io/category/continuous-verification), you can add your Custom logging to Harness 24/7 Service Guard in your Harness Application Environment. For a setup overview, see  [Connect to Custom Verification for Custom Logs](connect-to-custom-verification-for-custom-logs.md). + +This section assumes you have a Harness Application set up, containing a Service and Environment. For steps on setting up a Harness Application, see  [Application Components](../../model-cd-pipeline/applications/application-configuration.md). + +### Before You Begin + +* See [Custom Verification Overview](custom-verification-overview.md). +* See [Connect to Custom Verification for Custom Logs](connect-to-custom-verification-for-custom-logs.md). + +### Step 1: Set up 24/7 Service Guard for Custom Logs + +To set up 24/7 Service Guard for custom logs, do the following: + +1. Ensure that you have added your Custom Verification provider as a Harness Verification Provider, as described in  [Verification Provider Setup](../appdynamics-verification/2-24-7-service-guard-for-app-dynamics.md#verification-provider-setup). +2. 
In your Harness Application, ensure that you have added a Service, as described in  [Services](../../model-cd-pipeline/setup-services/service-configuration.md). For 24/7 Service Guard, you do not need to add an Artifact Source to the Service, or configure its settings. You simply need to create a Service and name it. It will represent your application for 24/7 Service Guard. +3. In your Harness Application, click **Environments**. +4. In **Environments**, ensure that you have added an Environment for the Service you added. For steps on adding an Environment, see  [Environments](../../model-cd-pipeline/environments/environment-configuration.md). +5. Click the Environment for your Service. Typically, the **Environment Type** is **Production**. +6. In the **Environment** page, locate **24/7 Service Guard**.[![](./static/monitor-applications-24-7-with-custom-logs-94.png)](./static/monitor-applications-24-7-with-custom-logs-94.png) +7. In **24/7 Service Guard**, click **Add Service Verification**, and then click **Custom Log Verification**.![](./static/monitor-applications-24-7-with-custom-logs-96.png) + + The **Custom Log Verification** dialog appears. + + ![](./static/monitor-applications-24-7-with-custom-logs-97.png) + + Fill out the dialog. The dialog has the following fields. + +For 24/7 Service Guard, the queries you define to collect logs are specific to the application or service you want monitored. Verification is application/service level. This is unlike Workflows, where verification is performed at the host/node/pod level. + +### Step 2: Display Name + +Enter a name for this 24/7 Service Guard monitoring. This name will identify the monitoring in the **Environment** page and in **24/7 Service Guard** under **Continuous Verification**. + +### Step 3: Service + +Select the Harness Service that represents the production application you are monitoring with the custom log provider. 
+ +### Step 4: Log Data Provider + +Select the Custom Logs Provider you added, described in  [Connect to Custom Verification for Custom Logs](connect-to-custom-verification-for-custom-logs.md). + +### Step 5: Log Collection + +Once you have added log collection details, this section will list the collection settings. + +### Step 6: Request Method + +Select **GET** or **POST**. If you select **POST**, the **Search Body** field is mandatory. In **Search Body**, you can enter any HTTP body to send as part of the query. + +### Step 7: Search URL + +Enter the API query that will return a JSON response. In the remaining settings, you will map the keys in the JSON response to Harness settings to identify where data such as the hostname and timestamp are located in the JSON response. + +The **Search URL** is automatically filled with the URL from the Custom Log Provider you selected in **Log Data Provider**: + +[![](./static/monitor-applications-24-7-with-custom-logs-98.png)](./static/monitor-applications-24-7-with-custom-logs-98.png) + +For some custom logging tools, you will need to augment your Base URL with `query` or some other setting. For example: `https://www.scalyr.com/api/query`. + +### Step 8: Search Body + +You can enter any JSON search input for your query. For example, here is a log query for logs in Scalyr: + + +``` +{ + "token": "${secrets.getValue("scalyrSG")}", + "queryType": "log", + "filter": "exception", + "startTime": "${start_time}", + "endTime": "${end_time}", + "maxCount": "10", + "pageMode": "tail", + "priority": "low" +} +``` +You can see that the token field uses a Harness Encrypted Text secret containing the token, referenced using the syntax `${secrets.getValue("secret-name")}`. + +[![](./static/monitor-applications-24-7-with-custom-logs-100.png)](./static/monitor-applications-24-7-with-custom-logs-100.png) + +### Step 9: Response Type + +In **Response Type**, select **JSON**. 
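To sanity-check a Search Body like the Scalyr example above before wiring it into Harness, you can issue the same POST yourself. This is a minimal sketch, assuming a Scalyr-style `/api/query` endpoint; the token is a dummy placeholder (never paste a real secret here), and the `${start_time}`/`${end_time}` placeholders that Harness normally substitutes are filled in manually:

```python
import json
import urllib.request
from datetime import datetime, timedelta, timezone

# Values Harness would normally substitute for the ${start_time}/${end_time} placeholders.
end_time = datetime.now(timezone.utc)
start_time = end_time - timedelta(minutes=30)

# Same shape as the Search Body example above; the token is a dummy value.
body = {
    "token": "YOUR_API_TOKEN",
    "queryType": "log",
    "filter": "exception",
    "startTime": start_time.isoformat(),
    "endTime": end_time.isoformat(),
    "maxCount": "10",
    "pageMode": "tail",
    "priority": "low",
}

req = urllib.request.Request(
    "https://www.scalyr.com/api/query",        # the Search URL from Step 7
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",                             # matches Request Method: POST
)
# with urllib.request.urlopen(req) as resp:    # requires network access and a valid token
#     print(json.load(resp))
```

Inspecting the JSON printed by such a call is what the **Guide From Example** feature automates for you.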
+ +### Step 10: Log Message JSON Path + +To use **Guide From Example**, your **Search URL** or **Search Body** must contain the start and end time placeholders `${start_time}` and `${end_time}`. + +Where the start and end time placeholders are required depends on your custom logs provider. For example, some providers will need them in the Search Body and others in the Search URL. + +Click **Guide From Example**. A popup appears, preset for you to query the provider. + +[![](./static/monitor-applications-24-7-with-custom-logs-102.png)](./static/monitor-applications-24-7-with-custom-logs-102.png) + +You can change the start and end times. By default, they are set for the last 30 minutes. + +Click **SEND**. The JSON response is displayed. + +[![](./static/monitor-applications-24-7-with-custom-logs-104.png)](./static/monitor-applications-24-7-with-custom-logs-104.png) + +Click the **message** field. The JSON path is added to **Log Message JSON Path**: + +[![](./static/monitor-applications-24-7-with-custom-logs-106.png)](./static/monitor-applications-24-7-with-custom-logs-106.png) + +### Step 11: Service Instance JSON Path + +Click **Guide From Example** to locate the JSON path for the service instance. If the JSON returned by your query does not have the path, then leave **Service Instance JSON Path** empty. + +### Step 12: Regex to Transform Hostname + +If the JSON value returned requires transformation in order to be used, enter the regex expression here. For example, if the value in the host name JSON path of the response is `pod_name:harness-test.pod.name` and the actual pod name is simply `harness-test.pod.name`, you can write a regular expression to remove the `pod_name` from the response value. + +### Step 13: Timestamp JSON Path + +Click **Guide From Example** to locate the JSON path for the timestamp, and click the timestamp label. 
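The **Regex to Transform Hostname** setting behaves like an ordinary regular expression applied to the raw JSON value. A minimal sketch of the `pod_name:harness-test.pod.name` case described in Step 12 (the raw value is hypothetical sample data):

```python
import re

# Hypothetical raw value taken from the host-name JSON path of the response.
raw = "pod_name:harness-test.pod.name"

# Capture everything after the "pod_name:" prefix; fall back to the raw value.
match = re.search(r"pod_name:(.*)", raw)
hostname = match.group(1) if match else raw
print(hostname)  # harness-test.pod.name
```

The exact regex you enter depends on how your provider formats the host-name field; the point is that only the captured portion should match the real pod name.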
+ +[![](./static/monitor-applications-24-7-with-custom-logs-108.png)](./static/monitor-applications-24-7-with-custom-logs-108.png) + +### Step 14: Timestamp Format + +Enter a timestamp format. The format follows the  [Java SimpleDateFormat](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html). For example, a timestamp syntax might be **yyyy-MM-dd'T'HH:mm:ss.SSSX**. If you leave this field empty, Harness will use the default range of 1 hour previous (now - 1h) to now. + +### Step 15: Enable 24/7 Service Guard + +Select **Enable 24/7 Service Guard** to enable monitoring. + +### Step 16: Baseline + +When you select **Enable 24/7 Service Guard**, you can select a baseline time interval for monitoring. + +When you have added your Log Collection your Custom Log Verification will look something like this: + +![](./static/monitor-applications-24-7-with-custom-logs-110.png) + +The Custom Logs Verification is added to 24/7 Service Guard: + +![](./static/monitor-applications-24-7-with-custom-logs-111.png) \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/monitor-applications-24-7-with-custom-metrics.md b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/monitor-applications-24-7-with-custom-metrics.md new file mode 100644 index 00000000000..963f2c0bda1 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/monitor-applications-24-7-with-custom-metrics.md @@ -0,0 +1,257 @@ +--- +title: Monitor Applications 24/7 with Custom Metrics +description: Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. For more information, see 24/7 Service Guard Overview. 
While Harness… +sidebar_position: 50 +helpdocs_topic_id: 15nvnoy8o8 +helpdocs_category_id: ep5nt3dyrb +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. For more information, see  [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). + +While Harness supports  [all of the common metrics tools](https://docs.harness.io/category/continuous-verification), you can add your Custom metrics to Harness 24/7 Service Guard in your Harness Application Environment. For a setup overview, see  [Connect to Custom Verification for Custom Metrics](connect-to-custom-verification-for-custom-metrics.md). + +This section assumes you have a Harness Application set up, containing a Service and Environment. For steps on setting up a Harness Application, see  [Application Components](../../model-cd-pipeline/applications/application-configuration.md). + + +### Before You Begin + +* See [Custom Verification Overview](custom-verification-overview.md). +* See [Connect to Custom Verification for Custom Metrics](connect-to-custom-verification-for-custom-metrics.md). + +### Step 1: Set up 24/7 Service Guard for Custom Metrics + +To set up 24/7 Service Guard for custom metrics, do the following: + +1. Ensure that you have added your Custom Verification provider as a Harness Verification Provider. +2. In your Harness Application, ensure that you have added a Service, as described in  [Services](../../model-cd-pipeline/setup-services/service-configuration.md). For 24/7 Service Guard, you do not need to add an Artifact Source to the Service, or configure its settings. You simply need to create a Service and name it. It will represent your application for 24/7 Service Guard. +3. In your Harness Application, click **Environments**. +4. 
In **Environments**, ensure that you have added an Environment for the Service you added. For steps on adding an Environment, see  [Environments](../../model-cd-pipeline/environments/environment-configuration.md). +5. Click the Environment for your Service. Typically, the **Environment Type** is **Production**. +6. In the **Environment** page, locate **24/7 Service Guard**.![](./static/monitor-applications-24-7-with-custom-metrics-70.png) +7. In **24/7 Service Guard**, click **Add Service Verification**, and then click **Custom APM Verification**. + +![](./static/monitor-applications-24-7-with-custom-metrics-71.png) + +The **Custom APM Verification** dialog appears. + +![](./static/monitor-applications-24-7-with-custom-metrics-72.png) + +Fill out the dialog. The dialog has the fields described below. + +For 24/7 Service Guard, the queries you define to collect metrics are specific to the Application or Service you want monitored. (Verification is at the Application/Service level.) This is unlike Workflows, where verification is performed at the host/node/pod level. + +### Step 2: Display Name + +Enter the name that will identify this 24/7 Service Guard monitoring in Harness' **Continuous Verification** page. + +### Step 3: Service + +Select the Harness Service that represents your live, production application or service. + +### Step 4: Metrics Data Server + +Select the custom metric provider you added. + +### Step 5: Metric Type + +Select **Infrastructure** or **Transaction**. + +Consider what you are monitoring before selecting. Each type has subtypes in **Metric Collections**: + +* **Infrastructure** + + Infrastructure + + Value +* **Transaction** + + Error + + Response Time + + Throughput + +For example, if you want to monitor if a value goes below the baseline, select **Infrastructure** and then **Value** in **Metric Collections**. + +If you want to monitor an error, then select **Transaction**. 
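When you select **Transaction**, error counts are analyzed as a percentage of throughput, which is why the two metrics are paired. A minimal sketch with hypothetical sample values (not Harness API calls):

```python
# Hypothetical per-interval samples for one Transaction Name.
errors = [2, 3, 1, 8]             # Error metric samples
throughput = [200, 210, 190, 80]  # Throughput samples for the same transaction

# Error percentage per interval: errors normalized by request volume.
error_pct = [100.0 * e / t for e, t in zip(errors, throughput)]
print([round(p, 1) for p in error_pct])  # [1.0, 1.4, 0.5, 10.0]
```

Note how the last interval's 8 errors are far more significant against only 80 requests, which a raw error count alone would not reveal.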
+ +#### Always Use Throughput with Error and Response Time Metrics + +Whenever you use the Error metric type, you should also add another metric for Throughput with the same Transaction Name. + +![](./static/monitor-applications-24-7-with-custom-metrics-73.png) + +Harness analyzes errors as an error percentage; without throughput, the error count alone does not provide much information. + +The same setup should be used with the Response Time metric. Whenever you set up a Response Time metric, set up a Throughput metric with the same Transaction Name. + +![](./static/monitor-applications-24-7-with-custom-metrics-74.png) + +### Step 6: Metric Collections + +Click **Add** to add a new metric collection. This displays the dialog shown below. + +![](./static/monitor-applications-24-7-with-custom-metrics-75.png)![](./static/monitor-applications-24-7-with-custom-metrics-76.png) + +In the resulting **New Metrics Collection** dialog, use the following controls to define a metrics collection. + +### Step 7: Metrics Name + +Enter a name for the metrics you want analyzed, such as **ThreadCount**. + +### Step 8: Metrics Type + +For the **Infrastructure** Metrics Type, select the type of metric you want to collect: + +* **Infra:** Infrastructure metrics, such as CPU, memory, and HTTP errors. +* **Value:** [Apdex](https://docs.newrelic.com/docs/apm/new-relic-apm/apdex/apdex-measure-user-satisfaction) (measures user satisfaction with response time). +* **Lower Value:** Values below the average. + +For the **Transaction** Metrics Type, select the type of metric you want to collect: + +* Error +* Response Time +* Throughput + +### Step 9: Metrics Collection URL + +Enter a query for your verification. You can simply make the query in your Verification Provider and paste it in the **Metrics Collection URL** field. + +![](./static/monitor-applications-24-7-with-custom-metrics-77.png) + +You will use this query to obtain the JSON paths for the **Response Mapping** settings. 
+ +In most cases, you will want to add the placeholders `${start_time}` and `${end_time}` in your query so that you can customize the range when making requests. + +### Step 10: Metrics Method + +Select **GET** or **POST**. If you select **POST**, the **Metric Collection Body** field appears. + +[![](./static/monitor-applications-24-7-with-custom-metrics-78.png)](./static/monitor-applications-24-7-with-custom-metrics-78.png) + +In **Metric Collection Body**, enter the JSON body to send as a payload when making a REST call to the APM Provider. The requirements of the JSON body will depend on your APM provider. + +You can use variables you created in the Service and Workflow in the JSON, as well as [Harness built-in variables](https://docs.harness.io/article/9dvxcegm90-variables). + +### Step 11: Response Mapping + +These settings are for specifying which JSON fields in the responses to use for monitoring. + +### Step 12: Transaction Name + +Select **Fixed** or **Dynamic**. + +**Fixed:** Use this option when all metrics are for the same transaction. For example, a single login page. + +**Dynamic:** Use this option when the metrics are for multiple transactions + +### Step 13: Name (Fixed) + +Enter a name to identify the transaction. + +### Step 14: Transaction Name Path (Dynamic) + +This is the JSON label for identifying a transaction name. + +For example, in a New Relic Insights query, the **FACET** clause is used to group results by the attribute **transactionName**. You can obtain the field name that records the **transactionName** by using the **Guide From Example** feature: + +1. Click **Guide From Example**. The Guide From Example popover appears. + + ![](./static/monitor-applications-24-7-with-custom-metrics-80.png) + + The Metrics URL Collection is based on the query you entered in the **Metric Collection URL field** earlier. + +2. Specify a time range using the `${startTime}` and `${endTime}`. +3. Click **SEND**. 
The query is executed and the JSON is returned. +4. Locate the field name that is used to identify transactions. In our New Relic Insights query, it is the **facets.name** field. +If no metrics are found, you will see a `METRIC DATA NOT FOUND` error. +Using New Relic Insights as an example, you can find the name in the JSON of your Insights query results. + + [![](./static/monitor-applications-24-7-with-custom-metrics-81.png)](./static/monitor-applications-24-7-with-custom-metrics-81.png) + + +5. Click the field **name** under facets. The field path is added to the **Transaction Name Path** field. + + [![](./static/monitor-applications-24-7-with-custom-metrics-83.png)](./static/monitor-applications-24-7-with-custom-metrics-83.png) + +### Step 15: Regex to transform Transaction Name (Dynamic) + +Enter a regex expression here to obtain the specific name from the transaction path. + +For example, if your Transaction Name Path JSON evaluated to a value such as `name : TxnName`, you can write a regex to remove everything other than `TxnName`. + +For example `(name:(.*),)` or `(?<=:).*(?=,)`. + +### Step 16: Metrics Value + +Specify the value for the event count. This is used to filter and aggregate data returned in a SELECT statement. To find the correct label for the value, do the following: + +1. Click **Guide From Example**. The example popover appears. + + ![](./static/monitor-applications-24-7-with-custom-metrics-85.png) + + The Metrics URL Collection is based on the query you entered in the **Metric Collection URL field** earlier. The **${host}** field refers to the `${host}` variable in your query. + +2. Specify a time range using the `${startTime}` and `${endTime}`. +3. Click **Send**. The query is executed and the JSON is returned. +If no metrics are found, you will see a `METRIC DATA NOT FOUND` error. +4. Locate the field name that is used to count events.![](./static/monitor-applications-24-7-with-custom-metrics-86.png) +5. 
Click the name of the field, such as **value**. The JSON path is added to the **Metrics Value** field. + +![](./static/monitor-applications-24-7-with-custom-metrics-87.png) + +### Step 17: Timestamp + +Specify the value for the timestamp in the query. To find the correct label for the value, do the following: + +1. Click **Guide From Example**. The popover appears. + + ![](./static/monitor-applications-24-7-with-custom-metrics-88.png) + + The Metrics URL Collection is based on the query you entered in the **Metric Collection URL field** earlier. + +2. Click **Send**. The query is executed and the JSON is returned. +3. Click the name of the label for the start time. + +![](./static/monitor-applications-24-7-with-custom-metrics-89.png) + +The JSON path is added to the **Timestamp** path: + +![](./static/monitor-applications-24-7-with-custom-metrics-90.png) + +### Step 18: Timestamp Format + +Enter a timestamp format. The format follows the Java [SimpleDateFormat](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html). For example, a timestamp syntax might be **yyyy-MM-dd'T'HH:mm:ss.SSSX**. If you leave this field empty, Harness will use the default range of 1 hour previous (now-1h) to now. + +### Step 19: Test the Settings + +Once you have filled in the **New Metrics Collection** dialog, click **Test** to check your settings. Once they test successfully, click **Add** to add this collection to the **Custom APM Verification** settings. + +This restores the **Custom APM Verification** dialog. Here, you have the option to click **Add** to define additional metrics collections, using the options just outlined. + +### Review: Custom Thresholds + +In the **Custom APM Verification** dialog, you can access the **Custom Thresholds** section once you have configured at least one Metrics Collection. 
Within Custom Thresholds, you can define **Ignore Hints** rules that instruct Harness to remove certain metrics/value combinations from 24/7 Service Guard analysis. + +For details about defining Custom Thresholds, see [Apply Custom Thresholds to 24/7 Service Guard](../24-7-service-guard/custom-thresholds-24-7.md). + +### Step 20: Algorithm Sensitivity + +Select the Algorithm Sensitivity. See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria). + +### Step 21: Enable 24/7 Service Guard + +Select this check box to enable 24/7 Service Guard. + +When you are finished, the **Custom APM Verification Settings** dialog will look something like this: + +![](./static/monitor-applications-24-7-with-custom-metrics-91.png) + +Click **Submit**. The Custom Metrics 24/7 Service Guard verification is added. + +![](./static/monitor-applications-24-7-with-custom-metrics-92.png) + +### Review: Verification Results + +To see the running 24/7 Service Guard analysis, click **Continuous Verification**. + +The 24/7 Service Guard dashboard displays the production verification results. 
+ +![](./static/monitor-applications-24-7-with-custom-metrics-93.png) \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/_verify-ddog-00-trx-anal.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/_verify-ddog-00-trx-anal.png new file mode 100644 index 00000000000..a0b1bcd15ed Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/_verify-ddog-00-trx-anal.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/_verify-ddog-01-ex-anal.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/_verify-ddog-01-ex-anal.png new file mode 100644 index 00000000000..96cdcd2e943 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/_verify-ddog-01-ex-anal.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-app-dynamics-as-a-custom-apm-61.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-app-dynamics-as-a-custom-apm-61.png new file mode 100644 index 00000000000..abf86366776 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-app-dynamics-as-a-custom-apm-61.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-app-dynamics-as-a-custom-apm-62.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-app-dynamics-as-a-custom-apm-62.png new file mode 100644 index 
00000000000..13df75d89cd Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-app-dynamics-as-a-custom-apm-62.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-custom-verification-for-custom-logs-58.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-custom-verification-for-custom-logs-58.png new file mode 100644 index 00000000000..87c09a9698f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-custom-verification-for-custom-logs-58.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-custom-verification-for-custom-logs-59.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-custom-verification-for-custom-logs-59.png new file mode 100644 index 00000000000..91604bbd0c7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-custom-verification-for-custom-logs-59.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-custom-verification-for-custom-logs-60.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-custom-verification-for-custom-logs-60.png new file mode 100644 index 00000000000..4974702f8f1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-custom-verification-for-custom-logs-60.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-custom-verification-for-custom-metrics-112.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-custom-verification-for-custom-metrics-112.png new file mode 100644 index 00000000000..87c09a9698f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-custom-verification-for-custom-metrics-112.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-38.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-38.png new file mode 100644 index 00000000000..cd8488f2c6b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-38.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-39.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-39.png new file mode 100644 index 00000000000..cd8488f2c6b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-39.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-40.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-40.png new file mode 100644 index 
00000000000..420ff61f477 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-40.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-41.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-41.png new file mode 100644 index 00000000000..420ff61f477 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-41.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-42.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-42.png new file mode 100644 index 00000000000..1a3ba606da9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-42.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-43.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-43.png new file mode 100644 index 00000000000..1a3ba606da9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-43.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-44.png 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-44.png new file mode 100644 index 00000000000..97006b5b2c7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-44.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-45.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-45.png new file mode 100644 index 00000000000..97006b5b2c7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/connect-to-datadog-as-a-custom-apm-45.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-100.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-100.png new file mode 100644 index 00000000000..05228a44fc4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-100.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-101.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-101.png new file mode 100644 index 00000000000..05228a44fc4 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-101.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-102.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-102.png new file mode 100644 index 00000000000..c7fffa599c7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-102.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-103.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-103.png new file mode 100644 index 00000000000..c7fffa599c7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-103.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-104.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-104.png new file mode 100644 index 00000000000..fd9d9539066 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-104.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-105.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-105.png new file mode 100644 index 00000000000..fd9d9539066 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-105.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-106.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-106.png new file mode 100644 index 00000000000..272dd1f26c6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-106.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-107.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-107.png new file mode 100644 index 00000000000..272dd1f26c6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-107.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-108.png 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-108.png new file mode 100644 index 00000000000..c39ce69772a Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-108.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-109.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-109.png new file mode 100644 index 00000000000..c39ce69772a Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-109.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-110.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-110.png new file mode 100644 index 00000000000..ff19cb915a1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-110.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-111.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-111.png new file mode 100644 index 00000000000..41f2f5e15c3 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-111.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-94.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-94.png new file mode 100644 index 00000000000..a0b9a6cf372 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-94.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-95.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-95.png new file mode 100644 index 00000000000..a0b9a6cf372 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-95.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-96.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-96.png new file mode 100644 index 00000000000..bd6fc300d4d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-96.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-97.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-97.png new file mode 100644 index 00000000000..2b95fd346f1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-97.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-98.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-98.png new file mode 100644 index 00000000000..7337d5fec8b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-98.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-99.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-99.png new file mode 100644 index 00000000000..7337d5fec8b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-logs-99.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-70.png 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-70.png new file mode 100644 index 00000000000..a0b9a6cf372 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-70.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-71.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-71.png new file mode 100644 index 00000000000..c1396d88b71 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-71.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-72.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-72.png new file mode 100644 index 00000000000..1279bc8ee3b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-72.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-73.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-73.png new file mode 100644 index 00000000000..d570f9bb2f7 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-73.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-74.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-74.png new file mode 100644 index 00000000000..03345893333 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-74.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-75.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-75.png new file mode 100644 index 00000000000..f957a2d2014 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-75.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-76.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-76.png new file mode 100644 index 00000000000..399007fd8e5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-76.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-77.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-77.png new file mode 100644 index 00000000000..09d67227157 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-77.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-78.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-78.png new file mode 100644 index 00000000000..13149a6998b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-78.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-79.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-79.png new file mode 100644 index 00000000000..13149a6998b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-79.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-80.png 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-80.png new file mode 100644 index 00000000000..46525a95552 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-80.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-81.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-81.png new file mode 100644 index 00000000000..d84fe43b89c Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-81.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-82.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-82.png new file mode 100644 index 00000000000..d84fe43b89c Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-82.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-83.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-83.png new file mode 100644 index 00000000000..06debe6466d Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-83.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-84.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-84.png new file mode 100644 index 00000000000..06debe6466d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-84.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-85.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-85.png new file mode 100644 index 00000000000..7f3c91c143e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-85.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-86.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-86.png new file mode 100644 index 00000000000..72b02154f7e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-86.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-87.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-87.png new file mode 100644 index 00000000000..51d6cd74a4b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-87.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-88.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-88.png new file mode 100644 index 00000000000..f051178bb89 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-88.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-89.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-89.png new file mode 100644 index 00000000000..a17b77b23a5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-89.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-90.png 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-90.png new file mode 100644 index 00000000000..8cde0d4f7b5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-90.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-91.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-91.png new file mode 100644 index 00000000000..7d5200868d2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-91.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-92.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-92.png new file mode 100644 index 00000000000..6e453cb7a8b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-92.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-93.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-93.png new file mode 100644 index 00000000000..3490bbf187b Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/monitor-applications-24-7-with-custom-metrics-93.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-46.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-46.png new file mode 100644 index 00000000000..3e288fb480b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-46.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-47.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-47.png new file mode 100644 index 00000000000..a26640ec9ff Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-47.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-48.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-48.png new file mode 100644 index 00000000000..74895c86587 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-48.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-49.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-49.png new file mode 100644 index 00000000000..d570f9bb2f7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-49.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-50.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-50.png new file mode 100644 index 00000000000..03345893333 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-50.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-51.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-51.png new file mode 100644 index 00000000000..4d1124b87b8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-51.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-52.png 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-52.png new file mode 100644 index 00000000000..57643b23cb8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-52.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-53.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-53.png new file mode 100644 index 00000000000..a301b3b7d25 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-53.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-54.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-54.png new file mode 100644 index 00000000000..5ea01d02cfe Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-54.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-55.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-55.png new file mode 100644 index 00000000000..5ea01d02cfe Binary 
files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-55.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-56.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-56.png new file mode 100644 index 00000000000..284b3177a99 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-56.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-57.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-57.png new file mode 100644 index 00000000000..810da372814 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-app-dynamics-as-a-custom-apm-57.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-logs-63.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-logs-63.png new file mode 100644 index 00000000000..82249486595 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-logs-63.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-logs-64.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-logs-64.png new file mode 100644 index 00000000000..2519aa10c13 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-logs-64.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-logs-65.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-logs-65.png new file mode 100644 index 00000000000..67d8552f985 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-logs-65.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-logs-66.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-logs-66.png new file mode 100644 index 00000000000..ff0cca3765e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-logs-66.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-logs-67.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-logs-67.png new file mode 100644 index 00000000000..39af2e585f7 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-logs-67.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-logs-68.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-logs-68.png new file mode 100644 index 00000000000..ee95342df80 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-logs-68.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-logs-69.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-logs-69.png new file mode 100644 index 00000000000..1d7c084a373 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-logs-69.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-13.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-13.png new file mode 100644 index 00000000000..82249486595 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-13.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-14.png 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-14.png new file mode 100644 index 00000000000..deca6dbbc39 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-14.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-15.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-15.png new file mode 100644 index 00000000000..d570f9bb2f7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-15.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-16.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-16.png new file mode 100644 index 00000000000..03345893333 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-16.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-17.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-17.png new file mode 100644 index 00000000000..47334f2e546 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-17.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-18.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-18.png new file mode 100644 index 00000000000..47334f2e546 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-18.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-19.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-19.png new file mode 100644 index 00000000000..c8b2a0a41d8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-19.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-20.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-20.png new file mode 100644 index 00000000000..c8b2a0a41d8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-20.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-21.png 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-21.png new file mode 100644 index 00000000000..476697294ac Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-21.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-22.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-22.png new file mode 100644 index 00000000000..476697294ac Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-22.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-23.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-23.png new file mode 100644 index 00000000000..1aa4ea48aa8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-23.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-24.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-24.png new file mode 100644 index 00000000000..1aa4ea48aa8 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-24.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-25.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-25.png new file mode 100644 index 00000000000..d84fe43b89c Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-25.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-26.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-26.png new file mode 100644 index 00000000000..d84fe43b89c Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-26.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-27.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-27.png new file mode 100644 index 00000000000..06debe6466d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-27.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-28.png 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-28.png new file mode 100644 index 00000000000..06debe6466d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-28.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-29.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-29.png new file mode 100644 index 00000000000..813ff2ff9b7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-29.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-30.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-30.png new file mode 100644 index 00000000000..813ff2ff9b7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-30.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-31.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-31.png new file mode 100644 index 00000000000..33aecf6c04d Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-31.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-32.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-32.png new file mode 100644 index 00000000000..33aecf6c04d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-32.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-33.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-33.png new file mode 100644 index 00000000000..c9cee8314f5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-33.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-34.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-34.png new file mode 100644 index 00000000000..c9cee8314f5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-34.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-35.png 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-35.png new file mode 100644 index 00000000000..83ff1c5139d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-35.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-36.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-36.png new file mode 100644 index 00000000000..83ff1c5139d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-36.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-37.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-37.png new file mode 100644 index 00000000000..0dc13fee294 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-custom-metrics-37.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-00.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-00.png new file mode 100644 index 00000000000..558b7336948 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-00.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-01.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-01.png new file mode 100644 index 00000000000..a26640ec9ff Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-01.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-02.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-02.png new file mode 100644 index 00000000000..d570f9bb2f7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-02.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-03.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-03.png new file mode 100644 index 00000000000..03345893333 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-03.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-04.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-04.png new file mode 100644 index 00000000000..4e748f3e1f5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-04.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-05.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-05.png new file mode 100644 index 00000000000..4e748f3e1f5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-05.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-06.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-06.png new file mode 100644 index 00000000000..71d717bc2b1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-06.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-07.png 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-07.png new file mode 100644 index 00000000000..71d717bc2b1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-07.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-08.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-08.png new file mode 100644 index 00000000000..e4da3d3dc92 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-08.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-09.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-09.png new file mode 100644 index 00000000000..e4da3d3dc92 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-09.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-10.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-10.png new file mode 100644 index 00000000000..a51559af4bc Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-10.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-11.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-11.png new file mode 100644 index 00000000000..a51559af4bc Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-11.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-12.png b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-12.png new file mode 100644 index 00000000000..75fc7a1b83a Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/static/verify-deployments-with-datadog-as-a-custom-apm-12.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/verify-deployments-with-app-dynamics-as-a-custom-apm.md b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/verify-deployments-with-app-dynamics-as-a-custom-apm.md new file mode 100644 index 00000000000..56b8a757b5c --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/verify-deployments-with-app-dynamics-as-a-custom-apm.md @@ -0,0 +1,207 @@ +--- +title: Verify Deployments with AppDynamics as a Custom APM +description: After adding AppDynamics as a Custom 
Verification Provider, you can use it as a verification step in a Workflow. +sidebar_position: 90 +helpdocs_topic_id: 0qvier4m49 +helpdocs_category_id: ep5nt3dyrb +helpdocs_is_private: false +helpdocs_is_published: true +--- + +After adding AppDynamics as a Custom Verification Provider, you can use it as a verification step in a Workflow. The following sections outline how to select the AppDynamics metrics you need. + + +### Before You Begin + +* See [Custom Verification Overview](custom-verification-overview.md). +* See [Connect to AppDynamics as a Custom APM](connect-to-app-dynamics-as-a-custom-apm.md). + +### Step 1: Set Up the Deployment Verification + +You can add verification steps to a Workflow after you have performed at least one successful deployment. To begin the Workflow setup: + +1. In Harness, open the Workflow that deploys the Service you will monitor with AppDynamics. +2. In the Workflow, in **Verify Service**, click **Add Step**. +3. In the resulting **Add Step** settings, select **Performance Monitoring** > **Custom Metrics**. +4. Click **Next**. The **Configure Custom Metrics** settings appear. + +### Step 2: Metrics Data Server + +In the **Metrics Data Server** drop-down, select the Custom Verification Provider. + +### Step 3: Metric Type + +Set the **Metric Type** to either **Infrastructure** or **Transaction**. + +Your settings will now look something like this: + +![](./static/verify-deployments-with-app-dynamics-as-a-custom-apm-46.png) + +### Step 4: Metric Collections + +Beside **Metric Collections**, click **Add** to display the **New Metrics Collection** settings. + +Most fields here define Harness settings for collecting and grouping metrics. The exceptions are settings where you will map JSON response keys to Harness fields. + +![](./static/verify-deployments-with-app-dynamics-as-a-custom-apm-47.png) + +Fill out the **New Metrics Collection** settings using the following information. 
+ +![](./static/verify-deployments-with-app-dynamics-as-a-custom-apm-48.png) + +### Step 5: Metrics Name + +Enter an arbitrary name for this metric. (This is not an AppDynamics value; it will be internal to Harness.) + +### Step 6: Metrics Type + +Select the type of events you want to monitor. If you selected **Infrastructure** back in the **Metrics Verification** settings, your choices here are **Infrastructure** or **Values**. If you selected **Transaction** in the **Metrics Verification** settings, your choices here are **Errors**, **Response Time**, or **Throughput**. + +#### Always Use Throughput with Error and Response Time Metrics + +Whenever you use the Error metric type, you should also add another metric for Throughput with the same Transaction Name. + +![](./static/verify-deployments-with-app-dynamics-as-a-custom-apm-49.png) + +Harness analyzes errors as an error percentage, and without throughput the error count does not provide much information. + +The same setup should be used with the Response Time metric. Whenever you set up a Response Time metric, set up a Throughput metric with the same Transaction Name. + +![](./static/verify-deployments-with-app-dynamics-as-a-custom-apm-50.png) + +### Step 7: Metrics Collection URL + +This is the API query that will return a JSON response. See the next section for details on setting up the query. + +The query for the **Metrics Collection URL** follows this syntax: + + +``` +rest/applications/cv-app/metric-data?metric-path=Business Transaction Performance|Business Transactions|<tier-name>|/<transaction-name>/<metric-name>&time-range-type=BETWEEN_TIMES&start-time=${start_time}&end-time=${end_time}&output=JSON&rollup=false +``` +Above, the values in `<...>` brackets are placeholders for parameters that you will define. The values in `${...}` braces are placeholders used for querying the data, which will be substituted at runtime with real values. To build your literal query: + +1. 
In the AppDynamics Metric Browser's Metric Tree, right-click the metric you want to monitor, and then select **Copy REST URL**. + In the example below, we've selected the **Throughput** metric `/todolist/exception/Calls per Minute`. Its REST URL is now on the clipboard: + + ![](./static/verify-deployments-with-app-dynamics-as-a-custom-apm-51.png) + +2. Paste the resulting URL into the **Metrics Collection URL** field. +3. Truncate the URL to the substring that follows `.../controller/`. + +Your literal query—as copied, pasted into the **Metrics Collection URL** field, and then truncated—will now look something like this: + + +``` +rest/applications/cv-app/metric-data?metric-path=Business%20Transaction%20Performance%7CBusiness%20Transactions%7Cdocker-tier%7C%2Ftodolist%2Fexception%7CCalls%20per%20Minute&time-range-type=BEFORE_NOW&duration-in-mins=60 +``` +At the end of the query, replace this substring from AppDynamics' default REST URL: + + +``` +&time-range-type=BEFORE_NOW&duration-in-mins=60 +``` +...with this substring, whose `${...}` placeholders are used to query for dynamic runtime data: + + +``` +&time-range-type=BETWEEN_TIMES&start-time=${start_time}&end-time=${end_time}&output=JSON&rollup=false +``` +Your literal query should now look something like this: + + +``` +rest/applications/cv-app/metric-data?metric-path=Business%20Transaction%20Performance%7CBusiness%20Transactions%7Cdocker-tier%7C%2Ftodolist%2Fexception%7CCalls%20per%20Minute&time-range-type=BETWEEN_TIMES&start-time=${start_time}&end-time=${end_time}&output=JSON&rollup=false +``` +Next, you will refine your query by specifying the REST method, and by mapping response keys to Harness fields. + +### Step 8: Metrics Method + +In the **Metrics Method** drop-down, select either **GET** or **POST**, depending on the metric you're monitoring. + +If you select **POST** here, the **Metric Collection Body** field appears. Enter the JSON body to send as a payload when making a REST call to AppDynamics. 
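+Whichever method you use, the `${start_time}`/`${end_time}` placeholders are resolved at runtime. The sketch below is illustrative only (the template helper is not Harness code — Harness performs this substitution internally); it assumes the substituted values are epoch-millisecond timestamps, as the `startTimeInMillis` key in the JSON response suggests:

```python
import time
from string import Template

# The Metrics Collection URL built above, with the runtime placeholders
# Harness substitutes at collection time. (Illustrative sketch only.)
QUERY = Template(
    "rest/applications/cv-app/metric-data"
    "?metric-path=Business%20Transaction%20Performance"
    "%7CBusiness%20Transactions%7Cdocker-tier%7C%2Ftodolist%2Fexception"
    "%7CCalls%20per%20Minute"
    "&time-range-type=BETWEEN_TIMES"
    "&start-time=${start_time}&end-time=${end_time}"
    "&output=JSON&rollup=false"
)

def resolve_query(start_ms: int, end_ms: int) -> str:
    """Fill in the epoch-millisecond window for one collection cycle."""
    return QUERY.substitute(start_time=start_ms, end_time=end_ms)

end_ms = int(time.time() * 1000)      # "now" in epoch milliseconds
start_ms = end_ms - 15 * 60 * 1000    # a 15-minute window ending now
print(resolve_query(start_ms, end_ms))
```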
For details, see [Verify Deployments with Custom APMs and Logs](verify-deployments-with-custom-metrics.md). + +The remaining Metrics Collection settings map the keys in the JSON response to Harness fields. + +### Step 9: Transaction Name + +Select **Fixed** or **Dynamic**, depending on the transaction name. In our example, we will use **Fixed**. + +If you select **Dynamic**, you will see the **Transaction Name Path** and **Regex to transform Transaction Name** fields. The **Transaction Name Path** is filled out in the same way as the **Name** field described just below. You use **Regex to transform Transaction Name** to truncate the value of the **Transaction Name Path**, if needed. + +### Step 10: Name + +Enter the Business Transaction name, as it appears in the AppDynamics Metric Tree. In this example, you would enter: `/todolist/exception`. + +### Step 11: Metrics Value + +To map this value, run the query: click **Guide from Example**, then click **SEND** in the resulting pop-up: + +![](./static/verify-deployments-with-app-dynamics-as-a-custom-apm-52.png) + +In the JSON response, click the `value` key. This maps it to Harness' **Metrics Value** field: + +![](./static/verify-deployments-with-app-dynamics-as-a-custom-apm-53.png) + +### Step 12: Timestamp + +As with the preceding field, click this field's **Guide from Example** link to query AppDynamics. + +![](./static/verify-deployments-with-app-dynamics-as-a-custom-apm-54.png) + +In the JSON response, click the `startTimeInMillis` key, which includes the timestamp. This maps the key to Harness' **Timestamp** field. + +![](./static/verify-deployments-with-app-dynamics-as-a-custom-apm-55.png) + +### Step 13: Timestamp Format + +Optionally, enter a timestamp format, following the Java [SimpleDateFormat](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html). An example timestamp syntax might be `yyyy-MM-dd'T'HH:mm:ss.SSSX`. 
If you leave this field empty, Harness will use the default range of 1 hour previous (`now-1h`) to now. + +When you've filled in all the required Metrics Collection parameters, your query string in the **Metrics Collection URL** field will be modified to something like this: + + +``` +rest/applications/cv-app/metric-data?metric-path=Business%20Transaction%20Performance%7CBusiness%20Transactions%7Cdocker-tier%7C%2Ftodolist%2Fexception%7CCalls%20per%20Minute&time-range-type=BETWEEN_TIMES&start-time=${start_time}&end-time=${end_time}&output=JSON&rollup=false +``` +Your **New Metrics Collection** settings will now look something like this: + +![](./static/verify-deployments-with-app-dynamics-as-a-custom-apm-56.png) + +Click **Test**. If your configuration tests successfully, click **Add** to add this collection. + +This returns you to the **Custom APM Verification** settings' initial **Metrics Verification** page, where you fill in the remaining settings. + +Fill in the remaining **Metrics Verification** settings as follows: + +### Step 14: Expression for Host/Container Name + +The expression entered here should resolve to a host/container name in your deployment environment. By default, the expression is `${host.hostName}`. + +### Step 15: Analysis Time Duration + +Set the duration for the verification step. If a verification step exceeds this value, the Workflow's [Failure Strategy](../../model-cd-pipeline/workflows/workflow-configuration.md#failure-strategy) is triggered. For example, if the Failure Strategy is **Ignore**, then the verification state is marked **Failed**, but the Workflow execution continues. + +### Step 16: Data Collection Interval + +Specify how often Harness will collect data. For example, when the interval is `1` (1 minute), Harness will collect every minute. If it is `3`, Harness will collect every 3 minutes. If your total duration is 15 minutes and the interval is 3 minutes, Harness will collect every 3 minutes for a total of 5 times over 15 minutes. 
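+The interval arithmetic can be sketched as follows (an illustrative snippet; the helper name is hypothetical, not a Harness API):

```python
# Number of data collections over the Analysis Time Duration, given the
# Data Collection Interval (both in minutes): one collection per interval.
def num_collections(duration_minutes: int, interval_minutes: int) -> int:
    return duration_minutes // interval_minutes

# A 15-minute duration with a 3-minute interval gives 5 collections,
# matching the example above.
print(num_collections(15, 3))
```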
Harness recommends the value `2`. + +### Step 17: Baseline for Risk Analysis + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). + +### Step 18: Algorithm for Sensitivity + +Harness recommends that you normally accept this drop-down's **Very sensitive** default. If your analysis needs differ, you can flag fewer deviations as anomalies by instead selecting **Moderately sensitive** or **Least sensitive**. + +### Step 19: Include instances from previous phases + +If you are using this verification step in a multi-phase deployment, select this checkbox to include instances used in previous phases when collecting data. Do not apply this setting to the first phase in a multi-phase deployment. + +### Step 20: Execute with previous steps + +Enable this check box to run this verification step in parallel with the previous steps in **Verify Service**. + +When you've entered all your **Custom APM Verification** settings, they will look something like this: + +![](./static/verify-deployments-with-app-dynamics-as-a-custom-apm-57.png) + +Click **Submit**. The AppDynamics Custom verification step is now added to the Workflow. Run your Workflow to see the results. + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/verify-deployments-with-custom-logs.md b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/verify-deployments-with-custom-logs.md new file mode 100644 index 00000000000..9dcf3561c59 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/verify-deployments-with-custom-logs.md @@ -0,0 +1,131 @@ +--- +title: Verify Deployments with Custom Logs +description: Add a custom Logs verification step in a Harness Workflow. 
+# sidebar_position: 2 +helpdocs_topic_id: d4i9pp3uea +helpdocs_category_id: ep5nt3dyrb +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The following procedure describes how to add a custom Logs verification step in a Harness Workflow. For more information about Workflows, see [Add a Workflow](../../model-cd-pipeline/workflows/workflow-configuration.md). + +Once you run a deployment and your custom logs provider obtains its data, Harness machine-learning verification analysis will assess the risk level of the deployment using the data from the provider. + +In order to obtain the names of the host(s), pod(s), or container(s) where your service is deployed, the verification provider should be added to your Workflow *after* you have run at least one successful deployment. + +### Before You Begin + +* See [Custom Verification Overview](custom-verification-overview.md). +* See [Connect to Custom Verification for Custom Logs](connect-to-custom-verification-for-custom-logs.md). + +### Step 1: Set Up the Deployment Verification + +To verify your deployment with a custom metric or log provider, do the following: + +1. Ensure that you have added Custom Logs Provider as a verification provider, as described in [Connect to Custom Verification for Custom Logs](connect-to-custom-verification-for-custom-logs.md). +2. In your workflow, under **Verify Service**, click **Add Verification**. + + ![](./static/verify-deployments-with-custom-logs-63.png) + +3. In the resulting **Add Step** settings, select **Log Analysis** > **Custom Log Verification**. +4. Click **Next**. The **Configure Custom Log Verification** settings appear. +5. In **Log Data Provider**, select the custom logs provider you added, described in [Connect to Custom Verification for Custom Logs](connect-to-custom-verification-for-custom-logs.md). +6. 
Click **Add Log Collection** to add a **Log Collection** section.![](./static/verify-deployments-with-custom-logs-64.png) + +### Step 2: Request Method + +In **Request Method**, select GET or POST. + +### Step 3: Search URL + +In **Search URL**, enter the API query that will return a JSON response. + +Make sure the following parameters are included in the query. These placeholders will be replaced with the actual values during execution. + +1. `${start_time}` or `${start_time_seconds}`: This is the placeholder parameter to specify the start time of the query. It is similar to the value specified in custom metrics verification. +2. `${end_time}` or `${end_time_seconds}`: This is the placeholder parameter to specify the end time of the query. It is similar to the value specified in custom metrics verification. +3. `${host}`: This is the placeholder for querying based on the host during deployment verification. This is NOT a required field if the setup is for a Previous Analysis. + +In the remaining settings, you will map the keys in the JSON response to Harness settings to identify where data, such as the log message and timestamp, is located in the JSON response. + +### Step 4: Search Body + +In **Search Body**, enter any JSON search input for your query. If you need to send a token, but do not want to send it in plaintext, you can use a Harness [encrypted text secret](https://docs.harness.io/article/au38zpufhr-secret-management#encrypted_text). + +### Step 5: Response Type + +In **Response Type**, select **JSON**. + +### Step 6: Log Message JSON Path + +In **Log Message JSON Path**, use **Guide from Example** to query the log provider and return the JSON response. + +![](./static/verify-deployments-with-custom-logs-65.png) + +The URL is a combination of the Verification Cloud Provider **Base URL** and the **Log Collection URL** you entered. + +Click **SEND**. In the JSON response, click the key that includes the log message path. 
+ +![](./static/verify-deployments-with-custom-logs-66.png) + +The log message path key is added to **Log Message JSON Path**: + +![](./static/verify-deployments-with-custom-logs-67.png) + +### Step 7: Hostname JSON Path + +Use **Guide from Example** to query the log provider and return the JSON response. In the JSON response, click the key that includes the hostname path. + +![](./static/verify-deployments-with-custom-logs-68.png) + +### Step 8: Regex to Transform Host Name + +If the JSON value returned requires transformation in order to be used, enter the regex expression here. For example, if the value in the host name JSON path of the response is `pod_name:harness-test.pod.name` and the actual pod name is simply `harness-test.pod.name`, you can write a regular expression to remove the `pod_name:` prefix from the response value. + +### Step 9: Timestamp JSON Path + +Use **Guide from Example** to query the log provider and return the JSON response. In the JSON response, click the key that includes the timestamp. + +![](./static/verify-deployments-with-custom-logs-69.png) + +### Step 10: Timestamp Format + +Enter the format of the timestamp included in the query request (not response). The format follows the [Java SimpleDateFormat](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html). For example, a timestamp syntax might be **yyyy-MM-dd'T'HH:mm:ss.SSSX**. If you leave this field empty, Harness will use the default range of 1 hour previous (now - 1h) to now. + +Click **Add**. The Log Collection is added. + +### Step 11: Expression for Host/Container + +The expression entered here should resolve to a host/container name in your deployment environment. By default, the expression is **${instance.host.hostName}**. + +### Step 12: Analysis Time Duration + +Set the duration for the verification step. 
If a verification step exceeds the value, the workflow [Failure Strategy](../../model-cd-pipeline/workflows/workflow-configuration.md#failure-strategy) is triggered. For example, if the Failure Strategy is **Ignore**, then the verification state is marked **Failed** but the workflow execution continues. + +Harness waits 2-3 minutes before beginning the analysis to avoid initial deployment noise. This is standard practice with monitoring tools. + +### Step 13: Data Collection Interval + +Specify the frequency at which Harness will run the query. Harness recommends the value 2. + +### Step 14: Baseline for Risk Analysis + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). + +### Step 15: Execute with previous steps + +Select this checkbox to run this verification step in parallel with the previous steps in **Verify Service**. + +### Step 16: Include instances from previous phases + +If you are using this verification step in a multi-phase deployment, select this checkbox to include instances used in previous phases when collecting data. Do not apply this setting to the first phase in a multi-phase deployment. + +### Step 17: Wait interval before execution + +Set how long the deployment process should wait before executing the verification step. + +### Review: Additional Notes + +The **Compare With Previous Run** option is used for Canary deployments where the second phase is compared to the first phase, and the third phase is compared to the second phase, and so on. Do not use this setting in a single-phase workflow or in the first phase of a multi-phase workflow. 
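

To make the mapping concrete, here is a minimal Python sketch of what these settings amount to: substituting the `${start_time}`, `${end_time}`, and `${host}` placeholders into a Search URL, then reading the log message, host name, and timestamp out of a JSON response. The endpoint path and response field names are hypothetical, not a real provider API.

```python
import json
import re
from string import Template

# Hypothetical Search URL using the placeholders Harness substitutes at runtime.
search_url = ("api/v1/logs/search?start=${start_time}&end=${end_time}"
              "&query=host:${host}")

def render_url(url, start_time, end_time, host):
    # Harness replaces ${start_time}, ${end_time}, and ${host} before each query.
    return Template(url).substitute(
        start_time=start_time, end_time=end_time, host=host)

# A sample JSON response, with hypothetical keys you would click in
# Guide from Example.
response = json.loads("""
{
  "hits": [
    {
      "message": "GET /todolist 500",
      "host": "pod_name:harness-test.pod.name",
      "timestamp": "2020-01-01T10:00:00.000Z"
    }
  ]
}
""")

hit = response["hits"][0]
log_message = hit["message"]    # Log Message JSON Path -> hits.message
timestamp = hit["timestamp"]    # Timestamp JSON Path -> hits.timestamp

# Regex to Transform Host Name: strip the pod_name: prefix, as in Step 8.
host_name = re.sub(r"^pod_name:", "", hit["host"])
```

Harness performs the substitution and JSON-path extraction for you; the sketch only illustrates how the pieces fit together.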
+ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/verify-deployments-with-custom-metrics.md b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/verify-deployments-with-custom-metrics.md new file mode 100644 index 00000000000..e4d138da17c --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/verify-deployments-with-custom-metrics.md @@ -0,0 +1,263 @@ +--- +title: Verify Deployments with Custom Metrics +description: This topic describes how to add a custom APM (metrics) verification step in a Harness Workflow. For more information about Workflows, see Add a Workflow. Once you run a deployment and your custom met… +sidebar_position: 60 +helpdocs_topic_id: 5h6e4zudr2 +helpdocs_category_id: ep5nt3dyrb +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes how to add a custom APM (metrics) verification step in a Harness Workflow. For more information about Workflows, see [Add a Workflow](../../model-cd-pipeline/workflows/workflow-configuration.md). + +Once you run a deployment and your custom metrics provider obtains its data, Harness machine-learning verification analysis will assess the risk level of the deployment using the data from the provider. + +In order to obtain the names of the host(s), pod(s), or container(s) where your service is deployed, the verification provider should be added to your Workflow *after* you have run at least one successful deployment. + + +### Before You Begin + +* See [Custom Verification Overview](custom-verification-overview.md). +* See [Connect to Custom Verification for Custom Metrics](connect-to-custom-verification-for-custom-metrics.md). + +### Step 1: Set Up the Deployment Verification + +To verify your deployment with a custom metric or log provider, do the following: + +1. 
Ensure that you have added Custom Metrics Provider as a verification provider, as described in [Connect to Custom Verification for Custom Metrics](connect-to-custom-verification-for-custom-metrics.md). +2. In your workflow, under **Verify Service**, click **Add Verification**. + + ![](./static/verify-deployments-with-custom-metrics-13.png) + +3. In the resulting **Add Step** settings, select **Performance Monitoring** > **Custom Metrics**. +4. Click **Next**. The **Configure Custom Metrics** settings appear. +5. In **Metrics Data Provider**, select the custom metric provider you added, as described in [Connect to Custom Verification for Custom Metrics](connect-to-custom-verification-for-custom-metrics.md). +6. In **Metrics Type**, select **Infrastructure** or **Transaction**. +7. Add a **Metric Collections** section. + ![](./static/verify-deployments-with-custom-metrics-14.png) + +### Step 2: Metrics Name + +Enter a name for the type of error you are collecting, such as **HttpErrors**. + +### Step 3: Metrics Type + +For the **Infrastructure** Metrics Type, select the type of metric you want to collect: + +* **Infra** – Infrastructure metrics, such as CPU, memory, and HTTP errors. +* **Value** – [Apdex](https://docs.newrelic.com/docs/apm/new-relic-apm/apdex/apdex-measure-user-satisfaction) (measures user satisfaction with response time). +* **Lower Value** – Values below the average. + +For the **Transaction** Metrics Type, select the type of metric you want to collect: + +* Error +* Response Time +* Throughput + +#### Always Use Throughput with Error and Response Time Metrics + +Whenever you use the Error metric type, you should also add another metric for Throughput with the same Transaction Name. + +![](./static/verify-deployments-with-custom-metrics-15.png) + +Harness analyzes errors as an error percentage; without throughput, the error count alone does not provide much information. + +The same setup should be used with the Response Time metric. 
Whenever you set up a Response Time metric, set up a Throughput metric with the same Transaction Name. + +![](./static/verify-deployments-with-custom-metrics-16.png) + +### Step 4: Metrics Collection URL + +Enter a query for your verification. You can simply make the query in your Verification Provider and paste it in this field. For example, in New Relic Insights, you might have the following query: + +[![](./static/verify-deployments-with-custom-metrics-17.png)](./static/verify-deployments-with-custom-metrics-17.png) + +You can paste the query into the **Metrics Collection URL** field: + +[![](./static/verify-deployments-with-custom-metrics-19.png)](./static/verify-deployments-with-custom-metrics-19.png) + +For information on New Relic Insights NRQL, see [NRQL syntax, components, functions](https://docs.newrelic.com/docs/insights/nrql-new-relic-query-language/nrql-resources/nrql-syntax-components-functions) from New Relic. The time range for a query (**SINCE** clause in our example) should be less than 5 minutes to avoid overstepping the time limit for some verification providers. + +Most often, when you create your query, you will include a hostname placeholder in the query, `${host}`. This placeholder will be used later when setting up the **Metrics Value** and other settings that use **Guide from an example**. + +For example, if your query is: + +`SELECT count(host) FROM Transaction SINCE 30 MINUTES AGO COMPARE WITH 1 WEEK AGO WHERE host = '37c444347ac2' TIMESERIES` + +Then you replace the host name with the `${host}` placeholder and paste the query into **Metrics Collection URL**: + +`SELECT count(host) FROM Transaction SINCE 30 MINUTES AGO COMPARE WITH 1 WEEK AGO WHERE host = '${host}' TIMESERIES` + +Make sure the `${start_time_seconds}` and `${end_time_seconds}` parameters are included in the query. + +These variables define a 1-minute interval from the time the Workflow Verification starts. 
To modify the time interval, click **Edit Step** > **APM** > **Custom** and update the **Data Collection Interval** field (in minutes). + +For example, the part of the query with these values appears as follows: + +`from=${start_time_seconds}&to=${end_time_seconds}` + +For verification providers that accept values in milliseconds, you can use the `${start_time}` and `${end_time}` variables. + +### Step 5: Metrics Method + +Select **GET** or **POST**. If you select **POST**, the **Metric Collection Body** field appears. + +[![](./static/verify-deployments-with-custom-metrics-21.png)](./static/verify-deployments-with-custom-metrics-21.png) + +In **Metric Collection Body**, enter the JSON body to send as a payload when making a REST call to the APM Provider. The requirements of the JSON body will depend on your APM provider. + +You can use variables you created in the Service and Workflow in the JSON, as well as [Harness built-in variables](https://docs.harness.io/article/9dvxcegm90-variables). + +### Step 6: Response Mapping Transaction Name + +These settings specify which JSON fields in the responses to use. + +Select **Fixed** or **Dynamic**. + +**Fixed:** Use this option when all metrics are for the same transaction. For example, a single login page. + +**Dynamic:** Use this option when the metrics are for multiple transactions. + +### Step 7: Name + +Fixed + +Enter the name with which you want to identify the transaction. + +### Step 8: Transaction Name Path + +Dynamic + +This is the JSON label for identifying a transaction name. In the case of our example New Relic Insights query, the FACET clause is used to group results by the attribute **transactionName**. You can obtain the field name that records the **transactionName** by using the **Guide from an example** feature: + +1. Click **Guide from an example**. The **Select Key from Example** popover appears. 
+ + [![](./static/verify-deployments-with-custom-metrics-23.png)](./static/verify-deployments-with-custom-metrics-23.png) + + The URL shown is based on the query you entered in the **Metrics Collection URL** field earlier. + +2. In **${host}**, select a host to query. Click the query next to **GET** to see how the host you selected replaces the `${host}` placeholder in your query. +3. Click **SEND**. The query is executed and the JSON is returned. +4. Locate the field name that is used to identify transactions. In our New Relic Insights query, it is the **facets.name** field. + If no metrics are found, you will see a `METRIC DATA NOT FOUND` error. + In New Relic Insights, you can find the name in the JSON of your query results. + + [![](./static/verify-deployments-with-custom-metrics-25.png)](./static/verify-deployments-with-custom-metrics-25.png) + +5. Click the field **name** under facets. The field path is added to the **Transaction Name Path** field. + + [![](./static/verify-deployments-with-custom-metrics-27.png)](./static/verify-deployments-with-custom-metrics-27.png) + +### Step 9: Regex to transform Transaction Name + +Dynamic + +Enter a regex expression here to obtain the specific name from the transaction path. + +For example, if your Transaction Name Path JSON evaluated to a value such as `name : TxnName`, you can write a regex to remove everything other than `TxnName`. + +For example, `(name:(.*),)` or `(?<=:).*(?=,)`. + +### Step 10: Metrics Value + +Specify the value for the event count. This is used to filter and aggregate data returned in a SELECT statement. To find the correct label for the value, do the following: + +1. Click **Guide from an example**. The example popover appears. +The URL shown is based on the query you entered in the **Metrics Collection URL** field earlier. The **${host}** field refers to the `${host}` variable in your query. +2. Click **Submit**. The query is executed and the JSON is returned. 
+If no metrics are found, you will see a `METRIC DATA NOT FOUND` error. +3. Locate the field name that is used to count events. In our New Relic Insights query, it is the **facets.timeSeries.results.count** field. + In New Relic Insights, you can find the name in the JSON of your query results. + + [![](./static/verify-deployments-with-custom-metrics-29.png)](./static/verify-deployments-with-custom-metrics-29.png) + +4. Click the name of the field **count**. The field path is added to the **Metrics Value** field. + + [![](./static/verify-deployments-with-custom-metrics-31.png)](./static/verify-deployments-with-custom-metrics-31.png) + +### Step 11: Hostname JSON path + +(Displayed if `${host}` is present in the **Metrics Collection URL** query) + +Use **Guide from an example** to select a host and query your APM. Click the name of the hostname JSON label in the response. + +If there is no hostname in the response, leave this setting empty. + +### Step 12: Timestamp + +Specify the value for the timestamp in the query. To find the correct label for the value, do the following: + +1. Click **Guide from an example**. The **Select Key from Example** popover appears. +The URL shown is based on the query you entered in the **Metrics Collection URL** field earlier. +2. Click **Submit**. The query is executed and the JSON is returned. +3. Locate the field name that is used for the time-series end time, **endTimeSeconds**. In our New Relic Insights query, it is the **facets.timeSeries.endTimeSeconds** field. + In New Relic Insights, you can find the name in the JSON of your query results. + + [![](./static/verify-deployments-with-custom-metrics-33.png)](./static/verify-deployments-with-custom-metrics-33.png) + +4. Click the name of the field **endTimeSeconds**. The field path is added to the **Timestamp** field. 
+ + [![](./static/verify-deployments-with-custom-metrics-35.png)](./static/verify-deployments-with-custom-metrics-35.png) + +### Step 13: Timestamp Format + +Enter the format of the timestamp included in the query *request* (not response), set in **Timestamp**. The format follows the [Java SimpleDateFormat](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html). For example, a timestamp syntax might be **yyyy-MM-dd'T'HH:mm:ss.SSSX**. If you leave this field empty, Harness will use the default range of 1 hour previous (now-1h) to now. + +When you are done, the settings will look something like this: + +![](./static/verify-deployments-with-custom-metrics-37.png) + +Click **Test** and then click **Add**. + +### Step 14: Custom Thresholds + +In the **Configure Custom Metrics** dialog, you can access the **Custom Thresholds** section once you have configured at least one Metrics Collection. You can use Custom Thresholds to define two types of rules that override normal verification behavior: + +* **Ignore Hints** that instruct Harness to skip certain metrics/value combinations from verification analysis. +* **Fast-Fail Hints** that cause a Workflow to enter a failed state. + +For details about defining Custom Thresholds, see [Apply Custom Thresholds to Deployment Verification](../tuning-tracking-verification/custom-thresholds.md). + +In a deployment where a Fast-Fail Hint moves a Workflow to a failed state, the Workflow's Details panel for that Verification step will indicate the corresponding threshold. + +### Step 15: Expression for Host/Container + +The expression entered here should resolve to a host/container name in your deployment environment. By default, the expression is **${instance.host.hostName}**. + +### Step 16: Analysis Time Duration + +Set the duration for the verification step. 
If a verification step exceeds the value, the workflow [Failure Strategy](../../model-cd-pipeline/workflows/workflow-configuration.md#failure-strategy) is triggered. For example, if the Failure Strategy is **Ignore**, then the verification state is marked **Failed** but the workflow execution continues. + +Harness waits 2-3 minutes before beginning the analysis to avoid initial deployment noise. This is standard practice with monitoring tools. + +### Step 17: Data Collection Interval + +Specify the frequency at which Harness will run the query. Harness recommends the value 1 (1 minute). + +If the data collection interval is greater than 1 minute, Harness expects a response with multiple timestamped rows to be returned (1 per minute). This is to avoid losing granularity. + +### Step 18: Baseline for Risk Analysis + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). + +### Step 19: Execute with previous steps + +Select this checkbox to run this verification step in parallel with the previous steps in **Verify Service**. + +### Step 20: Failure Criteria + +Specify the sensitivity of the failure criteria. When the criteria are met, the workflow **Failure Strategy** is triggered. + +### Step 21: Include instances from previous phases + +If you are using this verification step in a multi-phase deployment, select this checkbox to include instances used in previous phases when collecting data. Do not apply this setting to the first phase in a multi-phase deployment. + +### Step 22: Wait interval before execution + +Set how long the deployment process should wait before executing the verification step. + +### Review: Additional Notes + +* Depending on the custom metric provider you select, you might need to provide different information to the **Metric Collections** section. For example, you might need to provide a hostname for the **Guide from an example** popover to use to retrieve data. 
The hostname will be the host/container/pod/node name where the artifact is deployed. If you look in the JSON for the deployment environment, the hostname is typically the **name** label under the **host** label. +* The **Compare With Previous Run** option is used for Canary deployments where the second phase is compared to the first phase, and the third phase is compared to the second phase, and so on. Do not use this setting in a single-phase workflow or in the first phase of a multi-phase workflow. + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/verify-deployments-with-datadog-as-a-custom-apm.md b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/verify-deployments-with-datadog-as-a-custom-apm.md new file mode 100644 index 00000000000..7336ebb2384 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/custom-metrics-and-logs-verification/verify-deployments-with-datadog-as-a-custom-apm.md @@ -0,0 +1,183 @@ +--- +title: Verify Deployments with Datadog as a Custom APM +description: To solve [problem], [solution] [benefit of feature]. In this topic -- Before You Begin. Step 1 -- Set Up the Deployment Verification. Step 2 -- Metric Collections. Step 3 -- Metrics Name. Step 4 -- Metrics Typ… +sidebar_position: 70 +helpdocs_topic_id: h2crh8rvbr +helpdocs_category_id: ep5nt3dyrb +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes how to add a Datadog custom APM verification step in a Harness Workflow. + + +### Before You Begin + +* See [Custom Verification Overview](custom-verification-overview.md). +* See [Connect to Datadog as a Custom APM](connect-to-datadog-as-a-custom-apm.md). + +### Step 1: Set Up the Deployment Verification + +1. In Harness, open the **Workflow** that deploys the service you are monitoring with Datadog. You add verification steps after you have performed at least one successful deployment. +2. 
In the Workflow, in **Verify Service**, click **Add Step**. +3. In the resulting **Add Step** settings, select **Performance Monitoring** > **Custom Metrics**. +4. Click **Next**. The **Configure Custom Metrics** settings appear. +5. In the **Metrics Data Server** drop-down, select the Custom Verification Provider you already set up. +6. Set the **Metric Type** to either **Infrastructure** or **Transaction**. + +Your settings will now look something like this:![](./static/verify-deployments-with-datadog-as-a-custom-apm-00.png) + +### Step 2: Metric Collections + +1. Beside **Metric Collections**, click **Add** to display the **New Metrics Collection** settings. +All of the settings in **New Metrics Collection** are Harness settings for collecting and grouping metrics, except for the settings where you will map JSON response keys to Harness fields.![](./static/verify-deployments-with-datadog-as-a-custom-apm-01.png) +2. Fill out the **New Metrics Collection** settings using the following information. You will set up an API query for Harness to execute that returns a JSON response. Next, you will map the keys in the JSON response to the fields Harness needs to locate your metric values and host. + +### Step 3: Metrics Name + +Enter the name to use for your metric. This is not a Datadog value. It is simply the name used for metrics collected by Harness. + +### Step 4: Metrics Type + +Enter the type of metric, such as **Infra**. These are Harness types, not Datadog's. + +#### Always Use Throughput with Error and Response Time Metrics + +Whenever you use the Error metric type, you should also add another metric for Throughput with the same Transaction Name. + +![](./static/verify-deployments-with-datadog-as-a-custom-apm-02.png)Harness analyzes errors as an error percentage; without throughput, the error count alone does not provide much information. + +The same setup should be used with the Response Time metric. 
Whenever you set up a Response Time metric, set up a Throughput metric with the same Transaction Name. + +![](./static/verify-deployments-with-datadog-as-a-custom-apm-03.png) + +### Step 5: Metrics Collection URL + +This is the API query that will return a JSON response. + +The query for Metrics Collection URL follows this syntax: + + +``` +query?query={pod_name:${host}}by{pod_name}.rollup(avg,60)&from=${start_time_seconds}&to=${end_time_seconds} +``` +The values in `${...}` braces are placeholders used for querying the data. These are substituted at runtime with real values. + +Replace `` in the query with the correct metric name. The metric names are available in Datadog Metric Explorer: + +[![](./static/verify-deployments-with-datadog-as-a-custom-apm-04.png)](./static/verify-deployments-with-datadog-as-a-custom-apm-04.png) + +For example, to search for the `kubernetes.memory.usage_pct` metric, your query would look like this: + + +``` +query?query=kubernetes.memory.usage_pct{pod_name:${host}}by{pod_name}.rollup(avg,60)&from=${start_time_seconds}&to=${end_time_seconds} +``` +### Step 6: Metrics Method + +Select **GET** or **POST**. + +### Step 7: Response Mapping + +In this section you will map the keys in the JSON response to Harness fields. + +### Step 8: Transaction Name + +Select either **Fixed** or **Dynamic**, depending on the transaction name. In our example, we will use **Fixed**. If you select **Dynamic**, you will see the **Transaction Name Path** and **Regex to transform Transaction Name** fields. + +The **Transaction Name Path** is filled out in the same way as the **Name** field below. You use **Regex to transform Transaction Name** to truncate the value of the **Transaction Name Path**, if needed. + +### Step 9: Name + +Enter a name to map to the metric name. For example, if the metric name is `kubernetes.memory.usage_pct`, then use a name like **KubeMemory**. 
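

Putting Steps 5 and 9 together, the following Python sketch shows how the Metrics Collection URL resolves once Harness substitutes the placeholders at runtime. The pod name and epoch-second values are illustrative assumptions, not values from a real deployment.

```python
from string import Template

# The Step 5 query for the kubernetes.memory.usage_pct metric, with the
# ${host}, ${start_time_seconds}, and ${end_time_seconds} placeholders.
collection_url = Template(
    "query?query=kubernetes.memory.usage_pct{pod_name:${host}}by{pod_name}"
    ".rollup(avg,60)&from=${start_time_seconds}&to=${end_time_seconds}"
)

# Hypothetical values: Harness fills these in for each collection interval.
url = collection_url.substitute(
    host="harness-example-pod",      # pod name copied from Metrics Explorer
    start_time_seconds=1577836800,   # window start, epoch seconds
    end_time_seconds=1577836860,     # one minute later
)
```

The resolved `url` is the request sent to Datadog for that one-minute window.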
+ +### Step 10: Metrics Value + +Run the query using **Guide from an example** to see the JSON response and pick a key to map to **Metrics Value**. + +In **Guide from an example**, specify the time range and host for the query. To specify the time range, click in the **${startTime}** and **${endTime}** calendars. + +To specify the **${host}**, get the full name of a host from Datadog Metrics Explorer: + +[![](./static/verify-deployments-with-datadog-as-a-custom-apm-06.png)](./static/verify-deployments-with-datadog-as-a-custom-apm-06.png) + +To copy the name, click in the graph and click **Copy tags to clipboard**. + +[![](./static/verify-deployments-with-datadog-as-a-custom-apm-08.png)](./static/verify-deployments-with-datadog-as-a-custom-apm-08.png) + +Next, paste the name in the **${host}** field. + +[![](./static/verify-deployments-with-datadog-as-a-custom-apm-10.png)](./static/verify-deployments-with-datadog-as-a-custom-apm-10.png) + +Click **Submit**. The JSON results appear. Click the name of the field to map to **Metrics Value**. + +### Step 11: Timestamp + +Use **Guide from an example** to query Datadog and return the JSON response. In the JSON response, click the key that includes the timestamp. + +### Step 12: Timestamp Format + +Enter a timestamp format. The format follows the Java [SimpleDateFormat](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html). For example, a timestamp syntax might be `yyyy-MM-dd'T'HH:mm:ss.SSSX`. If you leave this field empty, Harness will use the default range of 1 hour previous (`now-1h`) to now. + +Now that the Metric Collection is complete, click **UPDATE** to return to the rest of the **Metrics Verification State** settings. + +### Step 13: Canary Analysis + +Harness will compare the metrics received for the nodes deployed in each phase with metrics received for the rest of the nodes in the application. 
For example, if this phase deploys to 25% of your nodes, the metrics received from Custom APM during this deployment for these nodes will be compared with metrics received for the other 75% during the defined period of time.
+
+### Step 14: Previous Analysis
+
+Harness will compare the metrics received for the nodes deployed in each phase with metrics received for all the nodes during the previous deployment. For example, if this phase deploys V1.2 to node A, the metrics received from Custom APM during this deployment will be compared to the metrics for nodes A, B, and C during the previous deployment (V1.1). Previous Analysis is best used when you have predictable load, such as in a QA environment.
+
+### Step 15: Failure Criteria
+
+Specify the sensitivity of the failure criteria. When the criteria is met, the Workflow's Failure Strategy is triggered.
+
+### Step 16: Data Collection Interval
+
+Specify the frequency with which Harness will query Datadog. The value **2** is recommended.
+
+Click **SUBMIT**. Now the Datadog custom verification step is added to the Workflow. Run your Workflow to see the results.
+
+### Review: Verification Results
+
+Once you have deployed your Workflow (or Pipeline) using the Custom verification step, you can automatically verify cloud application and infrastructure performance across your deployment.
+
+#### Workflow Verification
+
+To see the results of Harness machine-learning evaluation of your Custom verification, in your Workflow or Pipeline deployment, you can expand the **Verify Service** step and then click the **APM Verification** step.
+
+![](./static/verify-deployments-with-datadog-as-a-custom-apm-12.png)
+
+#### Continuous Verification
+
+You can also see the evaluation in the **Continuous Verification** dashboard. 
The Workflow verification view is for the DevOps user who developed the workflow; the **Continuous Verification** dashboard is where all deployments are displayed for developers and others interested in deployment analysis.
+
+To learn about the verification analysis features, see the following sections.
+
+#### Transaction Analysis
+
+**Execution details:** See the details of verification execution. Total is the total time the verification step took, and Analysis duration is how long the analysis took.
+
+**Risk level analysis:** Get an overall risk level and view the cluster chart to see events.
+
+**Transaction-level summary:** See a summary of each transaction with the query string, error values comparison, and a risk analysis summary.
+
+![](./static/_verify-ddog-00-trx-anal.png)
+
+#### Execution Analysis
+
+**Event type:** Filter cluster chart events by Unknown Event, Unexpected Frequency, Anticipated Event, Baseline Event, and Ignore Event.
+
+**Cluster chart:** View the chart to see how the selected event contrasts with anticipated events. Click each event to see its log details.
+
+![](./static/_verify-ddog-01-ex-anal.png)
+
+#### Event Management
+
+**Event-level analysis:** See the threat level for each event captured.
+
+**Tune event capture:** Remove events from analysis at the Service, Workflow, Execution, or overall level.
+
+**Event distribution:** Click the chart icon to see an event distribution including the measured data, baseline data, and event frequency.
+
diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/1-datadog-connection-setup.md b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/1-datadog-connection-setup.md
new file mode 100644
index 00000000000..8509a6f95d1
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/1-datadog-connection-setup.md
@@ -0,0 +1,87 @@
+---
+title: Connect to Datadog
+description: Connect Harness to Datadog and verify the success of your deployments and live microservices.
+sidebar_position: 10
+helpdocs_topic_id: yqris5svub
+helpdocs_category_id: x9hs9wviib
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+The first step in using Datadog with Harness is to set up a Datadog Verification Provider in Harness.
+
+A Harness Verification Provider is a connection to monitoring tools such as Datadog. Once Harness is connected, you can use Harness 24/7 Service Guard and Deployment Verification with your Datadog data and analysis.
+
+### Before You Begin
+
+* Set up a Harness Application, containing a Service and Environment. See [Create an Application](../../model-cd-pipeline/applications/application-configuration.md).
+* See the [Datadog Verification Overview](../continuous-verification-overview/concepts-cv/datadog-verification-overview.md).
+
+### Step 1: Add Datadog Verification Provider
+
+To add Datadog as a verification provider:
+
+1. Click **Setup**.
+2. Click **Connectors**, and then click **Verification Providers**.
+3. Click **Add Verification Provider**, and select **Datadog**. The **Datadog** dialog for your provider appears.
+
+   ![](./static/1-datadog-connection-setup-17.png)
+
+4. Complete the following fields of the **Add Datadog Verification Provider** dialog.
+
+You need Datadog Admin access to create the API key needed to connect Harness to Datadog.
+
+### Step 2: Display Name
+
+Enter a display name for the provider. 
If you are going to use multiple providers of the same type, ensure you give each provider a different name.
+
+### Step 3: URL
+
+Enter the URL of the Datadog server.
+
+Simply take the URL from the Datadog dashboard, such as `https://app.datadoghq.com/`, and add the API and version (`api/v1/`) to the end.
+
+For example, `https://app.datadoghq.com/api/v1/`.
+
+The trailing forward slash after `v1` (`v1/`) is mandatory.
+
+### Step 4: Encrypted API Key
+
+For secrets and other sensitive settings, select or create a new [Harness Encrypted Text secret](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets). Enter the API key for API calls.
+
+To create an API key in Datadog, do the following:
+
+1. In **Datadog**, hover over **Integrations**, and then click **APIs**.
+
+   [![](./static/1-datadog-connection-setup-18.png)](./static/1-datadog-connection-setup-18.png)
+
+   The **APIs** page appears.
+
+   [![](./static/1-datadog-connection-setup-20.png)](./static/1-datadog-connection-setup-20.png)
+
+2. In **API Keys**, in **New API key**, enter the name for the new API key, such as **Harness**, and then click **Create API key**.
+3. Copy the API key and, in **Harness**, paste it into the **API Key** field.
+
+### Step 5: Encrypted Application Key
+
+For secrets and other sensitive settings, select or create a new [Harness Encrypted Text secret](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets). Enter the application key.
+
+To create an application key in Datadog, do the following:
+
+1. In **Datadog**, hover over **Integrations**, and then click **APIs**. The **APIs** page appears.[![](./static/1-datadog-connection-setup-22.png)](./static/1-datadog-connection-setup-22.png)
+2. In **Application Keys**, in **New application key**, enter a name for the application key, such as **Harness**, and click **Create Application Key**.
+3. Copy the application key and, in **Harness**, paste it into the **Application Key** field. 
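Before pasting the keys into Harness, you can sanity-check them against Datadog's public `api/v1/validate` endpoint. The sketch below only builds the URL and headers (it does not send anything), and it also shows the mandatory trailing slash handling from Step 3; the key value is a placeholder:

```python
def datadog_base_url(dashboard_url: str) -> str:
    """Append api/v1/ to the dashboard URL; the trailing slash
    after v1 is mandatory (see Step 3)."""
    return dashboard_url.rstrip("/") + "/api/v1/"

def validate_request(base_url: str, api_key: str):
    """Return the URL and headers for a GET against /api/v1/validate,
    which checks that a Datadog API key works."""
    return base_url + "validate", {"DD-API-KEY": api_key}

url, headers = validate_request(
    datadog_base_url("https://app.datadoghq.com"), "<your-api-key>")
print(url)  # https://app.datadoghq.com/api/v1/validate
```

You could pass `url` and `headers` to any HTTP client (or `curl`) to confirm the key is valid before saving the Verification Provider.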
+
+### Step 6: Usage Scope
+
+Usage scope is inherited from the secrets used in the settings.
+
+Datadog has a limit of about 300 API calls per hour. Requests to analyze many metrics can hit the limit. Datadog can increase the limit upon request. For more information, see [Rate Limiting](https://docs.datadoghq.com/api/?lang=python#rate-limiting) from Datadog.
+
+### Next Steps
+
+* [Monitor Applications 24/7 with Datadog Metrics](monitor-applications-24-7-with-datadog-metrics.md)
+* [Monitor Applications 24/7 with Datadog Logging](2-24-7-service-guard-for-datadog.md)
+* [Verify Deployments with Datadog Logging](3-verify-deployments-with-datadog.md)
+* [Verify Deployments with Datadog Metrics](verify-deployments-with-datadog-metrics.md)
+
diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/2-24-7-service-guard-for-datadog.md b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/2-24-7-service-guard-for-datadog.md
new file mode 100644
index 00000000000..7640f577b84
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/2-24-7-service-guard-for-datadog.md
@@ -0,0 +1,90 @@
+---
+title: Monitor Applications 24/7 with Datadog Logging
+description: Combined with Datadog, Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment.
+sidebar_position: 20
+helpdocs_topic_id: j1b1fhh592
+helpdocs_category_id: x9hs9wviib
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. For more information, see [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md).
+
+You can add your Datadog monitoring to Harness 24/7 Service Guard in your Harness Application Environment. 
For a setup overview, see [Connect to Datadog](1-datadog-connection-setup.md).
+
+This section assumes you have a Harness Application set up and containing a Service and Environment. For steps on setting up a Harness Application, see [Application Checklist](../../model-cd-pipeline/applications/application-configuration.md).
+
+### Before You Begin
+
+* See the [Datadog Verification Overview](../continuous-verification-overview/concepts-cv/datadog-verification-overview.md).
+* Make sure you [Connect to Datadog](1-datadog-connection-setup.md).
+
+### Visual Summary
+
+Here's an example of a Datadog Logging setup for 24/7 Service Guard.
+
+![](./static/2-24-7-service-guard-for-datadog-12.png)
+
+### Step 1: Set Up 24/7 Service Guard
+
+To set up 24/7 Service Guard for Datadog, do the following:
+
+1. Ensure that you have added Datadog as a Harness Verification Provider, as described in [Verification Provider Setup](1-datadog-connection-setup.md#datadog-verification-provider-setup).
+2. In your Harness Application, ensure that you have added a Service, as described in [Services](../../model-cd-pipeline/setup-services/service-configuration.md). For 24/7 Service Guard, you do not need to add an Artifact Source to the Service, or configure its settings. You simply need to create a Service and name it. It will represent your application for 24/7 Service Guard.
+3. In your Harness Application, click **Environments**.
+4. In **Environments**, ensure that you have added an Environment for the Service you added. For steps on adding an Environment, see [Environments](../../model-cd-pipeline/environments/environment-configuration.md).
+5. Click the Environment for your Service. Typically, the **Environment Type** is **Production**.
+6. In the **Environment** page, locate **24/7 Service Guard**.
+
+   ![](./static/2-24-7-service-guard-for-datadog-13.png)
+
+7. In **24/7 Service Guard**, click **Add Service Verification**, and then click **Datadog**. The **Datadog** dialog appears. 
+
+### Step 2: Display Name
+
+The name that will identify this service on the **Continuous Verification** dashboard. Use a name that indicates the environment and monitoring tool, such as **Datadog**.
+
+### Step 3: Service
+
+The Harness Service to monitor with 24/7 Service Guard.
+
+### Step 4: Datadog Server
+
+Select the Datadog Verification Provider to use.
+
+### Step 5: Log Verification
+
+Select the Log Verification option.
+
+### Step 6: Search Keywords
+
+Enter search keywords, such as `*expression*`. Separate keywords using spaces. (Follow the Datadog [log search syntax](https://docs.datadoghq.com/logs/explorer/search/#search-syntax).)
+
+You can also enter variable expressions, such as:
+
+`error OR ${serviceVariable.error_type}`
+
+### Step 7: Field Name for Host/Containers
+
+Enter the log field that contains the name of the host/container for which you want logs, for example, the `pod_name` tag.
+
+Harness uses this field to group data and perform analysis at the container level.
+
+### Step 8: Enable 24/7 Service Guard
+
+Enable this setting to turn on 24/7 Service Guard. If you simply want to set up 24/7 Service Guard, but not enable it, leave this setting disabled.
+
+### Step 9: Verify your Settings
+
+1. Click **Test**. Harness verifies the settings you entered.
+
+   ![](./static/2-24-7-service-guard-for-datadog-14.png)
+
+2. Click **Submit**. The Datadog 24/7 Service Guard is added.
+
+   ![](./static/2-24-7-service-guard-for-datadog-15.png)
+
+To see the running 24/7 Service Guard analysis, click **Continuous Verification**.
+
+The 24/7 Service Guard dashboard displays the production verification results.
+
+![](./static/2-24-7-service-guard-for-datadog-16.png)
+
+For more information, see [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). 
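To picture how the search keywords (Step 6) and the host/container field (Step 7) fit together, here is a hedged sketch of the kind of log query they resolve to. The field and values are illustrative, and this is not Harness's query builder:

```python
def log_search_query(keywords, host_field, host):
    """Combine free-text keywords with a host/container filter using
    Datadog's space-separated log search syntax."""
    return " ".join(list(keywords) + [f"{host_field}:{host}"])

q = log_search_query(["*error*", "*timeout*"], "pod_name", "harness-example-pod")
print(q)  # *error* *timeout* pod_name:harness-example-pod
```

The host filter is what lets Harness group the returned logs per container and analyze each one separately.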
+
+### Next Steps
+
+* [Verify Deployments with Datadog Logging](3-verify-deployments-with-datadog.md)
+* [Verify Deployments with Datadog Metrics](verify-deployments-with-datadog-metrics.md)
+* [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria)
+
diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/3-verify-deployments-with-datadog.md b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/3-verify-deployments-with-datadog.md
new file mode 100644
index 00000000000..390ddc6628e
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/3-verify-deployments-with-datadog.md
@@ -0,0 +1,167 @@
+---
+title: Verify Deployments with Datadog Logging
+description: Harness can analyze Datadog logs to verify, rollback, and improve deployments.
+# sidebar_position: 2
+helpdocs_topic_id: vd4jgv41io
+helpdocs_category_id: x9hs9wviib
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness can analyze Datadog logs to verify, rollback, and improve deployments. To apply this analysis to your deployments, you set up Datadog as a verification step in a Harness Workflow.
+
+Once you run a deployment, and Datadog performs verification, Harness' machine-learning verification analysis will assess the risk level of the deployment.
+
+In order to obtain the names of the host(s), pod(s), or container(s) where your service is deployed, the verification provider should be added to your workflow *after* you have run at least one successful deployment.
+
+### Before You Begin
+
+* Set up a Harness Application, containing a Service and Environment. See [Create an Application](../../model-cd-pipeline/applications/application-configuration.md). 
+* See the [Datadog Verification Overview](../continuous-verification-overview/concepts-cv/datadog-verification-overview.md).
+* Make sure you [Connect to Datadog](1-datadog-connection-setup.md).
+
+### Visual Summary
+
+Here's an example of a Datadog Logs configuration for verification.
+
+![](./static/3-verify-deployments-with-datadog-24.png)
+
+### Step 1: Set Up the Deployment Verification
+
+To verify your deployment with Datadog Logs, do the following:
+
+1. Ensure that you have added Datadog as a verification provider, as described in [Connect to Datadog](1-datadog-connection-setup.md).
+2. In your workflow, under **Verify Service**, click **Add Verification**.
+3. In the resulting **Add Step** settings, select **Performance Monitoring** > **Datadog Logs**.
+4. Click **Next**. The **Datadog Logs** settings appear.
+
+### Step 2: Datadog Log Server
+
+Select the Datadog verification provider you added earlier in [Datadog Connection Setup](1-datadog-connection-setup.md).
+
+You can also enter variable expressions, such as:
+
+`${serviceVariable.datadog_connector_name}`
+
+### Step 3: Search Keywords
+
+Enter search keywords, such as `*expression*`. Separate keywords using spaces. (Follow the Datadog [log search syntax](https://docs.datadoghq.com/logs/explorer/search/#search-syntax).)
+
+You can also enter variable expressions, such as:
+
+`error OR ${serviceVariable.error_type}`
+
+### Step 4: Field Name for Host/Container
+
+Enter the name of the **tag** in Datadog where the service instance is present.
+
+Harness uses this field to group data and perform analysis at the container level.
+
+### Step 5: Expression for Host/Container name
+
+Enter an expression that evaluates to the host/container/pod name tagged in the Datadog events.
+
+This expression relates to the field you selected in **Field Name for Host/Container**. 
You want an expression that maps the JSON field returned to Harness with the Datadog field you selected in **Field Name for Host/Container**.
+
+For example, in Datadog, a Kubernetes deployment might use the tag **pod\_name** to identify the pod where the microservice is deployed.
+
+Find where the same name is identified in the deployment environment, and use that path as the expression.
+
+For AWS EC2 hostnames, use the expression `${instance.hostName}`.
+
+For example, locate the pod name in the Datadog **Event Stream** page:
+
+1. In **Datadog**, click **Events**.
+2. Locate an event using a search query. For more information, see [Event Stream](https://docs.datadoghq.com/graphing/event_stream/) from Datadog.
+3. Expand the event by clicking the ellipsis at the end of the event title.
+
+   [![](./static/3-verify-deployments-with-datadog-25.png)](./static/3-verify-deployments-with-datadog-25.png)
+
+4. Look through the event details and locate the tag that lists the pod name for the instance where the service is deployed. In our example, the tag is **pod\_name**.
+
+   [![](./static/3-verify-deployments-with-datadog-27.png)](./static/3-verify-deployments-with-datadog-27.png)
+
+5. Next, look in the JSON for the host/container/pod in the deployment environment and identify the label containing the same hostname. The path to that label is what the expression should be in **Expression for Host/Container name**. The default expression is **${host.hostName}**. In most cases, this expression will work.
+
+### Step 6: Analysis Time duration
+
+Set the duration for the verification step. If a verification step exceeds the value, the workflow [Failure Strategy](../../model-cd-pipeline/workflows/workflow-configuration.md#failure-strategy) is triggered. For example, if the Failure Strategy is **Ignore**, then the verification state is marked **Failed** but the workflow execution continues. 
+
+See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md).
+
+### Step 7: Baseline for Risk Analysis
+
+See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md).
+
+### Step 8: Algorithm Sensitivity
+
+See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria).
+
+### Step 9: Execute with previous steps
+
+Check this checkbox to run this verification step in parallel with the previous steps in **Verify Service**.
+
+### Review: Datadog and ECS
+
+For [ECS-based deployments](https://docs.harness.io/article/08whoizbps-ecs-deployments-overview), Datadog uses the container ID to fetch data for both metrics and logs. Harness can fetch the container ID if the Harness Delegate is running on the same ECS cluster as the container; otherwise, the Delegate must be in the same AWS VPC and **port 51678** must be open for incoming traffic.
+
+### Review: Harness Expression Support in CV Settings
+
+You can use expressions (`${...}`) for [Harness built-in variables](https://docs.harness.io/article/7bpdtvhq92-workflow-variables-expressions) and custom [Service](../../model-cd-pipeline/setup-services/service-configuration.md) and [Workflow](../../model-cd-pipeline/workflows/add-workflow-variables-new-template.md) variables in the settings of Harness Verification Providers.
+
+Expression support lets you template your Workflow verification steps. You can add custom expressions for settings, and then provide values for those settings at deployment runtime. Or you can use Harness built-in variable expressions and Harness will provide values at deployment runtime automatically. 
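As a rough sketch of the templating idea (not Harness's resolver), Python's `string.Template` can mimic how a `${...}` expression in a verification setting receives a value at deployment runtime. The variable names and values below are illustrative:

```python
from string import Template

class HarnessStyleTemplate(Template):
    """Allow dotted names such as serviceVariable.error_type
    inside ${...} placeholders."""
    idpattern = r"[A-Za-z][A-Za-z0-9_.]*"

# Templated settings, as you might enter them in the Workflow step.
settings = {
    "Datadog Log Server": "${serviceVariable.datadog_connector_name}",
    "Search Keywords": "error OR ${serviceVariable.error_type}",
}
# Values supplied at deployment runtime (illustrative).
runtime_values = {
    "serviceVariable.datadog_connector_name": "Datadog-Prod",
    "serviceVariable.error_type": "HTTP500",
}
resolved = {name: HarnessStyleTemplate(text).substitute(runtime_values)
            for name, text in settings.items()}
print(resolved["Search Keywords"])  # error OR HTTP500
```

The same template can be reused across Workflows, with each deployment supplying its own values.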
+
+### Step 10: View Verification Results
+
+Once you have deployed your workflow (or pipeline) using the Datadog verification step, you can automatically verify cloud application and infrastructure performance across your deployment. For more information, see [Add a Workflow](../../model-cd-pipeline/workflows/workflow-configuration.md) and [Add a Pipeline](../../model-cd-pipeline/pipelines/pipeline-configuration.md).
+
+#### Workflow Verification
+
+To see the results of Harness machine-learning evaluation of your Datadog verification, in your workflow or pipeline deployment you can expand the **Verify Service** step and then click the **Datadog** step.
+
+![](./static/3-verify-deployments-with-datadog-29.png)
+
+#### Continuous Verification
+
+You can also see the evaluation in the **Continuous Verification** dashboard. The workflow verification view is for the DevOps user who developed the workflow. The **Continuous Verification** dashboard is where all future deployments are displayed for developers and others interested in deployment analysis.
+
+To learn about the verification analysis features, see the following sections.
+
+##### Deployments
+
+**Deployment info:** See the verification analysis for each deployment, with information on its service, environment, pipeline, and workflows.
+
+**Verification phases and providers:** See the verification phases for each verification provider. Click each provider for logs and analysis.
+
+**Verification timeline:** See when each deployment and verification was performed.
+
+##### Transaction Analysis
+
+**Execution details:** See the details of verification execution. Total is the total time the verification step took, and Analysis duration is how long the analysis took.
+
+**Risk level analysis:** Get an overall risk level and view the cluster chart to see events.
+
+**Transaction-level summary:** See a summary of each transaction with the query string, error values comparison, and a risk analysis summary.
+
+##### Execution Analysis
+
+**Event type:** Filter cluster chart events by Unknown Event, Unexpected Frequency, Anticipated Event, Baseline Event, and Ignore Event.
+
+**Cluster chart:** View the chart to see how the selected event contrasts with anticipated events. Click each event to see its log details.
+
+##### Event Management
+
+**Event-level analysis:** See the threat level for each event captured.
+
+**Tune event capture:** Remove events from analysis at the service, workflow, execution, or overall level.
+
+**Event distribution:** Click the chart icon to see an event distribution including the measured data, baseline data, and event frequency.
+
+### Next Steps
+
+* [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code)
+* [Users and Permissions](https://docs.harness.io/article/ven0bvulsj-users-and-permissions)
+* [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria)
+
diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/_category_.json b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/_category_.json
new file mode 100644
index 00000000000..7b926b691f0
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/_category_.json
@@ -0,0 +1,14 @@
+{
+  "label": "Datadog Verification",
+  "position": 60,
+  "collapsible": "true",
+  "collapsed": "true",
+  "className": "red",
+  "link": {
+    "type": "generated-index",
+    "title": "Datadog Verification"
+  },
+  "customProps": {
+    "helpdocs_category_id": "x9hs9wviib"
+  }
+}
\ No newline at end of file
diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/monitor-applications-24-7-with-datadog-metrics.md b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/monitor-applications-24-7-with-datadog-metrics.md
new file mode 100644
index 00000000000..40bd2546ddf
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/monitor-applications-24-7-with-datadog-metrics.md
@@ -0,0 +1,136 @@
+---
+title: Monitor Applications 24/7 with Datadog Metrics
+description: Combined with Datadog, Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. 
sidebar_position: 30
+helpdocs_topic_id: 16lntd8abz
+helpdocs_category_id: x9hs9wviib
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. For more information, see [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md).
+
+You can add your Datadog monitoring to Harness 24/7 Service Guard in your Harness Application Environment. For a setup overview, see [Connect to Datadog](1-datadog-connection-setup.md).
+
+This section assumes you have a Harness Application set up and containing a Service and Environment. For steps on setting up a Harness Application, see [Application Checklist](../../model-cd-pipeline/applications/application-configuration.md).
+
+### Before You Begin
+
+* See the [Datadog Verification Overview](../continuous-verification-overview/concepts-cv/datadog-verification-overview.md).
+* Make sure you [Connect to Datadog](1-datadog-connection-setup.md).
+
+### Visual Summary
+
+Here's an example of a Datadog Metrics setup for 24/7 Service Guard.
+
+![](./static/monitor-applications-24-7-with-datadog-metrics-30.png)
+
+### Step 1: Set Up 24/7 Service Guard
+
+To set up 24/7 Service Guard for Datadog, do the following:
+
+1. Ensure that you have added Datadog as a Harness Verification Provider, as described in [Verification Provider Setup](1-datadog-connection-setup.md#datadog-verification-provider-setup).
+2. In your Harness Application, ensure that you have added a Service, as described in [Services](../../model-cd-pipeline/setup-services/service-configuration.md). For 24/7 Service Guard, you do not need to add an Artifact Source to the Service, or configure its settings. You simply need to create a Service and name it. It will represent your application for 24/7 Service Guard.
+3. In your Harness Application, click **Environments**.
+4. 
In **Environments**, ensure that you have added an Environment for the Service you added. For steps on adding an Environment, see [Environments](../../model-cd-pipeline/environments/environment-configuration.md). +5. Click the Environment for your Service. Typically, the **Environment Type** is **Production**. +6. In the **Environment** page, locate **24/7 Service Guard**. + ![](./static/monitor-applications-24-7-with-datadog-metrics-31.png) + +7. In **24/7 Service Guard**, click **Add Service Verification**, and then click **Datadog**. The **Datadog** dialog appears. + + ![](./static/monitor-applications-24-7-with-datadog-metrics-32.png) + +### Step 2: Display Name + +The name that will identify this service on the **Continuous Verification** dashboard. Use a name that indicates the environment and monitoring tool, such as **Datadog**. + +### Step 3: Service + +The Harness Service to monitor with 24/7 Service Guard. + +### Step 4: Datadog Server + +Select the Datadog Verification Provider to use. + +### Step 5: Metric Verification + +Select the Metric Verification option. + +### Step 6: APM Traces + +In **Datadog Service Name**, enter which service in Datadog you want to monitor. By default, Datadog monitors the servlet hits (number of incoming requests), servlet duration, and errors. Harness will fetch this data for every Web transaction with the service you enter and display it in 24/7 Service Guard as **Errors**, **Hits**, **Request Duration**. + +### Step 7: Docker Container Metrics + +Select the Docker container infrastructure metrics to monitor. + +1. In **Datadog Tags**, enter any tags that have been applied to your metrics in Datadog. These are the same tags used in Datadog Events, Metrics Explorer, etc. 
+
+   [![](./static/monitor-applications-24-7-with-datadog-metrics-33.png)](./static/monitor-applications-24-7-with-datadog-metrics-33.png)
+
+   [![](./static/monitor-applications-24-7-with-datadog-metrics-35.png)](./static/monitor-applications-24-7-with-datadog-metrics-35.png)
+
+   Use the Datadog tag format, such as `cluster-name:harness-test`.
+
+2. In **Metrics**, select the Docker metrics to use.
+
+### Step 8: ECS Metrics
+
+1. In **Datadog Tags**, enter any tags that have been applied to your metrics in Datadog. These are the same tags used in Datadog Events, Metrics Explorer, etc. You can find these tags by following the steps in the **Docker Container Metrics** instructions above.
+2. In **Metrics**, select the ECS metrics to use.
+
+### Step 9: Datadog Custom Metrics
+
+1. In **Datadog Tags**, enter any tags that have been applied to your metrics in Datadog. These are the same tags used in Datadog Events, Metrics Explorer, etc. You can find these tags by following the steps in the **Docker Container Metrics** instructions above.
+2. In **Metric Type**, select the metric to use. To use multiple types, click **Add**.
+3. In **Display Name**, enter a name to identify this metric in the Harness dashboards.
+4. In **Metric Name**, enter the metric you want to use. These are the metrics you will see in the Datadog Metrics Explorer **Graph** menu:
+
+[![](./static/monitor-applications-24-7-with-datadog-metrics-37.png)](./static/monitor-applications-24-7-with-datadog-metrics-37.png)
+
+#### Always Use Throughput with Error and Response Time Metrics
+
+Whenever you use the Error metric type, you should also add another metric for Throughput with the same Group Name.
+
+![](./static/monitor-applications-24-7-with-datadog-metrics-39.png)
+
+Harness analyzes errors as an error percentage; without throughput, the error number does not provide much information.
+
+The same setup should be used with the Response Time metric. 
Whenever you set up a Response Time metric, set up a Throughput metric with the same Group Name.
+
+![](./static/monitor-applications-24-7-with-datadog-metrics-40.png)
+
+### Step 10: Algorithm Sensitivity
+
+See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria).
+
+**Moderately sensitive** is recommended.
+
+### Step 11: Enable 24/7 Service Guard
+
+Enable this setting to turn on 24/7 Service Guard. If you simply want to set up 24/7 Service Guard, but not enable it, leave this setting disabled.
+
+### Step 12: Verify your Settings
+
+1. Click **Test**. Harness verifies the settings you entered.
+
+   ![](./static/monitor-applications-24-7-with-datadog-metrics-41.png)
+
+2. Click **Submit**. The Datadog 24/7 Service Guard is added.
+
+![](./static/monitor-applications-24-7-with-datadog-metrics-42.png)
+
+To see the running 24/7 Service Guard analysis, click **Continuous Verification**.
+
+The 24/7 Service Guard dashboard displays the production verification results.
+
+![](./static/monitor-applications-24-7-with-datadog-metrics-43.png)
+
+ 
+ +### Next Steps + +* [Verify Deployments with Datadog Logging](3-verify-deployments-with-datadog.md) +* [Verify Deployments with Datadog Metrics](verify-deployments-with-datadog-metrics.md) +* [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/1-datadog-connection-setup-17.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/1-datadog-connection-setup-17.png new file mode 100644 index 00000000000..7785daeaa54 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/1-datadog-connection-setup-17.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/1-datadog-connection-setup-18.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/1-datadog-connection-setup-18.png new file mode 100644 index 00000000000..cd8488f2c6b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/1-datadog-connection-setup-18.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/1-datadog-connection-setup-19.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/1-datadog-connection-setup-19.png new file mode 100644 index 00000000000..cd8488f2c6b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/1-datadog-connection-setup-19.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/1-datadog-connection-setup-20.png
b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/1-datadog-connection-setup-20.png new file mode 100644 index 00000000000..420ff61f477 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/1-datadog-connection-setup-20.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/1-datadog-connection-setup-21.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/1-datadog-connection-setup-21.png new file mode 100644 index 00000000000..420ff61f477 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/1-datadog-connection-setup-21.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/1-datadog-connection-setup-22.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/1-datadog-connection-setup-22.png new file mode 100644 index 00000000000..1a3ba606da9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/1-datadog-connection-setup-22.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/1-datadog-connection-setup-23.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/1-datadog-connection-setup-23.png new file mode 100644 index 00000000000..1a3ba606da9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/1-datadog-connection-setup-23.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/2-24-7-service-guard-for-datadog-12.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/2-24-7-service-guard-for-datadog-12.png 
new file mode 100644 index 00000000000..046bb6f58d6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/2-24-7-service-guard-for-datadog-12.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/2-24-7-service-guard-for-datadog-13.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/2-24-7-service-guard-for-datadog-13.png new file mode 100644 index 00000000000..a0b9a6cf372 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/2-24-7-service-guard-for-datadog-13.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/2-24-7-service-guard-for-datadog-14.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/2-24-7-service-guard-for-datadog-14.png new file mode 100644 index 00000000000..cafe665bf2f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/2-24-7-service-guard-for-datadog-14.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/2-24-7-service-guard-for-datadog-15.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/2-24-7-service-guard-for-datadog-15.png new file mode 100644 index 00000000000..57da1e85ad9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/2-24-7-service-guard-for-datadog-15.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/2-24-7-service-guard-for-datadog-16.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/2-24-7-service-guard-for-datadog-16.png new file mode 100644 index 00000000000..9eca9ca006e Binary 
files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/2-24-7-service-guard-for-datadog-16.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/3-verify-deployments-with-datadog-24.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/3-verify-deployments-with-datadog-24.png new file mode 100644 index 00000000000..142d2a3dd02 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/3-verify-deployments-with-datadog-24.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/3-verify-deployments-with-datadog-25.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/3-verify-deployments-with-datadog-25.png new file mode 100644 index 00000000000..17f400c1d64 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/3-verify-deployments-with-datadog-25.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/3-verify-deployments-with-datadog-26.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/3-verify-deployments-with-datadog-26.png new file mode 100644 index 00000000000..17f400c1d64 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/3-verify-deployments-with-datadog-26.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/3-verify-deployments-with-datadog-27.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/3-verify-deployments-with-datadog-27.png new file mode 100644 index 00000000000..0d8fb31d261 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/3-verify-deployments-with-datadog-27.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/3-verify-deployments-with-datadog-28.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/3-verify-deployments-with-datadog-28.png new file mode 100644 index 00000000000..0d8fb31d261 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/3-verify-deployments-with-datadog-28.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/3-verify-deployments-with-datadog-29.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/3-verify-deployments-with-datadog-29.png new file mode 100644 index 00000000000..7aeaad02cb4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/3-verify-deployments-with-datadog-29.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/_dd_00_deployments.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/_dd_00_deployments.png new file mode 100644 index 00000000000..b814b6bf25d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/_dd_00_deployments.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/_dd_01_trx-analysis.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/_dd_01_trx-analysis.png new file mode 100644 index 00000000000..11f4e4fbd2f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/_dd_01_trx-analysis.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/_dd_02_ex-analysis.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/_dd_02_ex-analysis.png new file mode 100644 index 00000000000..96cdcd2e943 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/_dd_02_ex-analysis.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/_dd_03_ev-mg.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/_dd_03_ev-mg.png new file mode 100644 index 00000000000..778ad1dbe21 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/_dd_03_ev-mg.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-30.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-30.png new file mode 100644 index 00000000000..8b55e68fed9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-30.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-31.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-31.png new file mode 100644 index 00000000000..a0b9a6cf372 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-31.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-32.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-32.png new file mode 100644 index 00000000000..a49879dd89a Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-32.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-33.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-33.png new file mode 100644 index 00000000000..22b3d1f67fb Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-33.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-34.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-34.png new file mode 100644 index 00000000000..22b3d1f67fb Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-34.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-35.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-35.png new file mode 100644 index 00000000000..3d55593ba37 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-35.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-36.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-36.png new file mode 100644 index 00000000000..3d55593ba37 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-36.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-37.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-37.png new file mode 100644 index 00000000000..48c9109013b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-37.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-38.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-38.png new file mode 100644 index 00000000000..48c9109013b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-38.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-39.png 
b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-39.png new file mode 100644 index 00000000000..63089c07122 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-39.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-40.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-40.png new file mode 100644 index 00000000000..86e114aeead Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-40.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-41.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-41.png new file mode 100644 index 00000000000..cafe665bf2f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-41.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-42.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-42.png new file mode 100644 index 00000000000..57da1e85ad9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-42.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-43.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-43.png new file mode 100644 index 00000000000..9eca9ca006e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/monitor-applications-24-7-with-datadog-metrics-43.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-00.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-00.png new file mode 100644 index 00000000000..40050c3725d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-00.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-01.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-01.png new file mode 100644 index 00000000000..df3dba70d8d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-01.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-02.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-02.png new file mode 100644 index 00000000000..df3dba70d8d Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-02.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-03.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-03.png new file mode 100644 index 00000000000..bd453a35627 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-03.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-04.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-04.png new file mode 100644 index 00000000000..bd453a35627 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-04.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-05.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-05.png new file mode 100644 index 00000000000..48c9109013b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-05.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-06.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-06.png new file mode 100644 index 00000000000..48c9109013b Binary 
files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-06.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-07.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-07.png new file mode 100644 index 00000000000..17f400c1d64 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-07.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-08.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-08.png new file mode 100644 index 00000000000..17f400c1d64 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-08.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-09.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-09.png new file mode 100644 index 00000000000..0d8fb31d261 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-09.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-10.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-10.png new file mode 100644 index 
00000000000..0d8fb31d261 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-10.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-11.png b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-11.png new file mode 100644 index 00000000000..7aeaad02cb4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/static/verify-deployments-with-datadog-metrics-11.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/verify-deployments-with-datadog-metrics.md b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/verify-deployments-with-datadog-metrics.md new file mode 100644 index 00000000000..a72361f79d0 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/datadog-verification/verify-deployments-with-datadog-metrics.md @@ -0,0 +1,192 @@ +--- +title: Verify Deployments with Datadog Metrics +description: Harness can analyze Datadog metrics to verify, rollback, and improve deployments. +sidebar_position: 40 +helpdocs_topic_id: o9pl8tfvix +helpdocs_category_id: x9hs9wviib +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness can analyze Datadog metrics to verify, rollback, and improve deployments. To apply this analysis to your deployments, you set up Datadog as a verification step in a Harness Workflow. + +Once you run a deployment and Datadog performs verification, Harness' machine-learning verification analysis will assess the risk level of the deployment.
+ +To obtain the names of the host(s), pod(s), or container(s) where your service is deployed, the verification provider should be added to your workflow *after* you have run at least one successful deployment. + + +### Before You Begin + +* Set up a Harness Application, containing a Service and Environment. See [Create an Application](../../model-cd-pipeline/applications/application-configuration.md). +* See the [Datadog Verification Overview](../continuous-verification-overview/concepts-cv/datadog-verification-overview.md). +* Make sure you [Connect to Datadog](1-datadog-connection-setup.md). + +### Limitations + +For Harness Workflows, Datadog metrics verification is supported only for infrastructure metrics and Datadog custom metrics, as described in this topic. + +Harness does not provide out-of-the-box support for Datadog APM traces because Datadog does not support API calls for those. + +### Visual Summary + +Here's an example of a Datadog Metrics configuration for verification. + +![](./static/verify-deployments-with-datadog-metrics-00.png) + +### Step 1: Set Up the Deployment Verification + +To verify your deployment with Datadog Metrics, do the following: + +1. Ensure that you have added Datadog as a verification provider, as described in [Connect to Datadog](1-datadog-connection-setup.md). +2. In your workflow, under **Verify Service**, click **Add Verification**. +3. In the resulting **Add Step** settings, select **Performance Monitoring** > **Datadog Metrics**. +4. Click **Next**. The **Datadog Metrics** settings appear. + +### Step 2: Datadog Metrics Server + +Select the Datadog verification provider you added earlier in [Datadog Connection Setup](1-datadog-connection-setup.md). + +You can also enter variable expressions, such as: + + `${serviceVariable.datadog_connector_name}` + +### Step 3: Datadog Monitoring + +Here you can select any of the Datadog API metrics.
For a list of the API metrics, see [Data Collected](https://docs.datadoghq.com/integrations/docker_daemon/#data-collected) from Datadog. + +### Step 4: Infrastructure Metrics + +Select the [Docker](https://docs.datadoghq.com/integrations/docker_daemon/#metrics), [Kubernetes](https://docs.datadoghq.com/agent/kubernetes/metrics/#kubernetes), and [ECS](https://docs.datadoghq.com/integrations/ecs_fargate/#metrics) metrics to use. + +### Step 5: Datadog Custom Metrics + +1. In **Hostname Identifier**, enter the field name in Datadog for the host name. + + [![](./static/verify-deployments-with-datadog-metrics-01.png)](./static/verify-deployments-with-datadog-metrics-01.png) + + This is the **hosts** value used when searching: + + [![](./static/verify-deployments-with-datadog-metrics-03.png)](./static/verify-deployments-with-datadog-metrics-03.png) + +2. In **Metric Type**, select the metric to use. To use multiple types, click **Add**. +3. In **Display Name**, enter a name to identify this metric in the Harness dashboards. +4. In **Group Name**, enter the name of the service or request context to which the metric relates. For example, **Login**. +5. In **Metric Name**, enter the metric you want to use. These are the metrics you will see in the Datadog Metrics Explorer **Graph** menu: + +[![](./static/verify-deployments-with-datadog-metrics-05.png)](./static/verify-deployments-with-datadog-metrics-05.png) + +### Step 6: Expression for Host/Container name + +Enter an expression that evaluates to the host/container/pod name tagged in the Datadog events. + +For example, in Datadog, a Kubernetes deployment might use the tag **pod\_name** to identify the pod where the microservice is deployed. + +Find where the same name is identified in the deployment environment, and use that path as the expression. For example, locate the pod name in the Datadog **Event Stream** page: + +1. In **Datadog**, click **Events**. +2. Locate an event using a search query.
For more information, see [Event Stream](https://docs.datadoghq.com/graphing/event_stream/) from Datadog. +3. Expand the event by clicking the ellipsis at the end of the event title. + + [![](./static/verify-deployments-with-datadog-metrics-07.png)](./static/verify-deployments-with-datadog-metrics-07.png) + +4. Look through the event details and locate the tag that lists the pod name for the instance where the service is deployed. In our example, the tag is **pod\_name**. + + [![](./static/verify-deployments-with-datadog-metrics-09.png)](./static/verify-deployments-with-datadog-metrics-09.png) + +5. Next, look in the JSON for the host/container/pod in the deployment environment and identify the label containing the same hostname. The path to that label is what the expression should be in **Expression for Host/Container name**. The default expression is **${host.hostName}**. In most cases, this expression will work. + +### Step 7: Analysis Time Duration + +Set the duration for the verification step. If a verification step exceeds the value, the workflow [Failure Strategy](../../model-cd-pipeline/workflows/workflow-configuration.md#failure-strategy) is triggered. For example, if the Failure Strategy is **Ignore**, then the verification state is marked **Failed** but the workflow execution continues. + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). + +### Step 8: Baseline for Risk Analysis + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). + +### Step 9: Algorithm Sensitivity + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria).
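The tag lookup described in Step 6 above can be sketched as follows. This is a hypothetical helper, not part of Harness, and the tag values are invented examples:

```python
# Hypothetical sketch of the Step 6 lookup: given a Datadog event's tags,
# pull out the value of the tag (here pod_name) that identifies the pod.
def tag_value(tags, key):
    """Return the value of the first key:value tag matching key, else None."""
    for tag in tags:
        name, _, value = tag.partition(":")
        if name == key:
            return value
    return None

event_tags = ["kube_namespace:default", "pod_name:harness-example-deployment-7d9f"]
print(tag_value(event_tags, "pod_name"))  # harness-example-deployment-7d9f
```

The expression you enter in **Expression for Host/Container name** plays the deployment-environment side of this mapping: it must evaluate to the same name that the tag carries.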
+ +### Step 10: Execute with Previous Steps + +Check this checkbox to run this verification step in parallel with the previous steps in **Verify Service**. + +### Step 11: Verify your Settings + +1. Click **TEST**. Harness verifies the settings you entered. +2. When you are finished, click **SUBMIT**. The Datadog verification step is added to your workflow. + +### Review: Datadog and ECS + +For [ECS-based deployments](https://docs.harness.io/article/08whoizbps-ecs-deployments-overview), Datadog uses the container ID to fetch data for both metrics and logs. Harness can fetch the container ID if the Harness Delegate is running on the same ECS cluster as the container. Otherwise, the Delegate must be in the same AWS VPC, and **port 51678** must be open for incoming traffic. + +Fetching the container ID requires one of the later versions of the ECS agent on your container instances. AWS documentation does not explicitly state which agent version is needed, so Harness first looks up the version; if it is available, Harness can proceed. If it is not, Harness queries the port as a backup to get the container ID. + +### Review: Harness Expression Support in CV Settings + +You can use expressions (`${...}`) for [Harness built-in variables](https://docs.harness.io/article/7bpdtvhq92-workflow-variables-expressions) and custom [Service](../../model-cd-pipeline/setup-services/service-configuration.md) and [Workflow](../../model-cd-pipeline/workflows/add-workflow-variables-new-template.md) variables in the settings of Harness Verification Providers. + +Expression support lets you template your Workflow verification steps. You can add custom expressions for settings, and then provide values for those settings at deployment runtime. Or you can use Harness built-in variable expressions and Harness will provide values at deployment runtime automatically.
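To illustrate the idea, `${...}` expression resolution can be sketched like this. This is a simplified stand-in: Harness's real expression engine supports far more than plain lookup, and the context keys below are example names, not a complete contract.

```python
# Simplified sketch of ${...} expression resolution at deployment runtime.
# The context keys below are example names, not actual Harness internals.
import re

def resolve(setting: str, context: dict) -> str:
    """Replace each ${name} placeholder with its value from the runtime context."""
    return re.sub(r"\$\{([^}]+)\}", lambda m: str(context[m.group(1)]), setting)

runtime_context = {
    "serviceVariable.datadog_connector_name": "datadog-prod",
    "host.hostName": "ip-10-0-1-42",
}
print(resolve("${serviceVariable.datadog_connector_name}", runtime_context))  # datadog-prod
print(resolve("${host.hostName}", runtime_context))  # ip-10-0-1-42
```

The same substitution idea applies whether the value comes from a custom variable you define or from a Harness built-in variable supplied automatically at runtime.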
+
+### Step 12: View Verification Results
+
+Once you have deployed your workflow (or pipeline) using the Datadog verification step, you can automatically verify cloud application and infrastructure performance across your deployment. For more information, see [Add a Workflow](../../model-cd-pipeline/workflows/workflow-configuration.md) and [Add a Pipeline](../../model-cd-pipeline/pipelines/pipeline-configuration.md).
+
+### Workflow Verification
+
+To see the results of Harness machine-learning evaluation of your Datadog verification, in your workflow or pipeline deployment you can expand the **Verify Service** step and then click the **Datadog** step.
+
+![](./static/verify-deployments-with-datadog-metrics-11.png)
+
+### Continuous Verification
+
+You can also see the evaluation in the **Continuous Verification** dashboard. The workflow verification view is for the DevOps user who developed the workflow. The **Continuous Verification** dashboard is where all future deployments are displayed for developers and others interested in deployment analysis.
+
+To learn about the verification analysis features, see the following sections.
+
+#### Deployments
+
+**Deployment info:** See the verification analysis for each deployment, with information on its service, environment, pipeline, and workflows.
+
+**Verification phases and providers:** See the verification phases for each verification provider. Click each provider for logs and analysis.
+
+**Verification timeline:** See when each deployment and verification was performed.
+
+![](./static/_dd_00_deployments.png)
+
+#### Transaction Analysis
+
+**Execution details:** See the details of verification execution. Total is the total time the verification step took, and Analysis duration is how long the analysis took.
+
+**Risk level analysis:** Get an overall risk level and view the cluster chart to see events.
+
+**Transaction-level summary:** See a summary of each transaction with the query string, error values comparison, and a risk analysis summary.
+
+![](./static/_dd_01_trx-analysis.png)
+
+#### Execution Analysis
+
+**Event type:** Filter cluster chart events by Unknown Event, Unexpected Frequency, Anticipated Event, Baseline Event, and Ignore Event.
+
+**Cluster chart:** View the chart to see how the selected events contrast. Click each event to see its log details.
+
+![](./static/_dd_02_ex-analysis.png)
+
+#### Event Management
+
+**Event-level analysis:** See the threat level for each event captured.
+
+**Tune event capture:** Remove events from analysis at the service, workflow, execution, or overall level.
+
+**Event distribution:** Click the chart icon to see an event distribution including the measured data, baseline data, and event frequency.
+
+![](./static/_dd_03_ev-mg.png)
+
+### Next Steps
+
+* [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code)
+* [Users and Permissions](https://docs.harness.io/article/ven0bvulsj-users-and-permissions)
+* [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria)
+
diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/1-dynatrace-connection-setup.md b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/1-dynatrace-connection-setup.md
new file mode 100644
index 00000000000..ec38f8ba832
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/1-dynatrace-connection-setup.md
@@ -0,0 +1,76 @@
+---
+title: Connect to Dynatrace
+description: Connect Harness to Dynatrace and verify the success of your deployments and live microservices.
+sidebar_position: 10 +helpdocs_topic_id: vklaow56mx +helpdocs_category_id: f42d7rayvs +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The first step in using Dynatrace with Harness is to set up a Dynatrace Verification Provider in Harness. + +A Harness Verification Provider is a connection to monitoring tools such as Dynatrace. Once Harness is connected, you can use Deployment Verification with your Dynatrace data and analysis. + +### Before You Begin + +* See the [Dynatrace Verification Overview](../continuous-verification-overview/concepts-cv/dynatrace-verification-overview.md). + +### Step 1: Generate Dynatrace Access Token + +Dynatrace requires token-based authentication for accessing the Dynatrace API. For more information, see [Access tokens](https://www.dynatrace.com/support/help/get-started/introduction/why-do-i-need-an-access-token-and-an-environment-id/#anchor-access-tokens) from Dynatrace. + +To generate a Dynatrace access token, do the following: + +1. Log into your Dynatrace environment. +2. In the navigation menu, click **Settings**, and then click **Integration**. +3. Select **Dynatrace API**. The Dynatrace API page appears. + + ![](./static/1-dynatrace-connection-setup-14.png) + +4. Enter a token name in the text field. The default Dynatrace API token switches are sufficient for Harness. +5. Click **Generate**. The token appears in the token list. +6. Click **Edit**. The token details appear. + + ![](./static/1-dynatrace-connection-setup-15.png) + +7. Click **Copy**. You will use this token when connecting Harness to Dynatrace, described below. + +### Step 2: Add Dynatrace Verification Provider + +To add Dynatrace as a verification provider, do the following: + +1. In **Harness**, click **Setup**. +2. Click **Connectors**. +3. Click **Verification Providers**. +4. Click **Add Verification Provider**, and select **Dynatrace**. The **Dynatrace** dialog for your provider appears. 
+
+   ![](./static/1-dynatrace-connection-setup-16.png)
+
+### Step 3: URL
+
+The URL of your Dynatrace account. The URL has the following syntax:
+
+`https://your_environment_ID.live.dynatrace.com`
+
+HTTPS is mandatory for Dynatrace connections.
+
+### Step 4: API Token
+
+For secrets and other sensitive settings, select or create a new [Harness Encrypted Text secret](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets). In **Encrypted API Token**, enter the API token you created in Dynatrace, described above.
+
+### Step 5: Display Name
+
+The name for the Dynatrace verification provider connection in Harness. If you will have multiple Dynatrace connections, enter a unique name.
+
+You will use this name to select this connection when integrating Dynatrace with the **Verify Steps** of your workflows, described below.
+
+### Step 6: Usage Scope
+
+Usage scope is inherited from the secrets used in the settings.
+
+### Next Steps
+
+* [Monitor Applications 24/7 with Dynatrace](2-24-7-service-guard-for-dynatrace.md)
+* [Verify Deployments with Dynatrace](3-verify-deployments-with-dynatrace.md)
+
diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/2-24-7-service-guard-for-dynatrace.md b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/2-24-7-service-guard-for-dynatrace.md
new file mode 100644
index 00000000000..95dd94edcd7
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/2-24-7-service-guard-for-dynatrace.md
@@ -0,0 +1,107 @@
+---
+title: Monitor Applications 24/7 with Dynatrace
+description: Combined with Dynatrace, Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment.
+sidebar_position: 20 +helpdocs_topic_id: 2frnj2gqiu +helpdocs_category_id: f42d7rayvs +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. For more information, see [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). + +You can add your Dynatrace monitoring to Harness 24/7 Service Guard in your Harness Application Environment. For a setup overview, see [Connect to Dynatrace](1-dynatrace-connection-setup.md). + +This section assumes you have a Harness Application set up and containing a Service and Environment. For steps on setting up a Harness Application, see [Application Checklist](../../model-cd-pipeline/applications/application-configuration.md). + + +### Before You Begin + +* See the [Dynatrace Verification Overview](../continuous-verification-overview/concepts-cv/dynatrace-verification-overview.md). +* See [Connect to Dynatrace](1-dynatrace-connection-setup.md). + +### Step 1: Set up 24/7 Service Guard for Dynatrace + +To set up 24/7 Service Guard for Dynatrace, do the following: + +1. Ensure that you have added Dynatrace as a Harness Verification Provider, as described in [Verification Provider Setup](1-dynatrace-connection-setup.md#dynatrace-verification-provider-setup). +2. In your Harness Application, ensure that you have added a Service, as described in [Services](../../model-cd-pipeline/setup-services/service-configuration.md). For 24/7 Service Guard, you do not need to add an Artifact Source to the Service, or configure its settings. You simply need to create a Service and name it. It will represent your application for 24/7 Service Guard. +3. In your Harness Application, click **Environments**. +4. In **Environments**, ensure that you have added an Environment for the Service you added. 
For steps on adding an Environment, see [Environments](../../model-cd-pipeline/environments/environment-configuration.md).
+5. Click the Environment for your Service. Typically, the **Environment Type** is **Production**.
+6. In the **Environment** page, locate **24/7 Service Guard**.
+   ![](./static/2-24-7-service-guard-for-dynatrace-17.png)
+7. In **24/7 Service Guard**, click **Add Service Verification**, and then click **Dynatrace**. The **Dynatrace** dialog appears.
+
+   ![](./static/2-24-7-service-guard-for-dynatrace-18.png)
+
+Fill out the settings.
+
+Dynatrace returns API data for transactions that are marked as [key requests](https://www.dynatrace.com/support/help/how-to-use-dynatrace/transactions-and-services/monitoring/monitor-key-requests/) only. To use these transactions in Harness, be sure to mark them as key requests in Dynatrace.
+
+### Step 2: Display Name
+
+The name that will identify this service on the Continuous Verification dashboard. Use a name that indicates the environment and monitoring tool, such as Dynatrace.
+
+### Step 3: Service
+
+The Harness Service to monitor with 24/7 Service Guard.
+
+### Step 4: Dynatrace Server
+
+This dropdown contains the names of the Dynatrace verification providers you added, as described above.
+
+Select the name of the Dynatrace verification provider that monitors the application defined in the Harness Service you selected in **Service**.
+
+### Step 5: Dynatrace Service
+
+Once you select a Dynatrace server in **Dynatrace Server**, Harness fetches a list of all the services you have in Dynatrace.
+
+Select the Dynatrace service to monitor. Dynatrace analytics are performed at Dynatrace's service level.
+
+You can also enter a built-in [Harness variable expression](https://docs.harness.io/article/9dvxcegm90-variables) or custom variable, such as a [Service](../../model-cd-pipeline/setup-services/service-configuration.md) or [Workflow variable](../../model-cd-pipeline/workflows/add-workflow-variables-new-template.md):
+
+[![](./static/2-24-7-service-guard-for-dynatrace-19.png)](./static/2-24-7-service-guard-for-dynatrace-19.png)
+
+If you see multiple services for the same application, it is likely because the service is being hit from multiple endpoints.
+
+This is uncommon in production, but in development/test environments you might be throwing traffic and data at the service from multiple endpoints (local, QA, etc). These endpoints get registered as different services by Dynatrace.
+
+To distinguish services, Harness lists the service ID as well.
+
+[![](./static/2-24-7-service-guard-for-dynatrace-21.png)](./static/2-24-7-service-guard-for-dynatrace-21.png)
+
+This ID is taken from the Dynatrace console URL (`id=`). If you need to select a specific service, use the ID in the URL to match the service listed in the Harness **Dynatrace Service** setting.
+
+### Step 6: Algorithm Sensitivity
+
+See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria).
+
+### Step 7: Enable 24/7 Service Guard
+
+Enable this setting to turn on 24/7 Service Guard. If you simply want to set up 24/7 Service Guard, but not enable it, leave this setting disabled.
+
+When you are finished, the dialog will look something like this:
+
+![](./static/2-24-7-service-guard-for-dynatrace-23.png)
+
+### Step 8: Verify Your Settings
+
+Click **Test**. Harness verifies the settings you entered.
+
+Harness will filter by the service you selected in **Dynatrace Service**. You can see this in the **Third-Party API Call History**.
+
+Click **Submit**.
The Dynatrace 24/7 Service Guard is added.
+
+![](./static/2-24-7-service-guard-for-dynatrace-24.png)
+
+To see the running 24/7 Service Guard analysis, click **Continuous Verification**.
+
+The 24/7 Service Guard dashboard displays the production verification results.
+
+![](./static/2-24-7-service-guard-for-dynatrace-25.png)
+
+### Next Steps
+
+* [Verify Deployments with Dynatrace](3-verify-deployments-with-dynatrace.md)
+
diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/3-verify-deployments-with-dynatrace.md b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/3-verify-deployments-with-dynatrace.md
new file mode 100644
index 00000000000..2f62ed08180
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/3-verify-deployments-with-dynatrace.md
@@ -0,0 +1,156 @@
+---
+title: Verify Deployments with Dynatrace
+description: Harness can analyze Dynatrace data to verify, rollback, and improve deployments.
+sidebar_position: 30
+helpdocs_topic_id: q6bk0oy1ta
+helpdocs_category_id: f42d7rayvs
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+The following procedure describes how to add Dynatrace as a verification step in a Harness workflow. For more information about workflows, see [Add a Workflow](../../model-cd-pipeline/workflows/workflow-configuration.md).
+
+Once you run a deployment and Dynatrace performs verification, Harness' machine-learning verification analysis will assess the risk level of the deployment.
+
+Harness does not perform host-level analysis using Dynatrace, because the Dynatrace API does not support it. Harness performs analysis at the service level for a time duration of the last seven days.
+
+
+### Before You Begin
+
+* Set up a Harness Application, containing a Service and Environment. See [Create an Application](../../model-cd-pipeline/applications/application-configuration.md).
+* See the [Dynatrace Verification Overview](../continuous-verification-overview/concepts-cv/dynatrace-verification-overview.md).
+* See [Connect to Dynatrace](1-dynatrace-connection-setup.md).
+
+### Visual Summary
+
+Here's an example of Dynatrace setup for verification.
+
+![](./static/3-verify-deployments-with-dynatrace-00.png)
+
+### Step 1: Set up the Deployment Verification
+
+To verify your deployment with Dynatrace, do the following:
+
+1. Ensure that you have added Dynatrace as a verification provider, as described in [Dynatrace Connection Setup](1-dynatrace-connection-setup.md).
+2. In your Workflow, under **Verify Service**, click **Add Step**.
+3. Select **Dynatrace**, and click **Next**. The **Configure Dynatrace** settings appear.
+
+![](./static/3-verify-deployments-with-dynatrace-01.png)
+
+These **Configure Dynatrace** settings include the following fields.
+
+Dynatrace returns API data for transactions that are marked as [key requests](https://www.dynatrace.com/support/help/how-to-use-dynatrace/transactions-and-services/monitoring/monitor-key-requests/) only. To use these transactions in Harness, be sure to mark them as key requests in Dynatrace:![](./static/3-verify-deployments-with-dynatrace-02.png)
+
+### Step 2: Dynatrace Server
+
+This dropdown contains the names of the Dynatrace verification providers you added, as described above. Select the name of the Dynatrace verification provider that connects to the Dynatrace environment associated with the microservice/application this workflow deploys.
+
+### Step 3: Dynatrace Service
+
+Once you select a Dynatrace server in **Dynatrace Server**, Harness fetches a list of all the services you have in Dynatrace.
+
+Select the Dynatrace service to monitor. Dynatrace analytics are performed at Dynatrace's service level.
+
+You can find the Dynatrace service in Dynatrace search:
+
+[![](./static/3-verify-deployments-with-dynatrace-03.png)](./static/3-verify-deployments-with-dynatrace-03.png)
+
+You can also enter a built-in [Harness variable expression](https://docs.harness.io/article/9dvxcegm90-variables) or custom variable, such as a [Service](../../model-cd-pipeline/setup-services/service-configuration.md) or [Workflow variable](../../model-cd-pipeline/workflows/add-workflow-variables-new-template.md):
+
+[![](./static/3-verify-deployments-with-dynatrace-05.png)](./static/3-verify-deployments-with-dynatrace-05.png)
+
+If you see multiple services for the same application, it is likely because the service is being hit from multiple endpoints.
+
+This is uncommon in production, but in development/test environments you might be throwing traffic and data at the service from multiple endpoints (local, QA, etc). These endpoints get registered as different services by Dynatrace.
+
+To distinguish services, Harness lists the service ID as well.
+
+[![](./static/3-verify-deployments-with-dynatrace-07.png)](./static/3-verify-deployments-with-dynatrace-07.png)
+
+This ID is taken from the Dynatrace console URL (`id=`).
+
+[![](./static/3-verify-deployments-with-dynatrace-09.png)](./static/3-verify-deployments-with-dynatrace-09.png)
+
+If you need to select a specific service, use the ID in the URL to match the service listed in the Harness **Dynatrace Service** setting.
+
+### Step 4: Analysis Time Duration
+
+Set the duration for the verification step. If a verification step exceeds the value, the workflow [Failure Strategy](../../model-cd-pipeline/workflows/workflow-configuration.md#failure-strategy) is triggered. For example, if the Failure Strategy is **Ignore**, then the verification state is marked **Failed** but the workflow execution continues.
+
+See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md).
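+As noted above, the service ID appears after `id=` in the Dynatrace console URL. If you are scripting around this, extracting it is straightforward; the URL below is made up for illustration, so substitute the one from your own browser's address bar:

```python
import re

def service_id_from_console_url(url):
    """Return the first id= value found in a Dynatrace console URL, or None."""
    match = re.search(r"[?&;#]id=([^?&;#]+)", url)
    return match.group(1) if match else None

# Hypothetical console URL; take the real one from your Dynatrace environment.
url = "https://abc123.live.dynatrace.com/#serviceOverview;id=SERVICE-1234ABCD;gtf=l_7_DAYS"
print(service_id_from_console_url(url))  # SERVICE-1234ABCD
```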
+ +### Step 5: Baseline for Risk Analysis + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). + +### Step 6: Algorithm Sensitivity + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria). + +### Step 7: Verify Your Configuration + +When you are finished, click **Test** to verify your configuration. + +Harness will filter by the service you selected in **Dynatrace Service**. You can see this in the **Third-Party API Call History**. + +Once your configuration tests successfully, click **Submit**. The **Dynatrace** verification step is added to your workflow. + +![](./static/3-verify-deployments-with-dynatrace-11.png) + +### Review: Harness Expression Support in CV Settings + +You can use expressions (`${...}`) for [Harness built-in variables](https://docs.harness.io/article/7bpdtvhq92-workflow-variables-expressions) and custom [Service](../../model-cd-pipeline/setup-services/service-configuration.md) and [Workflow](../../model-cd-pipeline/workflows/add-workflow-variables-new-template.md) variables in the settings of Harness Verification Providers. + +![](./static/3-verify-deployments-with-dynatrace-12.png) + +Expression support lets you template your Workflow verification steps. You can add custom expressions for settings, and then provide values for those settings at deployment runtime. Or you can use Harness built-in variable expressions and Harness will provide values at deployment runtime automatically. + +### Step 8: View Verification Results + +Once you have deployed your workflow (or pipeline) using the Dynatrace verification step, you can automatically verify cloud application and infrastructure performance across your deployment. 
For more information, see [Add a Workflow](../../model-cd-pipeline/workflows/workflow-configuration.md) and [Add a Pipeline](../../model-cd-pipeline/pipelines/pipeline-configuration.md).
+
+#### Workflow Verification
+
+To see the results of Harness machine-learning evaluation of your Dynatrace verification, in your workflow or pipeline deployment you can expand the **Verify Service** step and then click the **Dynatrace** step.
+
+![](./static/3-verify-deployments-with-dynatrace-13.png)
+
+#### Continuous Verification
+
+You can also see the evaluation in the **Continuous Verification** dashboard. The workflow verification view is for the DevOps user who developed the workflow. The **Continuous Verification** dashboard is where all future deployments are displayed for developers and others interested in deployment analysis.
+
+To learn about the verification analysis features, see the following sections.
+
+##### Transaction Analysis
+
+ **Execution details:** See the details of verification execution. Total is the total time the verification step took, and Analysis duration is how long the analysis took.
+
+ **Risk level analysis:** Get an overall risk level and view the cluster chart to see events.
+
+ **Transaction-level summary:** See a summary of each transaction with the query string, error values comparison, and a risk analysis summary.
+
+ ![](./static/_dyn-00-trx-anal.png)
+
+##### Execution Analysis
+
+**Event type:** Filter cluster chart events by Unknown Event, Unexpected Frequency, Anticipated Event, Baseline Event, and Ignore Event.
+
+**Cluster chart:** View the chart to see how the selected events contrast. Click each event to see its log details.
+
+ ![](./static/_dyn-01-ev-anal.png)
+
+##### Event Management
+
+**Event-level analysis:** See the threat level for each event captured.
+
+**Tune event capture:** Remove events from analysis at the service, workflow, execution, or overall level.
+ +**Event distribution:** Click the chart icon to see an event distribution including the measured data, baseline data, and event frequency. + + ![](./static/_dyn-02-ev-mgmnt.png) + +### Next Steps + +* [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code) +* [Managing Users and Groups (RBAC)](https://docs.harness.io/article/ven0bvulsj-users-and-permissions) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/_category_.json b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/_category_.json new file mode 100644 index 00000000000..585976d197b --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "Dynatrace Verification", + "position": 70, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "Dynatrace Verification" + }, + "customProps": { + "helpdocs_category_id": "f42d7rayvs" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/1-dynatrace-connection-setup-14.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/1-dynatrace-connection-setup-14.png new file mode 100644 index 00000000000..d8f4c7421b8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/1-dynatrace-connection-setup-14.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/1-dynatrace-connection-setup-15.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/1-dynatrace-connection-setup-15.png new file mode 100644 index 00000000000..3b821d43d31 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/1-dynatrace-connection-setup-15.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/1-dynatrace-connection-setup-16.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/1-dynatrace-connection-setup-16.png new file mode 100644 index 00000000000..1ad6706c01e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/1-dynatrace-connection-setup-16.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-17.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-17.png new file mode 100644 index 00000000000..a0b9a6cf372 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-17.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-18.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-18.png new file mode 100644 index 00000000000..865c694b11b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-18.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-19.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-19.png new file mode 100644 index 00000000000..d0e716359c4 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-19.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-20.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-20.png new file mode 100644 index 00000000000..d0e716359c4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-20.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-21.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-21.png new file mode 100644 index 00000000000..5a3eb775c92 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-21.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-22.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-22.png new file mode 100644 index 00000000000..5a3eb775c92 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-22.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-23.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-23.png new file mode 100644 index 00000000000..fdf1253f3e3 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-23.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-24.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-24.png new file mode 100644 index 00000000000..17584132733 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-24.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-25.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-25.png new file mode 100644 index 00000000000..611d09d7bf6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/2-24-7-service-guard-for-dynatrace-25.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-00.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-00.png new file mode 100644 index 00000000000..5a0e4e5cc8d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-00.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-01.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-01.png new file mode 100644 index 00000000000..5a0e4e5cc8d Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-01.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-02.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-02.png new file mode 100644 index 00000000000..328397319fc Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-02.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-03.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-03.png new file mode 100644 index 00000000000..fb8e5c56197 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-03.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-04.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-04.png new file mode 100644 index 00000000000..fb8e5c56197 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-04.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-05.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-05.png new file mode 100644 index 00000000000..d0e716359c4 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-05.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-06.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-06.png new file mode 100644 index 00000000000..d0e716359c4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-06.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-07.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-07.png new file mode 100644 index 00000000000..5a3eb775c92 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-07.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-08.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-08.png new file mode 100644 index 00000000000..5a3eb775c92 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-08.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-09.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-09.png new file mode 100644 index 00000000000..1b7153eaf3b Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-09.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-10.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-10.png new file mode 100644 index 00000000000..1b7153eaf3b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-10.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-11.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-11.png new file mode 100644 index 00000000000..a39aefa1124 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-11.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-12.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-12.png new file mode 100644 index 00000000000..655f0aa31bd Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-12.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-13.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-13.png new file mode 100644 index 00000000000..2f5c9e5de3d Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/3-verify-deployments-with-dynatrace-13.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/_dyn-00-trx-anal.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/_dyn-00-trx-anal.png new file mode 100644 index 00000000000..a8936010bcf Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/_dyn-00-trx-anal.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/_dyn-01-ev-anal.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/_dyn-01-ev-anal.png new file mode 100644 index 00000000000..96cdcd2e943 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/_dyn-01-ev-anal.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/_dyn-02-ev-mgmnt.png b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/_dyn-02-ev-mgmnt.png new file mode 100644 index 00000000000..778ad1dbe21 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/dynatrace-verification/static/_dyn-02-ev-mgmnt.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/1-elasticsearch-connection-setup.md b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/1-elasticsearch-connection-setup.md new file mode 100644 index 00000000000..faab5623457 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/1-elasticsearch-connection-setup.md @@ -0,0 +1,64 @@ +--- +title: Connect to Elasticsearch (ELK) +description: Connect Harness to 
Elasticsearch and verify the success of your deployments and live microservices. +sidebar_position: 10 +helpdocs_topic_id: dagmgqw5ag +helpdocs_category_id: ytuafly1jg +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The first step in using Elasticsearch (ELK) with Harness is to set up an Elasticsearch Verification Provider in Harness. + +A Harness Verification Provider is a connection to monitoring tools, such as Elasticsearch. Once Harness is connected, you can use Harness 24/7 Service Guard and Deployment Verification with your Elasticsearch data and analysis. + + +### Before You Begin + +* See the [Elasticsearch Verification Overview](../continuous-verification-overview/concepts-cv/elasticsearch-verification-overview.md). + +### Step 1: Add Elasticsearch (ELK) Verification Provider + +To add Elasticsearch as a Harness Verification Provider, do the following: + +1. In Harness, click **Setup**. +2. Click **Connectors**, and then click **Verification Providers**. +3. Click **Add Verification Provider**, and select **ELK**. The **Add ELK Verification Provider** dialog for your provider appears. + + ![](./static/1-elasticsearch-connection-setup-03.png) + +### Step 2: Display Name + +Enter a display name for the provider. If you are going to use multiple providers of the same type, ensure you give each provider a different name. + +### Step 3: URL + +Enter the URL of the server. The format is **http(s)://*server*:*port*/**. The default port is **9200**. + +### Step 4: Username and Encrypted Password + +For secrets and other sensitive settings, select or create a new [Harness Encrypted Text secret](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets). + +Enter the credentials to authenticate with the server. + +### Step 5: Token + +For secrets and other sensitive settings, select or create a new [Harness Encrypted Text secret](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets). 
+ +Some systems provide Elasticsearch as a service and use access tokens. If you have token-based authentication, provide the authentication header that is passed when making the HTTP request. + +Header: **APITokenKey**. Example: **x-api-key** (varies by system). + +Value: **APITokenValue**. Example: **kdsc3h3hd8wngdfujr23e23e2**. + +### Step 6: Usage Scope + +Usage scope is inherited from the secrets used in the settings. + +If you selected **None** in **Authentication**, then you can scope this connection to Harness Applications and Environments. + +### Next Steps + +* [Monitor Applications 24/7 with Elasticsearch](2-24-7-service-guard-for-elasticsearch.md) +* [Verify Deployments with Elasticsearch](3-verify-deployments-with-elasticsearch.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/2-24-7-service-guard-for-elasticsearch.md b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/2-24-7-service-guard-for-elasticsearch.md new file mode 100644 index 00000000000..03a0e87898a --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/2-24-7-service-guard-for-elasticsearch.md @@ -0,0 +1,135 @@ +--- +title: Monitor Applications 24/7 with Elasticsearch +description: Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. +sidebar_position: 20 +helpdocs_topic_id: 564doloeuq +helpdocs_category_id: ytuafly1jg +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. For more information, see [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). + +You can add your Elasticsearch monitoring to Harness 24/7 Service Guard in your Harness Application Environment. 
For a setup overview, see [Connect to Elasticsearch](1-elasticsearch-connection-setup.md). + +This section assumes you have set up a Harness Application, containing a Service and Environment. For steps on setting up a Harness Application, see [Application Components](../../model-cd-pipeline/applications/application-configuration.md). + + +### Before You Begin + +* See the [Elasticsearch Verification Overview](../continuous-verification-overview/concepts-cv/elasticsearch-verification-overview.md). + +### Visual Summary + +Here's an example 24/7 Service Guard setup for Elasticsearch. + +![](./static/2-24-7-service-guard-for-elasticsearch-16.png) + +### Step 1: Set Up 24/7 Service Guard for Elasticsearch + +To set up 24/7 Service Guard for Elasticsearch, do the following: + +1. Ensure that you have added ELK Elasticsearch as a Harness Verification Provider, as described in [Verification Provider Setup](#verification_provider_setup). +2. In your Harness Application, ensure that you have added a Service, as described in [Services](../../model-cd-pipeline/setup-services/service-configuration.md). For 24/7 Service Guard, you do not need to add an Artifact Source to the Service, or configure its settings. You simply need to create a Service and name it. It will represent your application for 24/7 Service Guard. +3. In your Harness Application, click **Environments**. +4. In **Environments**, ensure that you have added an Environment for the Service you added. For steps on adding an Environment, see [Environments](../../model-cd-pipeline/environments/environment-configuration.md). +5. Click the Environment for your running microservice. Typically, the **Environment Type** is **Production**. +6. In the **Environment** page, locate **24/7 Service Guard**. + ![](./static/2-24-7-service-guard-for-elasticsearch-17.png) +7. In **24/7 Service Guard**, click **Add Service Verification**, and then click **ELK**. The **ELK** dialog appears. 
+ + ![](./static/2-24-7-service-guard-for-elasticsearch-18.png) + +8. Fill out the dialog. The dialog has the following fields. + +For 24/7 Service Guard, the queries you define to collect logs are specific to the application or service you want monitored. Verification is application/service level. This is unlike Workflows, where verification is performed at the host/node/pod level. + +### Step 2: Display Name + +The name that will identify this service on the **Continuous Verification** dashboard. Use a name that indicates the environment and monitoring tool, such as **ELK**. + +### Step 3: Service + +The Harness Service to monitor with 24/7 Service Guard. + +### Step 4: ELK Server + +Select the ELK Verification Provider to use. + +### Step 5: Search Keywords + +Enter search keywords for your query, such as **error** or **exception**. + +Do not use wildcards in queries with Elasticsearch. Elasticsearch documentation indicates that wildcard queries can become very expensive and take down the cluster. + +### Step 6: Index + +Enter the index to search. This field is automatically populated from the index templates, if available. + +[![](./static/2-24-7-service-guard-for-elasticsearch-19.png)](./static/2-24-7-service-guard-for-elasticsearch-19.png) + +### Step 7: Host Name Field + +Enter the field name used in the ELK logs that refers to the host/pod/container ELK is monitoring. + +### Step 8: Message Field + +Enter the field by which the messages are usually indexed. Typically, a log field. + +To find the field in **Kibana** and enter it in **Harness**, do the following: + +1. In Kibana, click **Discover**. +2. In the search field, search for **error or exception**. +3. In the results, locate a log for the host/container/pod ELK is monitoring. For example, in the following Kubernetes results in Kibana, the messages are indexed under the **log** field. +4. In **Harness**, in the **ELK** dialog, next to **Message Field**, click **Guide From Example**. 
The **Message Field** popover appears. +5. In the JSON response, click on the name of the label that maps to the log in your Kibana results. Using our Kubernetes example, you would click the **log** label. + +The label is added to the **Message Field**. + +### Step 9: Timestamp Field + +Enter the timestamp field in the Elasticsearch record, such as **@timestamp**. + +### Step 10: Timestamp Format + +Enter the format for the timestamp field in the Elasticsearch record. Use Kibana to determine the format. + +In **Kibana**, use the **Filter** feature in **Discover** to construct your timestamp range: + +[![](./static/2-24-7-service-guard-for-elasticsearch-21.png)](./static/2-24-7-service-guard-for-elasticsearch-21.png) + +Format Examples: + +**Timestamp:** 2018-08-24T21:40:20.123Z. **Format:** yyyy-MM-dd'T'HH:mm:ss.SSSX + +**Timestamp:** 2018-08-30T21:57:23+00:00. **Format:** yyyy-MM-dd'T'HH:mm:ss.SSSXXX + +For more information, see [Date Math](https://www.elastic.co/guide/en/elasticsearch/reference/6.x/common-options.html#date-math) from Elastic. + +### Step 11: Enable 24/7 Service Guard + +Click the checkbox to enable 24/7 Service Guard. + +### Step 12: Baseline + +Select the baseline time unit for monitoring. For example, if you select **For 4 hours**, Harness will collect the logs for the last 4 hours as the baseline for comparisons with future logs. If you select **Custom Range**, you can enter a **Start Time** and **End Time**. + +### Step 13: Verify Your Settings + +1. Click **Test**. Harness verifies the settings you entered. +2. Click **Submit**. The ELK 24/7 Service Guard is configured. + + ![](./static/2-24-7-service-guard-for-elasticsearch-23.png) + +To see the running 24/7 Service Guard analysis, click **Continuous Verification**. + +The 24/7 Service Guard dashboard displays the production verification results. 
+ +![](./static/2-24-7-service-guard-for-elasticsearch-24.png) + + For more information, see [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). + +### Next Steps + +* [Verify Deployments with Elasticsearch](3-verify-deployments-with-elasticsearch.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/3-verify-deployments-with-elasticsearch.md b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/3-verify-deployments-with-elasticsearch.md new file mode 100644 index 00000000000..b35b98117b2 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/3-verify-deployments-with-elasticsearch.md @@ -0,0 +1,274 @@ +--- +title: Verify Deployments with Elasticsearch (FirstGen) +description: Harness can analyze Elasticsearch data to verify, rollback, and improve deployments. +# sidebar_position: 2 +helpdocs_topic_id: e2eghvcyas +helpdocs_category_id: ytuafly1jg +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness can analyze Elasticsearch data to verify, rollback, and improve deployments. To apply this analysis to your deployments, you set up ELK as a verification step in a Harness Workflow. This section covers setup steps, and provides a summary of Harness verification results. + +In order to obtain the names of the host(s), pod(s), or container(s) where your service is deployed, the verification provider should be added to your workflow *after* you have run at least one successful deployment. + +## Before You Begin + +* See the [Elasticsearch Verification Overview](../continuous-verification-overview/concepts-cv/elasticsearch-verification-overview.md). + +## Visual Summary + +Here's an example configuration of Elasticsearch deployment verification. 
+ +![](./static/3-verify-deployments-with-elasticsearch-04.png) + +## Step 1: Set Up the Deployment Verification + +To add an ELK verification step to your Workflow: + +1. Ensure that you have added ELK Elasticsearch as a Verification Provider, as described in [Verification Provider Setup](#verification_provider_setup). +2. In your Workflow, under **Verify Service**, click **Add Verification**. +3. In the resulting **Add Step** settings, select **Log Analysis** > **ELK**.![](./static/3-verify-deployments-with-elasticsearch-05.png) +4. Click **Next**. The **Configure** **ELK** settings appear.![](./static/3-verify-deployments-with-elasticsearch-06.png) + +## Step 2: Elasticsearch Server + +Select the server you added when you set up the ELK verification provider earlier in [Connect to Elasticsearch](1-elasticsearch-connection-setup.md). + +You can also enter [variable expressions](https://docs.harness.io/article/9dvxcegm90-variables), such as: `${serviceVariable.elk_connector_name}`. + +If the **Elasticsearch Server** field contains an expression, the **Index** field must also use an expression. + +## Step 3: Search Keywords + +Enter search keywords for your query, such as **error** or **exception**. + +The keywords are searched against the logs identified in the **Message** field of the dialog (see below). + +You can also enter variable expressions, such as: `error OR ${serviceVariable.error_type}` + +For an advanced query, enter an Elasticsearch JSON query. You can use JSON to create complex queries beyond keywords. The following example looks for the substring **error** in the field **log**: + +`{"regexp":{"log": {"value":"error"}}}` + +Do not use wildcards in queries for Elasticsearch. Elasticsearch documentation indicates that wildcard queries can be very resource-intensive and can take down the cluster. + +## Step 4: Query Type + +Select the query type for the value entered in the **Host Name Field**. The queries accept text, numerics, and dates. 
For MATCH and MATCH\_PHRASE types, the input is analyzed and the query is constructed. + +1. **TERM** finds documents that contain the exact term specified in the entered value. See [ELK documentation on TERM queries](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-term-query.html#query-dsl-term-query) for more information. +2. **MATCH\_PHRASE** finds documents that contain the terms specified in the exact order of entries in the analyzed text. See [ELK documentation on MATCH\_PHRASE queries](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-match-query.html#_phrase) for more information. +3. **MATCH** finds documents that contain the entries in the analyzed text in any order. See [ELK documentation on MATCH queries](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-match-query.html) for more information. + +## Step 5: Index + +Enter the index to search. This field is automatically populated from the index templates, if available. + +![](./static/3-verify-deployments-with-elasticsearch-07.png) + +You can also enter variable expressions, such as: `${service.name}` + +If the **Elasticsearch Server** field contains an expression, the **Index** field must also use an expression. If there are no index templates, or if you do not have administrator privileges with ELK, enter the index manually: + +1. To locate indices, in **Kibana**, click **Management**. +2. Click **Index Patterns**. The **Index Patterns** page appears. +3. Copy the name of one of the Index patterns. +4. In **Harness**, in the **ELK** dialog, paste the name of the Index pattern into **Indices**. + +## Step 6: Host Name Field + +Enter the field name used in the ELK logs that refers to the host/pod/container ELK is monitoring. + +### Select Key from Example + +To find the hostname in Kibana and enter it in Harness, do the following: + +1. In **Kibana**, click **Discover**. +2. 
In the search field, search for **error** or **exception**. +3. In the results, locate the host name of the host/container/pod that ELK is monitoring. For example, when using Kubernetes, the pod name field **kubernetes.pod\_name** is used. +4. In **Harness**, in the **ELK** dialog, next to **Host Name Field**, click **Guide From Example**. The **Host Name Field** popover appears. +5. In the JSON response, click on the name of the label that maps to the host/container/pod in your log search results. Using our Kubernetes example, under **pod**, you would click the first **name** label. +The **Host Name Field** is filled with the JSON label for the hostname. + +### Paste Custom JSON Response + +If you do not want to get the sample record from the server configuration and select the required object, you can use your own JSON object. + +Click **Paste Custom JSON Response** and paste your custom valid JSON object in the text field. It will appear in the dialog, and you can select it for use. + +Make sure the JSON object is valid, as the input field strictly validates the entry. + +## Step 7: Message Field + +Enter the field by which the messages are usually indexed. This is typically a **log** field. You can also enter variable expressions, such as: `${serviceVariable.message_field}`. + +To find the field in **Kibana** and enter it in **Harness**, do the following: + +1. In **Kibana**, click **Discover**. +2. In the search field, search for **error or exception**. +3. In the results, locate a log for the host/container/pod ELK is monitoring. For example, in the following Kubernetes results in Kibana, the messages are indexed under the **log** field. +4. In **Harness**, in the **ELK** dialog, next to **Message Field**, click **Guide From Example**. The **Message Field** popover appears. +5. In the JSON response, click on the name of the label that maps to the log in your Kibana results. Using our Kubernetes example, you would click the **log** label. 
+The label is added to the **Message Field**. + +You can also paste your own JSON object by clicking **Paste Custom JSON Response**. + +## Step 8: Expression for Host/Container name + +Add an expression that evaluates to the host name value for the field you entered in the **Host Name Field** above. The default expression is **${instance.host.hostName}**. + +In order to obtain the names of the host where your service is deployed, the verification provider should be added to your workflow **after** you have run at least one successful deployment. To ensure that you pick the right field when using **Guide From Example**, you can use a host name from the ELK log messages as a guide. + +For AWS EC2 hostnames, use the expression `${instance.hostName}`. To use **Guide From Example** for a host name expression, do the following: + +1. In **Kibana**, click **Discover**. +2. In the search field, search for **error or exception**. +3. In the results, locate the name of the host/container/pod ELK is monitoring. For example, when using Kubernetes, the pod name field **kubernetes.pod\_name** displays the value you need. +The expression that you provide in **Expression for Host/Container Name** should evaluate to the name here, although the suffixes can differ. +4. In **Harness**, in your workflow ELK dialog, click **Guide From Example**. The **Expression for Host Name** popover appears. +The dialog shows the service, environment, and service infrastructure used for this workflow. +5. In **Host**, click the name of the host to use when testing verification. The hostname will be similar to the hostname you used for the **Host Name Field**, as described earlier in this procedure. The suffixes can be different. +6. Click **SUBMIT**. The JSON for the host appears. Look for the **host** section. +You want to use a **name** label in the **host** section. Do not use a host name label outside of that section. +7. 
To identify which label to use to build the expression, compare the host/pod/container name in the JSON with the hostname you use when configuring **Host Name Field**. +8. In the **Expression for Host Name** popover, click the **name** label to select the expression. Click back in the main dialog to close the **Guide From Example**. The expression is added to the **Expression for Host/Container name** field. +For example, if you clicked the **name** label, the expression **${host.name}** is added to the **Expression for Host/Container name** field. + +You can also paste your own JSON object by clicking **Paste Custom JSON Response**. + +## Step 9: Timestamp Field + +Enter either a static value (such as `@timestamp`), or a variable expression such as: `${serviceVariable.timestamp_field}`. + +If you are using a timestamp in the **Timestamp Field** that is not formatted as an epoch/Unix timestamp (the default), then you must enter the format you are using in the **Timestamp Format** setting. The format is used to parse the timestamp in **Timestamp Field**. You can also paste your own JSON object by clicking **Paste Custom JSON Response**. + +## Step 10: Timestamp Format + +Enter the format for the **timestamp** field in the Elasticsearch record. You can also enter a variable expression, such as: `${serviceVariable.timestamp_format_field}`. + +If you are entering a literal format, use Kibana to determine the format. In Kibana, use the **Discover** > **Filter** feature to construct your timestamp range: + +![](./static/3-verify-deployments-with-elasticsearch-08.png) + +Format Examples: + +**Timestamp:** 2018-08-24T21:40:20.123Z. **Format:** yyyy-MM-dd'T'HH:mm:ss.SSSX + +**Timestamp:** 2018-08-30T21:57:23+00:00. **Format:** yyyy-MM-dd'T'HH:mm:ss.SSSXXX + +For more information, see [Date Math](https://www.elastic.co/guide/en/elasticsearch/reference/6.x/common-options.html#date-math) from Elastic. 
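The format strings above are Java-style date patterns. If you want to sanity-check what a sample timestamp actually contains before entering a format, a rough local check is possible in Python. Note this is only an illustration: Python uses `%`-style directives rather than the Java pattern syntax, so the formats below are approximate equivalents, not strings you would enter in Harness.

```python
from datetime import datetime

# Rough Python equivalents of the Java-style patterns shown above
# (Python 3.7+ accepts both a literal "Z" and "+00:00" for %z):
ts1 = datetime.strptime("2018-08-24T21:40:20.123Z", "%Y-%m-%dT%H:%M:%S.%f%z")
ts2 = datetime.strptime("2018-08-30T21:57:23+00:00", "%Y-%m-%dT%H:%M:%S%z")

# Both parse to timezone-aware datetimes with a UTC offset.
print(ts1.isoformat())
print(ts2.isoformat())
```

If a sample record fails to parse with your chosen pattern, the pattern is likely missing or adding a fractional-seconds or zone component.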
+ +You can also paste your own JSON object by clicking **Paste Custom JSON Response**. + +## Step 11: Test Expression for Host Name + +At the bottom of the dialog, click **Test**. + +A new **Expression for Host Name** popover appears. + +In **Host**, select the same host you selected last time, and then click **RUN**. Verification for the host is found. + +![](./static/3-verify-deployments-with-elasticsearch-09.png) + +If you receive an error, it is likely because you selected the wrong label in **Expression for Host/Container Name** or **Host Name Field**. Resolve the error as needed. + +Click **Analysis Details**. + +## Step 12: Analysis Period + +Set the duration for the verification step. If a verification step exceeds the value, the workflow [Failure Strategy](../../model-cd-pipeline/workflows/workflow-configuration.md#failure-strategy) is triggered. For example, if the Failure Strategy is **Ignore**, then the verification state is marked **Failed** but the workflow execution continues. + +Harness waits 2-3 minutes before beginning the analysis to avoid initial deployment noise. This is standard practice with monitoring tools. + +## Step 13: Baseline for Risk Analysis + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). + +For Canary Analysis and Previous Analysis, analysis happens at the host/node/pod level. For Predictive Analysis, data collection happens at the host/node/pod level but analysis happens at the application or service level. Consequently, for data collection, provide a query that targets the logs for the host using fields such as **SOURCE\_HOST** in **Field name for Host/Container**. + +## Step 14: Algorithm Sensitivity + +Select the sensitivity that produces the most useful results for your analysis. 
+ +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria). + +## Step 15: Execute with previous steps + +Check this checkbox to run this verification step in parallel with the previous steps in **Verify Service**. + +## Step 16: Include instances from previous phases + +If you are using this verification step in a multi-phase deployment, select this checkbox to include instances used in previous phases when collecting data. Do not apply this setting to the first phase in a multi-phase deployment. + +Click **Submit**. + +## Review: Harness Expression Support in CV Settings + +You can use expressions (`${...}`) for [Harness built-in variables](https://docs.harness.io/article/7bpdtvhq92-workflow-variables-expressions) and custom [Service](../../model-cd-pipeline/setup-services/service-configuration.md) and [Workflow](../../model-cd-pipeline/workflows/add-workflow-variables-new-template.md) variables in the settings of Harness Verification Providers. + +![](./static/3-verify-deployments-with-elasticsearch-10.png) + +Expression support lets you template your Workflow verification steps. You can add custom expressions for settings, and then provide values for those settings at deployment runtime. Or you can use Harness built-in variable expressions and Harness will provide values at deployment runtime automatically. + +## Step 17: View Verification Results + +Once you have deployed your workflow (or pipeline) using the ELK verification step, you can automatically verify cloud application and infrastructure performance across your deployment. + +### Workflow Verification + +To see the results of Harness machine-learning evaluation of your ELK verification, in your workflow or pipeline deployment you can expand the **Verify Service** step and then click the **ELK** step. 
+ +![](./static/3-verify-deployments-with-elasticsearch-11.png) + +### Continuous Verification + +You can also see the evaluation in the **Continuous Verification** dashboard. The workflow verification view is for the DevOps user who developed the workflow. The **Continuous Verification** dashboard is where all future deployments are displayed for developers and others interested in deployment analysis. + +To learn about the verification analysis features, see the following sections. + +### Transaction Analysis + +* **Execution details:** See the details of verification execution. Total is the total time the verification step took, and Analysis duration is how long the analysis took. +* **Risk level analysis:** Get an overall risk level and view the cluster chart to see events. +* **Transaction-level summary:** See a summary of each transaction with the query string, error values comparison, and a risk analysis summary. + +### Execution Analysis + +* **Event type:** Filter cluster chart events by Unknown Event, Unexpected Frequency, Anticipated Event, Baseline Event, and Ignore Event. +* **Cluster chart:** View the chart to see how the selected events contrast. Click each event to see its log details. + +### Event Management + +* **Event-level analysis:** See the threat level for each event captured. +* **Tune event capture:** Remove events from analysis at the service, workflow, execution, or overall level. +* **Event distribution:** Click the chart icon to see an event distribution including the measured data, baseline data, and event frequency. + +## Option: Templatize ELK Verification + +Once you have created an ELK verification step, you can templatize certain settings. This enables you to use the ELK verification step in the Workflow (and multiple Pipelines) without having to provide settings until runtime. + +You templatize settings by clicking the **[T]** icon next to the setting. 
+ +![](./static/3-verify-deployments-with-elasticsearch-12.png) + +The settings are replaced by Workflow variables: + +![](./static/3-verify-deployments-with-elasticsearch-13.png) + +You will now see them in the **Workflow Variables** section of the Workflow: + +![](./static/3-verify-deployments-with-elasticsearch-14.png) + +When you deploy the Workflow, **Start New Deployment** prompts you to enter values for the templatized settings: + +![](./static/3-verify-deployments-with-elasticsearch-15.png) + +Provide the necessary values and deploy the Workflow. + +You can also pass variables into a Workflow from a Trigger that can be used for templatized values. For more information, see [Passing Variables into Workflows and Pipelines from Triggers](../../model-cd-pipeline/expressions/passing-variable-into-workflows.md). + +## Next Steps + +* [Troubleshooting Elasticsearch](4-troubleshooting-elasticsearch.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/4-troubleshooting-elasticsearch.md b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/4-troubleshooting-elasticsearch.md new file mode 100644 index 00000000000..441ea58d85e --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/4-troubleshooting-elasticsearch.md @@ -0,0 +1,116 @@ +--- +title: Troubleshoot Verification with Elasticsearch +description: Resolutions to common configuration problems with the Elastic Stack (ELK Stack). +# sidebar_position: 2 +helpdocs_topic_id: emorpi9nd4 +helpdocs_category_id: ytuafly1jg +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The following are resolutions to common configuration problems. 
+ +### Workflow Step Test Error + +When you click **TEST** in the **Expression for Host Name** popover of the **ELK** workflow dialog, you should get provider information: + +![](./static/4-troubleshooting-elasticsearch-00.png) + +The following error message can occur when testing the ELK verification step in your workflow: + + +``` +ELK_CONFIGURATION_ERROR: Error while saving ELK configuration. No node with name ${hostName} found reporting to ELK +``` +#### Cause + +The expression in the **Expression for Host/Container name** field is incorrect. Typically, this occurs when the wrong hostName label is selected to create the expression in the **Expression for Host/Container name** field. + +#### Solution + +Follow the steps in [Verify with ELK](#verify_with_elk) again to select the correct expression. Ensure that the **name** label selected is under the **host** section of the JSON. + +![](./static/4-troubleshooting-elasticsearch-01.png) + +### SocketTimeoutException + +When you add an ELK verification provider and click SUBMIT, you might see the following error. + +![](./static/4-troubleshooting-elasticsearch-02.png) + +#### Cause + +The Harness delegate does not have a valid connection to the ELK server. + +#### Solution + +On the same server or instance where the Harness delegate is running, run one of the following cURL commands to verify whether the delegate can connect to the ELK server. 
+ +If you do not have a username and password for the ELK server: + + +``` +curl -i -X POST url/*/_search?size=1 -H 'Content-Type: application/json' -d '{"size":1,"query":{"match_all":{}},"sort":{"@timestamp":"desc"}}' +``` +If you have a username and password, use this command: + + +``` +curl -i -X POST url/*/_search?size=1 -H 'Content-Type: application/json' -H 'Authorization: ' -d '{"size":1,"query":{"match_all":{}},"sort":{"@timestamp":"desc"}}' +``` +If you have token-based authentication, use this command: + + +``` +curl -i -X POST url/*/_search?size=1 -H 'Content-Type: application/json' -H 'tokenKey: *tokenValue*' -d '{"size":1,"query":{"match_all":{}},"sort":{"@timestamp":"desc"}}' +``` +If the cURL command cannot connect, it will fail. + +If the cURL command can connect, it will return an HTTP 200, along with the JSON. + +If the cURL command is successful, but you still see the SocketTimeoutException error in the ELK dialog, contact Harness Support ([support@harness.io](mailto:support@harness.io)). + +It is possible that the response from the ELK server is simply taking a very long time. + +### Log Errors Due To Missing Fields + +While trying to set up a simple query for ELK deployment verification, an error message occurs about a missing field and the workflow fails. + +#### Cause + +The fields specified in the query might be present in only a subset of the documents. + +#### Solution + +Make sure that the field exists in the documents being examined in ELK, and then filter on the value of the field. + +For example, suppose you want to fetch only documents whose `message` field contains the term `exception`, but the ELK index contains documents that may or may not include a `message` field. In that case, a query with conditions such as the following is recommended. 
+ + +``` + +{ + "bool":{ + "must":[ + { + "exists":{ + "field":"message" + } + }, + { + "regexp":{ + "message":{ + "value":"exception" + } + } + } + ] + } +} + +``` +### Next Steps + +* [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code) +* [Managing Users and Groups (RBAC)](https://docs.harness.io/article/ven0bvulsj-users-and-permissions) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/_category_.json b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/_category_.json new file mode 100644 index 00000000000..a2206ea5ee9 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "ELK Elasticsearch Verification", + "position": 80, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "ELK Elasticsearch Verification" + }, + "customProps": { + "helpdocs_category_id": "ytuafly1jg" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/1-elasticsearch-connection-setup-03.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/1-elasticsearch-connection-setup-03.png new file mode 100644 index 00000000000..8e622b727b0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/1-elasticsearch-connection-setup-03.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-16.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-16.png new file mode 100644 index 00000000000..620415a5bc9 Binary 
files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-16.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-17.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-17.png new file mode 100644 index 00000000000..a0b9a6cf372 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-17.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-18.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-18.png new file mode 100644 index 00000000000..156f00414b9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-18.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-19.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-19.png new file mode 100644 index 00000000000..04b089e8f73 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-19.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-20.png 
b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-20.png new file mode 100644 index 00000000000..04b089e8f73 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-20.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-21.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-21.png new file mode 100644 index 00000000000..04e6db472fb Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-21.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-22.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-22.png new file mode 100644 index 00000000000..04e6db472fb Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-22.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-23.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-23.png new file mode 100644 index 00000000000..137dc3d39e0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-23.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-24.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-24.png new file mode 100644 index 00000000000..f92b66c434a Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/2-24-7-service-guard-for-elasticsearch-24.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-04.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-04.png new file mode 100644 index 00000000000..2ddc1a35d0f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-04.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-05.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-05.png new file mode 100644 index 00000000000..55dc1dec0ae Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-05.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-06.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-06.png new file mode 100644 index 00000000000..2ddc1a35d0f Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-06.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-07.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-07.png new file mode 100644 index 00000000000..72c5fbdefca Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-07.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-08.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-08.png new file mode 100644 index 00000000000..04e6db472fb Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-08.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-09.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-09.png new file mode 100644 index 00000000000..1857b62e864 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-09.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-10.png 
b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-10.png new file mode 100644 index 00000000000..655f0aa31bd Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-10.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-11.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-11.png new file mode 100644 index 00000000000..d8b39bfe4c9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-11.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-12.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-12.png new file mode 100644 index 00000000000..240826a9771 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-12.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-13.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-13.png new file mode 100644 index 00000000000..f1037a5ad58 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-13.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-14.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-14.png new file mode 100644 index 00000000000..aa79e6cf1a9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-14.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-15.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-15.png new file mode 100644 index 00000000000..ca7e9cd00eb Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/3-verify-deployments-with-elasticsearch-15.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/4-troubleshooting-elasticsearch-00.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/4-troubleshooting-elasticsearch-00.png new file mode 100644 index 00000000000..1857b62e864 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/4-troubleshooting-elasticsearch-00.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/4-troubleshooting-elasticsearch-01.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/4-troubleshooting-elasticsearch-01.png new file mode 100644 index 00000000000..c5a3d5c6a31 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/4-troubleshooting-elasticsearch-01.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/4-troubleshooting-elasticsearch-02.png b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/4-troubleshooting-elasticsearch-02.png new file mode 100644 index 00000000000..3042e51de90 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/elk-elasticsearch-verification/static/4-troubleshooting-elasticsearch-02.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/_category_.json b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/_category_.json new file mode 100644 index 00000000000..df6078e4425 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "Instana Verification", + "position": 150, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "Instana Verification" + }, + "customProps": { + "helpdocs_category_id": "hb98hpyemv" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/instana-connection-setup.md b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/instana-connection-setup.md new file mode 100644 index 00000000000..75cd06ff2b2 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/instana-connection-setup.md @@ -0,0 +1,51 @@ +--- +title: 1 – Instana Connection Setup +description: Connect Harness to Instana, and verify the success of your deployments and live microservices. 
+sidebar_position: 10 +helpdocs_topic_id: dg4vojlcx1 +helpdocs_category_id: hb98hpyemv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The first step in using Instana with Harness is to set up an Instana Verification Provider in Harness. + +A Harness Verification Provider is a connection to monitoring tools such as Instana. Once Harness is connected, you can use Harness 24/7 Service Guard and Deployment Verification with your Instana data and analysis. + +### Instana Verification Provider Setup + +To add Instana as a verification provider: + +1. Click **Setup**. +2. Click **Connectors**, and then click **Verification Providers**. +3. Click **Add Verification Provider**, and select **Instana**. The **Add Instana Verification Provider** dialog appears. + + ![](./static/instana-connection-setup-00.png) + +4. Complete the following fields in this dialog. On Instana, you must have the account owner role to create the API token needed to connect Harness to Instana. + +| **Field** | **Description** | +| --- | --- | +| **Display Name** | Enter a display name for the provider. If you are going to use multiple providers of the same type, ensure that you give each provider a different name. | +| **Instana URL** | Enter the URL of the Instana server, such as: `https://integration-.instana.io` | +| **Encrypted API Token** | Select or create a new [Harness Encrypted Text secret](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets) for your Instana API token. | +| **Usage Scope** | Usage scope is inherited from the secrets used in the settings. | + +To create an API key in Instana, do the following: +1. In Instana Settings, select **Team Settings** > **API Tokens**. +2. Select **Add API Token**. +3. On the resulting **New API Token** page, enter a name for the new API key, such as **Harness**. +4. Under **Permissions**, enable **Service & Endpoint Mapping**. +5. Copy the value from the **API Token** field, and save the token. +6. 
In **Harness**, paste the token's value into the **API Token** field. + +### Testing and Saving Your Setup + +1. After you have filled in the **Add Instana Verification Provider** dialog's settings, click **Test** to confirm them. +2. Once the test is successful, click **Submit** to create your new Instana Connector. + +### Next Step + +* [2 – 24/7 Service Guard for Instana](instana-service-guard.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/instana-service-guard.md b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/instana-service-guard.md new file mode 100644 index 00000000000..5d43e951dd7 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/instana-service-guard.md @@ -0,0 +1,75 @@ +--- +title: 2 – 24/7 Service Guard for Instana +description: Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. For more information, see 24/7 Service Guard Overview. You can add yo… +sidebar_position: 20 +helpdocs_topic_id: ovghx0k1xt +helpdocs_category_id: hb98hpyemv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. For more information, see [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). + +You can add your Instana monitoring to Harness 24/7 Service Guard in your Harness Application Environment. For a setup overview, see [Instana Connection Setup](instana-connection-setup.md). + +This section assumes you have a Harness Application set up, containing a Service and an Environment. For steps on setting up a Harness Application, see [Application Components](../../model-cd-pipeline/applications/application-configuration.md). 
+ +### 24/7 Service Guard Setup + +To set up 24/7 Service Guard for Instana: + +1. Ensure that you have added Instana as a Harness Verification Provider, as described in [Instana Connection Setup](instana-connection-setup.md). +2. In your Harness Application, ensure that you have added a Service, as described in [Services](../../model-cd-pipeline/setup-services/service-configuration.md). For 24/7 Service Guard, you do not need to add an Artifact Source to the Service, nor configure its settings. You simply need to create a Service, and name it. It will represent your application for 24/7 Service Guard. +3. In your Harness Application, click **Environments**. +4. In **Environments**, ensure that you have added an Environment for the Service you added. For steps on adding an Environment, see [Environments](../../model-cd-pipeline/environments/environment-configuration.md). +5. Click the Environment for your Service. Typically, the **Environment Type** is **Production**. +6. In the **Environment** page, locate **24/7 Service Guard**. + + ![](./static/instana-service-guard-01.png) + +7. In **24/7 Service Guard**, click **Add Service Verification**, and select **Instana**. The **Instana** dialog appears. + + ![](./static/instana-service-guard-02.png) + +8. Fill out the dialog. The dialog has the following fields. + + For 24/7 Service Guard, the queries you define to collect metrics are specific to the Application or Service you want monitored. Verification is Application/Service level. This is unlike Workflows, where verification is performed at the host/node/pod level. + + * **Display Name** -- The name that will identify this Service on the **Continuous Verification** dashboard. Use a name that indicates the Environment and monitoring tool, such as **Instana**. + * **Service** -- The Harness Service to monitor with 24/7 Service Guard. 
+ * **Instana Server** -- Select the Instana [Verification Provider](instana-connection-setup.md) to use. + * **Application Metrics** -- This section is where you specify API endpoint metrics to monitor from Instana: + + 1. Click **Add** to display a **Tag Filters** row, as shown below. You must specify at least one Tag Filter. These correspond to Filters applied to your metrics on Instana's **Analytics** tab > **Filters**. + 2. In the **Name** field, enter the Instana Filter's name, such as `kubernetes.pod.name`. + 3. Select an **Operator** to define the threshold or condition for considering this metric anomalous. + 4. Enter a **Value** corresponding to the **Operator**. + 5. To define additional Tag Filters, repeat the above steps. + + * **Algorithm Sensitivity** -- See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria). **Moderately sensitive** is recommended. + * **Enable 24/7 Service Guard** -- Select this checkbox to turn on 24/7 Service Guard monitoring with Instana. If you simply want to set up 24/7 Service Guard, but not yet enable it, leave this checkbox empty. + + When you are finished, the dialog will look something like this: + + ![](./static/instana-service-guard-03.png) + +9. Click **Test**. Harness verifies the settings that you entered. + + ![](./static/instana-service-guard-04.png) + +10. Click **Submit**. Instana 24/7 Service Guard monitoring is now configured. + + ![](./static/instana-service-guard-05.png) + +To see the running 24/7 Service Guard analysis, click **Continuous Verification**. + +The 24/7 Service Guard dashboard displays your production verification results. + +![](./static/instana-service-guard-06.png) + +For more information, see [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). 
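Each Tag Filter row collected in the dialog above is a simple Name/Operator/Value triple. The sketch below models those rows as plain dictionaries so you can sanity-check a filter set before entering it. The requirement of at least one filter mirrors the dialog, but the operator names used here are assumptions for illustration, not Harness's or Instana's actual enum values.

```python
# Hypothetical model of the 24/7 Service Guard Tag Filter rows
# (Name / Operator / Value). The operator names below are illustrative
# assumptions -- check the dialog for the actual choices it offers.
ALLOWED_OPERATORS = {"EQUALS", "NOT_EQUAL", "CONTAINS", "NOT_CONTAIN"}

def make_tag_filter(name: str, operator: str, value: str) -> dict:
    """Build one Tag Filter row, rejecting unknown operators."""
    if operator not in ALLOWED_OPERATORS:
        raise ValueError(f"unsupported operator: {operator}")
    return {"name": name, "operator": operator, "value": value}

# The dialog requires at least one Tag Filter.
filters = [make_tag_filter("kubernetes.pod.name", "EQUALS", "harness-example-1")]
assert filters, "at least one Tag Filter is required"
```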
+ +### Next Step + +* [3 – Verify Deployments with Instana](instana-verify-deployments.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/instana-verify-deployments.md b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/instana-verify-deployments.md new file mode 100644 index 00000000000..e8d9e6a469a --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/instana-verify-deployments.md @@ -0,0 +1,104 @@ +--- +title: 3 – Verify Deployments with Instana +description: Harness can analyze Instana metrics to verify, rollback, and improve deployments. +sidebar_position: 30 +helpdocs_topic_id: 3nhtaodeff +helpdocs_category_id: hb98hpyemv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness can analyze Instana data to verify, rollback, and improve deployments. To apply this analysis to your deployments, you set up Instana as a verification step in a Harness Workflow, as outlined in the following sections: + +* [Deployment Verification Setup](../datadog-verification/3-verify-deployments-with-datadog.md#deployment-verification-setup) +* [Harness Expression Support in CV Settings](#harness_expression_support_in_cv_settings) +* [Verification Results](../datadog-verification/3-verify-deployments-with-datadog.md#verification-results) +* [Continuous Verification](../datadog-verification/3-verify-deployments-with-datadog.md#continuous-verification) +* [Next Steps](../datadog-verification/3-verify-deployments-with-datadog.md#next-steps) + +In order to obtain the names of the host(s) or container(s) where your service is deployed, add the Verification Provider to your Workflow *after* you have run at least one successful Workflow deployment. + +### Deployment Verification Setup + +To add an Instana verification step to your Workflow: + +1. 
Ensure that you have added Instana as a Harness Verification Provider, as described in [Instana Connection Setup](instana-connection-setup.md). +2. In your Workflow, under **Verify Service**, click **Add Verification**. +3. In the resulting **Add Step** settings, select **Performance Monitoring** > **Instana**. + + ![](./static/instana-verify-deployments-07.png) + +4. Click **Next** to display the **Configure Instana** settings. + + ![](./static/instana-verify-deployments-08.png) + + Next, fill out the settings, which include the following fields. + + + * **Instana Server** -- Select the Instana [Verification Provider](instana-connection-setup.md) to use. You can also enter variable expressions, such as: `${serviceVariable.instana-QA}`. + + * **Infrastructure Metrics** -- In the **Infrastructure Metrics** pane, you can first select one or more **Docker Metrics** to monitor from Instana. Next, in the **Query** field, you must enter a query corresponding to a [Dynamic Focus](http://docs.instana.io/dynamic_focus/) query in Instana. For example, you might enter: `entity.kubernetes.pod.name:${host}`. For background information, see Instana's [Container Monitoring](https://docs.instana.io/infrastructure_monitoring/containers/) documentation. + + * **Application Metrics** -- In the **Application Metrics** pane, you specify API endpoint metrics to monitor from Instana. + + First, in the required **Host Tag Filter** field, enter a value in the format: `kubernetes.pod.name`. + Optionally, you can also add **Tag Filters**, corresponding to Filters applied to your metrics on Instana's **Analytics** tab > **Filters**. + To do so: + + 1. Click **Add** to display a **Tag Filters** row, as shown below. + 2. In the **Name** field, enter the Instana Filter's name, such as `kubernetes.pod.name` or `kubernetes.cluster.name`. + 3. Select an **Operator** to define the threshold or condition for considering this metric anomalous. + 4. 
Enter a **Value** corresponding to the **Operator**. + 5. To define additional Tag Filters, repeat the above steps. + + * **Expression for Host/Container name** -- The expression entered here should resolve to a host/container name in your deployment environment. By default, the expression is `${instance.host.hostName}`. + For most use cases, you can leave this field empty, to apply the default. + + However, if you want to add a prefix or suffix, enter an expression as outlined here. For AWS EC2 hostnames, use the expression `${instance.hostName}`. If you begin typing an expression into the field, the field provides expression assistance. For PCF, you might enter an expression like: + ``` + ${host.pcfElement.displayName}-${host.pcfElement.instanceIndex} + ``` + ...which could yield something like: `harness-example-1`, where the `displayName` is `harness-example` and `instanceIndex` is `1`. When you are setting up the Workflow for the first time, Harness will not be able to help you create an expression, because there has not been a host/container deployed yet. For this reason, you should add the **Verify Step** after you have done one successful deployment. + + * **Analysis Time Duration** -- Set the duration for the verification step. If a verification step exceeds the value, the workflow [Failure Strategy](../../model-cd-pipeline/workflows/workflow-configuration.md#failure-strategy) is triggered. For example, if the Failure Strategy is **Ignore**, then the verification state is marked **Failed** but the workflow execution continues. See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). + + * **Baseline for Risk Analysis** -- See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). 
+ + * **Algorithm Sensitivity** -- See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria). + + * **Include instances from previous phases** -- If you are using this verification step in a multi-phase deployment, select this checkbox to include instances used in previous phases when collecting data. Do not apply this setting to the first phase in a multi-phase deployment. + * **Execute with previous steps** -- Check this checkbox to run this verification step in parallel with the previous steps in **Verify Service**. + +5. Click **Test**. Harness verifies the settings you entered. + +6. When the test is successful, click **Submit**. The Instana verification step is added to your Workflow. + +### Harness Expression Support in CV Settings + +You can use expressions (`${...}`) for [Harness built-in variables](https://docs.harness.io/article/7bpdtvhq92-workflow-variables-expressions) and custom [Service](../../model-cd-pipeline/setup-services/service-configuration.md) and [Workflow](../../model-cd-pipeline/workflows/add-workflow-variables-new-template.md) variables in the settings of Harness Verification Providers. + +![](./static/instana-verify-deployments-09.png) + +Expression support lets you template your Workflow verification steps. You can add custom expressions for settings, and then provide values for those settings at deployment runtime. Or you can use Harness built-in variable expressions and Harness will provide values at deployment runtime automatically. + +### Verification Results + +Once you have deployed your Workflow (or Pipeline) using the Instana verification step, you can automatically verify cloud application and infrastructure performance across your deployment. 
For more information, see [Add a Workflow](../../model-cd-pipeline/workflows/workflow-configuration.md) and [Add a Pipeline](../../model-cd-pipeline/pipelines/pipeline-configuration.md). + +#### Workflow Verification + +To see the results of Harness' machine-learning evaluation of your Instana verification: In your Workflow or Pipeline deployment, expand the **Verify Service** step, and then click the **Instana** step to populate the Details panel at right. + +![](./static/instana-verify-deployments-10.png) + +### Continuous Verification + +You can also see the evaluation in the **Continuous Verification** dashboard. While the Workflow verification view is for the DevOps user who developed the Workflow, the **Continuous Verification** dashboard is where *all* future deployments are displayed for developers and others interested in deployment analysis. + +To explore and interpret verification analysis results, see [Verification Results Overview](../continuous-verification-overview/concepts-cv/deployment-verification-results.md). 
+ +### Next Steps + +* [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code) +* [Users and Permissions](https://docs.harness.io/article/ven0bvulsj-users-and-permissions) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-connection-setup-00.png b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-connection-setup-00.png new file mode 100644 index 00000000000..961a380b355 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-connection-setup-00.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-service-guard-01.png b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-service-guard-01.png new file mode 100644 index 00000000000..71f428efa91 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-service-guard-01.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-service-guard-02.png b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-service-guard-02.png new file mode 100644 index 00000000000..a2c26d4d5d3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-service-guard-02.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-service-guard-03.png b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-service-guard-03.png new file mode 100644 index 00000000000..07f63a1504d Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-service-guard-03.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-service-guard-04.png b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-service-guard-04.png new file mode 100644 index 00000000000..cafe665bf2f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-service-guard-04.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-service-guard-05.png b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-service-guard-05.png new file mode 100644 index 00000000000..c0c0aa052e8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-service-guard-05.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-service-guard-06.png b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-service-guard-06.png new file mode 100644 index 00000000000..4feb033df1d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-service-guard-06.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-verify-deployments-07.png b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-verify-deployments-07.png new file mode 100644 index 00000000000..e07e5fad56a Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-verify-deployments-07.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-verify-deployments-08.png b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-verify-deployments-08.png new file mode 100644 index 00000000000..c518b7daf14 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-verify-deployments-08.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-verify-deployments-09.png b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-verify-deployments-09.png new file mode 100644 index 00000000000..655f0aa31bd Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-verify-deployments-09.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-verify-deployments-10.png b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-verify-deployments-10.png new file mode 100644 index 00000000000..8e29727cda1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/instana-verification/static/instana-verify-deployments-10.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/logz-io-verification/_category_.json b/docs/first-gen/continuous-delivery/continuous-verification/logz-io-verification/_category_.json new file mode 100644 index 00000000000..0a85056c0ee --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/logz-io-verification/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "Logz.io Verification", + "position": 90, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "Logz.io Verification" + }, + 
"customProps": { + "helpdocs_category_id": "j3m3gbxk88" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/continuous-verification/logz-io-verification/logz-verification-provider.md b/docs/first-gen/continuous-delivery/continuous-verification/logz-io-verification/logz-verification-provider.md new file mode 100644 index 00000000000..4ede99ce518 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/logz-io-verification/logz-verification-provider.md @@ -0,0 +1,62 @@ +--- +title: Connect to Logz.io +description: Connect Logz.io as a Harness Verification Provider, enabling Harness to ensure the success of your deployments. +sidebar_position: 10 +helpdocs_topic_id: 1hw6xxh73c +helpdocs_category_id: j3m3gbxk88 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +To set up Logz.io to work with Harness' Continuous Verification features, you must add Logz.io as a Harness Verification Provider. + +### Before You Begin + +* [What Is Continuous Verification (CV)?](../continuous-verification-overview/concepts-cv/what-is-cv.md) + +### Limitations + +You must have a Logz.io Enterprise account to generate the API tokens required to integrate with Harness. (Logz.io Pro and Community accounts do not support token generation.) +### Step 1: Add Verification Provider + +To begin adding Logz.io as a Harness Verification Provider: + +1. In Harness, click **Setup**. +2. Click **Connectors**, and then click **Verification Providers**. +3. Click **Add Verification Provider**, and select Logz.io. The **Add Logz Verification Provider** dialog appears. + + ![](./static/logz-verification-provider-00.png) + +### Step 2: Display Name + +Enter a name for this connection. You will use this name when selecting the Verification Provider in Harness Environments and Workflows. + +If you plan to use multiple providers of the same type, ensure that you give each provider a different name. 
+ + +### Step 3: Logz.io URL + +Enter the URL of the Logz.io server. + + +### Step 4: Token + +In **Encrypted API Token**, select or create a new [Harness Encrypted Text secret](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets) for the token. + +Use Encrypted Text secrets for API tokens and other sensitive settings. + +For details, see Logz.io's [Announcing the Logz.io Search API](https://logz.io/blog/announcing-the-logz-io-search-api/) tutorial, [Managing API Tokens](https://docs.logz.io/user-guide/tokens/api-tokens.html) topic, and [Authentication](https://docs.logz.io/api/#section/Authentication) API documentation. +### Step 5: Usage Scope + +Usage scope is inherited from the secrets used in the settings. + + +### Step 6: Test and Save + +1. When you have configured the dialog, click **Test**. +2. Once the test is successful, click **Submit** to add this Verification Provider. + + +### Next Step + +We will soon add additional topics on using Logz.io for Harness deployment and service verification. \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/continuous-verification/logz-io-verification/static/logz-verification-provider-00.png b/docs/first-gen/continuous-delivery/continuous-verification/logz-io-verification/static/logz-verification-provider-00.png new file mode 100644 index 00000000000..0399cf76427 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/logz-io-verification/static/logz-verification-provider-00.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/logz-io-verification/static/verify-deployments-with-logz-io-01.png b/docs/first-gen/continuous-delivery/continuous-verification/logz-io-verification/static/verify-deployments-with-logz-io-01.png new file mode 100644 index 00000000000..655f0aa31bd Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/logz-io-verification/static/verify-deployments-with-logz-io-01.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/logz-io-verification/verify-deployments-with-logz-io.md b/docs/first-gen/continuous-delivery/continuous-verification/logz-io-verification/verify-deployments-with-logz-io.md new file mode 100644 index 00000000000..cf34778bd06 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/logz-io-verification/verify-deployments-with-logz-io.md @@ -0,0 +1,98 @@ +--- +title: Verify Deployments with Logz.io +description: Verify, rollback, and improve deployments with Harness and Logz.io. +sidebar_position: 20 +helpdocs_topic_id: vbl1xlad1e +helpdocs_category_id: j3m3gbxk88 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness can analyze Logz.io data to verify, rollback, and improve deployments. To apply this analysis to your deployments, you set up Logz.io as a verification step in a Harness Workflow. + +This topic covers the process to set up Logz.io in a Harness Workflow, and provides a summary of Harness verification results. + +In order to obtain the names of the host(s), pod(s), or container(s) where your service is deployed, the verification provider should be added to your Workflow *after* you have run at least one successful deployment. + +### Before You Begin + +* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) +* [Connect to Logz.io](logz-verification-provider.md) + +### Limitations + +You must have a Logz.io Enterprise account to generate the API tokens required to integrate with Harness. (Logz.io Pro and Community accounts do not support token generation.) + +### Step 1: Set up Deployment Verification + +To verify your deployment with Logz.io, do the following: + +1. Ensure that you have added Logz.io as a Harness Verification Provider, as described in [Connect to Logz.io](logz-verification-provider.md). +2. In your Workflow, under **Verify Service**, click **Add Step**, and select **Logz**. +3. Click **Next**. 
The **Logz** settings appear. + +### Step 2: Select Logz Server + +In **Logz** **Server**, select the server you added when you set up the Logz verification provider in [Connect to Logz.io](logz-verification-provider.md). + +You can also enter variable expressions, such as: `${serviceVariable.logz_connector_name}`. + +### Step 3: Query + +In **Query**, enter search keywords for your query, such as **error or exception**. + +The keywords are searched against the logs identified in the **Message** setting (see below). + +You can also enter variable expressions, such as: `error OR ${serviceVariable.error_type}` + +### Step 4: Query Type + +Select the query type for the value entered in the **Hostname Field**. The queries accept text, numerics, and dates. For MATCH and MATCH\_PHRASE types, the input is analyzed and the query is constructed. + +1. **TERM** finds documents that contain the exact term specified in the entered value. See [ELK documentation on TERM queries](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-term-query.html#query-dsl-term-query) for more information. +2. **MATCH\_PHRASE** finds documents that contain the terms specified in the exact order of entries in the analyzed text. See [ELK documentation on MATCH\_PHRASE queries](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-match-query.html#_phrase) for more information. +3. **MATCH** finds documents that contain the entries in the analyzed text in any order. See [ELK documentation on MATCH queries](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-match-query.html) for more information. + +For more information, see [Elasticsearch Queries: A Thorough Guide](https://logz.io/blog/elasticsearch-queries/) and [Add dashboards and configure drilldown links](https://docs.logz.io/user-guide/infrastructure-monitoring/configure-grafana-drilldown-links.html) from Logz.io. 
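+As a rough sketch of how these three query types map onto the Elasticsearch query DSL that Logz.io uses under the hood (the `hostname` field and values here are purely illustrative; Harness constructs the actual query for you): + +``` +TERM:          { "query": { "term":         { "hostname": "prod-web-01" } } } +MATCH:         { "query": { "match":        { "hostname": "prod web 01" } } } +MATCH_PHRASE:  { "query": { "match_phrase": { "hostname": "prod web 01" } } } +``` + +With these bodies, TERM matches only documents whose field contains the exact term `prod-web-01`; MATCH matches documents containing any of the analyzed terms `prod`, `web`, or `01`, in any order; and MATCH\_PHRASE matches only documents containing all three terms in that exact order. 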
+ +### Step 5: Hostname Field + +In **Hostname Field**, enter the field name used in the logs that refers to the host/pod/container being monitored. + +This is similar to a [Logz.io dashboard query](https://docs.logz.io/user-guide/infrastructure-monitoring/configure-grafana-drilldown-links.html). + +### Step 6: Message + +In **Message Field**, enter the field by which the messages are usually indexed. This is typically a **log** field. + +You can also enter variable expressions, such as: `${serviceVariable.message_field}`. + +### Step 7: Timestamp Format + +In **Timestamp Format**, enter either a static value (such as `@timestamp`), or a variable expression such as: `${serviceVariable.timestamp_field}`. + +### Step 8: Expression for Host/Container name + +In **Expression for Host/Container name**, add an expression that evaluates to the host name value for the field you entered in the **Host Name Field** above. The default expression is **${instance.host.hostName}**. + +For AWS EC2 hostnames, use the expression `${instance.hostName}`. In order to obtain the names of the host where your service is deployed, the verification provider should be added to your workflow **after** you have run at least one successful deployment. + +### Step 9: Analysis Time Duration + +Set the duration for the verification step. If a verification step exceeds the value, the workflow [Failure Strategy](../../model-cd-pipeline/workflows/workflow-configuration.md#failure-strategy) is triggered. For example, if the Failure Strategy is **Ignore**, then the verification state is marked **Failed** but the workflow execution continues. + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#analysis-time-duration). + + +### Step 10: Baseline for Risk Analysis + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). 
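+Putting Steps 5 through 7 together: if your logs are shipped to Logz.io as JSON documents, a hypothetical document like the following would use `kubernetes.pod_name` as the **Hostname Field**, `log` as the **Message Field**, and `@timestamp` as the **Timestamp Format** value (all field names here are illustrative; use the fields your own log shipper actually emits): + +``` +{ +  "@timestamp": "2020-06-01T12:34:56.789Z", +  "kubernetes.pod_name": "harness-example-7d9f9", +  "log": "ERROR Connection refused while calling payment service" +} +``` 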
+ +### Harness Expression Support in CV Settings + +You can use expressions (`${...}`) for [Harness built-in variables](https://docs.harness.io/article/7bpdtvhq92-workflow-variables-expressions) and custom [Service](../../model-cd-pipeline/setup-services/service-configuration.md) and [Workflow](../../model-cd-pipeline/workflows/add-workflow-variables-new-template.md) variables in the settings of Harness Verification Providers. + +![](./static/verify-deployments-with-logz-io-01.png) + +Expression support lets you template your Workflow verification steps. You can add custom expressions for settings, and then provide values for those settings at deployment runtime. Or you can use Harness built-in variable expressions and Harness will provide values at deployment runtime automatically. + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/1-new-relic-connection-setup.md b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/1-new-relic-connection-setup.md new file mode 100644 index 00000000000..44851bbc66f --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/1-new-relic-connection-setup.md @@ -0,0 +1,69 @@ +--- +title: Connect to New Relic +description: Connect Harness to New Relic and verify the success of your deployments and live microservices. +sidebar_position: 10 +helpdocs_topic_id: iz45kpa10u +helpdocs_category_id: 1nci5420c8 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The first step in using New Relic with Harness is to set up a New Relic Verification Provider in Harness. + +A Harness Verification Provider is a connection to monitoring tools such as New Relic. Once Harness is connected, you can use Harness 24/7 Service Guard and Deployment Verification with your New Relic data and analysis. 
+ +### Before You Begin + +* See the [New Relic Verification Overview](../continuous-verification-overview/concepts-cv/new-relic-verification-overview.md). + +### Step 1: Add New Relic Verification Provider + +To connect a verification provider, do the following: + +1. Click **Setup**. +2. Click **Connectors**, and then click **Verification Providers**. +3. Click **Add Verification Provider**, and select **New Relic**. The **Add New Relic Verification Provider** dialog appears. + + ![](./static/1-new-relic-connection-setup-24.png) + +4. Complete the following fields of the **Add New Relic Verification Provider** dialog. + +### Step 2: Encrypted API Key + +For secrets and other sensitive settings, select or create a new [Harness Encrypted Text secret](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets). Enter the API key needed to connect with the server. For information on API keys, see [Access to REST API keys](https://docs.newrelic.com/docs/apis/getting-started/intro-apis/access-rest-api-keys) from New Relic. + +1. Log into New Relic. +2. On the home page, click your account name, and then click **Account Settings**. + + [![](./static/1-new-relic-connection-setup-25.png)](./static/1-new-relic-connection-setup-25.png) + +3. In the left menu, under **Integrations**, click **API keys**. + + [![](./static/1-new-relic-connection-setup-27.png)](./static/1-new-relic-connection-setup-27.png) + + The API keys are displayed. + + [![](./static/1-new-relic-connection-setup-29.png)](./static/1-new-relic-connection-setup-29.png) + + An index of Admin user's API keys appears below the account's REST API key. The list includes the Admin's full name and the date their key was last used. You can view your own Admin user's API key: From the Admin index, select **(Show key)** for your name. + +### Step 3: Account ID + +To get the account ID for your New Relic account, in the New Relic Dashboard, copy the number after the **/accounts/** portion of the URL. 
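+If you want to sanity-check a REST API key from a terminal before entering it in Harness, you can call the same New Relic REST API v2 that this integration uses (the key below is a placeholder; your account must have REST API access enabled): + +``` +curl -s 'https://api.newrelic.com/v2/applications.json' \ +  -H 'X-Api-Key:YOUR_REST_API_KEY' +``` + +A successful response is a JSON list of your applications; a 401 response typically means the key is invalid or lacks REST API access. 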
+ +### Step 4: Display Name + +Enter a display name for the provider. If you are going to use multiple providers of the same type, ensure you give each provider a different name. + +### Step 5: Usage Scope + +Usage scope is inherited from the secrets used in the settings. + +Pro or higher subscription level needed. For more information, see [Introduction to New Relic's REST API Explorer](https://docs.newrelic.com/docs/apis/rest-api-v2/api-explorer-v2/introduction-new-relics-rest-api-explorer) from New Relic. + +### Next Steps + +* [Monitor Applications 24/7 with New Relic](2-24-7-service-guard-for-new-relic.md) +* [New Relic Deployment Marker](3-new-relic-deployment-marker.md) +* [Verify Deployments with New Relic](4-verify-deployments-with-new-relic.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/2-24-7-service-guard-for-new-relic.md b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/2-24-7-service-guard-for-new-relic.md new file mode 100644 index 00000000000..9e678a00cd0 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/2-24-7-service-guard-for-new-relic.md @@ -0,0 +1,112 @@ +--- +title: Monitor Applications 24/7 with New Relic +description: Combined with New Relic, Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. +sidebar_position: 20 +helpdocs_topic_id: eke3rf093v +helpdocs_category_id: 1nci5420c8 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. For more information, see [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). + +You can add your New Relic monitoring to Harness 24/7 Service Guard in your Harness Application Environment. 
For a setup overview, see [Connect to New Relic](1-new-relic-connection-setup.md). + +This section assumes you have a Harness Application set up and containing a Service and Environment. For steps on setting up a Harness Application, see [Application Components](../../model-cd-pipeline/applications/application-configuration.md). + + +### Before You Begin + +* See the [New Relic Verification Overview](../continuous-verification-overview/concepts-cv/new-relic-verification-overview.md). +* See [Connect to New Relic](1-new-relic-connection-setup.md). + +### Visual Summary + +Here's an example of a 24/7 Service Guard configuration for New Relic. + +![](./static/2-24-7-service-guard-for-new-relic-37.png) + +### Step 1: Set Up 24/7 Service Guard for New Relic + +To set up 24/7 Service Guard for New Relic, do the following: + +1. Ensure that you have added New Relic as a Harness Verification Provider, as described in [Verification Provider Setup](#verification_provider_setup). +2. In your Harness Application, ensure that you have added a Service, as described in [Services](../../model-cd-pipeline/setup-services/service-configuration.md). For 24/7 Service Guard, you do not need to add an Artifact Source to the Service, or configure its settings. You simply need to create a Service and name it. It will represent your application for 24/7 Service Guard. +3. In your Harness Application, click **Environments**. +4. In **Environments**, ensure that you have added an Environment for the Service you added. For steps on adding an Environment, see [Environments](../../model-cd-pipeline/environments/environment-configuration.md). +5. Click the Environment for your Service. Typically, the **Environment Type** is **Production**. +6. In the **Environment** page, locate **24/7 Service Guard**. + + ![](./static/2-24-7-service-guard-for-new-relic-38.png) + + +7. In **24/7 Service Guard**, click **Add Service Verification**, and then click **New Relic**. 
+ + ![](./static/2-24-7-service-guard-for-new-relic-39.png) + +8. Fill out the dialog. The dialog has the following fields. + +For 24/7 Service Guard, the queries you define to collect logs are specific to the application or service you want monitored. Verification is application/service level. This is unlike Workflows, where verification is performed at the host/node/pod level. + +### Step 2: Display Name + +The name that will identify this service on the **Continuous Verification** dashboard. Use a name that indicates the environment and monitoring tool, such as **New Relic**. + +### Step 3: Service + +The Harness Service to monitor with 24/7 Service Guard. + +### Step 4: New Relic Server + +Select the New Relic Verification Provider to use. + +### Step 5: Application Name + +Select the Application Name used by the monitoring tool. If your New Relic account contains hundreds or thousands of applications, Harness requests that you enter the application name. You can just paste in the application name as it appears in the New Relic Applications page **Name** column. + +This is the application name used to aggregate data in the New Relic UI. You set both the license and the app name as part of the New Relic installation process. + +To find your application, in **New Relic**, click **Applications**. The list of applications is displayed on the **Applications** page. + +[![](./static/2-24-7-service-guard-for-new-relic-40.png)](./static/2-24-7-service-guard-for-new-relic-40.png) + +### Step 6: Custom Thresholds + +In the **Custom Thresholds** section, you can define **Ignore Hints**. These are rules that instruct Harness to skip certain metrics/value combinations from verification analysis. + +To configure these rules, see [Apply Custom Thresholds to 24/7 Service Guard](../24-7-service-guard/custom-thresholds-24-7.md). + +### Step 7: Algorithm Sensitivity + +Specify the sensitivity to determine what events are identified as anomalies. 
See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria). + +### Step 8: Enable 24/7 Service Guard + +Enable this setting to turn on 24/7 Service Guard. If you simply want to set up 24/7 Service Guard, but not enable it, leave this setting disabled. + +### Step 9: Verify Settings + +When you are finished, the dialog will look something like this: + +![](./static/2-24-7-service-guard-for-new-relic-42.png) + +1. Click **Test**. Harness verifies the settings you entered. +2. Click **Submit**. New Relic is now configured for 24/7 Service Guard monitoring. + + ![](./static/2-24-7-service-guard-for-new-relic-43.png) + +To see the running 24/7 Service Guard analysis, click **Continuous Verification**. + +The 24/7 Service Guard dashboard displays the production verification results. + +For information on using the dashboard, see [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). + +![](./static/2-24-7-service-guard-for-new-relic-44.png) + +### Next Steps + +* [New Relic Deployment Marker](3-new-relic-deployment-marker.md) +* [Verify Deployments with New Relic](4-verify-deployments-with-new-relic.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/3-new-relic-deployment-marker.md b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/3-new-relic-deployment-marker.md new file mode 100644 index 00000000000..b23172d3348 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/3-new-relic-deployment-marker.md @@ -0,0 +1,144 @@ +--- +title: New Relic Deployment Marker +description: Use the New Relic REST API v2 to record Harness deployments and then view them in the New Relic APM Deployments page. 
+sidebar_position: 30 +helpdocs_topic_id: 5zh0ijlupr +helpdocs_category_id: 1nci5420c8 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can use the New Relic REST API v2 to record Harness deployments and then view them in the New Relic APM **Deployments** page and in the **Event** log list on the **Overview** page. For more information, see + [REST API Procedures](https://docs.newrelic.com/docs/apm/new-relic-apm/maintenance/record-deployments#api) from New Relic. + +### Review: New Relic Deployment Markers + + +In a Harness Workflow, you can add a **New Relic Deployment Marker** to perform the POST as part of your Workflow: + +![](./static/3-new-relic-deployment-marker-34.png) + +The result is similar to using a cURL command that sends a POST to the API to record a deployment. See the `deployment` section in this example: + + +``` +curl -X POST 'https://api.newrelic.com/v2/applications/${APP_ID}/deployments.json' \ + -H 'X-Api-Key:${APIKEY}' -i \ + -H 'Content-Type: application/json' \ + -d \ +'{ + "deployment": { + "revision": "REVISION", + "changelog": "Added: /v2/deployments.rb, Removed: None", + "description": "Added a deployments resource to the v2 API", + "user": "datanerd@example.com" + } +}' +``` + +### Step 1: Add a New Relic Verification Provider + + +You connect Harness to New Relic using a Harness New Relic Verification Provider. + + +See + [Connect to New Relic](1-new-relic-connection-setup.md). + + +Later, when you add the **New Relic Deployment Marker** step in your Workflow, the API key parameter (shown in the cURL example above) is provided by the credentials you entered in the New Relic Verification Provider. + + +### Step 2: Add New Relic Deployment Marker + + +New Relic Deployment Marker is only available within a Workflow deployment phase in the **Verify** section. + + + + +![](./static/3-new-relic-deployment-marker-35.png) + +You cannot use it in the **Pre-deployment Steps** of a canary Workflow. + + +1. 
To select the **New Relic Deployment Marker** step, click **Add Step**, and then, in the **Utility** section, select **New Relic Deployment Marker**.
+2. In **New Relic Server**, select the
+ [New Relic Verification Provider](1-new-relic-connection-setup.md) you added.
+
+
+You can also set up a
+ [Service or Workflow variable](https://docs.harness.io/article/9dvxcegm90-variables) in the **New Relic Server** setting, such as: `${serviceVariable.new_relic_connector_name}`.
+
+
+If the **New Relic Server** field contains an expression, the **Application Name** field must also use an expression.
+The App ID parameter is provided by the **Application Name** you select. The list of applications is pulled from the New Relic Server you selected. You can also enter variable expressions, such as: `${app.name}`.
+
+
+The **Body** section contains the standard JSON content as in the cURL example:
+
+
+```
+{
+  "deployment": {
+    "revision": "${artifact.buildNo}",
+    "description": "Harness Deployment via workflow ${workflow.name}",
+    "user": "${workflow.name}"
+  }
+}
+```
+
+Harness uses some
+ [built-in variables](https://docs.harness.io/article/9dvxcegm90-variables) to provide the revision information and other settings.
+
+
+Now when the Workflow is deployed, you will see the Deployment Marker vertical line in New Relic:
+
+
+
+![](./static/3-new-relic-deployment-marker-36.png)
+
+### YAML Example
+
+
+Here's an example YAML schema that you can use for New Relic Deployment Marker. 
+ + +``` +- type: NEW_RELIC_DEPLOYMENT_MARKER + name: New Relic Deployment Marker + properties: + analysisServerConfigId: 1crZaE-DQrKE_YFmH97CSg + applicationId: "${service.name}" + body: |- + { + "deployment": { + "revision": "${artifact.buildNo}", + "description": "Harness Deployment via workflow ${workflow.name}", + "user": "${workflow.name}" + } + } + templateUuid: null + templateVariables: null + templateVersion: null + +``` + + +For **applicationId**, you can use a Harness built-in variable expression such as **${service.name}** or a Workflow or Service variable. + + +For more information about variables and expressions, see the following topics: + + +* [Add Service Config Variables](../../model-cd-pipeline/setup-services/add-service-level-config-variables.md) +* [What is a Harness Variable Expression?](https://docs.harness.io/article/9dvxcegm90-variables) + + +### See Also + + +* [Verify Deployments with New Relic](4-verify-deployments-with-new-relic.md) + + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/4-verify-deployments-with-new-relic.md b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/4-verify-deployments-with-new-relic.md new file mode 100644 index 00000000000..92bd475cb9c --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/4-verify-deployments-with-new-relic.md @@ -0,0 +1,244 @@ +--- +title: Verify Deployments with New Relic +description: Harness can analyze New Relic data to verify, rollback, and improve deployments. +sidebar_position: 40 +helpdocs_topic_id: 9rc6pryw97 +helpdocs_category_id: 1nci5420c8 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The following procedure describes how to add New Relic as a verification step in a Harness workflow. For more information about workflows, see [Add a Workflow](../../model-cd-pipeline/workflows/workflow-configuration.md). 
+
+Once you run a deployment and New Relic performs verification, Harness machine-learning verification analysis will assess the risk level of the deployment.
+
+### Before You Begin
+
+* See the [New Relic Verification Overview](../continuous-verification-overview/concepts-cv/new-relic-verification-overview.md).
+* See [Connect to New Relic](1-new-relic-connection-setup.md).
+
+
+### Step 1: Set Up the Deployment Verification
+
+To verify your deployment with New Relic, do the following:
+
+1. Ensure that you have added New Relic as a verification provider, as described in [Connect to New Relic](1-new-relic-connection-setup.md).
+2. In your workflow, under **Verify Service**, click **Add Verification**.
+
+   ![](./static/4-verify-deployments-with-new-relic-00.png)
+
+3. In the resulting **Add Step** settings, select **Performance Monitoring** > **New Relic**.
+
+   ![](./static/4-verify-deployments-with-new-relic-01.png)
+
+4. Click **Next**. The **Configure New Relic** settings appear.
+
+   ![](./static/4-verify-deployments-with-new-relic-02.png)
+
+### Step 2: New Relic Server
+
+In **Harness**, click **Setup** > **Connectors** > **Verification Providers**. The New Relic verification provider(s) are listed. The New Relic logo is next to each New Relic provider.
+
+Select the server you added when you set up the New Relic verification provider.
+
+You can also enter variable expressions, such as: `${serviceVariable.new_relic_connector_name}`
+
+If the **New Relic Server** field contains an expression, the **Application Name** field must also use an expression.
+
+### Step 3: Application Name
+
+Select the application to use for this verification step.
+
+If your New Relic account contains hundreds or thousands of applications, Harness requests that you enter the application name. You can just paste in the application name as it appears in the New Relic Applications page **Name** column. 
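If your account has too many applications to scan the **Applications** page by eye, you can also look the exact name up with the New Relic REST API v2 applications endpoint. The sketch below is illustrative only and is not part of the Harness setup; the API key and application name are placeholders you must replace with your own values.

```shell
# Placeholders -- substitute your own New Relic REST API key and application name.
APIKEY="YOUR_NEW_RELIC_REST_API_KEY"
APP_NAME="My Application"

# List applications whose name matches APP_NAME. In the response, each
# entry's "name" field is the exact value to paste into the Application
# Name setting, and "id" is the App ID used elsewhere in these topics.
curl -s -G 'https://api.newrelic.com/v2/applications.json' \
  -H "X-Api-Key:${APIKEY}" \
  --data-urlencode "filter[name]=${APP_NAME}"
```

The returned `applications[].name` values correspond to the **Name** column on the New Relic **Applications** page.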
+
+This is the application name used to aggregate data in the New Relic UI. You set both the license and the app name as part of the New Relic installation process.
+
+You can also enter variable expressions, such as: `${app.name}`
+
+If the **New Relic Server** field contains an expression, the **Application Name** field must also use an expression. To find your application, in **New Relic**, click **Applications**. The list of applications is displayed on the **Applications** page.
+
+[![](./static/4-verify-deployments-with-new-relic-03.png)](./static/4-verify-deployments-with-new-relic-03.png)
+
+### Step 4: Custom Thresholds
+
+In the **Custom Thresholds** section, define two types of rules that override normal verification behavior:
+
+* **Ignore Hints** instruct Harness to skip certain metrics/value combinations from verification analysis.
+* **Fast-Fail Hints** cause a Workflow to promptly enter a failed state.
+
+You can configure the following metric types as part of Custom Thresholds: Apdex Score, Average Response Time, Error, and Requests Per Minute (in Workflow).
+
+To configure these rules, see [Apply Custom Thresholds to Deployment Verification](../tuning-tracking-verification/custom-thresholds.md).
+
+### Step 5: Expression for Host/Container Name
+
+The expression entered here should resolve to a host/container name in your deployment environment. For instructions on how to use the **Guide From Example** feature, see [Guide From Example](#guide_from_example).
+
+When you are setting up the workflow for the first time, Harness will not be able to help you create an expression in the **Guide From Example** feature because there has not been a host/container deployed yet. For this reason, you should add the Verify Step **after** you have done one successful deployment.
+
+For AWS EC2 hostnames, use the expression `${instance.hostName}`. To ensure that you pick the right name when using **Guide From Example**, you can use a host name in New Relic as a guide. 
In **New Relic**, click your application, click the left menu, and then click **JVMs**. The host names are listed in the first column of the table.
+
+[![](./static/4-verify-deployments-with-new-relic-05.png)](./static/4-verify-deployments-with-new-relic-05.png)
+
+The expression that you provide in **Expression for Host/Container Name** should evaluate to the names here.
+
+### Step 6: Analysis Time Duration
+
+You can use integers and expressions in the **Analysis Time Duration** field.
+
+See [Harness Variable Expression](https://docs.harness.io/article/9dvxcegm90-variables) and [Analysis Time Duration](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#analysis-time-duration).
+
+### Step 7: Baseline for Risk Analysis
+
+See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md).
+
+### Step 8: Execute with previous steps
+
+Select this checkbox to run this verification step in parallel with the previous steps in **Verify Service**.
+
+### Step 9: Include instances from previous phases
+
+If you are using this verification step in a multi-phase deployment, select this checkbox to include instances used in previous phases when collecting data. Do not apply this setting to the first phase in a multi-phase deployment.
+
+### Step 10: Wait interval before execution
+
+Set how long the deployment process should wait before executing the verification step.
+
+### Review: Guide from Example
+
+This section explains how to use the **Guide From Example** option next to the **Expression for Host/Container name** field, but the same information applies when the **Guide From Example** option is next to any other field, such as a **Message** field. In the New Relic verification step's settings, you can see the **Guide From Example** option next to the **Expression for Host/Container name** field. 
This option lets you select the host(s), pod(s), or container(s) for Harness to use when performing verification.
+
+You select the host, pod, or container in **Guide From Example**, and an expression is added to the **Expression for Host/Container name** field. The default expression is **${instance.host.hostName}**.
+
+![](./static/4-verify-deployments-with-new-relic-07.png)
+
+To obtain the names of the host(s), pod(s), or container(s) where your service is deployed, the verification provider should be added to your workflow **after** you have run at least one successful deployment. Then the **Guide From Example** feature can display the host or container name(s) for you to select. To ensure that you pick the right name when using **Guide From Example**, you can use a host name in New Relic as a guide.
+
+To use **Guide From Example** for a host or container name expression, do the following:
+
+1. In **New Relic**, click your application, click the left menu, and then click **JVMs**. The host names are listed in the first column of the table.
+
+   ![](./static/4-verify-deployments-with-new-relic-08.png)
+
+   The expression that you provide in **Expression for Host/Container Name** should evaluate to the names here.
+
+2. In your Harness Workflow's **Configure New Relic** settings, click **Guide From Example**. The **Expression for Host Name** popover appears.
+   The dialog shows the Service, Environment, and Service Infrastructure used for this Workflow.
+
+   ![](./static/4-verify-deployments-with-new-relic-09.png)
+
+3. In **Host**, click the name of the host to use when testing verification. Match the hostname from the New Relic JVMs to the hostname in the Expression for Host Name popover:
+
+   ![](./static/4-verify-deployments-with-new-relic-10.png)
+
+4. Click **SUBMIT**. The YAML for the host appears. Look for the **host** section. 
+
+   ![](./static/4-verify-deployments-with-new-relic-11.png)
+
+   You want to use a **hostName** label in the **host** section. Do not use a **hostName** label outside of that section.
+
+   ![](./static/4-verify-deployments-with-new-relic-12.png)
+
+5. To identify which label to use to build the expression, compare the host/container names in the YAML with the host names in the New Relic **JVMs** page.
+
+   ![](./static/4-verify-deployments-with-new-relic-13.png)
+
+6. Locate the host/pod/container to use, and click the **label** to select the expression. For example, if you clicked the **hostName** label, the expression **${host.hostName}** is added to the **Expression for Host/Container name** field. Click back in the main dialog to close the **Guide From Example**.
+7. At the bottom of the New Relic dialog, click **TEST**.
+
+   ![](./static/4-verify-deployments-with-new-relic-14.png)
+
+   A new **Expression for Host Name** popover appears.
+
+   ![](./static/4-verify-deployments-with-new-relic-15.png)
+
+8. In **Host**, select the same host you selected last time, and then click **RUN**. Verification for the host is found.
+
+   ![](./static/4-verify-deployments-with-new-relic-16.png)
+
+   If you have errors, see [Troubleshooting](#troubleshooting).
+
+9. Click back in the **New Relic** dialog and click **SUBMIT**. The New Relic verification step is added to your workflow.
+
+   ![](./static/4-verify-deployments-with-new-relic-17.png)
+
+Using **Guide From Example** for other dialog fields is the same process as above.
+
+### Review: Templatize New Relic Verification
+
+Once you have created a New Relic verification step, you can templatize certain settings. This enables you to use the New Relic verification step in the Workflow (and multiple Pipelines) without having to provide settings until runtime.
+
+You templatize settings by clicking the **[T]** icon next to the setting. 
+
+![](./static/4-verify-deployments-with-new-relic-18.png)
+
+The settings are replaced by Workflow variables:
+
+![](./static/4-verify-deployments-with-new-relic-19.png)
+
+You will now see them in the **Workflow Variables** section of the Workflow:
+
+![](./static/4-verify-deployments-with-new-relic-20.png)
+
+When you deploy the Workflow, **Start New Deployment** prompts you to enter values for the templatized settings:
+
+![](./static/4-verify-deployments-with-new-relic-21.png)
+
+You can select the necessary settings and deploy the Workflow.
+
+You can also pass variables into a Workflow from a Trigger that can be used for templatized values. For more information, see [Passing Variables into Workflows and Pipelines from Triggers](../../model-cd-pipeline/expressions/passing-variable-into-workflows.md).
+
+### Review: Harness Expression Support in CV Settings
+
+You can use expressions (`${...}`) for [Harness built-in variables](https://docs.harness.io/article/7bpdtvhq92-workflow-variables-expressions) and custom [Service](../../model-cd-pipeline/setup-services/service-configuration.md) and [Workflow](../../model-cd-pipeline/workflows/add-workflow-variables-new-template.md) variables in the settings of Harness Verification Providers.
+
+![](./static/4-verify-deployments-with-new-relic-22.png)
+
+Expression support lets you templatize your Workflow verification steps. You can add custom expressions for settings, and then provide values for those settings at deployment runtime. Or you can use Harness built-in variable expressions and Harness will provide values at deployment runtime automatically.
+
+### Step 11: View Verification Results
+
+Once you have deployed your workflow (or pipeline) using the New Relic verification step, you can automatically verify cloud application and infrastructure performance across your deployment.
+
+New Relic APIs will return transactions with and without data, so Harness checks for load. 
Harness gets all the web transactions and then fetches metric data to find out if they have load. If load exists, then Harness fetches node-level data. If none of the web transactions have load, Harness doesn't collect anything. When there is no load, Harness can't create a baseline.
+
+#### Workflow Verification
+
+To see the results of Harness machine-learning evaluation of your New Relic verification, in your workflow or pipeline deployment you can expand the **Verify Service** step and then click the **New Relic** step.
+
+![](./static/4-verify-deployments-with-new-relic-23.png)
+
+#### Continuous Verification
+
+You can also see the evaluation in the **Continuous Verification** dashboard. The workflow verification view is for the DevOps user who developed the workflow. The **Continuous Verification** dashboard is where all future deployments are displayed for developers and others interested in deployment analysis.
+
+To learn about the verification analysis features, see the following sections.
+
+#### Transaction Analysis
+
+
+
+| | |
+| --- | --- |
+| **Execution details:** See the details of verification execution. Total is the total time the verification step took, and Analysis duration is how long the analysis took. **Risk level analysis:** Get an overall risk level and view the cluster chart to see events. **Transaction-level summary:** See a summary of each transaction with the query string, error values comparison, and a risk analysis summary. | ![](./static/_nr-00-trx-anal.png) |
+
+#### Execution Analysis
+
+
+
+| | |
+| --- | --- |
+| ![](./static/_nr-01-ex-anal.png) | **Event type:** Filter cluster chart events by Unknown Event, Unexpected Frequency, Anticipated Event, Baseline Event, and Ignore Event. **Cluster chart:** View the chart to see how the selected events contrast. Click each event to see its log details. 
|
+
+#### Event Management
+
+
+
+| | |
+| --- | --- |
+| **Event-level analysis:** See the threat level for each event captured. **Tune event capture:** Remove events from analysis at the service, workflow, execution, or overall level. **Event distribution:** Click the chart icon to see an event distribution including the measured data, baseline data, and event frequency. | ![](./static/_nr-02-ev-mgmnt.png) |
+
+### Next Steps
+
+* [Troubleshoot Deployment Verification with New Relic](5-troubleshooting-new-relic.md)
+
diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/5-troubleshooting-new-relic.md b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/5-troubleshooting-new-relic.md
new file mode 100644
index 00000000000..63258373ce9
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/5-troubleshooting-new-relic.md
@@ -0,0 +1,43 @@
+---
+title: Troubleshooting New Relic
+description: Resolutions to common configuration problems with New Relic.
+sidebar_position: 50
+helpdocs_topic_id: 3d5sv5p9pf
+helpdocs_category_id: 1nci5420c8
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+The following are resolutions to common configuration problems.
+
+### Workflow Step Test Error
+
+When you click **TEST** in the **New Relic** workflow dialog's **Expression for Host Name** popover, you should get provider information:
+
+![](./static/5-troubleshooting-new-relic-31.png)
+
+The following error message can occur when testing the New Relic verification step in your workflow:
+
+
+```
+NEWRELIC_CONFIGURATION_ERROR: Error while saving New Relic configuration. No node with name ${hostName} found reporting to new relic
+```
+Here is the error in the Expression for Host Name popover:
+
+![](./static/5-troubleshooting-new-relic-32.png)
+
+#### Cause
+
+The expression in the **Expression for Host/Container name** field is incorrect. 
Typically, this occurs when the wrong **hostName** label is selected to create the expression in the **Expression for Host/Container name** field.
+
+#### Solution
+
+Follow the steps in [Guide From Example](#guide_from_example) again to select the correct expression. Ensure that the **hostName** label selected is under the **host** section of the YAML.
+
+![](./static/5-troubleshooting-new-relic-33.png)
+
+### Next Steps
+
+* [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code)
+* [Managing Users and Groups (RBAC)](https://docs.harness.io/article/ven0bvulsj-users-and-permissions)
+
diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/_category_.json b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/_category_.json
new file mode 100644
index 00000000000..5ed98ebb55e
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/_category_.json
@@ -0,0 +1,14 @@
+{
+  "label": "New Relic Verification",
+  "position": 100,
+  "collapsible": true,
+  "collapsed": true,
+  "className": "red",
+  "link": {
+    "type": "generated-index",
+    "title": "New Relic Verification"
+  },
+  "customProps": {
+    "helpdocs_category_id": "1nci5420c8"
+  }
+}
\ No newline at end of file
diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/1-new-relic-connection-setup-24.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/1-new-relic-connection-setup-24.png
new file mode 100644
index 00000000000..91de8ed1a5d
Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/1-new-relic-connection-setup-24.png differ
diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/1-new-relic-connection-setup-25.png
b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/1-new-relic-connection-setup-25.png new file mode 100644 index 00000000000..0651e275272 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/1-new-relic-connection-setup-25.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/1-new-relic-connection-setup-26.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/1-new-relic-connection-setup-26.png new file mode 100644 index 00000000000..0651e275272 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/1-new-relic-connection-setup-26.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/1-new-relic-connection-setup-27.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/1-new-relic-connection-setup-27.png new file mode 100644 index 00000000000..ac93f051f58 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/1-new-relic-connection-setup-27.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/1-new-relic-connection-setup-28.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/1-new-relic-connection-setup-28.png new file mode 100644 index 00000000000..ac93f051f58 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/1-new-relic-connection-setup-28.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/1-new-relic-connection-setup-29.png 
b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/1-new-relic-connection-setup-29.png new file mode 100644 index 00000000000..636dec7461f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/1-new-relic-connection-setup-29.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/1-new-relic-connection-setup-30.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/1-new-relic-connection-setup-30.png new file mode 100644 index 00000000000..636dec7461f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/1-new-relic-connection-setup-30.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-37.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-37.png new file mode 100644 index 00000000000..917c3f94f80 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-37.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-38.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-38.png new file mode 100644 index 00000000000..a0b9a6cf372 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-38.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-39.png 
b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-39.png new file mode 100644 index 00000000000..4a4670d1bae Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-39.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-40.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-40.png new file mode 100644 index 00000000000..2f28b603125 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-40.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-41.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-41.png new file mode 100644 index 00000000000..2f28b603125 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-41.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-42.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-42.png new file mode 100644 index 00000000000..917c3f94f80 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-42.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-43.png 
b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-43.png new file mode 100644 index 00000000000..75e5abf3171 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-43.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-44.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-44.png new file mode 100644 index 00000000000..658aa80f220 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/2-24-7-service-guard-for-new-relic-44.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/3-new-relic-deployment-marker-34.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/3-new-relic-deployment-marker-34.png new file mode 100644 index 00000000000..cd4ced18b06 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/3-new-relic-deployment-marker-34.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/3-new-relic-deployment-marker-35.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/3-new-relic-deployment-marker-35.png new file mode 100644 index 00000000000..4b92c70b016 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/3-new-relic-deployment-marker-35.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/3-new-relic-deployment-marker-36.png 
b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/3-new-relic-deployment-marker-36.png new file mode 100644 index 00000000000..7df55c019b6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/3-new-relic-deployment-marker-36.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-00.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-00.png new file mode 100644 index 00000000000..82249486595 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-00.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-01.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-01.png new file mode 100644 index 00000000000..f633deaa8c7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-01.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-02.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-02.png new file mode 100644 index 00000000000..cc6d951304f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-02.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-03.png 
b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-03.png new file mode 100644 index 00000000000..2f28b603125 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-03.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-04.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-04.png new file mode 100644 index 00000000000..2f28b603125 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-04.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-05.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-05.png new file mode 100644 index 00000000000..9385b92f8de Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-05.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-06.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-06.png new file mode 100644 index 00000000000..9385b92f8de Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-06.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-07.png 
b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-07.png new file mode 100644 index 00000000000..a3d1c82b6ef Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-07.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-08.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-08.png new file mode 100644 index 00000000000..9385b92f8de Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-08.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-09.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-09.png new file mode 100644 index 00000000000..c94ef22fae1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-09.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-10.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-10.png new file mode 100644 index 00000000000..f62af4e1aae Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-10.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-11.png 
b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-11.png new file mode 100644 index 00000000000..13f78737e67 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-11.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-12.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-12.png new file mode 100644 index 00000000000..1b41d618969 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-12.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-13.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-13.png new file mode 100644 index 00000000000..aa0cfcd885d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-13.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-14.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-14.png new file mode 100644 index 00000000000..b14b8b95f4b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-14.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-15.png 
b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-15.png new file mode 100644 index 00000000000..8fabc8fa37f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-15.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-16.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-16.png new file mode 100644 index 00000000000..dcf87c79473 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-16.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-17.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-17.png new file mode 100644 index 00000000000..2944606ce0f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-17.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-18.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-18.png new file mode 100644 index 00000000000..a0dc1e1a557 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-18.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-19.png 
b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-19.png new file mode 100644 index 00000000000..810b34571f7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-19.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-20.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-20.png new file mode 100644 index 00000000000..e0a34587a70 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-20.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-21.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-21.png new file mode 100644 index 00000000000..9a3ca95081d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-21.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-22.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-22.png new file mode 100644 index 00000000000..655f0aa31bd Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-22.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-23.png 
b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-23.png new file mode 100644 index 00000000000..95f52e0fa6b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/4-verify-deployments-with-new-relic-23.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/5-troubleshooting-new-relic-31.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/5-troubleshooting-new-relic-31.png new file mode 100644 index 00000000000..dcf87c79473 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/5-troubleshooting-new-relic-31.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/5-troubleshooting-new-relic-32.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/5-troubleshooting-new-relic-32.png new file mode 100644 index 00000000000..7055fece321 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/5-troubleshooting-new-relic-32.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/5-troubleshooting-new-relic-33.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/5-troubleshooting-new-relic-33.png new file mode 100644 index 00000000000..1b41d618969 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/5-troubleshooting-new-relic-33.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/_nr-00-trx-anal.png 
b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/_nr-00-trx-anal.png new file mode 100644 index 00000000000..dbc95c281ce Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/_nr-00-trx-anal.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/_nr-01-ex-anal.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/_nr-01-ex-anal.png new file mode 100644 index 00000000000..96cdcd2e943 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/_nr-01-ex-anal.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/_nr-02-ev-mgmnt.png b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/_nr-02-ev-mgmnt.png new file mode 100644 index 00000000000..778ad1dbe21 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/new-relic-verification/static/_nr-02-ev-mgmnt.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/1-prometheus-connection-setup.md b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/1-prometheus-connection-setup.md new file mode 100644 index 00000000000..a939dc51dc1 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/1-prometheus-connection-setup.md @@ -0,0 +1,49 @@ +--- +title: Connect to Prometheus +description: Connect Harness to Prometheus and verify the success of your deployments and live microservices. 
+sidebar_position: 10 +helpdocs_topic_id: da3je0ck3a +helpdocs_category_id: 177rlmujlu +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The first step in using Prometheus with Harness is to set up a Prometheus Verification Provider in Harness. + +A Harness Verification Provider is a connection to monitoring tools such as Prometheus. Once Harness is connected, you can use Harness 24/7 Service Guard and Deployment Verification with your Prometheus data and analysis. + +### Before You Begin + +* See the [Prometheus Verification Overview](../continuous-verification-overview/concepts-cv/prometheus-verification-overview.md). + +### Step 1: Add Prometheus Verification Provider + +To add Prometheus as a Verification Provider, do the following: + +1. Click **Setup**. +2. Click **Connectors**, and then click **Verification Providers**. +3. Click **Add Verification Provider**, and select **Prometheus**. The **Add Prometheus Verification Provider** dialog appears. ![](./static/1-prometheus-connection-setup-00.png) + +### Step 2: URL + +Enter the URL of the Prometheus server. + +You cannot use a Grafana URL. + +### Step 3: Display Name + +Enter a display name for the provider. If you are going to use multiple providers of the same type, ensure you give each provider a different name. + +### Step 4: Usage Scope + +If you want to restrict the use of a provider to specific applications and environments, do the following: + +In **Usage Scope**, click the drop-down under **Applications**, and click the name of the application. + +In **Environments**, click the name of the environment. 
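A quick way to sanity-check the URL you entered in Step 2 is to call the Prometheus HTTP API directly: Prometheus serves expression queries under `/api/v1/query`, while a Grafana URL will not answer that path with the Prometheus JSON envelope. A minimal sketch (the hostname below is a hypothetical example, not part of the Harness setup):

```python
from urllib.parse import urljoin

def prometheus_query_endpoint(base_url: str) -> str:
    """Build the instant-query endpoint for a Prometheus base URL."""
    if not base_url.endswith("/"):
        base_url += "/"  # keep any path prefix intact when joining
    return urljoin(base_url, "api/v1/query")

# e.g. curl "<endpoint>?query=up" should return JSON with "status": "success"
print(prometheus_query_endpoint("http://prometheus.example.com:9090"))
```

If the URL fronts Grafana instead of Prometheus, this endpoint will typically return an error rather than the `{"status": "success", ...}` response body.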
+ +### Next Steps + +* [Monitor Applications 24/7 with Prometheus](2-24-7-service-guard-for-prometheus.md) +* [Verify Deployments with Prometheus](3-verify-deployments-with-prometheus.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/2-24-7-service-guard-for-prometheus.md b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/2-24-7-service-guard-for-prometheus.md new file mode 100644 index 00000000000..2f5c394bb2c --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/2-24-7-service-guard-for-prometheus.md @@ -0,0 +1,111 @@ +--- +title: Monitor Applications 24/7 with Prometheus +description: Combined with Prometheus, Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. +sidebar_position: 20 +helpdocs_topic_id: i9d01kf32g +helpdocs_category_id: 177rlmujlu +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. For more information, see [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). + +Harness 24/7 Service Guard differs from Deployment Verification in that it monitors the live microservice whereas Deployment Verification monitors the hosts and nodes for the first 15 minutes following steady state. + +You can add your Prometheus monitoring to Harness 24/7 Service Guard in your Harness Application Environment. See [Connect to Prometheus](1-prometheus-connection-setup.md). + +This section assumes you have a Harness Application set up and containing a Service and Environment. For steps on setting up a Harness Application, see [Application Checklist](../../model-cd-pipeline/applications/application-configuration.md). 
+ +### Before You Begin + +* Set up a Harness Application, containing a Service and Environment. See [Create an Application](../../model-cd-pipeline/applications/application-configuration.md). +* See the [Prometheus Verification Overview](../continuous-verification-overview/concepts-cv/prometheus-verification-overview.md). + +### Visual Summary + +Here's an example of a 24/7 Service Guard configuration for Prometheus. + +![](./static/2-24-7-service-guard-for-prometheus-11.png) + +### Step 1: Set Up 24/7 Service Guard for Prometheus + +To set up 24/7 Service Guard for Prometheus, do the following: + +1. Ensure that you have added Prometheus as a Harness Verification Provider, as described in [Connect to Prometheus](1-prometheus-connection-setup.md). +2. In your Harness Application, ensure that you have added a Service, as described in [Services](../../model-cd-pipeline/setup-services/service-configuration.md). For 24/7 Service Guard, you do not need to add an Artifact Source to the Service, or configure its settings. You simply need to create a Service and name it. It will represent your application for 24/7 Service Guard. +3. In your Harness Application, click **Environments**. +4. In **Environments**, ensure that you have added an Environment for the Service you added. For steps on adding an Environment, see [Environments](../../model-cd-pipeline/environments/environment-configuration.md). +5. Click the Environment for your Service. Typically, the **Environment Type** is **Production**. +6. In the **Environment** page, locate **24/7 Service Guard**. + + ![](./static/2-24-7-service-guard-for-prometheus-12.png) + +7. In **24/7 Service Guard**, click **Add Service Verification**, and then click **Prometheus**. The **Prometheus** dialog appears. + + ![](./static/2-24-7-service-guard-for-prometheus-13.png) + +The **Prometheus** dialog has the following fields. 
+ +For 24/7 Service Guard, the queries you define to collect metrics are specific to the Application or Service you want monitored. (Verification is Application/Service level.) This is unlike Workflows, where deployment verification is performed at the host/node/pod level. + +### Step 2: Display Name + +Enter the name to identify this Service's Prometheus monitoring on the 24/7 Service Guard dashboard. + +### Step 3: Service + +The Harness Service to monitor with 24/7 Service Guard. + +### Step 4: Prometheus Server + +Select the server you added in [Connect to Prometheus](1-prometheus-connection-setup.md). + +### Step 5: Metric to Monitor + +Every time series is uniquely identified by its metric name and a set of key-value pairs, also known as labels. For more information, see [Data Model](https://prometheus.io/docs/concepts/data_model/) from Prometheus. A metric requires the following parameters: + +* **Metric Name:** The name of the metric defined in Prometheus. +* **Metric Type:** The type of metric (Response Time, Error, Throughput, or Value). +* **Group Name:** The transaction (service or request context) which the metric relates to. For example, Login or Hardware. +* **Query:** The API query required to retrieve the metric value. + +When you add your query in **Query**, you want the query to return a single time series result for the metric and transaction you identify. If it returns multiple results, Harness will not process your verification step. You can simply obtain your query from Prometheus and paste it into Harness. + +For example, here is a query in Prometheus: + +![](./static/2-24-7-service-guard-for-prometheus-14.png) + +See [Expression queries](https://prometheus.io/docs/prometheus/latest/querying/api/#expression-queries) from Prometheus for examples of queries, but always use the placeholders demonstrated above. + +You cannot use built-in Harness expressions such as `${service.name}` in the query. 
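You can check the single-time-series requirement before pasting a query into Harness: run the query against the Prometheus HTTP API (`GET /api/v1/query?query=...`) and count the entries in `data.result`. A minimal sketch, using an illustrative response payload (the pod names and values are made up):

```python
def series_count(api_response: dict) -> int:
    """Count the time series in a Prometheus /api/v1/query JSON response."""
    return len(api_response.get("data", {}).get("result", []))

# Shaped like a real Prometheus instant-query response; values are illustrative.
sample = {
    "status": "success",
    "data": {
        "resultType": "vector",
        "result": [
            {"metric": {"__name__": "container_cpu_usage_seconds_total",
                        "pod_name": "login-7c8f", "namespace": "harness"},
             "value": [1680000000, "42.5"]},
            {"metric": {"__name__": "container_cpu_usage_seconds_total",
                        "pod_name": "login-9d2a", "namespace": "harness"},
             "value": [1680000000, "17.1"]},
        ],
    },
}

if series_count(sample) != 1:
    # Two series came back, so this query needs more label filters
    # before Harness will accept it.
    print("Add more label filters:", series_count(sample), "time series returned")
```

Here the query matched two pods, so you would add more labels (for example, a specific `pod_name`) until exactly one series is returned.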
+ +#### Always Use Throughput with Error and Response Time Metrics + +Whenever you use the Error metric type, you should also add another metric for Throughput with the same Group Name. + +![](./static/2-24-7-service-guard-for-prometheus-16.png) + +Harness analyzes errors as an error percentage, and without throughput the error count does not provide much information. + +The same setup should be used with the Response Time metric. Whenever you set up a Response Time metric, set up a Throughput metric with the same Group Name. + +![](./static/2-24-7-service-guard-for-prometheus-17.png) + +### Step 6: Custom Thresholds + +In the **Custom Thresholds** section, you can define **Ignore Hints**. These are rules that instruct Harness to skip certain metrics/value combinations from verification analysis. + +To configure these rules, see [Apply Custom Thresholds to 24/7 Service Guard](../24-7-service-guard/custom-thresholds-24-7.md). + +### Step 7: Algorithm Sensitivity + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria). + +### Step 8: Enable 24/7 Service Guard + +Click the checkbox to enable 24/7 Service Guard. + +### Next Steps + +* [Verify Deployments with Prometheus](3-verify-deployments-with-prometheus.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/3-verify-deployments-with-prometheus.md b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/3-verify-deployments-with-prometheus.md new file mode 100644 index 00000000000..6faff274236 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/3-verify-deployments-with-prometheus.md @@ -0,0 +1,169 @@ +--- +title: Verify Deployments with Prometheus +description: Harness can analyze Prometheus data to verify, roll back, and improve deployments. 
+sidebar_position: 30 +helpdocs_topic_id: qkcn11esgb +helpdocs_category_id: 177rlmujlu +helpdocs_is_private: false +helpdocs_is_published: true +--- + +When Harness deploys a new application or service to the target environment defined in the workflow, it will immediately connect to the Prometheus Server and build a model of what it is observing. + +Next, Harness compares this model with previous deployment models to identify anomalies or regressions. If necessary, Harness rolls back to the previous working version automatically. For more information, see [Rollback Steps](../../model-cd-pipeline/workflows/workflow-configuration.md#rollback-steps). + + +### Before You Begin + +* Set up a Harness Application, containing a Service and Environment. See [Create an Application](../../model-cd-pipeline/applications/application-configuration.md). +* See the [Prometheus Verification Overview](../continuous-verification-overview/concepts-cv/prometheus-verification-overview.md). + +### Visual Summary + +Here's an example of a Prometheus setup for verification. + +![](./static/3-verify-deployments-with-prometheus-01.png) + +Here is an example of a deployment Pipeline Stage verified using Prometheus. + +![](./static/3-verify-deployments-with-prometheus-02.png) + +Under **Prometheus**, you can see that all Prometheus metrics have been validated by the Harness machine learning algorithms. Green indicates that there are no anomalies or regressions identified and the deployment is operating within its normal range. + +### Step 1: Set Up the Deployment Verification + +To verify your deployment with Prometheus, do the following: + +1. Ensure that you have added Prometheus as a verification provider, as described in [Prometheus Connection Setup](1-prometheus-connection-setup.md). +2. In your Workflow, under **Verify Service**, click **Add Verification**. +3. In the resulting **Add Step** settings, select **Performance Monitoring** > **Prometheus**. + + ![](./static/3-verify-deployments-with-prometheus-03.png) + +4. Click **Next**. 
The **Configure Prometheus** settings appear. + + ![](./static/3-verify-deployments-with-prometheus-04.png) + +The **Configure Prometheus** settings include the following fields. + +### Step 2: Prometheus Server + +Select the server you added in [Prometheus Connection Setup](1-prometheus-connection-setup.md). + +### Step 3: Metrics to Monitor + +Every time series is uniquely identified by its metric name and a set of key-value pairs, also known as labels. For more information, see [Data Model](https://prometheus.io/docs/concepts/data_model/) from Prometheus. A metric requires the following parameters: + +* **Metric Name:** The name of the metric defined in Prometheus. +* **Metric Type:** The type of metric (Response Time, Error, Infra, Throughput, or Value). +* **Group Name:** The transaction (service or request context) which the metric relates to. For example, Login or Hardware. +* **Query:** The API query required to retrieve the metric value. This query must include a placeholder for hostname, `$hostName`. + +When you add your query in **Query**, you want the query to return a single time series result for the metric and transaction you identify. If it returns multiple results, Harness will not process your verification step. For **Query**, you can simply copy your query from Prometheus and paste it into Harness, and then replace the actual hostname in the query with `$hostName`. 
+ +For example, here is a query in Prometheus: + +[![](./static/3-verify-deployments-with-prometheus-05.png)](./static/3-verify-deployments-with-prometheus-05.png) + +The actual query string is: + +`container_cpu_usage_seconds_total{pod_name="prometheus-deployment-7c878596ff-r8qrt",namespace="harness"}` + +When you paste that string into the Query field in Harness, you replace the `pod_name` value with `$hostName`: + +`container_cpu_usage_seconds_total{pod_name="$hostName",namespace="harness"}` + +#### Always Use Throughput with Error and Response Time Metrics + +Whenever you use the Error metric type, you should also add another metric for Throughput with the same Group Name. + +![](./static/3-verify-deployments-with-prometheus-07.png) + +Harness analyzes errors as an error percentage, and without throughput the error count does not provide much information. + +The same setup should be used with the Response Time metric. Whenever you set up a Response Time metric, set up a Throughput metric with the same Group Name. + +![](./static/3-verify-deployments-with-prometheus-08.png) + +### Step 4: Custom Thresholds + +In the **Custom Thresholds** section, define two types of rules that override normal verification behavior: + +* **Ignore Hints** instruct Harness to skip certain metrics/value combinations from verification analysis. +* **Fast-Fail Hints** cause a Workflow to promptly enter a failed state. + +To configure these rules, see [Apply Custom Thresholds to Deployment Verification](../tuning-tracking-verification/custom-thresholds.md). + +### Step 5: Analysis Time Duration + +Set the duration for the verification step. If a verification step exceeds the value, the workflow [Failure Strategy](../../model-cd-pipeline/workflows/workflow-configuration.md#failure-strategy) is triggered. For example, if the Failure Strategy is **Ignore**, then the verification state is marked **Failed** but the workflow execution continues. 
+ +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#analysis-time-duration). + +### Step 6: Baseline for Risk Analysis + +**Canary Analysis:** (Available in Workflows using the **Canary** and **Rolling** Workflow Types) + +Harness will compare the metrics received for the nodes deployed in each phase with metrics received for the rest of the nodes in the application. For example, if this phase deploys to 25% of your nodes, the metrics received from Prometheus during this deployment for these nodes will be compared with metrics received for the other 75% during the defined period of time. + +**Previous Analysis:** (Available in Workflows using the **Basic**, **Blue/Green**, and **Rolling** Workflow Types) + +Harness will compare the metrics received for the nodes deployed in each phase with metrics received for all the nodes during the previous deployment. For example, if this phase deploys V1.2 to node A, the metrics received from Prometheus during this deployment will be compared to the metrics for nodes A, B, and C during the previous deployment (V1.1). Previous Analysis is best used when you have predictable load, such as in a QA environment. + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). + +### Step 7: Algorithm Sensitivity + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria). + +### Step 8: Include instances from previous phases + +If you are using this verification step in a multi-phase deployment, select this checkbox to include instances used in previous phases when collecting data. Do not apply this setting to the first phase in a multi-phase deployment. 
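To make the Canary Analysis comparison in Step 6 concrete, here is an illustrative sketch of the idea only. This is not Harness's actual machine-learning algorithm: the 25% split, the mean comparison, and the tolerance factor are all simplified stand-ins.

```python
from statistics import mean

def split_canary(hosts, fraction=0.25):
    """Split hosts into the newly deployed (canary) group and the rest (control)."""
    cut = max(1, int(len(hosts) * fraction))
    return hosts[:cut], hosts[cut:]

def looks_anomalous(canary_values, control_values, tolerance=1.5):
    """Flag the canary if its mean metric value exceeds the control mean by
    more than `tolerance` times -- a deliberately crude stand-in for the
    statistical comparison a verification system performs."""
    return mean(canary_values) > tolerance * mean(control_values)

# Hypothetical pods and response times (ms) pulled from Prometheus.
hosts = ["pod-a", "pod-b", "pod-c", "pod-d"]
canary, control = split_canary(hosts)  # 1 canary pod vs. 3 control pods
response_ms = {"pod-a": 480.0, "pod-b": 210.0, "pod-c": 190.0, "pod-d": 205.0}

canary_vals = [response_ms[h] for h in canary]
control_vals = [response_ms[h] for h in control]
print(looks_anomalous(canary_vals, control_vals))  # -> True: canary is slower
```

The point of the sketch is the shape of the comparison: metrics from the phase's new nodes are judged against metrics from the rest of the fleet during the same window, rather than against a historical baseline.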
+ +### Step 9: Execute with previous steps + +Check this checkbox to run this verification step in parallel with the previous steps in **Verify Service**. + +### Step 10: Test Configuration + +Click **Test**. + +If a multiple-time-series error like the following occurs, add more filters to your query so that it returns only one time series. + + +``` +Error while saving Prometheus configuration. Multiple time series values are returned for metric name CPU and group name Hardware. Please add more filters to your query to return only one time series. +``` +Update your query to add more filters (like container\_name = "POD"), as follows: + + +``` +query=container_cpu_usage_seconds_total{pod_name="$hostName", container_name="POD"} +``` +### Review: Harness Expression Support in CV Settings + +You can use expressions (`${...}`) for [Harness built-in variables](https://docs.harness.io/article/7bpdtvhq92-workflow-variables-expressions) and custom [Service](../../model-cd-pipeline/setup-services/service-configuration.md) and [Workflow](../../model-cd-pipeline/workflows/add-workflow-variables-new-template.md) variables in the settings of Harness Verification Providers. + +![](./static/3-verify-deployments-with-prometheus-09.png) + +Expression support lets you template your Workflow verification steps. You can add custom expressions for settings, and then provide values for those settings at deployment runtime. Or you can use Harness built-in variable expressions and Harness will provide values at deployment runtime automatically. + +### Step 11: View Verification Results + +When Harness deploys a new application or service to the target environment defined in the workflow, it will immediately connect to the Prometheus Server and build a model of what it is observing. + +Next, Harness compares this model with previous deployment models to identify anomalies or regressions. If necessary, Harness rolls back to the previous working version automatically. 
For more information, see [Rollback Steps](../../model-cd-pipeline/workflows/workflow-configuration.md#rollback-steps). + +Here is an example of a deployment Pipeline Stage verified using Prometheus. + +![](./static/3-verify-deployments-with-prometheus-10.png) + +Under **Prometheus**, you can see that all Prometheus metrics have been validated by the Harness machine learning algorithms. Green indicates that there are no anomalies or regressions identified and the deployment is operating within its normal range. + +To see an overview of the verification UI elements, see [Continuous Verification Tools](https://docs.harness.io/article/xldc13iv1y-meet-harness#-continuous-verification-tools-). + +### Next Steps + +* [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code) +* [Managing Users and Groups (RBAC)](https://docs.harness.io/article/ven0bvulsj-users-and-permissions) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/_category_.json b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/_category_.json new file mode 100644 index 00000000000..e3778a03dd4 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "Prometheus Verification", + "position": 110, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "Prometheus Verification" + }, + "customProps": { + "helpdocs_category_id": "177rlmujlu" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/1-prometheus-connection-setup-00.png b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/1-prometheus-connection-setup-00.png new file mode 100644 index 00000000000..356eb0284d3 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/1-prometheus-connection-setup-00.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/2-24-7-service-guard-for-prometheus-11.png b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/2-24-7-service-guard-for-prometheus-11.png new file mode 100644 index 00000000000..3f9c91de565 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/2-24-7-service-guard-for-prometheus-11.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/2-24-7-service-guard-for-prometheus-12.png b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/2-24-7-service-guard-for-prometheus-12.png new file mode 100644 index 00000000000..a0b9a6cf372 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/2-24-7-service-guard-for-prometheus-12.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/2-24-7-service-guard-for-prometheus-13.png b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/2-24-7-service-guard-for-prometheus-13.png new file mode 100644 index 00000000000..3f9c91de565 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/2-24-7-service-guard-for-prometheus-13.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/2-24-7-service-guard-for-prometheus-14.png b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/2-24-7-service-guard-for-prometheus-14.png new file mode 100644 index 00000000000..b93cef1beec Binary files /dev/null 
and b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/2-24-7-service-guard-for-prometheus-14.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/2-24-7-service-guard-for-prometheus-15.png b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/2-24-7-service-guard-for-prometheus-15.png new file mode 100644 index 00000000000..b93cef1beec Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/2-24-7-service-guard-for-prometheus-15.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/2-24-7-service-guard-for-prometheus-16.png b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/2-24-7-service-guard-for-prometheus-16.png new file mode 100644 index 00000000000..63089c07122 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/2-24-7-service-guard-for-prometheus-16.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/2-24-7-service-guard-for-prometheus-17.png b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/2-24-7-service-guard-for-prometheus-17.png new file mode 100644 index 00000000000..86e114aeead Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/2-24-7-service-guard-for-prometheus-17.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-01.png b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-01.png new file mode 100644 index 00000000000..8dc2538a92e Binary files 
/dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-01.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-02.png b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-02.png new file mode 100644 index 00000000000..22ead9cc5b4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-02.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-03.png b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-03.png new file mode 100644 index 00000000000..01c59dbecbf Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-03.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-04.png b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-04.png new file mode 100644 index 00000000000..8dc2538a92e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-04.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-05.png b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-05.png new file mode 100644 index 
00000000000..b16bdf7bdf7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-05.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-06.png b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-06.png new file mode 100644 index 00000000000..b16bdf7bdf7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-06.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-07.png b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-07.png new file mode 100644 index 00000000000..63089c07122 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-07.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-08.png b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-08.png new file mode 100644 index 00000000000..86e114aeead Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-08.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-09.png b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-09.png new 
file mode 100644 index 00000000000..655f0aa31bd Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-09.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-10.png b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-10.png new file mode 100644 index 00000000000..22ead9cc5b4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/prometheus-verification/static/3-verify-deployments-with-prometheus-10.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/1-splunk-connection-setup.md b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/1-splunk-connection-setup.md new file mode 100644 index 00000000000..db0ba61f0b1 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/1-splunk-connection-setup.md @@ -0,0 +1,89 @@ +--- +title: Connect to Splunk +description: Connect Harness to Splunk and verify the success of your deployments and live microservices. +sidebar_position: 10 +helpdocs_topic_id: 1ruyqq4q4p +helpdocs_category_id: wnxi7xc4a4 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The first step in using Splunk with Harness is to set up a Splunk Verification Provider in Harness. + +A Harness Verification Provider is a connection to monitoring tools such as Splunk. Once Harness is connected, you can use Harness 24/7 Service Guard and Deployment Verification with your Splunk data and analysis. + +### Before You Begin + +* See the [Splunk Verification Overview](../continuous-verification-overview/concepts-cv/splunk-verification-overview.md). 
+ +### Step 1: Assign Permissions for API Connection + +Splunk APIs require that you authenticate with a non-SAML account. To access your Splunk Cloud deployment using the Splunk REST API and SDKs, submit a support case requesting access on the Support Portal. For managed deployments, Splunk Support opens port 8089 for REST access. You can specify a range of IP addresses to control who can access the REST API. For self-service deployments, Splunk Support defines a dedicated user and sends you credentials that enable that user to access the REST API. For information on Splunk self-service accounts, see [Using the REST API with Splunk Cloud](http://docs.splunk.com/Documentation/Splunk/7.2.0/RESTTUT/RESTandCloud). + +Ensure that the Splunk user account used to authenticate Harness with Splunk is assigned to a role that contains the following REST-related capabilities: + +* Search. +* Access to the indexes you want to search. + +#### Permissions Setup Example + +Here, we've created a new Splunk role named **Harness User**, and assigned it search capability: + +![](./static/1-splunk-connection-setup-04.png) + +We've given this role access to **All non-internal indexes**. However, we could restrict the access to only the few relevant indexes: + +![](./static/1-splunk-connection-setup-05.png) + +### Step 2: Add Splunk Verification Provider + +To add Splunk as a verification provider, do the following: + +1. Click **Setup**. +2. Click **Connectors**. +3. Click **Verification Providers**. +4. Click **Add Verification Provider**, and select **Splunk**. The **Add Splunk Verification Provider** dialog appears. + + ![](./static/1-splunk-connection-setup-06.png) + + The **Add Splunk Verification Provider** dialog has the following fields. + +### Step 3: URL + +Enter the URL for accessing the REST API on the Splunk server. 
Include the port number in the format **https://<deployment-name>.cloud.splunk.com:8089**. The default port number is 8089, and it is required for hosted Splunk as well. For example: **https://mycompany.splunkcloud.com:8089**. For more information, see [Using the REST API with Splunk Cloud](http://docs.splunk.com/Documentation/Splunk/7.1.3/RESTTUT/RESTandCloud) from Splunk. + +### Step 4: Username and Encrypted Password + +For secrets and other sensitive settings, select or create a new [Harness Encrypted Text secret](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets). Enter the account credentials to authenticate with the server. A user role that is **not** authenticated with SAML is required. You do not need an admin role. + +### Step 5: Display Name + +Enter a display name for the provider. If you are going to use multiple providers of the same type, ensure you give each provider a different name. + +### Step 6: Usage Scope + +Usage scope is inherited from the secrets used in the settings. + +### Step 7: Test Delegate Connection to Splunk Server + +**Delegate connection** — If the Test button fails, you can use the following script to verify that the Harness Delegate's host can connect to the Splunk server: + + +``` +curl -k https://<splunk-server>:8089/services/auth/login --data-urlencode username=<username> --data-urlencode password='<password>' +``` +If this script fails, it is likely that the host running the Harness Delegate has networking issues, or there is an authentication issue. + +**Search Job Permission** — As part of validating the configuration, Harness creates a search job as a test. This test will fail if the user account used for the Harness Splunk Verification Provider does not have permission to create search jobs. + +To test if the user account can run searches, use the following cURL command. 
+ + +``` +curl -u admin:changeme -k https://localhost:8089/services/search/jobs -d search="search *" +``` +### Next Steps + +* [Monitor Applications 24/7 with Splunk](2-24-7-service-guard-for-splunk.md) +* [Verify Deployments with Splunk](3-verify-deployments-with-splunk.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/2-24-7-service-guard-for-splunk.md b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/2-24-7-service-guard-for-splunk.md new file mode 100644 index 00000000000..fcf16cb39f2 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/2-24-7-service-guard-for-splunk.md @@ -0,0 +1,110 @@ +--- +title: Monitor Applications 24/7 with Splunk +description: Combined with Splunk, Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. +sidebar_position: 20 +helpdocs_topic_id: feiv05dmnk +helpdocs_category_id: wnxi7xc4a4 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. For more information, see [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). + +You can add your Splunk monitoring to Harness 24/7 Service Guard in your Harness Application Environment. For a setup overview, see  [Connect to Splunk](1-splunk-connection-setup.md). + + +### Before You Begin + +* See the [Splunk Verification Overview](../continuous-verification-overview/concepts-cv/splunk-verification-overview.md). +* See [Connect to Splunk](1-splunk-connection-setup.md). + +### Visual Summary + +Here's an example configuration of 24/7 Service Guard for Splunk. 
+ +![](./static/2-24-7-service-guard-for-splunk-00.png) + +### Step 1: Set Up 24/7 Service Guard for Splunk + +To set up 24/7 Service Guard for Splunk: + +1. Ensure that you have added Splunk as a Harness Verification Provider, as described in  [Connect to Splunk](1-splunk-connection-setup.md). +2. In your Harness Application, ensure that you have added a Service, as described in  [Services](../../model-cd-pipeline/setup-services/service-configuration.md). For 24/7 Service Guard, you do not need to add an Artifact Source to the Service, or configure its settings. You simply need to create a Service and name it. It will represent your application for 24/7 Service Guard. +3. In your Harness Application, click **Environments**. +4. In **Environments**, ensure that you have added an Environment for the Service you added. For steps on adding an Environment, see  [Environments](../../model-cd-pipeline/environments/environment-configuration.md). +5. Click the Environment for your Service. Typically, the **Environment Type** is **Production**. +6. In the **Environment** page, locate **24/7 Service Guard**. +7. In **24/7 Service Guard**, click **Add Service Verification**, and then click **Splunk**. The **Splunk** dialog appears. + + ![](./static/2-24-7-service-guard-for-splunk-01.png) + +8. Fill out the dialog. The dialog has the following fields. + +For 24/7 Service Guard, the queries you define to collect logs are specific to the application or service you want monitored. Verification is application/service level. This is unlike Workflows, where verification is performed at the host/node/pod level. + +### Step 2: Display Name + +The name that will identify this service on the **Continuous Verification** dashboard. Use a name that indicates the environment and monitoring tool, such as **Splunk**. + +### Step 3: Service + +The Harness Service to monitor with 24/7 Service Guard. 
+ +### Step 4: Splunk Server + +Select the Harness Verification Provider you configured using your Splunk account. + +### Step 5: Search Keywords + +Enter a search term or query. To search for all exceptions, use asterisks (\*) around **exception**, for example, **\*exception\***. For more information, see [Retrieve events from the index](http://docs.splunk.com/Documentation/Splunk/7.2.0/SearchTutorial/Startsearching#Retrieve_events_from_the_index) from Splunk. + +When you enter a search such as **\*exception\***, at runtime Harness will generate a query containing your search and information Harness needs to perform monitoring, such as the information following **\*exception\*** below: + + +``` +search *exception* host = ip-172-31-81-88 | bin _time span=1m | +cluster t=0.9999 showcount=t labelonly=t| +table _time, _raw,cluster_label, host | +stats latest(_raw) as _raw count as cluster_count by _time,cluster_label,host +``` +If you want more flexibility in your search, or to repurpose Splunk searches you already have, you can click **Advanced Query** and enter whatever you like in **Search Keywords**. Simply copy and paste your Splunk query into Harness, such as `search index=*prod *exception*`. + +Note that you specify the host field name and host/pod/container name in other settings, so you do not need to include them in the search query. + +### Step 6: Field name for Host/Container + +Typically, you will enter **host**. You can enter **host** into the Splunk **Search** field to see the host for your Harness deployment: + +[![](./static/2-24-7-service-guard-for-splunk-02.png)](./static/2-24-7-service-guard-for-splunk-02.png) + +### Step 7: Baseline + +Select the baseline for comparison. + +### Step 8: Execute with previous steps + +Check this checkbox to run this verification step in parallel with the previous steps in **Verify Service**. 
+ + For more information, see [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). + +### Troubleshoot: Splunk Connectivity and Service Guard Configuration + +If you are able to test the connection while configuring Splunk, but unable to schedule a job while setting up 24/7 Service Guard: + +* Make sure the Splunk user has the permissions required for REST API access. +* If all the permissions are granted, create a new user in Splunk and verify that the cURL call from this user works. +* On Harness On-Prem, make sure the firewall port is open for Splunk REST API access. + +### Next Steps + +* [Verify Deployments with Splunk](3-verify-deployments-with-splunk.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/3-verify-deployments-with-splunk.md b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/3-verify-deployments-with-splunk.md new file mode 100644 index 00000000000..068a54370f5 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/3-verify-deployments-with-splunk.md @@ -0,0 +1,252 @@ +--- +title: Verify Deployments with Splunk +description: Harness can analyze Splunk data to verify, roll back, and improve deployments. +sidebar_position: 30 +helpdocs_topic_id: zi7doy7zn8 +helpdocs_category_id: wnxi7xc4a4 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The following procedure describes how to add Splunk as a verification step in a Harness workflow. For more information about workflows, see [Add a Workflow](../../model-cd-pipeline/workflows/workflow-configuration.md). + +Once you run a deployment and Splunk performs verification, Harness' machine-learning verification analysis will assess the risk level of the deployment. 
+ +In order to obtain the names of the host(s), pod(s), or container(s) where your service is deployed, the verification provider should be added to your workflow **after** you have run at least one successful deployment. + + +### Before You Begin + +* See the [Splunk Verification Overview](../continuous-verification-overview/concepts-cv/splunk-verification-overview.md). +* See [Monitor Applications 24/7 with Splunk](2-24-7-service-guard-for-splunk.md). + +### Visual Summary + +Here's an example configuration of the Splunk Deployment Verification. + +![](./static/3-verify-deployments-with-splunk-07.png) + +### Step 1: Set Up the Deployment Verification + +To verify your deployment with Splunk, do the following: + +1. Ensure that you have added Splunk as a verification provider, as described in [Splunk Connection Setup](1-splunk-connection-setup.md). +2. In your Workflow, under **Verify Service**, click **Add Verification**. +3. In the resulting **Add Step** settings, select **Log Analysis** > **Splunk**. + + ![](./static/3-verify-deployments-with-splunk-08.png) + +4. Click **Next**. The **Configure Splunk** settings appear. + + ![](./static/3-verify-deployments-with-splunk-09.png) + +These settings include the following fields. + +### Step 2: Splunk Server + +Select the Harness Verification Provider you configured using your Splunk account. + +You can templatize the **Splunk Server** setting by clicking the **[T]** button. This puts the expression `${Splunk_Server}` in the **Splunk Server** setting. You can change the variable name in the expression. + +When you templatize the **Splunk Server** setting, it creates a [Workflow variable](../../model-cd-pipeline/workflows/workflow-configuration.md#add-workflow-variables), which is a parameter that must be given a value when the Workflow is deployed. + +The following diagram shows the templatized **Splunk Server** setting, the Workflow variable it creates, and how you can provide a value when you deploy the Workflow. 
+ +[![](./static/3-verify-deployments-with-splunk-10.png)](./static/3-verify-deployments-with-splunk-10.png) + +### Step 3: Search Keywords + +Enter a search term or query. To search for all exceptions, use asterisks (\*) around **exception**, for example, **\*exception\***. For more information, see [Retrieve events from the index](http://docs.splunk.com/Documentation/Splunk/7.2.0/SearchTutorial/Startsearching#Retrieve_events_from_the_index) from Splunk. + +When you enter a search such as **\*exception\***, at runtime Harness will generate a query containing your search and information Harness needs to perform verification, such as the information following **\*exception\*** below: + + +``` +search *exception* host = ip-172-31-81-88 | bin _time span=1m | +cluster t=0.9999 showcount=t labelonly=t| +table _time, _raw,cluster_label, host | +stats latest(_raw) as _raw count as cluster_count by _time,cluster_label,host +``` +If you want more flexibility in your search, or to repurpose Splunk searches you already have, you can click **Advanced Query** and enter whatever you like in **Search Keywords**. For example, you could replace **\*exception\*** with an existing Splunk search like `search index=*prod *exception*`. + +Note that you will specify host field name and host/pod/container name in other settings so you do not need to include them in the search query. + +### Step 4: Field name for Host/Container + +Typically, you will enter **host**. You can enter **host** into the Splunk **Search** field to see the host for your Harness deployment: + +[![](./static/3-verify-deployments-with-splunk-12.png)](./static/3-verify-deployments-with-splunk-12.png) + +### Step 5: Expression for Host/Container name + +See [Guide From Example](#guide_from_example). + +### Step 6: Analysis Time duration + +Set the duration for the verification step. 
If a verification step exceeds the value, the workflow [Failure Strategy](../../model-cd-pipeline/workflows/workflow-configuration.md#failure-strategy) is triggered. For example, if the Failure Strategy is **Ignore**, then the verification state is marked **Failed** but the workflow execution continues. + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#analysis-time-duration). + +### Step 7: Baseline for Risk Analysis + +**Canary Analysis** - Harness will compare the metrics received for the nodes deployed in each phase with metrics received for the rest of the nodes in the application. For example, if this phase deploys to 25% of your nodes, the metrics received from Splunk during this deployment for these nodes will be compared with metrics received for the other 75% during the defined period of time. + +**Previous Analysis** - Harness will compare the metrics received for the nodes deployed in each phase with metrics received for all the nodes during the previous deployment. For example, if this phase deploys V1.2 to node A, the metrics received from Splunk during this deployment will be compared to the metrics for nodes A, B, and C during the previous deployment (V1.1). + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). + +### Step 8: Execute with previous steps + +Check this checkbox to run this verification step in parallel with the previous steps in **Verify Service**. + +### Option: Use Guide from Example - Host field + +This section uses a **host** field for the **Expression for Host/Container name** field. For Kubernetes, see [Kubernetes and Splunk](3-verify-deployments-with-splunk.md#kubernetes-and-splunk). In the Splunk verification step dialog, you can see the **Guide From Example** option next to the **Expression for Host/Container name** field. 
This option lets you select the host(s), pod(s), or container(s) for Harness to use when performing verification. + +![](./static/3-verify-deployments-with-splunk-14.png) + +You select the host, pod, or container in **Guide From Example**, and an expression is added to the **Expression for Host/Container name** field. The default expression is `${instance.host.hostName}`. Typically, you can simply use `${instance.host.hostName}`. + +For AWS EC2 hostnames, use the expression `${instance.hostName}`. In order to obtain the names of the host(s), pod(s), or container(s) where your service is deployed, the verification provider should be added to your workflow **after** you have run at least one successful deployment. Then the **Guide From Example** feature can display the host or container name(s) for you to select. To ensure that you pick the right name when using **Guide From Example**, you can use a host name in Splunk as a guide. + +To use **Guide From Example** for a host or container name expression, do the following: + +1. In **Splunk**, click **App: Search & Reporting**, and then click **Search & Reporting**. + + ![](./static/3-verify-deployments-with-splunk-15.png) + +2. In **Search**, enter **host** to see a list of the available hosts being tracked. + + ![](./static/3-verify-deployments-with-splunk-16.png) + +3. Click the name of your host to add it to the search, select a date range, and click the search icon. The event log entries for the host appear. + + ![](./static/3-verify-deployments-with-splunk-17.png) + + The name of the host can be seen in the event message, next to **host =**. The expression that you provide in the **Expression for Host/Container Name** field in the Harness **Splunk** dialog should evaluate to the name here. + + You might have a different label than **host**, such as **pod\_name**. You simply use the label that identifies the host or container. ![](./static/3-verify-deployments-with-splunk-18.png) + +4. 
In your Harness workflow **Splunk** dialog, click **Guide From Example**. The **Expression for Host Name** popover appears. + + ![](./static/3-verify-deployments-with-splunk-19.png) + + The dialog shows the service, environment, and service infrastructure used for this workflow. + +5. In **Host**, click the name of the host to use when testing verification. Match the hostname from the Splunk Search to the hostname in the **Expression for Host Name** popover: + + ![](./static/3-verify-deployments-with-splunk-20.png) + +6. Click **SUBMIT**. The YAML for the host appears. Look for the **host** section. + + ![](./static/3-verify-deployments-with-splunk-21.png) + + You want to use a **hostName** label in the **host** section. Do not use a **hostName** label outside of that section. + + ![](./static/3-verify-deployments-with-splunk-22.png) + +7. Click the **hostName** label. The variable name is added to the **Expression for Host/Container name** field. + + ![](./static/3-verify-deployments-with-splunk-23.png) + +8. At the bottom of the Splunk dialog, click **TEST**. A new **Expression for Host Name** popover appears. + + ![](./static/3-verify-deployments-with-splunk-24.png) + +9. In **Host**, select the same host you selected last time, and then click **RUN**. Verification information for the host is found. If there is no verification data for the selected node, the test will display connection information only. + + ![](./static/3-verify-deployments-with-splunk-25.png) + +10. Click back in the **Splunk** dialog and click **SUBMIT**. The Splunk verification step is added to your workflow. + + ![](./static/3-verify-deployments-with-splunk-26.png) + +### Option: Use Guide from Example - Kubernetes + +In the **Guide From Example** section above we used a **host** example for the **Expression for Host/Container name** field, but if the Workflow is deploying Kubernetes, you will likely use **pod** or **pod\_name** or a custom label. 
+ +For Kubernetes deployments, your Splunk account must perform Kubernetes log collection. This is typically done using [Splunk Connect for Kubernetes](https://github.com/splunk/splunk-connect-for-kubernetes). For information on using Splunk Connect, see [Deploy Splunk Enterprise on Kubernetes](https://www.splunk.com/blog/2018/12/17/deploy-splunk-enterprise-on-kubernetes-splunk-connect-for-kubernetes-and-splunk-insights-for-containers-beta-part-1.html) on the Splunk Blog. + +In addition, the Splunk [fields.conf](https://docs.splunk.com/Documentation/ITSI/4.2.1/Configure/fields.conf) file should contain the following fields in order to search Kubernetes logs in Splunk: + + +``` + [namespace] + INDEXED = true + + [pod] + INDEXED = true + + [container_name] + INDEXED = true + + [container_id] + INDEXED = true + + [cluster_name] + INDEXED = true +``` +You can edit fields.conf in `$SPLUNK_HOME/etc/system/local/fields.conf` or in a custom app directory `$SPLUNK_HOME/etc/apps/myapp/local/fields.conf`. + +Ensure that your Kubernetes deployment is set up to log what you need. See [Logging Architecture](https://kubernetes.io/docs/concepts/cluster-administration/logging/) from Kubernetes. + +### Review: Harness Expression Support in CV Settings + +You can use expressions (`${...}`) for [Harness built-in variables](https://docs.harness.io/article/7bpdtvhq92-workflow-variables-expressions) and custom [Service](../../model-cd-pipeline/setup-services/service-configuration.md) and [Workflow](../../model-cd-pipeline/workflows/add-workflow-variables-new-template.md) variables in the settings of Harness Verification Providers. + +![](./static/3-verify-deployments-with-splunk-27.png) + +Expression support lets you template your Workflow verification steps. You can add custom expressions for settings, and then provide values for those settings at deployment runtime. Or you can use Harness built-in variable expressions and Harness will provide values at deployment runtime automatically. 
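For instance, you could template the Splunk query itself with a custom Workflow variable. Assuming a Workflow variable named `env_host` (an illustrative name, not from the original doc), the query setting might read:

```
host=${workflow.variables.env_host} sourcetype=catalina log_level=ERROR
```

At deployment runtime, Harness substitutes the variable's value before running the query.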
+ +### Step 9: View Verification Results + +Once you have deployed your workflow (or pipeline) using the Splunk verification step, you can automatically verify app performance across your deployment. For more information, see [Add a Workflow](../../model-cd-pipeline/workflows/workflow-configuration.md) and [Add a Pipeline](../../model-cd-pipeline/pipelines/pipeline-configuration.md). + +#### Workflow Verification + +After you add the Splunk verification step to your workflow, the next time you deploy the workflow you will see the Splunk verification step running: + +![](./static/3-verify-deployments-with-splunk-28.png) + +To see the results of Harness machine-learning evaluation of your Splunk verification, in your workflow or pipeline deployment you can expand the **Verify Service** step and then click the **Splunk** step. + +![](./static/3-verify-deployments-with-splunk-29.png) + +#### Continuous Verification + +You can also see the evaluation in the **Continuous Verification** dashboard. The workflow verification view is for the DevOps user who developed the workflow. The **Continuous Verification** dashboard is where all future deployments are displayed for developers and others interested in deployment analysis. + +To learn about the verification analysis features, see the following sections. + +##### Transaction Analysis + +**Execution details:** See the details of verification execution. Total is the total time the verification step took, and Analysis duration is how long the analysis took. + +**Risk level analysis:** Get an overall risk level and view the cluster chart to see events. + +**Transaction-level summary:** See a summary of each transaction with the query string, error values comparison, and a risk analysis summary. + +![](./static/_splunk-00-trx-anal.png) + + +##### Execution Analysis + +**Event type:** Filter cluster chart events by Unknown Event, Unexpected Frequency, Anticipated Event, Baseline Event, and Ignore Event. 
+ +**Cluster chart:** View the chart to see how the selected events contrast. Click each event to see its log details. + +![](./static/_splunk-01-ex-anal.png) + +##### Event Management + +**Event-level analysis:** See the threat level for each event captured. + +**Tune event capture:** Remove events from analysis at the service, workflow, execution, or overall level. + +**Event distribution:** Click the chart icon to see an event distribution including the measured data, baseline data, and event frequency. + +### Next Steps + +* [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code) +* [Users and Permissions](https://docs.harness.io/article/ven0bvulsj-users-and-permissions) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/_category_.json b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/_category_.json new file mode 100644 index 00000000000..3e8d78d6fae --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "Splunk Verification", + "position": 120, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "Splunk Verification" + }, + "customProps": { + "helpdocs_category_id": "wnxi7xc4a4" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/1-splunk-connection-setup-04.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/1-splunk-connection-setup-04.png new file mode 100644 index 00000000000..14ba7488c9e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/1-splunk-connection-setup-04.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/1-splunk-connection-setup-05.png 
b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/1-splunk-connection-setup-05.png new file mode 100644 index 00000000000..34834d1817a Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/1-splunk-connection-setup-05.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/1-splunk-connection-setup-06.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/1-splunk-connection-setup-06.png new file mode 100644 index 00000000000..320ebdcbd4d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/1-splunk-connection-setup-06.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/2-24-7-service-guard-for-splunk-00.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/2-24-7-service-guard-for-splunk-00.png new file mode 100644 index 00000000000..b88be7d1a08 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/2-24-7-service-guard-for-splunk-00.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/2-24-7-service-guard-for-splunk-01.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/2-24-7-service-guard-for-splunk-01.png new file mode 100644 index 00000000000..b88be7d1a08 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/2-24-7-service-guard-for-splunk-01.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/2-24-7-service-guard-for-splunk-02.png 
b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/2-24-7-service-guard-for-splunk-02.png new file mode 100644 index 00000000000..d3bf1edc481 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/2-24-7-service-guard-for-splunk-02.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/2-24-7-service-guard-for-splunk-03.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/2-24-7-service-guard-for-splunk-03.png new file mode 100644 index 00000000000..d3bf1edc481 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/2-24-7-service-guard-for-splunk-03.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-07.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-07.png new file mode 100644 index 00000000000..a4ec8b8b368 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-07.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-08.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-08.png new file mode 100644 index 00000000000..492b58e658f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-08.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-09.png 
b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-09.png new file mode 100644 index 00000000000..a4ec8b8b368 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-09.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-10.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-10.png new file mode 100644 index 00000000000..dcd6fedc2f0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-10.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-11.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-11.png new file mode 100644 index 00000000000..dcd6fedc2f0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-11.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-12.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-12.png new file mode 100644 index 00000000000..d3bf1edc481 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-12.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-13.png 
b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-13.png new file mode 100644 index 00000000000..d3bf1edc481 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-13.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-14.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-14.png new file mode 100644 index 00000000000..a3d1c82b6ef Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-14.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-15.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-15.png new file mode 100644 index 00000000000..d6071fb1e0b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-15.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-16.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-16.png new file mode 100644 index 00000000000..d3bf1edc481 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-16.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-17.png 
b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-17.png new file mode 100644 index 00000000000..8fde5c10f30 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-17.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-18.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-18.png new file mode 100644 index 00000000000..f634ad3f142 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-18.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-19.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-19.png new file mode 100644 index 00000000000..ea3d5989354 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-19.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-20.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-20.png new file mode 100644 index 00000000000..def60435896 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-20.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-21.png 
b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-21.png new file mode 100644 index 00000000000..13f78737e67 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-21.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-22.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-22.png new file mode 100644 index 00000000000..8635cfdae01 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-22.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-23.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-23.png new file mode 100644 index 00000000000..8dde6fbceb5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-23.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-24.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-24.png new file mode 100644 index 00000000000..ee01a6959fe Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-24.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-25.png 
b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-25.png new file mode 100644 index 00000000000..3016b5e63e5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-25.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-26.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-26.png new file mode 100644 index 00000000000..2b10801ba08 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-26.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-27.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-27.png new file mode 100644 index 00000000000..655f0aa31bd Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-27.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-28.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-28.png new file mode 100644 index 00000000000..2cb1d4467e1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-28.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-29.png 
b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-29.png new file mode 100644 index 00000000000..2d8afd3e19b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/3-verify-deployments-with-splunk-29.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/_splunk-00-trx-anal.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/_splunk-00-trx-anal.png new file mode 100644 index 00000000000..876841ff2b7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/_splunk-00-trx-anal.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/_splunk-01-ex-anal.png b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/_splunk-01-ex-anal.png new file mode 100644 index 00000000000..96cdcd2e943 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/splunk-verification/static/_splunk-01-ex-anal.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/2-24-7-service-guard-for-stackdriver.md b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/2-24-7-service-guard-for-stackdriver.md new file mode 100644 index 00000000000..671e47b6e9e --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/2-24-7-service-guard-for-stackdriver.md @@ -0,0 +1,139 @@ +--- +title: Monitor Applications 24/7 with Stackdriver Logging +description: Combined with Stackdriver Logging, Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. 
+sidebar_position: 20 +helpdocs_topic_id: 485lq1k7mo +helpdocs_category_id: 5mu8983wa0 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. + +Once you have set up a Stackdriver Verification Provider in Harness, as described in [Connect to Stackdriver](stackdriver-connection-setup.md), you can add your Stackdriver **Logs** monitoring to Harness 24/7 Service Guard in your Harness Application Environment. + + +### Before You Begin + +* Set up a Harness Application, containing a Service and Environment. See [Create an Application](../../model-cd-pipeline/applications/application-configuration.md). +* See the [Stackdriver Verification Overview](../continuous-verification-overview/concepts-cv/stackdriver-and-harness-overview.md). + + +### Visual Summary + +Here's an example of a completed Stackdriver Logging setup for verification. + +![](./static/2-24-7-service-guard-for-stackdriver-16.png) + + +### Step 1: Set up 24/7 Service Guard + +To set up 24/7 Service Guard for Stackdriver, do the following: + +1. Ensure that you have added Stackdriver as a Harness Verification Provider, as described in  [Connect to Stackdriver](stackdriver-connection-setup.md). +2. In your Harness Application, ensure that you have added a Service, as described in  [Services](../../model-cd-pipeline/setup-services/service-configuration.md). For 24/7 Service Guard, you do not need to add an Artifact Source to the Service, or configure its settings. You simply need to create a Service and name it. It will represent your application for 24/7 Service Guard. +3. In your Harness Application, click **Environments**. +4. In **Environments**, ensure that you have added an Environment for the Service you added. For steps on adding an Environment, see  [Environments](../../model-cd-pipeline/environments/environment-configuration.md). +5. 
Click the Environment for your Service. Typically, the **Environment Type** is **Production**. +6. In the **Environment** page, locate **24/7 Service Guard**. +7. In **24/7 Service Guard**, click **Add Service Verification**, and then click **Stackdriver Log**. The **Stackdriver Log** dialog appears. + + +### Step 2: Display Name + +The name that will identify this service on the **Continuous Verification** dashboard. Use a name that indicates the environment and monitoring tool, such as **Stackdriver**. + + +### Step 3: Service + +The Harness Service to monitor with 24/7 Service Guard. + + +### Step 4: GCP Cloud Provider + +Select the GCP Cloud Provider to use, as described in [Connect to Stackdriver](stackdriver-connection-setup.md). If you currently connect to GCP via a Kubernetes Cluster Cloud Provider, you must set up a GCP Cloud Provider for access to the Stackdriver data on your cluster. + + +### Step 5: Search Keywords + +Enter search keywords for your query. You can use the same filters you have in GCP **Logs Viewer**. + +![](./static/2-24-7-service-guard-for-stackdriver-17.png) + +Simply copy a filter entry into **Search Keywords**: + +![](./static/2-24-7-service-guard-for-stackdriver-18.png) + +To use multiple filter entries, place an **AND** between them or use multiline entries. For example: + +![](./static/2-24-7-service-guard-for-stackdriver-19.png) + +For advanced filter examples, see [Advanced filters library](https://cloud.google.com/logging/docs/view/filters-library) from GCP. + +For information on the log entries used, see [Viewing Logs](https://cloud.google.com/logging/docs/view/overview) from GCP. + + +### Step 6: Host Name Field + +Enter the log field that contains the name of the host/pod/container for which you want logs. You can enter a pod ID or field name for example. + +Harness uses this field to group data and perform analysis at the container-level. 
+ +For example, the query in **Search Keywords** looks for pods labelled `nginx-deployment`: + + +``` +resource.type="container" +resource.labels.pod_id:"nginx-deployment-" +``` +In **Host Name Field**, you would enter **pod\_id** because it is the log field containing the pod name. In a log, this field will be in the resource section: + + +``` +... + resource: { + labels: { + cluster_name: "doc-example" + container_name: "harness-delegate-instance" + instance_id: "1733097732247470454" + namespace_id: "harness-delegate" + pod_id: "harness-sample-k8s-delegate-wverks-0" + project_id: "exploration-161417" + zone: "us-central1-a" + } + type: "container" + } +... +``` + +### Step 7: Enable 24/7 Service Guard + +Click the checkbox to enable 24/7 Service Guard. + + +### Step 8: Baseline + +Select the baseline time unit for monitoring. For example, if you select **For 4 hours**, Harness will collect the logs for the last 4 hours as the baseline for comparisons with future logs. If you select **Custom Range**, you can enter a **Start Time** and **End Time**. + +When you are finished, the dialog will look something like this: + +![](./static/2-24-7-service-guard-for-stackdriver-20.png) + +### Step 9: Verify Your Settings + +1. Click **TEST**. Harness verifies the settings you entered. +2. Click **SUBMIT**. The Stackdriver 24/7 Service Guard is configured. + +![](./static/2-24-7-service-guard-for-stackdriver-21.png) + +To see the running 24/7 Service Guard analysis, click **Continuous Verification**. The 24/7 Service Guard dashboard displays the production verification results. + +![](./static/2-24-7-service-guard-for-stackdriver-22.png) + +For more information, see [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). 
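As a worked example combining the **Search Keywords** and **Host Name Field** settings (the cluster and pod names here are illustrative), a 24/7 Service Guard entry for an NGINX workload might use this filter in **Search Keywords**:

```
resource.type="container"
resource.labels.cluster_name="example-cluster"
resource.labels.pod_id:"nginx-deployment-"
severity>=WARNING
```

with **pod_id** in **Host Name Field**, so that Harness groups the collected logs and performs its analysis per pod.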
+ + +### Next Steps + +* [Monitor Applications 24/7 with Stackdriver Metrics](monitor-applications-24-7-with-stackdriver-metrics.md) +* [Verify Deployments with Stackdriver Logging](3-verify-deployments-with-stackdriver.md) +* [Verify Deployments with Stackdriver Metrics](verify-deployments-with-stackdriver-metrics.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/3-verify-deployments-with-stackdriver.md b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/3-verify-deployments-with-stackdriver.md new file mode 100644 index 00000000000..ff0d7c71fab --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/3-verify-deployments-with-stackdriver.md @@ -0,0 +1,169 @@ +--- +title: Verify Deployments with Stackdriver Logging +description: Harness can analyze Stackdriver data to verify, roll back, and improve deployments. +sidebar_position: 40 +helpdocs_topic_id: 4ohedfzz1c +helpdocs_category_id: 5mu8983wa0 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness can analyze Stackdriver data to verify, roll back, and improve deployments. To apply this analysis to your deployments, you set up Stackdriver as a verification step in a Harness Workflow. + +This topic covers how to set up Stackdriver Logs in a Harness Workflow, and provides a summary of Harness verification results. + +In order to obtain the names of the host(s), pod(s), or container(s) where your service is deployed, the verification provider should be added to your workflow *after* you have run at least one successful deployment. + +### Before You Begin + +* Set up a Harness Application, containing a Service and Environment. See [Create an Application](../../model-cd-pipeline/applications/application-configuration.md). +* See the [Stackdriver Verification Overview](../continuous-verification-overview/concepts-cv/stackdriver-and-harness-overview.md). 
+ +### Visual Summary + +Here's an example of a completed Stackdriver setup for verification. + +![](./static/3-verify-deployments-with-stackdriver-08.png) + +### Step 1: Set up the Deployment Verification + +To verify your deployment with Stackdriver, do the following: + +1. Ensure that you have added Google Cloud Platform as a Cloud Provider, as described in [Connect to Stackdriver](stackdriver-connection-setup.md). +2. In your Workflow, under **Verify Service**, click **Add Step**. +3. In the resulting **Add Step** settings, select **Log Analysis** > **Stackdriver**. +4. Click **Next**. The **Configure Stackdriver** settings appear. +5. In **GCP Cloud Provider**, select the [Google Cloud Platform (GCP) Cloud Provider](https://docs.harness.io/article/whwnovprrb-cloud-providers#google_cloud_platform_gcp) you set up in Harness. + You can also enter variable expressions, such as: `${serviceVariable.stackdriver_connector_name}`. + + If the **GCP Cloud Provider** field contains an expression, the **Region** field must also use an expression. + +6. In **Region**, select the GCP [region](https://cloud.google.com/compute/docs/regions-zones/) where the application is hosted. The Stackdriver API uses a service-specific notion of location. Harness uses the name of a region. You can find the region in Stackdriver Metrics Explorer by selecting the **location** column: + + ![](./static/3-verify-deployments-with-stackdriver-09.png) + + You can also enter a variable expression for the region, such as `${env.name}`. Because Harness does not currently support multi-region load balancers, you must add a Stackdriver step for each region. + +7. Select **Log Verification**. + +### Step 2: Search Keywords + +Enter search keywords for your query. You can use the same filters you have in GCP **Logs Viewer**. 
+ +![](./static/3-verify-deployments-with-stackdriver-10.png) + +Simply copy a filter entry into **Search Keywords**: + +![](./static/3-verify-deployments-with-stackdriver-11.png) + +To use multiple filter entries, you can place an **AND** between them or use multiline entries. For example: + +![](./static/3-verify-deployments-with-stackdriver-12.png) + +For advanced filter examples, see [Advanced filters library](https://cloud.google.com/logging/docs/view/filters-library) from GCP. + +#### Troubleshooting queries + +When you try to verify the deployment, you might encounter the following error: + +`Execution logs are not available for old executions.` + +**Solution** + +Update your query to include the required statement (`jsonPayload.message:*` in this case), as follows: + + +``` +resource.type="k8s_container" resource.labels.cluster_name="guse4-kube-poc" resource.labels.namespace_name="rc" +resource.labels.container_name="hello-container" +jsonPayload.message:* +severity="ERROR" +``` + +### Step 3: Host Name Field + +Enter the log field that contains the name of the host for which you want logs. You can enter a pod ID or name. + +For example, the query in **Search Keywords** looks for pods labelled `nginx-deployment`: + + +``` +resource.type="container" +resource.labels.pod_id:"nginx-deployment-" +``` + +In **Host Name Field**, you would enter **pod\_id** because it is the log field containing the pod name. In a log, this field will be in the resource section: + + +``` +... + resource: { + labels: { + cluster_name: "doc-example" + container_name: "harness-delegate-instance" + instance_id: "1733097732247470454" + namespace_id: "harness-delegate" + pod_id: "harness-sample-k8s-delegate-wverks-0" + project_id: "exploration-161417" + zone: "us-central1-a" + } + type: "container" + } +... +``` + +### Step 4: Message Field + +Enter the field by which the messages are usually indexed. You can also enter variable expressions, such as: `${serviceVariable.message_field}`. 
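For example, for container logs collected through Stackdriver, structured log text typically appears under `jsonPayload.message`, while plain-text entries appear under `textPayload`. An illustrative structured entry (the message text is hypothetical):

```
jsonPayload: {
  message: "GET /healthz HTTP/1.1 200"
}
```

Here you would enter **jsonPayload.message** in **Message Field**.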
+
+#### Troubleshooting Message Field Entries
+
+If you select `jsonPayload.message` as the Message Field, but the queries return log messages that do not have the `jsonPayload.message` field, the Workflow will fail.
+
+**Solution**
+
+Add a statement to your query to ensure that it only fetches logs that include the `jsonPayload.message` field, as follows:
+
+`jsonPayload.message:*`
+
+### Step 5: Algorithm Sensitivity
+
+Select the Algorithm Sensitivity. See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria).
+
+### Step 6: Baseline
+
+Select the baseline time unit for monitoring. For example, if you select **For 4 hours**, Harness will collect the logs for the last 4 hours as the baseline for comparisons with future logs. If you select **Custom Range**, you can enter a **Start Time** and **End Time**.
+
+When you are finished, the dialog will look something like this:
+
+![](./static/3-verify-deployments-with-stackdriver-13.png)
+
+### Step 7: Confirm Your Settings
+
+Click **Test** to confirm your settings. In the testing assistant, select a host and click **Run**. When you have confirmed your settings, click **Submit**.
+
+The Stackdriver verification step is added to your Workflow.
+
+### Review: Harness Expression Support in CV Settings
+
+You can use expressions (`${...}`) for [Harness built-in variables](https://docs.harness.io/article/7bpdtvhq92-workflow-variables-expressions) and custom [Service](../../model-cd-pipeline/setup-services/service-configuration.md) and [Workflow](../../model-cd-pipeline/workflows/add-workflow-variables-new-template.md) variables in the settings of Harness Verification Providers.
+
+![](./static/3-verify-deployments-with-stackdriver-14.png)
+
+Expression support lets you template your Workflow verification steps.
You can add custom expressions for settings, and then provide values for those settings at deployment runtime. Or you can use Harness built-in variable expressions and Harness will provide values at deployment runtime automatically. + +### Step 8: View Verification Results + +Once you have deployed your Workflow (or Pipeline) using the Stackdriver verification step, you can automatically verify performance across your deployment. For more information, see [Add a Workflow](../../model-cd-pipeline/workflows/workflow-configuration.md) and [Add a Pipeline](../../model-cd-pipeline/pipelines/pipeline-configuration.md). + +#### Workflow Verification + +To see the results of Harness machine-learning evaluation of your Stackdriver verification, in your Workflow or pipeline deployment you can expand the **Verify Service** step and then click the **Stackdriver** step. + +![](./static/3-verify-deployments-with-stackdriver-15.png) + +#### Continuous Verification + +You can also see the evaluation in the **Continuous Verification** dashboard. The Workflow verification view is for the DevOps user who developed the Workflow. The **Continuous Verification** dashboard is where all future deployments are displayed for developers and others interested in deployment analysis. 
+ +### Next Steps + +* [Verify Deployments with Stackdriver Metrics](verify-deployments-with-stackdriver-metrics.md) +* [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code) +* [Users and Permissions](https://docs.harness.io/article/ven0bvulsj-users-and-permissions) +* [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/_category_.json b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/_category_.json new file mode 100644 index 00000000000..bc417ec4501 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "Stackdriver Verification", + "position": 130, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "Stackdriver Verification" + }, + "customProps": { + "helpdocs_category_id": "5mu8983wa0" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/monitor-applications-24-7-with-stackdriver-metrics.md b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/monitor-applications-24-7-with-stackdriver-metrics.md new file mode 100644 index 00000000000..abe848be3c1 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/monitor-applications-24-7-with-stackdriver-metrics.md @@ -0,0 +1,123 @@ +--- +title: Monitor Applications 24/7 with Stackdriver Metrics +description: Combined with Stackdriver Metrics, Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. 
+sidebar_position: 30
+helpdocs_topic_id: nxy3mcw053
+helpdocs_category_id: 5mu8983wa0
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment.
+
+Once you have set up the Stackdriver Verification Provider in Harness, as described in [Connect to Stackdriver](stackdriver-connection-setup.md), you can add your Stackdriver **Metrics** monitoring to Harness 24/7 Service Guard in your Harness Application Environment.
+
+
+### Before You Begin
+
+* Set up a Harness Application, containing a Service and Environment. See [Create an Application](../../model-cd-pipeline/applications/application-configuration.md).
+* See the [Stackdriver Verification Overview](../continuous-verification-overview/concepts-cv/stackdriver-and-harness-overview.md).
+
+
+### Visual Summary
+
+Here's an example of a completed Stackdriver Metrics setup for 24/7 Service Guard.
+
+![](./static/monitor-applications-24-7-with-stackdriver-metrics-00.png)
+
+
+### Step 1: Set up 24/7 Service Guard
+
+To set up 24/7 Service Guard for Stackdriver, do the following:
+
+1. Ensure that you have added Stackdriver as a Harness Verification Provider, as described in [Connect to Stackdriver](stackdriver-connection-setup.md).
+2. In your Harness Application, ensure that you have added a Service, as described in [Services](../../model-cd-pipeline/setup-services/service-configuration.md). For 24/7 Service Guard, you do not need to add an Artifact Source to the Service, or configure its settings. You simply need to create a Service and name it. It will represent your application for 24/7 Service Guard.
+3. In your Harness Application, click **Environments**.
+4. In **Environments**, ensure that you have added an Environment for the Service you added. For steps on adding an Environment, see [Environments](../../model-cd-pipeline/environments/environment-configuration.md).
+5. 
Click the Environment for your Service. Typically, the **Environment Type** is **Production**.
+6. In the **Environment** page, locate **24/7 Service Guard**.
+7. In **24/7 Service Guard**, click **Add Service Verification**, and then click **Stackdriver Metrics**. The **Stackdriver Metric** dialog appears.
+
+
+### Step 2: Display Name
+
+The name that will identify this service on the **Continuous Verification** dashboard. Use a name that indicates the environment and monitoring tool, such as **Stackdriver**.
+
+
+### Step 3: Service
+
+The Harness Service to monitor with 24/7 Service Guard.
+
+
+### Step 4: GCP Cloud Provider
+
+Select the GCP Cloud Provider to use, as described in [Connect to Stackdriver](stackdriver-connection-setup.md). If you currently connect to GCP via a Kubernetes Cluster Cloud Provider, you must set up a GCP Cloud Provider for access to the Stackdriver data on your cluster.
+
+
+### Step 5: Metrics to Monitor
+
+In this section you define the Stackdriver metrics you want to monitor. For example, here is a Stackdriver Metrics Explorer configured to monitor Kubernetes container restarts, filtered by a cluster name and grouped by cluster.
+
+![](./static/monitor-applications-24-7-with-stackdriver-metrics-01.png)
+
+To reproduce these settings in **Metrics To Monitor**, copy the chart's filter and group-by details from its JSON.
+
+
+### Step 6: Metric Name, Metric Type, and Group Name
+
+In **Metric Name**, enter a name to identify the metric in Harness, such as **Restarts**. This is not the Stackdriver-specific name of a metric.
+
+In **Metric Type**, select the type of metric to monitor, such as **Infra**.
+
+In **Group Name**, enter a name for grouping the metrics in Harness, such as **PodRestarts**. The Group Name is useful when you want Harness to monitor multiple metrics, and be able to group them.
+
+
+### Step 7: JSON Query
+
+Paste in the JSON query from Stackdriver Metrics Explorer.
+ +In Stackdriver Metrics Explorer, once you have your metric query set up, click the **View as JSON** option. + +![](./static/monitor-applications-24-7-with-stackdriver-metrics-02.png) + +Next, click **COPY JSON**. + +![](./static/monitor-applications-24-7-with-stackdriver-metrics-03.png) + +In Harness Stackdriver Metrics, in **JSON Query**, paste in the JSON. + + +### Step 8: Algorithm Sensitivity + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). + + +### Step 9: Enable 24/7 Service Guard + +Click the checkbox to enable 24/7 Service Guard. + +When you are finished, the dialog will look something like this: + +![](./static/monitor-applications-24-7-with-stackdriver-metrics-04.png) + + +### Step 10: Verify Your Settings + +Click **TEST**. Harness verifies the settings you entered. + +![](./static/monitor-applications-24-7-with-stackdriver-metrics-05.png) + +Click **SUBMIT**. The Stackdriver 24/7 Service Guard is configured. + +![](./static/monitor-applications-24-7-with-stackdriver-metrics-06.png) + +To see the running 24/7 Service Guard analysis, click **Continuous Verification**. The 24/7 Service Guard dashboard displays the production verification results. 
+ +![](./static/monitor-applications-24-7-with-stackdriver-metrics-07.png) + + +### Next Steps + +* [Verify Deployments with Stackdriver Logging](3-verify-deployments-with-stackdriver.md) +* [Verify Deployments with Stackdriver Metrics](verify-deployments-with-stackdriver-metrics.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/stackdriver-connection-setup.md b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/stackdriver-connection-setup.md new file mode 100644 index 00000000000..2cb267d93bd --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/stackdriver-connection-setup.md @@ -0,0 +1,54 @@ +--- +title: Connect to Stackdriver +description: Connect Harness to Stackdriver and verify the success of your deployments and live microservices. +sidebar_position: 10 +helpdocs_topic_id: dysdvm3vo7 +helpdocs_category_id: 5mu8983wa0 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Connect Harness to Stackdriver to have Harness verify the success of your deployments and monitor live microservices. Harness will use your tools for verification and monitoring and use its machine learning features to identify sources of failures. + +Most APM and logging tools are added to Harness as Verification Providers. For Stackdriver, you use the Google Cloud Platform account set up as a Harness Cloud Provider. + + +### Before You Begin + +* See [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts). +* See the [Stackdriver Verification Overview](../continuous-verification-overview/concepts-cv/stackdriver-and-harness-overview.md). 
+
+
+### Step 1: Assign Roles and Permissions
+
+The following roles must be attached to the account used to connect Harness and Stackdriver as a Google Cloud Provider:
+
+* **Stackdriver Logs** - The minimum role requirement is **logging.viewer**.
+* **Stackdriver Metrics** - The minimum role requirements are **compute.networkViewer** and **monitoring.viewer**.
+
+See [Access control](https://cloud.google.com/monitoring/access-control) from Google.
+
+
+### Step 2: Add GCP Cloud Provider for Stackdriver
+
+To add Stackdriver as a Cloud Provider, follow the steps for adding a [Google Cloud Platform](https://docs.harness.io/article/whwnovprrb-cloud-providers#google_cloud_platform_gcp) Cloud Provider.
+
+1. In Harness, click **Setup**, and then click **Cloud Providers**.
+2. Click **Add Cloud Provider** and select **Google Cloud Platform**.
+
+   ![](./static/stackdriver-connection-setup-29.png)
+
+### Step 3: Provide the Google Cloud Service Account Key File
+
+In **Select Encrypted Key**, select or create a new [Harness Encrypted Text secret](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets) that contains the Google Cloud service account key file.
+
+To obtain the Google Cloud service account key file, see [Creating and managing service account keys](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) from Google.
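+The key file you download from GCP is a JSON document along these lines (every value below is a placeholder; use the file generated for your own service account):
+
+
+```
+{
+  "type": "service_account",
+  "project_id": "my-project",
+  "private_key_id": "0123456789abcdef",
+  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
+  "client_email": "harness-verifier@my-project.iam.gserviceaccount.com",
+  "client_id": "123456789012345678901",
+  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
+  "token_uri": "https://oauth2.googleapis.com/token"
+}
+```
+
+Paste the entire JSON document, including the `private_key` value, into the Encrypted Text secret.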
+ + +### Next Steps + +* [Verify Deployments with Stackdriver Logging](3-verify-deployments-with-stackdriver.md) +* [Verify Deployments with Stackdriver Metrics](verify-deployments-with-stackdriver-metrics.md) +* [Monitor Applications 24/7 with Stackdriver Logging](2-24-7-service-guard-for-stackdriver.md) +* [Monitor Applications 24/7 with Stackdriver Metrics](monitor-applications-24-7-with-stackdriver-metrics.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/2-24-7-service-guard-for-stackdriver-16.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/2-24-7-service-guard-for-stackdriver-16.png new file mode 100644 index 00000000000..81f818f91f7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/2-24-7-service-guard-for-stackdriver-16.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/2-24-7-service-guard-for-stackdriver-17.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/2-24-7-service-guard-for-stackdriver-17.png new file mode 100644 index 00000000000..e4e81fe436f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/2-24-7-service-guard-for-stackdriver-17.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/2-24-7-service-guard-for-stackdriver-18.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/2-24-7-service-guard-for-stackdriver-18.png new file mode 100644 index 00000000000..5b5ce08c69e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/2-24-7-service-guard-for-stackdriver-18.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/2-24-7-service-guard-for-stackdriver-19.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/2-24-7-service-guard-for-stackdriver-19.png new file mode 100644 index 00000000000..86f74b49c0d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/2-24-7-service-guard-for-stackdriver-19.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/2-24-7-service-guard-for-stackdriver-20.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/2-24-7-service-guard-for-stackdriver-20.png new file mode 100644 index 00000000000..88b39dc7aa6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/2-24-7-service-guard-for-stackdriver-20.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/2-24-7-service-guard-for-stackdriver-21.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/2-24-7-service-guard-for-stackdriver-21.png new file mode 100644 index 00000000000..9a762e5a747 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/2-24-7-service-guard-for-stackdriver-21.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/2-24-7-service-guard-for-stackdriver-22.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/2-24-7-service-guard-for-stackdriver-22.png new file mode 100644 index 00000000000..14fcef3170e Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/2-24-7-service-guard-for-stackdriver-22.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-08.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-08.png new file mode 100644 index 00000000000..4dc1c6996bc Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-08.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-09.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-09.png new file mode 100644 index 00000000000..7139200a974 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-09.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-10.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-10.png new file mode 100644 index 00000000000..e4e81fe436f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-10.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-11.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-11.png new file mode 100644 index 
00000000000..5b5ce08c69e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-11.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-12.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-12.png new file mode 100644 index 00000000000..86f74b49c0d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-12.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-13.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-13.png new file mode 100644 index 00000000000..b4b60ef20bf Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-13.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-14.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-14.png new file mode 100644 index 00000000000..655f0aa31bd Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-14.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-15.png 
b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-15.png new file mode 100644 index 00000000000..ce9c8831308 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/3-verify-deployments-with-stackdriver-15.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-00.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-00.png new file mode 100644 index 00000000000..b50e34e1ace Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-00.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-01.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-01.png new file mode 100644 index 00000000000..45e7770fd08 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-01.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-02.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-02.png new file mode 100644 index 00000000000..f855c75f8f8 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-02.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-03.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-03.png new file mode 100644 index 00000000000..f385dec0e3f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-03.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-04.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-04.png new file mode 100644 index 00000000000..facd461104a Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-04.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-05.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-05.png new file mode 100644 index 00000000000..5a0f5c1040c Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-05.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-06.png 
b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-06.png new file mode 100644 index 00000000000..2b0e5826fa5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-06.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-07.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-07.png new file mode 100644 index 00000000000..c175ca42e80 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/monitor-applications-24-7-with-stackdriver-metrics-07.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/stackdriver-connection-setup-29.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/stackdriver-connection-setup-29.png new file mode 100644 index 00000000000..a0f0cf7bcf1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/stackdriver-connection-setup-29.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/verify-deployments-with-stackdriver-metrics-23.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/verify-deployments-with-stackdriver-metrics-23.png new file mode 100644 index 00000000000..3e6e3fb7a1f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/verify-deployments-with-stackdriver-metrics-23.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/verify-deployments-with-stackdriver-metrics-24.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/verify-deployments-with-stackdriver-metrics-24.png new file mode 100644 index 00000000000..45e7770fd08 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/verify-deployments-with-stackdriver-metrics-24.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/verify-deployments-with-stackdriver-metrics-25.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/verify-deployments-with-stackdriver-metrics-25.png new file mode 100644 index 00000000000..f855c75f8f8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/verify-deployments-with-stackdriver-metrics-25.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/verify-deployments-with-stackdriver-metrics-26.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/verify-deployments-with-stackdriver-metrics-26.png new file mode 100644 index 00000000000..f385dec0e3f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/verify-deployments-with-stackdriver-metrics-26.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/verify-deployments-with-stackdriver-metrics-27.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/verify-deployments-with-stackdriver-metrics-27.png new file mode 100644 index 00000000000..655f0aa31bd Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/verify-deployments-with-stackdriver-metrics-27.png differ
diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/verify-deployments-with-stackdriver-metrics-28.png b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/verify-deployments-with-stackdriver-metrics-28.png
new file mode 100644
index 00000000000..ce9c8831308
Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/static/verify-deployments-with-stackdriver-metrics-28.png differ
diff --git a/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/verify-deployments-with-stackdriver-metrics.md b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/verify-deployments-with-stackdriver-metrics.md
new file mode 100644
index 00000000000..0dd10565fc3
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/continuous-verification/stackdriver-verification/verify-deployments-with-stackdriver-metrics.md
@@ -0,0 +1,154 @@
+---
+title: Verify Deployments with Stackdriver Metrics
+description: Harness can analyze Stackdriver data and analyses to verify, roll back, and improve deployments. Set up Stackdriver as a verification step in a Harness…
+sidebar_position: 50
+helpdocs_topic_id: m0j49kz112
+helpdocs_category_id: 5mu8983wa0
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness can analyze Stackdriver data and analyses to verify, roll back, and improve deployments. To apply this analysis to your deployments, you set up Stackdriver as a verification step in a Harness Workflow.
+
+This topic covers the process to set up Stackdriver Metrics in a Harness Workflow, and provides a summary of Harness verification results.
+
+To obtain the names of the hosts, pods, or containers where your service is deployed, add the verification provider to your Workflow *after* you have run at least one successful deployment.
+
+### Before You Begin
+
+* Set up a Harness Application, containing a Service and Environment. See [Create an Application](../../model-cd-pipeline/applications/application-configuration.md).
+* See the [Stackdriver Verification Overview](../continuous-verification-overview/concepts-cv/stackdriver-and-harness-overview.md).
+
+### Visual Summary
+
+Here's an overview of Stackdriver Metrics setup for verification.
+
+![](./static/verify-deployments-with-stackdriver-metrics-23.png)
+
+### Step 1: Set up the Deployment Verification
+
+To verify your deployment with Stackdriver, do the following:
+
+1. Ensure that you have added Google Cloud Platform as a Cloud Provider, as described in [Connect to Stackdriver](stackdriver-connection-setup.md).
+2. In your Workflow, under **Verify Service**, click **Add Step**.
+3. In the resulting **Add Step** settings, select **Performance Monitoring** > **Stackdriver**.
+4. Click **Next**. The **Configure Stackdriver** settings appear.
+5. In **GCP Cloud Provider**, select the [Google Cloud Platform (GCP) Cloud Provider](https://docs.harness.io/article/whwnovprrb-cloud-providers#google_cloud_platform_gcp) you set up in Harness. You can also enter variable expressions, such as: `${serviceVariable.stackdriver_connector_name}`.
+
+### Step 2: Metrics to Monitor
+
+In this section you define the Stackdriver metrics you want to monitor. For example, here is a Stackdriver Metrics Explorer configured to monitor Kubernetes container restarts, filtered by a cluster name and grouped by cluster.
+
+![](./static/verify-deployments-with-stackdriver-metrics-24.png)
+
+To reproduce these settings in **Metrics To Monitor**, copy the chart's filter and group-by details from its JSON.
+ +### Step 3: Metric Name, Metric Type, and Group Name + +In **Metric Name**, enter a name to identify the metric in Harness, such as **Restarts**. This is not the Stackdriver-specific name of a metric. + +In **Metric Type**, select the type of metric to monitor, such as **Infra**. + +In **Group Name**, enter a name for grouping the metrics in Harness, such as **PodRestarts**. The Group Name is useful when you want Harness to monitor multiple metrics and group them. + +### Step 4: JSON Query + +Paste in the JSON query from Stackdriver Metrics Explorer. + +Make sure you provide the `${host}` string for filtering in the query. For example: `resource.label.\"pod_name\"=\"${host}\"`. + +In Stackdriver Metrics Explorer, once you have your metric query set up, click the **View as JSON** option. + +![](./static/verify-deployments-with-stackdriver-metrics-25.png) + +Next, click **COPY JSON**. + +![](./static/verify-deployments-with-stackdriver-metrics-26.png) + +In Harness Stackdriver Metrics, in **JSON Query**, paste in the JSON. + +Here is an example using the `${host}` expression: + + +```
{
  "dataSets": [
    {
      "timeSeriesFilter": {
        "filter": "metric.type=\"kubernetes.io/container/memory/limit_utilization\" resource.type=\"k8s_container\" resource.label.\"cluster_name\"=\"harness_test\" resource.label.\"pod_name\"=\"${host}\"",
        "minAlignmentPeriod": "60s",
        "unitOverride": "1",
        "aggregations": [
          {
            "perSeriesAligner": "ALIGN_MEAN",
            "crossSeriesReducer": "REDUCE_SUM",
            "groupByFields": []
          },
          {
            "crossSeriesReducer": "REDUCE_NONE"
          }
        ]
      },
      "targetAxis": "Y1",
      "plotType": "LINE"
    }
  ],
  "options": {
    "mode": "COLOR"
  },
  "constantLines": [],
  "timeshiftDuration": "0s",
  "y1Axis": {
    "label": "y1Axis",
    "scale": "LINEAR"
  }
}
```
### Step 5: Analysis Time Duration + +Set the duration for the verification step.
If a verification step exceeds the value, the workflow [Failure Strategy](../../model-cd-pipeline/workflows/workflow-configuration.md#failure-strategy) is triggered. For example, if the Failure Strategy is **Ignore**, then the verification state is marked **Failed** but the workflow execution continues. + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#analysis-time-duration). + +### Step 6: Baseline for Risk Analysis + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). + +### Step 7: Execute with Previous Steps + +Check this checkbox to run this verification step in parallel with the previous steps in **Verify Service**. + +### Step 8: Verify Your Settings + +Click **Test** to confirm your settings. In the testing assistant, select a host and click **Run**. When you have confirmed your settings, click **Submit**. + +The Stackdriver verification step is added to your Workflow. + +### Review: Harness Expression Support in CV Settings + +You can use expressions (`${...}`) for [Harness built-in variables](https://docs.harness.io/article/7bpdtvhq92-workflow-variables-expressions) and custom [Service](../../model-cd-pipeline/setup-services/service-configuration.md) and [Workflow](../../model-cd-pipeline/workflows/add-workflow-variables-new-template.md) variables in the settings of Harness Verification Providers. + +![](./static/verify-deployments-with-stackdriver-metrics-27.png) + +Expression support lets you template your Workflow verification steps. You can add custom expressions for settings, and then provide values for those settings at deployment runtime. Or you can use Harness built-in variable expressions and Harness will provide values at deployment runtime automatically. 
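Conceptually, templating with expressions is a simple substitution: each `${...}` token is replaced with its runtime value before the setting is used. A minimal sketch of that idea, assuming a hypothetical `resolve()` helper (Harness performs the real substitution internally):

```python
import json
import re

# Illustrative sketch only: Harness resolves ${...} expressions internally.
# The resolve() helper and the sample host name below are hypothetical.
def resolve(template: str, variables: dict) -> str:
    # Replace each ${name} token with its runtime value.
    return re.sub(r"\$\{([^}]+)\}", lambda m: variables[m.group(1)], template)

# A templated JSON fragment like the Stackdriver filter above.
query = '{"filter": "resource.label.\\"pod_name\\"=\\"${host}\\""}'
resolved = resolve(query, {"host": "harness-example-7d9f"})
print(json.loads(resolved)["filter"])
# resource.label."pod_name"="harness-example-7d9f"
```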
+ +### Step 9: View Verification Results + +Once you have deployed your Workflow (or Pipeline) using the Stackdriver verification step, you can automatically verify performance across your deployment. For more information, see [Add a Workflow](../../model-cd-pipeline/workflows/workflow-configuration.md) and [Add a Pipeline](../../model-cd-pipeline/pipelines/pipeline-configuration.md). + +#### Workflow Verification + +To see the results of Harness machine-learning evaluation of your Stackdriver verification, in your workflow or pipeline deployment you can expand the **Verify Service** step and then click the **Stackdriver** step. + +![](./static/verify-deployments-with-stackdriver-metrics-28.png) + +#### Continuous Verification + +You can also see the evaluation in the **Continuous Verification** dashboard. The Workflow verification view is for the DevOps user who developed the workflow. The **Continuous Verification** dashboard is where all future deployments are displayed for developers and others interested in deployment analysis. + +### Next Steps + +* [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code) +* [Users and Permissions](https://docs.harness.io/article/ven0bvulsj-users-and-permissions) +* [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/1-sumo-logic-connection-setup.md b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/1-sumo-logic-connection-setup.md new file mode 100644 index 00000000000..64000764d59 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/1-sumo-logic-connection-setup.md @@ -0,0 +1,75 @@ +--- +title: Connect to Sumo Logic +description: Connect Harness to Sumo Logic and verify the success of your deployments and live microservices. 
+sidebar_position: 10 +helpdocs_topic_id: 38qrwi7wu2 +helpdocs_category_id: ux6clfhfhz +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The first step in using Sumo Logic with Harness is to set up a Sumo Logic Verification Provider in Harness. + +A Harness Verification Provider is a connection to monitoring tools such as Sumo Logic. Once Harness is connected, you can use Harness 24/7 Service Guard and Deployment Verification with your Sumo Logic data and analysis. + +The Sumo Logic API is available to Sumo Logic Enterprise Accounts only. For more information, see [About the Search Job API](https://help.sumologic.com/APIs/Search-Job-API/About-the-Search-Job-API) from Sumo Logic. + + +### Before You Begin + +* See the [Sumo Logic Verification Provider Overview](../continuous-verification-overview/concepts-cv/sumo-logic-verification-overview.md). + +### Step 1: Add Sumo Logic Verification Provider + +To add Sumo Logic as a verification provider, do the following: + +1. In Harness, click **Setup**. +2. Click **Connectors**, and then click **Verification Providers**. +3. Click **Add Verification Provider**, and select **Sumo Logic**. The **Add Sumo Logic Verification Provider** dialog appears. + + ![](./static/1-sumo-logic-connection-setup-00.png) + +4. Complete the following fields of the **Add Sumo Logic Verification Provider** dialog. + +### Step 2: Display Name + +The name for the Sumo Logic verification provider connection in Harness. If you will have multiple Sumo Logic connections, enter a unique name. You will use this name to select this connection when integrating Sumo Logic with the **Verify Steps** of your workflows, described below. + +### Step 3: Sumo Logic API Server URL + +The API URL for your Sumo Logic account. The format of the URL is: https://api.*YOUR\_DEPLOYMENT*.sumologic.com/api/v1/, where *YOUR\_DEPLOYMENT* is either **us1**, **us2**, **eu**, **de**, or **au**.
For **us1**, use **api.sumologic.com**. Sumo Logic applies default [rate limiting](https://help.sumologic.com/APIs/General-API-Information/API-Authentication#Rate_limiting). For more information, see [API Authentication](https://help.sumologic.com/APIs/General-API-Information/API-Authentication) from Sumo Logic. + +### Step 4: Encrypted Access ID + +For secrets and other sensitive settings, select or create a new [Harness Encrypted Text secret](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets). Select or create the secret for the access ID of the user account you want to use to connect to Sumo Logic. + +Access keys are generated by an individual user in Sumo Logic depending on the permissions set for their account. For more information on creating the access keys, see [Access Keys](https://help.sumologic.com/Manage/Security/Access-Keys) from Sumo Logic. + +### Step 5: Encrypted Access Key + +Select or create the [Harness Encrypted Text secret](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets) for the access key of the Sumo Logic user account using the connection. + +[![](./static/1-sumo-logic-connection-setup-01.png)](./static/1-sumo-logic-connection-setup-01.png) + +For more information, see [Access Keys](https://help.sumologic.com/Manage/Security/Access-Keys) from Sumo Logic. + +### Step 6: Usage Scope + +If you want to restrict the use of a provider to specific applications and environments, do the following: + +In **Usage Scope**, click the drop-down under **Applications**, and click the name of the application. + +In **Environments**, click the name of the environment. + +### Step 7: Test and Submit + +1. When you have set up the dialog, click **TEST**. +2. Once the test is completed, click **SUBMIT** to add the **Verification Provider**.
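The endpoint rule from Step 3 can be sketched as a tiny helper (illustrative only; `build_api_url` is not a Harness or Sumo Logic API — Harness just needs the resulting URL string in the dialog):

```python
# Sketch of the Step 3 rule: the us1 deployment uses api.sumologic.com,
# while other deployments embed the deployment name in the hostname.
def build_api_url(deployment: str) -> str:
    host = "api.sumologic.com" if deployment == "us1" else f"api.{deployment}.sumologic.com"
    return f"https://{host}/api/v1/"

print(build_api_url("us1"))  # https://api.sumologic.com/api/v1/
print(build_api_url("eu"))   # https://api.eu.sumologic.com/api/v1/
```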
+ +Once you have set up Sumo Logic as a Verification Provider, you can integrate it into 24/7 Service Guard and your Workflows, as described below. + +### Next Steps + +* [Monitor Applications 24/7 with Sumo Logic](2-24-7-service-guard-for-sumo-logic.md) +* [Verify Deployments with Sumo Logic](3-verify-deployments-with-sumo-logic.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/2-24-7-service-guard-for-sumo-logic.md b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/2-24-7-service-guard-for-sumo-logic.md new file mode 100644 index 00000000000..8b15a259ea1 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/2-24-7-service-guard-for-sumo-logic.md @@ -0,0 +1,81 @@ +--- +title: Monitor Applications 24/7 with Sumo Logic +description: Combined with Sumo Logic, Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. +sidebar_position: 20 +helpdocs_topic_id: cj2a2sb65g +helpdocs_category_id: ux6clfhfhz +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness 24/7 Service Guard monitors your live applications, catching problems that surface minutes or hours following deployment. For more information, see [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). + +You can add your Sumo Logic monitoring to Harness 24/7 Service Guard in your Harness Application Environment. For a setup overview, see  [Connect to Sumo Logic](1-sumo-logic-connection-setup.md). + +For more information on 24/7 Service Guard, see [24/7 Service Guard](https://docs.harness.io/article/l5ky4p140j-24-x-7-service-guard). + +### Before You Begin + +* See the [Sumo Logic Verification Overview](../continuous-verification-overview/concepts-cv/sumo-logic-verification-overview.md). 
+* See [Connect to Sumo Logic](1-sumo-logic-connection-setup.md). + +### Visual Summary + +Here's an example configuration of 24/7 Service Guard for Sumo Logic. + +![](./static/2-24-7-service-guard-for-sumo-logic-18.png) + +### Step 1: Set Up 24/7 Service Guard for Sumo Logic + +To set up 24/7 Service Guard for Sumo Logic, do the following: + +1. Ensure that you have added Sumo Logic as a Harness Verification Provider, as described in [Connect to Sumo Logic](1-sumo-logic-connection-setup.md). +2. In your Harness Application, ensure that you have added a Service, as described in [Services](../../model-cd-pipeline/setup-services/service-configuration.md). For 24/7 Service Guard, you do not need to add an Artifact Source to the Service, or configure its settings. You simply need to create a Service and name it. It will represent your application for 24/7 Service Guard. +3. In your Harness Application, click **Environments**. +4. In **Environments**, ensure that you have added an Environment for the Service you added. For steps on adding an Environment, see [Environments](../../model-cd-pipeline/environments/environment-configuration.md). +5. Click the Environment for your Service. Typically, the **Environment Type** is **Production**. +6. In the **Environment** page, locate **24/7 Service Guard**. +7. In **24/7 Service Guard**, click **Add Service Verification**, and then click **Sumo Logic**. The **Sumo Logic** dialog appears. +8. Fill out the dialog, which has the following fields. + +For 24/7 Service Guard, the queries you define to collect logs are specific to the application or service you want monitored. Verification is at the application/service level. This is unlike Workflows, where verification is performed at the host/node/pod level. + +### Step 2: Display Name + +The name that will identify this service on the **Continuous Verification** dashboard. Use a name that indicates the environment and monitoring tool, such as **SumoLogic**.
+ +### Step 3: Service + +The Harness Service to monitor with 24/7 Service Guard. + +### Step 4: Sumo Logic Server + +Select the Sumo Logic Verification Provider to use. + +### Step 5: Search Keywords + +Enter search keywords for your query, such as **\*exception\***. + +### Step 6: Enable 24/7 Service Guard + +Click the checkbox to enable 24/7 Service Guard. + +### Step 7: Verify Your Settings + +1. Click **TEST**. Harness verifies the settings you entered. +2. Click **SUBMIT**. The Sumo Logic 24/7 Service Guard is configured. + +![](./static/2-24-7-service-guard-for-sumo-logic-19.png) + +To see the running 24/7 Service Guard analysis, click **Continuous Verification**. + +The 24/7 Service Guard dashboard displays the production verification results. + +![](./static/2-24-7-service-guard-for-sumo-logic-20.png) + + For more information, see [24/7 Service Guard Overview](../continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). + +### Next Steps + +* [Verify Deployments with Sumo Logic](3-verify-deployments-with-sumo-logic.md) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/3-verify-deployments-with-sumo-logic.md b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/3-verify-deployments-with-sumo-logic.md new file mode 100644 index 00000000000..7797cbd6b27 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/3-verify-deployments-with-sumo-logic.md @@ -0,0 +1,164 @@ +--- +title: Verify Deployments with Sumo Logic +description: Harness can analyze Sumo Logic metrics to verify, rollback, and improve deployments. +sidebar_position: 30 +helpdocs_topic_id: jwfw9qy5it +helpdocs_category_id: ux6clfhfhz +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness can analyze Sumo Logic data to verify, rollback, and improve deployments. 
To apply this analysis to your deployments, you set up Sumo Logic as a verification step in a Harness Workflow. This section covers how to do so, and provides a summary of Harness verification results. + +In order to obtain the names of the host(s), pod(s), or container(s) where your service is deployed, the verification provider should be added to your Workflow *after* you have run at least one successful deployment. + + +## Before You Begin + +* See the [Sumo Logic Verification Overview](../continuous-verification-overview/concepts-cv/sumo-logic-verification-overview.md). +* See [Connect to Sumo Logic](1-sumo-logic-connection-setup.md). + +## Visual Summary + +Here's an example of Sumo Logic setup for verification. + +![](./static/3-verify-deployments-with-sumo-logic-03.png) + +## Step 1: Set Up the Deployment Verification + +To verify your deployment with Sumo Logic, do the following: + +1. Ensure that you have added Sumo Logic as a verification provider, as described in [Sumo Logic Connection Setup](1-sumo-logic-connection-setup.md). +2. In your Workflow, under **Verify Service**, click **Add Verification**. +3. In the resulting **Add Step** settings, select **Log Analysis** > **Sumo Logic**. + + ![](./static/3-verify-deployments-with-sumo-logic-04.png) + +4. Click **Next**. The **Configure Sumo Logic** settings appear. + + ![](./static/3-verify-deployments-with-sumo-logic-05.png) + +## Step 2: Sumo Logic Server + +Select the Sumo Logic verification provider you added, as described above. + +## Step 3: Search Keywords + +Enter the keywords for your search. Use the Sumo Logic search field and then copy your keywords into the **Sumo Logic** dialog.
+ +[![](./static/3-verify-deployments-with-sumo-logic-06.png)](./static/3-verify-deployments-with-sumo-logic-06.png) + +Example keywords: **\*exception\*** and **\*error\***. For more information, see [Search Syntax Overview](https://help.sumologic.com/Search/Get-Started-with-Search/How-to-Build-a-Search/Search-Syntax-Overview) and [Keyword Search Expressions](https://help.sumologic.com/Search/Get-Started-with-Search/How-to-Build-a-Search/Keyword-Search-Expressions) from Sumo Logic. + +## Step 4: Field name for Host/Container + +Enter the message field that contains the host name. You can find this in the Sumo Logic search. In the Sumo Logic search field, start typing **\_source** and see the metadata options: + +[![](./static/3-verify-deployments-with-sumo-logic-08.png)](./static/3-verify-deployments-with-sumo-logic-08.png) + +Click on the source host option, **\_sourceHost**, and execute a query with it. + +[![](./static/3-verify-deployments-with-sumo-logic-10.png)](./static/3-verify-deployments-with-sumo-logic-10.png) + +View the query results and confirm that the **\_sourceHost** field returns the name of the host. Then enter **\_sourceHost** in the **Field name for Host/Container** field. + +## Step 5: Expression for Host/Container name + +Add an expression that evaluates to the hostname value for the **Message** field host information. For example, in Sumo Logic, if you look at an exception **Message**, you will see a **Host** field: + +[![](./static/3-verify-deployments-with-sumo-logic-12.png)](./static/3-verify-deployments-with-sumo-logic-12.png) + +In the service infrastructure where your Workflow deployed your artifact (see [Add a Service Infrastructure](../../model-cd-pipeline/environments/environment-configuration.md#add-a-service-infrastructure)), the hostname is listed in a JSON **name** label under a **host** label. Locate the **name** label that displays the same value as the **Host** field in your Sumo Logic **Message**.
Locate the path to that **name** label and use it as the expression in **Expression for Host/Container name**. The default expression is **${instance.host.hostName}**. + +For AWS EC2 hostnames, use the expression `${instance.hostName}`. + +## Step 6: Analysis Time Duration + +Set the duration for the verification step. If a verification step exceeds the value, the Workflow's [Failure Strategy](../../model-cd-pipeline/workflows/workflow-configuration.md#failure-strategy) is triggered. For example, if the Failure Strategy is **Ignore**, then the verification state is marked **Failed** but the Workflow execution continues. + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#analysis-time-duration). + +## Step 7: Baseline for Risk Analysis + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md). + +## Step 8: Baseline for Predictive Analysis + +This option appears if you selected **Predictive Analysis** in **Baseline for Risk Analysis**. Specify the time unit Harness should use to pull logs to set as the baseline for predictive analysis, such as **Last 30 minutes**. + +A few notes about selecting the time unit for **Baseline for Predictive Analysis**: + +* The greater the length of time you specify for a **Predictive Analysis** baseline (in **Baseline for Predictive Analysis**), the longer it takes Harness to run the analysis. If you select **Last 24 hours**, it could take up to 15 or more minutes to perform predictive analysis. +* The greater the length of time you specify for a Predictive Analysis baseline, the more API calls Harness makes to the verification provider. Harness makes API calls to verification providers to obtain logs grouped in 15-minute batches. If you specify a long period of time for a Predictive Analysis baseline, Harness will need to make many API calls to the verification provider.
For example, if you select **Last 24 hours** as the baseline for Predictive Analysis, then Harness will make 96 API calls to collect that data. + +## Step 9: Algorithm Sensitivity + +Select the sensitivity that will result in the most useful results for your analysis. + +See [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria). + +## Step 10: Execute with previous steps + +Check this checkbox to run this verification step in parallel with the previous steps in **Verify Service**. + +## Step 11: Include instances from previous phases + +If you are using this verification step in a multi-phase deployment, select this checkbox to include instances used in previous phases when collecting data. Do not apply this setting to the first phase in a multi-phase deployment. + +## Step 12: Verify your Settings + +Click **Test**. Harness verifies the settings you entered. + +When you are finished, click **Submit**. The **Sumo Logic** verification step is added to your Workflow. + +![](./static/3-verify-deployments-with-sumo-logic-14.png) + +If you select **Predictive Analysis** in **Baseline for Risk Analysis**, the time unit range is displayed in the **Details** section of the results. See **Baseline** in the image below:![](./static/3-verify-deployments-with-sumo-logic-15.png) + +## Review: Harness Expression Support in CV Settings + +You can use expressions (`${...}`) for [Harness built-in variables](https://docs.harness.io/article/7bpdtvhq92-workflow-variables-expressions) and custom [Service](../../model-cd-pipeline/setup-services/service-configuration.md) and [Workflow](../../model-cd-pipeline/workflows/add-workflow-variables-new-template.md) variables in the settings of Harness Verification Providers. + +![](./static/3-verify-deployments-with-sumo-logic-16.png) + +Expression support lets you template your Workflow verification steps. 
You can add custom expressions for settings, and then provide values for those settings at deployment runtime. Or you can use Harness built-in variable expressions and Harness will provide values at deployment runtime automatically. + +## Step 13: View Verification Results + +Once you have deployed your Workflow (or Pipeline) using the Sumo Logic verification step, you can automatically verify cloud application and infrastructure performance across your deployment. For more information, see [Add a Workflow](../../model-cd-pipeline/workflows/workflow-configuration.md#deploy-a-workflow) and [Add a Pipeline](../../model-cd-pipeline/pipelines/pipeline-configuration.md#deploy-a-pipeline). + +### Workflow Verification + +To see the results of Harness machine-learning evaluation of your Sumo Logic verification: In your Workflow or Pipeline deployment, you can expand the **Verify Service** step, and then click the **Sumo Logic** step. + +![](./static/3-verify-deployments-with-sumo-logic-17.png) + +### Continuous Verification + +You can also see the evaluation in the **Continuous Verification** dashboard. The Workflow verification view is for the DevOps user who developed the Workflow. The **Continuous Verification** dashboard is where all future deployments are displayed for developers and others interested in deployment analysis. + +To learn about the verification analysis features, see the following sections. + +### Transaction Analysis + +* **Execution details:** See the details of verification execution. Total is the total time the verification step took, and Analysis duration is how long the analysis took. +* **Risk level analysis:** Get an overall risk level and view the cluster chart to see events. +* **Transaction-level summary:** See a summary of each transaction with the query string, error values comparison, and a risk analysis summary. 
+ +### Execution Analysis + +* **Event type:** Filter cluster chart events by Unknown Event, Unexpected Frequency, Anticipated Event, Baseline Event, and Ignore Event. +* **Cluster chart:** View the chart to see how the selected event contrasts with other events. Click each event to see its log details. + +### Event Management + +* **Event-level analysis:** See the threat level for each event captured. +* **Tune event capture:** Remove events from analysis at the Service, Workflow, execution, or overall level. +* **Event distribution:** Click the chart icon to see an event distribution including the measured data, baseline data, and event frequency. + +## Next Steps + +* [CV Strategies, Tuning, and Best Practices](../continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#algorithm-sensitivity-and-failure-criteria) +* [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code) +* [Managing Users and Groups (RBAC)](https://docs.harness.io/article/ven0bvulsj-users-and-permissions) + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/_category_.json b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/_category_.json new file mode 100644 index 00000000000..e188ebfb5c3 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "Sumo Logic Verification", + "position": 140, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "Sumo Logic Verification" + }, + "customProps": { + "helpdocs_category_id": "ux6clfhfhz" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/1-sumo-logic-connection-setup-00.png 
b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/1-sumo-logic-connection-setup-00.png new file mode 100644 index 00000000000..f025949b404 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/1-sumo-logic-connection-setup-00.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/1-sumo-logic-connection-setup-01.png b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/1-sumo-logic-connection-setup-01.png new file mode 100644 index 00000000000..2d2e230f8fa Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/1-sumo-logic-connection-setup-01.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/1-sumo-logic-connection-setup-02.png b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/1-sumo-logic-connection-setup-02.png new file mode 100644 index 00000000000..2d2e230f8fa Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/1-sumo-logic-connection-setup-02.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/2-24-7-service-guard-for-sumo-logic-18.png b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/2-24-7-service-guard-for-sumo-logic-18.png new file mode 100644 index 00000000000..16b13e7f4d1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/2-24-7-service-guard-for-sumo-logic-18.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/2-24-7-service-guard-for-sumo-logic-19.png 
b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/2-24-7-service-guard-for-sumo-logic-19.png new file mode 100644 index 00000000000..67167ddc89f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/2-24-7-service-guard-for-sumo-logic-19.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/2-24-7-service-guard-for-sumo-logic-20.png b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/2-24-7-service-guard-for-sumo-logic-20.png new file mode 100644 index 00000000000..f92b66c434a Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/2-24-7-service-guard-for-sumo-logic-20.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-03.png b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-03.png new file mode 100644 index 00000000000..56506c3c45d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-03.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-04.png b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-04.png new file mode 100644 index 00000000000..359f8bcaee0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-04.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-05.png b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-05.png new file mode 100644 index 00000000000..56506c3c45d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-05.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-06.png b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-06.png new file mode 100644 index 00000000000..af8a557be9f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-06.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-07.png b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-07.png new file mode 100644 index 00000000000..af8a557be9f Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-07.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-08.png b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-08.png new file mode 100644 index 00000000000..7adc8ab06e5 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-08.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-09.png b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-09.png new file mode 100644 index 00000000000..7adc8ab06e5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-09.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-10.png b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-10.png new file mode 100644 index 00000000000..061a846d07b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-10.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-11.png b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-11.png new file mode 100644 index 00000000000..061a846d07b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-11.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-12.png b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-12.png new file mode 100644 index 00000000000..98f6459e35b Binary 
files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-12.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-13.png b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-13.png new file mode 100644 index 00000000000..98f6459e35b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-13.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-14.png b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-14.png new file mode 100644 index 00000000000..48e5863c5ae Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-14.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-15.png b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-15.png new file mode 100644 index 00000000000..bfedc6f858b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-15.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-16.png b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-16.png new file mode 100644 index 
00000000000..655f0aa31bd Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-16.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-17.png b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-17.png new file mode 100644 index 00000000000..5bb048cfc48 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/sumo-logic-verification/static/3-verify-deployments-with-sumo-logic-17.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/_category_.json b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/_category_.json new file mode 100644 index 00000000000..8f92900cf05 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/_category_.json @@ -0,0 +1 @@ +{"label": "Tuning and Tracking Verification", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Tuning and Tracking Verification"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "r04ke134bi"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/custom-thresholds.md b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/custom-thresholds.md new file mode 100644 index 00000000000..fd54a2520ca --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/custom-thresholds.md @@ -0,0 +1,88 @@ +--- +title: Apply Custom Thresholds to Deployment Verification +description: Define 
Fast-Fail Hints (conditions that will promptly fail a Workflow) and Ignore Hints (conditions that will skip certain metric/value combinations from Deployment Verification analysis). +# sidebar_position: 2 +helpdocs_topic_id: z2n6mnf7u0 +helpdocs_category_id: r04ke134bi +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Use Custom Thresholds to define two types of rules that override normal verification behavior: + +* **Ignore Hints** that instruct Harness to skip certain metric/value combinations from verification analysis. +* **Fast-Fail Hints** that cause a Workflow to enter a failed state. + +This topic covers: + +* [Limitations](#limitations) +* [Before You Begin](#before_you_begin) +* [Step 1: Invoke Custom Thresholds](#invoke) +* [Step 2: Define a Rule](#define_rule) +* [Step 3: Select Criteria](#select_criteria) +* [Step 4: Add Rules and Save](#repeat_save) +* [Next Steps](#next_steps) + + +### Limitations + +* Harness currently supports Ignore thresholds for all verification providers. +* Fast-Fail thresholds are supported with [New Relic](../continuous-verification-overview/concepts-cv/new-relic-verification-overview.md), [Prometheus](../prometheus-verification/3-verify-deployments-with-prometheus.md), and [Custom APMs](../custom-metrics-and-logs-verification/custom-verification-overview.md). + +### Before You Begin + +* In a Workflow's **Verify Service** section, click **Add Verification**. +* In the resulting **Add Step** settings, select a verification provider compatible with Custom Thresholds. +* Configure at least one Metrics Collection. + + +### Step 1: Invoke Custom Thresholds + +To begin defining one or more Custom Thresholds: + +1. In your Workflow, in the CV step you added, such as **Configure Prometheus**, click the pencil icon shown below. + + ![](./static/custom-thresholds-12.png) + +2. In the resulting dialog, select either the **Ignore Hints** or the **Fast-Fail Hints** tab. + +3. 
Click **Add Threshold** to begin defining a rule, as shown below. + + ![](./static/custom-thresholds-13.png) + +### Step 2: Define a Rule + +1. Use the drop-downs to select a desired **Transaction Name** and **Metric Name** from your defined Metrics Collections. +2. For Fast-Fail Hints, select an **Action** to take: **Fail immediately**, **Fail after multiple occurrences**, or **Fail after multiple occurrences in a row**. The latter two selections expose a field where you must also specify the threshold **Occurrence Count**. + + +### Step 3: Select Criteria + +Select the **Criteria** for this rule, and enter a corresponding **Value**. + +Depending on your **Criteria** selection, the **Value** field's label will change to **Less than** for Ignore Hints, and to **Greater than** and/or **Less than** selectors for Fast-Fail Hints (as shown below). + +![](./static/custom-thresholds-14.png) + +Here are the **Criteria** and **Value** options available for the metric you've selected: + +| **Criteria** | **Value** | +| --- | --- | +| Absolute Value | Enter a literal value of the selected metric. In Ignore Hints, observed values **Less than** this threshold will be skipped from verification analysis. In Fast-Fail Hints, use the **Range Selector** drop-down to select whether observed values **Less than** or **Greater than** your threshold **Value** will move the Workflow to a failed state. | +| Percentage Deviation | Enter a threshold percentage at which to either skip the metric from analysis (Ignore Hints), or fail the Workflow (Fast-Fail Hints). Units here are percentages, so entering `3` sets the threshold at a 3% anomaly from the norm. | +| Deviation | Like Percentage Deviation, this sets a threshold deviation from the norm, but the units are literal values of the selected metric rather than percentages. | + + +### Step 4: Add Rules and Save + +1. If you want to define additional rules, click **Add Threshold**, then repeat Steps 2–3. +2. 
Click **Submit** to save your rules and apply them to this Verification step. + + +### Next Steps + +When you deploy this Workflow and a Fast-Fail Hint moves it to a failed state, the Workflow's Details panel for the corresponding Verification step will indicate the triggering threshold. + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/jira-cv-ticket.md b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/jira-cv-ticket.md new file mode 100644 index 00000000000..f095f6315c0 --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/jira-cv-ticket.md @@ -0,0 +1,141 @@ +--- +title: File Jira Tickets on Verification Events +description: Create a Jira ticket from a Harness Verification event. +# sidebar_position: 2 +helpdocs_topic_id: v4d4pd5lxi +helpdocs_category_id: r04ke134bi +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can create a Jira ticket from a Harness Verification event, either in a deployment or in 24/7 Service Guard. + +In this topic: + +* [Review: Filing Jira Tickets and Refining Analysis Feedback](#review_filing_jira_tickets_and_refining_analysis_feedback) +* [Step 1: File a Jira Ticket on an Event](#step_1_file_a_jira_ticket_on_an_event) +* [Step 2: Summary](#step_2_summary) +* [Step 3: Description](#step_3_description) +* [Step 4: Jira Connector](#step_4_jira_connector) +* [Step 5: Project](#step_5_project) +* [Step 6: Issue Type](#step_6_issue_type) +* [Step 7: Priority](#step_7_priority) +* [Step 8: Labels](#step_8_labels) +* [Step 9: Custom Fields](#step_9_custom_fields) + +### Review: Filing Jira Tickets and Refining Analysis Feedback + +![](./static/jira-cv-ticket-00.png) + +For deployments, filing Jira tickets on events is an effective way to address the causes of a deployment's failure. 
+ +For 24/7 Service Guard, filing Jira tickets on events is very powerful: it helps you get a jump on live production issues before they fail future deployments. + +For information on refining event analysis feedback, see: + +* [Harness Verification Feedback Overview](../continuous-verification-overview/concepts-cv/harness-verification-feedback-overview.md) +* [Refine 24/7 Service Guard Verification Analysis](refine-24-7-service-guard-verification-analysis.md) +* [Refine Deployment Verification Analysis](refine-deployment-verification-analysis.md) +* [Verification Event Classifications](https://docs.harness.io/article/339hy0kbnu-verification-event-classifications) + +### Step 1: File a Jira Ticket on an Event + +To create a Jira ticket from an event item, do the following: + +1. To use Jira integration with your Harness deployment verifications and 24/7 Service Guard, you need to add a Jira account as a Harness Collaboration Provider, as described in [Add Collaboration Providers](https://docs.harness.io/article/cv98scx8pj-collaboration-providers#jira). +2. View a deployment verification in **Continuous Deployments** or a live production verification in **24/7 Service Guard**: + a. In a deployment, click a verify step to see its logs and events. + b. In 24/7 Service Guard, click a heatmap entry to see its logs and events. + + ![](./static/jira-cv-ticket-01.png) + +3. If the event does not have a priority assigned to it, assign it a priority as described in: + * [Refine 24/7 Service Guard Verification Analysis](refine-24-7-service-guard-verification-analysis.md) + * [Refine Deployment Verification Analysis](refine-deployment-verification-analysis.md) +4. Once the event is assigned a priority, the Jira icon appears next to the event. Click the **Jira** icon. + +![](./static/jira-cv-ticket-02.png) + +The dialog for filing a Jira ticket appears. 
+ + +![](./static/jira-cv-ticket-03.png) + +### Step 2: Summary + +The **Summary** field contains information about the Harness Service and Environment. For example: + +`Continuous Verification anomaly detected for service: ToDo List in environment: CV-Test-Env` + +This is because Verification Feedback is at the Harness Service and Environment level. Learn more about feedback in [Harness Verification Feedback Overview](../continuous-verification-overview/concepts-cv/harness-verification-feedback-overview.md). + +### Step 3: Description + +The **Description** field contains the following: + +* Link to Deployment or 24/7 Service Guard log event. +* Log content. + +![](./static/jira-cv-ticket-04.png) + +Fill out the remaining Jira ticket fields. + +### Step 4: Jira Connector + +Choose the Jira account to use by selecting the Jira Collaboration Provider set up for that account. For more information, see [Jira](https://docs.harness.io/article/cv98scx8pj-collaboration-providers#jira). + +### Step 5: Project + +Select a Jira project from the list. A Jira project is used to create the issue key and ID when the issue is created. The unique issue number is created automatically by Jira. + +### Step 6: Issue Type + +Select a Jira issue type from the list of types in the Jira project you selected. + +### Step 7: Priority + +Select a priority for the Jira issue. The list is generated from the Jira project you selected. + +### Step 8: Labels + +Add labels to the issue. These will be added to the Jira project you selected. + +### Step 9: Custom Fields + +Click **Configure Fields** to add custom fields. + +![](./static/jira-cv-ticket-05.png) + +When you are done, click **Submit**. + +The Jira issue number is added to the event details: + +![](./static/jira-cv-ticket-06.png) + +Click the Jira issue link to open Jira and see the issue. 
The issue contains the summary, link, and log content that was displayed in the Continuous Verification Feedback dialog: + +![](./static/jira-cv-ticket-07.png) + +You can update this ticket using the Jira command in Workflows. For more information, see [Jira Integration](../../model-cd-pipeline/workflows/jira-integration.md). + +Back in Harness, you can view all of the feedback for your event by clicking the **View Feedback** option: + +![](./static/jira-cv-ticket-08.png) + +The **Continuous Verification Feedback** dialog appears: + +![](./static/jira-cv-ticket-09.png) + +Once you have filed a Jira ticket, you might want to mark the event as **Not a Risk** so that it does not cause future deployment failures. Anomalous Events, with or without a priority, fail deployments. + +Also, since the event is actively being resolved, it does not need to fail a deployment. + +In the **Preferences** dialog, select **Mark as Not a Risk** and click **Submit**. + +![](./static/jira-cv-ticket-10.png) + +The **Anomalous Event** tag is crossed out and the **Not a Risk** tag is attached. + +![](./static/jira-cv-ticket-11.png) + +For details about event priorities and classifications, see [Verification Event Classifications](https://docs.harness.io/article/339hy0kbnu-verification-event-classifications). \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/refine-24-7-service-guard-verification-analysis.md b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/refine-24-7-service-guard-verification-analysis.md new file mode 100644 index 00000000000..631590a244a --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/refine-24-7-service-guard-verification-analysis.md @@ -0,0 +1,139 @@ +--- +title: Refine 24/7 Service Guard Verification Analysis +description: Change the priority of an event in 24/7 Service Guard. 
+# sidebar_position: 2 +helpdocs_topic_id: 4r2a5nc6q0 +helpdocs_category_id: r04ke134bi +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can refine the priority or classification of an event in 24/7 Service Guard. + +For information on changing event priorities in a deployment, see [Refine Deployment Verification Analysis](refine-deployment-verification-analysis.md). Event classifications are covered in [Verification Event Classifications](https://docs.harness.io/article/339hy0kbnu-verification-event-classifications). For an overview of verification analysis feedback, see [Harness Verification Feedback Overview](../continuous-verification-overview/concepts-cv/harness-verification-feedback-overview.md). + +In this topic: + +* [Review: Permissions Required](#review_permissions_required) +* [Step 1: Change Event Priority in 24/7 Service Guard](#step_1_change_event_priority_in_24_7_service_guard) +* [Option: Remove from Baseline](#option_remove_from_baseline) +* [Option: Mark as Not a Risk](#option_mark_as_not_a_risk) +* [Option: Update the Event Priority](#option_update_the_event_priority) +* [Option: Add Feedback Note](#option_add_feedback_note) +* [Step 2: Review 24/7 Service Guard Feedback](#step_2_review_24_7_service_guard_feedback) + +### Review: Permissions Required + +To mark an event as not a risk (**Mark as Not a Risk**) or change its priority setting (**P0**, **P1**, etc.), a Harness User must be a member of a User Group with the following User Group **Application Permissions**: + +| **Permission Types** | **Applications** | **Filters** | **Actions** | +| --- | --- | --- | --- | +| **Services** | `` | `` | **Update** | +| **Environments** | `` | `` | **Update** | + +For example, the User Group Application Permissions might look like this: + +![](./static/refine-24-7-service-guard-verification-analysis-29.png) + +### Step 1: Change Event Priority in 24/7 Service Guard + +To change the priority of an event in 24/7 Service Guard, 
do the following: + +1. In Harness Manager, click **Continuous Verification**. +2. Locate the Service you are interested in reviewing, and then click in the heatmap of the verification tool to view its verification analysis. + + ![](./static/refine-24-7-service-guard-verification-analysis-30.png) + +3. Review the analysis to determine if any events need to be changed. +4. To change the risk assessment for an event, click the risk assessment icon: + + ![](./static/refine-24-7-service-guard-verification-analysis-31.png) + + The priority adjustment dialog appears: + + ![](./static/refine-24-7-service-guard-verification-analysis-32.png) + +5. Select a different priority setting. The options in the dialog are described below. + +Once you have assigned a priority to an event (P0-P5), you can create a Jira issue using the event. See [File Jira Tickets on Verification Events](jira-cv-ticket.md). + +For details on the different verification settings, see [Verification Event Classifications](https://docs.harness.io/article/339hy0kbnu-verification-event-classifications). + +#### Priority Events in 24/7 Service Guard + +While adding P# priority to events after a deployment is very useful (as described in [Refine Deployment Verification Analysis](https://harness.helpdocs.io/article/gd9skrjb4g-refine-deployment-verification-analysis)), priority events are especially useful in 24/7 Service Guard. + +24/7 Service Guard monitors your live, production application or service. You can mark events that show up in the live monitoring as P0-P5 and assign Jira issues for them, thereby fixing issues as soon as they show up. This prevents issues from surfacing during your next deployment. + +See [File Jira Tickets on Verification Events](https://harness.helpdocs.io/article/v4d4pd5lxi-jira-cv-ticket). + +### Option: Remove from Baseline + +This option appears if the event was marked as **Not a Risk**. 
+ + +![](./static/refine-24-7-service-guard-verification-analysis-33.png) + +If you consider this event to be a risk, click **Remove from baseline** and assign a priority to the event. + +### Option: Mark as Not a Risk + +This option appears if the event was marked with a priority (P0-P5). + +![](./static/refine-24-7-service-guard-verification-analysis-34.png) + +Priority events fail deployments (as do Anomalous Events). If the event should not fail the deployment, select **Mark as Not a Risk**. + +![](./static/refine-24-7-service-guard-verification-analysis-35.png) + +The next time this deployment is run, this event will be marked as a **Known Event** and added to the baseline for comparison with future deployments. + +### Option: Update the Event Priority + +This option is available if an event has a priority assigned to it (P0-P5). + +![](./static/refine-24-7-service-guard-verification-analysis-36.png) + +**Update the Event Priority** lets you change the priority for the event. All priority events fail deployments, but the priority levels help you distinguish severity on the Deployments page. The priority level colors are also reflected in the 24/7 Service Guard heatmap: + +![](./static/refine-24-7-service-guard-verification-analysis-37.png) + +Once you have assigned a priority to an event (P0-P5), you can create a Jira issue using the event. See [File Jira Tickets on Verification Events](jira-cv-ticket.md). + +### Option: Add Feedback Note + +You can add notes to each event using the **Add Feedback Note** option. + +![](./static/refine-24-7-service-guard-verification-analysis-38.png) + +The note will remain with the event in future deployments. + +### Step 2: Review 24/7 Service Guard Feedback + +Once you have changed the priority or classification of an event, the event is listed in the Continuous Verification Feedback dialog for the 24/7 Service Guard Analysis. 
![](./static/refine-24-7-service-guard-verification-analysis-39.png) + +To review the 24/7 Service Guard feedback, do the following: + +1. In Harness Manager, click **Continuous Verification** to open 24/7 Service Guard. +2. Locate a Service you want to review. +3. Locate the Verification Provider for the Service you want to review. +4. Click the more options menu (**︙**) and then click **View Feedback**. + +![](./static/refine-24-7-service-guard-verification-analysis-40.png) + +The **Continuous Verification Feedback** dialog appears. + +![](./static/refine-24-7-service-guard-verification-analysis-41.png) + +1. Review the Execution Analysis to determine if any events need to be changed. +2. Click the more options menu (**︙**) and then click **View Feedback**. + +![](./static/refine-24-7-service-guard-verification-analysis-42.png) + +The event title will change to indicate who updated it by adding your name to **Updated priority by <User name>**. + +All future analyses will use the new priority setting for similar events (similar by text similarity). + diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/refine-deployment-verification-analysis.md b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/refine-deployment-verification-analysis.md new file mode 100644 index 00000000000..8a307e0194b --- /dev/null +++ b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/refine-deployment-verification-analysis.md @@ -0,0 +1,127 @@ +--- +title: Refine Deployment Verification Analysis +description: Change the priority of a log event in a Workflow deployment. +# sidebar_position: 2 +helpdocs_topic_id: gd9skrjb4g +helpdocs_category_id: r04ke134bi +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can refine the priority or category of an event in a Workflow deployment. 
+ +For information on changing event priorities in 24/7 Service Guard, see [Refine 24/7 Service Guard Verification Analysis](refine-24-7-service-guard-verification-analysis.md). Event classifications are covered in [Verification Event Classifications](https://docs.harness.io/article/339hy0kbnu-verification-event-classifications). For an overview of verification analysis feedback, see [Harness Verification Feedback Overview](../continuous-verification-overview/concepts-cv/harness-verification-feedback-overview.md). + +In this topic: + +* [Review: Permissions Required](#review_permissions_required) +* [Step 1: Change Event Priority in a Deployment](#step_1_change_event_priority_in_a_deployment) +* [Option: Remove from Baseline](#option_remove_from_baseline) +* [Option: Mark as Not a Risk](#option_mark_as_not_a_risk) +* [Option: Update the Event Priority](#option_update_the_event_priority) +* [Option: Add Feedback Note](#option_add_feedback_note) +* [Step 2: Review Deployment Verification Feedback](#step_2_review_deployment_verification_feedback) + +### Review: Permissions Required + +To mark an event as not a risk (**Mark as Not a Risk**) or change its priority setting (**P0**, **P1**, etc.), a Harness User must be a member of a User Group with the following User Group **Application Permissions**: + +| **Permission Types** | **Applications** | **Filters** | **Actions** | +| --- | --- | --- | --- | +| **Services** | `` | `` | **Update** | +| **Environments** | `` | `` | **Update** | +
+For example, the User Group Application Permissions might look like this: + +![](./static/refine-deployment-verification-analysis-15.png) + +### Step 1: Change Event Priority in a Deployment + +To change the priority of an event in a deployment, do the following: + +1. In Harness Manager, click **Continuous Deployments**. +2. Click the name of the Workflow deployment (or Pipeline containing the Workflow) that deployed the Service you are interested in reviewing. +3. 
In the deployment, click the verification step you want to refine to view its **Analysis**. + + ![](./static/refine-deployment-verification-analysis-16.png) + +4. Review the **Analysis** to determine if the priority of any events needs to be changed. +5. To change the risk assessment for an event, click the risk assessment icon: + + ![](./static/refine-deployment-verification-analysis-17.png) + + The event details dialog appears: + + ![](./static/refine-deployment-verification-analysis-18.png) + +6. Select a different priority setting. For details on the different verification settings, see [Verification Event Classifications](https://docs.harness.io/article/339hy0kbnu-verification-event-classifications). + +The options in the dialog are described below. + +### Option: Remove from Baseline + +This option appears if the event was marked as **Not a Risk**. + + +![](./static/refine-deployment-verification-analysis-19.png) + +If you consider this event to be a risk, click **Remove from baseline** and assign a priority to the event. + +### Option: Mark as Not a Risk + +This option appears if the event was marked with a priority (P0-P5). + +![](./static/refine-deployment-verification-analysis-20.png) + +Priority events fail deployments (as do Anomalous Events). If the event should not fail the deployment, select **Mark as Not a Risk**. + +![](./static/refine-deployment-verification-analysis-21.png) + +The next time this deployment is run, this event will be marked as a **Known Event** and added to the baseline for comparison with future deployments. + +### Option: Update the Event Priority + +This option is available if an event has a priority assigned to it (P0-P5). + +![](./static/refine-deployment-verification-analysis-22.png) + +**Update the Event Priority** lets you change the priority for the event. All priority events fail deployments, but using the priority levels helps to reveal the different levels in the Deployments page. 
The priority level colors are also reflected in the 24/7 Service Guard heatmap: + +![](./static/refine-deployment-verification-analysis-23.png) + +### Option: Add Feedback Note + +You can add notes to each event using the **Add Feedback Note** option. + +![](./static/refine-deployment-verification-analysis-24.png) + +The note will remain with the event in future deployments. + +### Step 2: Review Deployment Verification Feedback + +Once you have changed the priority or classification of an event, the event is listed in the Continuous Verification Feedback dialog for the verification step Analysis. + +![](./static/refine-deployment-verification-analysis-25.png) + +To review the Continuous Verification feedback, do the following: + +1. In Harness Manager, click **Continuous Deployments** to open a deployment. +2. Click a deployment's name to open it, and then expand the deployment steps until you find the verification you want to review. Click the verification step to display its **Analysis**. +3. Click the more options menu (**︙**) and then click **View Feedback**. + + ![](./static/refine-deployment-verification-analysis-26.png) + + The **Continuous Verification Feedback** dialog appears. + + ![](./static/refine-deployment-verification-analysis-27.png) + +4. Review the Execution Analysis to determine if any events need to be changed. +5. To change the risk assessment for an event, click the feedback icon. + + ![](./static/refine-deployment-verification-analysis-28.png) + + The event title will change to indicate who updated it by adding your name to **Updated priority by <User name>**. + +All future verifications will use the new priority setting for similar events (similar by text similarity). 
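
The "similar by text similarity" matching can be pictured with a minimal sketch. This is purely illustrative — Harness's actual similarity model is internal to the product — and the token-set Jaccard measure and `0.7` threshold below are assumptions chosen for the example:

```python
# Illustrative sketch: propagate user-assigned priorities to events with
# similar log text. The Jaccard measure and 0.7 threshold are assumptions;
# they are NOT the algorithm Harness uses.

def tokens(text: str) -> set:
    """Split log text into a lowercase token set."""
    return set(text.lower().split())

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two log lines."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def propagate_priority(events, feedback, threshold=0.7):
    """Apply feedback (event text -> priority) to sufficiently similar events."""
    for event in events:
        for fb_text, priority in feedback.items():
            if jaccard(event["text"], fb_text) >= threshold:
                event["priority"] = priority
    return events

events = [
    {"text": "connection timeout to db-service after 30s", "priority": None},
    {"text": "connection timeout to db-service after 45s", "priority": None},
    {"text": "user login succeeded", "priority": None},
]
feedback = {"connection timeout to db-service after 30s": "P1"}
# Both timeout events inherit P1 (they differ only in one token);
# the unrelated login event keeps its unset priority.
print(propagate_priority(events, feedback))
```

The point of the sketch is the behavior you see in the docs above: once you re-prioritize one event, later analyses treat near-duplicate log lines the same way, without you re-classifying each occurrence.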
+ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/custom-thresholds-12.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/custom-thresholds-12.png new file mode 100644 index 00000000000..613bb6dd65e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/custom-thresholds-12.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/custom-thresholds-13.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/custom-thresholds-13.png new file mode 100644 index 00000000000..b708ef8b4aa Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/custom-thresholds-13.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/custom-thresholds-14.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/custom-thresholds-14.png new file mode 100644 index 00000000000..956f8e1f533 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/custom-thresholds-14.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-00.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-00.png new file mode 100644 index 00000000000..d982e5b2eee Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-00.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-01.png 
b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-01.png new file mode 100644 index 00000000000..b26c840b96b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-01.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-02.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-02.png new file mode 100644 index 00000000000..4ccc0d53b1b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-02.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-03.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-03.png new file mode 100644 index 00000000000..c79b652aab0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-03.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-04.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-04.png new file mode 100644 index 00000000000..9c5efb556c3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-04.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-05.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-05.png new file mode 100644 index 00000000000..165d9d6165b Binary files 
/dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-05.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-06.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-06.png new file mode 100644 index 00000000000..3677889acd9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-06.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-07.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-07.png new file mode 100644 index 00000000000..27b2450143b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-07.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-08.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-08.png new file mode 100644 index 00000000000..8895b2a0a14 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-08.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-09.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-09.png new file mode 100644 index 00000000000..a1b75fbac61 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-09.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-10.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-10.png new file mode 100644 index 00000000000..b9022420263 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-10.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-11.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-11.png new file mode 100644 index 00000000000..038c73d5b30 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/jira-cv-ticket-11.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-29.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-29.png new file mode 100644 index 00000000000..0ed79def27d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-29.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-30.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-30.png new file mode 100644 index 00000000000..0c71636fe2a Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-30.png 
differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-31.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-31.png new file mode 100644 index 00000000000..0045589a55a Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-31.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-32.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-32.png new file mode 100644 index 00000000000..e79d6c5f9ee Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-32.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-33.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-33.png new file mode 100644 index 00000000000..99bd740ee45 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-33.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-34.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-34.png new file mode 
100644 index 00000000000..bf2f1691dea Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-34.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-35.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-35.png new file mode 100644 index 00000000000..c83526821c3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-35.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-36.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-36.png new file mode 100644 index 00000000000..964a2b50453 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-36.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-37.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-37.png new file mode 100644 index 00000000000..c29deff52e0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-37.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-38.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-38.png new file mode 100644 index 00000000000..dd7355ce9e9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-38.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-39.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-39.png new file mode 100644 index 00000000000..419fbce4e4e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-39.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-40.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-40.png new file mode 100644 index 00000000000..b32848fc0e8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-40.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-41.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-41.png new file mode 100644 index 
00000000000..419fbce4e4e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-41.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-42.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-42.png new file mode 100644 index 00000000000..2eb13a43fa0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-24-7-service-guard-verification-analysis-42.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-15.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-15.png new file mode 100644 index 00000000000..0ed79def27d Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-15.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-16.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-16.png new file mode 100644 index 00000000000..edacdc12891 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-16.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-17.png 
b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-17.png new file mode 100644 index 00000000000..6b66eabc681 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-17.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-18.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-18.png new file mode 100644 index 00000000000..40fcfc3b15b Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-18.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-19.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-19.png new file mode 100644 index 00000000000..99bd740ee45 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-19.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-20.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-20.png new file mode 100644 index 00000000000..bf2f1691dea Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-20.png differ diff --git 
a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-21.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-21.png new file mode 100644 index 00000000000..c83526821c3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-21.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-22.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-22.png new file mode 100644 index 00000000000..964a2b50453 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-22.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-23.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-23.png new file mode 100644 index 00000000000..c29deff52e0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-23.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-24.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-24.png new file mode 100644 index 00000000000..dd7355ce9e9 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-24.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-25.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-25.png new file mode 100644 index 00000000000..419fbce4e4e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-25.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-26.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-26.png new file mode 100644 index 00000000000..3c7cdff6aaa Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-26.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-27.png b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-27.png new file mode 100644 index 00000000000..419fbce4e4e Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-27.png differ diff --git a/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-28.png 
b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-28.png new file mode 100644 index 00000000000..2eb13a43fa0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/continuous-verification/tuning-tracking-verification/static/refine-deployment-verification-analysis-28.png differ diff --git a/docs/first-gen/continuous-delivery/custom-deployments/_category_.json b/docs/first-gen/continuous-delivery/custom-deployments/_category_.json new file mode 100644 index 00000000000..f8deee78e00 --- /dev/null +++ b/docs/first-gen/continuous-delivery/custom-deployments/_category_.json @@ -0,0 +1 @@ +{"label": "Custom Deployments", "position": 120, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Custom Deployments"}, "customProps": { "helpdocs_category_id": "29o4taom9v"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/custom-deployments/create-a-custom-deployment.md b/docs/first-gen/continuous-delivery/custom-deployments/create-a-custom-deployment.md new file mode 100644 index 00000000000..bef623d5ed6 --- /dev/null +++ b/docs/first-gen/continuous-delivery/custom-deployments/create-a-custom-deployment.md @@ -0,0 +1,414 @@ +--- +title: Create a Custom Deployment using Deployment Templates +description: Harness provides deployment support for all of the major platforms, listed in the ​Continuous Delivery category. In some cases, you might be using a platform that does not have first class support… +# sidebar_position: 2 +helpdocs_topic_id: g7m5a380kl +helpdocs_category_id: 29o4taom9v +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness provides deployment support for all of the major platforms, listed in the[​Continuous Delivery](https://docs.harness.io/category/continuous-delivery) category. 
+ +In some cases, you might be using a platform that does not have first class support in Harness, such as WebLogic, WebSphere, or [Google Cloud functions](https://community.harness.io/t/google-cloud-functions-with-harness-deployment-template/598). For these situations, Harness provides a custom deployment option using Deployment Templates. + +Deployment Templates use shell scripts to connect to target platforms, obtain target host information, and execute deployment steps. + +### Before You Begin + +You can review some of the other custom options Harness provides in addition to its support for all major platforms: + +* [Using Custom Artifact Sources](https://docs.harness.io/article/jizsp5tsms-custom-artifact-source) +* [Add and Use a Custom Secrets Manager](https://docs.harness.io/article/ejaddm3ddb-add-and-use-a-custom-secrets-manager) +* [Custom Shell Script Approvals](https://docs.harness.io/article/lf79ixw2ge-shell-script-ticketing-system) +* [Shell Script Provisioner](https://docs.harness.io/article/1m3p7phdqo-shell-script-provisioner) +* [Custom Verification Overview](https://docs.harness.io/article/e87u8c63z4-custom-verification-overview) + +Google Cloud Function deployments using Deployment Templates are covered in [Google Cloud Functions with Harness Deployment Template](https://community.harness.io/t/google-cloud-functions-with-harness-deployment-template/598). + +### Visual Summary + +The following illustration shows how the settings in the Deployment Template are applied in a Harness Service, Infrastructure Definition, and Workflow Fetch Instances and Shell Script steps. + +![](./static/create-a-custom-deployment-00.png) + +### Review: Custom Deployment using Deployment Template Overview + +Here is a summary of the steps for setting up custom deployments using Deployment Templates: + +1. Create a Deployment Template. +2. 
In the template, include a script that returns a JSON array containing a list of the target instances Harness will use to deploy your artifact. +3. Identify the array path to the host object in the JSON so Harness can locate it at deployment runtime. +4. Map any important host attributes that you want to reference later, such as IP or region. +5. Create a Harness Service that uses the Deployment Template. Artifacts are added just as they are for supported platforms. See [Add Artifact Servers](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server). Harness also includes the [Custom Artifact Source](https://docs.harness.io/article/jizsp5tsms-custom-artifact-source). +6. Create a Harness Infrastructure Definition that uses the template. +7. Create a Workflow that uses the Harness Service and Infrastructure Definition. +8. In the Workflow, add the **Fetch Instances** step wherever you want to execute the script in your template. + +That's it. Your Workflow will fetch the target instances as you requested and deploy your artifact to them. + +### Limitations + +Unlike the deployments for supported platforms, such as Kubernetes and AWS, Deployment Templates have certain limitations: + +* No steady-state checks on deployed services. +* Harness does not track releases. +* The Deployment Template where you define your infrastructure can be created only in the account-wide Template Library (also called the Shared Template Library), not in an Application-wide Template Library. See [Use Templates](../concepts-cd/deployment-types/use-templates.md). +* Only Basic, Canary, and Multi-Service Deployment [Workflow types](https://docs.harness.io/article/m220i1tnia-workflow-configuration#workflow_types) are supported. + +### Step 1: Harness Delegate Setup + +Install a Delegate in your deployment environment, and verify that its host/pod can connect both to the server you plan to query for your target host information and to the target host.
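As a quick pre-flight check, you can confirm that a TCP connection from the Delegate host to your query server succeeds before building the template. The following is a minimal, hypothetical sketch in Python (it is not part of Harness, and the host and port in any call are placeholders for your own server); it only verifies TCP reachability, not authentication or the query itself:

```python
# Hypothetical connectivity pre-flight check to run on the Delegate host.
# Verifies only that a TCP connection to host:port can be opened.
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port opens within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run it against the server your Fetch Instances script will query; if it returns `False`, fix network access from the Delegate host before continuing.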
+ +See [Harness Delegate Overview](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation). + +### Step 2: Connectors Setup + +In a custom deployment using Deployment Templates, Harness Connectors are only used for the Artifact Server. See [Add Artifact Servers](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server). + +No Harness Cloud Providers are used unless you are using an artifact source from GCP or AWS. + +### Step 3: Create Custom Deployment Template + +The Deployment Template contains a script that will query a server at deployment runtime to obtain the target host information needed to deploy your artifact to the target host(s). + +#### Infrastructure Variables + +These are variables that you can use in the following places: + +* In the script in **Fetch Instances Command Script**. For example, you can create a variable for the URL that the script in **Fetch Instances Command Script** uses. +* When you define the Infrastructure Definition for your deployment. The variable values can be overwritten in the Infrastructure Definition, as we will show later. +1. Click **Add** to add the variables you need in the script in **Fetch Instances Command Script**. +2. Add the variables you will need to identify the target host(s) in your Harness Infrastructure Definition. +For example, if you will be targeting a cluster, add the variable `cluster`, and then you can provide a value for the variable in the Infrastructure Definition. + +![](./static/create-a-custom-deployment-01.png) + +If you want to make the URL that obtains the target host information a variable that can be configured in an Infrastructure Definition, be sure to include it in **Infrastructure Variables**. + +Often, you will add variables for username and password so that they can be provided in the Infrastructure Definition. + +Any variables set here can be referenced in your Workflow using the expression `${infra.custom.vars.varName}`.
For example: + + +``` +echo ${infra.custom.vars.url} +echo ${infra.custom.vars.cluster} +``` +#### Fetch Instances Command Script + +Enter the shell script to pull the JSON collection from your server. + +The script is expected to query the server and write a JSON array containing the target hosts to the file path stored in the environment variable `${INSTANCE_OUTPUT_PATH}`. + +This shell script will be executed at runtime by the Harness Delegate on its host. It should be a shell script you have already run on the Delegate host to confirm that the host can connect to your server. + +The script should return a JSON array containing the target host information Harness needs to deploy. + +Here is an example: + + +``` +apt-get -y install awscli +aws configure set aws_access_key_id ${secrets.getValue("access_key")} +aws configure set aws_secret_access_key ${secrets.getValue("password")} +aws configure set region us-west-1 +aws ec2 describe-instances --instance-ids i-0beacf0f260edd19f > "${INSTANCE_OUTPUT_PATH}" +``` +This example uses AWS. Harness already has full, first-class support for AWS deployments. We just use this script as an example. See the AWS Quickstarts in [Start Here](https://docs.harness.io/category/get-started). + +This example also uses Harness secrets for username and password. See [Use Encrypted Text Secrets](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets). + +Here's another example using Kubernetes and NGINX (Kubernetes also has [first-class support](https://docs.harness.io/article/7in9z2boh6-kubernetes-quickstart)): + + +``` +POD=$(kubectl get pod -l app=mynginx -o json) +echo ${POD} > "${INSTANCE_OUTPUT_PATH}" +``` +When you create your Harness Workflow later, you will add a **Fetch Instances** step that will run this script: + +![](./static/create-a-custom-deployment-02.png) + +#### Host Object Array Path + +Enter the JSON path to the array of target host objects.
For example, the following JSON object contains an Instances array with two items (the JSON is abbreviated): + + +``` +{ + "Instances": [ + { + "StackId": "71c7ca72-55ae-4b6a-8ee1-a8dcded3fa0f", + ... + "InfrastructureClass": "ec2", + "RootDeviceVolumeId": "vol-d08ec6c1", + "SubnetId": "subnet-b8de0ddd", + "InstanceType": "t1.micro", + "CreatedAt": "2015-02-24T20:52:49+00:00", + "AmiId": "ami-35501205", + "Hostname": "ip-192-0-2-0", + "Ec2InstanceId": "i-5cd23551", + "PublicDns": "ec2-192-0-2-0.us-west-2.compute.amazonaws.com", + "SecurityGroupIds": [ + "sg-c4d3f0a1" + ], + ... + }, + { + "StackId": "71c7ca72-55ae-4b6a-8ee1-a8dcded3fa0f", + ... + "InfrastructureClass": "ec2", + "RootDeviceVolumeId": "vol-e09dd5f1", + "SubnetId": "subnet-b8de0ddd", + "InstanceProfileArn": "arn:aws:iam::123456789102:instance-profile/aws-opsworks-ec2-role", + "InstanceType": "c3.large", + "CreatedAt": "2015-02-24T21:29:33+00:00", + "AmiId": "ami-9fc29baf", + "SshHostDsaKeyFingerprint": "fc:87:95:c3:f5:e1:3b:9f:d2:06:6e:62:9a:35:27:e8", + "Ec2InstanceId": "i-8d2dca80", + "PublicDns": "ec2-192-0-2-1.us-west-2.compute.amazonaws.com", + "SecurityGroupIds": [ + "sg-b022add5", + "sg-b122add4" + ], + ... + } + ] +} +``` +In this case, the host objects are the items in the `Instances` array, so we use `Instances` in **Host Object Array Path**. + +To ensure that you are referring to the correct array in your JSON, test your **Host Object Array Path** using your JSON collection and an online validator such as [JSON Editor Online](https://jsoneditoronline.org/). + +#### Payloads without High-Level Objects + +In some cases you might have a JSON payload without a high-level object. In these cases, you can use `$` in **Host Object Array Path**. 
+ +Let's look at an example: + + +``` +[ + { + "id": "aef-default-0000000000000-qnhh", + "instance": { + "id": "aef-default-0000000000000-qnhh", + "name": "apps/sales/services/default/versions/0000000000000/instances/aef-default-0000000000000-qnhh", + "startTime": "2021-01-07T21:05:54.658Z", + "vmIp": "192.168.0.0", + "vmLiveness": "HEALTHY", + "vmStatus": "RUNNING" + }, + "service": "default", + "version": "0000000000000" + }, + { + "id": "aef-default-0000000000000-0sbt", + "instance": { + "id": "aef-default-0000000000000-0sbt", + "name": "apps/sales/services/default/versions/0000000000000/instances/aef-default-0000000000000-0sbt", + "startTime": "2021-01-07T21:05:46.262Z", + "vmIp": "192.168.255.255", + "vmLiveness": "HEALTHY", + "vmStatus": "RUNNING" + }, + "service": "default", + "version": "0000000000000" + } +] +``` +In this example, the **Host Object Array Path** is `$` and the **hostname** field would use `instance.vmIp`. + +#### Host Attributes + +Now that you have provided a path to the host object, you can map any useful JSON keys in **Host Attributes**. + +**The `hostname` value in the Field Name setting is mandatory.** + +You must use `hostname` to identify the target host(s) in the JSON array. Map the keys containing information you want to reference in your Workflow, most likely in a Shell Script step. + +You can reference the host in your Workflow using the expression `${instance.hostName}`, but you reference Host Attributes using `${instance.host.properties.<attributeName>}`. + +For example, to reference the Host Attribute `hostname` below, you would use `${instance.host.properties.hostname}`. + +![](./static/create-a-custom-deployment-03.png) + +You can also use any of the default Harness expressions that are host-related. See [What is a Harness Variable Expression?](https://docs.harness.io/article/9dvxcegm90-variables). 
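To see how these settings fit together, here is a small illustration (plain Python, not Harness code) of how a **Host Object Array Path** of `$` combined with a **Host Attributes** mapping of `hostname` to `instance.vmIp` would resolve against a payload like the `$`-rooted example above. The payload is abbreviated and the `resolve` helper is hypothetical, sketching only the dotted-path lookup:

```python
# Illustration (not Harness internals) of resolving a Host Object Array
# Path of "$" plus a Host Attributes mapping against Fetch Instances JSON.
import json

# Abbreviated version of the "$"-rooted payload shown above.
payload = json.loads("""
[
  {"id": "aef-default-0000000000000-qnhh",
   "instance": {"vmIp": "192.168.0.0", "vmStatus": "RUNNING"}},
  {"id": "aef-default-0000000000000-0sbt",
   "instance": {"vmIp": "192.168.255.255", "vmStatus": "RUNNING"}}
]
""")

def resolve(obj, dotted_path):
    """Walk a dotted path such as 'instance.vmIp' into a nested dict."""
    for key in dotted_path.split("."):
        obj = obj[key]
    return obj

# A Host Object Array Path of "$" means the payload itself is the host array.
hosts = payload

# Host Attributes: the mandatory "hostname" field mapped to instance.vmIp.
attributes = {"hostname": "instance.vmIp"}

mapped = [{name: resolve(host, path) for name, path in attributes.items()}
          for host in hosts]
print(mapped)
```

Each entry in `mapped` corresponds to what you would later read per host as `${instance.host.properties.hostname}`.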
+
+#### Artifact Build Number and Different Artifact Versions
+
+Currently, this feature is behind the feature flag `CUSTOM_DEPLOYMENT_ARTIFACT_FROM_INSTANCE_JSON`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+If your deployment targets are a fixed set of instances but you don't update all of them on each new deployment, Harness might not show the artifact versions of older artifacts in the Service Dashboard.
+
+For example:
+
+1. Initially there are 10 instances with version V1.
+2. You deploy version V2 to 5 of the instances but still output all 10 instances.
+3. The **Current Deployment Status** in the Service Dashboard will show 10 instances with version V2.
+
+This result occurs because the **Fetch Instances** Workflow step doesn't consider the artifact information from your script by default. The step considers only the instance information.
+
+To display the artifact versions, include `artifactBuildNumber` in **Host Attributes** and reference the artifact build number in your script output.
+
+For example, here's a script output where `artifactBuildNo` includes the build numbers:
+
+
+```
+{
+  "hosts": [
+    {
+      "hostname": "host-1.harness.com",
+      "artifactBuildNo": "1.0"
+    },
+    {
+      "hostname": "host-2.harness.com",
+      "artifactBuildNo": "1.0"
+    },
+    {
+      "hostname": "host-3.harness.com",
+      "artifactBuildNo": "2.0"
+    }
+  ]
+}
+```
+In **Host Attributes**, you map `artifactBuildNumber` to `artifactBuildNo`:
+
+![](./static/create-a-custom-deployment-04.png)
+
+In the Services Dashboard, you will see the different artifact versions and the number of hosts where they were deployed:
+
+![](./static/create-a-custom-deployment-05.png)
+
+### Step 4: Create Harness Service
+
+Create your Harness Service as described in [Add Specs and Artifacts using a Harness Service](https://docs.harness.io/article/eb3kfl8uls-service-configuration).
+
+In **Deployment Type**, select your Deployment Template.
+
+![](./static/create-a-custom-deployment-06.png)
+
+In the new Service, add your Artifact Source just as you would with any other Harness Service deployment type. All supported artifact sources are available, as is the custom artifact source.
+
+You must reference the artifact source somewhere in your Harness entities, such as a Shell Script Workflow step. If you do not reference the artifact source, Harness does not prompt you to select an artifact version when you deploy your Workflow. See [Option: Reference Artifact Sources](#option_reference_artifact_sources).
+
+See:
+
+* [Service Types and Artifact Sources](https://docs.harness.io/article/qluiky79j8-service-types-and-artifact-sources)
+* [Add a Docker Artifact Source](https://docs.harness.io/article/gxv9gj6khz-add-a-docker-image-service)
+* [Using Custom Artifact Sources](https://docs.harness.io/article/jizsp5tsms-custom-artifact-source)
+
+There are no specs for the custom deployment service. You can add Configuration variables (environment variables) and files that can be referenced and used in your Workflow, and [overwritten by Harness Environments](https://docs.harness.io/article/4m2kst307m-override-service-files-and-variables-in-environments).
+
+See:
+
+* [Add Service Config Variables](https://docs.harness.io/article/q78p7rpx9u-add-service-level-config-variables)
+* [Add Service Config Files](https://docs.harness.io/article/iwtoq9lrky-add-service-level-configuration-files)
+
+### Step 5: Create Target Infrastructure Definition
+
+Next, you create an Infrastructure Definition that uses the Deployment Template's Infrastructure Variables settings to define the target hosts/container.
+
+1. In the Infrastructure Definition settings, in **Cloud Provider Type**, select **Template**.
+2. In **Deployment Type**, select the Deployment Template you created.
+3. In **Select Version**, select the version of the template you want to use. Harness templates can have multiple versions.
See [Use Templates](../concepts-cd/deployment-types/use-templates.md).
+
+Here is an example targeting a cluster:
+
+![](./static/create-a-custom-deployment-07.png)
+
+In the Infrastructure Definition, you can edit the variable values from the Deployment Template. You can use Harness variable expressions and secrets. See [What is a Harness Variable Expression?](https://docs.harness.io/article/9dvxcegm90-variables) and [Managing Harness Secrets](https://docs.harness.io/article/8bldcebkkf-managing-harness-secrets).
+
+**If you do not change the defaults** and the variables are changed in the Deployment Template, or new ones are added, the variables in the Infrastructure Definition are automatically updated with the new defaults from the Deployment Template.
+
+If you do change the defaults, changes made to the Deployment Template's default variables do not impact the Infrastructure Definition.
+
+In the Infrastructure Definition **Scope to Specific Services** setting, you can select the Service you created using the Deployment Template, but this is not mandatory.
+
+Now that the Infrastructure Definition is completed, you can use it in a Workflow.
+
+You can also override any Deployment Template variable values in the Environment overrides settings. See [Override Variables at the Infrastructure Definition Level](../kubernetes-deployments/override-variables-per-infrastructure-definition.md).
+
+### Step 6: Create the Workflow
+
+Once you have created the Harness Service and Infrastructure Definition using the Deployment Template, you can create a Workflow to execute the **Fetch Instances Command Script** in the template.
+
+In the Workflow, you add a **Fetch Instances** step where you want the script in **Fetch Instances Command Script** to execute. You can also reference any variable from the Deployment Template's **Infrastructure Variables** section, such as in a **Shell Script** step.
+
+![](./static/create-a-custom-deployment-08.png)
+
+Deployment Templates are supported in the following Workflow Deployment Types:
+
+* Basic
+* Canary
+* Multi-Service
+
+To create the Workflow, do the following:
+
+1. In **Workflows**, click **Add Workflow**. The Workflow settings appear.
+2. Name the Workflow.
+3. Select one of the supported Workflow Types.
+4. Select the Environment that contains the Infrastructure Definition using your Deployment Template.
+5. Select the Service using your Deployment Template.
+6. Select the Infrastructure Definition using your Deployment Template.
+7. Click **Submit**.
+
+The Workflow is created. The Workflow does not have any default steps added like the platform-specific Workflows.
+
+The Workflow is fully customizable. You can add sections, phases, Rollback Steps, etc., as needed. See [Workflows](https://docs.harness.io/article/m220i1tnia-workflow-configuration).
+
+As this is a custom deployment Workflow, the number of available steps is limited:
+
+![](./static/create-a-custom-deployment-09.png)
+
+The only required step for custom deployment Workflows is **Fetch Instances**.
+
+### Option: Reference Artifact Sources
+
+If you added an artifact source to the Harness Service used by this Workflow, you must reference the artifact source somewhere in your Harness entities, such as a Shell Script Workflow step.
+
+If you do not reference the artifact source, Harness does not prompt you for an artifact version when you deploy the Workflow.
+
+For example, let's say you added an artifact source for a WAR file in the Service's Artifact Source:
+
+![](./static/create-a-custom-deployment-10.png)
+
+In your Workflow, add a [Shell Script step](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) that echoes the `${artifact.buildNo}` or other [artifact built-in variables](https://docs.harness.io/article/aza65y4af6-built-in-variables-list#artifact).
+
+Now when you deploy this Workflow, you will be prompted to select an artifact version.
+
+If you did not reference the artifact in the Workflow, you would not be prompted.
+
+See [Run Shell Scripts in Workflows](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output).
+
+### Step 7: Fetch Instances
+
+The **Fetch Instances** step runs the script in your Deployment Template's **Fetch Instances Command Script** setting:
+
+![](./static/create-a-custom-deployment-11.png)
+
+1. Add Fetch Instances to any point in the Workflow where you want to run your script.
+2. In **Delegate Selector**, select the Delegate you want to use to run this step. See [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors).
+
+### Option: Deployment Template Variable Expressions
+
+Any variables set in **Infrastructure Variables** can be referenced in your Workflow using the expression `${infra.custom.vars.varName}`.
+
+You can reference the host in your Workflow using the expression `${instance.hostName}`.
+
+You can also use any of the default Harness expressions that are host-related. See [What is a Harness Variable Expression?](https://docs.harness.io/article/9dvxcegm90-variables).
+ +Here is an example using a Shell Script step: + +![](./static/create-a-custom-deployment-12.png) + +### See Also + +* [Using Custom Artifact Sources](https://docs.harness.io/article/jizsp5tsms-custom-artifact-source) +* [Add and Use a Custom Secrets Manager](https://docs.harness.io/article/ejaddm3ddb-add-and-use-a-custom-secrets-manager) +* [Custom Shell Script Approvals](https://docs.harness.io/article/lf79ixw2ge-shell-script-ticketing-system) +* [Shell Script Provisioner](https://docs.harness.io/article/1m3p7phdqo-shell-script-provisioner) + +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. + diff --git a/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-00.png b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-00.png new file mode 100644 index 00000000000..11ee20cbc6c Binary files /dev/null and b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-00.png differ diff --git a/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-01.png b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-01.png new file mode 100644 index 00000000000..83ebc1cd004 Binary files /dev/null and b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-01.png differ diff --git a/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-02.png b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-02.png new file mode 100644 index 00000000000..7f7ddc872c8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-02.png differ diff --git a/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-03.png 
b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-03.png new file mode 100644 index 00000000000..d4adaf55b6f Binary files /dev/null and b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-03.png differ diff --git a/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-04.png b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-04.png new file mode 100644 index 00000000000..dbd12b55270 Binary files /dev/null and b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-04.png differ diff --git a/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-05.png b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-05.png new file mode 100644 index 00000000000..793bdafc2b0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-05.png differ diff --git a/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-06.png b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-06.png new file mode 100644 index 00000000000..92588fcb3ff Binary files /dev/null and b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-06.png differ diff --git a/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-07.png b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-07.png new file mode 100644 index 00000000000..f2faf1268cf Binary files /dev/null and b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-07.png differ diff --git a/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-08.png 
b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-08.png new file mode 100644 index 00000000000..6f711ed3656 Binary files /dev/null and b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-08.png differ diff --git a/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-09.png b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-09.png new file mode 100644 index 00000000000..fb477064088 Binary files /dev/null and b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-09.png differ diff --git a/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-10.png b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-10.png new file mode 100644 index 00000000000..5f06fafafee Binary files /dev/null and b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-10.png differ diff --git a/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-11.png b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-11.png new file mode 100644 index 00000000000..7f7ddc872c8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-11.png differ diff --git a/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-12.png b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-12.png new file mode 100644 index 00000000000..4e9f6a17e29 Binary files /dev/null and b/docs/first-gen/continuous-delivery/custom-deployments/static/create-a-custom-deployment-12.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/1-delegate-and-connectors-for-iis.md 
b/docs/first-gen/continuous-delivery/dotnet-deployments/1-delegate-and-connectors-for-iis.md new file mode 100644 index 00000000000..195b3ffe74e --- /dev/null +++ b/docs/first-gen/continuous-delivery/dotnet-deployments/1-delegate-and-connectors-for-iis.md @@ -0,0 +1,233 @@ +--- +title: 1 - Delegate and Connectors for IIS +description: Set up the Delegate, WinRM Connection, Artifact Server, and Cloud Provider. +sidebar_position: 20 +helpdocs_topic_id: 1n0t9vo7e4 +helpdocs_category_id: 3lkbch7kgn +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic sets up the Harness Delegate, WinRM Connection, Artifact Server, and Cloud Provider for your IIS Deployment. You can use these connections globally for all the IIS services and environments you add in Harness, or restrict them to specific applications and environments. + + +### Harness Delegate Connections for Azure + +A Harness Delegate needs to connect to the Artifact Servers, Cloud Providers, and hosts you configure with Harness. The Delegate is only supported for Linux and cannot be run on a Windows VM in Azure. + +To ensure that the Harness Delegate you use for Azure deployments can connect to your Azure resources, you can run the Delegate on a Linux VM in your Azure VPC (such as Ubuntu) or simply ensure that the Delegate has network access to resources in your Azure VPC. + +For steps on setting up the Harness Delegate, see [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation). + +### Set Up WinRM on Instances and Network + +WinRM is a management protocol used by Windows to remotely communicate with another server, in our case, the Harness delegate. WinRM communicates over HTTP (5985)/HTTPS (5986), and is included in all recent Windows operating systems. 
+
+For WinRM, you need the following networking configured:
+
+* The VMs must listen on HTTP (5985)/HTTPS (5986)
+* Open the VM ports for HTTP (5985)/HTTPS (5986)
+* Open the subnet ports for HTTP (5985)/HTTPS (5986)
+
+In cases where WinRM is not already set up on your Windows instances, you can set up WinRM for HTTPS using the following command:
+
+
+```
+Invoke-Expression ((New-Object System.Net.Webclient).DownloadString('https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1'))
+```
+We recommend using this command as it configures the necessary port and firewall settings for the Windows instance.
+
+**HTTP only** — To set up WinRM for HTTP (not HTTPS), from the command line (not PowerShell), run the following (the default HTTP port is 5985):
+
+
+```
+winrm quickconfig
+winrm set winrm/config/service @{AllowUnencrypted="true"}
+netsh advfirewall firewall add rule name="WinRM-HTTP" dir=in localport=5985 protocol=TCP action=allow
+```
+#### Set Up WinRM in Azure
+
+Here is an example of the PowerShell setup of WinRM on an Azure Windows Server VM.
+
+
+```
+C:\Users\harness>PowerShell.exe
+Windows PowerShell
+Copyright (C) 2016 Microsoft Corporation. All rights reserved.
+
+PS C:\Users\harness> Invoke-Expression ((New-Object System.Net.Webclient).DownloadString('https://raw.githubusercontent.
+com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1'))
+Self-signed SSL certificate generated; thumbprint: 4B4AAFE402B3B96EAC3C26FE0DE7332E9010B1C7
+
+
+wxf : http://schemas.xmlsoap.org/ws/2004/09/transfer
+a : http://schemas.xmlsoap.org/ws/2004/08/addressing
+w : http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd
+lang : en-US
+Address : http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous
+ReferenceParameters : ReferenceParameters
+
+Ok.
+ +PS C:\Users\harness> exit + +C:\Users\harness>winrm e winrm/config/listener +Listener + Address = * + Transport = HTTP + Port = 5985 + Hostname + Enabled = true + URLPrefix = wsman + CertificateThumbprint + ListeningOn = 10.0.1.4, 127.0.0.1, ::1, 2001:0:9d38:90d7:1868:10ad:f5ff:fefb, fe80::5efe:10.0.1.4%9, fe80::1868:10ad:f5ff:fefb%10, fe80::915e:d4bb:bf06:9807%5 + +Listener + Address = * + Transport = HTTPS + Port = 5986 + Hostname = doc + Enabled = true + URLPrefix = wsman + CertificateThumbprint = 4B4AAFE402B3B96EAC3C26FE0DE7332E9010B1C7 + ListeningOn = 10.0.1.4, 127.0.0.1, ::1, 2001:0:9d38:90d7:1868:10ad:f5ff:fefb, fe80::5efe:10.0.1.4%9, fe80::1868:10ad:f5ff:fefb%10, fe80::915e:d4bb:bf06:9807%5 +``` +You can also see WinRM running in Server Manager: + +![](./static/1-delegate-and-connectors-for-iis-19.png) + +You can also test if the WinRM service is running on a local or remote computer using the Test-WSMan PowerShell command. For more information, see [Test-WSMan](https://docs.microsoft.com/en-us/powershell/module/microsoft.wsman.management/test-wsman?view=powershell-6) from Microsoft. + +Ensure that the ports you need for the WinRM connection are open on your network security group and VM. 
For more information, see [How to open ports to a virtual machine with the Azure portal](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/nsg-quickstart-portal) to open the WinRM Inbound security rule.
+
+**Network Security Group:**
+
+![](./static/1-delegate-and-connectors-for-iis-20.png)
+
+**VM Inbound Port Rules:**
+
+![](./static/1-delegate-and-connectors-for-iis-21.png)
+
+For more information about Azure and WinRM, see the following:
+
+* [Setting up WinRM access for Virtual Machines in Azure Resource Manager](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/winrm)
+* [Deploy a Windows VM and configure a WinRM HTTPS listener](https://azure.microsoft.com/en-au/resources/templates/vm-winrm-windows/) (Azure Template)
+* [WinRM on a Windows VM](https://azure.microsoft.com/en-us/resources/templates/vm-winrm-keyvault-windows/) (Azure Template)
+* [Create a Key Vault](https://azure.microsoft.com/en-us/resources/templates/key-vault-create/) (Azure Template)
+* [Quickstart: Set and retrieve a secret from Azure Key Vault using the Azure portal](https://docs.microsoft.com/en-us/azure/key-vault/quick-create-portal)
+* [Manage Key Vault in Azure Stack by using the portal](https://docs.microsoft.com/en-us/azure/azure-stack/user/azure-stack-kv-manage-portal)
+* [How to connect and log on to an Azure virtual machine running Windows](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/connect-logon)
+* [Understanding and troubleshooting WinRM connection and authentication](http://www.hurryupandwait.io/blog/understanding-and-troubleshooting-winrm-connection-and-authentication-a-thrill-seekers-guide-to-adventure)
+
+#### Set Up WinRM in AWS
+
+In AWS EC2, you can enter the command as User data when creating the instance:
+
+
+```
+
+Invoke-Expression ((New-Object System.Net.Webclient).DownloadString('https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1'))
+
+```
+![](./static/1-delegate-and-connectors-for-iis-22.png)
+
+If you launch more than one instance at a time, the user data is available to all the instances in that reservation.
+
+You can also remote into the EC2 instance, open a PowerShell session, and run the `Invoke-Expression`.
+
+To verify that WinRM is running on your Windows instance, run the command:
+
+
+```
+winrm e winrm/config/listener
+```
+The successful output will be something like this:
+
+
+```
+C:\Windows\system32>winrm e winrm/config/listener
+Listener
+ Address = *
+ Transport = HTTP
+ Port = 5985
+ Hostname
+ Enabled = true
+ URLPrefix = wsman
+ CertificateThumbprint
+ ListeningOn = 127.0.0.1, ....
+
+Listener
+ Address = *
+ Transport = HTTPS
+ Port = 5986
+ Hostname = EC2AMAZ-Q0MO0AP
+ Enabled = true
+ URLPrefix = wsman
+ CertificateThumbprint = 1A1A1A1A1A1A1A1A1A1A1
+ ListeningOn = 127.0.0.1, ...
+```
+Here's an example:
+
+![](./static/1-delegate-and-connectors-for-iis-23.png)
+
+For more information about EC2 and WinRM, see the following:
+
+* [WinRM (Windows Remote Management) Troubleshooting](https://blogs.technet.microsoft.com/jonjor/2009/01/09/winrm-windows-remote-management-troubleshooting/)
+* [Running Commands on Your Windows Instance at Launch](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-windows-user-data.html)
+* [How can I execute user data after the initial launch of my EC2 instance?](https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/)
+* [Connecting to Your Windows Instance](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/connecting_to_windows_instance.html)
+
+If the default methods for setting up WinRM in your Windows instances are not working, try the script listed here: [Configure a Windows host for remote management with Ansible](https://gist.github.com/ardeshir/e0eabf7fb7e6700b314204be686f9113).
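Before digging into WinRM configuration or credentials, it can also help to confirm plain TCP reachability of the WinRM ports from the machine where the Delegate runs. The following is a minimal sketch (not a Harness tool; the target address below is a placeholder you would replace with your instance's IP or DNS name):

```python
import socket

# Default WinRM ports: 5985 (HTTP) and 5986 (HTTPS).
WINRM_PORTS = (5985, 5986)

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    target = "192.0.2.10"  # placeholder; use your Windows instance's address
    for port in WINRM_PORTS:
        state = "reachable" if port_open(target, port, timeout=1.0) else "unreachable"
        print(f"{target}:{port} is {state}")
```

Note that a successful TCP connection only proves the port is open; WinRM authentication can still fail for other reasons, such as the issues covered in the troubleshooting links above.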
+
+#### Set Up the WinRM Connection in Harness
+
+Add a WinRM connection in Harness to execute deployment steps on the remote Windows servers.
+
+1. Mouseover **Continuous Security**, and then click **Secrets Management**. The Secrets Management page appears.
+2. Under **Executions Credentials**, click **WinRM Connection**. The **WinRM Connection Attributes** dialog appears.
+
+ ![](./static/1-delegate-and-connectors-for-iis-24.png)
+
+3. Fill out the **WinRM Connection Attributes** dialog and click **SUBMIT**. The **WinRM Connection Attributes** dialog has the following fields.
+
+
+
+| **Field** | **Description** |
+| --- | --- |
+| **Name** | Name to identify the connection. You will use this name to identify this connection when setting up the **Connection Attributes** in the Environment Infrastructure Definition. |
+| **Auth Scheme** | Specifies the mechanism used to authenticate the credentials used in this connection. Currently only NTLM is supported. |
+| **Domain** | The Active Directory domain name where the user account in the credentials is registered. This can be left blank when using a local user. |
+| **User Name** / **Password** | The user account credentials for this connection. The user must belong to the same Active Directory domain as the Windows instances that this connection uses. These are the same user account credentials you would use to log into the VM using a remote connection such as Microsoft Remote Desktop. In cases when **Domain** is blank (local user), you can put **./** before the user name. The **./** prefix is equivalent to `local_host_or_ip\user`. |
+| **Use SSL** | Enables an HTTPS connection instead of an HTTP connection. SSL is recommended. |
+| **Skip Cert Check** | When connected over HTTPS (**Use SSL** is enabled), the client does not validate the server certificate.
| |
| **WinRM Port** | Specifies the network port on the remote computer to use. To connect to a remote computer, the remote computer must be listening on the port that the connection uses. The default ports are **5985**, which is the WinRM port for HTTP, and **5986**, which is the WinRM port for HTTPS. To determine what ports WinRM is listening on, use the command `winrm e winrm/config/listener`. |
+
+If you experience errors getting the WinRM connection to work, you might need to restart the Windows Server.
+
+### Cloud Provider Setup
+
+Add a connection to the Cloud Provider where the IIS website, application, or virtual directory will be deployed.
+
+1. Click **Setup**, and then click **Cloud Providers**. The **Cloud Providers** page appears.
+2. Click **Add Cloud Provider**. The **Cloud Provider** dialog appears.![](./static/1-delegate-and-connectors-for-iis-25.png)
+3. In **Type**, select the type of cloud provider you want to add, such as **Amazon Web Services** or **Microsoft Azure**. The dialog settings will change for each cloud provider.
+4. Enter the cloud provider account information, and then click **SUBMIT**. For account details and Harness permission requirements for the different providers, see [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-infrastructure-providers) ([AWS](https://docs.harness.io/article/whwnovprrb-infrastructure-providers#amazon_web_services_aws_cloud), [Azure](https://docs.harness.io/article/whwnovprrb-infrastructure-providers#azure)).
+
+Once you have created Harness applications and environments, you can return to this dialog and add a **Usage Scope** that specifies which applications and environments may use this provider.
+
+For certain Cloud Providers, such as AWS, instead of using account information to set up the Cloud Provider, you can install a Harness Delegate in your VPC and use its credentials for your Cloud Provider.
After you install the Delegate, you add a Selector to the Delegate, and then simply select that Selector in the Cloud Provider dialog. For an example, see [Installation Example: Amazon Web Services and ECS](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#installation_example_amazon_web_services_and_ecs).
+
+### Artifact Server Setup
+
+Add a connection to the Artifact Server where Harness can pull the IIS website, application, or virtual directory artifact.
+
+If you are using the same Cloud Provider as the artifact server, you can skip this step. For example, if you added AWS EC2 as a Cloud Provider and you are using AWS S3 as an artifact server, you do not need to add AWS S3 as an artifact server. You can simply use the same AWS connection.
+
+1. Click **Setup**, and then click **Connectors**. The **Connectors** page appears.
+2. Click **Artifact Server**, and then click **Add Artifact Server**. The artifact server dialog appears.![](./static/1-delegate-and-connectors-for-iis-26.png)
+3. In **Type**, click the artifact source you want to use. The dialog settings will change for each server.
+4. Enter the artifact server information and click **SUBMIT**. For account details and Harness permission requirements for the different servers, such as SMB and SFTP, see [Add Artifact Servers](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server).
+
+Once you have created Harness applications and environments, you can return to this dialog and add a **Usage Scope** that specifies which applications and environments may use this server.
+
+Now that you have connected the IIS artifact server and a cloud provider, and configured a WinRM connection to execute deployment steps on the remote Windows servers, you can add your Harness application.
+
+Azure Storage is not currently supported as an Artifact Source. Azure Container Registry is supported for Docker and Kubernetes deployments.
+ +### Next Step + +* [2 - Services for IIS (.NET)](2-services-for-iis-net.md) + diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/2-services-for-iis-net.md b/docs/first-gen/continuous-delivery/dotnet-deployments/2-services-for-iis-net.md new file mode 100644 index 00000000000..06e6a3011de --- /dev/null +++ b/docs/first-gen/continuous-delivery/dotnet-deployments/2-services-for-iis-net.md @@ -0,0 +1,186 @@ +--- +title: 2 - Services for IIS +description: Create Harness Services for the IIS website, application, and virtual directory. +sidebar_position: 30 +helpdocs_topic_id: mm84gjllge +helpdocs_category_id: 3lkbch7kgn +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The procedures in this guide pull IIS website, application, and virtual directory metadata from AWS S3 and deploy them to a Windows instance in AWS EC2 or a Microsoft Azure VM. The first step is creating Harness Services for the IIS website, application, and virtual directory. + +This topic describes how to create the IIS website, application, and virtual directory Harness Services. + +### Add Harness Application + +[A Harness Application](https://docs.harness.io/article/bucothemly-application-configuration) is a logical grouping of the Services, Environments, and Workflows for your IIS website deployment. First, we will create the Harness Application, and then each of the IIS Services. + +To add the Harness Application, do the following: + +1. Click **Setup**, and then click **Add Application**. The **Application** dialog appears. +2. Enter the name for your Application, such as **IIS-Example**, and click **SUBMIT**. Your new Application appears. +3. Click your Application’s name. The application entities appear. These are the tools you will use to define and execute your deployment. + +### Add IIS Website Service + +Services include artifact sources, deployment specifications, and configuration variables. 
For more information, see [Add a Service](https://docs.harness.io/article/eb3kfl8uls-service-configuration).
+
+In this procedure you will define which artifact(s) to use for your IIS Website. Harness will then create a Deployment Specification using PowerShell.
+
+We are using IIS artifact types in this topic, but Harness WinRM Services support IIS and Docker artifact types.
+
+To add a service for your IIS Website, do the following:
+
+1. In your Application, click **Services**, and then click **Add Service**. The **Service** dialog appears.
+2. Give the service a name, such as **IIS-website**.
+3. In **Deployment Type**, select **Windows Remote Management (WinRM)**.
+4. In **Artifact Type**, select **IIS Website**.
+There are several types supported, including **Other**, which you can use for a Windows-native application:![](./static/2-services-for-iis-net-31.png)
+5. Click **SUBMIT**. The service is displayed. In **Service Overview** you can see the name and type of your service.
+6. To add the artifact source for your IIS website, click **Add Artifact Source**. A list of artifact source types appears.
+7. Click the artifact source type. The artifact sources consist of the cloud providers and build servers you added earlier. For this guide, we will use **Amazon S3**. Harness also supports other popular Windows protocols such as SMB and SFTP.
+8. For the Amazon S3 example, in **Cloud Provider**, select the S3 provider you added earlier. Harness connects to the provider automatically.
+9. In **Bucket**, select one of the buckets in S3. The list is automatically populated by Harness.
+10. In **Artifact Path**, select the file(s) for your IIS website. The list is automatically populated by Harness. For this guide, we will use a zip file.
+**Harness uses metadata only:** For WinRM connections, Harness does not use direct artifact copy when you deploy. Harness executes PowerShell scripts on the target host(s) to download the artifact.
+Metadata is used to download the artifact directly onto the Windows Host during deployment runtime. For this reason, ensure that the target host has connectivity to the Artifact Server before deploying. When you are finished, the Artifact Source will look something like this:![](./static/2-services-for-iis-net-32.png)
+11. Click **SUBMIT**. Harness builds the service, including the **Deployment Specification**.
+ ![](./static/2-services-for-iis-net-33.png)
+ The **Deployment Specification** section is automatically filled with the Install IIS Website template, which pulls the artifact, expands it, creates the Application Pool (AppPool), and creates the website. Later you will create the environment where the scripts will be executed.
+To add more scripts, mouse over anywhere in the script and click the plus icon:![](./static/2-services-for-iis-net-34.png)
+
+By default, the Install IIS Website Template is *linked* to the template in the Template Library and its scripts cannot be edited in the Service (although its variables may be edited).
+
+##### Install IIS Website Template
+
+When you create a Service of type IIS Website, a link to the Install IIS Website template is added to the Service. The template contains the following script steps:
+
+* **Download Artifacts** - As noted earlier, for WinRM connections, Harness does not use direct artifact copy when you set up the Service. Only metadata is supported and is used to download the artifact directly onto the target Windows host during deployment runtime. Ensure the target host has connectivity to your Artifact Server.
+* **Expand Artifacts** - A PowerShell script runs to expand the artifact.
+* **Create App Pool** - See [Application Pools](#application_pools) below.
+* **Create Website** - A PowerShell script runs to create the IIS Website.
+
+For more information on templates, see [Use Templates](../concepts-cd/deployment-types/use-templates.md). 
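+
+The Create App Pool and Create Website steps follow the same PowerShell pattern as the Create Virtual Directory script shown later in this topic. The following is a rough sketch of what these steps can look like, not the template's exact default script; the `${SitePort}` expression and the path layout are illustrative assumptions:
+
+```
+Import-Module WebAdministration
+
+# Illustrative values; the Harness expressions are resolved at deployment runtime.
+$siteName="${service.Name}"
+$releaseId="${workflow.ReleaseNo}"
+$sitePort="${SitePort}"
+$appPhysicalDirectory=$env:SYSTEMDRIVE + "\Artifacts\" + $siteName + "\release-" + $releaseId
+
+# Create the AppPool (the IIS: drive is provided by the WebAdministration module).
+Write-Host "Creating AppPool" $siteName ".."
+New-Item -Path ('IIS:\AppPools\' + $siteName) -Force
+
+# Create the website bound to the AppPool on the given port.
+Write-Host "Creating Website" $siteName ".."
+New-Website -Name $siteName -Port $sitePort -PhysicalPath $appPhysicalDirectory -ApplicationPool $siteName -Force
+
+Write-Host "Done."
+```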
+
+##### Template Variables
+
+To edit the variables used in the Install IIS Website Template in the Service, click **Variables**. The **Edit Command** dialog appears.
+
+![](./static/2-services-for-iis-net-35.png)
+
+Add values for the variables as needed. The variables used in the Install IIS Website Template in the Service can be modified without influencing the Install IIS Website Template in the Template Library.
+
+##### Application Pools
+
+Harness manages the deployment of the new IIS website, application, or virtual directory artifacts to your Windows instances using the default Application pool. If you want to specify the Application pool, use the **applicationPool** element, adding it to the **New-Item** script during application creation.
+
+For more information, see:
+
+* [New-Item](https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.management/new-item?view=powershell-6)
+* [applicationPool Element (Web Settings)](https://docs.microsoft.com/en-us/dotnet/framework/configure-apps/file-schema/web/applicationpool-element-web-settings)
+
+When you create the IIS services in Harness, you can modify the AppPool name used in the **Create AppPool** settings of the **Deployment Specification**.
+
+### Add IIS Application Service
+
+In this procedure you will define which artifact(s) to use for your IIS Application. Harness will then create a Deployment Specification using PowerShell.
+
+To add a service for your IIS Application, do the following:
+
+1. In your application, click **Services**, and then click **Add Service**. The **Service** dialog appears.
+2. Give the service a name, such as **IIS-application**.
+3. In **Deployment Type**, select **Windows Remote Management (WinRM)**.
+4. In **Artifact Type**, select **IIS Application**.
+5. Click **SUBMIT**. The service is displayed.
+ ![](./static/2-services-for-iis-net-36.png)
+ In **Service Overview** you can see the name and type of your service.
+6. 
To add the artifact source for your IIS application, click **Add Artifact Source**. A list of artifact source types appears.
+7. Click the artifact source type. The artifact sources consist of the cloud providers and build servers you added earlier. For this guide, we will use **Amazon S3**.
+8. For the Amazon S3 example, in **Cloud Provider**, select the S3 provider you added earlier. Harness connects to the provider automatically.
+9. In **Bucket**, select one of the S3 buckets. The list is automatically populated by Harness.
+10. In **Artifact Path**, select the file(s) for your IIS application. The list is automatically populated by Harness. For this guide, we will use a zip file.
+For WinRM connections, Harness does not use direct artifact copy when you set up the Service. Metadata is supported and is used to download the artifact directly onto the Windows Host during deployment runtime. For this reason, ensure that the target host has connectivity to the Artifact Server before deploying. When you are finished, the Artifact Source will look something like this:![](./static/2-services-for-iis-net-37.png)
+
+Click **SUBMIT** to add the artifact source. Harness builds the service, including the **Deployment Specification**.
+
+![](./static/2-services-for-iis-net-38.png)
+
+The **Deployment Specification** section is automatically filled with, and linked to, the Install IIS Application template.
+
+By default, the Install IIS Application template is linked to the template in the Template Library and its scripts cannot be edited in the Service (although its variables may be edited). To modify its scripts, you must copy the template instead of linking to it. To do this, in the **Deployment Specification**, click **Add Command**, select **From Template Library**, select the **Install IIS Application Template**, and choose **Copy** instead of **Link**. 
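+
+If you copy the template so you can edit its scripts, a customized step that creates an IIS application under an existing website might look like the following. This is a hedged sketch, not the template's default script; `New-WebApplication` is a standard WebAdministration cmdlet, and the site name and path layout are assumptions you should adapt:
+
+```
+Import-Module WebAdministration
+
+# Illustrative values; Harness expressions are resolved at deployment runtime.
+$siteName="Default Web Site"
+$releaseId="${workflow.ReleaseNo}"
+$appName="${service.Name}"
+$appPhysicalDirectory=$env:SYSTEMDRIVE + "\Artifacts\" + $appName + "\release-" + $releaseId
+
+# Create the application under the existing website.
+Write-Host "Creating Application" $appName ".."
+New-WebApplication -Site $siteName -Name $appName -PhysicalPath $appPhysicalDirectory -Force
+
+Write-Host "Done."
+```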
+
+##### Install IIS Application Template
+
+When you create a Service of type IIS Application, a link to the Install IIS Application template is added to the Service. The template contains the following script steps:
+
+* **Download Artifact** - As noted earlier, for WinRM connections, Harness does not use direct artifact copy when you set up the Service. Metadata is supported and is used to download the artifact directly onto the Windows Host during deployment runtime. Ensure the target host has connectivity to your Artifact Server.
+* **Expand Artifacts** - A PowerShell script runs to expand the artifact.
+* **Create Virtual Directory** - A PowerShell script runs to create the Virtual Directory.
+
+For more information on templates, see [Use Templates](../concepts-cd/deployment-types/use-templates.md).
+
+To edit the values for variables used in the template, click **Variables**. The variables used in the Install IIS Application template in the Service can be modified without influencing the Install IIS Application Template in the Template Library.
+
+### Add IIS Virtual Directory Service
+
+In this procedure you will define which artifact(s) to use for your IIS virtual directory. Harness will then create a **Deployment Specification** using PowerShell.
+
+To add a service for your IIS virtual directory, do the following:
+
+1. In your application, click **Services**, and then click **Add Service**. The **Service** dialog appears.
+2. Give the service a name, such as **IIS-virtual-directory**.
+3. In **Deployment Type**, select **Windows Remote Management (WinRM)**.
+4. In **Artifact Type**, select **IIS Virtual Directory**.
+5. Click **SUBMIT**. The service is displayed. In **Service Overview** you can see the name and type of your service.
+6. To add the artifact source for your IIS virtual directory, click **Add Artifact Source**. A list of artifact source types appears.
+7. Click the artifact source type. 
The artifact sources consist of the cloud providers and build servers you added earlier. For this guide, we will use **Amazon S3**.
+8. For the Amazon S3 example, in **Cloud Provider**, select the S3 provider you added earlier. Harness connects to the provider automatically.
+9. In **Bucket**, select one of the S3 buckets. The list is automatically populated by Harness.
+10. In **Artifact Path**, select the file(s) for your IIS virtual directory. The list is automatically populated by Harness. For this guide, we will use a zip file.
+For WinRM connections, Harness does not use direct artifact copy when you set up the Service. Metadata is supported and is used to download the artifact directly onto the Windows Host during deployment runtime. For this reason, ensure that the target host has connectivity to the Artifact Server before deploying. When you are finished, the Artifact Source will look something like this:![](./static/2-services-for-iis-net-39.png)
+
+Click **SUBMIT** to add the artifact source. Harness builds the service, including the **Deployment Specification**.
+
+![](./static/2-services-for-iis-net-40.png)
+
+The Install IIS Application template is also used for the IIS Virtual Directory service type. By default, as with the IIS Application Service, the Install IIS Application template is linked to the template in the Template Library and its scripts cannot be edited in the Service (although its variables may be edited). To modify its scripts, you must copy the template instead of linking to it. To do this, in the **Deployment Specification**, click **Add Command**, select **From Template Library**, select the **Install IIS Application Template**, and choose **Copy** instead of **Link**.
+
+You can modify the variables of the Install IIS Application template scripts by clicking **Variables**. 
The variables used in the Install IIS Application template in the Service can be modified without influencing the Install IIS Application Template in the Template Library.
+
+Click **Create Virtual Directory**, and the **Create Virtual Directory** dialog appears. The dialog contains the following default PowerShell script that will be run during the shell session of your Harness workflow:
+
+
+```
+Import-Module WebAdministration
+
+$siteName="Default Web Site"
+$releaseId="${workflow.ReleaseNo}"
+$virtualDirectoryName="${service.Name}"
+$appPhysicalDirectory=$env:SYSTEMDRIVE + "\Artifacts\" + $virtualDirectoryName + "\release-" + $releaseId
+
+Write-Host "Creating Virtual Directory" $virtualDirectoryName ".."
+$VirtualDirPath = 'IIS:\Sites\' + $siteName + '\' + $virtualDirectoryName
+New-Item -Path $VirtualDirPath -Type VirtualDirectory -PhysicalPath $appPhysicalDirectory -Force
+
+Write-Host "Done."
+```
+Note the following:
+
+* The `$releaseId` variable has the value `${workflow.ReleaseNo}`. This is one of the built-in Harness variables. For more information, see [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables).
+* The `$VirtualDirPath` variable shows you where your IIS directory will be deployed.
+
+When you deploy this service as part of your workflow, you will see these variables used in the Harness deployment dashboard:
+
+![](./static/2-services-for-iis-net-41.png)
+
+### WinRM and Copy Configs Command
+
+The Copy Configs command copies configuration files defined in the **Configuration** section of the Service.
+
+There is no file size limit on the config files that can be copied using the Copy Configs command. 
+
+![](./static/2-services-for-iis-net-42.png)
+
+### Next Step
+
+* [3 - IIS Environments in AWS and Azure](iis-environments.md)
+
diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/4-iis-workflows.md b/docs/first-gen/continuous-delivery/dotnet-deployments/4-iis-workflows.md
new file mode 100644
index 00000000000..5f0f7fed3e3
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/dotnet-deployments/4-iis-workflows.md
@@ -0,0 +1,139 @@
+---
+title: 4 - IIS Workflows and Pipelines
+description: Combine Workflows for your IIS website, application, and virtual directory in a Harness Pipeline.
+sidebar_position: 50
+helpdocs_topic_id: z6ls3tgkqc
+helpdocs_category_id: 3lkbch7kgn
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Once you have the Harness Services for IIS website, application, and virtual directory, and the Harness Environment for your target infrastructure, you create Harness Workflows to deploy the IIS website, application, and virtual directory Services.
+
+In this topic we walk through creating a Workflow for the IIS website Service, but the Workflows for IIS application and virtual directory Services follow the same steps:
+
+* [Create an IIS Website Workflow](4-iis-workflows.md#create-an-iis-website-workflow)
+* [Deploy IIS Website](4-iis-workflows.md#deploy-iis-website)
+	+ [Confirm Deployment in your Windows Instance](4-iis-workflows.md#confirm-deployment-in-your-windows-instance)
+* [Create an IIS Application and Virtual Directory Workflows](4-iis-workflows.md#create-an-iis-application-and-virtual-directory-workflows)
+* [Deploy IIS Pipeline](4-iis-workflows.md#deploy-iis-pipeline)
+* [Next Step](4-iis-workflows.md#next-step)
+
+Before deploying the IIS website, application, or virtual directory to your Windows instances, there must be an existing [IIS Web Server Role](https://docs.microsoft.com/en-us/iis/web-hosting/web-server-for-shared-hosting/installing-the-web-server-role) on the instance. 
This ensures that the environment is ready for deployment. Harness IIS Website deployment requires the IIS Web Server Role. The Harness IIS Application and IIS Virtual Directory deployments require that an IIS Website exists. For more information, see [Installing IIS from the Command Line](5-best-practices-and-troubleshooting.md#installing-iis-from-the-command-line) below. + +### Create an IIS Website Workflow + +Workflows are the deployment steps for services and environments, including types such as Canary and Blue/Green. Workflows also include verification, rollback, and notification steps. + +1. In your application, click **Workflows**. +2. Click **Add Workflow**. The **Workflow** dialog appears. + +![](./static/4-iis-workflows-00.png) + +The dialog has the following fields. + + + +| | | +| --- | --- | +| **Field** | **Description** | +| **Name** | Give your Workflow a name and description that tells users what it is deploying. | +| **Workflow Type** | In this guide, we will do a simple Basic workflow. For a summary of workflow types, see [Add a Workflow](https://docs.harness.io/article/m220i1tnia-workflow-configuration#workflow_types). | +| **Environment** | Select the environment you added. | +| **Service** | Select your IIS Website service. | +| **Infrastructure Definition** | Select the **Infrastructure Definition** you added. | + +When you are finished, click **SUBMIT**. The workflow is generated: + +![](./static/4-iis-workflows-01.png) + +1. Under **Prepare Infra**, click **Select Nodes**. The **Node Select** dialog appears. + ![](./static/4-iis-workflows-02.png) + +If you deploy in multiple phases, you can control what hosts to use for each phase. If you are simply doing one phase, you do not need to select hosts. +The **Node Select** dialog has the following fields. 

+| | |
+| --- | --- |
+| **Field** | **Description** |
+| **Select Specific hosts?** | Select **Yes** to enter the hostname(s) of the nodes where you want the website deployed. Select **No** to have Harness select the host(s) in the Infrastructure Definition based on the setup in the **Environment**. For AWS deployments, Tags are often used to help Harness select hosts. |
+| **Desired Instances** | Enter the number of instances you want deployed. You can also enter a variable expression in this setting, such as a Workflow variable. This turns the setting into a deployment parameter. When the Workflow is deployed (manually or by [Trigger](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2)) you can provide a value for the parameter. See [Set Workflow Variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) and [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables). |
+| **Instance Unit Type** | Identify if the number in **Instances** is a count or percentage. For example, if you select **10** in **Instances**, you can select **Count** and the artifact is deployed to **10** instances. Or you can enter 100 in **Instances** and select **Percent** and the artifact is deployed to **100%** of the instances in the Infrastructure Definition. |
+
+When you're finished, click **SUBMIT**.
+
+2. Under **Deploy Service**, click **Install**. The **Install** dialog appears.![](./static/4-iis-workflows-03.png)
+ You can set how long the installation may take before it's timed out. The default is 600000 ms, or 10 minutes. When you are finished, click **SUBMIT**.
+
+Now that your workflow is complete, you are ready to deploy. 

+### Deploy IIS Website
+
+Before deploying the IIS website, application, or virtual directory to your Windows instances, there must be an existing [IIS Web Server Role](https://docs.microsoft.com/en-us/iis/web-hosting/web-server-for-shared-hosting/installing-the-web-server-role) on the instance. This ensures that the environment is ready for deployment. Harness IIS Website deployment requires the IIS Web Server Role. The Harness IIS Application and IIS Virtual Directory deployments require that an IIS Website exists. For more information, see [Installing IIS from the Command Line](5-best-practices-and-troubleshooting.md#installing-iis-from-the-command-line) below.
+
+Now you can deploy your workflow, observe the deployment steps in real-time, and confirm in your VPC.
+
+1. In your workflow, click **Deploy**.![](./static/4-iis-workflows-04.png)
+ The **Start New Deployment** dialog appears.
+ ![](./static/4-iis-workflows-05.png)
+ Here you simply select the artifact build and version to be deployed.
+2. In **Artifacts**, click the dropdown menu and select the artifact build and version to deploy. The list is generated automatically by Harness from the artifact source you specified when you set up your service.
+
+You can also elect to skip any instances that already have the artifact build and version.
+3. Click **SUBMIT** to deploy. Harness shows the deployment in real-time:
+
+![](./static/4-iis-workflows-06.png)
+
+Each workflow step is displayed. Click through each deployment step to see the logs and details.
+
+#### Confirm Deployment in your Windows Instance
+
+Now that the deployment was successful, confirm the website was added to the Windows instance(s):
+
+1. In your workflow Deployment page, click the **Install** step.
+
+ ![](./static/4-iis-workflows-07.png)
+
+2. Expand the **Create Website** section.
+
+ ![](./static/4-iis-workflows-08.png)
+
+3. 
In the log, note the location where the website was created:
+
+
+```
+INFO 2018-08-27 15:34:43 IIS-website 2 Started C:\Artifacts\IIS-website\relea http :8080:*
+```
+4. Connect to your Windows instance via Microsoft Remote Desktop or other console.
+5. On the Windows instance, navigate to the location Harness reported to confirm the website was created:![](./static/4-iis-workflows-09.png)
+
+### Create an IIS Application and Virtual Directory Workflows
+
+The steps for creating Workflows for IIS Applications and Virtual Directories are the same as the steps for creating a Workflow for an IIS Website, described above.
+
+1. Use the Harness Services you created for the IIS Application and Virtual Directory.
+2. Use the same Harness Environment and Infrastructure Definition you used to create the IIS Website Workflow.
+3. Deploy the IIS Website Workflow first, then the IIS Application Workflow, and lastly the IIS Virtual Directory Workflow. This sequence is best performed using a Harness Pipeline, described below.
+
+Before deploying the IIS website, application, or virtual directory to your Windows instances, there must be an existing [IIS Web Server Role](https://docs.microsoft.com/en-us/iis/web-hosting/web-server-for-shared-hosting/installing-the-web-server-role) on the instance. This ensures that the environment is ready for deployment. Harness IIS Website deployment requires the IIS Web Server Role. The Harness IIS Application and IIS Virtual Directory deployments require that an IIS Website exists. For more information, see [Installing IIS from the Command Line](#installing_iis_from_the_command_line) below.
+
+### Deploy IIS Pipeline
+
+Once you have workflows for your IIS website, application, and virtual directory set up, you can create a Harness pipeline that deploys them in the correct order. For IIS, you must deploy them in the order website, application, and then virtual directory.
+
+In your Harness application, click **Pipelines**. 
Follow the steps in [Add a Pipeline](https://docs.harness.io/article/zc1u96u6uj-pipeline-configuration) to add a stage for each of your workflows.
+
+When you are done, the pipeline will look like this:
+
+![](./static/4-iis-workflows-10.png)
+
+Click **Deploy**. The **Start New Deployment** dialog opens. Select each workflow artifact. When you are done, the dialog will look like this:
+
+![](./static/4-iis-workflows-11.png)
+
+Click **SUBMIT**. Here's what the pipeline looks like in the deployment dashboard. You can see each Stage was successful:
+
+![](./static/4-iis-workflows-12.png)
+
+### Next Step
+
+* [5 - Best Practices and Troubleshooting](5-best-practices-and-troubleshooting.md)
+
diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/5-best-practices-and-troubleshooting.md b/docs/first-gen/continuous-delivery/dotnet-deployments/5-best-practices-and-troubleshooting.md
new file mode 100644
index 00000000000..c6faa4bba3f
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/dotnet-deployments/5-best-practices-and-troubleshooting.md
@@ -0,0 +1,72 @@
+---
+title: 5 - Best Practices and Troubleshooting
+description: Best Practices and troubleshooting steps for IIS deployments in Harness.
+sidebar_position: 60
+helpdocs_topic_id: l639i8uqxs
+helpdocs_category_id: 3lkbch7kgn
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+In this topic we will cover some of the Best Practices for IIS deployments using Harness, and some of the steps you can take to troubleshoot issues.
+
+### Best Practices
+
+#### Testing Scripts
+
+When modifying the default scripts, it is better to test your scripts on a Windows instance using [PowerShell\_ise](https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/powershell_ise) than to deploy your workflow multiple times.
+
+#### Using Tags for Instances
+
+When configuring the Infrastructure Definition in **Environment**, instance Tags make identifying your instances easier. 
Assign the same tag to all instances and simply use that tag.
+
+#### Installing IIS from the Command Line
+
+Before deploying the IIS website, application, or virtual directory to your Windows instances, there must be at least an existing IIS Web Server Role on the instance.
+
+The following script will prepare your Windows instance with the necessary IIS setup:
+
+
+```
+Start /w pkgmgr /iu:IIS-WebServerRole;IIS-WebServer;IIS-CommonHttpFeatures;IIS-StaticContent;IIS-DefaultDocument;IIS-DirectoryBrowsing;IIS-HttpErrors;IIS-ApplicationDevelopment;IIS-ASPNET;IIS-NetFxExtensibility;IIS-ISAPIExtensions;IIS-ISAPIFilter;IIS-HealthAndDiagnostics;IIS-HttpLogging;IIS-LoggingLibraries;IIS-RequestMonitor;IIS-Security;IIS-RequestFiltering;IIS-HttpCompressionStatic;IIS-WebServerManagementTools;IIS-ManagementConsole;WAS-WindowsActivationService;WAS-ProcessModel;WAS-NetFxEnvironment;WAS-ConfigurationAPI
+```
+You will see IIS installed in the Server Manager.
+
+![](./static/5-best-practices-and-troubleshooting-27.png)
+
+In the IIS listing, in ROLES and FEATURES, you can see the Web Server Role:
+
+![](./static/5-best-practices-and-troubleshooting-28.png)
+
+For more information, see [Installing IIS 7.0 from the Command Line](https://docs.microsoft.com/en-us/iis/install/installing-iis-7/installing-iis-from-the-command-line) from Microsoft.
+
+### Troubleshooting
+
+The following problems can occur when deploying your IIS website, application, or virtual directory.
+
+#### Error: No delegates could reach the resource
+
+You receive this error when deploying your workflow.
+
+![](./static/5-best-practices-and-troubleshooting-29.png)
+
+##### Solutions
+
+* Ensure your artifact can be deployed via WinRM onto a Windows instance. It's possible to select the wrong artifact in Service.
+* Ensure you have access to the deployment environment, such as VPC, subnet, etc.
+* Ensure your WinRM Connection can connect to your instances, and that your instances have the correct ports open. 
See [Set Up WinRM on Instances and Network](1-delegate-and-connectors-for-iis.md#set-up-win-rm-on-instances-and-network).
+
+#### Port Conflicts
+
+Do not target the same port as another website. In the Harness **Service**, in **Variables**, ensure **${SitePort}** points to a port that isn't in use. In the following example, the port was changed to **8080** to avoid the error:
+
+![](./static/5-best-practices-and-troubleshooting-30.png)
+
+You can use host header names to host multiple IIS sites on the same port. For more information, search the Web for `use same port and use host header names to host multiple IIS sites`. There are multiple examples.
+
+### Next Steps
+
+Now that you have a working deployment, you can use the Harness Continuous Verification machine learning to verify and improve your deployments. For more information, see [Continuous Verification](https://docs.harness.io/article/myw4h9u05l-verification-providers-list).
+
+Enhance your IIS deployment workflow(s) with the multiple options available, including Failure Strategy, Rollback Steps, Barriers, and templating. For more information, see [Add a Workflow](https://docs.harness.io/article/m220i1tnia-workflow-configuration). 
+
diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/_category_.json b/docs/first-gen/continuous-delivery/dotnet-deployments/_category_.json
new file mode 100644
index 00000000000..04813fedfee
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/dotnet-deployments/_category_.json
@@ -0,0 +1 @@
+{"label": "IIS (.NET) Deployments", "position": 70, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "IIS (.NET) Deployments"}, "customProps": { "helpdocs_category_id": "3lkbch7kgn"}}
\ No newline at end of file
diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/iis-environments.md b/docs/first-gen/continuous-delivery/dotnet-deployments/iis-environments.md
new file mode 100644
index 00000000000..f8243ef2e05
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/dotnet-deployments/iis-environments.md
@@ -0,0 +1,107 @@
+---
+title: 3 - IIS Environments in AWS and Azure
+description: Define AWS and Azure infrastructures for IIS deployments.
+sidebar_position: 40
+helpdocs_topic_id: itseg37bji
+helpdocs_category_id: 3lkbch7kgn
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This topic describes how to add Harness Infrastructure Definitions for target deployment environments in AWS and Azure.
+
+For more information, see [Add an Environment](https://docs.harness.io/article/n39w05njjv-environment-configuration).
+
+### Create an Environment
+
+To add an environment for your IIS Website, do the following:
+
+1. In your application, click **Environments**. The Environments page for the application appears.
+2. Click **Add Environment**. The **Environment** dialog appears.
+3. Enter a name for your deployment environment, such as **IIS-EC2**, and then, in **Environment Type**, click **Non-Production** or **Production**. When you are finished, click **SUBMIT**. 
The Environment page appears.![](./static/iis-environments-13.png)
+
+Next, you will add an Infrastructure Definition using the Cloud Provider you added to define where your IIS Website will be deployed.
+
+### Add an Infrastructure Definition
+
+Infrastructure Definitions specify the target deployment infrastructure for your Harness Services, and the specific infrastructure details for the deployment, like VPC settings.
+
+You define the target infrastructure for your IIS deployment as an Infrastructure Definition. For this guide, we will use the AWS Cloud Provider you added, with the WinRM deployment type.
+
+To add the Infrastructure Definition, do the following:
+
+1. In the Harness Environment, click **Add Infrastructure Definition**. The **Infrastructure Definition** dialog appears.![](./static/iis-environments-14.png)
+2. In **Name**, enter the name you will use to select this Infrastructure Definition when you create a Workflow.
+3. In **Cloud Provider Type**, select the type of Cloud Provider you added earlier, such as **Amazon Web Services**, **Microsoft Azure**, etc. In the following steps we use the **Amazon Web Services** type, and so the settings are specific to AWS.
+4. In **Deployment Type**, select **Windows Remote Management (WinRM)**.
+5. In **Cloud Provider**, select the Cloud Provider you added earlier.
+6. In **Region**, select the AWS region where you want to deploy. This list is populated using the Cloud Provider you selected.
+7. In **Load Balancer**, select the load balancer used by the VPC.
+8. In **Connection Attribute**, select the name of the [WinRM Connection](1-delegate-and-connectors-for-iis.md#set-up-win-rm-on-instances-and-network) you created. This is the value you entered in **Name** when you created the WinRM Connection in **Secrets Management**.
+9. In **Host Name Convention**, in most cases, you can leave the default expression in this field.
+Host Name Convention is used to derive the instance(s) hostname. 
The hostname that results from the convention should be the same as the output of the command **hostname** on the host itself.
+For information on obtaining the AWS instance hostname, see [Instance Metadata and User Data](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html) from AWS.
+10. In **Scope to specific Services**, select the Harness Service you created earlier.
+The Infrastructure Definition will look something like this:![](./static/iis-environments-15.png)
+11. Click **Submit**. The new Infrastructure Definition is added to the Harness Environment.
+
+![](./static/iis-environments-16.png)That is all you have to do to set up the deployment Environment in Harness.
+
+Now that you have the Service and Environment set up, you can create the deployment Workflow in Harness.
+
+### AWS Infrastructure Definition
+
+The following table describes the fields for an AWS EC2 Infrastructure Definition.
+
+
+
+| | |
+| --- | --- |
+| **Field** | **What to Enter** |
+| **Cloud Provider Type** | Select the type of Cloud Provider you added earlier, such as Amazon Web Services. |
+| **Deployment Type** | Select **Windows Remote Management (WinRM)**. |
+| **Cloud Provider** | Select the Cloud Provider you added. |
+| **Provision Type** | If you have a Windows instance running in your Cloud Provider, click **Already Provisioned**. If you need to set up an instance, create the instance in your Cloud Provider, and then return to the Harness environment set up. If you have an Infrastructure Provisioner configured, select **Dynamically Provisioned**. This guide does not cover Harness Infrastructure Provisioners. For more information, see [Add an Infra Provisioner](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner). |
+| **Region** | Region for the VPC. |
+| **Load Balancer** | The load balancer used by the VPC. |
+| **WinRM Connection Attributes** | Select the name of the WinRM Connection you created. 
This is the value you entered in **Name** when you created the WinRM Connection in **Secrets Management**. |
+| **Host Name Convention** | Host Name Convention is used to derive the instance(s) hostname. The hostname that results from the convention should be the same as the output of the command **hostname** on the host itself. Agent-based solutions like AppDynamics, Splunk, New Relic, etc., use the hostname to uniquely identify the instance. In most cases, you can leave the default expression in this field. For information on obtaining the AWS instance hostname, see [Instance Metadata and User Data](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html) from AWS. |
+| **Use Auto Scaling Group** / **Use AWS Instance Filter** | **Use Auto Scaling Group:** If you are using an Auto Scaling Group, you can select it from the list. **Use AWS Instance Filter:** Specify the VPC, Tags, Subnet, and Security Group where your instance(s) will be deployed. Using **Tags** is the easiest way to reference an instance. |
+| **Use Public DNS for connection** | If locating the VPC requires public DNS name resolution, enable this option. |
+| **Scope to specific Services** | Select the Harness Service you created for your IIS Website. |
+
+### Azure Infrastructure Definition
+
+You can locate most of the Azure information on the VM overview page:
+
+![](./static/iis-environments-17.png)
+
+The following table describes the fields for an Azure Infrastructure Definition.
+
+
+
+| | |
+| --- | --- |
+| **Field** | **What to Enter** |
+| **Cloud Provider Type** | Select the type of Cloud Provider you added earlier, such as Microsoft Azure. |
+| **Deployment Type** | Select **Windows Remote Management (WinRM)**. |
+| **Cloud Provider** | Select the Cloud Provider you added. |
+| **Subscription ID** | Select the Azure subscription to use. When you set up the Azure cloud provider in Harness, you entered the Client/Application ID for the Azure App registration. 
To access resources in your Azure subscription, you must assign the Azure app using this Client ID to a role in that subscription. Now, when you are setting up an Azure Infrastructure Definition in a Harness environment, you select the subscription. If the Azure App registration using this Client ID is not assigned a role in a subscription, no subscriptions will be available. For more information, see [Assign the application to a role](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#assign-the-application-to-a-role) from Microsoft. | +| **Resource Group** | Select the resource group where your VM is located. | +| **WinRM** **Connection Attributes** | Select the name of the WinRM Connection you created. This is the value you entered in **Name** when you created the WinRM Connection in **Secrets Management**. | +| **Tags** | Click **Add** to use a tag to quickly select the VM you want to use. | +| **Use Public DNS for connection** | If locating the VM(s) requires public DNS name resolution, enable this option. Since the Harness delegate can only run on Linux, it must either be run on a Linux VM in the same subnet as your deployment target VMs or on a Linux server with network access to your Azure VMs. In the latter case, you can use public DNS to resolve the hostname of the target VMs. | +| **Scope to specific Services** | Select the Harness Service you created for your IIS Website. | + +### Option: Override an IIS Service Configuration + +Currently, support for Config Files override is behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Feature Flags can only be removed for Harness Professional and Essentials editions. Once the feature is released to a general audience, it's available for Trial and Community Editions. You can configure your Environment to override the **Config Variables** and **Config Files** in the IIS Services that use the Environment. 
+ +![](./static/iis-environments-18.png) + +For more information, see [Override a Service Configuration in an Environment](https://docs.harness.io/article/4m2kst307m-override-service-files-and-variables-in-environments). + +### Next Step + +* [4 - IIS Workflows and Pipelines](4-iis-workflows.md) + diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/iis-net-deployment.md b/docs/first-gen/continuous-delivery/dotnet-deployments/iis-net-deployment.md new file mode 100644 index 00000000000..0c733d509e4 --- /dev/null +++ b/docs/first-gen/continuous-delivery/dotnet-deployments/iis-net-deployment.md @@ -0,0 +1,83 @@ +--- +title: IIS (.NET) Deployment Overview +description: Overview of deploying IIS Websites, IIS Applications, and IIS Virtual Directory. +sidebar_position: 10 +helpdocs_topic_id: d485c2vy7e +helpdocs_category_id: 3lkbch7kgn +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This guide will walk you through deploying an IIS Website, Application, and Virtual Directory using Harness. For all service types, Harness automatically creates the Deployment Specification, which you can customize. + +Walk through this guide in the following order: + +1. [Deployment Overview](iis-net-deployment.md#deployment-overview) +2. [Delegate and Connectors for IIS](1-delegate-and-connectors-for-iis.md) +3. [Services for IIS (.NET)](2-services-for-iis-net.md) +4. [IIS Environments in AWS and Azure](iis-environments.md) +5. [IIS Workflows and Pipelines](4-iis-workflows.md) +6. [Best Practices and Troubleshooting](5-best-practices-and-troubleshooting.md) + +### Deployment Overview + +Harness provides support for Microsoft IIS and .NET for Azure and AWS deployments. Your Harness applications support IIS in the following entities: + +* **Services**: There are three IIS service types: + + IIS Website. + + IIS Application. + + IIS Virtual Directory. + + For all service types, Harness automatically creates the Deployment Specification, which you can customize. 
+* **Environments:** You can deploy your services to Windows instances in your enterprise network or VPC, such as AWS and Azure. +* **Workflows:** Harness provides Basic, Canary, Blue/Green, Multi-Phase, and Rolling Deployment options. Default Rollback, Notification, and Failure Strategies are provided and can be easily changed. +* **Pipelines and Triggers:** Create pipelines containing multiple workflows and triggers to run your workflows or pipelines automatically. + +Harness provides PowerShell and WinRM support to execute workflows and communicate within the Microsoft ecosystem. + +It is important to note that a site contains one or more applications, an application contains one or more virtual directories, and a virtual directory maps to a physical directory on a computer. To use Harness to deploy IIS sites, applications, and virtual directories, the IIS structural requirements (site > application > virtual directory) must be established on the Windows instances. + +Before deploying the IIS website, application, or virtual directory to your Windows instances, there must be an existing [IIS Web Server Role](https://docs.microsoft.com/en-us/iis/web-hosting/web-server-for-shared-hosting/installing-the-web-server-role) on the instance. This ensures that the environment is ready for deployment. Harness IIS Website deployment requires the IIS Web Server Role. The Harness IIS Application and IIS Virtual Directory deployments require that an IIS Website exists. For more information, see [Installing IIS from the Command Line](5-best-practices-and-troubleshooting.md#installing-iis-from-the-command-line) below. For information about IIS sites, applications, and virtual directories, see [Understanding Sites, Applications, and Virtual Directories on IIS 7](https://docs.microsoft.com/en-us/iis/get-started/planning-your-iis-architecture/understanding-sites-applications-and-virtual-directories-on-iis). 
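As a quick illustration of that prerequisite, you can check for and install the Web Server (IIS) role from an elevated PowerShell session on the target instance. This is a minimal sketch, assuming a Windows Server edition where the ServerManager module is available; the troubleshooting topic linked above covers the full steps:

```
# Check whether the Web Server (IIS) role is already installed
Get-WindowsFeature -Name Web-Server

# Install the role, including the IIS management tools, if it is missing
Install-WindowsFeature -Name Web-Server -IncludeManagementTools
```

Once `Get-WindowsFeature` reports the role as `Installed`, the instance is ready for a Harness IIS Website deployment.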
+ +#### Microsoft .NET and Azure Video Summary + +Here is a quick primer on deploying Microsoft IIS .NET applications and Microsoft .NET Core container applications using Harness Continuous Delivery. + + + + +#### Deployment Preview + +Deploying IIS websites, applications, or virtual directories using Harness is a simple process. It involves setting up your deployment environment, establishing a connection with Harness, and then using Harness to define your deployment goals. + +Here are the major steps in an IIS (.NET) deployment: + +1. Add connections: + 1. **WinRM Connection:** Add a WinRM connection in Harness to execute deployment steps on the remote Windows servers. The connection must use a user account with permission to execute commands on the Windows instances. + 2. **Cloud Provider:** Add a connection to the Cloud Provider where the IIS website, application, or virtual directory will be deployed, such as AWS or Azure. + 3. **Artifact Server:** Add a connection to the Artifact Server where Harness can pull the IIS website, application, or virtual directory artifacts. +2. **Application:** Add a Harness Application for your IIS website, application, or virtual directory. An application is a logical grouping of entities, such as services, environments, workflows, and triggers. +3. **Service:** Add a Harness Service in your Application for each IIS website, application, or virtual directory. This guide covers IIS Website, IIS Application, and IIS Virtual Directory. +4. **Environment:** Add a Harness Environment and an Infrastructure Definition that consists of a WinRM Deployment type and a Cloud Provider connection that specifies the deployment VPC, subnets, security groups, etc. Or, if the environment is a physical data center, specify the IP address. +5. **Workflow:** Add a Harness Workflow to deploy your website, application, or virtual directory. We will review the Workflow steps generated by Harness automatically. +6. 
**Deploy:** Deploy your Workflows in the following order in a Harness Pipeline: + 1. IIS Website Workflow. + 2. IIS Application Workflow. + 3. IIS Virtual Directory Workflow. + 4. Observe the deployment steps in real-time, and confirm in your VPC. +7. **Continuous Verification:** Once your deployments are successful, you can add verification steps into the workflow using your verification provider. For more information, see [Continuous Verification](https://docs.harness.io/article/myw4h9u05l-verification-providers-list). +8. **Refinements:** Add notification steps, failure strategy, and make your workflow a template for other users. For more information, see [Add a Workflow](https://docs.harness.io/article/m220i1tnia-workflow-configuration). + +### Before You Begin + +This guide assumes that you are familiar with Harness architecture and have downloaded the Harness delegate into your enterprise network or VPC. For more information, see: + +* [Harness Requirements](https://docs.harness.io/article/70zh6cbrhg-harness-requirements) +* [Delegate Installation](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) +* [Harness Architecture](https://docs.harness.io/article/de9t8iiynt-harness-architecture) +* [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code) + +### Next Step + +* [1 - Delegate and Connectors for IIS](1-delegate-and-connectors-for-iis.md) + diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-19.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-19.png new file mode 100644 index 00000000000..20abf4b3cf7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-19.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-20.png 
b/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-20.png new file mode 100644 index 00000000000..1e5141c501b Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-20.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-21.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-21.png new file mode 100644 index 00000000000..7dd37004d0c Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-21.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-22.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-22.png new file mode 100644 index 00000000000..3e8ed51958a Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-22.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-23.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-23.png new file mode 100644 index 00000000000..ee1ef3b79cd Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-23.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-24.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-24.png new file mode 100644 index 00000000000..09b14405551 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-24.png differ diff --git 
a/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-25.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-25.png new file mode 100644 index 00000000000..905fcdaec47 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-25.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-26.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-26.png new file mode 100644 index 00000000000..dd0dc741ad0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/1-delegate-and-connectors-for-iis-26.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-31.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-31.png new file mode 100644 index 00000000000..0b3411ba333 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-31.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-32.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-32.png new file mode 100644 index 00000000000..8df1655b3f8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-32.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-33.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-33.png new file mode 100644 index 00000000000..eecb14b090f Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-33.png differ diff --git 
a/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-34.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-34.png new file mode 100644 index 00000000000..e9419e51ef7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-34.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-35.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-35.png new file mode 100644 index 00000000000..ae1310af9f7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-35.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-36.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-36.png new file mode 100644 index 00000000000..38b4e7dc495 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-36.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-37.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-37.png new file mode 100644 index 00000000000..6528a6fd8c9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-37.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-38.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-38.png new file mode 100644 index 00000000000..bb79fb30679 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-38.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-39.png 
b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-39.png new file mode 100644 index 00000000000..6528a6fd8c9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-39.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-40.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-40.png new file mode 100644 index 00000000000..c5edccdd1e5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-40.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-41.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-41.png new file mode 100644 index 00000000000..417ef29e75a Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-41.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-42.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-42.png new file mode 100644 index 00000000000..7a878136fe1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/2-services-for-iis-net-42.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-00.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-00.png new file mode 100644 index 00000000000..417a1301c37 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-00.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-01.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-01.png new file mode 100644 index 00000000000..3a00d454345 Binary 
files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-01.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-02.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-02.png new file mode 100644 index 00000000000..6db2088425c Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-02.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-03.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-03.png new file mode 100644 index 00000000000..1cba5b2db11 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-03.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-04.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-04.png new file mode 100644 index 00000000000..590792ac9de Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-04.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-05.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-05.png new file mode 100644 index 00000000000..d1850e32dea Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-05.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-06.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-06.png new file mode 100644 index 00000000000..ca6f1f35964 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-06.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-07.png 
b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-07.png new file mode 100644 index 00000000000..cb3b1d6d536 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-07.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-08.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-08.png new file mode 100644 index 00000000000..079106a105c Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-08.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-09.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-09.png new file mode 100644 index 00000000000..363c2455e93 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-09.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-10.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-10.png new file mode 100644 index 00000000000..6f32b1b5c46 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-10.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-11.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-11.png new file mode 100644 index 00000000000..bfea44f7b5c Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-11.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-12.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-12.png new file mode 100644 index 00000000000..83e2bf4a176 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/dotnet-deployments/static/4-iis-workflows-12.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/5-best-practices-and-troubleshooting-27.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/5-best-practices-and-troubleshooting-27.png new file mode 100644 index 00000000000..c090246e778 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/5-best-practices-and-troubleshooting-27.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/5-best-practices-and-troubleshooting-28.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/5-best-practices-and-troubleshooting-28.png new file mode 100644 index 00000000000..078db410547 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/5-best-practices-and-troubleshooting-28.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/5-best-practices-and-troubleshooting-29.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/5-best-practices-and-troubleshooting-29.png new file mode 100644 index 00000000000..43355bea91a Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/5-best-practices-and-troubleshooting-29.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/5-best-practices-and-troubleshooting-30.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/5-best-practices-and-troubleshooting-30.png new file mode 100644 index 00000000000..3557e735c50 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/5-best-practices-and-troubleshooting-30.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/iis-environments-13.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/iis-environments-13.png new file mode 100644 index 
00000000000..3e6314dd381 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/iis-environments-13.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/iis-environments-14.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/iis-environments-14.png new file mode 100644 index 00000000000..f583e73f5e9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/iis-environments-14.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/iis-environments-15.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/iis-environments-15.png new file mode 100644 index 00000000000..6558ee91f29 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/iis-environments-15.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/iis-environments-16.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/iis-environments-16.png new file mode 100644 index 00000000000..c95d6249cc1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/iis-environments-16.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/iis-environments-17.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/iis-environments-17.png new file mode 100644 index 00000000000..8530e394756 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/iis-environments-17.png differ diff --git a/docs/first-gen/continuous-delivery/dotnet-deployments/static/iis-environments-18.png b/docs/first-gen/continuous-delivery/dotnet-deployments/static/iis-environments-18.png new file mode 100644 index 00000000000..75b5e058b95 Binary files /dev/null and b/docs/first-gen/continuous-delivery/dotnet-deployments/static/iis-environments-18.png differ diff --git 
a/docs/first-gen/continuous-delivery/google-cloud/_category_.json b/docs/first-gen/continuous-delivery/google-cloud/_category_.json new file mode 100644 index 00000000000..d72d74c2061 --- /dev/null +++ b/docs/first-gen/continuous-delivery/google-cloud/_category_.json @@ -0,0 +1 @@ +{"label": "Google Cloud", "position": 50, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Google Cloud"}, "customProps": { "helpdocs_category_id": "btqlctlqsj"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/google-cloud/static/trigger-google-cloud-builds-00.png b/docs/first-gen/continuous-delivery/google-cloud/static/trigger-google-cloud-builds-00.png new file mode 100644 index 00000000000..871b617e311 Binary files /dev/null and b/docs/first-gen/continuous-delivery/google-cloud/static/trigger-google-cloud-builds-00.png differ diff --git a/docs/first-gen/continuous-delivery/google-cloud/trigger-google-cloud-builds.md b/docs/first-gen/continuous-delivery/google-cloud/trigger-google-cloud-builds.md new file mode 100644 index 00000000000..b3fb3ea2e33 --- /dev/null +++ b/docs/first-gen/continuous-delivery/google-cloud/trigger-google-cloud-builds.md @@ -0,0 +1,233 @@ +--- +title: Run Google Cloud Builds +description: Currently, this feature is behind the feature flag GCB_CI_SYSTEM. Contact Harness Support to enable the feature. Google Cloud Build (GCB) can import source code from a variety of repositories or clou… +# sidebar_position: 2 +helpdocs_topic_id: dvm5q9j0d0 +helpdocs_category_id: btqlctlqsj +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the feature flag `GCB_CI_SYSTEM`. 
Contact [Harness Support](mailto:support@harness.io) to enable the feature. Google Cloud Build (GCB) can import source code from a variety of repositories or cloud storage spaces, execute a build to your specifications, and produce artifacts such as Docker containers or Java archives. + +Harness GCB integration lets you do the following: + +* Run GCB builds as part of your Harness Workflow. +* Run GCB builds using config files inline or in remote Git repos. +* Execute GCB Triggers, including substituting specific variables at build time. + +### Before You Begin + +* If you are new to GCB, review [Overview of Cloud Build](https://cloud.google.com/cloud-build/docs/overview) and [Quickstart: Build](https://cloud.google.com/cloud-build/docs/quickstart-build) from Google. +* [Add Google Cloud Platform Cloud Provider](https://docs.harness.io/article/6x52zvqsta-add-google-cloud-platform-cloud-provider) +* [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers) +* [Workflows](https://docs.harness.io/article/m220i1tnia-workflow-configuration) + +### Review: Harness GCB Integration + +You can add the Google Cloud Build step to any Workflow type and Workflow section. The following steps describe, in general, the lifecycle of a GCB build with Harness: + +1. In GCB: + 1. Prepare your application code and any needed assets. + 2. Create a build config file in JSON format. + 3. For GCB triggers, create a trigger in Google Cloud Build. +2. In Harness: + 1. Connect Harness to your Google Cloud Platform account. + 2. Connect Harness to your Git account if you will be using remote GCB build config files. + 3. Add a GCB step to your Workflow. + 4. Target your GCB build using one of the following: + 1. Inline config file. + 2. Remote config file. + 3. GCB trigger. + 5. Deploy the Workflow to execute the GCB build. + +Let's set it up. + +### Step 1: Connect to Google Cloud Platform + +1. 
Connect Harness to your GCP account by setting up a Harness [Google Cloud Platform Cloud Provider](https://docs.harness.io/article/6x52zvqsta-add-google-cloud-platform-cloud-provider). +You set up this connection using a [GCP service account key file](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) (in JSON format). +2. Ensure that the service account used has the [**GCB Cloud Build Editor role**](https://cloud.google.com/cloud-build/docs/iam-roles-permissions#predefined_roles). + +For a detailed description of the permissions used, see the following section. + +#### Required GCP Permissions + +GCB roles and permissions are described in [IAM Roles and permissions](https://cloud.google.com/cloud-build/docs/iam-roles-permissions) from Google. Harness uses get, create, and list [permissions](https://cloud.google.com/cloud-build/docs/iam-roles-permissions). Here is how these permissions map to Harness GCB operations: + +* Run inline builds: `cloudbuild.builds.create` +* Run remote builds: `cloudbuild.builds.create` +* Run triggers: `cloudbuild.builds.create` +* Fetch logs: `cloudbuild.builds.get` +* List triggers: `cloudbuild.builds.list` (if you are entering the trigger name manually, this is not needed) + +If you create your own role, ensure that it includes these permissions. + +### Step 2: Add Google Cloud Build Step + +You can add the Google Cloud Build step to any Workflow type and Workflow section. + +1. In your Harness Workflow, in any section, click **Add Step**. +2. Select **Google Cloud Build**, and click **Next**. +3. In **Google Cloud Provider**, select the Harness Google Cloud Provider you set up earlier. See [Add Google Cloud Platform Cloud Provider](https://docs.harness.io/article/6x52zvqsta-add-google-cloud-platform-cloud-provider). You can turn this setting into a deployment runtime parameter by clicking the template button **[T]**. 
This will create a [Workflow variable](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) for the setting. When you deploy this Workflow, you can enter the option for the setting. +4. Complete the step using the following settings. + +### Option 1: Inline JSON Build Spec + +You can enter an inline JSON build configuration just like you would in a Cloud Build config file. + +1. In **Build Specification**, click **Inline**. + +Enter your build config spec in JSON. For example: + + +``` +{ + "steps": [ + { + "name": "gcr.io/cloud-builders/git", + "args": ["clone", "https://github.com/john-smith/gcb.git", "."] + }, + { + "name": "gcr.io/cloud-builders/gradle", + "entrypoint": "gradle", + "args": ["build"] + }, + { + "name": "gcr.io/cloud-builders/docker", + "args": ["build", "-t", "gcr.io/$PROJECT_ID/v-image", "--build-arg=JAR_FILE=build/libs/playground-0.0.1.jar", "."] + } + ], + "options": {"logStreamingOption": "STREAM_ON"}, + "images": ["gcr.io/$PROJECT_ID/v-image"] +} +``` +Harness uses the Cloud Build API [Build resource](https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.builds) to process the build config spec. + +### Option 2: Pulling Build Spec from Git Repo + +In this option, you specify the repo where your build config file and its related files are located. + +1. Ensure you have set up a Harness Source Repo Provider that points to the Git repo containing your build config file. See [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). +2. In **Build Specification**, select **Remote**. +3. In **Source Repository**, select the Source Repo Provider that connects to your build config file repo. You can turn this setting into a deployment runtime parameter by clicking the template button **[T]**. This will create a [Workflow variable](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) for the setting. 
When you deploy this Workflow, you can enter the option for the setting. +4. In **Commit ID**, select **Latest from Branch** or **Specific Commit ID**. +5. Enter the branch name or commit ID. Both of these settings allow Harness variables, such as [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). +When you set up the Source Repo Provider, you entered a branch name. The branch name you enter in the **Google Cloud Build** step **Branch Name** overrides that setting. +6. In **File Path**, enter the full path from the root of the repo to the build config file. If the build file location in the repo is **https://github.com/john-smith/gcb/cloudbuild.json**, then the file is at the repo root and you would just enter **cloudbuild.json** in **File Path**. + +### Option 3: Execute Existing GCB Trigger + +Select this option if you have created a [Cloud Build trigger](https://cloud.google.com/cloud-build/docs/automating-builds/create-manage-triggers) for your Cloud Build and you want to execute it in your Workflow. + +1. In **Build Specification**, click **Trigger**. +2. In **Trigger Name**, select the name of the Cloud Build trigger you want to execute. You can enter the name of an existing variable expression in this setting. For example, if you created the [Workflow variable](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) `triggerName`, you can enter `${workflow.variables.triggerName}`. The variable expression should refer to a [Workflow variable](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) or other available [Harness variable](https://docs.harness.io/article/9dvxcegm90-variables). When you deploy this Workflow, you can enter the option for the setting. +3. In **Trigger Type**, select one of the following: + 1. **Branch Name:** Set your trigger to start a build using commits from a particular branch. + 2. 
**Tag Name:** Set your trigger to start a build using commits that contain a particular tag. + 3. **Commit SHA:** Set your trigger to start a build using an explicit commit SHA. + All of these settings allow Harness variables, such as [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). + +#### Substitutions + +GCB lets you use substitutions for specific variables at build time. You can also do this in the Harness Google Cloud Build step. + +See [Substituting variable values](https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values) from Google. + +1. In **Substitutions**, click **Add**. +2. In **Name**, enter the name of the variable you want to make a substitution on. +For example, if your Cloud Build config file uses `name: ${_NAME}`, you would enter **\_NAME**. +3. In **Value**, enter the value to substitute for the variable. + +This is often used for tagging. For example, if your config file has `tags: ["${_TAG1}", "${_TAG2}"]`, you would create substitutions with the names **\_TAG1** and **\_TAG2** and then enter the tag values in **Value**. + +### Step 3: Execution Settings + +#### Timeout + +The timeout period determines how long to wait for the step to complete. When the timeout expires, it is considered a workflow failure and the workflow [Failure Strategy](https://docs.harness.io/article/vfp0ksdzg3-define-workflow-failure-strategy-new-template) is initiated. + +#### Execute with previous steps + +Select this checkbox to run this step in parallel with the previous Workflow step(s). + +#### Wait interval before execution + +Set how long the deployment process should wait before executing the step. 
+ +### Option 4: Use Output Variables + +Harness provides the following information about the builds it executes: + +* Activity ID: `activityId` +* Build Url: `buildUrl` +* Build# `buildNo` +* Tags: `tags` +* Status: `buildStatus` +* Build Name: `name` +* Created At: `createTime` +* Substitutions (an array of key-value pairs): `substitutions` +* Logs URL: `logUrl` +* Images: `images` +* Bucket: `artifactLocation` +* Artifacts (an array of artifacts): `artifacts` + +You create an output variable in the Google Cloud Build step, and then you can reference each build output item in a subsequent [Shell Script](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) step. + +1. Select **Build output in the Context**. +2. In **Name**, enter a name for the output variable, such as **gcb**. +3. In **Scope**, select **Workflow**, **Pipeline**, or **Phase** (Canary or Multi-Service Workflows only). Scope is used to avoid variable name conflicts and to make the output information available across Workflows in a Pipeline. 
+ +For example, if I create the variable **gcb**, in a subsequent Shell Script step, I could enter the following: + + +``` +echo "Activity ID: ${gcb.activityId}" +echo "Build Url: ${gcb.buildUrl}" +echo "Build# ${gcb.buildNo}" +echo "Tags: ${gcb.tags}" +echo "Status: ${gcb.buildStatus}" +echo "Build Name: ${gcb.name}" +echo "Created At: ${gcb.createTime}" +echo "Substitutions: ${gcb.substitutions}" +echo "Logs URL: ${gcb.logUrl}" +echo "Images: ${gcb.images}" +echo "Bucket: ${gcb.artifactLocation}" +echo "Artifacts: ${gcb.artifacts}" +``` +When the Workflow is deployed, the Harness Deployments page will show the output information in the Shell Script step: + + +``` +Activity ID: up9p7jG6SoCAIvwyv8sE6A +Build Url: https://console.cloud.google.com/cloud-build/builds/8ba3539e-658e-44bb-8624-d34638264f9b +Build# 8ba3539e-658e-44bb-8624-d34638264f9b +Tags: [TAG1, TAG2] +Status: SUCCESS +Build Name: operations/build/project-123/OGJhMzUzOWUtNjU4ZS00NGJiLTg2MjQtZDM0NjM4MjY0Zjli +Created At: 2020-07-29T15:56:22.306572924Z +Substitutions: {_NAME=gcr.io/cloud-builders/docker, _TAG2=test2, _TAG1=test1, BRANCH_NAME=master, REPO_NAME=gcb, REVISION_ID=d89b008ffd36d40d3e9c71cca5f0a9e699602f60, COMMIT_SHA=d89b008ffd36d40d3e9c71cca5f0a9e699602f60, SHORT_SHA=d89b008} +Logs URL: https://console.cloud.google.com/cloud-build/builds/8ba3539e-658e-44bb-8624-d34638264f9b?project=196121614392 +Images: [gcr.io/project-123/v-image] +Bucket: gs://gcb-playgound/ +Artifacts: [build/libs/playground-0.0.1.jar] +``` +### Step 4: Deploy Workflow + +When you are finished setting up the Google Cloud Build step, and any other steps, deploy your Workflow. + +The Google Cloud Build step **Details** displays information about the build, including a build URL you can click to open the build in the GCB console: + +![](./static/trigger-google-cloud-builds-00.png) + +### Limitations + +Harness only supports the use of JSON in inline and remote build config files. 
If you use a GCB trigger in the Google Cloud Build step, the config file it uses can be either YAML or JSON. + +### See Also + +* [Using the Jenkins Command](https://docs.harness.io/article/5fzq9w0pq7-using-the-jenkins-command) +* [Configure Workflows Using YAML](https://docs.harness.io/article/0svkm9v7vr-configure-workflow-using-yaml) + +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. + diff --git a/docs/first-gen/continuous-delivery/harness-git-based/_category_.json b/docs/first-gen/continuous-delivery/harness-git-based/_category_.json new file mode 100644 index 00000000000..fb583391393 --- /dev/null +++ b/docs/first-gen/continuous-delivery/harness-git-based/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "Harness Git-based How-tos", + "position": 600, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "Harness Git-based How-tos" + }, + "customProps": { + "helpdocs_category_id": "goyudf2aoh" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/harness-git-based/harness-git-ops.md b/docs/first-gen/continuous-delivery/harness-git-based/harness-git-ops.md new file mode 100644 index 00000000000..a5af795eab6 --- /dev/null +++ b/docs/first-gen/continuous-delivery/harness-git-based/harness-git-ops.md @@ -0,0 +1,27 @@ +--- +title: Harness Git Integration Overview +description: Harness' GitOps integration enables you to use Git as a single source of truth to trigger Harness deployments. +sidebar_position: 10 +helpdocs_topic_id: khbt0yhctx +helpdocs_category_id: goyudf2aoh +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness enables a developer-centric experience for managing applications by implementing the Git-based methodology across its components. For example, you can trigger deployments in Harness using Git Pull and Push events. 
As a result, Harness Git integration allows you to use Git as the single source of truth when maintaining the state of the deployment process in Harness. + +In addition, Harness lets you sync any Harness Application with a Git repo, using either a one-way or two-way sync. Almost anything you can do in the Harness UI, you can do in YAML in Git. For more information, see [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code). The following table describes the many Git-enabled components of Harness. + +| Harness Feature | Git Functionality | Links to Topics | +| --- | --- | --- | +| Triggers with Git Webhooks | Use a Harness Trigger Git Webhook URL to execute a Harness deployment in response to a GitHub, Bitbucket, or GitLab event. | [Trigger Deployments using Git Events](../model-cd-pipeline/triggers/trigger-a-deployment-on-git-event.md) | +| File-based Repo Triggers | Initiate the Harness Trigger only when **specific files** in the repo are changed. For example, initiate the Trigger only when a Helm values.yaml file in Git is changed. | [Trigger a Deployment when a File Changes](../model-cd-pipeline/triggers/trigger-a-deployment-when-a-file-changes.md) | +| Using Git Push and Pull Request Variables in Harness Applications | Git push and pull request variables are available in a Trigger, and can be passed to the Workflows (and Pipelines) executed by the Trigger. An example variable is `${pullrequest.id}` for Pull request ID.
Examples:
▪ Map the Git payload to create uniquely-named Harness Environments and [Infrastructure Definitions](../model-cd-pipeline/environments/environment-configuration.md#add-an-infrastructure-definition).
▪ Use the Git payload with Git events, and Harness can respond to a Git event to build the artifact and deploy it to a unique infrastructure. |
▪ [Push and Pull Request Variables](../model-cd-pipeline/expressions/passing-variable-into-workflows.md#push-and-pull-request-variables)

▪ [Passing Variables into Workflows and Pipelines from Triggers](../model-cd-pipeline/expressions/passing-variable-into-workflows.md) |

▪ Pull a Helm chart from a Git repo.

▪ Specify values or a full values.yaml file in a Git repo and Harness will fetch the values at runtime.

▪ Override Workflow variables using values added via a Git connector.

▪ Map a Git payload, such as a PR number, to a Kubernetes namespace via variables. For example, you could use the PR number to create a unique Kubernetes namespace and deploy to that namespace to evaluate the PR build. |

▪ [Helm Native Deployment Guide Overview](https://docs.harness.io/article/ii558ppikj-helm-deployments-overview)

▪ [Kubernetes How-tos](https://docs.harness.io/article/pc6qglyp5h-kubernetes-deployments-overview) | + +:::note +Webhooks do not work with older versions of Bitbucket and will cause issues with push events sent to Harness. You need to install the [Post Webhooks for Bitbucket](https://marketplace.atlassian.com/apps/1215474/post-webhooks-for-bitbucket?hosting=server&tab=overview) plugin in Bitbucket to enable Harness to allow two-way sync with Bitbucket. For more information, see [Bitbucket Post Webhooks Plugin](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers#bitbucket_post_webhooks_plugin). +::: \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/harness-git-based/onboard-teams-using-git-ops.md b/docs/first-gen/continuous-delivery/harness-git-based/onboard-teams-using-git-ops.md new file mode 100644 index 00000000000..303767aecb3 --- /dev/null +++ b/docs/first-gen/continuous-delivery/harness-git-based/onboard-teams-using-git-ops.md @@ -0,0 +1,314 @@ +--- +title: Onboard Teams Using Git +description: Create an Application template you can sync and clone in Git for onboarding new teams. +sidebar_position: 20 +helpdocs_topic_id: 3av5pc4goc +helpdocs_category_id: goyudf2aoh +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic shows you how to create a Harness Application template you can sync and clone in Git for onboarding new teams. + +Often, teams create an Application template for engineering leads or DevOps engineers. Each team then gets a clone of the Application in Git that they can modify for their purposes. + +Development teams can then deploy consistently without using the Harness UI to create their Applications from scratch. They simply change a few lines of YAML via scripts and deploy their application. + +### Before You Begin + +* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) — Ensure you know Harness Key Concepts. 
+* [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code) — Review Harness YAML interface. +* [Harness Git Integration Overview](harness-git-ops.md) — Review Harness support for Git. + +### Review: Why Git-based Onboarding? + +Here are a few examples of why Harness customers use Git for onboarding: + +* Developers working in Git don't want to navigate to another screen to configure their deployment Workflows and Pipelines. +* For some developers, UIs take too long to navigate when coding rapidly. The Harness YAML interface uses a simple folder structure for easy navigation. +* Segmenting Applications from overall Harness management. In a single repo, developers can manage their Applications, container specifications and manifests, and Harness component configuration. +* Create a Golden Template Application and use it to onboard Applications for teams. + +### Step 1: Set Up Git Connector + +Set up a Source Repo Provider connection in Harness that connects to the Git repo you want to use. For details, see [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). + +For example, here is a new GitHub repo named **Golden Template Application** and its corresponding setup in Harness as a Source Repo Provider: + +![](./static/onboard-teams-using-git-ops-00.png) + +Remember the following important settings: + +* **Repo URL** — The HTTPS repo URL is pasted into the Harness Source Repo Provider **URL**. The HTTPS setting is selected for both. +* **Harness Webhook URL** — The **Generate Webhook URL** setting was enabled when the Source Repo Provider was created, and the Webhook URL was pasted into the repo's Webhook **Payload URL**. +* **Content type** — The **Content type** in the repo is **application/json**. +* **Just the push event** — In the repo Webhook's **Which events would you like to trigger this webhook?** setting, only the **Just the push event** option is selected. 
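If you script the repo setup instead of clicking through the GitHub UI, the same Webhook settings map to the standard GitHub create-webhook API payload. This is a sketch under assumptions: the `url` value below is a placeholder that you must replace with the Webhook URL Harness generated for your Source Repo Provider.

```
{
  "name": "web",
  "active": true,
  "events": ["push"],
  "config": {
    "url": "<Harness-generated Webhook URL>",
    "content_type": "json"
  }
}
```

POST this body to `https://api.github.com/repos/OWNER/REPO/hooks` (with a token that has admin rights on the repo) to create a Webhook with the same **Content type** and push-only event settings described above.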
+ +For details, see [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code). + +### Step 2: Create Application Template + +First, you will create the Application template in Harness. You will sync it with the Golden Template Application repo and enable all of its template settings. + +Create an Application named **Golden Template Application**. When you create it, select **Set up Git Sync**, and select the Source Repo Provider in **Git Connector**: + +![](./static/onboard-teams-using-git-ops-01.png) + +For information on creating Applications, see [Create an Application](../model-cd-pipeline/applications/application-configuration.md). + +Once you click **Submit**, you will see the Application in your repo: + +![](./static/onboard-teams-using-git-ops-02.png) + +The repo will be updated with Application components as you create them in Harness. + +#### Service Template + +Next, create a Harness Service in the Application. For this example, we'll create a Kubernetes Service named **SampleK8s**. For details, see [Kubernetes Quickstart](https://docs.harness.io/article/7in9z2boh6-kubernetes-quickstart). + +The Service is set up with the following: + +* **Artifact Source placeholder** — We'll add a publicly-available Nginx Docker image from Docker Hub. When teams clone this Application, they can update the Artifact Source. See [Add Container Images for Kubernetes Deployments](https://docs.harness.io/article/6ib8n1n1k6-add-container-images-for-kubernetes-deployments). +* **Remote manifests** — When teams clone this Application, they can update the link to point to their own manifests. See [Link Resource Files or Helm Charts in Git Repos](https://docs.harness.io/article/yjkkwi56hl-link-resource-files-or-helm-charts-in-git-repos). +* **Service Config Variable for the namespace** — A Service Config Variable is created for the namespace used in the manifests. This will enable teams to simply update the variable in their clones with their own namespaces. 
See [Using Harness Config Variables in Manifests](https://docs.harness.io/article/qy6zw1u0y2-using-harness-config-variables-in-manifests). + +Once you create the Service, it is synced with your repo automatically: + +![](./static/onboard-teams-using-git-ops-03.png) + +The default manifests are also synced with your repo. + +First, set up an Artifact Source placeholder in the Service. Here we use a publicly-available Nginx Docker image from Docker Hub: + +![](./static/onboard-teams-using-git-ops-04.png) + +For details on setting up the Artifact Server connection to Docker Hub in Harness, see [Add Artifact Servers](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server). + +Next, we'll configure the SampleK8s **Manifests** section to use remote manifests. + +If you use remote manifests, you typically need to add another Source Repo Provider for the repo where they are stored. Here is how it is mapped in our example: + +![](./static/onboard-teams-using-git-ops-05.png) + +:::note +Once you have set up the remote manifests, the default manifest files that were synced when you created the Service will be ignored. You can delete them if you like. +::: + +Once this template Application is cloned and used by other teams, we want them to have a simple way to change the target namespace for the deployment. There are different options (see [Create Kubernetes Namespaces based on InfraMapping](https://docs.harness.io/article/5xm4z4q3d8-create-kubernetes-namespaces-based-on-infra-mapping), [Create Kubernetes Namespaces with Workflow Variables](https://docs.harness.io/article/nhlzsni30x-create-kubernetes-namespaces-with-workflow-variables)), but for this example, we will use a Service variable. + +Create a Service variable and then reference it in the values.yaml file in your remote manifests repo. Here's an example using a Service variable named **namespace**: + +![](./static/onboard-teams-using-git-ops-06.png) + +The value of the namespace Service variable is `${env.name}`. 
The `${env.name}` expression references the name of the Environment used by the Workflow that deploys this Service. This is a useful default value because Environments are often named after the namespaces teams use, such as **dev** and **prod**. + +:::note +We use lowercase names for Environments because the names will be used for namespaces and Kubernetes requires lowercase names for namespaces. +::: + +The Service template is complete. Next, we'll create the Environment and Infrastructure Definition templates. + +#### Environment and Infrastructure Definition Templates + +We'll add two Environments: one Environment for prod and one for dev. + +##### Prod Environment and Infrastructure Definition + +![](./static/onboard-teams-using-git-ops-07.png) + +Note how the `${serviceVariable.namespace}` we created is used in the **Namespace** setting. + +##### Dev Environment and Infrastructure Definition + +![](./static/onboard-teams-using-git-ops-08.png) + +Note how the `${serviceVariable.namespace}` we created is used in the **Namespace** setting. + +When the Environments and Infrastructure Definitions are created, they are synced with Git automatically: + +![](./static/onboard-teams-using-git-ops-09.png) + +If you want to override the namespace value used in the **Namespace** setting for the prod or dev Infrastructure Definitions, you can use a **Service Configuration Override** in the Environment. + +![](./static/onboard-teams-using-git-ops-10.png) + +#### Workflow Template + +For this example, we create a Kubernetes Rolling Deployment template. Create the Workflow type(s) you expect your teams will need. You can always remove unneeded Workflows from Git later. + +All of the major settings of Harness Workflows can be templated, but first you need to set up the Workflow with actual values. + +Create the Workflow by selecting the Environment, Service, and Infrastructure Definition you created earlier. 
+ +Next, open the settings again and click the **[T]** button for all of the settings. This will replace the settings with Workflow variables, thereby templating the Workflow. + +![](./static/onboard-teams-using-git-ops-11.png) + +Now that the Workflow is templated, you will see the Workflow variables in the repo Workflow YAML and the `templatized: true` key. + +![](./static/onboard-teams-using-git-ops-12.png) + +Now the templated Workflow can be used by any Service, Environment, and Infrastructure Definition. + +Next, we'll create a Pipeline for the deployment. + +#### Pipeline Template + +For this example, we create a three-Stage Pipeline: + +1. Stage 1 — Rolling Workflow into the Dev environment. +2. Stage 2 — Approval Step. +3. Stage 3 — Rolling Workflow into the Production environment. + +This Pipeline is a common use case and can be augmented as needed. For more details on Pipelines, see [Pipelines](../model-cd-pipeline/pipelines/pipeline-configuration.md). + +First, create the Pipeline. + +![](./static/onboard-teams-using-git-ops-13.png) + +Next, create Stage 1 using the **dev** Environment and **Dev** Infrastructure Definition: + +![](./static/onboard-teams-using-git-ops-14.png) + +Next, create the Approval step for Stage 2: + +![](./static/onboard-teams-using-git-ops-15.png) + +Finally, create Stage 3 using the **prod** Environment and **Prod** Infrastructure Definition: + +![](./static/onboard-teams-using-git-ops-16.png) + +When you are done, the Pipeline will look like this: + +![](./static/onboard-teams-using-git-ops-17.png) + +The Pipeline is synced with Git: + +![](./static/onboard-teams-using-git-ops-18.png) + +The Golden Template Application is complete. Now your teams can clone and modify it as needed. + +### Step 3: Clone and Change the Application + +Clone the Golden Template Application using whatever Git tool you want. 
Here's an example using GitHub Desktop: + +![](./static/onboard-teams-using-git-ops-19.png) + +Next, copy the Application and paste the copy as a peer of **Golden Template Application** in the **Applications** folder: + +![](./static/onboard-teams-using-git-ops-20.png) + +Name the new Application folder **Development**: + +![](./static/onboard-teams-using-git-ops-21.png) + +Change the new Application description in its index.yaml file: + +![](./static/onboard-teams-using-git-ops-22.png) + +Rename the Service: + +![](./static/onboard-teams-using-git-ops-23.png) + +You do not need to update the Workflow with the new Service name because the Workflow is templated. + +Update the Pipeline with the new Service name. + +![](./static/onboard-teams-using-git-ops-24.png) + +Update the following Service settings to customize this Application for a new team: + +* **Artifact Source placeholder** — Replace the Nginx Docker image from Docker Hub. +* **Remote manifests** — Update the link to point to the team's own manifests. +* **Service Config Variable for the namespace** — Update the Service variable with a new namespace. + +### Step 4: Commit and Sync New Application + +When you are done making changes to the new Application, you can commit and push the changes. + +![](./static/onboard-teams-using-git-ops-25.png) + +The new Application is in Git: + +![](./static/onboard-teams-using-git-ops-26.png) + +And the new Application is automatically synced and added to Harness: + +![](./static/onboard-teams-using-git-ops-27.png) + +### Troubleshooting + +**Something not working?** If some Application component does not appear in Harness, it is likely because of a conflict between the YAML file and some settings in Harness. + +For example, if you didn't update the Service name in the Pipeline YAML to match the new name of the Service, Harness cannot locate the Service listed in the Pipeline YAML. Consequently, Harness refuses to add the Pipeline from Git. 
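A lightweight way to catch this name-mismatch problem before pushing is to script a check against the cloned Application folder. The snippet below is a hypothetical sketch, not a Harness tool: it assumes the `Applications/<App>/Services` and `Applications/<App>/Pipelines` folder layout shown earlier, and it greps for a `serviceName` key, which may be named differently in your YAML. Adjust both to match your repo.

```shell
# Hypothetical pre-push check: list every Service referenced by Pipeline
# YAML that has no matching Service folder in the cloned Application.
check_pipelines() {
  app_dir="$1"
  for pipeline in "$app_dir"/Pipelines/*.yaml; do
    [ -f "$pipeline" ] || continue
    # Extract serviceName values and verify a Services/<name> folder exists.
    grep -o 'serviceName: [A-Za-z0-9_-]*' "$pipeline" |
      sed 's/serviceName: //' |
      while read -r svc; do
        [ -d "$app_dir/Services/$svc" ] ||
          echo "MISSING: $pipeline references Service '$svc'"
      done
  done
}
```

For example, `check_pipelines Applications/Development` prints one line for each Service reference that does not resolve, and prints nothing when the clone is consistent.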
+ +Another possible issue is a change to an Account setting in Harness or the Git YAML, such as the name of a Cloud Provider. + +Harness displays Git errors on the Configuration as Code screen: + +![](./static/onboard-teams-using-git-ops-28.png) + +You can also see them in the repo Webhook. For example, GitHub has a **Recent Deliveries** section at the bottom of the Webhook setting: + +![](./static/onboard-teams-using-git-ops-29.png) + +### Step 5: Automating New Application Creation + +Once you are comfortable creating new Applications using Git, you can write automation scripts to clone Applications and change values in the new Application YAML. + +For example, some customers base their scripts on input from a UI or Shell Script that generates YAML stored in Git. The YAML is then synced to Harness through the Git Sync process on the push event. + +You can use tools like [yq](https://mikefarah.gitbook.io/yq/) to manipulate specific YAML fields inline. Tools like [yamllint](https://pypi.org/project/yamllint/) are excellent for validating YAML. + +Here is a sample YAML automation flow: + +1. Create a Golden Application that is fully templated and sync it with Git. +2. Create a script to create a new Harness Application, copy the content of the Golden Application into it, and edit the necessary fields. For example, a script might update the namespace and Cloud Provider YAML. +3. Commit changes to Git and review the results in Harness. + +If there are issues, Harness displays Git Sync errors in **Configure As Code**. + +### Conclusion + +This topic showed you how Git can be used for safe, version-controlled, easy Harness component management. + +Managing new Harness Application setup in Git brings deployment closer to developers. It enables them to live in their code. + +With Harness Git support, developers don't need to check deployment status in the Harness Manager UI. For example, they can use [Harness GraphQL](https://docs.harness.io/article/tm0w6rruqv-harness-api). 
Here's a simple Pipeline executions query and result: + + +``` +{ + executions( + filters: [ + { pipeline: { operator: EQUALS, values: ["Kn3X_70dQy-VY-Wt2b2qVw"] } } + ] + limit: 30 + ) { + pageInfo { + total + } + nodes { + id + } + } +} +... +{ + "data": { + "executions": { + "pageInfo": { + "total": 1303 + }, + "nodes": [ + { + "id": "tbdwrYw5RS2bEERFEQ6oiA" + }, + { + "id": "JhzVnLFMT5Wxlws-2hu18A" + }, + { + "id": "j-Oe2VUASsSmWo4ALzQGzg" + }, +... + ] + } + } +} +``` +In addition, Harness Applications that live in code are reusable and versioned. If anything breaks, there is a working version to revert to. + +### Next Steps + +* [Triggers](../model-cd-pipeline/triggers/add-a-trigger-2.md) +* [Harness API](https://docs.harness.io/article/tm0w6rruqv-harness-api) +* [Harness Git Integration Overview](harness-git-ops.md) + diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-00.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-00.png new file mode 100644 index 00000000000..533846320c8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-00.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-01.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-01.png new file mode 100644 index 00000000000..24c6ee000bd Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-01.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-02.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-02.png
differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-03.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-03.png new file mode 100644 index 00000000000..9f72cc5af7d Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-03.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-04.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-04.png new file mode 100644 index 00000000000..22d9b1d2ab3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-04.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-05.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-05.png new file mode 100644 index 00000000000..d0c3c30b490 Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-05.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-06.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-06.png new file mode 100644 index 00000000000..8388766b347 Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-06.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-07.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-07.png new file mode 100644 index 00000000000..c967e2c6c0b Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-07.png differ diff --git 
a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-08.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-08.png new file mode 100644 index 00000000000..b94ae0d8be5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-08.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-09.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-09.png new file mode 100644 index 00000000000..ca3d0db4d21 Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-09.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-10.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-10.png new file mode 100644 index 00000000000..d5b85e77771 Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-10.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-11.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-11.png new file mode 100644 index 00000000000..6b789ac2c98 Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-11.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-12.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-12.png new file mode 100644 index 00000000000..568dccd302f Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-12.png differ diff --git 
a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-13.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-13.png new file mode 100644 index 00000000000..4af710f47d9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-13.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-14.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-14.png new file mode 100644 index 00000000000..ba3eef48716 Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-14.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-15.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-15.png new file mode 100644 index 00000000000..de93045b630 Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-15.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-16.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-16.png new file mode 100644 index 00000000000..b0de7ee7042 Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-16.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-17.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-17.png new file mode 100644 index 00000000000..d528d9059be Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-17.png differ diff --git 
a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-18.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-18.png new file mode 100644 index 00000000000..c8d4c2863fa Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-18.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-19.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-19.png new file mode 100644 index 00000000000..9b284fd23a7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-19.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-20.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-20.png new file mode 100644 index 00000000000..0d1711f87ff Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-20.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-21.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-21.png new file mode 100644 index 00000000000..32f22259ad2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-21.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-22.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-22.png new file mode 100644 index 00000000000..f1fb9ec6d7f Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-22.png differ diff --git 
a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-23.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-23.png new file mode 100644 index 00000000000..aa60002a8d6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-23.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-24.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-24.png new file mode 100644 index 00000000000..b472f9219ab Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-24.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-25.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-25.png new file mode 100644 index 00000000000..fdaec02d1c7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-25.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-26.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-26.png new file mode 100644 index 00000000000..02f73678e7e Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-26.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-27.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-27.png new file mode 100644 index 00000000000..a7f9c9bb2fb Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-27.png differ diff --git 
a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-28.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-28.png new file mode 100644 index 00000000000..a6fec1da844 Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-28.png differ diff --git a/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-29.png b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-29.png new file mode 100644 index 00000000000..b4f3f44f95e Binary files /dev/null and b/docs/first-gen/continuous-delivery/harness-git-based/static/onboard-teams-using-git-ops-29.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/2-connectors-providers-and-helm-setup.md b/docs/first-gen/continuous-delivery/helm-deployment/2-connectors-providers-and-helm-setup.md new file mode 100644 index 00000000000..72b09b9c097 --- /dev/null +++ b/docs/first-gen/continuous-delivery/helm-deployment/2-connectors-providers-and-helm-setup.md @@ -0,0 +1,361 @@ +--- +title: 1 - Delegate, Providers, and Helm Setup +description: Set up the Harness Delegate, Connectors, and Cloud Providers for Helm. +sidebar_position: 20 +helpdocs_topic_id: rm03ceguuq +helpdocs_category_id: 7gqn6m2t46 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/lbhf2h71at). This topic describes how to set up the Harness Delegate, Connectors, and Cloud Providers for Helm, and provides some basic Helm setup information. + +Harness includes both Kubernetes and Helm deployments, and you can use Helm charts in both. 
Harness [Kubernetes Deployments](../kubernetes-deployments/kubernetes-deployments-overview.md) allow you to use your own Helm chart (remote or local), and Harness executes the Kubernetes API calls to build everything without Helm and Tiller needing to be installed in the target cluster. See [Helm Charts](https://docs.harness.io/article/t6zrgqq0ny-kubernetes-services#helm_charts). + + + +### Permissions for Connections and Providers + +You connect Docker registries and Kubernetes clusters with Harness using the accounts you have with those providers. The following list covers the permissions required for the Docker, Kubernetes, Helm components. + +* Docker: + + **Read permissions for the Docker repository** - The Docker registry you use as an Artifact Server in Harness must have Read permissions for the Docker repository. + + **List images and tags, and pull images** - The user account you use to connect the Docker registry must be able to perform the following operations in the registry: List images and tags, and pull images. If you have a **Docker Hub** account, you can access the NGINX Docker image we use in this guide. +* Kubernetes Cluster: + + For the **Kubernetes Cluster** or other Cloud Providers, please see [Kubernetes Cluster](https://docs.harness.io/article/whwnovprrb-cloud-providers#kubernetes_cluster) and the Harness [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers) doc. + + For a cluster or provider such as OpenShift, please see [Kubernetes Cluster](https://docs.harness.io/article/whwnovprrb-infrastructure-providers#kubernetes_cluster). +* Helm: + + **URL for the Helm chart** - For this guide, we use a publicly available Helm chart for NGINX from Bitnami, hosted on their Github account. You do not need a Github account. + + **Helm and Tiller** - Helm and Tiller must be installed and running in your Kubernetes cluster. Steps for setting this up are listed below. 
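Before wiring these accounts into Harness, you can dry-run the checks from a terminal. This is a hedged sketch, not an official Harness checklist: the probe commands (`docker image ls`, `kubectl auth can-i`, `helm version`) are assumptions for this guide, and each probe runs only if its CLI is installed.

```shell
# probe TOOL CMD...: run CMD only when TOOL is on the PATH, and report the result.
probe() {
  tool=$1; shift
  if command -v "$tool" >/dev/null 2>&1; then
    if "$@" >/dev/null 2>&1; then
      echo "ok: $tool"
    else
      echo "failed: $tool"
    fi
  else
    echo "skipped: $tool not installed"
  fi
}

# Illustrative probes; swap in the registry, namespace, and verbs you actually use.
probe docker docker image ls                                    # Docker CLI usable
probe kubectl kubectl auth can-i create deployments -n default  # cluster write access
probe helm helm version                                         # client/Tiller versions
```

If a probe prints `failed`, fix the credential or permission before adding the corresponding Artifact Server or Cloud Provider.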
+ +For a list of all of the permissions and network requirements for connecting Harness to providers, see [Delegate Connection Requirements](https://docs.harness.io/article/11hjhpatqz-connectivity-and-permissions-requirements). + +### Harness Kubernetes Delegate + +The Harness Kubernetes Delegate runs in your target deployment cluster and executes all deployment steps, such as artifact collection and kubectl commands. The Delegate makes outbound connections to the Harness Manager only. + +You can install and run the Harness Kubernetes Delegate in any Kubernetes environment, but the permissions needed for connecting Harness to that environment will be different for each environment. + +The simplest method is to install the Harness Delegate in your Kubernetes cluster and then set up the Harness Cloud Provider to use the same credentials as the Delegate. For information on how to install the Delegate in a Kubernetes cluster, see [Kubernetes Cluster](https://docs.harness.io/article/whwnovprrb-cloud-providers#kubernetes_cluster). For an example installation of the Delegate in a Kubernetes cluster in a Cloud Platform, see [Installation Example: Google Cloud Platform](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#installation_example_google_cloud_platform). + +Here is a quick summary of the steps for installing the Harness Delegate in your Kubernetes cluster: + +1. In Harness, click **Setup**, and then click **Harness Delegates**. +2. Click **Download Delegate** and then click **Kubernetes YAML**. +3. In the **Delegate Setup** dialog, enter a name for the Delegate, such as **doc-example**, select a Profile (the default is named **Primary**), and click **Download**. The YAML file is downloaded to your machine.![](./static/2-connectors-providers-and-helm-setup-02.png) +4. Install the Delegate in your cluster. You can copy the YAML file to your cluster any way you choose, but the following steps describe a common method. + 1. 
In a Terminal, connect to the Kubernetes cluster, and then use the same terminal to navigate to the folder where you downloaded the Harness Delegate YAML file. For example, **cd ~/Downloads**. + 2. Extract the YAML file: `tar -zxvf harness-delegate-kubernetes.tar.gz`. + 3. Navigate to the harness-delegate-kubernetes folder that was created: + + `cd harness-delegate-kubernetes` + 4. Paste the following installation command into the Terminal and press enter: + + `kubectl apply -f harness-delegate.yaml` + + You will see the following output (this Delegate is named **doc-example**): + + + ``` + namespace/harness-delegate created + + clusterrolebinding.rbac.authorization.k8s.io/harness-delegate-cluster-admin created + + statefulset.apps/doc-example-lnfzrf created + ``` + Run this command to verify that the Delegate pod was created: + + `kubectl get pods -n harness-delegate` + + You will see output with the status Pending. The Pending status simply means that the cluster is still loading the pod. + Wait a few moments for the cluster to finish loading the pod and for the Delegate to connect to Harness Manager. + In Harness Manager, in the **Harness Delegates** page, the new Delegate will appear. You can refresh the page if you like.![](./static/2-connectors-providers-and-helm-setup-03.png) + +### Connections and Providers Setup + +This section describes how to set up Docker and Kubernetes with Harness, and what the requirements are for using Helm. + +#### Docker Artifact Server + +You can add a Docker repository, such as Docker Hub, as an Artifact Server in Harness. Then, when you create a Harness service, you specify the Artifact Server and artifact(s) to use for deployment. + +For this guide, we will be using a publicly available Docker image of NGINX, hosted on Docker Hub at [hub.docker.com/\_/nginx/](https://hub.docker.com/_/nginx/). You will need to set up or use an existing Docker Hub account to use Docker Hub as a Harness Artifact Server. 
To set up a free account with Docker Hub, see [Docker Hub](https://hub.docker.com/). + +To specify a Docker repository as an Artifact Server, do the following: + +1. In Harness, click **Setup**. +2. Click **Connectors**. The **Connectors** page appears. +3. Click **Artifact Servers**, and then click **Add Artifact Server**. The **Artifact Servers** dialog appears. +4. In **Type**, click **Docker Registry**. The dialog changes for the Docker Registry account. +5. In **Docker Registry URL**, enter the URL for the Docker Registry (for Docker Hub, **https://registry.hub.docker.com/v2/**). +6. Enter a username and password for the provider (for example, your **Docker Hub** account). +7. Click **SUBMIT**. The artifact server is displayed. + +![](./static/2-connectors-providers-and-helm-setup-04.png) + +##### Single GCR Docker Registry across Multiple Projects + +In this document, we perform a simple setup using Docker Registry. Another common artifact server for Kubernetes deployments is GCR (Google Container Registry), also supported by Harness. + +An important note about using GCR is that if your GCR and target GKE Kubernetes cluster are in different GCP projects, Kubernetes might not have permission to pull images from the GCR project. For information on using a single GCR Docker registry across projects, see [Using single Docker repository with multiple GKE projects](https://medium.com/google-cloud/using-single-docker-repository-with-multiple-gke-projects-1672689f780c) from Medium and the [Granting users and other projects access to a registry](https://cloud.google.com/container-registry/docs/access-control#granting_users_and_other_projects_access_to_a_registry) section from *Configuring access control* by Google. 
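If granting cross-project access at the registry is not an option, the cluster can also authenticate explicitly. Below is a hedged sketch using `imagePullSecrets` (the project, image, and secret names are placeholders, not values from this guide); the secret itself would be created from a GCP service-account key that has read access to the registry project.

```yaml
# Hypothetical pod spec: "other-project", "app", and "gcr-pull-secret"
# are placeholders. The secret referenced here would hold registry
# credentials created from a service-account key with read access.
apiVersion: v1
kind: Pod
metadata:
  name: gcr-pull-example
spec:
  containers:
    - name: app
      image: gcr.io/other-project/app:latest
  imagePullSecrets:
    - name: gcr-pull-secret
```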
+ +#### Kubernetes Cluster + +For a Cloud Provider in Harness, you can specify a Kubernetes-supporting Cloud platform, such as Google Cloud Platform and OpenShift, or your own Kubernetes Cluster, and then define the deployment environment for Harness to use. + +For this guide, we will use the **Kubernetes Cluster** Cloud Provider. If you use GCP, see [Creating a Cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster) from Google. + +The specs for the Kubernetes cluster you create will depend on the microservices or apps you will deploy to it. To give you guidance on the node specs for the Kubernetes Cluster used in this guide, here is a node pool created for a Kubernetes cluster in GCP: + +![](./static/2-connectors-providers-and-helm-setup-05.png) + +For Harness deployments, a Kubernetes cluster requires the following: + +* Credentials for the Kubernetes cluster in order to add it as a Cloud Provider. If you set up GCP as a cloud provider using a GCP user account, that account should also be able to configure the Kubernetes cluster on the cloud provider. +* The kubectl command-line tool must be configured to communicate with your cluster. +* A kubeconfig file for the cluster. The kubeconfig file configures access to a cluster. It does not need to be named **kubeconfig**. + +For more information, see [Accessing Clusters](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/) and [Configure Access to Multiple Clusters](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) from Kubernetes. + +##### Set Up a Kubernetes Cluster Cloud Provider + +To set up a **Kubernetes Cluster** Cloud Provider, do the following: + +1. In Harness, click **Setup**. +2. Click **Cloud Providers**. +3. Click **Add Cloud Provider**. The **Cloud Provider** dialog opens. +In this example, we will add a **Kubernetes Cluster** Cloud Provider, but there are several other provider options. 
In some cases, you will need to provide access keys in order for the delegate to connect to the provider. +4. In **Type**, select **Kubernetes Cluster**. +5. In **Display Name**, enter a name for the Cloud Provider. +6. Click the option **Inherit from selected Delegate** to use the credentials of the Delegate you installed in your cluster. +7. In **Delegate Name**, select the name of the Delegate installed in your cluster. When you are done, the dialog will look something like this:![](./static/2-connectors-providers-and-helm-setup-06.png) +8. Click **SUBMIT**. The Kubernetes Cluster Cloud Provider is added. + +#### Helm Setup + +There are only two Helm requirements for Harness to deploy to your Kubernetes cluster: + +* Helm and Tiller installed and running in one pod in your Kubernetes cluster. +* A Helm chart hosted on an accessible server. The server may allow anonymous access. + +The Helm version must match the Tiller version running in the cluster (use `helm version` to check). If Tiller is not the latest version, then upgrade Tiller to the latest version (`helm init --upgrade`), or match the Helm version with the Tiller version. You can set the Helm version in the Harness Delegate YAML file using the `HELM_DESIRED_VERSION` environment property. Include the `v` with the version. For example, `HELM_DESIRED_VERSION: v2.13.0`. For more information, see [Helm and the Kubernetes Delegate](#helm_and_the_kubernetes_delegate) in this document. + +##### Set Up Helm on a Kubernetes Cluster + +Setting up Helm and Tiller on a Kubernetes cluster is a simple process. Log into the cluster (for example, using the **Google Cloud Shell**), and use the following commands to set up Helm. + + +``` +# Add the Helm version that you want to install +HELM_VERSION=v2.14.0 +# v2.13.0 +# v2.12.0 +# v2.11.0 + +export DESIRED_VERSION=${HELM_VERSION} + +echo "Installing Helm $DESIRED_VERSION ..." 
+ +curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash + +# If Tiller is already installed in the cluster +helm init --client-only + +# If Tiller is not installed in the cluster +# helm init +``` +The easiest method for installing Helm on the Delegate cluster is to use a Delegate Profile. For more information, see [Delegate Profiles](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#delegate_profiles). Here is an example of a shell session with the commands and the output. This example also adds a namespace for RBAC purposes: + + +``` +j_doe@cloudshell:~ (project-121212)$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash + % Total % Received % Xferd Average Speed Time Time Time Current + Dload Upload Total Spent Left Speed +100 7230 100 7230 0 0 109k 0 --:--:-- --:--:-- --:--:-- 110k +Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.11.0-linux-amd64.tar.gz +Preparing to install helm and tiller into /usr/local/bin +helm installed into /usr/local/bin/helm +tiller installed into /usr/local/bin/tiller +Run 'helm init' to configure helm. + +j_doe@cloudshell:~ (project-121212)$ kubectl --namespace kube-system create sa tiller +serviceaccount "tiller" created + +j_doe@cloudshell:~ (project-121212)$ kubectl create clusterrolebinding tiller \ +> --clusterrole cluster-admin \ +> --serviceaccount=kube-system:tiller +clusterrolebinding.rbac.authorization.k8s.io "tiller" created + +j_doe@cloudshell:~ (project-121212)$ helm init --service-account tiller +$HELM_HOME has been configured at /home/john_doe/.helm. +Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster. +Happy Helming! + +j_doe@cloudshell:~ (project-121212)$ helm repo update +Hang tight while we grab the latest from your chart repositories... +...Skip local chart repository +...Successfully got an update from the "stable" chart repository +Update Complete. 
⎈ Happy Helming!⎈ + +j_doe@cloudshell:~ (project-121212)$ kubectl get deploy,svc tiller-deploy -n kube-system +NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE +deployment.extensions/tiller-deploy 1 1 1 1 20s +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/tiller-deploy ClusterIP 10.63.251.235 <none> 44134/TCP 20s +``` +If you are using TLS for communication between Helm and Tiller, ensure that you use the `--tls` parameter with your commands. For more information, see **Command Flags** in [Helm Deploy Step](#helm_deploy_step) in this document, and see [Using SSL Between Helm and Tiller](https://docs.helm.sh/using_helm/#using-ssl-between-helm-and-tiller) from Helm, and the section **Securing your Helm Installation** in that document. + +##### Helm Chart Example + +In this guide, we will be using a simple Helm chart template for NGINX created by Bitnami. The Helm chart sets up Kubernetes for a Docker image of NGINX. There are three main files in the Helm chart template: + +* **svc.yaml** - Defines the manifest for creating a service endpoint for your deployment. +* **deployment.yaml** - Defines the manifest for creating a Kubernetes deployment. +* **vhost.yaml** - ConfigMap used to store non-confidential data in key-value pairs. + +The Helm chart is pulled from the [Bitnami Github repository](https://github.com/bitnami/charts/tree/master/bitnami/nginx). You can view all the chart files there, but the key templates are included below. + +Here is an svc.yaml file: + + +``` +apiVersion: v1 +kind: Service +metadata: + name: {{ template "fullname" . }} + labels: + app: {{ template "fullname" . }} + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + release: "{{ .Release.Name }}" + heritage: "{{ .Release.Service }}" +spec: + type: {{ .Values.serviceType }} + ports: + - name: http + port: 80 + targetPort: http + selector: + app: {{ template "fullname" . 
}} +``` +Here is a deployment.yaml file: + + +``` +apiVersion: extensions/v1beta1 +kind: Deployment +metadata: + name: {{ template "fullname" . }} + labels: + app: {{ template "fullname" . }} + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + release: "{{ .Release.Name }}" + heritage: "{{ .Release.Service }}" +spec: + selector: + matchLabels: + app: {{ template "fullname" . }} + release: "{{ .Release.Name }}" + replicas: 1 + template: + metadata: + labels: + app: {{ template "fullname" . }} + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + release: "{{ .Release.Name }}" + heritage: "{{ .Release.Service }}" + spec: + {{- if .Values.image.pullSecrets }} + imagePullSecrets: + {{- range .Values.image.pullSecrets }} + - name: {{ . }} + {{- end}} + {{- end }} + containers: + - name: {{ template "fullname" . }} + image: "{{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}" + imagePullPolicy: {{ .Values.image.pullPolicy | quote }} + ports: + - name: http + containerPort: 8080 + livenessProbe: + httpGet: + path: / + port: http + initialDelaySeconds: 30 + timeoutSeconds: 5 + failureThreshold: 6 + readinessProbe: + httpGet: + path: / + port: http + initialDelaySeconds: 5 + timeoutSeconds: 3 + periodSeconds: 5 + volumeMounts: + {{- if .Values.vhost }} + - name: nginx-vhost + mountPath: /opt/bitnami/nginx/conf/vhosts + {{- end }} + volumes: + {{- if .Values.vhost }} + - name: nginx-vhost + configMap: + name: {{ template "fullname" . }} + {{- end }} +``` +##### Helm Chart Repository + +A Helm chart repository is an HTTP server that houses an **index.yaml** file and, if needed, packaged charts. For details, see [The Chart Repository Guide](https://helm.sh/docs/topics/chart_repository/) from Helm. + +You can add a Helm Chart Repository as a Harness Artifact Server and then use it in Harness Kubernetes and Helm Services. 
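Concretely, running `helm repo index` over a directory of packaged charts is what generates the **index.yaml**. A minimal index for a chart like the NGINX one above might look like the following sketch, where the tgz URL, timestamps, and digest are placeholders rather than real values:

```yaml
# Illustrative index.yaml: "entries" maps each chart name to its available
# versions. The URL, created time, and digest below are placeholders.
apiVersion: v1
entries:
  nginx:
    - name: nginx
      version: 1.0.1
      appVersion: "1.14.0"
      description: NGINX web server
      urls:
        - https://example.com/charts/nginx-1.0.1.tgz
      created: "2019-05-01T00:00:00Z"
      digest: 0000000000000000000000000000000000000000000000000000000000000000
generated: "2019-05-01T00:00:00Z"
```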
+ +![](./static/2-connectors-providers-and-helm-setup-07.png) + +For steps on adding a Helm Chart Repository as a Harness Artifact Server, see [Helm Repository](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server#helm_repository). + +### Helm and the Kubernetes Delegate + +You can set the Helm version for the Harness Kubernetes Delegate to use. + +The Harness Kubernetes Delegate is configured and run using a YAML file that you download from Harness, as described in [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation). You can edit the YAML file and set the desired Helm version to use with the **HELM\_DESIRED\_VERSION** parameter. + +Here is a sample of the Kubernetes delegate YAML file with the **HELM\_DESIRED\_VERSION** parameter included: + + +``` +apiVersion: v1 +kind: Namespace +metadata: + name: harness-delegate +... +apiVersion: apps/v1beta1 +kind: StatefulSet +metadata: + ... +spec: + ... + template: + metadata: + labels: + harness.io/app: harness-delegate + harness.io/account: yvcrcl + harness.io/name: harness-delegate + spec: + containers: + - image: harness/delegate:latest + imagePullPolicy: Always + name: harness-delegate-instance + resources: + limits: + cpu: "1" + memory: "6Gi" + env: + ... + - name: HELM_DESIRED_VERSION + value: "" + restartPolicy: Always +``` +You can find the Helm versions to use on [Github](https://github.com/helm/helm/tags). + +### Next Step + +* [2 - Helm Services](2-helm-services.md) + diff --git a/docs/first-gen/continuous-delivery/helm-deployment/2-helm-services.md b/docs/first-gen/continuous-delivery/helm-deployment/2-helm-services.md new file mode 100644 index 00000000000..69df920dcc3 --- /dev/null +++ b/docs/first-gen/continuous-delivery/helm-deployment/2-helm-services.md @@ -0,0 +1,252 @@ +--- +title: 2 - Helm Services +description: Create a Harness Service for Helm. 
+sidebar_position: 30 +helpdocs_topic_id: svso08ogpb +helpdocs_category_id: 7gqn6m2t46 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/lbhf2h71at). This topic describes how to create a Harness Application and add a Service that uses a Docker image and Helm chart for a Kubernetes deployment. + +Harness includes both Kubernetes and Helm deployments, and you can use Helm charts in both. Harness [Kubernetes Deployments](../kubernetes-deployments/kubernetes-deployments-overview.md) allow you to use your own Helm chart (remote or local), and Harness executes the Kubernetes API calls to build everything without Helm and Tiller needing to be installed in the target cluster. See [Helm Charts](https://docs.harness.io/article/t6zrgqq0ny-kubernetes-services#helm_charts). + + +### Release Name Required + +When using a Native Helm deployment in Harness, ensure that the workload-related manifests include a Release Name in their metadata. For an example, see this [StatefulSet spec from Artifactory](https://github.com/helm/charts/blob/master/stable/artifactory/templates/artifactory-statefulset.yaml). + +You can see that the release name is under: + +* metadata.labels +* spec.selector.matchLabels +* spec.template.metadata.labels + +Harness requires a release name for tracking different deployments as well as tracking resources deployed under that release. + +The release name must be unique across the cluster. + +### Create the Harness Application + +An application in Harness represents a logical group of one or more entities, including Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CI/CD. 
For more information, see [Application Checklist](https://docs.harness.io/article/bucothemly-application-configuration). + +To add the Harness Application and service, do the following: + +1. In **Harness**, click **Setup**. +2. Click **Add Application**. The **Application** dialog appears. +3. Give your application a name that describes your microservice or app. For the purposes of this guide, we use the name **Docker-Helm**. +4. Click **SUBMIT**. The new application is added. +5. Click the application name to open the application. The application entities are displayed. + + ![](./static/2-helm-services-36.png) + +### Add the Helm Service to the Application + +1. In your new application, click **Services**. The **Services** page appears. +2. In the **Services** page, click **Add Service**. The **Service** dialog appears. +3. In **Name**, enter a name for your microservice. For example, if your microservice were the checkout service in a Web application, you could name it **checkout**. For this guide, we will use **Docker-Helm**. +4. In **Deployment Type**, select **Native** **Helm**. Harness will create a service that is designed for Helm deployments. When you are finished, the dialog will look like this:![](./static/2-helm-services-37.png) +5. Select **Enable Helm V3**. This ensures that you are using the latest Helm settings. +6. Click **SUBMIT**. The new service is added. Let's look at where Docker and Helm are configured in the Service page: + +![](./static/2-helm-services-38.png) + +The following table describes the different sections. + + + +| | | | +| --- | --- | --- | +| **Component** | **Section in Service** | **Description** | +| Docker | **Artifact Source** | You add a pointer to the Docker image location you want to deploy. | +| Helm | **Chart Specification** | You enter the Helm chart repo and chart to use.
Typically, this is the only Helm configuration needed in a Harness service.
This is the easiest way to point to your chart, but you can add the chart info in **Values YAML Override** instead. | +| Helm | **Values YAML Override** | You can enter the contents of a Helm values.yaml file here. This file contains the default values for a chart.
Values entered here will overwrite values in the values.yaml entered in **Chart Specification**.
If you want to point to your Helm chart here, you can simply add the YAML as shown in the following example. | + +#### YAML Example + +``` +harness: +  helm: +    chart: +      name: nginx +      version: 1.0.1 +      url: https://charts.bitnami.com/bitnami +``` + + +### Add the Docker Artifact Source + +1. In the new service, click **Add Artifact Source**, and select **Docker Registry**. There are a number of artifact sources you can use. For more information, see [Add a Docker Image Server](https://docs.harness.io/article/gxv9gj6khz-add-a-docker-image-service#add_a_docker_image_service). The **Docker Registry** dialog appears. + + ![](./static/2-helm-services-39.png) +2. In **Name**, let Harness generate a name for the source. +3. In **Source Server**, select the Artifact Server you added earlier in this guide. We are using **Docker Hub** in this guide. +4. In **Docker Image Name**, enter the image name. Official images in public repos such as Docker Hub need the label **library**. For example, **library/nginx**. For this guide, we will use Docker Hub and the publicly available NGINX at **library/nginx**. +5. Click **SUBMIT**. The Artifact Source is added. + +![](./static/2-helm-services-40.png) + +### Add the Helm Chart + +As explained earlier, you have two options when entering the Helm chart info: the **Chart Specifications** or the **Values YAML**. For this guide, we will use the **Chart Specifications**. + +1. Click **Chart Specifications**. The **Chart Specification** settings appear. +2. In **Location**, select **Helm Repository** or **Source Repository**. + +For **Source Repository**, do the following: + +1. In **Source Repository**, select a Git SourceRepo Provider for the Git repo you added to your Harness account. For more information, see [Add SourceRepo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). +2. In **Commit ID**, select **Latest from Branch** or **Specific Commit ID**. +3. 
In **Branch** or **Commit ID**, enter the branch or commit ID for the remote repo. +4. In **File/Folder path**, enter the repo file and folder path. + +Helm [chart dependencies](https://helm.sh/docs/topics/charts/) are not supported in **Source Repository**. If your Helm chart in a Git repo uses chart dependencies, you will need to move to the **Helm Repository** option. + +For **Helm Repository**, do the following: + +1. In **Helm Repository**, select the Helm Chart Repository you added as a Harness Artifact Server. For more information, see [Helm Repository](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server#helm_repository). + + If you are using Google Cloud Storage for your Helm repo, you will see a **Base Path** setting for the bucket. See [Google Cloud Storage (GCS)](https://harness.helpdocs.io/article/whwnovprrb-cloud-providers#google_cloud_storage_gcs) for details on the policies required. +2. In **Base Path**, enter the path to the charts' bucket folder or a Workflow variable expression. + 1. If you use a charts' bucket folder, simply enter the name of the folder. Whether you need to specify a single folder (e.g. `charts`) or a folder path (e.g. `helm/charts`) depends on the Helm Chart Repository you added as a Harness Artifact Server. + 2. If you use a Workflow variable expression, you can enter the expression as part of the path. For example, `/Myservice/Chart/${workflow.variables.branchName}/` or simply `${workflow.variables.chartFolder}`. For more information, see [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables) and [Add Workflow Variables](https://docs.harness.io/article/m220i1tnia-workflow-configuration#add_workflow_variables). + 3. If the chart is in the **root** folder of the repository location set in the Helm Chart Repository you added as a Harness Artifact Server, leave **Base Path** empty. +3. In **Chart Name**, enter the name of the chart in that repo. +4. 
In **Chart Version**, enter the chart version to use. This is found in the **Chart.yaml** **version** label. If you leave this field empty, Harness gets the latest chart. + +Here are a couple of examples using GCS and S3: + +![](./static/2-helm-services-41.png) + +Here is an example using a Workflow variable expression. You can see the variable created in the Workflow's **Workflow Variables** section, referenced using an expression in **Chart Specification**, and then a value provided for the variable in the deployment dialog that matches the chart folder's name. + +![](./static/2-helm-services-42.png) + +1. To use Helm command flags, click **Enable Command Flags**, and then enter the command to use. +2. Click **SUBMIT**. The chart specification is added to the service. + +![](./static/2-helm-services-43.png) + +When you deploy a Workflow using a Harness Kubernetes Service set up with a Helm Repository, you will see Harness fetch the chart: + +[![](./static/2-helm-services-44.png)](./static/2-helm-services-44.png) + +Next, you will see Harness initialize using the chart: + +[![](./static/2-helm-services-46.png)](./static/2-helm-services-46.png) + +#### Release Name Required + +Ensure that the workload-related manifests include a Release Name in their metadata. + +See [Release Name Required](#release_name_required). + +#### Other Options + +You can specify other options in the **Chart Specification** dialog by choosing **None** in the **Helm Repository** field, and then using the remaining fields as described in the following table. + +These options are provided for backwards compatibility, and it is preferable that you use either the **Helm Repository** or **Source Repository** options. + +| | | | +| --- | --- | --- | +| **Installation Method** | **Example** | **Field Values** | +| Chart reference. 
| `helm install stable/nginx` | **Chart Name:** stable/nginx | +| Path to a packaged chart. In this method, the chart file is located on the same pod as the Harness Delegate. You can add a Delegate Profile that copies the chart from a repo to the pod. For more information, see [Delegate Profiles](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#delegate_profiles). | `helm install ./nginx-1.2.3.tgz` | **Chart Name:** *dir\_path\_to\_delegate*/nginx | +| Path to an unpacked chart directory. In this method, the chart file is located on the same pod as the Harness Delegate. You can add a Delegate Profile that copies the chart from a repo to the pod. For more information, see [Delegate Profiles](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#delegate_profiles). | `helm install ./nginx` | **Chart Name:** *dir\_path\_to\_delegate*/nginx | +| Absolute URL. | `helm install https://example.com/charts/nginx-1.2.3.tgz` | **Chart Name:** https://example.com/charts/nginx-1.2.3.tgz | + +For Helm, that's it. You don't have to do any further configuration to the service. Harness will use the chart you specified to configure the Kubernetes cluster. + +File-based repo triggers are a powerful feature of Harness that lets you set a Webhook on your repo to trigger a Harness Workflow or Pipeline when a Push event occurs in the repo. For more information, see [File-based Repo Triggers](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2#file_based_repo_triggers). + +Now you can define the deployment environment and workflow for the deployment. + +### Option: Helm Command Flags + +You can extend the Helm commands that Harness runs when deploying your Helm chart. + +Use **Enable Command Flags** to have Harness run Helm-specific commands and their options as part of preprocessing. All the commands you select are run before `helm install/upgrade`. + +Click **Enable Command Flags**, and then select commands from the **Command Flag Type** dropdown. 
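To illustrate the effect (a conceptual sketch only; the flag, repo, and release names here are placeholders, not Harness's literal execution), a selected command flag extends the preprocessing Helm commands that run ahead of the install or upgrade:

```
# Conceptual sketch: a --verify flag applied at chart-fetch time runs as
# preprocessing, before the usual install/upgrade that Harness performs.
# "example-repo" and "my-release" are placeholder names.
helm fetch --verify example-repo/nginx --version 1.0.1
helm upgrade --install my-release example-repo/nginx --version 1.0.1
```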
+ +Next, in **Input**, add any options for the command. + +The `--debug` option is not supported. + +For Native Helm deployments using Helm charts (as opposed to Kubernetes deployments using Helm charts), multiple commands are supported. + +You will see the outputs for the commands you select in the Harness deployment logs. The output will be part of pre-processing and appear before `helm install/upgrade`. + +If you use Helm commands in the Harness Service and in a Workflow deploying that Service, the Helm commands in the Harness Service override the commands in the Workflow. + +#### Harness Variable Expressions are Supported + +You can use [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables) in any of the command options settings. For example, [Service Config variables](https://docs.harness.io/article/q78p7rpx9u-add-service-level-config-variables) and [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). + +### Spec Requirements for Steady State Check and Verification + +Harness requires that the `release: {{ .Release.Name }}` label be used in **every** Kubernetes spec to ensure that Harness can identify a release, check its steady state, and perform verification on it. + +Ensure that the `release: {{ .Release.Name }}` label is in every Kubernetes object's manifest. If you omit the `release: {{ .Release.Name }}` label from a manifest, Harness cannot track it. The [Helm built-in Release object](https://helm.sh/docs/chart_template_guide/#built-in-objects) describes the release and allows Harness to identify each release. For this reason, the `release: {{ .Release.Name }}` label must be used in your Kubernetes spec. For example: + + +``` +apiVersion: v1 +kind: Service +metadata: +  name: {{ template "todolist.fullname" . }} +  namespace: {{ .Values.namespace }} +  labels: +    app: {{ template "todolist.name" . }} +    chart: {{ template "todolist.chart" . }} +    release: {{ .Release.Name }} +    heritage: {{ .Release.Service }} +... +``` +You will provide a name to be used in place of `{{ .Release.Name }}` in the Helm Workflow you create in Harness. The Workflow contains the **Helm Deploy** step, where you can enter a release name to replace `{{ .Release.Name }}` at runtime: + +![](./static/2-helm-services-48.png) + +For information on the Helm Deploy step, see [Helm Deploy](4-helm-workflows.md). + +### Values YAML Override + +In the **Values YAML Override** section, you can enter the YAML for your values.yaml file. The values.yaml file contains the default values for a chart. You will typically have this file in the repo for your chart, but you can add it in the Harness service instead. + +The **Values YAML** dialog has the following placeholders that save you from having to enter some variables: + +* **${NAMESPACE}** - Replaced with the Kubernetes namespace, such as **default**. You will specify the namespace of the cluster in the **Namespace** setting when adding the Harness Environment infrastructure details later, and the placeholder will be replaced with that namespace. +* **${DOCKER\_IMAGE\_NAME}** - Replaced with the Docker image name. +* **${DOCKER\_IMAGE\_TAG}** - Replaced with the Docker image tag. + +For information about how values from different sources are compiled at runtime, see [Helm Values Priority](4-helm-workflows.md#helm-values-priority). + +You can override the values using the **Inline**, **Remote**, or **From Helm Repository** options. + +#### Inline Override + +Enter the YAML you want to use to override the Values YAML file used in **Chart Specification**. + +#### Remote Override + +In **Configuration**, in **Values YAML Override**, click the edit icon. + +For **Remote**, do the following: + +1. In **Source Repository**, select the Git repo you added as a [Source Repo Provider](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). +2. 
For **Commit ID**, select either **Latest from Branch** and enter the branch name, or **Specific Commit ID** and enter the **commit ID**. +3. In **File path**, enter the path to the values.yaml file in the repo, including the repo name, like **helm/values.yaml**. + +#### From Helm Repository Override + +If you are using the Helm Chart from Helm Repository option in **Manifests**, you can override the chart in **Manifests** using one or more values YAML files inside the Helm chart. + +In **Configuration**, in **Values YAML Override**, click the edit icon. + +In **Store Type**, select **From Helm Repository**. + +In **File Path(s)**, enter the file path to the override values YAML file. + +Multiple files can be used. When you enter the file paths, separate the paths using commas. + +Paths listed later are given higher priority. + +![](./static/2-helm-services-49.png) + +### Next Step + +* [3 - Helm Environments](3-helm-environments.md) + diff --git a/docs/first-gen/continuous-delivery/helm-deployment/3-helm-environments.md b/docs/first-gen/continuous-delivery/helm-deployment/3-helm-environments.md new file mode 100644 index 00000000000..ba5068d9c8e --- /dev/null +++ b/docs/first-gen/continuous-delivery/helm-deployment/3-helm-environments.md @@ -0,0 +1,79 @@ +--- +title: 3 - Helm Environments +description: Add a Harness Environment where Harness will deploy your Docker image. +sidebar_position: 40 +helpdocs_topic_id: 134kx1k89d +helpdocs_category_id: 7gqn6m2t46 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/lbhf2h71at). After you have set up the Harness [Service](2-helm-services.md) for your Helm deployment, you can add a Harness Environment that lists the Cloud Provider and Kubernetes cluster where Harness will deploy your Docker image. 
+ +Harness includes both Kubernetes and Helm deployments, and you can use Helm charts in both. Harness [Kubernetes Deployments](../kubernetes-deployments/kubernetes-deployments-overview.md) allow you to use your own Helm chart (remote or local), and Harness executes the Kubernetes API calls to build everything without Helm and Tiller needing to be installed in the target cluster. See [Helm Charts](https://docs.harness.io/article/t6zrgqq0ny-kubernetes-services#helm_charts). + +### Create a New Harness Environment + +You will define the Infrastructure Definition for Harness to use when deploying the Docker image. + +The following procedure creates a Harness environment where you can deploy your Docker image. For this guide, we will be using a Kubernetes cluster on Google Cloud Platform for the deployment environment. + +To add a Harness environment, do the following: + +1. In your Harness application, click **Environments**. The **Environments** page appears. +2. Click **Add Environment**. The **Environment** settings appear. +![](./static/3-helm-environments-08.png) +3. In **Name**, enter a name that describes the deployment environment, for example, **GCP-K8S-Helm**. +4. In **Environment Type**, select **Non-Production**. +5. Click **SUBMIT**. The new Environment page appears. + +![](./static/3-helm-environments-09.png) + +### Add an Infrastructure Definition + +Infrastructure Definitions specify the target deployment infrastructure for your Harness Services, and the specific infrastructure details for the deployment, like VPC settings. + +You define the Kubernetes cluster to use for deployment as an Infrastructure Definition. For this guide, we will use the GCP Cloud Provider you added and the Kubernetes cluster with Helm installed. + +To add the Infrastructure Definition, do the following: + +1. In the Harness Environment, click **Add Infrastructure Definition**. The **Infrastructure Definition** dialog appears.![](./static/3-helm-environments-10.png) +2. 
In **Name**, enter the name you will use to select this Infrastructure Definition when you create a Workflow. +3. In **Cloud Provider Type**, select **Kubernetes Cluster**. +4. In **Deployment Type**, select **Helm**. +5. Click **Use Already Provisioned Infrastructure**. If you were using a Harness [Infrastructure Provisioner](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner), you would select **Map Dynamically Provisioned Infrastructure**. +6. In **Cloud Provider**, select the Cloud Provider you added earlier, such as **Google Cloud Platform**. +7. In **Cluster Name**, select the Kubernetes cluster where you want to deploy. This list is populated using the Cloud Provider you selected. +8. In **Namespace**, enter the name of the cluster namespace you want to use. As we noted in [Values YAML Override](2-helm-services.md#values-yaml-override), you can enter a `${NAMESPACE}` variable in your Service and Harness will replace it with the value you enter in **Namespace** at runtime. +9. In **Scope to specific Services**, select the Harness Service you created earlier. + +The Infrastructure Definition will look something like this:![](./static/3-helm-environments-11.png) +10. Click **Submit**. The new Infrastructure Definition is added to the Harness Environment. + +![](./static/3-helm-environments-12.png) + +That is all you have to do to set up the deployment Environment in Harness. + +Now that you have the Service and Environment set up, you can create the deployment Workflow in Harness. + +### Override Service Helm Values + +In the Harness environment, you can override Helm values specified in the Harness service. This can ensure that the Helm values used in the environment are consistent even if multiple services with different Helm values use that environment. + +To override a service Helm value, do the following: + +1. In the Harness environment, click **Add Configuration Overrides**. The **Service Configuration Override** dialog appears. 
+2. In **Service**, click the name of the service you want to override, or click **All Services**. The dialog changes to provide **Override Type** options. +3. Click **Values YAML**. For **Local**, a text area appears where you can paste an entire values.yaml file or simply override one or more values, such as a Docker image name. For the variables you can use in the text, see [Values YAML Override](2-helm-services.md#values-yaml-override). + +For **Remote**, do the following: + 1. In **Source Repository**, select the Git repo you added as a [Source Repo Provider](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). + 2. For **Commit ID**, select either **Use latest from Branch** and enter the branch name, or **Use specific commit ID** and enter the **commit ID**. + 3. In **File path**, enter the path to the values.yaml file in the repo, including the repo name, like **helm/values.yaml**. + +For more information, see [Helm Values Priority](4-helm-workflows.md#helm-values-priority). + +### Next Step + +* [4 - Helm Workflows and Deployments](4-helm-workflows.md) + diff --git a/docs/first-gen/continuous-delivery/helm-deployment/4-helm-workflows.md b/docs/first-gen/continuous-delivery/helm-deployment/4-helm-workflows.md new file mode 100644 index 00000000000..ebe14487ecc --- /dev/null +++ b/docs/first-gen/continuous-delivery/helm-deployment/4-helm-workflows.md @@ -0,0 +1,442 @@ +--- +title: 4 - Helm Workflows and Deployments +description: Create a Workflow for a Helm deployment. +sidebar_position: 50 +helpdocs_topic_id: m8ra49bqd5 +helpdocs_category_id: 7gqn6m2t46 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). 
Switch to [NextGen](https://docs.harness.io/article/lbhf2h71at). Once you have added the Harness Service and Environment for Helm, you can add a Harness Workflow to manage how your Harness Service is deployed, verified, and rolled back, among other important phases. + +Harness includes both Kubernetes and Helm deployments, and you can use Helm charts in both. Harness [Kubernetes Deployments](../kubernetes-deployments/kubernetes-deployments-overview.md) allow you to use your own Helm chart (remote or local), and Harness executes the Kubernetes API calls to build everything without Helm and Tiller (for Helm v2) needing to be installed in the target cluster. See [Helm Charts](https://docs.harness.io/article/t6zrgqq0ny-kubernetes-services#helm_charts). + +### Create the Workflow + +Helm deployments use a Basic workflow that deploys the Docker image to the Kubernetes cluster using the Helm chart. + +For more information about workflows, see [Add a Workflow](https://docs.harness.io/article/m220i1tnia-workflow-configuration). + +To add the workflow, do the following: + +1. In your Harness application, click **Workflows**. + + ![](./static/4-helm-workflows-18.png) + +2. On the **Workflows** page, click **Add Workflow**. The **Workflow** dialog appears. + ![](./static/4-helm-workflows-19.png) +3. In **Name**, give your workflow a name that describes its purpose, such as **NGINX-K8s-Helm**. +4. In **Workflow Type**, select **Basic Deployment**. Helm deployments are Basic deployments, unlike Canary or Blue/Green. They are single-phase deployments where each deployment is installed or upgraded. You can create multiple Helm deployments and add them to a Harness pipeline. For more information, see [Add a Pipeline](https://docs.harness.io/article/zc1u96u6uj-pipeline-configuration). +5. In **Environment**, select the environment you created earlier in this guide. +6. In **Service**, select the service you added earlier in this guide. +7. 
In **Infrastructure Definition**, select the Infrastructure Definition you created earlier in this guide. When you are done, the dialog will look something like this: + + ![](./static/4-helm-workflows-20.png) + +8. Click **SUBMIT**. The workflow is displayed. + +![](./static/4-helm-workflows-21.png) + +Harness creates all the steps needed to deploy the service to the target infrastructure. + +You can see that one workflow step, **Helm Deploy**, is incomplete. + +![](./static/4-helm-workflows-22.png) + +Steps are marked incomplete if they need additional input from you. To complete this step, see the following section. + +#### Helm Deploy Step + +In your workflow, click the **Helm Deploy** step. The **Configure Helm Deploy** settings appear. + +The **Helm Deploy** step has a few options that you can use to manage how Helm is used in the deployment. + +You can also templatize the **Git Connector**, **Branch Name**, and **File Path** settings. To templatize, perform the following steps: + +1. Click the **[T]** icon next to the setting. The field values are replaced by variables. +2. Click **Submit**. The new variables are displayed under **Workflow Variables**. +3. To see how the Workflow variables are used, click **Deploy**. The **Start New Deployment** dialog appears, displaying the variables you created in the **Workflow Variables** section. + +##### Helm Release Name + +Harness requires that the `release: {{ .Release.Name }}` label be used in every Kubernetes spec to ensure that Harness can identify a release, check its steady state, and perform verification on it. For details, see [Spec Requirements for Steady State Check and Verification](2-helm-services.md#spec-requirements-for-steady-state-check-and-verification). In the **Helm Deploy** step, you need to add a Helm release name. During deployment, this release name replaces the value of the `release: {{ .Release.Name }}` label in the Kubernetes spec: + +![](./static/4-helm-workflows-23.png) + +What is a release name? 
From the [Helm docs](https://docs.helm.sh/using_helm/#using-helm): + + +> A Release is an instance of a chart running in a Kubernetes cluster. One chart can often be installed many times into the same cluster. And each time it is installed, a new release is created. Consider a MySQL chart. If you want two databases running in your cluster, you can install that chart twice. Each one will have its own release, which will in turn have its own release name. + +Since Helm requires release names to be unique across the cluster, Harness generates a unique identifier with the variable `${infra.helm.shortId}`. You can use this variable as a prefix or suffix for the release name. We recommend the following release name: + + +``` +${service.name}-${env.name}-${infra.helm.shortId} +``` +If the service name is **NGINX** and the environment name is **GCP-K8s-Helm**, then the release name will be **nginx-gcp-k8s-helm-rjogwmu**, where **rjogwmu** is generated by `${infra.helm.shortId}`. + +##### Command Flags (Deprecated) + +Previously, in **Helm Deploy**, you could enter Helm command flag(s) that you wanted applied to **every Helm command** executed at deployment runtime. + +Now this feature has been expanded and migrated to the Harness Kubernetes or Native Helm Service. + +If you used this feature in Helm Deploy previously, your Helm command flags have been migrated to the Native Helm Service used by the Workflow. + +For steps on using these commands, see the following topics: + +* Kubernetes: + + [Use a Helm Repository with Kubernetes](../kubernetes-deployments/use-a-helm-repository-with-kubernetes.md) + + [Link Resource Files or Helm Charts in Git Repos](../kubernetes-deployments/link-resource-files-or-helm-charts-in-git-repos.md) +* Native Helm: + + [Helm Services](2-helm-services.md) + +##### Deployment Steady State Timeout + +For **Deployment steady state timeout**, you can leave the default of **10** minutes. 
It is unlikely that deployment would ever take longer than 10 minutes. + +##### Git Connector + +For **Git connector**, you can specify values or a full **values.yaml** file in a Git repo, and Harness will fetch the values at runtime. + +This will override the values or values.yaml used in the Service. + +For information on how Harness merges values from different sources for the values.yaml, see [Helm Values Priority](#helm_values_priority). + +File-based repo triggers are a powerful feature of Harness that lets you set a Webhook on your repo to trigger a Harness Workflow or Pipeline when a Push event occurs in the repo. For more information, see [File-based Repo Triggers](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2#file_based_repo_triggers). + +To use a Git connector, you need to add a Git repo as a Harness Source Repo provider. For more information, see [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). + +To use a **Git connector** in the **Helm Deploy** step, do the following: + +1. In **Git connector**, select the Git repo you added as a Source Repo. +2. Select either **Use specific commit ID** and enter the **commit ID**, or select **Use latest commit from branch** and enter the **branch name**. +3. In **File path**, enter the path to the values.yaml file in the repo, including the repo name, like **helm/values.yaml**. + +Here's an example of what the Git connector might look like: + +![](./static/4-helm-workflows-24.png) + +##### Completed Helm Deploy Step + +When you are done, the typical **Helm Deploy** dialog will look something like this: + +![](./static/4-helm-workflows-25.png) + +Only the **Release Name** is required. 
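For reference, a remote values.yaml fetched through the Git connector can use the placeholders described in [Values YAML Override](2-helm-services.md#values-yaml-override); a minimal sketch (the keys under `image` are illustrative and depend on your chart):

```
# Sketch of a values.yaml stored in Git and fetched at runtime.
# Harness replaces ${NAMESPACE}, ${DOCKER_IMAGE_NAME}, and ${DOCKER_IMAGE_TAG}
# during deployment; the surrounding keys depend on your chart.
namespace: ${NAMESPACE}
image:
  repository: ${DOCKER_IMAGE_NAME}
  tag: ${DOCKER_IMAGE_TAG}
```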
+ +Click **SUBMIT** and **your workflow is complete.** You can view or modify the default rollback steps and other deployment strategies in the workflow (for more information, see [Add a Workflow](https://docs.harness.io/article/m220i1tnia-workflow-configuration)), but for this guide, the workflow is complete and you can now deploy it. See the next section for deployment steps. + +### Helm Deployments + +The following procedure deploys the workflow you created in this guide. + +Before deploying the workflow, ensure all Harness delegates that can reach the resources used in the workflow are running. In **Harness**, click **Setup**, and then click **Harness Delegates**. + +To deploy your workflow, do the following: + +1. In your workflow, click **Deploy**.![](./static/4-helm-workflows-26.png)![](./static/4-helm-workflows-27.png) +2. In **Notes**, enter information about the deployment that others should know. Harness records all the important details, and maintains the records of each deployment, but you might need to share some information about your deployment. +3. Click **SUBMIT**. The **Deployments** page appears, and displays the deployment in real time. + +![](./static/4-helm-workflows-28.png) + +**The deployment was successful!** Now let's look further at the Helm deployment. + +Click **Phase 1**. You will see the details of the phase, including the workflow entities, listed. + +![](./static/4-helm-workflows-29.png) + +Click **Phase 1** to expand it and see **Deploy Containers**. Expand **Deploy Containers** and click the **Helm Deploy** step you set up in the workflow. The details for the step are displayed, along with the command output: + +![](./static/4-helm-workflows-30.png) + +#### Viewing Deployment in the Log + +Let's look through the deployment log and see how your Docker image was deployed to your cluster using Helm. + +First, we check to see if the Helm chart repo has already been added and, if not, add it from **https://charts.bitnami.com/bitnami**. 
+ + +``` +INFO 2018-10-09 16:59:51 Adding helm repository https://charts.bitnami.com/bitnami +INFO 2018-10-09 16:59:51 Checking if the repository has already been added +INFO 2018-10-09 16:59:51 Repository not found +INFO 2018-10-09 16:59:51 Adding repository https://charts.bitnami.com/bitnami with name examplefordoc-nginx +INFO 2018-10-09 16:59:51 Successfully added repository https://charts.bitnami.com/bitnami with name examplefordoc-nginx +``` +Next, we look to see if a release with the same release name exists: + + +``` +INFO 2018-10-09 16:59:51 Installing +INFO 2018-10-09 16:59:51 List all existing deployed releases for release name: nginx-gcp-k8s-helm-rjogwmu +INFO 2018-10-09 16:59:51 Release: "nginx-gcp-k8s-helm-rjogwmu" not found +``` +This is the release name generated from the **${service.name}-${env.name}-${infra.helm.shortId}** expression we set in the Helm Deploy step. + +Since this is the first deployment, an existing release with that name is not found, and a new release occurs. + + +``` +INFO 2018-10-09 16:59:52 No previous deployment found for release. Installing chart +INFO 2018-10-09 16:59:54 NAME: nginx-gcp-k8s-helm-rjogwmu +INFO 2018-10-09 16:59:54 LAST DEPLOYED: Tue Oct 9 23:59:53 2018 +INFO 2018-10-09 16:59:54 NAMESPACE: default +INFO 2018-10-09 16:59:54 STATUS: DEPLOYED +``` +You can see the Kubernetes events in the logs as the pods are created. + + +``` +INFO 2018-10-09 16:59:54 NAME READY STATUS RESTARTS AGE +INFO 2018-10-09 16:59:54 nginx-gcp-k8s-helm-rjogw-565bc8495f-w5tzs 0/1 ContainerCreating 0 0s +INFO 2018-10-09 16:59:54 +INFO 2018-10-09 16:59:54 Deployed Controllers [2]: +INFO 2018-10-09 16:59:54 Kind:Deployment, Name:nginx-gcp-k8s-helm-rjogw (desired: 1) +INFO 2018-10-09 16:59:54 Kind:ReplicaSet, Name:nginx-gcp-k8s-helm-rjogw-565bc8495f (desired: 1) +INFO 2018-10-09 16:59:54 +INFO 2018-10-09 16:59:54 **** Kubernetes Controller Events **** +... +INFO 2018-10-09 16:59:54 Desired number of pods reached [1/1] +... 
+INFO 2018-10-09 16:59:54 Pods are updated with image [docker.io/bitnami/nginx:1.14.0-debian-9] [1/1] +INFO 2018-10-09 16:59:54 Waiting for pods to be running [0/1] +INFO 2018-10-09 17:00:05 +... +INFO 2018-10-09 17:00:05 **** Kubernetes Pod Events **** +INFO 2018-10-09 17:00:05 Pod: nginx-gcp-k8s-helm-rjogw-565bc8495f-w5tzs +INFO 2018-10-09 17:00:05 - pulling image "docker.io/bitnami/nginx:1.14.0-debian-9" +INFO 2018-10-09 17:00:05 - Successfully pulled image "docker.io/bitnami/nginx:1.14.0-debian-9" +INFO 2018-10-09 17:00:05 - Created container +INFO 2018-10-09 17:00:05 - Started container +INFO 2018-10-09 17:00:05 +INFO 2018-10-09 17:00:05 Pods are running [1/1] +INFO 2018-10-09 17:00:05 Waiting for pods to reach steady state [0/1] +``` +Lastly, and most importantly, we confirm the **steady state** for the pods to ensure the deployment was successful. + + +``` +INFO 2018-10-09 17:00:20 Pods have reached steady state [1/1] +INFO 2018-10-09 17:00:20 Pod [nginx-gcp-k8s-helm-rjogw-565bc8495f-w5tzs] is running. Host IP: 10.128.0.24. Pod IP: 10.60.1.8 +INFO 2018-10-09 17:00:20 Command finished with status SUCCESS +``` +### Helm Rollbacks + +Harness adds a revision number for each deployment. If a new deployment fails, Harness rolls back to the previous deployment revision number. You can see the revision number in the log of a deployment. Here is a sample from a log after an upgrade: + + +``` +INFO 2018-10-11 14:43:09 Installing +INFO 2018-10-11 14:43:09 List all existing deployed releases for release name: nginx-gcp-k8s-helm-rjogwmu +INFO 2018-10-11 14:43:09 REVISION UPDATED STATUS CHART DESCRIPTION +INFO 2018-10-11 14:43:09 1 Tue Oct 9 23:59:53 2018 SUPERSEDED nginx-1.0.1 Install complete +INFO 2018-10-11 14:43:09 2 Thu Oct 11 18:27:35 2018 SUPERSEDED nginx-1.0.1 Upgrade complete +INFO 2018-10-11 14:43:09 3 Thu Oct 11 21:30:24 2018 DEPLOYED nginx-1.0.1 Upgrade complete +``` +The **REVISION** column lists the revision number. 
Note the revision number **3** as the last successful version deployed. We will now fail a deployment that would be revision **4**, and you will see Harness roll back to number **3**. + +Here is an example where a failure has been initiated using an erroneous HTTP call (Response Code 500) to demonstrate the rollback behavior: + +![](./static/4-helm-workflows-31.png) + +To experiment with rollbacks, you can simply add a step to your workflow that will fail. + +The failed deployment section is red, but the **Rollback Phase 1** step is green, indicating that rollback has been successful. If we expand **Rollback Phase 1**, we can see the rollback information in the **Helm Rollback** step details: + +![](./static/4-helm-workflows-32.png) + +The failed version is **Release Old Version** **4** and the **Release rollback Version** is revision **3**, the last successful version. The rollback version now becomes the new version, **Release New Version 5**. + +Let's look at the log of the rollback to see Harness rolling back successfully. + + +``` +INFO 2018-10-11 14:43:22 Rolling back +INFO 2018-10-11 14:43:23 Rollback was a success! Happy Helming! +INFO 2018-10-11 14:43:23 +INFO 2018-10-11 14:43:24 Deployed Controllers [2]: +INFO 2018-10-11 14:43:24 Kind:Deployment, Name:nginx-gcp-k8s-helm-rjogw (desired: 1) +INFO 2018-10-11 14:43:24 Kind:ReplicaSet, Name:nginx-gcp-k8s-helm-rjogw-565bc8495f (desired: 1) +INFO 2018-10-11 14:43:26 Desired number of pods reached [1/1] +INFO 2018-10-11 14:43:26 Pods are updated with image [docker.io/bitnami/nginx:1.14.0-debian-9] [1/1] +INFO 2018-10-11 14:43:26 Pods are running [1/1] +INFO 2018-10-11 14:43:26 Pods have reached steady state [1/1] +INFO 2018-10-11 14:43:28 Pod [nginx-gcp-k8s-helm-rjogw-565bc8495f-w5tzs] is running. Host IP: 10.128.0.24. 
Pod IP: 10.60.1.8 +INFO 2018-10-11 14:43:28 Command finished with status SUCCESS +``` +When the next deployment is successful, you can see a record of the rollback release: + + +``` +INFO 2018-10-11 15:38:16 Installing +INFO 2018-10-11 15:38:16 List all existing deployed releases for release name: nginx-gcp-k8s-helm-rjogwmu +INFO 2018-10-11 15:38:16 REVISION UPDATED STATUS CHART DESCRIPTION +INFO 2018-10-11 15:38:16 1 Tue Oct 9 23:59:53 2018 SUPERSEDED nginx-1.0.1 Install complete +INFO 2018-10-11 15:38:16 2 Thu Oct 11 18:27:35 2018 SUPERSEDED nginx-1.0.1 Upgrade complete +INFO 2018-10-11 15:38:16 3 Thu Oct 11 21:30:24 2018 SUPERSEDED nginx-1.0.1 Upgrade complete +INFO 2018-10-11 15:38:16 4 Thu Oct 11 21:43:12 2018 SUPERSEDED nginx-1.0.1 Upgrade complete +INFO 2018-10-11 15:38:16 5 Thu Oct 11 21:43:22 2018 DEPLOYED nginx-1.0.1 Rollback to 3 +``` +The **Description** for the last release, **Revision 5**, states that it was a **Rollback to 3**. + +#### Helm Rollback Step + +You can add a **Helm Rollback** step to your Workflow to perform the aforementioned rollback sequence at a specific point in your deployment. + +![](./static/4-helm-workflows-33.png) + +The **Helm Rollback** step rolls back all the deployed objects to the previous version. + +### Upgrading Deployments + +When you run a Helm deployment a second time, it upgrades your Kubernetes cluster. The upgrade is performed in a rolling fashion that does not cause downtime. Essentially, the upgrade gracefully deletes old pods and adds new pods with the new version of the artifacts. + +Let's look at the deployment log from an upgrade to see how Harness handles it. 
+ +First, Harness looks for all existing Helm chart releases with the same name and upgrades them: + + +``` +INFO 2018-10-11 14:30:22 Installing +INFO 2018-10-11 14:30:22 List all existing deployed releases for release name: nginx-gcp-k8s-helm-rjogwmu +INFO 2018-10-11 14:30:24 REVISION UPDATED STATUS CHART DESCRIPTION +INFO 2018-10-11 14:30:24 1 Tue Oct 9 23:59:53 2018 SUPERSEDED nginx-1.0.1 Install complete +INFO 2018-10-11 14:30:24 2 Thu Oct 11 18:27:35 2018 DEPLOYED nginx-1.0.1 Upgrade complete +INFO 2018-10-11 14:30:24 +INFO 2018-10-11 14:30:24 Previous release exists for chart. Upgrading chart +INFO 2018-10-11 14:30:25 Release "nginx-gcp-k8s-helm-rjogwmu" has been upgraded. Happy Helming! +INFO 2018-10-11 14:30:25 LAST DEPLOYED: Thu Oct 11 21:30:24 2018 +INFO 2018-10-11 14:30:25 NAMESPACE: default +INFO 2018-10-11 14:30:25 STATUS: DEPLOYED + +``` +Then it upgrades the cluster pods with the new Docker image of NGINX: + + +``` +INFO 2018-10-11 14:30:25 Deployed Controllers [2]: +INFO 2018-10-11 14:30:25 Kind:Deployment, Name:nginx-gcp-k8s-helm-rjogw (desired: 1) +INFO 2018-10-11 14:30:25 Kind:ReplicaSet, Name:nginx-gcp-k8s-helm-rjogw-565bc8495f (desired: 1) +INFO 2018-10-11 14:30:25 Desired number of pods reached [1/1] +INFO 2018-10-11 14:30:25 Pods are updated with image [docker.io/bitnami/nginx:1.14.0-debian-9] [1/1] +INFO 2018-10-11 14:30:25 Pods are running [1/1] +INFO 2018-10-11 14:30:25 Pods have reached steady state [1/1] +INFO 2018-10-11 14:30:27 Pod [nginx-gcp-k8s-helm-rjogw-565bc8495f-w5tzs] is running. Host IP: 10.128.0.24. Pod IP: 10.60.1.8 +INFO 2018-10-11 14:30:27 Command finished with status SUCCESS +``` +### Helm Values Priority + +Typically, the **values.yaml** applied to your Kubernetes cluster is a single file from the Helm chart repo. + +In Harness, you can use values from different sources and Harness will pass them in a pre-defined order. Helm itself resolves the overlapping key-value pairs. 
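Conceptually, this resolution behaves like a layered merge: value sources are applied in priority order, and the last source to set a key wins. Here is a minimal sketch of that idea (the keys, values, and flat `key=value` encoding are illustrative only, not Harness's merge code):

```shell
# Sketch of layered values resolution: later (higher-priority) sources
# override earlier ones for any overlapping key. Keys/values are made up.
chart_values='replicas=1
image_tag=1.14.0'                    # chart repo values.yaml (lowest priority)
service_values='image_tag=1.15.0'    # Harness Service override
env_values='replicas=3'              # Harness Environment override

# Apply layers lowest-to-highest; the last assignment to a key wins.
merged=$(printf '%s\n' "$chart_values" "$service_values" "$env_values" |
  awk -F= '{ v[$1] = $2 } END { for (k in v) print k "=" v[k] }' | sort)
echo "$merged"
```

The result keeps `image_tag` from the Service override and `replicas` from the Environment override, which is the behavior the priority list below describes.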
+ +This enables a key in values.yaml to get updated with different, and likely more current, values. + +You can simply use a values.yaml in the Helm chart repo. There is no requirement to use multiple sources. Values for the values.yaml can be specified in the following sources: + +* Harness Service. +* Harness Environment. +* Harness Workflow via a Git connector. +* The values.yaml file in the Helm chart repo. + +The Helm deployer extracts the values.yaml from the chart and passes it as an explicit parameter to the Helm command. + +The reason for this extraction is that, if the main chart contains Harness Service variables, Harness needs to render them. + +When values conflict, the higher-priority value overrides the lower-priority one. Here is how values are overridden, from lowest to highest priority: + +1. Chart repo values.yaml has the **least** priority. This is the values.yaml in the chart repo you specify in the **Chart Specifications** in the Harness service. +2. Harness Service values override chart repo values. These are the values specified in the **Values YAML** in the Harness Service. +3. Harness Environment values override Harness Service values. These are the values you specify in the **Add Configuration Overrides** in a Harness Environment. +4. Harness Workflow values added via a Git connector have the **highest** priority. + +### Do it All in YAML + +All of the Harness configuration steps in this guide can be performed using code instead of the Harness user interface. You can view or edit the YAML for any Harness configuration by clicking the **YAML** button on any page. + +![](./static/4-helm-workflows-34.png) + +When you click the button, the Harness code editor appears: + +![](./static/4-helm-workflows-35.png) + +You can edit the YAML and click **Save** to change the configuration. + +For example, here is the YAML for the workflow we set up in this guide. 
+ + +``` +harnessApiVersion: '1.0' +type: BASIC +envName: GCP-K8S-Helm +failureStrategies: +- executionScope: WORKFLOW + failureTypes: + - APPLICATION_ERROR + repairActionCode: ROLLBACK_WORKFLOW + retryCount: 0 +notificationRules: +- conditions: + - FAILED + executionScope: WORKFLOW + notificationGroupAsExpression: false + notificationGroups: + - Account Administrator +phases: +- type: HELM + computeProviderName: Harness Sample K8s Cloud Provider + daemonSet: false + infraMappingName: Kubernetes Cluster_ Harness Sample K8s Cloud Provider_DIRECT_Kubernetes_default + name: Phase 1 + phaseSteps: + - type: HELM_DEPLOY + name: Deploy Containers + steps: + - type: HELM_DEPLOY + name: Helm Deploy + properties: + steadyStateTimeout: 10 + gitFileConfig: null + helmReleaseNamePrefix: ${service.name}-${env.name}-${infra.helm.shortId} + stepsInParallel: false + - type: VERIFY_SERVICE + name: Verify Service + stepsInParallel: false + - type: WRAP_UP + name: Wrap Up + stepsInParallel: false + provisionNodes: false + serviceName: Docker-Helm + statefulSet: false +rollbackPhases: +- type: HELM + computeProviderName: Harness Sample K8s Cloud Provider + daemonSet: false + infraMappingName: Kubernetes Cluster_ Harness Sample K8s Cloud Provider_DIRECT_Kubernetes_default + name: Rollback Phase 1 + phaseNameForRollback: Phase 1 + phaseSteps: + - type: HELM_DEPLOY + name: Deploy Containers + phaseStepNameForRollback: Deploy Containers + statusForRollback: SUCCESS + steps: + - type: HELM_ROLLBACK + name: Helm Rollback + stepsInParallel: false + - type: VERIFY_SERVICE + name: Verify Service + phaseStepNameForRollback: Deploy Containers + statusForRollback: SUCCESS + stepsInParallel: false + - type: WRAP_UP + name: Wrap Up + stepsInParallel: false + provisionNodes: false + serviceName: Docker-Helm + statefulSet: false +templatized: false +``` +### Next Steps + +* [5 - Helm Troubleshooting](5-helm-troubleshooting.md) +* 
[Pipelines](https://docs.harness.io/article/zc1u96u6uj-pipeline-configuration) +* [Triggers](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2) +* [Continuous Verification](https://docs.harness.io/article/myw4h9u05l-verification-providers-list) + diff --git a/docs/first-gen/continuous-delivery/helm-deployment/5-helm-troubleshooting.md b/docs/first-gen/continuous-delivery/helm-deployment/5-helm-troubleshooting.md new file mode 100644 index 00000000000..5eeeb58dbcb --- /dev/null +++ b/docs/first-gen/continuous-delivery/helm-deployment/5-helm-troubleshooting.md @@ -0,0 +1,56 @@ +--- +title: 5 - Helm Troubleshooting +description: General troubleshooting steps for Helm deployments. +sidebar_position: 60 +helpdocs_topic_id: lajo3sdfgq +helpdocs_category_id: 7gqn6m2t46 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/lbhf2h71at). The following troubleshooting information should help you diagnose common problems: + +* [Failed to Find the Previous Helm Release Version](5-helm-troubleshooting.md#failed-to-find-the-previous-helm-release-version) +* [Helm Install/Upgrade Failed](5-helm-troubleshooting.md#helm-install-upgrade-failed) +* [First Helm Deployment Goes to Upgrade Path](5-helm-troubleshooting.md#first-helm-deployment-goes-to-upgrade-path) +* [Tiller and Helm in Different Namespaces](5-helm-troubleshooting.md#tiller-and-helm-in-different-namespaces) +* [Next Steps](5-helm-troubleshooting.md#next-steps) + +### Failed to Find the Previous Helm Release Version + +Make sure that the Helm client and Tiller are installed. Do the following: + +* Verify that Helm is installed. +* Check that the Git connector used in the Workflow and the Delegate can connect to the Git repo. Check the Delegate logs for Git connectivity issues. 
+ +### Helm Install/Upgrade Failed + +Likely, the Helm client and Tiller versions are incompatible. The Helm client version must be less than or equal to the Tiller version: + +![](./static/5-helm-troubleshooting-00.png) + +To fix this, upgrade Tiller: + +`helm init --upgrade` + +### First Helm Deployment Goes to Upgrade Path + +In some cases, the first Helm deployment goes to the upgrade path even though the Helm version is working fine. + +This is the result of a known Helm issue. + +The issue occurs with Helm client versions 2.8.2 through 2.9.1. To fix this, upgrade the Helm client to a version later than 2.9.1. + +### Tiller and Helm in Different Namespaces + +A Helm install/upgrade can fail because Tiller is deployed in a namespace other than `kube-system`. + +To fix this, pass the `--tiller-namespace` flag in the Workflow **Helm Deploy** step. + +![](./static/5-helm-troubleshooting-01.png) + +### Next Steps + +* **Pipeline and Triggers** - Once you have a successful workflow, you can experiment with a Harness pipeline, which is a collection of one or more workflows, and Harness triggers, which enable you to execute a workflow or pipeline deployment using different criteria, such as when a new artifact is added to an artifact source. For more information, see [Add a Pipeline](https://docs.harness.io/article/zc1u96u6uj-pipeline-configuration) and [Add a Trigger](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2). +* **Continuous Verification** - Add verification steps using Splunk, Sumo Logic, ELK, AppDynamics, New Relic, Dynatrace, and others to your workflow. For more information, see [Continuous Verification](https://docs.harness.io/article/myw4h9u05l-verification-providers-list). 
+ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/_category_.json b/docs/first-gen/continuous-delivery/helm-deployment/_category_.json new file mode 100644 index 00000000000..03f6101eb97 --- /dev/null +++ b/docs/first-gen/continuous-delivery/helm-deployment/_category_.json @@ -0,0 +1 @@ +{"label": "Native Helm Deployments", "position": 60, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Native Helm Deployments"}, "customProps": { "helpdocs_category_id": "7gqn6m2t46"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/helm-deployment/add-packaged-helm-charts.md b/docs/first-gen/continuous-delivery/helm-deployment/add-packaged-helm-charts.md new file mode 100644 index 00000000000..520b42a8973 --- /dev/null +++ b/docs/first-gen/continuous-delivery/helm-deployment/add-packaged-helm-charts.md @@ -0,0 +1,229 @@ +--- +title: Custom Fetching and Preprocessing of Helm Charts +description: Use a script that pulls the Helm chart package and extracts its contents and/or performs preprocessing. +sidebar_position: 70 +helpdocs_topic_id: xxwna12fso +helpdocs_category_id: 7gqn6m2t46 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/lbhf2h71at). Currently, this feature is behind the Feature Flag `CUSTOM_MANIFEST`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Typically, you add Helm Charts and manifests to a Harness Kubernetes or Native Helm Service from a Helm or Source Repository by simply pointing at the chart repo. + +You can see how this is done using Harness Kubernetes integration in [Helm Quickstart](https://docs.harness.io/article/2aaevhygep-helm-quickstart), or using Harness Native Helm integration in [Helm Native Deployment Guide Overview](helm-deployments-overview.md). 
+ +Harness also supports less common use cases: + +* **Packaged charts:** your charts, manifests, templates, etc. are in a packaged archive and you simply wish to extract them and use them at runtime. +* **Preprocessing charts:** you want to perform some preprocessing or manipulation on the fetched files at runtime. +* **Custom chart storage:** your chart is stored using a custom method and you want a generic interface to fetch and pass files to Harness. +* **3rd party tools:** you use 3rd party tooling to fetch the files and simply want to integrate them with Harness. + +For these less common use cases, you can use the **Custom Remote Manifests** setting in a Harness Native Helm Service. You add a script to the Service that pulls the package and extracts its contents or performs whatever processing you require. Next, you simply supply the path to the Helm chart to Harness. + +**Looking for other methods?** See [Define Kubernetes Manifests](../kubernetes-deployments/define-kubernetes-manifests.md) and [Add Packaged Kubernetes Manifests](../kubernetes-deployments/deploy-kubernetes-manifests-packaged-with-artifacts.md). + +### Before You Begin + +* [Helm Native Deployment Guide Overview](helm-deployments-overview.md) +* [Helm Quickstart](https://docs.harness.io/article/2aaevhygep-helm-quickstart): this is not a Native Helm Quickstart. This quickstart shows you how to use your own Kubernetes manifests or a Helm chart (remote or local), and have Harness execute the Kubernetes kubectl calls to build everything without Helm and Tiller needing to be installed in the target cluster. + +### Supported Platforms and Technologies + +See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). + +* Harness certifies its Helm support using [Helm 3.1.2](https://github.com/helm/helm/releases/tag/v3.1.2). 
+* Helm chart dependencies are not supported in Git source repositories (Harness [Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers)). Helm chart dependencies are supported in [Helm Chart Repositories](https://docs.harness.io/article/0hrzb1zkog-add-helm-repository-servers). + +### Limitations + +* **Custom Remote Manifests** scripts use Bash only. +* Native Helm deployments in Harness use the Basic Workflow strategy only. +* The Delegate that runs the script must have all the software needed for the scripts to execute. You can use [Delegate Profiles](https://docs.harness.io/article/yd4bs0pltf-run-scripts-on-the-delegate-using-profiles) to add software to Delegates from Harness. +Typically, when you perform a Native Helm deployment in Harness, Harness checks its Delegates to see which Delegates have Helm installed. +For custom fetching and preprocessing of Helm charts described in this topic, Harness does not perform this check. Use [Delegate Profiles](https://docs.harness.io/article/yd4bs0pltf-run-scripts-on-the-delegate-using-profiles) and the **Delegate Selector** option described below to ensure that your deployment uses a Delegate running Helm. + +### Review: What Workloads Can I Deploy? + +Harness Native Helm supports Deployment, StatefulSet, or DaemonSet as **managed** workloads, but not Jobs. + +### Option: Add Secrets for Script + +You might pull your chart by simply cloning its Git repo, like this: + + +``` +git clone https://github.com/johndoe/Helm-Chart.git +``` +In some cases, your script to pull the remote package will use a user account. For example: + + +``` +curl -sSf -u "johndoe:mypwd" -O 'https://mycompany.jfrog.io/module/example/chart.zip' +``` +You can use Harness secrets for the username and password in your script. 
For example: + + +``` +curl -sSf -u "${secrets.getValue("username")}:${secrets.getValue("password")}" -O 'https://mycompany.jfrog.io/module/example/chart.zip' +``` +For more information, see [Use Encrypted Text Secrets](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets). + +### Step 1: Create a Harness Native Helm Service + +Create a Harness Native Helm Service. + +In Harness, click **Setup**, and then click **Add Application**. + +Enter a name for the Application and click **Submit**. + +Click **Services**, and then click **Add Service**. The **Add Service** settings appear. + +In **Name**, enter a name for the Service. + +In **Deployment Type**, select Native Helm, and then ensure **Enable Helm V3** is selected. + +Click **Submit**. The new Harness Native Helm Service is created. + +### Step 2: Use Custom Remote Manifests + +In your Harness Native Helm Service, in **Chart Specification**, click more options (︙) and select **Custom Remote Manifests**. + +In **Manifest Format**, you can see **Helm Charts**. + +Now you can add your custom script to fetch and/or preprocess your Helm chart and files. + +### Step 3: Add Script for Chart File + +In **Script**, enter the script that pulls the chart files or package containing your chart file. If you are pulling a package, your script must also extract the files from the package. For example: + + +``` +curl -sSf -u "${secrets.getValue("username")}:${secrets.getValue("password")}" -O 'https://mycompany.jfrog.io/module/example/chart.zip' + +unzip chart.zip +``` +You can use Harness Service, Workflow, secrets, and built-in variables in the script. + +The script is run on the Harness Delegate selected in **Delegate Selector** or, if you leave this option empty, the Delegate used by the Kubernetes Cluster Cloud Provider in the Workflow's Infrastructure Definition. + +Harness creates a temporary working directory on the Delegate host for the downloaded files/package. 
You can reference the working directory in your script with `WORKING_DIRECTORY=$(pwd)` or `cd $(pwd)/some/other/directory`. + +### Step 4: Add Path to Helm Charts + +Once you have a script that fetches your files and, if needed, extracts your package, you provide Harness with the path to the Helm chart in the expanded folders and files. + +You can enter a path to the chart folder or to the chart file. For example, here is a repo with the chart and related files and how it is referenced in Harness: + +![](./static/add-packaged-helm-charts-13.png) + +You can use Harness Service, Workflow, and built-in variables in the path. + +For example, here is the same script using a Service Config Variable: + +![](./static/add-packaged-helm-charts-14.png) + +As you can see, there are also Service Config Variables for values.yaml overrides. You can reference these in Harness Environments **Service Configuration Overrides**. This is discussed later in this topic. + +### Option: Delegate Selector + +Typically, when you perform a Native Helm deployment in Harness, Harness checks its Delegates to see which Delegates have Helm installed. + +For custom fetching and preprocessing of Helm charts described in this topic, Harness does not perform this check. Use [Delegate Profiles](https://docs.harness.io/article/yd4bs0pltf-run-scripts-on-the-delegate-using-profiles) and the **Delegate Selector** option described below to ensure that your deployment uses a Delegate running Helm. + +In **Delegate Selector**, select the Selector for the Delegate(s) you want to use. You add Selectors to Delegates to make sure that they're used to execute the command. For more information, see [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors). + +Harness will use Delegates matching the Selectors you add. + +If you use one Selector, Harness will use any Delegate that has that Selector. 
+ +If you select two Selectors, a Delegate must have both Selectors to be selected. That Delegate might also have other Selectors, but it must have the two you selected. + +You can use expressions for Harness built-in variables or Account Default variables in **Delegate Selectors**. When the variable expression is resolved at deployment runtime, it must match an existing Delegate Selector. + +For example, if you have a Delegate Selector **prod** and the Workflow is using an Environment also named **prod**, the Delegate Selector can be `${env.name}`. This is very useful when you match Delegate Selectors to Application component names such as Environments, Services, etc. It's also a way to template the Delegate Selector setting. + +### Option: Override Values.yaml in Service + +You can override the values.yaml in the remote folder you specified in **Path to Helm Charts** using the **Values YAML Override** option in **Configuration**. + +In **Values YAML Override**, click edit. The **Values YAML Override** settings appear. + +For Custom Remote Manifests, in **Store Type**, click **Custom**. + +Choose from the following options. + +#### Inherit Script from Service + +Select this option if you want to use an alternative values.yaml file from the one in the folder/package you pulled in **Custom Remote Manifest**. + +Enter the path to the override values.yaml from the root of the source repo. + +You can use Harness Service, Workflow, and built-in variables in the path. For example, `${serviceVariable.overridesPath}/values-production.yaml`. + +You can enter multiple values separated by commas. + +#### Define new Script + +Enter a script to override the script entered in **Custom Remote Manifest**. The new script can download and extract a different package. + +In **Path to Values YAML**, provide the path to the override values.yaml file. + +You can use Harness Service, Workflow, and built-in variables in the script and path. 
You can enter multiple values separated by commas. + +### Option: Override Values.yaml in Environment + +You can override Harness Service settings at the Harness Environment level using **Service Configuration Overrides**. See [Helm Environments](3-helm-environments.md). + +In the Harness Environment, in **Service Configuration Override**, click **Add Configuration Overrides**. The Service Configuration Override settings appear. + +In **Service**, select the Native Helm Service using the values.yaml you want to override. + +In **Override Type**, select **Values YAML**. + +Click the **Custom** option. + +The **Custom Manifest Override Configuration** section follows the same guidelines as overriding settings using the Service's **Values YAML Override** section: **Inherit Script from Service** and **Define new Script**. + +For **Inherit Script from Service**, Harness will use the script you entered to fetch your folder/package and then override the values.yaml file in the folder/package using the value you enter in **Inherit Script from Service**. You can use values or variables. + +Here's an example overriding the values.yaml using a Service **Config Variable** from the Service you selected: + +![](./static/add-packaged-helm-charts-15.png) + +### Option: Use a Harness Artifact Source + +The values.yaml file you use can have the image and/or dockercfg values hardcoded. In this case, you don't need to use a Harness Artifact Source in your Harness Service. + +If you want to use a Harness Service Artifact Source, simply add the artifact in **Artifact Source** as described in [Helm Services](2-helm-services.md). + +In your values.yaml, you must reference the Harness Artifact Source using the Harness built-in variable: + +* `image: ${artifact.metadata.image}` +* For example, in the values.yaml you would add these variables: + + +``` +... +image: ${artifact.metadata.image} + +createNamespace: true +... 
+``` +And then in the manifest for a deployment, you would reference these variables: + + +``` +... + spec: + containers: + - name: {{ .Chart.Name }} + image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" + imagePullPolicy: {{ .Values.image.pullPolicy }} +... +``` +### See Also + +* [Deploy Helm Charts](../kubernetes-deployments/deploy-a-helm-chart-as-an-artifact.md) +* [Add Packaged Kubernetes Manifests](../kubernetes-deployments/deploy-kubernetes-manifests-packaged-with-artifacts.md) + diff --git a/docs/first-gen/continuous-delivery/helm-deployment/helm-deployments-overview.md b/docs/first-gen/continuous-delivery/helm-deployment/helm-deployments-overview.md new file mode 100644 index 00000000000..fa11bbdddf4 --- /dev/null +++ b/docs/first-gen/continuous-delivery/helm-deployment/helm-deployments-overview.md @@ -0,0 +1,73 @@ +--- +title: Helm Native Deployment Guide Overview +description: Overview of deploying a Docker image to a Kubernetes cluster using a Helm chart. +sidebar_position: 10 +helpdocs_topic_id: ii558ppikj +helpdocs_category_id: 7gqn6m2t46 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/lbhf2h71at). + +Harness supports Helm 2 and Helm v3. This guide will walk you through deploying a Docker image to a Kubernetes cluster using a Helm chart. This deployment scenario is very popular and a walkthrough of all the steps involved will help you set up this scenario in Harness for your own microservices and apps. + +Harness includes both Kubernetes and Helm deployments, and you can use Helm charts in both. 
Harness [Kubernetes Deployments](../kubernetes-deployments/kubernetes-deployments-overview.md) allow you to use your own Helm chart (remote or local), and Harness executes the Kubernetes API calls to build everything without Helm and Tiller (for Helm v2) needing to be installed in the target cluster. See [Helm Charts](https://docs.harness.io/article/t6zrgqq0ny-kubernetes-services#helm_charts). + +### Harness Helm or Kubernetes Deployments? + +You can also use Helm with Harness Kubernetes Service and deployments and take advantage of Harness advanced Kubernetes features. See [Kubernetes Deployments Overview](../kubernetes-deployments/kubernetes-deployments-overview.md). + +The main difference is that the Helm deployment performed using the Harness Helm Service and described in this guide uses Tiller. If you use Harness Kubernetes Service for deployment, you do not need to use Tiller. + +### Blog Post + +The following blog post walks you through creating a Helm 3 deployment from scratch using Harness, including a video walkthrough: + +[Welcome to the Harness Family, Helm V3!](https://harness.io/2020/02/welcome-to-the-harness-family-helm-v3/?wvideo=1adpr2fxl1) + +### Introduction + +You can perform all of the steps in this guide using free accounts. You will need a Docker Hub account and a Google Cloud Platform account. Both offer free accounts. + +This document covers Harness Helm implementation. For Kubernetes implementation, see [Kubernetes Deployments](https://docs.harness.io/category/kubernetes-deployments). + +#### Intended Audience + +* Developers and DevOps with a working knowledge of Docker, Kubernetes, and Helm. +* Harness users with a working knowledge of the Harness Delegate. 
For information, see [Delegate Installation](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation), [Harness Requirements](https://docs.harness.io/article/70zh6cbrhg-harness-requirements), and [Connectivity and Permissions Requirements](https://docs.harness.io/article/11hjhpatqz-connectivity-and-permissions-requirements). + +If you are entirely new to Harness, please see the [Quick Start Setup Guide](https://docs.harness.io/article/9hd68pg5rs-quick-start-setup-guide). + +#### What Are We Going to Do? + +This guide walks you through deploying a publicly available Docker image of NGINX to a Google Cloud Platform (GCP) Kubernetes cluster using a publicly available Bitnami Helm chart. Basically, we do the following: + +* **Docker** - Pull a Docker image of NGINX from Docker Hub. +* **Helm** - Use a Bitnami Helm chart for NGINX from their Github repo and define the Kubernetes service and deployment rules. +* **Kubernetes** - Deploy to a GCP Kubernetes cluster that is configured with Helm and Tiller. + +Sound fun? Let's get started. + +#### What Are We Not Going to Do? + +This is a simple guide that covers the basics of deploying Docker images to Kubernetes using Helm. It does not cover the following: + +* **Ingress Rules** - Harness supports Ingress Rules for Kubernetes deployments. You can learn how to use Ingress Rules in [Ingress Rules](https://docs.harness.io/article/fc3nlsr0hh-ingress-rules). For a Harness deployment using Helm, you can add Ingress rules in a Helm chart file (**kind: Ingress**) and Harness will use those during deployment. For information about Ingress rules and Helm, see [Secure Kubernetes Services With Ingress, TLS And LetsEncrypt](https://docs.bitnami.com/kubernetes/how-to/secure-kubernetes-services-with-ingress-tls-letsencrypt/) from Bitnami. 
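For reference, an Ingress rule lives in the chart as just another templated manifest. Here is a minimal illustrative sketch (the hostname, names, and values are placeholders; use the Ingress `apiVersion` your cluster supports, since older clusters used `extensions/v1beta1` instead of `networking.k8s.io/v1`):

```yaml
# Illustrative only: a minimal Ingress template in a Helm chart.
# Harness applies Ingress objects found in the chart during deployment.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-ingress
spec:
  rules:
    - host: example.com              # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ .Release.Name }}   # routes to the chart's Service
                port:
                  number: 80
```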
+ +#### What Harness Needs Before You Begin + +The following are required to deploy to Kubernetes using Helm via Harness: + +* **An account with a Docker Artifact Server** you can connect to Harness, such as Docker Hub. +* **An account with a Kubernetes provider** you can connect to Harness, such as Google Cloud Platform. +* Kubernetes Cluster with **Helm and Tiller** installed and running on one pod. +* **Helm chart** hosted on a server accessible with anonymous access. +* **Harness Delegate** installed that can connect to your Artifact Server and Cloud Provider. + +We will walk you through the process of setting up Harness with connections to the artifact server and cloud provider, specifications for the Kubernetes cluster, commands for setting up Helm and Tiller on your Kubernetes cluster, and provide examples of a working Helm chart template. + +### Next Step + +* [1 - Delegate, Providers, and Helm Setup](2-connectors-providers-and-helm-setup.md) + diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/2-connectors-providers-and-helm-setup-02.png b/docs/first-gen/continuous-delivery/helm-deployment/static/2-connectors-providers-and-helm-setup-02.png new file mode 100644 index 00000000000..e793cc68687 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/2-connectors-providers-and-helm-setup-02.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/2-connectors-providers-and-helm-setup-03.png b/docs/first-gen/continuous-delivery/helm-deployment/static/2-connectors-providers-and-helm-setup-03.png new file mode 100644 index 00000000000..09c729572bc Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/2-connectors-providers-and-helm-setup-03.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/2-connectors-providers-and-helm-setup-04.png 
b/docs/first-gen/continuous-delivery/helm-deployment/static/2-connectors-providers-and-helm-setup-04.png new file mode 100644 index 00000000000..7b6e2a630aa Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/2-connectors-providers-and-helm-setup-04.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/2-connectors-providers-and-helm-setup-05.png b/docs/first-gen/continuous-delivery/helm-deployment/static/2-connectors-providers-and-helm-setup-05.png new file mode 100644 index 00000000000..b48a3d0a5bc Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/2-connectors-providers-and-helm-setup-05.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/2-connectors-providers-and-helm-setup-06.png b/docs/first-gen/continuous-delivery/helm-deployment/static/2-connectors-providers-and-helm-setup-06.png new file mode 100644 index 00000000000..891a521c803 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/2-connectors-providers-and-helm-setup-06.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/2-connectors-providers-and-helm-setup-07.png b/docs/first-gen/continuous-delivery/helm-deployment/static/2-connectors-providers-and-helm-setup-07.png new file mode 100644 index 00000000000..9046aaea89a Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/2-connectors-providers-and-helm-setup-07.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-36.png b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-36.png new file mode 100644 index 00000000000..61b98010dfa Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-36.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-37.png 
b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-37.png new file mode 100644 index 00000000000..dc2c927fa98 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-37.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-38.png b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-38.png new file mode 100644 index 00000000000..214f6110822 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-38.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-39.png b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-39.png new file mode 100644 index 00000000000..a01223a9592 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-39.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-40.png b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-40.png new file mode 100644 index 00000000000..6e744a41685 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-40.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-41.png b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-41.png new file mode 100644 index 00000000000..563dc367ea6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-41.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-42.png b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-42.png new file mode 100644 index 00000000000..9bbf1a93284 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-42.png differ diff --git 
a/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-43.png b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-43.png new file mode 100644 index 00000000000..2854dcfb034 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-43.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-44.png b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-44.png new file mode 100644 index 00000000000..f651046281c Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-44.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-45.png b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-45.png new file mode 100644 index 00000000000..f651046281c Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-45.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-46.png b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-46.png new file mode 100644 index 00000000000..a8e78b3d157 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-46.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-47.png b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-47.png new file mode 100644 index 00000000000..a8e78b3d157 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-47.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-48.png b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-48.png new file mode 100644 index 00000000000..e1da86b5dac Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-48.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-49.png b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-49.png new file mode 100644 index 00000000000..9580a8ace5a Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/2-helm-services-49.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/3-helm-environments-08.png b/docs/first-gen/continuous-delivery/helm-deployment/static/3-helm-environments-08.png new file mode 100644 index 00000000000..70207c91107 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/3-helm-environments-08.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/3-helm-environments-09.png b/docs/first-gen/continuous-delivery/helm-deployment/static/3-helm-environments-09.png new file mode 100644 index 00000000000..8e7fff0192c Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/3-helm-environments-09.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/3-helm-environments-10.png b/docs/first-gen/continuous-delivery/helm-deployment/static/3-helm-environments-10.png new file mode 100644 index 00000000000..f583e73f5e9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/3-helm-environments-10.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/3-helm-environments-11.png b/docs/first-gen/continuous-delivery/helm-deployment/static/3-helm-environments-11.png new file mode 100644 index 00000000000..bc64ec30c06 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/3-helm-environments-11.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/3-helm-environments-12.png 
b/docs/first-gen/continuous-delivery/helm-deployment/static/3-helm-environments-12.png new file mode 100644 index 00000000000..c29a6de284a Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/3-helm-environments-12.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-18.png b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-18.png new file mode 100644 index 00000000000..153ff8b7c26 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-18.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-19.png b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-19.png new file mode 100644 index 00000000000..fbd2184cbbe Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-19.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-20.png b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-20.png new file mode 100644 index 00000000000..5b8449423b8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-20.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-21.png b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-21.png new file mode 100644 index 00000000000..f05b1858a5c Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-21.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-22.png b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-22.png new file mode 100644 index 00000000000..5dbd35af145 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-22.png 
differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-23.png b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-23.png new file mode 100644 index 00000000000..d080f67e048 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-23.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-24.png b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-24.png new file mode 100644 index 00000000000..3bc5d926d18 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-24.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-25.png b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-25.png new file mode 100644 index 00000000000..0277c3c16fe Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-25.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-26.png b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-26.png new file mode 100644 index 00000000000..6e246824009 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-26.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-27.png b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-27.png new file mode 100644 index 00000000000..7f96f3bfb27 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-27.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-28.png b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-28.png new file mode 100644 index 00000000000..d066abcd620 Binary files 
/dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-28.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-29.png b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-29.png new file mode 100644 index 00000000000..f35bb11d878 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-29.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-30.png b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-30.png new file mode 100644 index 00000000000..3f388003402 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-30.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-31.png b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-31.png new file mode 100644 index 00000000000..8e89dbf5945 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-31.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-32.png b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-32.png new file mode 100644 index 00000000000..4df30b62542 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-32.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-33.png b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-33.png new file mode 100644 index 00000000000..e5e6706a38b Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-33.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-34.png 
b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-34.png new file mode 100644 index 00000000000..e7f019052aa Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-34.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-35.png b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-35.png new file mode 100644 index 00000000000..8d377662ec1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/4-helm-workflows-35.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/5-helm-troubleshooting-00.png b/docs/first-gen/continuous-delivery/helm-deployment/static/5-helm-troubleshooting-00.png new file mode 100644 index 00000000000..8b1ed76844e Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/5-helm-troubleshooting-00.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/5-helm-troubleshooting-01.png b/docs/first-gen/continuous-delivery/helm-deployment/static/5-helm-troubleshooting-01.png new file mode 100644 index 00000000000..d1faaafa10e Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/5-helm-troubleshooting-01.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/add-packaged-helm-charts-13.png b/docs/first-gen/continuous-delivery/helm-deployment/static/add-packaged-helm-charts-13.png new file mode 100644 index 00000000000..7aa5bd1f712 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/add-packaged-helm-charts-13.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/add-packaged-helm-charts-14.png b/docs/first-gen/continuous-delivery/helm-deployment/static/add-packaged-helm-charts-14.png new file mode 100644 index 00000000000..b8c25da13c8 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/helm-deployment/static/add-packaged-helm-charts-14.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/add-packaged-helm-charts-15.png b/docs/first-gen/continuous-delivery/helm-deployment/static/add-packaged-helm-charts-15.png new file mode 100644 index 00000000000..8672c1bf685 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/add-packaged-helm-charts-15.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/upgrade-native-helm-2-deployments-to-helm-3-16.png b/docs/first-gen/continuous-delivery/helm-deployment/static/upgrade-native-helm-2-deployments-to-helm-3-16.png new file mode 100644 index 00000000000..769df0a62dd Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/upgrade-native-helm-2-deployments-to-helm-3-16.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/static/upgrade-native-helm-2-deployments-to-helm-3-17.png b/docs/first-gen/continuous-delivery/helm-deployment/static/upgrade-native-helm-2-deployments-to-helm-3-17.png new file mode 100644 index 00000000000..1102214fd24 Binary files /dev/null and b/docs/first-gen/continuous-delivery/helm-deployment/static/upgrade-native-helm-2-deployments-to-helm-3-17.png differ diff --git a/docs/first-gen/continuous-delivery/helm-deployment/upgrade-native-helm-2-deployments-to-helm-3.md b/docs/first-gen/continuous-delivery/helm-deployment/upgrade-native-helm-2-deployments-to-helm-3.md new file mode 100644 index 00000000000..0533899d02a --- /dev/null +++ b/docs/first-gen/continuous-delivery/helm-deployment/upgrade-native-helm-2-deployments-to-helm-3.md @@ -0,0 +1,82 @@ +--- +title: Upgrade Native Helm 2 Deployments to Helm 3 +description: Upgrade native Helm 2 deployments to Helm 3. 
+sidebar_position: 70 +helpdocs_topic_id: cqidzwbzaa +helpdocs_category_id: 7gqn6m2t46 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/lbhf2h71at). + +For Kubernetes deployments that use Helm charts, see [Upgrade to Helm 3 Charts in Kubernetes Services](../kubernetes-deployments/upgrade-to-helm-3-charts-in-kubernetes-services.md). When you create your native [Helm deployments](helm-deployments-overview.md) in Harness, you can choose to use Helm 2 or [Helm 3](https://helm.sh/blog/helm-3-released/). + +If you have already created native Helm 2 deployments, you can upgrade your deployments to Helm 3 by following the steps in this topic. + +**What's a native Helm deployment in Harness?** Harness provides Kubernetes deployments that use Helm charts without requiring Helm or Tiller to be installed in your target environment. These are called Harness Kubernetes deployments. This is the recommended method. If you want to deploy to a Kubernetes cluster using Helm explicitly, you can use native Helm deployments. You simply choose **Helm** as the **Deployment Type** when you create a Harness Service. + + +### Before You Begin + +* [Helm Deployments Overview](helm-deployments-overview.md) +* [Kubernetes Deployments Overview](../kubernetes-deployments/kubernetes-deployments-overview.md) + +### Blog Post + +The following blog post walks you through creating a Helm 3 deployment from scratch using Harness, including a video walkthrough: + +[Welcome to the Harness Family, Helm V3!](https://harness.io/2020/02/welcome-to-the-harness-family-helm-v3/?wvideo=1adpr2fxl1) + +### Optional: Migrate Your Release History From Tiller + +This section is not necessary for Harness Kubernetes deployments that use Helm charts. 
For Kubernetes Services, see [Upgrade to Helm 3 Charts in Kubernetes Services](../kubernetes-deployments/upgrade-to-helm-3-charts-in-kubernetes-services.md). Helm 3 uses a new data model that impacts your native Helm deployment release history. If you upgrade Harness native Helm 2 deployments to Helm 3 without migrating your target cluster to Helm 3, your Helm 3 deployments will not include your Helm 2 release history. + +If maintaining continuity between your Helm 2 and Helm 3 releases is not required, you do not need to migrate your release history to Helm 3. Simply move on to the next steps. + +If you do not migrate your release history to Helm 3, the first time you deploy to it using a Harness native Helm 3 deployment, Harness cannot perform rollback because there will be no release history available. 1. If you want to maintain release continuity between Helm 2 and Helm 3, migrate your release history to Helm 3 using the steps in [How to migrate from Helm v2 to Helm v3](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) from Helm. + +In particular, pay attention to the steps in the [Migrate Helm v2 Releases](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/#migrate-helm-v2-releases) section. + +Once you are done migrating, follow the steps below. + +### Step 1: Add a New Delegate with Helm 3 Installed + +If you are upgrading your native Helm deployments to Helm 3, you will need to add a new Harness Delegate. + +1. Install and run a new Kubernetes Cluster Delegate or Helm Delegate in your target cluster, or install a new Helm Delegate using the Kubernetes management platform, Rancher. For steps on setting up a new Delegate, use one of the following: +* [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) +* [Using the Helm Delegate](https://docs.harness.io/article/6n7fon8rit-using-the-helm-delegate) + +You do not need to add a Delegate Profile for Helm 3. 
Harness includes Helm 3 support in any Delegate that can connect to the target Kubernetes cluster. + +Harness adds Helm within the Delegate directory path, in **/client-tools/helm/v3.0.2**. + +### Step 2: Enable Helm 3 on Harness Services + +1. Log into Harness. +2. Click **Setup**, and open your Harness Application. +3. Open the Harness Service you use for native Helm 2 deployments. +4. In the Harness Service, click the vertical ellipsis (**︙**) and then click **Edit**:![](./static/upgrade-native-helm-2-deployments-to-helm-3-16.png) +5. Select the **Enable Helm V3** setting and click **Submit**. + +![](./static/upgrade-native-helm-2-deployments-to-helm-3-17.png) + +That's it. Now your Harness Service is upgraded for Helm 3 and you can start using Helm 3 charts. + +### Notes + +#### Custom Helm Binaries and Delegates + +Harness ships Helm binaries with all Harness Delegates. + +If you want the Delegate to use a specific Helm binary, see the steps in [Use Custom Helm Binaries on Harness Delegates](https://docs.harness.io/article/ymw96mf8wy-use-custom-helm-binaries-on-harness-delegates). + +### Troubleshooting + +If your deployment uses Helm 3, Harness will select a Delegate that has Helm 3 installed. You do not need to make any changes. + +However, if the Infrastructure Definition used by the Workflow is configured with a Cloud Provider that uses the Delegate Selector of a Delegate that is running Helm 2, your Helm 3 deployment might fail. + +After you have installed and run your new Delegate, and installed and run Helm 3 on it, add a Selector to your new Delegate and change your Cloud Provider to use its Delegate Selector. 
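The optional release-history migration referenced above can be sketched with the `helm-2to3` plugin from the Helm project. This is a sketch only: it assumes the Helm 3 binary is available on your PATH as `helm`, and `my-release` is a placeholder release name.

```shell
# Install the helm-2to3 plugin into the Helm 3 client.
helm plugin install https://github.com/helm/helm-2to3

# Copy Helm v2 configuration (repositories, plugins) into Helm 3.
helm 2to3 move config

# Convert one v2 release's Tiller-stored history into Helm 3 release storage.
# Repeat per release; `my-release` is a placeholder.
helm 2to3 convert my-release

# After all releases are converted, remove Helm v2 data and Tiller.
helm 2to3 cleanup
```

Run `helm 2to3 convert` with `--dry-run` first if you want to preview the changes before committing to them.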
+ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/_category_.json b/docs/first-gen/continuous-delivery/kubernetes-deployments/_category_.json new file mode 100644 index 00000000000..24c14d90676 --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/_category_.json @@ -0,0 +1 @@ +{"label": "Kubernetes Deployments", "position": 80, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Kubernetes Deployments"}, "customProps": { "helpdocs_category_id": "n03qfofd5w"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/add-container-images-for-kubernetes-deployments.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/add-container-images-for-kubernetes-deployments.md new file mode 100644 index 00000000000..1ef2bae62fc --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/add-container-images-for-kubernetes-deployments.md @@ -0,0 +1,66 @@ +--- +title: Add Container Images for Kubernetes Deployments +description: Add containers to Harness for your Kubernetes deployments. +sidebar_position: 20 +helpdocs_topic_id: 6ib8n1n1k6 +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/category/qfj6m1k2c4). + +To add container images to Harness for your Kubernetes deployments, you add a Harness **Artifact Server**. The Artifact Server uses your container registry account to connect to your container registry (Docker registry, AWS ECR, Google GCR, Azure Container Registry, etc). + +Once you have the Artifact Server set up, you add the container image artifact using a Harness Service **Artifact Source**. 
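Before wiring a registry account into Harness as an Artifact Server, it can help to confirm that the same credentials can actually pull from the registry. A minimal sketch with the Docker CLI, using Docker Hub as in the steps above (the image name is just an example):

```shell
# Log in with the account you will configure in the Harness Artifact Server
# (Docker Hub shown; adjust the registry host for ECR, GCR, or ACR).
docker login

# Pull a known-public image to confirm connectivity; note the `library/`
# prefix required for official Docker Hub images.
docker pull library/nginx:latest
```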
+ + +### Before You Begin + +Ensure you have reviewed and set up the following: + +* [Connect to Your Target Kubernetes Platform](connect-to-your-target-kubernetes-platform.md). You must have a Harness Kubernetes Delegate running in your target Kubernetes cluster. +* [Kubernetes Deployments Overview](../concepts-cd/deployment-types/kubernetes-overview.md) + +### Step 1: Add the Artifact Server + +Harness supports all of the popular container registries. You add your container registry account as a Harness Artifact Server. + +For steps on setting up each container registry, see [Add Artifact Servers](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server). + +In the following step, we use the Docker Registry Artifact Server. + +### Step 2: Create the Harness Kubernetes Service + +1. In Harness, click **Setup**, and then click **Add Application**. +2. Enter a name for the Application and click **Submit**. +3. Click **Services**, and then click **Add Service**. The **Add Service** settings appear. + + ![](./static/add-container-images-for-kubernetes-deployments-137.png) + +4. In **Name**, enter a name for the Service. +5. In **Deployment Type**, select **Kubernetes**, and then ensure **Enable Kubernetes V2** is selected. +6. Click **Submit**. The new Harness Kubernetes Service is created. + +### Step 3: Add the Artifact Source + +To demonstrate how to add the Artifact Source, we use a Docker Registry Artifact Server. + +For the settings for all Artifact Sources, see [Add Artifact Servers](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server). + +1. In the Harness Kubernetes Service, click **Add** **Artifact Source**, and select **Docker Registry**. The **Docker Registry** settings appear. Enter the following settings: + * In **Name**, let Harness generate a name for the source or enter a custom name. + * In **Source Server**, select the Artifact Server. In this example, we are using a Docker Registry with a connection to Docker Hub. 
+ * In **Docker Image Name**, enter the image name. Official images in public repos such as Docker Hub need the label **library**. For example, **library/nginx**. +2. Click **SUBMIT**. The Artifact Source is added. + +**Recommended** — View the build history for the artifact by clicking **Artifact History**, and then using **Manually Pull Artifact** to pull the artifact. + +[![](./static/add-container-images-for-kubernetes-deployments-138.png)](./static/add-container-images-for-kubernetes-deployments-138.png) + +In addition to artifact sources taken from Artifact Servers, you can use a Shell Script to query a custom artifact repository. See [Custom Artifact Source](https://docs.harness.io/article/jizsp5tsms-custom-artifact-source). + +### Next Steps + +* [Pull an Image from a Private Registry for Kubernetes](pull-an-image-from-a-private-registry-for-kubernetes.md) + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/adding-and-editing-inline-kubernetes-manifest-files.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/adding-and-editing-inline-kubernetes-manifest-files.md new file mode 100644 index 00000000000..855268fcb70 --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/adding-and-editing-inline-kubernetes-manifest-files.md @@ -0,0 +1,56 @@ +--- +title: Adding and Editing Inline Kubernetes Manifest Files +description: Manage files in your Harness Kubernetes Service. +sidebar_position: 70 +helpdocs_topic_id: pfexttk6dr +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/category/qfj6m1k2c4). + +Harness provides full file management for your Kubernetes configuration files. You can add, edit, and manage all of the files in your Harness Kubernetes Service. 
+ + +### Before You Begin + +* [Define Kubernetes Manifests](define-kubernetes-manifests.md) +* [Upload Kubernetes Resource Files](upload-kubernetes-resource-files.md) + +### Step 1: Manually Add Configuration Files + +For information on uploading files, see [Upload Kubernetes Resource Files](upload-kubernetes-resource-files.md). Once the files are uploaded into Harness, you can create more files inline, as described below. + +1. In your Harness Kubernetes Service, click the more options button (**︙**) next to any existing file or folder. + [![](./static/adding-and-editing-inline-kubernetes-manifest-files-55.png)](./static/adding-and-editing-inline-kubernetes-manifest-files-55.png) + The **Add File** dialog appears. + [![](./static/adding-and-editing-inline-kubernetes-manifest-files-57.png)](./static/adding-and-editing-inline-kubernetes-manifest-files-57.png) +2. Enter a file name and click **Submit**. To add a folder at the same time, enter the folder name followed by the file name, such as **myfolder/service.yaml**. + +Now you can edit the file and paste in your manifest. + +To add more files to that folder, use the same folder name when you create the files. + +### Step 2: Edit Resource Files + +1. In your Harness Kubernetes Service **Manifests** section, select a file and click the **Edit** button. +2. Enter your YAML, and then click **Save**. + +Harness validates the YAML in the editor at runtime. You can use Go templating for inline manifest files. See [Use Go Templating in Kubernetes Manifests](use-go-templating-in-kubernetes-manifests.md). + +The inline values.yaml file used in a Harness Service does not support Helm templating, only Go templating. Helm templating is fully supported in the remote Helm charts you add to your Harness Service. + +### Step 3: Create and Manage Folders + +1. Click the more options button (︙) and click **Rename File**. The **Rename File** dialog opens. 
+ [![](./static/adding-and-editing-inline-kubernetes-manifest-files-59.png)](./static/adding-and-editing-inline-kubernetes-manifest-files-59.png) +2. Enter a folder name before the file name, including a forward slash, such as **myfolder/**. +3. Click **SUBMIT**. The folder is created and the file is added to it. + +To add other existing files to that folder, rename them and use the same folder name. + +### Next Steps + +* [Upload Kubernetes Resource Files](upload-kubernetes-resource-files.md) + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/connect-to-your-target-kubernetes-platform.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/connect-to-your-target-kubernetes-platform.md new file mode 100644 index 00000000000..4ddc4f6ed1a --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/connect-to-your-target-kubernetes-platform.md @@ -0,0 +1,156 @@ +--- +title: Connect to Your Target Kubernetes Platform +description: Connect Harness to your target Kubernetes cluster. +sidebar_position: 10 +helpdocs_topic_id: m383u53mp1 +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/category/qfj6m1k2c4). + + + +To connect Harness to your target Kubernetes cluster, you must first install a Harness Delegate in your target platform or target Kubernetes cluster. + +Next, use the vendor-agnostic Harness Kubernetes Cluster Cloud Provider to connect Harness to your cluster (recommended). You can also use a platform-specific Cloud Provider, such as the AWS, GCP, or Azure Cloud Providers. + +The simplest method is to install the Harness Delegate in your Kubernetes cluster and then set up the Harness Kubernetes Cluster Cloud Provider to use the same credentials as the Delegate. 
+ + +### Before You Begin + +* [Kubernetes Deployments Overview](../concepts-cd/deployment-types/kubernetes-overview.md) +* The target Kubernetes Cluster must meet the following minimum requirements: + + **Number of nodes:** minimum of 3. + + **Machine type:** 4vCPU + + **Memory:** 12GB RAM and 6GB Disk Space. 8GB RAM is for the Delegate. The remaining memory is for Kubernetes and containers. + + **Networking:** Outbound HTTPS for the Harness connection, and to connect to any container image repo. Allow TCP port 22 for SSH. + +### Review: Cluster Connection Options + +This is a brief summary of the ways to connect to your target Kubernetes platform and clusters. + +#### Using Delegates Inside or Outside of the Target Cluster + +Typically, you install the Harness Kubernetes Delegate in your target cluster and then add a Kubernetes Cluster or GCP Cloud Provider that inherits its credentials from the Delegate. + +You can also install the Kubernetes Delegate outside of the target cluster (anywhere in your environment), and use a non-Kubernetes Delegate type (Helm, Docker, Shell Script). + +In this case, the Kubernetes Cluster Cloud Provider will not inherit credentials from the Delegate, but use the cluster master URL and some authentication method (Service Account Token, etc.). + +The GCP and Azure Cloud Providers will not inherit credentials from the Delegate (the Azure Cloud Provider never does), but use platform-specific credentials, such as encrypted keys. + +##### Running Scripts with a Delegate Outside of the Target Cluster + +If you use a Delegate installed outside of the target cluster, any scripts in your Pipeline need to use the `${HARNESS_KUBE_CONFIG_PATH}` expression to reference the path to a Harness-generated kubeconfig file containing the credentials you provided. + +For example: + + +``` +export KUBECONFIG=${HARNESS_KUBE_CONFIG_PATH} +kubectl get pods -n pod-test +``` +Now the script will run using the correct credentials. 
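When the Delegate runs outside the cluster, the Kubernetes Cluster Cloud Provider needs the cluster master URL plus a credential such as a Service Account Token. The following is a sketch of gathering both with `kubectl`; the service account name `harness-sa`, the `default` namespace, and the `cluster-admin` role binding are placeholders, and the token-bearing Secret lookup assumes a Kubernetes version that still auto-creates service account token Secrets.

```shell
# Read the cluster master URL from the current kubeconfig context.
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# Create a service account for Harness and grant it broad access
# (tighten the role to suit your policies; names are placeholders).
kubectl create serviceaccount harness-sa -n default
kubectl create clusterrolebinding harness-sa-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=default:harness-sa

# Find the auto-created token Secret and decode the token to paste
# into the Cloud Provider's Service Account Token field.
SECRET_NAME=$(kubectl get serviceaccount harness-sa -n default \
  -o jsonpath='{.secrets[0].name}')
kubectl get secret "$SECRET_NAME" -n default \
  -o jsonpath='{.data.token}' | base64 --decode
```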
+ +#### Target Namespace + +By default the Delegate can deploy to any namespace. To target specific namespaces only, see [Target Delegates to Specific Namespaces](https://docs.harness.io/article/p91u0bxtaf-enable-delegate-to-deploy-to-multiple-kubernetes-namespaces). + +#### Kubernetes Cluster Cloud Provider + +You can use the vendor-agnostic Kubernetes Cluster Cloud Provider for connections to your target cluster on any platform, including AWS EKS. + +If you use the Kubernetes Delegate, you can have the Harness Kubernetes Cluster Cloud Provider inherit its credentials (Service Account). + +The Kubernetes Cluster Cloud Provider can also use other types of credentials like username/password, CA certs, or Service Account Tokens. This is helpful if you use other types of Delegates that run outside Kubernetes (Shell Script, Docker, ECS). + +#### GCP and Azure Cloud Providers + +You can also use the GCP and Azure Cloud Providers to connect Harness with the platform hosting your target cluster. The account you use to connect them must provide the necessary credentials to change the target cluster. + +For GCP, the service account used with the GCP Cloud Provider requires the **Kubernetes Engine Admin** (GKE Admin) role to get the Kubernetes master username and password. Harness also requires **Storage Object Viewer** permissions. + +For Azure Kubernetes Services (AKS), the Client ID (Application ID) must be assigned to a role that has the Owner permission on the AKS cluster. If you are using the Kubernetes Cloud Provider and the Kubernetes Delegate in the AKS cluster, then AKS permissions are not required at all. This is recommended. + +### Step 1: Install the Harness Kubernetes Delegate in Your Target Cluster + +1. In Harness, click **Setup**, and then click **Harness Delegates**. +2. Click **Download Delegate** and then click **Kubernetes YAML**. +3. 
In the **Delegate Setup** dialog, enter a name for the Delegate, such as **doc-example**, select a Profile (the default is named **Primary**), and click **Download**. The YAML file is downloaded to your machine.
+4. Install the Delegate in your cluster. You can copy the YAML file to your cluster any way you choose, but the following steps describe a common method.
+	1. In a Terminal, connect to the Kubernetes cluster, and then use the same terminal to navigate to the folder where you downloaded the Harness Delegate YAML file. For example, **cd ~/Downloads**.
+	2. Extract the YAML file: `tar -zxvf harness-delegate-kubernetes.tar.gz`.
+	3. Navigate to the harness-delegate folder that was created:
+	```
+	cd harness-delegate-kubernetes
+	```
+	4. Paste the following installation command into the Terminal and press enter:
+	```
+	kubectl apply -f harness-delegate.yaml
+	```
+	You will see the following output (this Delegate is named **doc-example**):
+	```
+	namespace/harness-delegate created
+
+	clusterrolebinding.rbac.authorization.k8s.io/harness-delegate-cluster-admin created
+
+	statefulset.apps/doc-example-lnfzrf created
+	```
+	5. Run this command to verify that the Delegate pod was created:
+	```
+	kubectl get pods -n harness-delegate
+	```
+
+You will see output with the status Pending. The Pending status simply means that the cluster is still loading the pod.
+
+Wait a few moments for the cluster to finish loading the pod and for the Delegate to connect to Harness Manager.
+
+In Harness Manager, in the **Harness Delegates** page, the new Delegate will appear. You can refresh the page if you like.
+
+[![](./static/connect-to-your-target-kubernetes-platform-53.png)](./static/connect-to-your-target-kubernetes-platform-53.png)
+
+Note the **Delegate name**. You will use this name when you set up the Kubernetes Cluster Cloud Provider. 
+
+When you onboard your own applications, you might need to install multiple Delegates, depending on their workloads, network segmentation, and firewall zones. Typically, you will need one Delegate for every 300-500 service instances across your applications, and one Delegate in each subnet or zone.
+
+### Step 2: Choose a Kubernetes Cluster Cloud Provider or Platform Cloud Provider
+
+The Kubernetes Cluster Cloud Provider is platform-agnostic. Consequently, you can use it to access a cluster on any platform, but it cannot also access platform-specific services and resources.
+
+For example, if you have a Kubernetes cluster hosted in Google Cloud Platform (GCP), you can use the Kubernetes Cluster Cloud Provider to connect Harness to the cluster, but the Kubernetes Cluster Cloud Provider cannot also access Google Container Registry (GCR).
+
+In this case, you can use a single Google Cloud Platform Cloud Provider to access the GKE cluster and all other GCP resources you need, such as GCR, or you could set up a Kubernetes Cluster Cloud Provider for the GKE cluster, and a Google Cloud Platform Cloud Provider for all other GCP services and resources.
+
+No matter which option you choose, the Harness Kubernetes Delegate should be installed in your target cluster first.
+
+### Option 1: Add a Kubernetes Cluster Cloud Provider
+
+The following steps describe how to use the credentials of the Harness Kubernetes Delegate installed in your target cluster. If you use this option, the service account you use must have the Kubernetes **cluster-admin** role.
+
+1. In **Harness Manager**, click **Setup**.
+2. Click **Cloud Providers**. On the **Cloud Providers** page, click **Add Cloud Provider**. The **Cloud Provider** dialog appears.
+3. Enter the following settings:
+	* **Type:** Select Kubernetes Cluster.
+	* **Display Name:** Enter a name. You will use this name later to select this Cloud Provider when you create a Harness Infrastructure Definition. 
+	* **Inherit from selected Delegate:** Enable this option. Since the Delegate is already installed in the target cluster, you can use the Delegate's credentials for this Cloud Provider. This is the recommended configuration.
+	* **Delegate Name:** Select the name of the Delegate you installed in your cluster.
+	* **Skip Validation:** Enable this option for the setup of this Cloud Provider. Later, when you create an Infrastructure Definition, Harness will need to validate, so you can disable this setting then.
+	* **Usage Scope:** Use this setting to limit the use of the Cloud Provider to specific Harness Applications and Environments.
+4. Click **Test**. Verify that the `The test was successful` message appears, and then click **Submit**.
+
+Your Kubernetes Cloud Provider is set up. Now Harness can use the Cloud Provider to perform operations in your cluster.
+
+### Option 2: Add a Platform-Specific Cloud Provider
+
+For steps on setting up a Platform-Specific Cloud Provider, see [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers).
+
+### Notes
+
+* **Helm Client Only Mode** — When you use a remote Helm chart in your Harness Service, you do not need to have Tiller installed on the Harness Delegate because Harness interacts directly with the Kubernetes API server to install, upgrade, query, and remove Kubernetes resources. The Helm client is used to fetch charts from the repository and render a template. Consequently, when you install Helm on the Harness Delegate pod you can use the client-only option, as described in [Common Delegate Profile Scripts](https://docs.harness.io/article/nxhlbmbgkj-common-delegate-profile-scripts). 
+ +### Next Steps + +* [Add Container Images for Kubernetes Deployments](add-container-images-for-kubernetes-deployments.md) + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/create-a-kubernetes-blue-green-deployment.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/create-a-kubernetes-blue-green-deployment.md new file mode 100644 index 00000000000..5a2d2d7a0ec --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/create-a-kubernetes-blue-green-deployment.md @@ -0,0 +1,506 @@ +--- +title: Create a Kubernetes Blue/Green Deployment (FirstGen) +description: Create a Blue/Green Workflow for a Deployment workload. +sidebar_position: 240 +helpdocs_topic_id: ukftzrngr1 +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic will walk you through creating a Blue/Green Workflow in Harness for a Deployment workload. + +For information on Blue/Green deployments, see [Deployment Concepts and Strategies](../concepts-cd/deployment-types/deployment-concepts-and-strategies.md). + +### Before You Begin + +Ensure you are familiar with the following: + +* [Kubernetes Quickstart](https://docs.harness.io/article/7in9z2boh6-kubernetes-quickstart) +* [Kubernetes Deployments Overview](../concepts-cd/deployment-types/kubernetes-overview.md) +* [Define Kubernetes Manifests](define-kubernetes-manifests.md) +* [Define Your Kubernetes Target Infrastructure](define-your-kubernetes-target-infrastructure.md) + +### Review: What Workloads Can I Deploy? + +Harness Canary and Blue/Green Workflow default steps support a single Deployment workload as a **managed** entity. + +In Harness, a **managed** workload is a Deployment, StatefulSet, or DaemonSet object deployed and managed to steady state. + +Rolling Workflow default steps support Deployment, StatefulSet, or DaemonSet as **managed** workloads, but not Jobs. 
+
+You can deploy any Kubernetes workload in any Workflow type by using a Harness [annotation](https://docs.harness.io/article/ttn8acijrz-versioning-and-annotations#annotations) to make it unmanaged (`harness.io/direct-apply`).
+
+The [Apply Step](deploy-manifests-separately-using-apply-step.md) can deploy any workloads or objects in any Workflow type as a managed workload.
+
+**OpenShift:** See [Using OpenShift with Harness Kubernetes](using-open-shift-with-harness-kubernetes.md).
+
+### Review: Harness Blue Green Deployments
+
+Here's a quick summary of how Harness performs Blue Green deployments.
+
+Only one Kubernetes service is mandatory and it doesn’t need any annotations to establish if it is the primary (production) service.
+
+Here is a very generic service example that uses a values.yaml file for its values:
+
+
+```
+apiVersion: v1
+kind: Service
+metadata:
+  name: {{.Values.name}}-svc
+spec:
+  type: {{.Values.serviceType}}
+  ports:
+  - port: {{.Values.servicePort}}
+    targetPort: {{.Values.serviceTargetPort}}
+    protocol: TCP
+  selector:
+    app: {{.Values.name}}
+```
+Note that there are no annotations to indicate that it is the primary service. Harness will add this later.
+
+If you have more than one service, Harness does not automatically know which is the primary service unless you add the annotations described below. If you use two services, please annotate them as described below.
+1. **First deployment:**
+	1. Harness creates two services (primary and stage) and one pod set for the app.
+	2. The primary service is given this annotation:
+	`annotations: harness.io/primary-service: "true"`
+	3. The stage service is given this annotation:
+	`annotations: harness.io/stage-service: "true"`
+	4. The pod set is given an annotation of `harness.io/color: blue`.
+	5. Harness points the stage service at the pod set and verifies that the set reached steady state.
+	6. Harness swaps the primary service to the pod set. Production traffic now flows to the app.
+2. 
**Second deployment (new version of the same app):**
+	1. Harness creates a new pod set for the new app version. The pod set is given the annotation `harness.io/color: green`.
+	2. Harness points the stage service at the new pod set (with the new app version) and verifies that the set reached steady state.
+	3. Harness swaps the primary service to the new pod set, and the stage service to the old pod set.
+3. **Third deployment:**
+	1. Harness deploys the new app version to the pod set that is not in use by the primary service.
+	2. Harness points the stage service at the new pod set (with the new app version) and verifies that the set reached steady state.
+	3. Harness swaps the primary service to the new pod set, and the stage service to the old pod set.
+
+### Visual Summary
+
+Here's an example of what your Blue/Green deployment will look like:
+
+![](./static/create-a-kubernetes-blue-green-deployment-217.gif)
+
+### Step 1: Create the Harness Kubernetes Service
+
+A Harness Service is different from a Kubernetes service. A Harness Service includes the manifests and container used for deployment. A Kubernetes service enables applications running in a Kubernetes cluster to find and communicate with each other, and the outside world. To avoid confusion, a Harness Service is always capitalized in Harness documentation. A Kubernetes service is not.
+
+1. In Harness, click **Setup**, and then click **Add Application**.
+2. Enter a name for the Application and click **Submit**.
+3. Click **Services**, and then click **Add Service**. The **Add Service** settings appear.
+
+ [![](./static/create-a-kubernetes-blue-green-deployment-218.png)](./static/create-a-kubernetes-blue-green-deployment-218.png)
+
+4. In **Name**, enter a name for the Service.
+5. In **Deployment Type**, select **Kubernetes**, and then ensure **Enable Kubernetes V2** is selected.
+6. Click **Submit**. The new Harness Kubernetes Service is created. 
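The generic service template shown in the review section above reads its values from a values.yaml file. A minimal sketch of such a file follows; the field values are illustrative assumptions, not Harness defaults:

```
# Hypothetical values.yaml for the generic service template above.
# All values here are example assumptions; use your own names and ports.
name: bg-demo-app            # used for the manifest name prefix and the app selector
serviceType: ClusterIP       # rendered into spec.type
servicePort: 80              # port exposed by the service
serviceTargetPort: 80        # container port the service forwards to
```
With a file like this, the template renders a service named `bg-demo-app-svc` that selects pods labeled `app: bg-demo-app`.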
+
+### Step 2: Provide Manifests
+
+When you create a Harness Service for a Blue/Green deployment, you need to include a manifest for one Kubernetes service.
+
+You must also provide a manifest for your Kubernetes Deployment object.
+
+The default manifest provided by Harness will work fine.
+
+That is all that is needed to set up a simple Harness Service for Kubernetes Blue/Green deployment.
+
+There are no Harness Infrastructure Definition settings specific to Kubernetes Blue/Green deployment. Create or use the Infrastructure Definition that targets your cluster, as described in [Define Your Kubernetes Target Infrastructure](define-your-kubernetes-target-infrastructure.md).
+
+### Step 3: Create the Workflow
+
+When you create a Harness Kubernetes Workflow for Blue/Green deployment, Harness automatically generates the steps for setting up the Kubernetes services you defined in your Harness Service, and for swapping the Kubernetes services between the new and old containers.
+
+To create a Kubernetes Blue/Green Workflow, do the following:
+
+1. In your Application, click **Workflows**.
+2. Click **Add Workflow**. The **Workflow** dialog appears.
+3. In **Name**, enter a name for your Workflow.
+4. In **Workflow Type**, select **Blue/Green Deployment**.
+5. In **Environment**, select the Environment you created for your Kubernetes deployment.
+6. In **Service**, select the Service containing manifests for the primary and stage Kubernetes services.
+7. In **Infrastructure Definition**, select the Infrastructure Definition where you want to deploy. As stated earlier, there are no Harness Infrastructure Definition settings specific to Kubernetes Blue/Green deployment.
+When you are finished, the Workflow dialog will look like this:![](./static/create-a-kubernetes-blue-green-deployment-220.png)
+8. Click **SUBMIT**. The new Workflow appears. 
+
+[![](./static/create-a-kubernetes-blue-green-deployment-221.png)](./static/create-a-kubernetes-blue-green-deployment-221.png)
+
+Let's look at each step in the Workflow and its deployment step logs.
+
+### Step 4: Stage Deployment Step
+
+The **Stage Deployment** step is added automatically when you create the Workflow.
+
+In the Blue/Green Workflow, click the **Stage Deployment** step.
+
+The **Stage Deployment** step has the following options.
+
+#### Manifest Options
+
+##### Export Manifest
+
+If you enable this option, Harness does the following at runtime:
+
+* Downloads manifests (if remote).
+* Renders manifests in logs.
+* Performs a dry run unless the **Skip Dry Run** option is enabled.
+* Exports the deployment manifests to the variable `${k8sResources.manifests}`.
+* **Does not deploy the manifests.** To deploy the manifests, you must add another Kubernetes step of the same type (Canary, Rolling, Apply, Stage Deployment) and enable the **Inherit Manifest** option to deploy a copy of the exported manifests.
+
+If **Export Manifest** is enabled, the manifests are not deployed. You can use the **Inherit Manifest** option in a subsequent Kubernetes step to deploy a copy of the exported manifests.
+
+The exported manifests can be written to storage on the Delegate where the step is run. For example, you can add a [Shell Script](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) step to echo and write the manifest to a file:
+
+
+```
+echo "${k8sResources.manifests}" > /opt/harness-delegate/test/canaryPlan
+```
+If you use `${k8sResources.manifests}` in a script, ensure that your script expects multiline output. You can use the `cat` command to concatenate the lines. If you have a third-party tool that checks compliance, it can use the exported manifests.
+
+To deploy the manifests, a copy of the exported manifests can be inherited by the next Kubernetes step (Canary, Rolling, Apply, Stage Deployment) using the **Inherit Manifest** option. 
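To see why the multiline note above matters, here is a self-contained sketch of handling a multiline value in a script. The `$MANIFESTS` variable and file paths below stand in for `${k8sResources.manifests}`, which Harness resolves at runtime:

```shell
# Simulate a multiline manifest value (Harness would supply the real one).
MANIFESTS='apiVersion: v1
kind: Service'

# Quote the variable so the newlines survive, then write it to a file.
printf '%s\n' "$MANIFESTS" > /tmp/exported-manifests.yaml

# Inspect the file; a compliance tool could read it from here.
cat /tmp/exported-manifests.yaml
grep -c '^kind:' /tmp/exported-manifests.yaml   # prints 1
```
If the variable were written unquoted, the shell would collapse the newlines into spaces, which is the usual cause of "single line" manifest output.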
+
+If **Export Manifest** is enabled in multiple Kubernetes steps of the same type in the same Workflow Phase, the last step overrides the exported manifests. This is important because the next Kubernetes step to inherit a copy of the exported manifests will only use the exported manifests from the last Kubernetes step with **Export Manifest** enabled.
+
+##### Inherit Manifest
+
+Enable this option to inherit and deploy a copy of the manifests exported from the previous Kubernetes step (Canary, Rolling, Apply, Stage Deployment) using the **Export Manifest** option.
+
+The **Inherit Manifest** option will only inherit the exported manifest from the last Kubernetes step of the same type and in the same Workflow Phase.
+
+For example, if you enable the **Inherit Manifest** option in a **Canary Deployment** step, then it will only inherit a copy of the manifests exported from the last **Canary Deployment** step with the **Export Manifest** option enabled in the same Workflow Phase.
+
+#### Skip Dry Run
+
+By default, Harness uses the `--dry-run` flag on the `kubectl apply` command during the **Initialize** step of this command, which prints the object that would be sent to the cluster without really sending it. If the **Skip Dry Run** option is selected, Harness will not use the `--dry-run` flag.
+
+#### Delegate Selector
+
+If your Workflow Infrastructure Definition's Cloud Provider uses a Delegate Selector (supported in Kubernetes Cluster and AWS Cloud Providers), then the Workflow uses the selected Delegate for all of its steps.
+
+In these cases, you shouldn't add a Delegate Selector to any step in the Workflow. The Workflow is already using a Selector via its Infrastructure Definition's Cloud Provider.
+
+If your Workflow Infrastructure Definition's Cloud Provider isn't using a Delegate Selector, and you want this Workflow step to use a specific Delegate, do the following:
+
+In **Delegate Selector**, select the Selector for the Delegate(s) you want to use. 
You add Selectors to Delegates to make sure that they're used to execute the command. For more information, see [Select Delegates for Specific Tasks with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors).
+
+Harness will use Delegates matching the Selectors you add.
+
+If you use one Selector, Harness will use any Delegate that has that Selector.
+
+If you select two Selectors, a Delegate must have both Selectors to be selected. That Delegate might also have other Selectors, but it must have the two you selected.
+
+You can use expressions for Harness built-in variables or Account Default variables in **Delegate Selectors**. When the variable expression is resolved at deployment runtime, it must match an existing Delegate Selector.
+
+For example, if you have a Delegate Selector **prod** and the Workflow is using an Environment also named **prod**, the Delegate Selector can be `${env.name}`. This is very useful when you match Delegate Selectors to Application component names such as Environments, Services, etc. It's also a way to template the Delegate Selector setting.
+
+#### Step Deployment
+
+The **Stage Deployment** step simply deploys the two Kubernetes services you have set up in the Harness Service **Manifests** section.
+
+[![](./static/create-a-kubernetes-blue-green-deployment-223.png)](./static/create-a-kubernetes-blue-green-deployment-223.png)
+
+When you look at the **Stage Deployment** step in Harness **Deployments**, you will see the following log sections.
+
+#### Initialize
+
+The Initialize stage initializes the two Kubernetes services you have set up in the Harness Service **Manifests** section (displayed earlier), primary and stage, validating their YAML.
+
+
+```
+Initializing.. 
+
+Manifests [Post template rendering] :
+
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: harness-example-svc-primary
+  annotations:
+    harness.io/primary-service: "true"
+  labels:
+    app: bg-demo-app
+spec:
+  type: ClusterIP
+  ports:
+  - port: 80
+    protocol: TCP
+  selector:
+    app: bg-demo-app
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: harness-example-svc-stage
+  annotations:
+    harness.io/stage-service: "true"
+  labels:
+    app: bg-demo-app
+spec:
+  type: ClusterIP
+  ports:
+  - port: 80
+    protocol: TCP
+  selector:
+    app: bg-demo-app
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: harness-example
+spec:
+  selector:
+    matchLabels:
+      app: bg-demo-app
+  replicas: 3
+  template:
+    metadata:
+      labels:
+        app: bg-demo-app
+    spec:
+      containers:
+      - name: my-nginx
+        image: nginx
+        ports:
+        - containerPort: 80
+
+
+Validating manifests with Dry Run
+
+kubectl --kubeconfig=config apply --filename=manifests-dry-run.yaml --dry-run
+service/harness-example-svc-primary configured (dry run)
+service/harness-example-svc-stage configured (dry run)
+deployment.apps/harness-example created (dry run)
+
+Done.
+```
+#### Prepare
+
+Typically, in the **Prepare** section, you can see that each release of the resources is versioned. This is used in case Harness needs to roll back to a previous version.
+
+See [Kubernetes Rollback](https://docs.harness.io/article/v41e8oo00e-kubernetes-rollback). In the case of Blue/Green, the resources are not versioned because a Blue/Green deployment uses **rapid rollback**: network traffic is simply routed back to the original instances. You do not need to redeploy previous versions of the service/artifact and the instances that comprised their environment.
+
+
+```
+Manifests processed. 
Found following resources: 
+
+Kind         Name                            Versioned
+Service      harness-example-svc-primary     false
+Service      harness-example-svc-stage       false
+Deployment   harness-example                 false
+
+Primary Service is at color: blue
+Stage Service is at color: green
+
+Cleaning up non primary releases
+
+Current release number is: 2
+
+Versioning resources.
+
+Workload to deploy is: Deployment/harness-example-green
+
+Done.
+```
+#### Apply
+
+The Apply section applies a combination of all of the manifests in the Service **Manifests** section as one file using `kubectl apply`.
+
+
+```
+kubectl --kubeconfig=config apply --filename=manifests.yaml --record
+
+service/harness-example-svc-primary configured
+service/harness-example-svc-stage configured
+deployment.apps/harness-example-blue configured
+
+Done.
+```
+#### Wait for Steady State
+
+The Wait for Steady State section displays the blue service rollout event.
+
+
+```
+kubectl --kubeconfig=config get events --output=custom-columns=KIND:involvedObject.kind,NAME:.involvedObject.name,MESSAGE:.message,REASON:.reason --watch-only
+
+kubectl --kubeconfig=config rollout status Deployment/harness-example-blue --watch=true
+
+Status : deployment "harness-example-blue" successfully rolled out
+
+Done.
+```
+Next, the **Swap Primary with Stage** Workflow step will swap the blue and green services to route primary network traffic to the new version of the container, and stage network traffic to the old version.
+
+### Step 5: Swap Primary with Stage Step
+
+In the Blue/Green Workflow, click the **Swap Primary with Stage** step.
+
+![](./static/create-a-kubernetes-blue-green-deployment-225.png)
+
+For the **Delegate Selector** setting, see [Delegate Selector](#delegate_selector) above.
+
+You can see that the primary Kubernetes service is represented by the variable `${k8s.primaryServiceName}`, and the stage service by the variable `${k8s.stageServiceName}`. You can see how the swap works in the **Swap Primary with Stage** step in Harness Deployments. 
+
+[![](./static/create-a-kubernetes-blue-green-deployment-226.png)](./static/create-a-kubernetes-blue-green-deployment-226.png)
+
+Here is the log for the step, where the mandatory Selectors you used in the Harness Service **Manifests** files are used.
+
+
+```
+Begin execution of command Kubernetes Swap Service Selectors
+
+Selectors for Service One : [name:harness-example-svc-primary]
+app: bg-demo-app
+harness.io/color: green
+
+Selectors for Service Two : [name:harness-example-svc-stage]
+app: bg-demo-app
+harness.io/color: blue
+
+Swapping Service Selectors..
+
+Updated Selectors for Service One : [name:harness-example-svc-primary]
+app: bg-demo-app
+harness.io/color: blue
+
+Updated Selectors for Service Two : [name:harness-example-svc-stage]
+app: bg-demo-app
+harness.io/color: green
+
+Done
+```
+The **Swap Primary with Stage** command is simply the **Swap Service Selectors** command renamed to **Swap Primary with Stage** for this Workflow type. You can use **Swap Service Selectors** to swap the pods referred to by any two Kubernetes services. You simply put the expressions for any two services (`${k8s.primaryServiceName}`, `${k8s.stageServiceName}`) and they will be swapped. For example, you can have a Blue/Green deployment Workflow to swap services and then a separate Workflow that uses the **Swap Service Selectors** command to manually swap back when needed.
+
+### Example: Blue/Green Workflow Deployment
+
+Now that the setup is complete, you can click **Deploy** in the Workflow to deploy the artifact to your cluster.
+
+[![](./static/create-a-kubernetes-blue-green-deployment-228.png)](./static/create-a-kubernetes-blue-green-deployment-228.png)
+
+Next, select the artifact build version and click **SUBMIT**.
+
+[![](./static/create-a-kubernetes-blue-green-deployment-230.png)](./static/create-a-kubernetes-blue-green-deployment-230.png)
+
+The Workflow is deployed. The swap is complete and the Blue/Green deployment was a success. 
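Conceptually, the swap exchanges the `harness.io/color` value in each service's selector. Using the values from the log above, the primary service after the swap would look like the following sketch (reconstructed here for illustration; this is not Harness output):

```
apiVersion: v1
kind: Service
metadata:
  name: harness-example-svc-primary
  annotations:
    harness.io/primary-service: "true"
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: bg-demo-app
    harness.io/color: blue   # was "green" before the swap
```
Because only the selector changes, the pods themselves are untouched; traffic shifts as soon as the service matches the other pod set.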
+
+On the Harness **Deployments** page, expand the Workflow steps and click the **Swap Primary with Stage** step.
+
+[![](./static/create-a-kubernetes-blue-green-deployment-232.png)](./static/create-a-kubernetes-blue-green-deployment-232.png)
+
+In the **Details** section, click the vertical ellipsis and click **View Execution Context**.
+
+[![](./static/create-a-kubernetes-blue-green-deployment-234.png)](./static/create-a-kubernetes-blue-green-deployment-234.png)
+
+You can see the names of the primary and stage services deployed.
+
+[![](./static/create-a-kubernetes-blue-green-deployment-236.png)](./static/create-a-kubernetes-blue-green-deployment-236.png)
+
+Now that you have successfully deployed your artifact to your Kubernetes cluster pods using your Harness Application, look at the completed workload in the deployment environment of your Kubernetes cluster.
+
+For example, here is the Blue/Green workload in Google Cloud Kubernetes Engine, displaying the blue and green services and Deployment workload:
+
+[![](./static/create-a-kubernetes-blue-green-deployment-238.png)](./static/create-a-kubernetes-blue-green-deployment-238.png)
+
+If you click a workload, you will see the pods and service created:
+
+[![](./static/create-a-kubernetes-blue-green-deployment-240.png)](./static/create-a-kubernetes-blue-green-deployment-240.png)
+
+### Option: Scale Down Old Version
+
+A great benefit of a Blue/Green deployment is rapid rollback: rolling back to the old version of a service/artifact is simple and reliable because network traffic is simply routed back to the original instances. You do not need to redeploy previous versions of the service/artifact and the instances that comprised their environment. 
+
+#### Scale Down Example
+
+If you would like to scale down the old version **for one service**, add a [Shell Script step](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) to the Post-deployment steps of your Workflow, for example:
+
+
+```
+kubectl scale deploy -n ${infra.kubernetes.namespace} $(kubectl get deploy -n ${infra.kubernetes.namespace} -o jsonpath='{.items[?(@.spec.selector.matchLabels.harness\.io/color=="'$(kubectl get service/${k8s.stageServiceName} -n ${infra.kubernetes.namespace} -o jsonpath='{.spec.selector.harness\.io/color}')'")].metadata.name}') --replicas=0
+```
+If you use a Delegate installed outside of the target cluster, any scripts in your Pipeline need to use the `${HARNESS_KUBE_CONFIG_PATH}` expression to reference the path to a Harness-generated kubeconfig file containing the credentials you provided (`export KUBECONFIG=${HARNESS_KUBE_CONFIG_PATH}`).
+
+For example:
+
+
+```
+export KUBECONFIG=${HARNESS_KUBE_CONFIG_PATH}
+kubectl scale deploy -n ${infra.kubernetes.namespace} $(kubectl get deploy -n ${infra.kubernetes.namespace} -o jsonpath='{.items[?(@.spec.selector.matchLabels.harness\.io/color=="'$(kubectl get service/${k8s.stageServiceName} -n ${infra.kubernetes.namespace} -o jsonpath='{.spec.selector.harness\.io/color}')'")].metadata.name}') --replicas=0
+```
+This example is not intended for namespaces that contain multiple deployments. If you use it when multiple deployments share the namespace, it will scale down every deployment whose color label matches. In that case, also include a label (or another matchSelector) specific to the particular deployment, so it doesn’t scale down all the blue deployments in the namespace. For example, match `blue` and `my-specific-app`.
+
+### Option: Using the Horizontal Pod Autoscaler (HPA)
+
+If you are using the Horizontal Pod Autoscaler with your deployment, create a `blue` and `green` HPA configuration that will point at your deployments. 
+ +templates/hpa-blue.yaml: + + +``` +apiVersion: autoscaling/v2beta2 +kind: HorizontalPodAutoscaler +metadata: + name: {{.Values.name}}-blue + labels: + harness.io/color: blue +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: {{.Values.name}}-blue + minReplicas: {{ .Values.autoscaling.minReplicas }} + maxReplicas: {{ .Values.autoscaling.maxReplicas }} + metrics: + {{- toYaml .Values.autoscaling.metrics | nindent 4 }} +``` +templates/hpa-green.yaml: + + +``` +apiVersion: autoscaling/v2beta2 +kind: HorizontalPodAutoscaler +metadata: + name: {{.Values.name}}-green + labels: + harness.io/color: green +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: {{.Values.name}}-green + minReplicas: {{ .Values.autoscaling.minReplicas }} + maxReplicas: {{ .Values.autoscaling.maxReplicas }} + metrics: + {{- toYaml .Values.autoscaling.metrics | nindent 4 }} +``` +You can add your scaling configuration to your manifest (or share it if you are using a Helm chart): + + +``` +autoscaling: + minReplicas: 1 + maxReplicas: 5 + metrics: + - type: Resource + resource: + name: cpu + target: + type: Utilization + averageUtilization: 20 + - type: Resource + resource: + name: memory + target: + type: Utilization + averageUtilization: 20 +``` +When using this with a traffic splitting strategy, your pods will scale automatically as your new pods begin receiving heavier loads. + +### Kubernetes Rollback + +See [Kubernetes Rollback](https://docs.harness.io/article/v41e8oo00e-kubernetes-rollback). + +### Notes + +* **Blue/Green Rollback** — A great benefit of a Blue/Green deployment is rapid rollback: rolling back to the old version of a service/artifact is simple and reliable because network traffic is simply routed back to the original instances. You do not need to redeploy previous versions of the service/artifact and the instances that comprised their environment. 
+* The **Swap Primary with Stage** command is simply the **Swap Service Selectors** command renamed to **Swap Primary with Stage** for this Workflow type. You can use **Swap Service Selectors** to swap any two Kubernetes services that include the primary and stage selectors. You simply put the expressions for any two services (`${k8s.primaryServiceName}`, `${k8s.stageServiceName}`) in **Swap Service Selectors** and they will be swapped. For example, you can have a Blue/Green deployment Workflow to swap services and then a separate Workflow that uses the **Swap Service Selectors** command to manually swap back when needed.
+
+### Next Steps
+
+* [Create a Kubernetes Rolling Deployment](create-a-kubernetes-rolling-deployment.md)
+
diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/create-a-kubernetes-canary-deployment.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/create-a-kubernetes-canary-deployment.md
new file mode 100644
index 00000000000..b6f63028e92
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/create-a-kubernetes-canary-deployment.md
@@ -0,0 +1,595 @@
+---
+title: Create a Kubernetes Canary Deployment (FirstGen)
+description: Creating a Kubernetes Canary Workflow in Harness.
+sidebar_position: 220
+helpdocs_topic_id: 2xp0oyubjj
+helpdocs_category_id: n03qfofd5w
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This topic will walk you through creating a Canary Workflow in Harness for a Deployment workload.
+
+### Before You Begin
+
+While you can add multiple phases to a Kubernetes Canary Workflow, you should simply use the Canary and Primary Phases generated by Harness when you add the first two phases. Kubernetes deployments have built-in controls for rolling out in a controlled way. 
The Canary Phase is a way to test the new build, run your verification, then roll out to the Primary Phase.
+
+A Harness Canary Workflow for Kubernetes is a little different than a typical [Canary deployment](../concepts-cd/deployment-types/deployment-concepts-and-strategies.md).
+
+This is a standard Canary deployment:
+
+![](./static/create-a-kubernetes-canary-deployment-02.png)
+
+Harness does this a little differently:
+
+![](./static/create-a-kubernetes-canary-deployment-03.png)
+
+In a typical Canary deployment, all nodes in a single environment are incrementally updated in small phases, with each phase requiring a verification/gate to proceed to the next phase.
+
+This typical method isn't needed for Kubernetes because Kubernetes includes Rolling Update. Rolling Update is a built-in control for rolling out in a controlled way. It incrementally updates pod instances with new ones. New pods are scheduled on nodes with available resources.
+
+A Harness Kubernetes Canary Workflow uses two phases, a Canary and a Kubernetes Rolling Update:
+
+1. **Phase 1:** Harness creates a Canary version of the Kubernetes Deployment object defined in your Service **Manifests** section. Once that Deployment is verified, the Workflow deletes it by default.
+Harness provides a Canary Phase as a way to test the new build, run your verification, then roll out to the subsequent Rolling Update phase.
+2. **Phase 2:** Run the actual deployment using a Kubernetes Rolling Update with the number of pods you specify in the Service **Manifests** files (for example, `replicas: 3`).
+
+When you add phases to a Kubernetes Canary Workflow, Harness automatically generates the steps for Canary and Primary phases. You simply need to configure them.
+
+If you are new to Kubernetes RollingUpdate deployments, see [Performing a Rolling Update](https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/) from Kubernetes. 
That guide summarizes Rolling Update and provides an interactive, online tutorial.
+
+Although it is not covered here, you can also scale your Workloads between the Canary and Rolling Update phases if you like. You simply add a new Phase and use the Scale step. See [Scale Kubernetes Pods](scale-kubernetes-pods.md).
+
+### Review: What Workloads Can I Deploy?
+
+See [What Can I Deploy in Kubernetes?](https://docs.harness.io/article/6ujb3c70fh).
+
+### Review: Manifest and Canary Phases
+
+Harness pulls the manifests for each Phase in the Canary Workflow. The Canary Phase fetches the manifests, and then when the Primary Phase is initiated the manifests are pulled again.
+
+To ensure that the identical manifest is deployed in both the Canary and Primary phases, use the **Specific Commit ID** option when selecting manifests. See [Link Resource Files or Helm Charts in Git Repos](link-resource-files-or-helm-charts-in-git-repos.md).
+
+If you use the **Latest from Branch** option, when Harness fetches the manifest for each phase there is the possibility that the manifest could change between fetches for the Canary and Primary phases.
+
+### Step 1: Create the Workflow
+
+Create the Harness Kubernetes Canary Workflow that will perform the Canary deployment.
+
+1. In your Application, click **Workflows**.
+2. Click **Add Workflow**.
+3. In **Name**, enter a name for your Workflow.
+4. In **Workflow Type**, select **Canary Deployment**.
+5. In **Environment**, select the Environment you created for your Kubernetes deployment. You will pick Infrastructure Definitions from this Environment when you create Phases in the Workflow.
+6. Click **SUBMIT**. By default, the new Canary Workflow does not have any phases pre-configured.
+
+### Step 2: Create the Canary Phase
+
+The **Canary Phase** creates a Canary deployment using your Service **Manifests** files and the number of pods you specify in the Workflow's **Canary Deployment** step. 
+ +To add the Canary Phase, do the following: + +1. In **Deployment Phases**, click **Add Phase**. The **Workflow Phase** dialog appears. + [![](./static/create-a-kubernetes-canary-deployment-04.png)](./static/create-a-kubernetes-canary-deployment-04.png) +2. In **Service**, select the **Service** where you set up your Kubernetes configuration files. +3. In Infrastructure Definition, select the Infrastructure Definition where you want this Workflow Phase to deploy your Kubernetes objects. This is the Infrastructure Definition with the Kubernetes cluster and namespace for this Phase's deployment. +4. In **Service Variable Overrides**, you can add a variable to overwrite any variable in the Service you selected. Ensure that the variable names are identical. This is the same process described for overwriting variables in  [Override Harness Kubernetes Service Settings](override-harness-kubernetes-service-settings.md). +5. Click **SUBMIT**. The new Phase is created. + +You'll notice the Phase is titled **Canary Deployment**. You can change the name of any Phase by editing it and entering a new name in the **Name** setting. + +Let's look at the default settings for this first Phase of a Canary Workflow. + +### Step 3: Canary Deployment Step + +Click the Phase 1 step, named **Canary Deployment**. The **Canary Deployment** step dialog appears. + +![](./static/create-a-kubernetes-canary-deployment-06.png) + +In this step, you will define how many pods are deployed for a Canary test of the configuration files in your Service **Manifests** section. + +1. In **Instance Unit Type**, click **COUNT** or **PERCENTAGE**. +2. In **Instances**, enter the number of pods to deploy. +* If you selected **COUNT** in **Instance Unit Type**, this is simply the number of pods. +* If you selected **PERCENTAGE**, enter a percentage of the pods defined in your Service **Manifests** files to deploy. 
For example, if you have `replicas: 4` in a manifest in Service, and you enter **50** in **Instances**, then 2 pods are deployed in this Phase step.
+
+#### Delegate Selector
+
+If your Workflow Infrastructure Definition's Cloud Provider uses a Delegate Selector (supported in Kubernetes Cluster and AWS Cloud Providers), then the Workflow uses the selected Delegate for all of its steps.
+
+In these cases, you should not add a Delegate Selector to any step in the Workflow. The Workflow is already using a Selector via its Infrastructure Definition's Cloud Provider.
+
+If your Workflow Infrastructure Definition's Cloud Provider is not using a Delegate Selector, and you want this Workflow step to use a specific Delegate, do the following:
+
+In **Delegate Selector**, select the Selector for the Delegate(s) you want to use. You add Selectors to Delegates to ensure that they are used to execute the command. For more information, see [Select Delegates for Specific Tasks with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors).
+
+Harness will use Delegates matching the Selectors you add.
+
+If you use one Selector, Harness will use any Delegate that has that Selector.
+
+If you select two Selectors, a Delegate must have both Selectors to be selected. That Delegate might also have other Selectors, but it must have the two you selected.
+
+You can use expressions for Harness built-in variables or Account Default variables in **Delegate Selectors**. When the variable expression is resolved at deployment runtime, it must match an existing Delegate Selector.
+
+For example, if you have a Delegate Selector **prod** and the Workflow is using an Environment also named **prod**, the Delegate Selector can be `${env.name}`. This is very useful when you match Delegate Selectors to Application component names, such as Environments, Services, etc. 
It is also a way to template the Delegate Selector setting.
+
+#### Manifest Options
+
+##### Export Manifest
+
+If you enable this option, Harness does the following at runtime:
+
+* Downloads manifests (if remote).
+* Renders manifests in logs.
+* Performs a dry run unless the **Skip Dry Run** option is enabled.
+* Exports the deployment manifests to the variable `${k8sResources.manifests}`.
+* **Does not deploy the manifests.** To deploy the manifests, you must add another Kubernetes step of the same type (Canary, Rolling, Apply, Stage Deployment) and enable the **Inherit Manifest** option to deploy a copy of the exported manifests.
+
+If **Export Manifest** is enabled, the manifests are not deployed. You can use the **Inherit Manifest** option in a subsequent Kubernetes step to deploy a copy of the exported manifests.
+
+The exported manifests can be written to storage on the Delegate where the step is run. For example, you can add a [Shell Script](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) step to echo and write the manifest to a file:
+
+
+```
+echo "${k8sResources.manifests}" > /opt/harness-delegate/test/canaryPlan
+```
+If you use `${k8sResources.manifests}` in a script, ensure that your script expects multiline output. You can use the `cat` command to concatenate the lines.
+
+If you have a third-party tool that checks compliance, it can use the exported manifests.
+
+To deploy the manifests, a copy of the exported manifests can be inherited by the next Kubernetes step (Canary, Rolling, Apply, Stage Deployment) using the **Inherit Manifest** option.
+
+If **Export Manifest** is enabled in multiple Kubernetes steps of the same type in the same Workflow Phase, the last step overrides the exported manifests. This is important because the next Kubernetes step to inherit a copy of the exported manifests will only use the exported manifests from the last Kubernetes step with **Export Manifest** enabled. 
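As a minimal sketch of handling that multiline output, the script below uses a hard-coded `MANIFESTS` variable as an illustrative stand-in for the value Harness substitutes for `${k8sResources.manifests}` at runtime (the `/tmp/canaryPlan` path is also just an example):

```shell
# Illustrative stand-in for ${k8sResources.manifests}; at runtime Harness
# substitutes the rendered multiline manifests here.
MANIFESTS='apiVersion: v1
kind: ConfigMap
metadata:
  name: harness-example-config'

# Write the exported manifests to a file on the Delegate for later inspection
# (for example, by a compliance-checking tool).
OUT=/tmp/canaryPlan
echo "$MANIFESTS" > "$OUT"

# cat preserves the multiline content when piping it to another tool.
cat "$OUT"
```

Quoting the variable (`"$MANIFESTS"`) is what keeps the line breaks intact; unquoted expansion would collapse the manifest onto one line.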
+
+##### Inherit Manifest
+
+Enable this option to inherit and deploy a copy of the manifests exported from the previous Kubernetes step (Canary, Rolling, Apply, Stage Deployment) using the **Export Manifest** option.
+
+The **Inherit Manifest** option will only inherit the exported manifest from the last Kubernetes step of the same type and in the same Workflow Phase.
+
+For example, if you enable the **Inherit Manifest** option in a **Canary Deployment** step, then it will only inherit a copy of the manifests exported from the last **Canary Deployment** step with the **Export Manifest** option enabled in the same Workflow Phase.
+
+##### Inherit Manifest From Canary To Primary Phase
+
+Currently, this feature is behind the Feature Flag `MANIFEST_INHERIT_FROM_CANARY_TO_PRIMARY_PHASE` and is only applicable to Kubernetes deployments with a Canary Workflow phase. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+Harness pulls the manifests for each Phase in the Canary Workflow. The Canary Phase fetches the manifests, and then when the Primary Phase is initiated the manifests are pulled again.
+
+You can select one of the following options for **Commit ID** in the **Remote Manifests** settings:
+
+* **Latest from Branch** - Uses the latest commit ID for both Canary and Primary Phases.
+* **Specific Commit ID** - Uses the specific commit ID for both Canary and Primary Phases.
+
+![](./static/create-a-kubernetes-canary-deployment-07.png)
+
+To ensure that the identical manifest is deployed in both the Canary and Primary phases, the commit ID recorded in the Canary Phase is used when selecting manifests in the Primary Phase.
+
+![](./static/create-a-kubernetes-canary-deployment-08.png)
+
+To inherit manifests from the Canary to the Primary phase, you must select a **Manifest Format** that uses a Git repository. The following Manifest Formats use a Git repo:
+
+1. Kubernetes Resource Specs in YAML format
+2. Helm Chart from Source Repository
+3. Kustomization Configuration
+4. OpenShift Template
+  ![](./static/create-a-kubernetes-canary-deployment-09.png)
+
+#### Skip Dry Run
+
+By default, Harness uses the `--dry-run` flag on the `kubectl apply` command during the **Initialize** step of this command, which prints the object that would be sent to the cluster without really sending it. If the **Skip Dry Run** option is selected, Harness will not use the `--dry-run` flag.
+
+Phase 1 of the Canary Deployment Workflow is complete. Now the Workflow needs a Primary Phase to roll out the objects defined in the Service **Manifests** section.
+
+#### Canary Delete Step
+
+See [Delete Kubernetes Resources](delete-kubernetes-resources.md).
+
+#### Verifications and Canary Deployments
+
+When you add Harness Continuous Verification steps to a Canary Workflow, add them to the Canary Phase, not the Primary Phase.
+
+If the Canary Phase is verified, then the Primary Phase will proceed successfully. Adding Continuous Verification steps to the Primary Phase defeats the purpose of Canary Workflows, because the Canary Phase verifies the new deployment against previous deployments.
+
+#### Do Not Use Multiple Canary Deployment Steps
+
+Your Phase should only use one Canary Deployment step. If you use multiple Canary Deployment steps, the last step overrides all previous steps, rendering them useless.
+
+If you want to scale the pods deployed by the Canary Deployment step, use the [Scale](scale-kubernetes-pods.md) step.
+
+### Step 4: Create the Primary Phase using Rolling Update
+
+The Primary Phase runs the actual deployment as a rolling update with the number of pods you specify in the Service **Manifests** files (for example, `replicas: 3`).
+
+Similar to application-scaling, during a rolling update of a Deployment, the Kubernetes service will load-balance the traffic only to available pods (an instance that is available to the users of the application) during the update. 
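A pod counts as "available" once its readiness checks pass. As an illustrative sketch (the probe path, port, and timing values below are assumptions, not from the Harness docs), a readiness probe in the Deployment's container spec is what lets the Service route traffic only to ready pods during the update:

```
containers:
  - name: harness-example
    image: registry.hub.docker.com/library/nginx:stable-perl
    ports:
      - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```

Without a readiness probe, Kubernetes considers a container ready as soon as it starts, which can route traffic to pods that are still initializing.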
+
+To add the Primary Phase, do the following:
+
+1. In your **Workflow**, in **Deployment Phases**, under **Canary**, click **Add Phase**.
+   [![](./static/create-a-kubernetes-canary-deployment-10.png)](./static/create-a-kubernetes-canary-deployment-10.png)
+   The **Workflow Phase** dialog appears.
+2. In **Service**, select the same **Service** you selected in Phase 1.
+3. In Infrastructure Definition, select the same Infrastructure Definition you selected in Phase 1.
+4. In **Service Variable Overrides**, you can add a variable to overwrite any variable in the Service you selected. Ensure that the variable names are identical. This is the same process described for overwriting variables in [Override Harness Kubernetes Service Settings](override-harness-kubernetes-service-settings.md).
+5. Click **SUBMIT**. The new Phase is created.
+
+[![](./static/create-a-kubernetes-canary-deployment-12.png)](./static/create-a-kubernetes-canary-deployment-12.png)
+
+The Phase is named **Primary** automatically, and contains one step, **Rollout Deployment**.
+
+**Rollout Deployment** performs a rolling update. Rolling updates allow an update of a Deployment to take place with zero downtime by incrementally updating pod instances with new ones. The new pods are scheduled on nodes with available resources. The rolling update Deployment uses the number of pods you specified in the Service **Manifests** (number of replicas).
+
+### Example: Canary Workflow Deployment
+
+Let's look at how the Workflow steps deploy the workload.
+
+#### Canary Deployment Step in Deployment
+
+Let's look at an example where the **Canary Deployment** step is configured to deploy a **COUNT** of **2**. Here is the step in the Harness **Deployments** page:
+
+[![](./static/create-a-kubernetes-canary-deployment-14.png)](./static/create-a-kubernetes-canary-deployment-14.png)
+
+You can see **Target Instance Count 2** in the Details section.
+
+Below Details you can see the logs for the step. 
+
+[![](./static/create-a-kubernetes-canary-deployment-16.png)](./static/create-a-kubernetes-canary-deployment-16.png)
+
+Let's look at the **Prepare**, **Apply**, and **Wait for Steady State** sections of the step's deployment log, with comments added:
+
+##### Prepare
+
+Here is the log from the Prepare section:
+
+
+```
+Manifests processed. Found following resources:
+
+# API objects in manifest file
+
+Kind                Name                          Versioned
+ConfigMap           harness-example-config        true
+Deployment          harness-example-deployment    false
+
+# each deployment is versioned, this is the second deployment
+
+Current release number is: 2
+
+Versioning resources.
+
+# previous deployment
+
+Previous Successful Release is 1
+
+Cleaning up older and failed releases
+
+# existing number of pods
+
+Current replica count is 1
+
+# Deployment workload executed
+
+Canary Workload is: Deployment/harness-example-deployment-canary
+
+# number specified in Canary Deployment step Instance field
+
+Target replica count for Canary is 2
+
+Done.
+```
+The name of the Deployment workload in the Service **Manifests** file is **harness-example-deployment** (the name variable is `harness-example`):
+
+
+```
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: {{.Values.name}}-deployment
+```
+As you can see, Harness appends the name with **-canary**, **harness-example-deployment-canary**. This is to identify Canary Deployment step workloads in your cluster.
+
+The next section is **Apply**.
+
+##### Apply
+
+Here you will see the manifests in the Service **Manifests** section applied using kubectl as a single file, **manifests.yaml**.
+
+
+```
+# kubectl command to apply manifests
+
+kubectl --kubeconfig=config apply --filename=manifests.yaml --record
+
+# ConfigMap object created
+
+configmap/harness-example-config-2 created
+
+# Deployment object created
+
+deployment.apps/harness-example-deployment-canary created
+
+Done.
+```
+Next, Harness logs the steady state of the pods. 
+ +##### Wait for Steady State + +Harness displays the status of each pod deployed and confirms steady state. + + +``` +# kubectl command for get events + +kubectl --kubeconfig=config get events --output=custom-columns=KIND:involvedObject.kind,NAME:.involvedObject.name,MESSAGE:.message,REASON:.reason --watch-only + +# kubectl command for status + +kubectl --kubeconfig=config rollout status Deployment/harness-example-deployment-canary --watch=true + +# status of each pod + +Status : Waiting for deployment "harness-example-deployment-canary" rollout to finish: 0 of 2 updated replicas are available... +Event : Pod harness-example-deployment-canary-8675b5b8bf-98sf6 MountVolume.SetUp succeeded for volume "default-token-hwzdf" SuccessfulMountVolume +Event : Pod harness-example-deployment-canary-8675b5b8bf-rl2n8 pulling image "registry.hub.docker.com/library/nginx:stable-perl" Pulling +Event : Pod harness-example-deployment-canary-8675b5b8bf-98sf6 pulling image "registry.hub.docker.com/library/nginx:stable-perl" Pulling +Event : Pod harness-example-deployment-canary-8675b5b8bf-rl2n8 Successfully pulled image "registry.hub.docker.com/library/nginx:stable-perl" Pulled +Event : Pod harness-example-deployment-canary-8675b5b8bf-98sf6 Successfully pulled image "registry.hub.docker.com/library/nginx:stable-perl" Pulled +Event : Pod harness-example-deployment-canary-8675b5b8bf-rl2n8 Created container Created +Event : Pod harness-example-deployment-canary-8675b5b8bf-98sf6 Created container Created +Event : Pod harness-example-deployment-canary-8675b5b8bf-rl2n8 Started container Started +Event : Pod harness-example-deployment-canary-8675b5b8bf-98sf6 Started container Started + +Status : Waiting for deployment "harness-example-deployment-canary" rollout to finish: 1 of 2 updated replicas are available... + +# canary deployment step completed + +Status : deployment "harness-example-deployment-canary" successfully rolled out + +Done. 
+``` +##### Wrap Up + +The Wrap Up log is long and describes all of the container and pod information for the step, using the kubectl command: + + +``` +kubectl --kubeconfig=config describe --filename=manifests.yaml +``` +#### Primary Step in Deployment + +Let's look at an example where the **Primary** step deploys the Service **Manifests** objects. Here is the step in the Harness **Deployments** page: + +[![](./static/create-a-kubernetes-canary-deployment-18.png)](./static/create-a-kubernetes-canary-deployment-18.png) + +Before we look at the logs, let's look at the Service **Manifests** files it's deploying. + +Here is the values.yaml from our Service **Manifests** section: + + +``` +name: harness-example +replicas: 1 +image: ${artifact.metadata.image} +``` +Here is the spec.yaml from our Service **Manifests** section: + + +``` +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{.Values.name}}-config +data: + key: value +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: {{.Values.name}}-deployment +spec: + replicas: {{int .Values.replicas}} + selector: + matchLabels: + app: {{.Values.name}} + template: + metadata: + labels: + app: {{.Values.name}} + spec: + containers: + - name: {{.Values.name}} + image: {{.Values.image}} + envFrom: + - configMapRef: + name: {{.Values.name}}-config + ports: + - containerPort: 80 +``` +Let's look at the **Initialize**, **Prepare**, and **Apply** stages of the **Rollout Deployment**. + +##### Initialize + +In the **Initialize** section of the **Rollout Deployment** step, you can see the same object descriptions as the Service **Manifests** section: + + +``` +Initializing.. 
+ +Manifests [Post template rendering] : + +# displays the manifests taken from the Service Manifests section + +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: harness-example-config +data: + key: value +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: harness-example-deployment +spec: + replicas: 1 + selector: + matchLabels: + app: harness-example + template: + metadata: + labels: + app: harness-example + spec: + containers: + - name: harness-example + image: registry.hub.docker.com/library/nginx:stable-perl + envFrom: + - configMapRef: + name: harness-example-config + ports: + - containerPort: 80 + +# Validates the YAML syntax of the manifest with a dry run + +Validating manifests with Dry Run + +kubectl --kubeconfig=config apply --filename=manifests-dry-run.yaml --dry-run +configmap/harness-example-config created (dry run) +deployment.apps/harness-example-deployment configured (dry run) + +Done. + +``` +Now that Harness has ensured that manifests can be used, it will process the manifests. + +##### Prepare + +In the **Prepare** section, you can see that Harness versions the ConfigMap and Secret resources (for more information, see [Kubernetes Versioning and Annotations](https://docs.harness.io/article/ttn8acijrz-versioning-and-annotations)). + + +``` +Manifests processed. Found following resources: + +# determine if the resources are versioned + +Kind Name Versioned +ConfigMap harness-example-config true +Deployment harness-example-deployment false + +# indicates that these objects have been released before + +Current release number is: 2 + +Previous Successful Release is 1 + +# removed unneeded releases + +Cleaning up older and failed releases + +# identifies new Deployment workload + +Managed Workload is: Deployment/harness-example-deployment + +# versions the new release + +Versioning resources. + +Done. +``` +Now Harness can apply the manifests. + +##### Apply + +The Apply section shows the kubectl commands for applying your manifests. 
+
+
+```
+# the Service Manifests files are compiled into one file and applied
+
+kubectl --kubeconfig=config apply --filename=manifests.yaml --record
+
+# the objects applied
+
+configmap/harness-example-config-2 configured
+deployment.apps/harness-example-deployment configured
+
+Done.
+```
+Now that the manifests are applied, you can see the container and pod details described in **Wrap Up**.
+
+##### Wrap Up
+
+Wrap Up is long and uses a kubectl describe command to provide information on all containers and pods deployed:
+
+
+```
+kubectl --kubeconfig=config describe --filename=manifests.yaml
+```
+Here is a sample from the output that displays the Kubernetes RollingUpdate:
+
+
+```
+# Deployment name
+
+Name:                   harness-example-deployment
+
+# namespace from Deployment manifest
+
+Namespace:              default
+CreationTimestamp:      Wed, 13 Feb 2019 01:00:49 +0000
+Labels:
+Annotations:            deployment.kubernetes.io/revision: 2
+                        kubectl.kubernetes.io/last-applied-configuration:
+                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --kubeconfig=config --f...
+                        kubernetes.io/change-cause: kubectl apply --kubeconfig=config --filename=manifests.yaml --record=true
+
+# Selector applied
+
+Selector:               app=harness-example,harness.io/track=stable
+
+# number of replicas from the manifest
+
+Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
+
+# RollingUpdate strategy
+
+StrategyType:           RollingUpdate
+MinReadySeconds:        0
+
+# RollingUpdate progression
+
+RollingUpdateStrategy:  25% max unavailable, 25% max surge
+
+```
+As you look through the description in **Wrap Up** you can see the label added:
+
+
+```
+add label: harness.io/track=stable
+```
+You can use the `harness.io/track=stable` label with the values `canary` or `stable` as a selector for managing traffic to these pods, or for testing the pods. 
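As an illustrative sketch (the Service name is hypothetical, not from the Harness docs), a Service could combine the app label with the track label in its selector so that it only routes traffic to stable pods:

```
apiVersion: v1
kind: Service
metadata:
  name: harness-example-stable-svc
spec:
  selector:
    app: harness-example
    harness.io/track: stable
  ports:
    - port: 80
      targetPort: 80
```

Swapping `stable` for `canary` in the selector would instead target the canary pods, which is useful for smoke-testing them directly.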
For more information, see [Kubernetes Versioning and Annotations](https://docs.harness.io/article/ttn8acijrz-versioning-and-annotations). + +The Workflow is deployed. + +Now that you have successfully deployed your artifact to your Kubernetes cluster pods using your Harness Application, look at the completed workload in the deployment environment of your Kubernetes cluster. + +For example, here is the Deployment workload in Google Cloud Kubernetes Engine: + +[![](./static/create-a-kubernetes-canary-deployment-20.png)](./static/create-a-kubernetes-canary-deployment-20.png) + +Or you can simply connect to your cluster in a terminal and see the pod(s) deployed: + + +``` +john_doe@cloudshell:~ (project-15454)$ kubectl get pods +NAME READY STATUS RESTARTS AGE +harness-example-deployment-7df7559456-xdwg5 1/1 Running 0 9h +``` +### Kubernetes Rollback + +See [Kubernetes Rollback](https://docs.harness.io/article/v41e8oo00e-kubernetes-rollback). + +### Notes + +* If you are using the **Traffic Split** step or doing Istio traffic shifting using the **Apply step**, move the **Canary Delete** step from **Wrap Up** section of the **Canary** phase to the **Wrap Up** section of the Primary phase. +Moving the Canary Delete step to the Wrap Up section of the Primary phase will prevent any traffic from being routed to deleted pods before traffic is routed to stable pods in the Primary phase. +For more information, see [Set Up Kubernetes Traffic Splitting](set-up-kubernetes-traffic-splitting.md), [Delete Kubernetes Resources](delete-kubernetes-resources.md), and [Deploy Manifests Separately using Apply Step](deploy-manifests-separately-using-apply-step.md). +* Harness does not roll back Canary deployments because your production is not affected during Canary. Canary catches issues before moving to production. Also, you might want to analyze the Canary deployment. The Canary Delete step is useful to perform cleanup when required. 
+* **Instances Deployed** — In the **Deployments** page, the **Instances Deployed** label shows the total number of pods deployed in the entire deployment, including the Canary and Rollout steps.
+
+### Next Steps
+
+* [Create a Kubernetes Blue/Green Deployment](create-a-kubernetes-blue-green-deployment.md)
+* [Set Up Kubernetes Traffic Splitting](set-up-kubernetes-traffic-splitting.md)
+* [Delete Kubernetes Resources](delete-kubernetes-resources.md)
+* [Deploy Manifests Separately using Apply Step](deploy-manifests-separately-using-apply-step.md)
+* [Traffic Splitting Without Istio](traffic-splitting-without-istio.md)
+
diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/create-a-kubernetes-rolling-deployment.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/create-a-kubernetes-rolling-deployment.md
new file mode 100644
index 00000000000..6498a669973
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/create-a-kubernetes-rolling-deployment.md
@@ -0,0 +1,295 @@
+---
+title: Create a Kubernetes Rolling Deployment (FirstGen)
+description: Use the Kubernetes rolling update strategy.
+sidebar_position: 230
+helpdocs_topic_id: dl0l34ge8l
+helpdocs_category_id: n03qfofd5w
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/xsla71qg8t).
+
+A rolling update strategy updates Kubernetes deployments with zero downtime by incrementally updating pod instances with new ones. New pods are scheduled on nodes with available resources.
+
+This method is similar to a standard Canary strategy, but different from the Harness Kubernetes Canary strategy. The Harness Kubernetes Canary strategy uses a rolling update as its final phase. See [Create a Kubernetes Canary Deployment](create-a-kubernetes-canary-deployment.md) for more information. 
+ +For a detailed explanation of Kubernetes rolling updates, see [Performing a Rolling Update](https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/) from Kubernetes. + +### Before You Begin + +* [Define Kubernetes Manifests](define-kubernetes-manifests.md) +* [Define Your Kubernetes Target Infrastructure](define-your-kubernetes-target-infrastructure.md) + +### Review: What Workloads Can I Deploy? + +See [What Can I Deploy in Kubernetes?](https://docs.harness.io/article/6ujb3c70fh). + +#### Multiple Managed Workloads + +For Rolling Update deployments, you can deploy multiple managed workloads. + +For Canary and Blue/Green Workflow deployments, only one managed object may be deployed per Workflow by default. You can deploy additional objects using the [Apply Step](deploy-manifests-separately-using-apply-step.md), but it is typically used for deploying Jobs controllers. + +You can specify the multiple workload objects in a single manifest or in individual manifests, or any other arrangement.For example, here is a Service **Manifests** section with two Deployment objects, each in their own manifest: + +![](./static/create-a-kubernetes-rolling-deployment-104.png) + +Here is the log from the deployment, where you can see both Deployment objects deployed: + + +``` +apiVersion: apps/v1 +kind: Deployment +metadata: + name: anshul-multiple-workloads-deployment +spec: + replicas: 1 + selector: + matchLabels: + app: anshul-multiple-workloads + template: + metadata: + labels: + app: anshul-multiple-workloads + spec: + containers: + - name: anshul-multiple-workloads + image: registry.hub.docker.com/library/nginx:stable + envFrom: + - configMapRef: + name: anshul-multiple-workloads + - secretRef: + name: anshul-multiple-workloads +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: anshul-multiple-workloads-deployment-1 +spec: + replicas: 3 + selector: + matchLabels: + app: anshul-multiple-workloads + template: + metadata: + labels: + app: 
anshul-multiple-workloads
+    spec:
+      containers:
+      - name: anshul-multiple-workloads
+        image: registry.hub.docker.com/library/nginx:stable
+        envFrom:
+        - configMapRef:
+            name: anshul-multiple-workloads
+        - secretRef:
+            name: anshul-multiple-workloads
+```
+### Step 1: Define Rollout Strategy
+
+There are no mandatory Rolling Update–specific settings for manifests in the Harness Service. You can use any Kubernetes configuration in your Service **Manifests** section.
+
+The default Rolling Update strategy used by Harness is:
+
+
+```
+RollingUpdateStrategy: 25% max unavailable, 25% max surge
+```
+If you want to set a Rolling Update strategy that is different from the default, you can include the strategy settings in your Deployment manifest:
+
+
+```
+strategy:
+  type: RollingUpdate
+  rollingUpdate:
+    maxSurge: 1
+    maxUnavailable: 1
+```
+For details on the settings, see RollingUpdateDeployment in the [Kubernetes API docs](https://kubernetes.io/docs/concepts/overview/kubernetes-api/).
+
+### Step 2: Create Workflow
+
+1. In your Application containing a completed Service and Environment for the Rollout Deployment, click **Workflows**.
+2. Click **Add Workflow**. The **Workflow** dialog appears.
+3. In **Name**, enter a name for your Workflow.
+4. In **Workflow Type**, select **Rolling Deployment**.
+5. In **Environment**, select the Environment you created for your Kubernetes deployment.
+6. In **Service**, select the Service containing the manifest files you want to use for your deployment.
+7. In **Infrastructure Definition**, select the Infrastructure Definition where you want to deploy.
+8. When you are finished, the **Workflow** dialog will look like this example:
+
+   ![](./static/create-a-kubernetes-rolling-deployment-105.png)
+
+9. Click **SUBMIT**. The new Workflow appears. 
+
+[![](./static/create-a-kubernetes-rolling-deployment-106.png)](./static/create-a-kubernetes-rolling-deployment-106.png)
+
+### Step 3: Rollout Deployment Step
+
+The Workflow generates the **Rollout Deployment** step automatically. There's nothing to update. You can deploy the Workflow.
+
+The Rollout Deployment step includes the following options.
+
+#### Manifest Options
+
+##### Export Manifest
+
+If you enable this option, Harness does the following at runtime:
+
+* Downloads manifests (if remote).
+* Renders manifests in logs.
+* Performs a dry run unless the **Skip Dry Run** option is enabled.
+* Exports the deployment manifests to the variable `${k8sResources.manifests}`.
+* **Does not deploy the manifests.** To deploy the manifests, you must add another Kubernetes step of the same type (Canary, Rolling, Apply, Stage Deployment) and enable the **Inherit Manifest** option to deploy a copy of the exported manifests.
+
+If **Export Manifest** is enabled, the manifests are not deployed. You can use the **Inherit Manifest** option in a subsequent Kubernetes step to deploy a copy of the exported manifests.
+
+The exported manifests can be written to storage on the Delegate where the step is run. For example, you can add a [Shell Script](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) step to echo and write the manifest to a file:
+
+
+```
+echo "${k8sResources.manifests}" > /opt/harness-delegate/test/canaryPlan
+```
+If you use `${k8sResources.manifests}` in a script, ensure that your script expects multiline output. You can use the `cat` command to concatenate the lines. If you have a third-party tool that checks compliance, it can use the exported manifests.
+
+To deploy the manifests, a copy of the exported manifests can be inherited by the next Kubernetes step (Canary, Rolling, Apply, Stage Deployment) using the **Inherit Manifest** option. 
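+
+As an illustration, a follow-up [Shell Script](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) step could run a compliance check against the exported manifests. This is a sketch only: the file path and the `grep`-based policy are assumptions for illustration, not Harness requirements.
+
+
+```
+# Harness resolves ${k8sResources.manifests} before the script runs;
+# the quoted heredoc keeps the multiline YAML intact for the shell.
+cat > /tmp/exported-manifests.yaml <<'EOF'
+${k8sResources.manifests}
+EOF
+
+# Illustrative policy: fail the step if any container runs privileged.
+if grep -q "privileged: true" /tmp/exported-manifests.yaml; then
+  echo "Compliance check failed: privileged container found"
+  exit 1
+fi
+echo "Compliance check passed"
+```
+A non-zero exit code fails the Shell Script step, which stops the Workflow before any step deploys the exported manifests.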
+
+If **Export Manifest** is enabled in multiple Kubernetes steps of the same type in the same Workflow Phase, the last step overrides the exported manifests. This is important because the next Kubernetes step to inherit a copy of the exported manifests will only use the exported manifests from the last Kubernetes step with **Export Manifest** enabled.
+
+##### Inherit Manifest
+
+Enable this option to inherit and deploy a copy of the manifests exported from the previous Kubernetes step (Canary, Rolling, Apply, Stage Deployment) using the **Export Manifest** option.
+
+The **Inherit Manifest** option will only inherit the exported manifest from the last Kubernetes step of the same type and in the same Workflow Phase.
+
+For example, if you enable the **Inherit Manifest** option in a **Canary Deployment** step, then it will only inherit a copy of the manifests exported from the last **Canary Deployment** step with the **Export Manifest** option enabled in the same Workflow Phase.
+
+#### Skip Dry Run
+
+By default, Harness uses the `--dry-run` flag on the `kubectl apply` command during the **Initialize** step of this command, which prints the object that would be sent to the cluster without really sending it. If the **Skip Dry Run** option is selected, Harness will not use the `--dry-run` flag.
+
+Let's look at what the **Rollout Deployment** step does in the deployment logs.
+
+#### Delegate Selector
+
+If your Workflow Infrastructure Definition's Cloud Provider uses a Delegate Selector (supported in Kubernetes Cluster and AWS Cloud Providers), then the Workflow uses the selected Delegate for all of its steps.
+
+In these cases, you shouldn't add a Delegate Selector to any step in the Workflow. The Workflow is already using a Selector via its Infrastructure Definition's Cloud Provider. 
+
+If your Workflow Infrastructure Definition's Cloud Provider isn't using a Delegate Selector, and you want this Workflow step to use a specific Delegate, do the following:
+
+In **Delegate Selector**, select the Selector for the Delegate(s) you want to use. You add Selectors to Delegates to make sure that they're used to execute the command. For more information, see [Select Delegates for Specific Tasks with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors).
+
+Harness will use Delegates matching the Selectors you add.
+
+If you use one Selector, Harness will use any Delegate that has that Selector.
+
+If you select two Selectors, a Delegate must have both Selectors to be selected. That Delegate might also have other Selectors, but it must have the two you selected.
+
+You can use expressions for Harness built-in variables or Account Default variables in **Delegate Selectors**. When the variable expression is resolved at deployment runtime, it must match an existing Delegate Selector.
+
+For example, if you have a Delegate Selector **prod** and the Workflow is using an Environment also named **prod**, the Delegate Selector can be `${env.name}`. This is very useful when you match Delegate Selectors to Application component names such as Environments, Services, etc. It's also a way to template the Delegate Selector setting.
+
+#### Apply Section in Deployment
+
+The Apply section deploys the manifests from the Service **Manifests** section as one file.
+
+
+```
+kubectl --kubeconfig=config apply --filename=manifests.yaml --record
+
+configmap/harness-example-config-3 configured
+deployment.apps/harness-example-deployment created
+
+Done.
+```
+#### Wait for Steady State Section in Deployment
+
+The Wait for Steady State section shows the containers and pods rolled out. 
+ + +``` +kubectl --kubeconfig=config get events --output=custom-columns=KIND:involvedObject.kind,NAME:.involvedObject.name,MESSAGE:.message,REASON:.reason --watch-only + +kubectl --kubeconfig=config rollout status Deployment/harness-example-deployment --watch=true + + +Status : Waiting for deployment "harness-example-deployment" rollout to finish: 0 of 2 updated replicas are available... +Event : Pod harness-example-deployment-5674658766-6b2fw Successfully pulled image "registry.hub.docker.com/library/nginx:stable-perl" Pulled +Event : Pod harness-example-deployment-5674658766-p9lpz Successfully pulled image "registry.hub.docker.com/library/nginx:stable-perl" Pulled +Event : Pod harness-example-deployment-5674658766-6b2fw Created container Created +Event : Pod harness-example-deployment-5674658766-p9lpz Created container Created +Event : Pod harness-example-deployment-5674658766-6b2fw Started container Started +Event : Pod harness-example-deployment-5674658766-p9lpz Started container Started + +Status : Waiting for deployment "harness-example-deployment" rollout to finish: 1 of 2 updated replicas are available... + +Status : deployment "harness-example-deployment" successfully rolled out + +Done. +``` +#### Wrap Up Section in Deployment + +The Wrap Up section shows the Rolling Update strategy used. + + +``` +... +Name: harness-example-deployment +Namespace: default +CreationTimestamp: Sun, 17 Feb 2019 22:03:53 +0000 +Labels: +Annotations: deployment.kubernetes.io/revision: 1 + kubectl.kubernetes.io/last-applied-configuration: + {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --kubeconfig=config --f... 
+ kubernetes.io/change-cause: kubectl apply --kubeconfig=config --filename=manifests.yaml --record=true +Selector: app=harness-example +Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable +StrategyType: RollingUpdate +MinReadySeconds: 0 +RollingUpdateStrategy: 25% max unavailable, 25% max surge +... +NewReplicaSet: harness-example-deployment-5674658766 (2/2 replicas created) +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal ScalingReplicaSet 8s deployment-controller Scaled up replica set harness-example-deployment-5674658766 to 2 + +Done. +``` +### Example: Rolling Update Deployment + +Now that the setup is complete, you can click **Deploy** in the Workflow to deploy the artifact to your cluster. + +[![](./static/create-a-kubernetes-rolling-deployment-108.png)](./static/create-a-kubernetes-rolling-deployment-108.png) + +Next, select the artifact build version and click **SUBMIT**. + +[![](./static/create-a-kubernetes-rolling-deployment-110.png)](./static/create-a-kubernetes-rolling-deployment-110.png) + +The Workflow is deployed. + +To see the completed deployment, log into your cluster and run `kubectl get all`. The output lists the new Deployment: + + +``` +NAME READY STATUS RESTARTS AGE +pod/harness-example-deployment-5674658766-6b2fw 1/1 Running 0 34m +pod/harness-example-deployment-5674658766-p9lpz 1/1 Running 0 34m + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/kubernetes ClusterIP 10.83.240.1 443/TCP 34m + +NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE +deployment.apps/harness-example-deployment 2 2 2 2 34m + +NAME DESIRED CURRENT READY AGE +replicaset.apps/harness-example-deployment-5674658766 2 2 2 34m +``` +### Kubernetes Rollback + +See [Kubernetes Rollback](https://docs.harness.io/article/v41e8oo00e-kubernetes-rollback). + +You can add a **Rollback Deployment** command to the **Rollback Steps** in your Workflow to roll back the workloads deployed by the **Rollout Deployment** step. 
+ +Simply add this command to the **Rollback Steps** in a Workflow where you want to initiate a rollback. Note that this command applies to the deployments of the Rollout Deployment command, and not the [Apply Step](deploy-manifests-separately-using-apply-step.md) command. + +### Next Steps + +* [Create a Kubernetes Blue/Green Deployment](create-a-kubernetes-blue-green-deployment.md) + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/create-kubernetes-crd-deployments.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/create-kubernetes-crd-deployments.md new file mode 100644 index 00000000000..2547c67d183 --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/create-kubernetes-crd-deployments.md @@ -0,0 +1,447 @@ +--- +title: Deploy Kubernetes Custom Resources using CRDs +description: Harness supports all Kubernetes default resources, such as Pods, Deployments, StatefulSets, DaemonSets, etc. For these resources, Harness supports steady state checking, versioning, displays instance… +sidebar_position: 270 +helpdocs_topic_id: pmmfqqo1uh +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness supports all Kubernetes default resources, such as Pods, Deployments, StatefulSets, DaemonSets, etc. For these resources, Harness supports steady state checking, versioning, displays instances on Harness dashboards, performs rollback, and other enterprise features. + +In addition, Harness provides many of the same features for Kubernetes [custom resource](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) deployments using Custom Resource Definitions (CRDs). CRDs are resources you create that extend the Kubernetes API to support your application. + +Harness supports CRDs for both Kubernetes and OpenShift. There is no difference in their custom resource implementation. 
+ + +### Before You Begin + +* [Kubernetes Quickstart](https://docs.harness.io/article/7in9z2boh6-kubernetes-quickstart) +* [Create a Kubernetes Rolling Deployment](create-a-kubernetes-rolling-deployment.md) +* [Kubernetes Versioning and Annotations](https://docs.harness.io/article/ttn8acijrz-versioning-and-annotations) + +### Limitations + +#### Rollbacks + +Harness only performs rollback for the default Kubernetes objects. For failures with CRDs, Harness redeploys the previous successful version. + +Harness redeploys using the last successful release that matches the `release-name` label value (`harness.io/release-name: `), described below. + +#### Rolling Deployment Only + +Blue/Green and Canary deployments are not supported at this time. See [Create a Kubernetes Rolling Deployment](create-a-kubernetes-rolling-deployment.md). + +#### Versioning + +ConfigMap and Secrets are not versioned. + +### Review: Harness Custom Resource Requirements + +To use a custom resource in Harness, you need to add the following annotations to its manifest: + +#### managed-workload + +`harness.io/managed-workload`: when you set this annotation to `true`, it informs Harness that this is a custom resource. + +Here is an example: + + +``` +... + "harness.io/managed-workload": "true" +... +``` +#### steadyStateCondition + +`harness.io/steadyStateCondition`: since the resource is not a native Kubernetes resource, Harness needs a way to check its steady state. + +Here is an example: + + +``` +apiVersion: samplecontroller.k8s.io/v1alpha1 +kind: Foo +metadata: + name: example-foo-demo + annotations: + "harness.io/managed-workload": "true" + "harness.io/steadyStateCondition": ${json.select("$..status.availableReplicas", response) == json.select("$..spec.replicas", response) && json.select("$..spec.deploymentName", response) == "example-foo-demo"} +spec: + deploymentName: example-foo-demo + replicas: 2 + template: + metadata: + labels: + "harness.io/release-name": {{release}} +... 
+```
+See Harness support for [JSON and XML Functors](https://docs.harness.io/article/wfvecw3yod-json-and-xml-functors).
+
+If the `steadyStateCondition` fails, Harness logs the following error message:
+
+
+```
+Status check for resources in namespace [[namespace]] failed.
+```
+#### release-name
+
+`harness.io/release-name: ` in labels: this is required for Harness to track any pods for the custom resource.
+
+This label is used for redeploys (which Harness performs in place of rollbacks for CRDs).
+
+In the event of deployment failure, Harness will redeploy the last successful release that matches the `release-name` label value (`harness.io/release-name: `).
+
+The `` must match the **Release Name** in the Harness Infrastructure Definition. See [Define Your Kubernetes Target Infrastructure](define-your-kubernetes-target-infrastructure.md).
+
+You declare the release name in the values.yaml, for example `release: release-${infra.kubernetes.infraId}`, and then reference it in the manifest as `{{.Values.release}}`.
+
+Here is an example:
+
+
+```
+...
+  labels:
+    "harness.io/release-name": "{{.Values.release}}"
+...
+```
+#### Controller Must Add Release Name to Pods
+
+The CRD controller must add the `harness.io/release-name` label and value from the custom resource manifest to all the pods created for the custom resource. This process sets the label on the resource so Harness can track its releases.
+
+This must be done programmatically by the controller. 
+ +Here is an example taken from the [Kubernetes sample controller on Github](https://github.com/kubernetes/sample-controller/blob/master/controller.go#L391): + + +``` +func newDeployment(foo *samplev1alpha1.Foo) *appsv1.Deployment { + + labelsFromSpec := foo.Spec.Template.Metadata.Labels + labels := map[string]string{ + "app": "nginx", + "controller": foo.Name, + } + for k, v := range labelsFromSpec { + labels[k] = v + } + + klog.Info("Handle new deployment with labels: ", labelsFromSpec) + return &appsv1.Deployment{ + ObjectMeta: metav1.ObjectMeta{ + Name: foo.Spec.DeploymentName, + Namespace: foo.Namespace, + OwnerReferences: []metav1.OwnerReference{ + *metav1.NewControllerRef(foo, samplev1alpha1.SchemeGroupVersion.WithKind("Foo")), + }, + }, + Spec: appsv1.DeploymentSpec{ + Replicas: foo.Spec.Replicas, + Selector: &metav1.LabelSelector{ + MatchLabels: labels, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: labels, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + { + Name: "nginx", + Image: "nginx:latest", + }, + }, + }, + }, + }, + } +} +``` +#### Example Manifest + +Here is an example manifest: + + +``` +apiVersion: samplecontroller.k8s.io/v1alpha1 +kind: Foo +metadata: + name: example-foo-demo + annotations: + "harness.io/managed-workload": "true" + "harness.io/steadyStateCondition": ${json.select("$..status.availableReplicas", response) == json.select("$..spec.replicas", response) && json.select("$..spec.deploymentName", response) == "example-foo-demo"} +spec: + deploymentName: example-foo-demo + replicas: 2 + template: + metadata: + labels: + "harness.io/release-name": {{release}} +``` +As you can see in this example, steady state status is checked by verifying the replicas and name of the deployed custom resource. + +### Step 1: Prepare Target Cluster + +In most cases, the target deployment cluster will have the CustomResourceDefinition object already created. 
For example:
+
+
+```
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: foos.samplecontroller.k8s.io
+spec:
+  group: samplecontroller.k8s.io
+  version: v1alpha1
+  names:
+    kind: Foo
+    plural: foos
+  scope: Namespaced
+```
+After the CustomResourceDefinition object has been created in the cluster, you can create and deploy custom objects using Harness.
+
+The `kind` field of the custom object comes from the spec of the CustomResourceDefinition object you created in your cluster.
+
+For example:
+
+
+```
+apiVersion: samplecontroller.k8s.io/v1alpha1
+kind: Foo
+metadata:
+  name: example-foo-demo
+  labels:
+    "harness.io/release-name": "{{.Values.release}}"
+  annotations:
+    "harness.io/managed-workload": "true"
+    "harness.io/steadyStateCondition": ${json.select("$..status.availableReplicas", response) == json.select("$..spec.replicas", response) && json.select("$..spec.deploymentName", response) == "example-foo-demo"}
+spec:
+  deploymentName: example-foo-demo
+  replicas: 2
+```
+Ensure your target cluster has the CRD for the custom resource object you will create in your deployment.
+
+For an example of a simple CRD setup, see [sample-controller](https://github.com/kubernetes/sample-controller) and [Extend the Kubernetes API with CustomResourceDefinitions](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) from Kubernetes.
+
+### Step 2: Define Custom Resource in Harness
+
+You add the manifest for your custom object in a Harness Service, along with the artifact you will deploy. See [Kubernetes Services](https://docs.harness.io/article/t6zrgqq0ny-kubernetes-services).
+
+1. In your Harness Application, click **Services**, and then **Add Service**.
+2. Name your Service.
+3. In **Deployment Type**, select **Kubernetes**.
+4. Click **Submit**. The new Kubernetes Service appears.
+5. 
Add an artifact, as described in [Add a Docker Artifact Source](https://docs.harness.io/article/gxv9gj6khz-add-a-docker-image-service).
+6. Next, you will add the manifest for the custom object in **Manifests**.
+7. You can add your manifest inline, remotely, or by uploading. See [Define Kubernetes Manifests](define-kubernetes-manifests.md). You can also use [Go templating](use-go-templating-in-kubernetes-manifests.md).
+8. Ensure your manifest has the required annotations and label, as described in [Required Custom Resource Annotations and Labels](#review_required_custom_resource_annotations_and_labels).
+
+When you are done, your Service will look something like this:
+
+![](./static/create-kubernetes-crd-deployments-215.png)
+
+### Step 3: Define Target Cluster
+
+In the same Harness Application, create your Kubernetes target cluster as described in [Define Your Kubernetes Target Infrastructure](define-your-kubernetes-target-infrastructure.md).
+
+Ensure that the **Release Name** matches the name in the manifest's label, as described in [Review: Required Custom Resource Annotations and Labels](#review_required_custom_resource_annotations_and_labels):
+
+![](./static/create-kubernetes-crd-deployments-216.png)
+
+### Step 4: Create Workflow for Custom Resource Deployment
+
+Only the Kubernetes Rolling deployment method is supported for CRDs. See [Create a Kubernetes Rolling Deployment](create-a-kubernetes-rolling-deployment.md).
+
+1. In your Harness Application, click **Workflows**, and then click **Add Workflow**.
+2. Name your Workflow.
+3. In **Workflow Deployment**, select **Rolling**.
+4. In **Environment**, select the Environment containing the Infrastructure Definition you set up for the target cluster where your CRD is defined.
+5. In **Service**, select the Service with your custom object manifest.
+6. In **Infrastructure Definition**, select the Infrastructure Definition you set up for the target cluster where your CRD is defined.
+7. Click **Submit**. 
The Rolling deployment Workflow is created. + +There is nothing to configure in this Workflow unless you want to add additional steps. The default **Rollout Deployment** step will deploy your custom object. + +You might want to run a [Shell Script](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) step to display additional Kubernetes information. See [Kubernetes Workflow Variables and Expressions](workflow-variables-expressions.md) for expressions you can use. + +### Step 5: Deploy Custom Resource + +Let's take a look at the logs from a CRD deployment. + +#### Initialize + +In the Initialize stage you can see that the release name has been added and a dry run was performed. + + +``` +... +--- +apiVersion: samplecontroller.k8s.io/v1alpha1 +kind: Foo +metadata: + name: example-foo-demo + labels: + "harness.io/release-name": "release-66259216-da29-35c4-ad4b-1053ffdaaf55" + annotations: + "harness.io/managed-workload": "true" + "harness.io/steadyStateCondition": ${json.select("$..status.availableReplicas", response) == json.select("$..spec.replicas", response) && json.select("$..spec.deploymentName", response) == "example-foo-demo"} + +spec: + deploymentName: example-foo-demo + replicas: 2 + + +Validating manifests with Dry Run + +kubectl --kubeconfig=config apply --filename=manifests-dry-run.yaml --dry-run + +namespace/default configured (dry run) +secret/harness-example configured (dry run) +configmap/harness-example created (dry run) +service/harness-example-svc configured (dry run) +deployment.apps/harness-example-deployment configured (dry run) +foo.samplecontroller.k8s.io/example-foo-demo configured (dry run) + +``` +#### Prepare + +In the Prepare stage the manifests are processed and the workloads are identified: + + +``` +Manifests processed. 
Found following resources: + +Kind Name Versioned +Namespace default false +Secret harness-example true +ConfigMap harness-example true +Service harness-example-svc false +Deployment harness-example-deployment false +Foo example-foo-demo false + +Current release number is: 8 + +Previous Successful Release is 7 +... +Found following Managed Workloads: + +Kind Name Versioned +Deployment harness-example-deployment false +Foo example-foo-demo false + +Versioning resources. + +Done +``` +Notice that the custom object is identified by its CRD in **Kind**. + +#### Apply + +The Apply stage runs a [kubectl apply](https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#kubectl-apply) using the manifests: + + +``` +kubectl --kubeconfig=config apply --filename=manifests.yaml --record + +namespace/default unchanged +secret/harness-example-8 configured +configmap/harness-example-8 created +service/harness-example-svc unchanged +deployment.apps/harness-example-deployment configured +foo.samplecontroller.k8s.io/example-foo-demo configured + +Done. +``` +As this is the first deployment, the new object is identified as **configured**. + +#### Wait for Steady State + +This stage performs a [kubectl get](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) to display status about the custom resource. + + +``` +... 
+Status : example-foo-demo "kubernetes.io/change-cause": "kubectl apply --kubeconfig=config --filename=manifests.yaml --record=true" + +Status : example-foo-demo }, + +Status : example-foo-demo "creationTimestamp": "2020-07-21T09:55:42Z", + +Status : example-foo-demo "generation": 11, + +Status : example-foo-demo "name": "example-foo-demo", + +Status : example-foo-demo "namespace": "default", + +Status : example-foo-demo "resourceVersion": "119096798", + +Status : example-foo-demo "selfLink": "/apis/samplecontroller.k8s.io/v1alpha1/namespaces/default/foos/example-foo-demo", + +Status : example-foo-demo "uid": "f2e2847e-bfa8-4242-9354-db8c83c57df1" + +Status : example-foo-demo }, + +Status : example-foo-demo "spec": { + +Status : example-foo-demo "deploymentName": "example-foo-demo", + +Status : example-foo-demo "replicas": 2 + +Status : example-foo-demo }, + +Status : example-foo-demo "status": { + +Status : example-foo-demo "availableReplicas": 2 + +Status : example-foo-demo } + +Status : example-foo-demo } + +Done. +``` +#### Wrap Up + +Finally, the Wrap Up stage shows the deployed custom object: + + +``` +... +Name: example-foo-demo +Namespace: default +Labels: harness.io/release-name: release-66259216-da29-35c4-ad4b-1053ffdaaf55 +Annotations: harness.io/managed-workload: true + harness.io/steadyStateCondition: + ${json.select("$..status.availableReplicas", response) == json.select("$..spec.replicas", response) && json.select("$..spec.deploymentName... + kubectl.kubernetes.io/last-applied-configuration: + {"apiVersion":"samplecontroller.k8s.io/v1alpha1","kind":"Foo","metadata":{"annotations":{"harness.io/managed-workload":"true","harness.io/... 
+ kubernetes.io/change-cause: kubectl apply --kubeconfig=config --filename=manifests.yaml --record=true +API Version: samplecontroller.k8s.io/v1alpha1 +Kind: Foo +Metadata: + Creation Timestamp: 2020-07-21T09:55:42Z + Generation: 11 + Resource Version: 119096798 + Self Link: /apis/samplecontroller.k8s.io/v1alpha1/namespaces/default/foos/example-foo-demo + UID: f2e2847e-bfa8-4242-9354-db8c83c57df1 +Spec: + Deployment Name: example-foo-demo + Replicas: 2 +Status: + Available Replicas: 2 +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Synced 15s (x135 over 98m) sample-controller Foo synced successfully + +Done. +``` +### See Also + +* [Delete Kubernetes Resources](delete-kubernetes-resources.md) +* [Deploy Manifests Separately using Apply Step](deploy-manifests-separately-using-apply-step.md) +* [Scale Kubernetes Pods](scale-kubernetes-pods.md) + +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/create-kubernetes-namespaces-based-on-infra-mapping.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/create-kubernetes-namespaces-based-on-infra-mapping.md new file mode 100644 index 00000000000..5aabcd69414 --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/create-kubernetes-namespaces-based-on-infra-mapping.md @@ -0,0 +1,62 @@ +--- +title: Select Kubernetes Namespaces based on InfraMapping +description: Set up a single Harness Kubernetes Service to be used with multiple namespaces. +sidebar_position: 200 +helpdocs_topic_id: 5xm4z4q3d8 +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Hardcoding the namespaces in Harness Service manifests forces you to have separate Services for each namespace. 
+ +Instead of setting namespaces in the manifests in a Harness Service, you can use a Harness variable expression to reference Kubernetes namespaces in Harness Infrastructure Definitions. When a Workflow is run, the namespace in the Infrastructure Definition is applied to all manifests in the Service. + +This allows you to set up a *single* Harness Kubernetes Service to be used with multiple namespaces. + +In this topic: + +* [Before You Begin](#before_you_begin) +* [Step 1: Add the Namespace Expression](#step_1_add_the_namespace_expression) +* [Step 2: Enter the Namespace in the Infrastructure Definition](#step_2_enter_the_namespace_in_the_infrastructure_definition) +* [Next Steps](#next_steps) + +### Before You Begin + +* [Define Kubernetes Manifests](define-kubernetes-manifests.md) +* [Create Kubernetes Namespaces based on InfraMapping](create-kubernetes-namespaces-based-on-infra-mapping.md) + +### Step 1: Add the Namespace Expression + +1. In your Harness Kubernetes Service, click **values.yaml**. +2. In the `namespace` key, use the variable expression `${infra.kubernetes.namespace}`: + + +``` +namespace: ${infra.kubernetes.namespace} +``` +In a manifest file for the Kubernetes Namespace object, use the namespace value like this: + + +``` +apiVersion: v1 +kind: Namespace +metadata: + name: {{.Values.namespace}} +``` +If you omit the `namespace` key and value from a manifest in your Service, Harness automatically uses the namespace you entered in the Harness Environment [Infrastructure Definition](https://docs.harness.io/article/n39w05njjv-environment-configuration#add_an_infrastructure_definition) settings **Namespace** field.The Harness variable `${infra.kubernetes.namespace}` refers to the namespace entered in the Harness Environment Infrastructure Definition settings **Namespace** field. + +![](./static/create-kubernetes-namespaces-based-on-infra-mapping-27.png) + +### Step 2: Enter the Namespace in the Infrastructure Definition + +1. 
In each Infrastructure Definition **Namespace** setting, enter the namespace you want to use. + +When the Service using `${infra.kubernetes.namespace}` is deployed, Harness will replace `${infra.kubernetes.namespace}` with the value entered in the Infrastructure Definition **Namespace** setting, creating a Kubernetes Namespace object using the name. + +Next, Harness will deploy the other Kubernetes objects to that namespace. + +### Next Steps + +* [Create Kubernetes Namespaces with Workflow Variables](create-kubernetes-namespaces-with-workflow-variables.md) + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/create-kubernetes-namespaces-with-workflow-variables.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/create-kubernetes-namespaces-with-workflow-variables.md new file mode 100644 index 00000000000..31126ce7de0 --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/create-kubernetes-namespaces-with-workflow-variables.md @@ -0,0 +1,74 @@ +--- +title: Create Kubernetes Namespaces with Workflow Variables +description: Pass a Kubernetes namespace into a Workflow during deployment. +sidebar_position: 210 +helpdocs_topic_id: nhlzsni30x +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Namespaces for your Kubernetes deployments are typically set up in the Harness Service and Infrastructure Definition. In some cases, you might want to provide the namespace only at the time of deployment. + +You can pass in a Kubernetes namespace as part of a Workflow deployment by using a Workflow variable in the Infrastructure Definition **Namespace** setting. + +A value for the namespace Workflow variable can be provided manually or in response to an event using a Trigger. 
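+
+Wired together, the pieces look like the sketch below (the expressions come from this topic and from [Select Kubernetes Namespaces based on InfraMapping](create-kubernetes-namespaces-based-on-infra-mapping.md); the **Namespace** field itself is set in the Infrastructure Definition UI, not in a file):
+
+
+```
+# Infrastructure Definition > Namespace field (set in the UI):
+#   ${workflow.variables.namespace}
+
+# Service values.yaml, which picks up whatever namespace the
+# Infrastructure Definition resolves to at deployment time:
+namespace: ${infra.kubernetes.namespace}
+```
+At deployment, the Workflow variable supplies the namespace to the Infrastructure Definition, which in turn supplies it to every manifest in the Service.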
+
+
+### Before You Begin
+
+* [Define Kubernetes Manifests](define-kubernetes-manifests.md)
+* [Define Your Kubernetes Target Infrastructure](define-your-kubernetes-target-infrastructure.md)
+* [Create Kubernetes Namespaces based on InfraMapping](create-kubernetes-namespaces-based-on-infra-mapping.md)
+
+### Step 1: Create the Workflow Variable
+
+1. Create a Workflow variable. For steps on creating a Workflow variable, see [Workflows](https://docs.harness.io/article/m220i1tnia-workflow-configuration).
+
+For example, we'll name the variable `namespace` and give it three allowed values: `qa,stage,prod`.
+
+![](./static/create-kubernetes-namespaces-with-workflow-variables-207.png)
+
+Each time the Workflow is deployed, you manually enter a value for the Workflow namespace variable or use a Trigger to pass in a value, and the `${workflow.variables.namespace}` variable is replaced with a different namespace.
+
+Multiple deployments can run simultaneously, because a different namespace is used each time. You can even update the variable as part of the Pipeline Stage that executes the Workflow.
+
+### Step 2: Use the Variable in the Infrastructure Definition
+
+1. In the Infrastructure Definition used by the Workflow, reference the variable in the **Namespace** setting.
+
+To reference the variable we created, we use the expression `${workflow.variables.namespace}`:
+
+![](./static/create-kubernetes-namespaces-with-workflow-variables-208.png)
+
+### Option 1: Enter a Namespace Manually
+
+1. In your Workflow, click **Deploy**.
+2. For **namespace**, select one of the variable's allowed values.
+
+   ![](./static/create-kubernetes-namespaces-with-workflow-variables-209.png)
+
+3. Click **Submit**. The Workflow deploys to the namespace you selected.
+
+### Option 2: Enter a Namespace with a Trigger
+
+1. Create a Trigger for the Workflow.
+2. In the Actions section, select the Workflow with the namespace variable. The namespace variable appears. 
+
+   ![](./static/create-kubernetes-namespaces-with-workflow-variables-210.png)
+
+3. For **namespace**, select one of the variable's allowed values.
+If your Workflow variable is not limited to allowed values, you can enter custom values. For more information, see [Passing Variables into Workflows and Pipelines from Triggers](https://docs.harness.io/article/revc37vl0f-passing-variable-into-workflows).
+4. Click **Submit**. When the Trigger condition is met, the Workflow deploys to the namespace you selected.
+
+### Example: Trigger Parallel Workflow Executions
+
+Typically, when a Workflow is triggered multiple times in succession, deploying on the same Infrastructure Definition, the deployment executions are queued automatically. Queuing prevents deployment collision.
+
+Using the steps in this topic, you can have parallel executions of the same Workflow on the same Infrastructure Definition by using Workflow variables to identify separate namespaces.
+
+### Next Steps
+
+* [Harness GitOps](https://docs.harness.io/article/khbt0yhctx-harness-git-ops)
+* [Passing Variables into Workflows and Pipelines from Triggers](https://docs.harness.io/article/revc37vl0f-passing-variable-into-workflows)
+
diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/define-kubernetes-manifests.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/define-kubernetes-manifests.md
new file mode 100644
index 00000000000..baa57b49401
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/define-kubernetes-manifests.md
@@ -0,0 +1,146 @@
+---
+title: Define or Add Kubernetes Manifests
+description: A quick overview of some options and steps when using Kubernetes manifests.
+sidebar_position: 50
+helpdocs_topic_id: 2j2vi5oxrq
+helpdocs_category_id: n03qfofd5w
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). 
Switch to [NextGen](https://docs.harness.io/category/qfj6m1k2c4).
+
+Harness provides a simple and flexible way to use Kubernetes manifests. You can add new files or upload existing manifests. You can work on your manifests inline, using the Go templating and Expression Builder features of Harness, or simply link to remote manifests in a Git repo.
+
+This topic provides a quick overview of some options and steps when using Kubernetes manifests, with links to more details.
+
+### Before You Begin
+
+* [Kubernetes Deployments Overview](../concepts-cd/deployment-types/kubernetes-overview.md)
+* [Add Container Images for Kubernetes Deployments](add-container-images-for-kubernetes-deployments.md)
+
+### Review: What Workloads Can I Deploy?
+
+See [What Can I Deploy in Kubernetes?](https://docs.harness.io/article/6ujb3c70fh)
+
+### Limitations
+
+A values.yaml file can use [flat or nested values](https://helm.sh/docs/chart_best_practices/values/#flat-or-nested-values). Harness supports nested values only.
+
+### Step 1: Create the Harness Kubernetes Service
+
+1. In Harness, click **Setup**, and then click **Add Application**.
+2. Enter a name for the Application and click **Submit**.
+3. Click **Services**, and then click **Add Service**. The **Add Service** settings appear.
+
+   ![](./static/define-kubernetes-manifests-180.png)
+
+4. In **Name**, enter a name for the Service.
+5. In **Deployment Type**, select **Kubernetes**, and then ensure **Enable Kubernetes V2** is selected.
+6. Click **Submit**. The new Harness Kubernetes Service is created.
+
+### Option: Edit Inline Manifest Files
+
+When you create your Harness Kubernetes Service, several default files are added.
+
+For example, the **Manifests** section has the following default files:
+
+* **values.yaml** - This file contains the data for templated files in **Manifests**, using the [Go text template package](https://godoc.org/text/template). This is described in greater detail below. 
+
+The only mandatory file and folder requirement in **Manifests** is that **values.yaml** is located at the directory root. The values.yaml file is required if you want to use Go templating. It must be named **values.yaml** and it must be in the directory root.
+
+* **deployment.yaml** - This manifest contains three API object descriptions, ConfigMap, Secret, and Deployment. These are standard descriptions that use variables in the values.yaml file.
+
+Manifest files added in **Manifests** are freeform. You can add your API object descriptions in any order and Harness will deploy them in the correct order at runtime.
+
+1. Add or edit the default files with your own Kubernetes objects.
+
+### Option: Add or Upload Local Manifest Files
+
+You can add manifest files in the following ways:
+
+* Manually add a file using the Manifests Add File dialog.
+* Upload local files.
+
+See [Upload Kubernetes Resource Files](upload-kubernetes-resource-files.md).
+
+### Step 2: Use Go Templating and Harness Variables
+
+You can use [Go templating](https://godoc.org/text/template) and Harness built-in variables in combination in your **Manifests** files.
+
+See [Use Go Templating in Kubernetes Manifests](use-go-templating-in-kubernetes-manifests.md).
+
+The inline values.yaml file used in a Harness Service does not support Helm templating, only Go templating. Helm templating is fully supported in the remote Helm charts you add to your Harness Service.
+
+Harness [variable expressions](https://docs.harness.io/article/9dvxcegm90-variables) may be added to values.yaml, not the manifests themselves. This provides more flexibility.
+
+### Step 3: Expression Builder
+
+When you edit manifests in the Harness Service, you can enter expressions by entering `{{.` and Harness will fetch the values available in the values.yaml file.
+
+![](./static/define-kubernetes-manifests-181.png)
+
+This expression builder helps to ensure that you do not accidentally enter an incorrect value in your manifests. 
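+To illustrate how the pieces fit together, here is a minimal sketch (the names and values are illustrative, not Service defaults) of a values.yaml entry and the manifest expression that consumes it via Go templating:
+
+```
+# values.yaml (must be at the directory root)
+name: harness-example
+replicas: 2
+```
+
+```
+# deployment.yaml referencing the values above
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: {{.Values.name}}
+spec:
+  replicas: {{.Values.replicas}}
+```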
+
+### Option: Use Remote Manifests and Charts
+
+You can use your Git repo for the configuration files in **Manifests** and Harness uses them at runtime. You have the following options for remote files:
+
+* **Kubernetes Specs in YAML format** - These files are simply the YAML manifest files stored on a remote Git repo. See [Link Resource Files or Helm Charts in Git Repos](link-resource-files-or-helm-charts-in-git-repos.md).
+* **Helm Chart from Helm Repository** - Helm chart files stored in standard Helm syntax in YAML on a remote Helm repo. See [Use a Helm Repository with Kubernetes](use-a-helm-repository-with-kubernetes.md).
+* **Helm Chart Source Repository** - These are Helm chart files stored in standard Helm syntax in YAML on a remote Git repo or Helm repo. See [Link Resource Files or Helm Charts in Git Repos](link-resource-files-or-helm-charts-in-git-repos.md).
+* **Kustomization Configuration** — kustomization.yaml files stored on a remote Git repo. See [Use Kustomize for Kubernetes Deployments](use-kustomize-for-kubernetes-deployments.md).
+* **OpenShift Template** — OpenShift params file from a Git repo. See [Using OpenShift with Harness Kubernetes](using-open-shift-with-harness-kubernetes.md).
+* **Files in a packaged archive** — In some cases, your manifests, templates, etc. are in a packaged archive and you simply wish to extract them and use them at runtime. You can use a packaged archive with the **Custom Remote Manifests** setting in a Harness Kubernetes Service. See [Add Packaged Kubernetes Manifests](deploy-kubernetes-manifests-packaged-with-artifacts.md).
+
+Remote files can also use Go templating.
+
+### Option: Deploy Helm Charts
+
+In addition to the Helm options above, you can also simply deploy the Helm chart without adding your artifact to Harness.
+
+Instead, the Helm chart identifies the artifact. Harness installs the chart, gets the artifact from the repo, and then installs the artifact. We call this a *Helm chart deployment*. 
+
+See [Deploy Helm Charts](deploy-a-helm-chart-as-an-artifact.md).
+
+### Best Practice: Use Readiness Probes
+
+Kubernetes readiness probes indicate when a container is ready to start accepting traffic. If you want to start sending traffic to a pod only when a probe succeeds, specify a readiness probe. For example:
+
+
+```
+...
+  spec:
+  {{- if .Values.dockercfg}}
+  imagePullSecrets:
+  - name: {{.Values.name}}-dockercfg
+  {{- end}}
+  containers:
+  - name: {{.Values.name}}
+    image: {{.Values.image}}
+    {{- if or .Values.env.config .Values.env.secrets}}
+    readinessProbe:
+      httpGet:
+        path: /
+        port: 3000
+      timeoutSeconds: 2
+    envFrom:
+    {{- if .Values.env.config}}
+    - configMapRef:
+        name: {{.Values.name}}
+    {{- end}}
+    {{- if .Values.env.secrets}}
+    - secretRef:
+        name: {{.Values.name}}
+    {{- end}}
+    {{- end}}
+...
+```
+See [When should you use a readiness probe?](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#when-should-you-use-a-readiness-probe) from Kubernetes, and [Kubernetes best practices: Setting up health checks with readiness and liveness probes](https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes) from GCP.
+
+In this example, kubelet will not restart the pod when a probe request exceeds the two-second timeout. Instead, it cancels the request. Incoming connections are routed to other healthy pods. Once the pod is no longer overloaded, kubelet will start routing requests back to it (as the GET requests no longer have delayed responses).
+
+### Secrets in values.yaml
+
+If you use [Harness secrets](https://docs.harness.io/article/au38zpufhr-secret-management) in a values.yaml and the secret cannot be resolved by Harness during deployment, Harness will throw an exception.
+
+An exception is thrown regardless of whether the secret is commented out. 
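+For example, a values.yaml entry that references a Harness secret might look like the following. The secret name `example-docker-password` is hypothetical, and this assumes the `${secrets.getValue(...)}` expression style used by Harness Secrets Management:
+
+```
+# The referenced secret must exist and be resolvable at deployment time,
+# even if this line is commented out.
+dockercfg: ${secrets.getValue("example-docker-password")}
+```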
+
+### Next Steps
+
+* [Use Go Templating in Kubernetes Manifests](use-go-templating-in-kubernetes-manifests.md)
+
diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/define-your-kubernetes-target-infrastructure.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/define-your-kubernetes-target-infrastructure.md
new file mode 100644
index 00000000000..de8acd0ee7c
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/define-your-kubernetes-target-infrastructure.md
@@ -0,0 +1,201 @@
+---
+title: Define Your Kubernetes Target Infrastructure
+description: Specify the Kubernetes cluster you want to target for deployment.
+sidebar_position: 180
+helpdocs_topic_id: u3rp89v80h
+helpdocs_category_id: n03qfofd5w
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness Infrastructure Definitions specify the target deployment infrastructure for your Harness Services, and the specific infrastructure details for the deployment, like cluster settings.
+
+Specify the Kubernetes cluster you want to target for deployment as a Harness Infrastructure Definition.
+
+For Amazon Elastic Kubernetes Service (Amazon EKS) and OpenShift, use [Option 1: Specify a Vendor Agnostic Kubernetes Cluster](#option_1_specify_a_vendor_agnostic_kubernetes_cluster).
+
+
+### Before You Begin
+
+* [Add Container Images for Kubernetes Deployments](add-container-images-for-kubernetes-deployments.md)
+* [Define Kubernetes Manifests](define-kubernetes-manifests.md)
+
+### Step 1: Create an Environment
+
+Environments represent one or more of your deployment infrastructures, such as Dev, QA, Stage, Production, etc. Use Environments to organize your target cluster Infrastructure Definitions.
+
+1. In your Harness Application, click **Environments**. The **Environments** page appears.
+2. Click **Add Environment**. The **Environment** settings appear.
+3. 
In **Name**, enter a name that describes this group of target clusters, such as QA, Stage, Prod, etc.
+4. In **Environment Type**, select **Non-Production** or **Production**.
+5. Click **SUBMIT**. The new **Environment** page appears.
+6. Click **Add Infrastructure Definition**. The following sections provide information on setting up Infrastructure Definitions for different target clusters.
+
+### Option 1: Specify a Vendor Agnostic Kubernetes Cluster
+
+Currently, Harness connects to Amazon Elastic Kubernetes Service (Amazon EKS) and OpenShift using the Kubernetes Cluster Cloud Provider.
+
+If you are using a Harness Kubernetes Cluster Cloud Provider to connect to your target cluster, enter the following settings:
+
+#### Name
+
+Enter a name that describes the target cluster, such as **checkout**, **orders**, etc.
+
+#### Cloud Provider Type
+
+Select **Kubernetes Cluster**.
+
+#### Deployment Type
+
+Select **Kubernetes**.
+
+The **Helm** option applies only if you are deploying to a Harness native Helm Service. See [Helm Deployments Overview](../helm-deployment/helm-deployments-overview.md).
+
+#### Cloud Provider
+
+Select the **Kubernetes Cluster Cloud Provider** that connects to your target cluster. All Kubernetes Cluster Cloud Providers are prefaced with **Kubernetes Cluster:**.
+
+![](./static/define-your-kubernetes-target-infrastructure-170.png)
+
+#### Namespace
+
+Select the namespace of the target Kubernetes cluster. Typically, this is `default`.
+
+The namespace must already exist during deployment. Harness will not create a new namespace if you enter one here.
+
+You can use Harness variables to reference the name here in the Service Manifests files. See [Create Kubernetes Namespaces based on InfraMapping](create-kubernetes-namespaces-based-on-infra-mapping.md). 
+
+If you omit the `namespace` key and value from a manifest in your Service, Harness automatically uses the namespace you entered in the Harness Environment **Infrastructure Definition** settings **Namespace** field.
+
+#### Release Name
+
+Harness requires a Kubernetes release name for tracking.
+
+The release name must be unique across the cluster.
+
+The Harness-generated unique identifier `release-${infra.kubernetes.infraId}` can be used to ensure a unique release name.
+
+The `${infra.kubernetes.infraId}` expression is a unique identifier that identifies the combination of Service and Infrastructure Definition.
+
+In the Infrastructure Definition **Service Infrastructure Mapping** below, each listing has a unique identifier that can be referenced using `${infra.kubernetes.infraId}`:
+
+![](./static/define-your-kubernetes-target-infrastructure-171.png)
+
+Use `release-${infra.kubernetes.infraId}` for the **Release Name** instead of just `${infra.kubernetes.infraId}`. Kubernetes service and pod names follow RFC-1035 and must consist of lowercase alphanumeric characters or '-', start with an alphabetic character, and end with an alphanumeric character. Using `release-` as a prefix will prevent any issues.
+
+Here is an example of how `${infra.kubernetes.infraId}` is used and how the ID is output as the **Release Name**:
+
+![](./static/define-your-kubernetes-target-infrastructure-172.png)
+
+See [Built-in Variables List](https://docs.harness.io/article/aza65y4af6-built-in-variables-list) for more expressions.
+
+The release name is not incremented with each release. It identifies releases so that Harness knows which release is being replaced with a new version.
+
+See [Kubernetes Versioning and Annotations](https://docs.harness.io/article/ttn8acijrz-versioning-and-annotations). 
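+For example, with the recommended prefix, the **Release Name** setting and its runtime resolution look like this (the identifier shown is only an example; your infraId will differ):
+
+```
+# Release Name setting:
+release-${infra.kubernetes.infraId}
+# Resolved at runtime to something like:
+release-44e74aca-279f-3b4a-bb15-06d750393a8d
+```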
+
+##### Release Name is Reserved for Internal Harness ConfigMap
+
+The release name you enter in **Release Name** is reserved for the internal Harness ConfigMap used for tracking the deployment.
+
+Do not create a ConfigMap that uses the same name as the release name. Your ConfigMap will override the Harness internal ConfigMap and cause a NullPointerException.
+
+#### Scope to specific Services
+
+To limit this Infrastructure Definition to specific Harness Services, select the Services in **Scope to specific Services**.
+
+If you leave this setting empty, the Infrastructure Definition is available to all Workflows deploying Services with the Deployment Type of this Infrastructure Definition.
+
+#### Example
+
+![](./static/define-your-kubernetes-target-infrastructure-173.png)
+
+### Option 2: Specify a GCP or Azure Kubernetes Cluster
+
+If you are using a Harness Google Cloud Platform or Azure Cloud Provider to connect to your target cluster, enter the following settings:
+
+#### Name
+
+Enter a name that describes the target cluster, such as **checkout**, **orders**, etc.
+
+#### Cloud Provider Type
+
+Select **Google Cloud Platform** or **Microsoft Azure**.
+
+The only differences in settings are the **Azure Subscription** and **Resource Group** settings, described below.
+
+#### Deployment Type
+
+Select **Kubernetes**.
+
+The **Helm** option applies only if you are deploying to a Harness native Helm Service. See [Helm Deployments Overview](../helm-deployment/helm-deployments-overview.md).
+
+#### Use Already Provisioned Infrastructure
+
+To manually define the target cluster, select **Use Already Provisioned Infrastructure**.
+
+#### Map Dynamically Provisioned Infrastructure
+
+To use a Harness Infrastructure Provisioner, select **Map Dynamically Provisioned Infrastructure**. For details on provisioning your cluster, see [Provision Kubernetes Infrastructures](provision-kubernetes-infrastructures.md). 
+ +#### Cloud Provider + +Select the **Google Cloud Platform** or **Azure Cloud Provider** that connects to your target cluster. + +All Google Cloud Platform Cloud Providers are prefaced with **Google Cloud Platform:**. + +All Azure Cloud Providers are prefaced with **Azure:**. + +#### Azure: Subscription + +Select the Azure subscription to use. + +When you set up the [Azure Cloud Provider](https://docs.harness.io/article/whwnovprrb-cloud-providers) in Harness, you entered the **Client/Application ID** for the Azure App registration. To access resources in your Azure subscription, you must assign the Azure app using this Client ID to a role in that subscription. + +In this Azure Infrastructure Definition, you select the subscription. If the Azure App registration using this Client ID is not assigned a role in a subscription, no subscriptions will be available. + +#### Azure: Resource Group + +Select the resource group where your VM is located. + +#### Cluster Name + +Select the cluster you created for this deployment. + +![](./static/define-your-kubernetes-target-infrastructure-174.png) + +If the cluster name is taking a long time to load, check the connectivity of the host running the Harness Delegate. + +#### Namespace + +Enter the namespace of the target Kubernetes cluster. Typically, this is `default`. + +The namespace must already exist during deployment. Harness will not create a new namespace if you enter one here. + +You can use Harness variables to reference the name here in the Service Manifests files. See [Create Kubernetes Namespaces based on InfraMapping](create-kubernetes-namespaces-based-on-infra-mapping.md). + +If you omit the `namespace` key and value from a manifest in your Service, Harness automatically uses the namespace you entered in the Harness Environment  **Infrastructure Definition** settings **Namespace** field. + +#### Release Name + +Harness requires a Kubernetes release name for tracking. 
+
+The release name must be unique across the cluster.
+
+The Harness-generated unique identifier `release-${infra.kubernetes.infraId}` can be used to ensure a unique release name.
+
+Use `release-${infra.kubernetes.infraId}` for the **Release Name** instead of just `${infra.kubernetes.infraId}`. Kubernetes service and pod names follow DNS-1035 and must consist of lowercase alphanumeric characters or '-', start with an alphabetic character, and end with an alphanumeric character. Using `release-` as a prefix will prevent any issues.
+
+#### Scope to specific Services
+
+To limit this Infrastructure Definition to specific Harness Services, select the Services in **Scope to specific Services**.
+
+If you leave this setting empty, the Infrastructure Definition is available to all Workflows deploying Services with the Deployment Type of this Infrastructure Definition.
+
+#### Example
+
+Here is an example of a cluster targeted using a Google Cloud Platform Cloud Provider:
+
+![](./static/define-your-kubernetes-target-infrastructure-175.png)
+
+Here is an example of a cluster targeted using an Azure Cloud Provider:
+
+![](./static/define-your-kubernetes-target-infrastructure-176.png)
+
+### Next Steps
+
+* [Provision Kubernetes Infrastructures](provision-kubernetes-infrastructures.md)
+
diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/delete-kubernetes-resources.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/delete-kubernetes-resources.md
new file mode 100644
index 00000000000..b1de9ba7f88
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/delete-kubernetes-resources.md
@@ -0,0 +1,177 @@
+---
+title: Delete Kubernetes Resources
+description: Remove any deployed Kubernetes resources with the Delete step. 
+sidebar_position: 340 +helpdocs_topic_id: 78oginrhsh +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Remove any deployed Kubernetes resources with the Delete step. + +### Before You Begin + +* [Define Kubernetes Manifests](define-kubernetes-manifests.md) +* [Create a Kubernetes Canary Deployment](create-a-kubernetes-canary-deployment.md) +* [Create a Kubernetes Blue/Green Deployment](create-a-kubernetes-blue-green-deployment.md) +* [Create a Kubernetes Rolling Deployment](create-a-kubernetes-rolling-deployment.md) + +### Step 1: Add Delete Step + +In your Harness Workflow, click **Add Step**, and select **Delete**. The Delete settings appear: + +![](./static/delete-kubernetes-resources-113.png) + +You can add a Delete step anywhere in your Workflow, but typically it is added in the **Wrap Up** section. + +Wherever you add a Delete step, the resource you want to delete must already exist in the cluster. For example, if the resource is added in Phase 2 of a Canary Workflow, the Delete step should not be placed in Phase 1. + +### Step 2: Select Resources to Delete + +In **Resources**, enter the resources to be deleted. + +There are a few ways to specify the resource to be removed. + +#### Harness Built-in Variables + +Using the Harness built-in variable, `${k8s.canaryWorkload}`. + +At runtime, this will resolve to something like: + + +``` +Deployment/harness-example-deployment-canary +``` +The deployed Kubernetes object must reach steady state for Harness to be able to resolve the `${k8s.canaryWorkload}` expression. Consequently, if the Canary Deployment step fails to deploy the workload to steady state, Harness cannot set the `${k8s.canaryWorkload}` expression and it will not execute successfully. + +#### Resource Name + +Using a resource name in the format `[namespace]/Kind/Name`, with `namespace` being optional. 
+
+You must add a `Kind` before the resource name, like `Deployment` in this example:
+
+`Deployment/harness-example-deployment-canary`
+
+#### Multiple Resources
+
+Using a comma-separated list to delete multiple resources. For example:
+
+`Deployment/harness-example-deployment-canary,ConfigMap/harness-example-config`
+
+#### All Resources
+
+Enter an asterisk (\*) in **Resources**.
+
+You cannot use the asterisk as a wildcard to match arbitrary resources. It is simply used to indicate all resources.
+
+![](./static/delete-kubernetes-resources-114.png)
+
+Using an asterisk (\*) deletes all of the resources in the release specified in the Infrastructure Definition **Release Name** setting used by the Workflow. The namespace is not deleted.
+
+##### Delete Namespaces
+
+If you want to delete the namespace(s) defined in the **Manifests** section of the Harness Service used in this deployment, click the **Delete all namespaces defined in the Manifests section of the Harness Service used in this deployment** checkbox.
+
+[![](./static/delete-kubernetes-resources-115.png)](./static/delete-kubernetes-resources-115.png)
+
+Ensure that you are not deleting a namespace that is used by other deployments.
+
+### Option: Enter the Path and Name of the Manifest
+
+The Delete step can explicitly delete any resource in a Service **Manifests** section, Helm Source Repository, or Helm Repository.
+
+The Delete step does not support resources in Kustomize or OpenShift Templates.
+
+#### Service Manifest
+
+Select **Use File Paths** to enable this option.
+
+You must provide the path and name of the file in **File Paths**, and Harness will delete the resource.
+
+For resources in the Service **Manifests** section, enter the folder name and the file name of the manifest in the Harness Service deployed by this Workflow. For example, **templates/jobs.yaml**. 
+
+![](./static/delete-kubernetes-resources-117.png)
+
+You can include multiple resource files by separating them with commas, for example:
+
+**templates/jobs.yaml, templates/statefulSet.yaml**.
+
+If you apply the ignore comment `# harness.io/skip-file-for-deploy` to a resource but do not use the resource in a Kubernetes Apply step, the resource is never deployed and does not need to be deleted.
+
+#### Helm Source Repository
+
+For resources in a Helm chart, provide the path and name of the file from the root folder of the repo.
+
+For example, the following Service uses a remote manifest that points to a Helm chart at **https://github.com/helm/charts.git/stable/chartsmuseum**. In the chart's **templates** folder, there is a **deployment.yaml** file. In **File Path**, you reference **templates/deployment.yaml**.
+
+![](./static/delete-kubernetes-resources-118.png)
+
+#### Helm Chart Repository
+
+For a Helm Chart Repository, you cannot see the resources as easily as with a Helm Source Repository, but you can view the resources in the chart by extracting it or by viewing them in a deployment log.
+
+For example, here is a deployment log showing the chart resources in **Fetch Files**, a Service **Remote Manifests** using the chart as a Helm Chart Repository, and a Delete step deleting the **deployment.yaml** resource:
+
+![](./static/delete-kubernetes-resources-119.png)
+
+### Option: Delegate Selector
+
+If your Workflow Infrastructure Definition's Cloud Provider uses a Delegate Selector (supported in Kubernetes Cluster and AWS Cloud Providers), then the Workflow uses the selected Delegate for all of its steps.
+
+In these cases, you shouldn't add a Delegate Selector to any step in the Workflow. The Workflow is already using a Selector via its Infrastructure Definition's Cloud Provider. 
+
+If your Workflow Infrastructure Definition's Cloud Provider isn't using a Delegate Selector, and you want this Workflow step to use a specific Delegate, do the following:
+
+In **Delegate Selector**, select the Selector for the Delegate(s) you want to use. You add Selectors to Delegates to make sure that they're used to execute the command. For more information, see [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors).
+
+Harness will use Delegates matching the Selectors you add.
+
+If you use one Selector, Harness will use any Delegate that has that Selector.
+
+If you select two Selectors, a Delegate must have both Selectors to be selected. That Delegate might also have other Selectors, but it must have the two you selected.
+
+You can use expressions for Harness built-in variables or Account Default variables in **Delegate Selectors**. When the variable expression is resolved at deployment runtime, it must match an existing Delegate Selector.
+
+For example, if you have a Delegate Selector **prod** and the Workflow is using an Environment also named **prod**, the Delegate Selector can be `${env.name}`. This is very useful when you match Delegate Selectors to Application component names such as Environments, Services, etc. It's also a way to template the Delegate Selector setting.
+
+### Example 1: Deleting ${k8s.canaryWorkload}
+
+Here is an example of the log from a Delete command:
+
+
+```
+Initializing..
+...
+Resources to delete are:
+- Deployment/harness-example-deployment-canary
+Done. 
+```
+### Example 2: Deleting All Resources and Namespaces
+
+Here is an example using **\*** and the **Delete all namespaces defined in the Manifests section of the Harness Service used in this deployment** setting:
+
+
+```
+All Resources are selected for deletion
+Delete Namespace is set to: true
+Fetching all resources created for release: release-44e74aca-279f-3b4a-bb15-06d750393a8d
+
+Resources to delete are:
+- adwait-12/Deployment/harness-example-deployment
+- adwait-12/Service/harness-example-svc
+- adwait-12/ConfigMap/release-44e74aca-279f-3b4a-bb15-06d750393a8d
+- adwait-12/ConfigMap/harness-example-2
+- adwait-12/ConfigMap/harness-example-1
+- adwait-12/Secret/harness-example-2
+- adwait-12/Secret/harness-example-1
+- adwait-12/Namespace/adwait-12
+Done.
+```
+### Notes
+
+* **Canary Delete and Traffic Management** — If you are using the **Traffic Split** step or doing Istio traffic shifting using the **Apply** step, move the **Canary Delete** step from the **Wrap Up** section of the **Canary** phase to the **Wrap Up** section of the Primary phase.
+Moving the Canary Delete step to the Wrap Up section of the Primary phase will prevent any traffic from being routed to deleted pods before traffic is routed to stable pods in the Primary phase. See [Create a Kubernetes Canary Deployment](create-a-kubernetes-canary-deployment.md) and [Set Up Kubernetes Traffic Splitting](set-up-kubernetes-traffic-splitting.md). 
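+As a quick recap of the formats described in Step 2, the **Resources** setting accepts any of the following (the namespace `my-namespace` is a placeholder):
+
+```
+# Built-in expression, resolved after the canary workload reaches steady state:
+${k8s.canaryWorkload}
+# A single resource, with the namespace optional:
+my-namespace/Deployment/harness-example-deployment-canary
+# A comma-separated list:
+Deployment/harness-example-deployment-canary,ConfigMap/harness-example-config
+# All resources for the release specified in the Infrastructure Definition:
+*
+```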
+
+### Next Steps
+
+* [Scale Kubernetes Pods](scale-kubernetes-pods.md)
+* [Deploy Manifests Separately using Apply Step](deploy-manifests-separately-using-apply-step.md)
+* [Kubernetes Workflow Variable Expressions](workflow-variables-expressions.md)
+
diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/deploy-a-helm-chart-as-an-artifact.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/deploy-a-helm-chart-as-an-artifact.md
new file mode 100644
index 00000000000..a6ba322cb8b
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/deploy-a-helm-chart-as-an-artifact.md
@@ -0,0 +1,396 @@
+---
+title: Deploy Helm Charts (FirstGen)
+description: Typically, Harness Kubernetes deployments using Helm charts involve adding your artifact (image) to Harness in addition to your chart. The chart refers to the artifact you added to Harness (via its v…
+sidebar_position: 260
+helpdocs_topic_id: p5om530pe0
+helpdocs_category_id: n03qfofd5w
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Typically, Harness Kubernetes deployments using Helm charts involve adding your artifact (image) to Harness in addition to your chart. The chart refers to the artifact you added to Harness (via its values.yaml). During deployment, Harness deploys the artifact you added to Harness and uses the chart to manage it.
+
+For the standard Harness Kubernetes and Native Helm deployments using Helm charts, see [Use a Helm Repository with Kubernetes](use-a-helm-repository-with-kubernetes.md) and [Helm Quickstart](https://docs.harness.io/article/2aaevhygep-helm-quickstart).
+
+In addition to this method, you can also simply deploy the Helm chart without adding your artifact to Harness. Instead, the *Helm chart is the artifact*. The Helm chart you provide contains the hardcoded link to the artifact.
+
+Harness installs the chart, gets the artifact from the repo, and then installs the artifact. We call this a *Helm chart deployment*. 
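+For instance, in a Helm chart deployment, the chart's own values.yaml points directly at the image instead of at an artifact added to Harness. The repository and tag below are placeholders:
+
+```
+# values.yaml inside the chart; the image reference is hardcoded
+image:
+  repository: index.docker.io/example/my-app
+  tag: 1.16.0
+```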
+
+This topic covers the second method: a Helm chart deployment.
+
+Looking for the API? You can use the Harness GraphQL API to run a Helm chart deployment. See [Deploy Helm Charts Using the API](https://docs.harness.io/article/sbvn6uwcq1-deploy-helm-charts-using-api).
+
+New to Harness Kubernetes and Native Helm Deployments? Harness includes both [Kubernetes](https://docs.harness.io/article/7in9z2boh6-kubernetes-quickstart) and [Native Helm](../concepts-cd/deployment-types/helm-deployments-overview.md) deployments, and you can use Helm charts in both. Here's the difference:
+
+* **Harness Kubernetes Deployments** allow you to use your own Kubernetes manifests or a Helm chart (remote or local), and Harness executes the Kubernetes API calls to build everything without Helm and Tiller needing to be installed in the target cluster. Harness Kubernetes deployments also support all deployment strategies (Canary, Blue/Green, Rolling, etc.).
+* For **Harness Native Helm Deployments**, you must always have Helm and Tiller running on one pod in your target cluster. Tiller makes the API calls to Kubernetes in these cases. Harness Native Helm deployments only support Basic deployments.
+
+### Before You Begin
+
+* [Kubernetes Quickstart](https://docs.harness.io/article/7in9z2boh6-kubernetes-quickstart)
+* [Use a Helm Repository with Kubernetes](use-a-helm-repository-with-kubernetes.md)
+* [Native Helm Deployments Overview](../concepts-cd/deployment-types/helm-deployments-overview.md)
+* [The Chart Template Developer's Guide](https://helm.sh/docs/chart_template_guide/) from Helm.
+
+### Limitations and Requirements
+
+* Harness does not support AWS cross-account access for [ChartMuseum](https://chartmuseum.com/) and AWS S3. For example, if the Harness Delegate used to deploy charts is in AWS account A, and the S3 bucket is in AWS account B, the Harness Cloud Provider that uses this Delegate in A cannot assume the role for the B account.
+* The Helm Chart must have the [appVersion](https://helm.sh/docs/topics/charts/#the-appversion-field) defined in its Chart.yaml (for example, `appVersion: 1.16.0`).
+
+### Permissions Required
+
+The Harness User Group must have the following [Application Permissions](https://docs.harness.io/article/ven0bvulsj-users-and-permissions) for the Service (or All Services):
+
+* Create
+* Read
+* Update
+* Delete
+
+### Supported Platforms and Technologies
+
+See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms).
+
+#### ChartMuseum Binaries
+
+Many Helm Chart users use ChartMuseum as their Helm Chart Repository server.
+
+* **ChartMuseum binary v0.8.2:** the default ChartMuseum binary used by Harness is v0.8.2.
+* **ChartMuseum binary v0.12.0:** to use ChartMuseum binary v0.12.0 you must enable the feature flag `USE_LATEST_CHARTMUSEUM_VERSION`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+### Review: Chart Collection
+
+Charts are added as a Manifest Source in a Harness Service using a Helm repository.
+
+Once you add a chart, if you make any change to the source other than its credentials, Harness deletes any charts it has already collected and initiates a new collection.
+
+#### Random Chart Collection for Incorrect or Non-existing Chart Version
+
+When Harness fetches the chart, it runs the `helm pull` command and fetches and unpacks the chart in the chart directory you specify.
+
+Harness then compares the chart version from the Chart.yaml file (present in the specified chart directory) with the version you specify in **Remote Manifests** > **Manifest Format** > **Helm Chart from Helm Repository** > **Version**.
+
+![](./static/deploy-a-helm-chart-as-an-artifact-36.png)
+
+If an incorrect or non-existent chart version is provided, Helm (and, consequently, Harness) fetches a random chart.
+
+For example, suppose we have a repo with the following chart versions:
+
+
+
+| NAME | CHART VERSION |
+| --- | --- |
+| `latest/nginx-with-version-page` | `0.1.0+master-34263821-ed3f337` |
+| `latest/nginx-with-version-page` | `0.1.0+master-24263821-ed3f337` |
+| `latest/nginx-with-version-page` | `0.1.0+main-24263821-ed3f337` |
+
+Currently, if you enter an incorrect version like `0.1.0+master-34263821-wrong`, Helm will fetch one of the existing versions and deploy it.
+
+#### Strict Chart Matching during Collection
+
+When Harness fetches the chart, it runs the `helm pull` command and fetches and unpacks the chart in the chart directory you specify.
+
+Harness then compares the chart version from the Chart.yaml file (present in the specified chart directory) with the version you specify in **Remote Manifests** > **Manifest Format** > **Helm Chart from Helm Repository** > **Version**.
+
+![](./static/deploy-a-helm-chart-as-an-artifact-37.png)
+
+If the version you entered does not match the chart Harness pulls, Harness will fail the deployment.
+
+If no version is entered, Harness does not check for a match.
+
+### Step 1: Add a Helm Repository Artifact Server
+
+You connect Harness to a Helm Chart Repository as a Harness Artifact Server and then use it in Kubernetes and Native Helm Services **Manifest Source** settings.
+
+To add the Helm Repository Artifact Server for your chart repo, follow the steps in [Add Helm Repository Artifact Servers](https://docs.harness.io/article/0hrzb1zkog-add-helm-repository-servers).
+
+### Step 2: Create Service and Select Artifact from Manifest
+
+Deploying Helm charts is supported in both Harness Kubernetes and Native Helm deployments.
+
+In your Harness Application, click **Services**, and then click **Add Service**.
+
+Enter a name for the Service.
+
+In **Deployment Type**, select either **Kubernetes** or **Native Helm**. If you select Native Helm, you can enable Helm v3.
+
+Enable the **Artifact from Manifest** setting. This setting tells Harness that you will use the **Manifest Source** in the Service to link to your remote chart.
+
+![](./static/deploy-a-helm-chart-as-an-artifact-38.png)
+
+Click **Submit**. The new Service is created.
+
+Normally, there would be a **Manifests** section in your Kubernetes Service or a **Chart Specification** section in your Native Helm Service; however, since you are using the **Manifest Source** for the chart, those sections are omitted.
+
+![](./static/deploy-a-helm-chart-as-an-artifact-39.png)
+
+### Step 3: Add the Helm Chart
+
+The Helm chart is added the same way in both Harness Kubernetes and Native Helm Services.
+
+The artifact is specified in your Helm values.yaml using `image` parameters just as you normally configure your Helm charts.
+
+In your Harness Service, click **Add Manifest Source**.
+
+The steps are the same for Kubernetes and Native Helm. The only difference is that you select Helm Version in the Kubernetes **Manifest Source**. For the Native Helm Service, the Helm Version was already selected when you created the Service.
+
+Here are the **Remote Manifests** settings from the Kubernetes Service **Manifest Source**:
+
+![](./static/deploy-a-helm-chart-as-an-artifact-40.png)
+
+Do the following:
+
+In **Manifest Format**, select **Helm Chart from Helm Repository**.
+
+Chart polling is only available for Helm charts in a Helm repository, and not for Helm charts in a source (Git) repository.
+
+In **Helm Repository**, select the Helm Repository Artifact Server you set up.
+
+In **Chart Name**, enter the name of the Helm chart to deploy.
+
+Leave **Chart Version** empty. Harness will poll all versions of the chart so you can select which version you want to deploy.
+
+Enable the **Skip Versioning for Service** option to skip versioning of ConfigMaps and Secrets deployed into Kubernetes clusters.
+
+When you are done, it will look something like this:
+
+![](./static/deploy-a-helm-chart-as-an-artifact-41.png)
+
+Click **Submit**.
+
+The remote Helm chart repository and chart are listed as a **Manifest Source**.
+
+![](./static/deploy-a-helm-chart-as-an-artifact-42.png)
+
+### Option: Multiple Manifest Sources
+
+You can add multiple charts by adding multiple Manifest Sources.
+
+![](./static/deploy-a-helm-chart-as-an-artifact-43.png)
+
+When you deploy the Service, you can specify which chart to use.
+
+### Option: Pull a Specific Chart Version
+
+When you add a Helm chart as a Manifest Source to the Service, Harness will pull all the chart and version history metadata. You can see the results of the pull in Manifest History and manually select a specific chart version for the deployment using **Manually pull Manifest**.
+
+To view the manifest history, do the following:
+
+1. Click **Manifest History**. This assistant lists the chart names and versions Harness has pulled.
+2. In the **Manifest History** assistant, click **Manually pull Manifest**. The **Manually Select A Manifest** dialog appears.
+   ![](./static/deploy-a-helm-chart-as-an-artifact-44.png)
+3. In **Manifest Source**, click the Manifest Source you added to the Service.
+4. In **Manifest**, select a manifest version, and then click **SUBMIT**.
+5. Click **Manifest History** to view the history.
+
+Now all available manifest charts and version history metadata are displayed.
+
+![](./static/deploy-a-helm-chart-as-an-artifact-45.png)
+
+### Option: Values YAML Override
+
+In the **Values YAML Override** section, you can enter the YAML for your values.yaml file. The values.yaml file contains the default values for a chart. You will typically have this file in the repo for your chart, but you can add it in the Harness Service instead.
+
+The values.yaml added in the Harness Service will override any matching `key:value` pairs in the values.yaml in your remote repo.
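+
+For example (hypothetical keys and values), the override works at the key level, not the file level:
+
+
+```
+# values.yaml in the chart's remote repo:
+#   replicas: 1
+#   image: docker.io/example/app:1.0.0
+#   namespace: default
+#
+# Values YAML Override added in the Harness Service:
+#   replicas: 3
+#   namespace: qa
+#
+# Effective values at runtime -- matching keys are replaced,
+# chart keys without a matching override are kept:
+replicas: 3
+image: docker.io/example/app:1.0.0
+namespace: qa
+```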
+
+If you add a values.yaml in **Values YAML Override** and you have a values.yaml in your chart already, then Harness will merge them at runtime. The values.yaml in **Values YAML Override** overrides at a `key:value` level, not a file level. If you have a values.yaml in your chart with `key:value` pairs you do not want, you must remove them from that file or override them in the values.yaml in **Values YAML Override**. See [Override Values YAML Files](override-values-yaml-files.md) and [Helm Services](../helm-deployment/2-helm-services.md).
+
+The **Values YAML Override** settings can be overwritten by Harness Environments **Service Configuration Overrides**. See below.
+
+### Step 4: Define the Infrastructure Definition
+
+There is nothing unique about defining the target cluster Infrastructure Definition for a Helm chart deployment. It is the same process as a typical Harness Kubernetes or Native Helm deployment.
+
+Follow the steps in [Define Your Kubernetes Target Infrastructure](define-your-kubernetes-target-infrastructure.md) or [Helm Environments](../helm-deployment/3-helm-environments.md).
+
+### Option: Override Values YAML in Environment
+
+Harness Service **Values YAML Override** settings can be overwritten by Harness Environments **Service Configuration Overrides**.
+
+This enables you to have a Service keep its settings but change them when the Service is deployed to different Environments.
+
+For example, you might have a single Service but an Environment for QA and an Environment for Production, and you want to overwrite the `namespace` setting in the Service's **Values YAML Override** values.yaml depending on the Environment.
+
+You can also overwrite Service variables at the Phase level of a multi-Phase Workflow.
+
+If you add a values.yaml in **Service Configuration Overrides** and you have a values.yaml in your chart already, then Harness will merge them at runtime. 
The values.yaml in **Service Configuration Overrides** overrides at a `key:value` level, not a file level. If you have a values.yaml in your chart with `key:value` pairs you do not want, you must remove them from that file or override them in the values.yaml in **Service Configuration Overrides**. See [Variable Override Priority](https://docs.harness.io/article/benvea28uq-variable-override-priority).
+
+For details, see [Override Harness Kubernetes Service Settings](override-harness-kubernetes-service-settings.md) or [Helm Environments](../helm-deployment/3-helm-environments.md).
+
+1. In the Harness Environment, in **Service Configuration Overrides**, click **Add Configuration Overrides**.
+2. In **Service**, select the Harness Service where you added your remote Helm chart.
+3. In **Override Type**, select **Values YAML**. Click **Local** or **Remote**.
+   1. **Local** - Enter the values.yaml variables and values.
+   2. **Remote** - See [Override Remote Values YAML Files](override-values-yaml-files.md).
+4. When you are done, click **Submit**.
+
+### Step 5: Create the Workflow
+
+Now that your Service and Infrastructure Definition are set up, you can create the Workflow for your Helm chart deployment.
+
+Harness Native Helm deployments only support **Basic** Workflow deployments.
+
+You create your Kubernetes or Native Helm Workflows just as you would if the Service you are deploying is using a **Manifest Source**. There is nothing different you need to do when using a Helm chart exclusively.
+
+For steps on creating Kubernetes Workflows, see:
+
+* [Create a Kubernetes Canary Deployment](create-a-kubernetes-canary-deployment.md)
+* [Create a Kubernetes Rolling Deployment](create-a-kubernetes-rolling-deployment.md)
+* [Create a Kubernetes Blue/Green Deployment](create-a-kubernetes-blue-green-deployment.md)
+
+For steps on creating a Native Helm Basic Workflow, see [Helm Workflows and Deployments](../helm-deployment/4-helm-workflows.md).
+ +### Step 6: Deploy + +Each Helm chart deployment is treated as a release. During deployment, when Harness detects that there is a previous release for the chart, it upgrades the chart to the new release. + +1. In the Workflow, click **Deploy**. The **Start New Deployment** settings appear. Here you will pick the chart version to deploy. +2. In **Version**, select a version of the chart to deploy. +3. Click **Submit**. + +The Helm chart deployment runs. + +For a Kubernetes or Native Helm deployment you will see Harness fetch the Helm chart. Here is an example from a Kubernetes deployment: + + +``` +Fetching files from helm chart repo +Helm repository: Helm Chart Repo +Chart name: chartmuseum +Chart version: 2.14.0 +Helm version: V2 +Repo url: http://storage.googleapis.com/kubernetes-charts/ +Successfully fetched following files: +- helm/repository/repositories.yaml +- helm/repository/local/index.yaml +- helm/repository/cache/local-index.yaml +- helm/repository/cache/b83a7c1d-4cec-3655-9a3c-db9de73510fd-index.yaml +- chartmuseum/values.yaml +- chartmuseum/README.md +- chartmuseum/ci/ingress-values.yaml +- chartmuseum/Chart.yaml +- chartmuseum/templates/ingress.yaml +- chartmuseum/templates/pvc.yaml +- chartmuseum/templates/service.yaml +- chartmuseum/templates/NOTES.txt +- chartmuseum/templates/servicemonitor.yaml +- chartmuseum/templates/serviceaccount.yaml +- chartmuseum/templates/pv.yaml +- chartmuseum/templates/secret.yaml +- chartmuseum/templates/_helpers.tpl +- chartmuseum/templates/deployment.yaml +- chartmuseum/.helmignore + +Done. +``` +Next, Harness will initialize and prepare the workloads, apply the Kubernetes manifests, and wait for steady state. 
+
+In **Wait for Steady State** you will see the Kubernetes workloads deployed and the pods scaled up and running:
+
+
+```
+Deployed Controllers [3]:
+Kind:Deployment, Name:doc-chartmuseum (desired: 1)
+Kind:ReplicaSet, Name:doc-chartmuseum-75bffc6fc7 (desired: 1)
+Kind:ReplicaSet, Name:doc-chartmuseum-766fc995c (desired: 1)
+
+**** Kubernetes Controller Events ****
+  Controller: doc-chartmuseum
+   - Scaled up replica set doc-chartmuseum-75bffc6fc7 to 1
+
+
+**** Kubernetes Pod Events ****
+  Pod: doc-chartmuseum-75bffc6fc7-k5tfb
+   - Successfully assigned default/doc-chartmuseum-75bffc6fc7-k5tfb to aks-delegatepool-37455690-vmss000000
+   - Pulling image "chartmuseum/chartmuseum:v0.7.0"
+   - Successfully pulled image "chartmuseum/chartmuseum:v0.7.0"
+   - Created container chartmuseum
+   - Started container chartmuseum
+
+Waiting for desired number of pods [2/1]
+Waiting for desired number of pods [2/1]
+Waiting for desired number of pods [2/1]
+
+**** Kubernetes Controller Events ****
+  Controller: doc-chartmuseum
+   - Scaled down replica set doc-chartmuseum-766fc995c to 0
+
+Desired number of pods reached [1/1]
+Pods are updated with image [chartmuseum/chartmuseum:v0.7.0] [1/1]
+Pods are running [1/1]
+Pods have reached steady state [1/1]
+Pod [doc-chartmuseum-75bffc6fc7-k5tfb] is running. Host IP: 10.240.0.5. Pod IP: 10.244.2.204
+
+Done
+```
+Your deployment is successful.
+
+#### Versioning and Rollback
+
+Helm chart deployments support versioning and rollback in the same way as Kubernetes and Native Helm deployments.
+
+For Helm chart deployments, the Helm chart version is selected at runtime, and versioning is based on that selection.
+
+### Review: Helm Artifact Variable Expressions
+
+Harness includes several built-in variable expressions that you can use to output Helm chart deployment information:
+
+* `${helmChart.description}` - The `description` in the Helm chart.
+* `${helmChart.displayName}` - The display `name` of the chart. 
+* `${helmChart.metadata.basePath}` - The base path used for Helm charts stored in AWS S3 and Google GCS. +* `${helmChart.metadata.bucketName}` - The S3 or GCS bucket name, if used. +* `${helmChart.metadata.repositoryName}` - The name setting for the repo. +* `${helmChart.metadata.url}` - The URL from where the chart was pulled. +* `${helmChart.name}` - The `name` in the chart. +* `${helmChart.version}` - The version of the chart that was deployed. + +You can use these expressions in a [Shell Script](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) Workflow step after the deployment step in the Workflow: + + +``` +echo "description: ${helmChart.description}" +echo "displayName: ${helmChart.displayName}" +echo "basePath: ${helmChart.metadata.basePath}" +echo "bucketName: ${helmChart.metadata.bucketName}" +echo "repositoryName: ${helmChart.metadata.repositoryName}" +echo "metadataURL: ${helmChart.metadata.url}" +echo "Chart name: ${helmChart.name}" +echo "Version: ${helmChart.version}" +``` +The output will be something like this: + + +``` +description: Host your own Helm Chart Repository +displayName: chartmuseum-2.14.0 +basePath: null +bucketName: null +repositoryName: Helm Chart Repo +metadataURL: http://storage.googleapis.com/kubernetes-charts/ +Chart name: chartmuseum +Version: 2.14.0 +``` +### Review: Helm Artifacts in Pipelines and Triggers + +You can add your Workflow to a stage in a Harness [Pipeline](https://docs.harness.io/article/zc1u96u6uj-pipeline-configuration) or have it executed by a Harness [Trigger](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2). + +In both cases, you can select the Helm chart version you want to deploy. + +#### Pipeline + +Once you add the Workflow to a stage in a Pipeline, you must select the Helm chart version for deployment when you deploy the Pipeline. + +The steps are the same as when deploying the Workflow. 
You select a chart version:
+
+![](./static/deploy-a-helm-chart-as-an-artifact-46.png)
+
+#### Trigger
+
+You can create a Trigger for any Workflow or Pipeline to run when a new version of the chart is published.
+
+1. In your Harness Application, click **Triggers**.
+2. Click **Add Trigger**. The **Trigger** settings appear.
+3. In **Name**, enter a name for the Trigger. This name will appear on the **Deployments** page to indicate the Trigger that initiated a deployment.
+4. Click **Next**.
+5. In **Condition**, select **On Manifest Changes**.
+6. In **Services**, select the Service using the Helm chart in **Manifest Source**. You can use regex to filter names if needed.
+7. Click **Next**.
+8. In **Actions**, there are three main settings:
+   * **From Triggering Manifest:** Select this option to use the chart identified in the Service you selected in **Condition**.
+   * **Last Collected:** Select this option to use the last version collected by Harness in the Harness Service. Chart versions are collected automatically by Harness every minute.
+   * **Last Successfully Deployed:** The last chart that was deployed by the Workflow/Pipeline you selected. In **Workflow**/**Pipeline**, select the Workflow/Pipeline to run.
+9. Click **Submit**. The Trigger is created.
+
+### Configure As Code
+
+To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button.
+
diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/deploy-kubernetes-manifests-packaged-with-artifacts.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/deploy-kubernetes-manifests-packaged-with-artifacts.md
new file mode 100644
index 00000000000..fa42bffe9d2
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/deploy-kubernetes-manifests-packaged-with-artifacts.md
@@ -0,0 +1,391 @@
+---
+title: Add Packaged Kubernetes Manifests
+description: Use manifests packaged along with artifacts by using the Custom Remote Manifests setting in a Harness Kubernetes Service.
+sidebar_position: 160
+helpdocs_topic_id: 53qqnebrak
+helpdocs_category_id: n03qfofd5w
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Currently, this feature is behind the Feature Flag `CUSTOM_MANIFEST`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+By default, you add Kubernetes and OpenShift files to a Harness Kubernetes Service inline or from a repo as files.
+
+In some cases, your manifests, templates, etc. are in a packaged archive and you simply wish to extract them and use them at runtime.
+
+You can use a packaged archive with the **Custom Remote Manifests** setting in a Harness Kubernetes Service. You add a script to the Service that pulls the package and extracts its contents. Next, you supply the path to the manifest, template, etc.
+
+Looking for other methods? See [Define Kubernetes Manifests](define-kubernetes-manifests.md).
+
+### Before You Begin
+
+* [Kubernetes Quickstart](https://docs.harness.io/article/7in9z2boh6-kubernetes-quickstart)
+* [Kubernetes Deployments Overview](../concepts-cd/deployment-types/kubernetes-overview.md)
+* [Add Container Images for Kubernetes Deployments](add-container-images-for-kubernetes-deployments.md)
+
+### Supported Platforms and Technologies
+
+See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms).
+ +### Limitations + +* Custom Remote Manifests scripts use Bash only. +* The Delegate that runs the script must have all the software needed for the scripts to execute. +Currently, you cannot select a specific Delegate to execute the Custom Remote Manifests script. Harness selects the Delegate based on [its standard methods](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#how_does_harness_manager_pick_delegates). You can use [Delegate Profiles](https://docs.harness.io/article/yd4bs0pltf-run-scripts-on-the-delegate-using-profiles) to add software to Delegates from Harness. +If you select a Delegate in the Kubernetes Cluster Cloud Provider used by the Workflow's Infrastructure Definition, then the script is run on that Delegate. + +### Review: What Workloads Can I Deploy? + +See [Kubernetes How-tos](kubernetes-deployments-overview.md). + +### Option: Add Secrets for Script + +Typically, your script to pull the remote package will use a user account. For example: + + +``` +curl -sSf -u "johndoe:mypwd" -O 'https://mycompany.jfrog.io/module/example/manifest.zip' +``` +You can use Harness secrets for the username and password in your script. For example: + + +``` +curl -sSf -u "${secrets.getValue("username")}:${secrets.getValue("password")}" -O 'https://mycompany.jfrog.io/module/example/manifest.zip' +``` +For more information, see [Use Encrypted Text Secrets](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets). + +### Step 1: Create a Harness Kubernetes Service + +Create a Harness Kubernetes Service. + +In Harness, click **Setup**, and then click **Add Application**. + +Enter a name for the Application and click **Submit**. + +Click **Services**, and then click **Add Service**. The **Add Service** settings appear. + +In **Name**, enter a name for the Service. + +In **Deployment Type**, select **Kubernetes**, and then ensure **Enable Kubernetes V2** is selected. + +Click **Submit**. The new Harness Kubernetes Service is created. 
+ +### Step 2: Use Custom Remote Manifests + +In your Harness Kubernetes Service, in **Manifests**, click more options (︙) and select **Custom Remote Manifests**. + +In **Manifest Format**, select **Kubernetes YAML** or **OpenShift Manifest**. + +Now you can add your script to pull the package containing your manifest. + +### Step 3: Add Script for Remote Package + +In **Script**, enter the script that pulls the package containing your manifest and extracts the manifest from the package. For example: + + +``` +curl -sSf -u "${secrets.getValue("username")}:${secrets.getValue("password")}" -O 'https://mycompany.jfrog.io/module/example/manifest.zip' + +unzip manifest.zip +``` +You can use Harness Service, Workflow, secrets, and built-in variables in the script. + +The script is run on the Harness Delegate selected for deployment. If you selected a Delegate in the Kubernetes Cluster Cloud Provider used by the Workflow's Infrastructure Definition, then the script is run on that Delegate. + +Harness creates a temporary working directory on the Delegate host for the downloaded package. You can reference the working directory in your script with `WORKING_DIRECTORY=$(pwd)` or `cd $(pwd)/some/other/directory`. + +Once you have deployed the Workflow, you can check which Delegate was selected in the **Delegates Evaluated** setting for the Workflow step that used the manifest. + +Look for the **CUSTOM\_MANIFEST\_VALUES\_FETCH\_TASK** task: + +![](./static/deploy-kubernetes-manifests-packaged-with-artifacts-177.png) + +You can also map specific Delegates to specific Harness tasks. See [Delegate Task Category Mapping](https://docs.harness.io/article/nzuhppobyg-map-tasks-to-delegates-and-profiles). + +### Step 4: Add Path to Manifests + +Once you have a script that extracts your package, you provide Harness with the path to the manifest in the expanded folders and files. + +You can use Harness Service, Workflow, and built-in variables in the path. 
+
+#### Kubernetes YAML
+
+You can enter the path to a manifests folder.
+
+For example, if your expanded package has this folder structure:
+
+
+```
+manifest:
+  - values.yaml
+  - templates
+    - deployment.yaml
+    - service.yaml
+```
+In this example, you can enter **manifest** and Harness automatically detects the **values.yaml** and the other files (for example, **deployment.yaml** and **service.yaml**). If no values.yaml file is present, Harness will simply use the other files.
+
+That's all the setup required. You can now deploy the Service and the script is executed at runtime.
+
+The remainder of this topic covers options for overriding the manifest.
+
+#### OpenShift Manifest
+
+Provide the path to the OpenShift template, Kubernetes manifest, or Helm file. For example, **manifest/template.yaml**.
+
+Do not enter a folder. Harness requires a direct path to the file.
+
+That's all the setup required. You can now deploy the Service and the script is executed at runtime.
+
+The remainder of this topic covers options for overriding the template.
+
+### Option: Delegate Selector
+
+In **Delegate Selector**, select the Selector for the Delegate(s) you want to use. You add Selectors to Delegates to make sure that they're used to execute the command. For more information, see [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors).
+
+Harness will use Delegates matching the Selectors you add.
+
+If you use one Selector, Harness will use any Delegate that has that Selector.
+
+If you select two Selectors, a Delegate must have both Selectors to be selected. That Delegate might also have other Selectors, but it must have the two you selected.
+
+You can use expressions for Harness built-in variables or Account Default variables in **Delegate Selectors**. When the variable expression is resolved at deployment runtime, it must match an existing Delegate Selector. 
+
+For example, if you have a Delegate Selector **prod** and the Workflow is using an Environment also named **prod**, the Delegate Selector can be `${env.name}`. This is very useful when you match Delegate Selectors to Application component names such as Environments, Services, etc. It's also a way to template the Delegate Selector setting.
+
+### Option: Override Manifest in Service
+
+You can override settings in the manifest you unpacked in Custom Remote Manifest using the Service's **Configuration** section.
+
+In the Harness Service, in **Configuration**, click **Add Values** (Kubernetes) or **Add Param** (OpenShift).
+
+Choose from the following options.
+
+#### Inherit Script from Service
+
+Select this option if you want to use an alternative values.yaml file (Kubernetes) or parameters file (OpenShift) from the package you pulled in **Custom Remote Manifest**.
+
+You can use Harness Service, Workflow, and built-in variables in the path. For example, `${serviceVariable.overridesPath}/values-production.yaml`.
+
+You can enter multiple values separated by commas.
+
+##### Kubernetes
+
+Enter the path to an alternative values file in the extracted package.
+
+For example, let's say you entered the folder **manifest** in the **Custom Remote Manifest** path, but you have another values file in a **production** folder in the extracted package.
+
+You can enter the path to the other values file and Harness will use it instead of the values.yaml file in the **manifest** folder. For example, `production/values-production.yaml`.
+
+The path must be from the root of the extracted package.
+
+##### OpenShift
+
+Enter the path to an alternative parameters file in the extracted package.
+
+For OpenShift, this is the equivalent of passing a parameters file in the `oc process` command (`--param-file=parameters/file.env`).
+
+#### Define new Script
+
+Enter a script to override the script entered in **Custom Remote Manifest**. 
The new script can download and extract a different package. + +Provide the path to the new manifest folder (Kubernetes) or template file (OpenShift). + +You can use Harness Service, Workflow, and built-in variables in the script and path. You can enter multiple values separated by commas. + +### Option: Override Manifests in Environment + +You can override Harness Service settings at the Harness Environment level using Service Configuration Overrides. See [Override Harness Kubernetes Service Settings](override-harness-kubernetes-service-settings.md) and [Override a Service Configuration in an Environment](https://docs.harness.io/article/4m2kst307m-override-service-files-and-variables-in-environments). + +The **Custom Manifest Override Configuration** follows the same guidelines as overriding settings using the Service's **Configuration** sections: **Add Values** (Kubernetes) or **Add Param** (OpenShift). + +Here's an example overriding Service file locations with new file locations: + +![](./static/deploy-kubernetes-manifests-packaged-with-artifacts-178.png) + +You can use Harness Service, Workflow, and built-in variables in the script and path. You can enter multiple values separated by commas. + +### Option: Use a Harness Artifact Source + +Although the **Custom Remote Manifests** option is designed for when the manifest and deployment artifact are in the same package, you can use them separately with **Custom Remote Manifests**. + +Deploying a manifest separately from the deployment artifact is the Harness default setup. Artifacts are added to a Harness Kubernetes Service from a repository and manifests are inline or added from a separate repo. 
See [Add Container Images for Kubernetes Deployments](add-container-images-for-kubernetes-deployments.md) and [Define Kubernetes Manifests](define-kubernetes-manifests.md). Simply add the artifact in **Artifact Source** as described in [Add Container Images for Kubernetes Deployments](add-container-images-for-kubernetes-deployments.md). + +In the values.yaml and manifests that you add using **Custom Remote Manifests**, you must reference the Harness Artifact Source using the Harness built-in variables: + +* `image: ${artifact.metadata.image}` +* `dockercfg: ${artifact.source.dockerconfig}` + +For example, in the values.yaml you would add these variables: + + +``` +name: harness-example +replicas: 1 + +image: ${artifact.metadata.image} +dockercfg: ${artifact.source.dockerconfig} + +createNamespace: true +... +``` +And then in the manifest for a deployment, you would reference these variables: + + +``` +... +spec: + {{- if .Values.dockercfg}} + imagePullSecrets: + - name: {{.Values.name}}-dockercfg + {{- end}} + containers: + - name: {{.Values.name}} + image: {{.Values.image}} +... +``` +### Option: Use Local Script + +You can also use a local script to create your manifest in **Custom Remote Manifests**. + +You can use Harness Service, Workflow, secrets, and built-in variables in the script.
+ +Here is an example using Service variables in the script and **Path to OpenShift Manifests** setting: + +![](./static/deploy-kubernetes-manifests-packaged-with-artifacts-179.png) + +Here is the script used: + +Example Script +``` +WORKING_DIRECTORY=$(pwd) +MANIFEST_PATH="${serviceVariable.manifestPath}" +OVERRIDES_PATH="${serviceVariable.overridesPath}" + +########################################## +########################################## +## +## TEMPLATE +## +########################################## +########################################## + +mkdir -p "$MANIFEST_PATH" +cd "$MANIFEST_PATH" + +read -r -d '' TEMPLATE_MANIFEST <<- TEMPLATE +apiVersion: v1 +kind: Template +metadata: + name: ${workflow.variables.workloadName}-template + annotations: + description: "Description" +objects: +- apiVersion: v1 + kind: ConfigMap + metadata: + name: \${WORKLOAD_NAME} + data: + value: \${CONFIGURATION} +- apiVersion: v1 + kind: Secret + metadata: + name: \${WORKLOAD_NAME} + stringData: + value: \${SECRET} +- apiVersion: apps/v1 + kind: Deployment + metadata: + name: \${WORKLOAD_NAME}-deployment + labels: + secret: ${secrets.getValue("custom-manifest-validation-test-secret")} + spec: + replicas: 1 + selector: + matchLabels: + app: \${WORKLOAD_NAME} + param: ${workflow.variables.valueOverride} + param1: ${workflow.variables.value1Override} + param2: ${workflow.variables.value2Override} + param3: ${workflow.variables.value3Override} + param4: ${workflow.variables.value4Override} + + template: + metadata: + labels: + app: \${WORKLOAD_NAME} + param: \${PARAM} + param1: \${PARAM1} + param2: \${PARAM2} + param3: \${PARAM3} + param4: \${PARAM4} + spec: + containers: + - name: \${WORKLOAD_NAME} + image: harness/todolist-sample:11 + envFrom: + - configMapRef: + name: \${WORKLOAD_NAME} + - secretRef: + name: \${WORKLOAD_NAME} +parameters: +- name: WORKLOAD_NAME + description: Workload name + value: ${workflow.variables.workloadName} +- name: CONFIGURATION + description: 
Configuration value + value: Some configuration value +- name: SECRET + description: Secret value + value: Some secret value +- name: PARAM + description: Param value + value: default-override +- name: PARAM1 + description: Param value + value: default-override +- name: PARAM2 + description: Param value + value: default-override +- name: PARAM3 + description: Param value + value: default-override +- name: PARAM4 + description: Param value + value: default-override +TEMPLATE + +echo "$TEMPLATE_MANIFEST" > template.yaml + +########################################## +########################################## +## +## ADDITIONAL OVERRIDES +## +########################################## +########################################## + +cd "$WORKING_DIRECTORY" +mkdir -p "$OVERRIDES_PATH" +cd "$OVERRIDES_PATH" + +read -r -d '' PARAMS_OVERRIDE1 <<- OVERRIDE1 +PARAM1: ${configFile.getAsString("values1-override.txt")} +PARAM2: ${configFile.getAsString("values1-override.txt")} +OVERRIDE1 + +read -r -d '' PARAMS_OVERRIDE2 <<- OVERRIDE2 +PARAM2: ${configFile.getAsString("value2Override")} +PARAM3: values2-override +OVERRIDE2 + +echo "$PARAMS_OVERRIDE1" > params1 +echo "$PARAMS_OVERRIDE2" > params2 + + +``` +### Notes + +* You can use Go templating in your Kubernetes resource files, just as you would for files stored in Git or inline. See [Use Go Templating in Kubernetes Manifests](use-go-templating-in-kubernetes-manifests.md). For OpenShift, you must use OpenShift templating. +* If the artifact you are deploying with your manifest is public (DockerHub) and does not require credentials, you can use the standard public image reference, such as `image: harness/todolist-sample:11`. + +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. 
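The local script example above relies on heredoc escaping: with an unquoted heredoc delimiter, the shell expands Harness expressions, while a backslash keeps OpenShift-style `${PARAM}` placeholders literal for `oc process` to resolve later. This minimal sketch (hypothetical folder and parameter names) demonstrates just that escaping behavior:

```shell
# Minimal sketch of the heredoc pattern used in the local script above
# (hypothetical names). The unquoted delimiter lets the shell expand
# variables now, while the backslash in \${WORKLOAD_NAME} survives as a
# literal ${WORKLOAD_NAME} placeholder for `oc process` to resolve later.
MANIFEST_PATH="manifests"
mkdir -p "$MANIFEST_PATH"

cat <<TEMPLATE > "$MANIFEST_PATH/template.yaml"
apiVersion: v1
kind: ConfigMap
metadata:
  name: \${WORKLOAD_NAME}
TEMPLATE

# The written file contains the unresolved placeholder.
cat "$MANIFEST_PATH/template.yaml"
```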
+ + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/deploy-kubernetes-service-to-multiple-clusters-using-rancher.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/deploy-kubernetes-service-to-multiple-clusters-using-rancher.md new file mode 100644 index 00000000000..b1dce94ab77 --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/deploy-kubernetes-service-to-multiple-clusters-using-rancher.md @@ -0,0 +1,193 @@ +--- +title: Deploy Services to Multiple Kubernetes Clusters Simultaneously using Rancher +description: Deploy Kubernetes Services to multiple clusters simultaneously using Rancher and Harness. +sidebar_position: 400 +helpdocs_topic_id: hsc50ny57g +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the feature flag `RANCHER_SUPPORT`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. + +You can deploy Kubernetes Services to multiple clusters simultaneously using Rancher and Harness. You use Rancher cluster labels to identify multiple clusters in a Harness Infrastructure Definition and then Harness deploys to each cluster simultaneously. + +**What's Rancher?** Rancher is a centralized control plane for all the Kubernetes clusters running across your company. Rancher centralizes operations like cluster provisioning, upgrades, user management, and policy management. See [Rancher product docs](https://rancher.com/docs/rancher/v2.6/en/). + +This topic describes how to set up a multiple cluster Infrastructure Definition in Harness for Rancher clusters and then deploy to those clusters using Harness Workflows. + + +You can also deploy to multiple infrastructures without using Rancher. See [Deploy a Workflow to Multiple Infrastructures Simultaneously](../concepts-cd/deployments-overview/deploy-to-multiple-infrastructures.md).
+ +### Before You Begin + +* This topic assumes you are familiar with Rancher and have set up Kubernetes clusters in its UI. If you are new to Rancher, see [Setting up Kubernetes Clusters in Rancher](https://rancher.com/docs/rancher/v2.5/en/cluster-provisioning/) from Rancher. +* This topic assumes you are familiar with Harness Kubernetes deployments. See [Kubernetes Quickstart](https://docs.harness.io/article/7in9z2boh6-kubernetes-quickstart). + +### Visual Summary + +The following brief video demonstrates how to deploy Services to multiple Kubernetes clusters simultaneously using Rancher: + + + + +### Limitations + +* Harness supports Rancher version 2.6.3 or later. +* Harness uses v3 APIs to interact with Rancher (`/v3/clusters/{clusterName}?action=generateKubeconfig` and `/v3/clusters`). +* Harness supports the **Kubernetes** Deployment Type for Rancher deployments at this time. Helm will be supported soon. +* Harness supports Rolling, Canary, and Blue Green deployments for multiple clusters using Rancher. +* Harness does not support cluster-level overrides in this scenario. The same manifests and Services are deployed to all eligible clusters. + +### Review: Harness Delegates and Rancher Clusters + +Before setting up a Rancher Cloud Provider, you need to install a Harness Delegate in your environment. + +The Harness Delegate does not need to be a Kubernetes Delegate and it does not need to be installed in a target cluster. + +The Harness Delegate does need to be able to connect to the Rancher URL endpoint and to connect to the target Kubernetes clusters. + +See [Harness Delegate Overview](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation). + +### Review: Cluster Labels and Harness Infrastructure Definitions + +Harness targets Rancher clusters using cluster labels. When you set up a Rancher Infrastructure Definition in Harness, you will select the target clusters by adding the labels as name:value pairs in **Cluster Selection Criteria**. 
+ +Here's an example where the labels from two clusters are added to **Cluster Selection Criteria**. + +![](./static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-196.png) + +Harness uses labels in the following way: + +* Harness will only target clusters that match the name:value pair you add in the Infrastructure Definition. +* If you add multiple name:value pairs, Harness treats those as AND conditions. Clusters must have all of the name:value pairs as labels to be selected. +* In **Label Values**, you can enter a comma-separated list of values. This list makes that value an OR condition. Labels in **Label Name** can have any of the values in the comma-separated list to match. For example: if the value in **Label Values** is `a, b`, the value in a cluster label can be either `a` or `b` and it will be a match. + +### Step 1: Add Labels to Rancher Clusters + +You can use existing cluster labels or add new ones for Harness deployments. + +![](./static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-197.png) + +In Rancher, add labels to the clusters to identify them for the Harness deployment. + +To add labels to an existing cluster in Rancher, select the cluster, click more options (**︙**), and then click **Edit Config**. + +![](./static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-198.png) + +Click **Add Label** to add new labels. + +![](./static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-199.png) + +Now that the cluster has labels, you can identify it in Harness as a target cluster. + +### Step 2: Add a Harness Rancher Cloud Provider + +To connect Harness to your Rancher account, you must set up a Harness Rancher Cloud Provider. + +See [Add Rancher Cloud Providers](https://docs.harness.io/article/dipgqjn5pq-add-rancher-cloud-providers). + +### Step 3: Add a Rancher Infrastructure Definition + +The Rancher Infrastructure Definition targets the Kubernetes clusters for your deployments.
You select clusters in the Infrastructure Definition using cluster labels you added in Rancher. + +See [Review: Cluster Labels and Harness Infrastructure Definitions](https://harness.helpdocs.io/article/hsc50ny57g#review_cluster_labels_and_harness_infrastructure_definitions) above. + +In your Harness Application, click **Environments**. + +Click **Add Infrastructure Definition**. + +Enter the following settings: + +* In **Cloud Provider Type**, select **Rancher**. +* In **Deployment Type**, select **Kubernetes**. +* In **Cloud Provider**, select the Rancher Cloud Provider you added using the steps in [Add Rancher Cloud Providers](https://docs.harness.io/article/dipgqjn5pq-add-rancher-cloud-providers). +* In **Namespace**, enter the target namespace for the deployments. + + You can only enter one namespace. When you deploy to multiple clusters, the target namespaces must be the same. + + You can also use a Harness variable expression to reference Kubernetes namespaces in Harness Infrastructure Definitions. When a Workflow is run, the namespace in the Infrastructure Definition is applied to all manifests in the Service. See [Select Kubernetes Namespaces based on InfraMapping](create-kubernetes-namespaces-based-on-infra-mapping.md). +* In **Release Name**, use the default `release-${infra.kubernetes.infraId}` or enter a Kubernetes-compliant release name. + +In **Cluster Selection Criteria**, you will add the Rancher cluster labels to select the target clusters for this Infrastructure Definition. + +Click **Add**, and then enter the label name and value(s). + +See [Review: Cluster Labels and Harness Infrastructure Definitions](https://harness.helpdocs.io/article/hsc50ny57g#review_cluster_labels_and_harness_infrastructure_definitions) above. + +![](./static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-200.png) + +Click **Submit**. The Infrastructure Definition is added to the Environment. You can now select it in your Harness Workflows.
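The selection semantics described earlier (name:value pairs are ANDed, comma-separated values are ORed) can be sketched as a small shell function. The labels and cluster names here are hypothetical, purely to illustrate how Harness filters clusters:

```shell
# Hypothetical sketch of Cluster Selection Criteria semantics.
# Criteria: env=prod AND region in {us, eu}. Separate name:value
# pairs are AND conditions; comma-separated Label Values are ORed.
matches() {
  local env="$1" region="$2"
  [ "$env" = "prod" ] || return 1   # every Label Name must match (AND)
  case "$region" in                 # any value in the comma list matches (OR)
    us|eu) return 0 ;;
  esac
  return 1
}

matches prod us && echo "cluster-a: selected"
matches prod ap-south || echo "cluster-b: skipped (region not in list)"
matches dev us || echo "cluster-c: skipped (env does not match)"
```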
+ +### Option: Harness Variables in Infrastructure Definition + +You can use Harness built-in and Workflow variables in **Cluster Selection Criteria**. This allows you to provide the labels and values at runtime. + +See: + +* [Built-in Variables List](https://docs.harness.io/article/aza65y4af6-built-in-variables-list) +* [Set Workflow Variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) +* [Pass Variables between Workflows](https://docs.harness.io/article/gkmgrz9shh-how-to-pass-variables-between-workflows) +* [Passing Variables into Workflows and Pipelines from Triggers](https://docs.harness.io/article/revc37vl0f-passing-variable-into-workflows) + +### Step 4: Create a Workflow + +Harness supports Rolling, Canary, and Blue Green deployments for multiple clusters using Rancher. + +Each deployment type uses different steps for deploying to the clusters selected in the Infrastructure Definition. + +#### Rolling + +Rolling deployment follows the standard Harness Kubernetes Rolling deployment process as described in [Create a Kubernetes Rolling Deployment](create-a-kubernetes-rolling-deployment.md). The only difference is that the process is performed on multiple clusters simultaneously. + +When you create a Rolling Workflow using your Rancher Infrastructure Definition, Harness populates the Workflow with the following default steps: + +* **Rancher Resolve Clusters:** gets the total list of clusters using the Infrastructure Definition and then uses the **Cluster Selection Criteria** from the Infrastructure Definition to filter the list. 
+* Here's an example of the log from a deployment: +``` +INFO 2022-02-16 12:46:39 Fetching list of clusters and labels from Rancher: https://rancher-internal.dev.harness.io +INFO 2022-02-16 12:46:39 Fetched clusters list: [cd-play-test-cluster, local] +INFO 2022-02-16 12:46:39 Eligible clusters list after applying label filters: [cd-play-test-cluster, local] +``` +* **Rancher Rollout Deployment:** performs a new Kubernetes rollout deployment for each cluster matching the criteria in **Cluster Selection Criteria**. +* **Rancher Rollback Deployment:** in the case of failures, rolls back each cluster to its previous app version. + +![](./static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-201.png) + +Here's what a successful deployment looks like. You can see that two matching clusters were targeted. + +![](./static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-202.png) + +#### Canary + +Canary deployment follows the standard Harness Kubernetes Canary deployment process as described in [Create a Kubernetes Canary Deployment](create-a-kubernetes-canary-deployment.md). The only difference is that the process is performed on multiple clusters simultaneously. + +When you create a Canary Workflow using your Rancher Infrastructure Definition, Harness populates the Workflow with the following default steps: + +* Canary Phase: + + **Rancher Resolve Clusters:** gets the total list of clusters using the Infrastructure Definition and then uses the **Cluster Selection Criteria** from the Infrastructure Definition to filter the list. + + **Rancher Canary Deployment:** performs a Canary deployment to the number of pods you want as either a count or percentage. + + **Rancher Delete:** deletes the pods used by Rancher Canary Deployment. +* Primary Phase: + + **Rancher Resolve Clusters:** since cluster resolution was performed in the Canary Phase, it is skipped here. In the deployment, you will see a message like `Cluster Resolution is already done. 
Filtered clusters list: [cd-play-test-cluster, local]. Skipping`. + + **Rancher Rollout Deployment:** performs a new Kubernetes rollout deployment for each cluster matching the criteria in **Cluster Selection Criteria**. + + **Rancher Rollback Deployment:** in the case of failures, rolls back each cluster to its previous version. + +#### Blue Green + +Blue Green deployment follows the standard Harness Kubernetes Blue Green deployment process as described in [Create a Kubernetes Blue/Green Deployment](create-a-kubernetes-blue-green-deployment.md). The only difference is that the process is performed on multiple clusters simultaneously. + +When you create a Blue Green Workflow using your Rancher Infrastructure Definition, Harness populates the Workflow with the following default steps: + +* **Rancher Resolve Clusters:** gets the total list of clusters using the Infrastructure Definition and then uses the **Cluster Selection Criteria** from the Infrastructure Definition to filter the list. +* **Rancher Stage Deployment:** standard Harness Blue Green step for Kubernetes where Harness creates two Kubernetes services (primary and stage), one pod set for the new app version, annotates the primary and stage services to identify them, and points the stage service at the new app version pod set. See [Create a Kubernetes Blue/Green Deployment](create-a-kubernetes-blue-green-deployment.md). +* **Rancher Swap Primary with Stage:** Harness swaps the primary service to the pod set for the new app version. Production traffic now flows to the new app version. See [Create a Kubernetes Blue/Green Deployment](create-a-kubernetes-blue-green-deployment.md). +* Rollback Steps: + + **Rancher Swap Primary with Stage:** the resources are not versioned because a Blue/Green deployment uses **rapid rollback**: network traffic is simply routed back to the original instances. You do not need to redeploy previous versions of the service/artifact and the instances that comprised their environment.
+ +### Review: Rancher Expressions + +The `${rancher.clusters}` expression can be used anywhere in your Workflow following the **Rancher Resolve Clusters** step. + +The `${rancher.clusters}` expression resolves to a comma-separated list of the clusters used in the deployment. + +### See Also + +* [Deploy a Workflow to Multiple Infrastructures Simultaneously](../concepts-cd/deployments-overview/deploy-to-multiple-infrastructures.md) +* [Select Kubernetes Namespaces based on InfraMapping](create-kubernetes-namespaces-based-on-infra-mapping.md) + + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/deploy-manifests-separately-using-apply-step.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/deploy-manifests-separately-using-apply-step.md new file mode 100644 index 00000000000..5485dda1142 --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/deploy-manifests-separately-using-apply-step.md @@ -0,0 +1,214 @@ +--- +title: Deploying Manifests Separately using Apply Step +description: Deploy manifests separately. +sidebar_position: 250 +helpdocs_topic_id: 4vjgmjcj6z +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +By default, the Harness Kubernetes Workflow will deploy all of the resources you have set up in the Service **Manifests** section. + +Deploying Kubernetes Jobs? See [Run Kubernetes Jobs](run-kubernetes-jobs.md). + +In some cases, you might have resources that you do not want to deploy as part of the main Workflow deployment, but want to apply as another step in the Workflow. For example, you might want to deploy an additional resource only after Harness has verified the deployment of the main resources in the Service **Manifests** section. + +Workflows include an **Apply** step that allows you to deploy any resource you have set up in the Service **Manifests** section.
+ +### Before You Begin + +* [Ignore a Manifest File During Deployment](ignore-a-manifest-file-during-deployment.md) +* [Define Kubernetes Manifests](define-kubernetes-manifests.md) +* [Kubernetes Versioning and Annotations](https://docs.harness.io/article/ttn8acijrz-versioning-and-annotations) + +### Review: What Workloads Can I Deploy? + +See [What Can I Deploy in Kubernetes?](https://docs.harness.io/article/6ujb3c70fh). + +### Step 1: Ignore the Workload + +Typically, you will instruct Harness to ignore the workload that you want to deploy separately using the **Apply Step**. + +To have a Workflow ignore a resource file in a Service Manifest section, you add the comment `# harness.io/skip-file-for-deploy` to the **top** of the file. For example, here is a ConfigMap file using the comment: + +![](./static/deploy-manifests-separately-using-apply-step-188.png) + +Now, when this Service is deployed by a Workflow, this ConfigMap resource will not be applied by default. + +The comment `# harness.io/skip-file-for-deploy` must be at the **top** of the file. If it is on the second line it will not work and the resource will be deployed as part of the main Workflow rollout. + +### Step 2: Add the Apply Step + +In your Kubernetes Workflow, click **Add Step**, and then select **Apply**. + +![](./static/deploy-manifests-separately-using-apply-step-189.png) + +### Step 3: Enter the Path and Name of the Manifest + +The Workflow Apply step will apply any resource in a Service **Manifest** section explicitly. You must provide the path and name of the file in **Apply**, and Harness will deploy the resource. 
+ +For example, the following image shows a Jobs resource in a Service **Manifest** section that uses the ignore comment `# harness.io/skip-file-for-deploy` so that the Workflow does not apply it as part of its main **Deploy** steps, and the **Apply** step that specifies the same Jobs resource: + +[![](./static/deploy-manifests-separately-using-apply-step-190.png)](./static/deploy-manifests-separately-using-apply-step-190.png) + +The **File paths** field in the Apply step must include the folder name and the file name. In the above example, the folder **templates** is included with the file name **jobs.yaml**: `templates/jobs.yaml`. + +You can include multiple resource files in the Apply step **File paths** field by separating them with commas, for example: `templates/jobs.yaml, templates/statefulSet.yaml`: + +![](./static/deploy-manifests-separately-using-apply-step-192.png) + +If you apply the ignore comment `# harness.io/skip-file-for-deploy` to a resource but do not use the resource in an Apply step, the resource is never deployed. + +If you use a remote manifest in your Harness Service, in **File paths** enter a path relative to the path you specified for the manifest in the Harness Service. + +Harness variables such as [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) are supported in the **File Paths** setting. + +### Option: Manifest Options + +#### Export Manifest + +If you enable this option, Harness does the following at runtime: + +* Downloads manifests (if remote). +* Renders manifests in logs. +* Performs a dry run unless the **Skip Dry Run** option is enabled. +* Exports the deployment manifests to the variable `${k8sResources.manifests}`. +* **Does not deploy the manifests.** To deploy the manifests, you must add another Kubernetes step of the same type (Canary, Rolling, Apply, Stage Deployment) and enable the **Inherit Manifest** option to deploy a copy of the exported manifests.
+ +If **Export Manifest** is enabled, the manifests are not deployed. You can use the **Inherit Manifest** option in a subsequent Kubernetes step to deploy a copy of the exported manifests. + +The exported manifests can be written to storage on the Delegate where the step is run. For example, you can add a [Shell Script](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) step to echo and write the manifest to a file: + + +``` +echo "${k8sResources.manifests}" > /opt/harness-delegate/test/canaryPlan +``` +If you use `${k8sResources.manifests}` in a script, ensure that your script expects multiline output. You can use the `cat` command to concatenate the lines. + +If you have a third-party tool that checks compliance, it can use the exported manifests. + +To deploy the manifests, a copy of the exported manifests can be inherited by the next Kubernetes step (Canary, Rolling, Apply, Stage Deployment) using the **Inherit Manifest** option. + +If **Export Manifest** is enabled in multiple Kubernetes steps of the same type in the same Workflow Phase, the last step overrides the exported manifests. This is important because the next Kubernetes step to inherit a copy of the exported manifests will only use the exported manifests from the last Kubernetes step with **Export Manifest** enabled. + +#### Inherit Manifest + +Enable this option to inherit and deploy a copy of the manifests exported from the previous Kubernetes step (Canary, Rolling, Apply, Stage Deployment) using the **Export Manifest** option. + +The **Inherit Manifest** option will only inherit the exported manifest from the last Kubernetes step of the same type and in the same Workflow Phase. + +For example, if you enable the **Inherit Manifest** option in a **Canary Deployment** step, then it will only inherit a copy of the manifests exported from the last **Canary Deployment** step with the **Export Manifest** option enabled in the same Workflow Phase.
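The multiline-output note above can be sketched as follows. A stub heredoc stands in for `${k8sResources.manifests}` (which only resolves inside a real Workflow), and the manifest content is hypothetical:

```shell
# Sketch: treating exported manifests as multiline text. A heredoc
# stands in for ${k8sResources.manifests}, which only resolves inside
# a Harness Workflow; the manifest content below is hypothetical.
MANIFESTS=$(cat <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
EOF
)

# Quoting the variable preserves the newlines when writing it out.
echo "$MANIFESTS" > exported-manifests.yaml
grep -c '^kind:' exported-manifests.yaml   # prints 2, one per exported object
```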
+ +### Option: Delegate Selector + +If your Workflow Infrastructure Definition's Cloud Provider uses a Delegate Selector (supported in Kubernetes Cluster and AWS Cloud Providers), then the Workflow uses the selected Delegate for all of its steps. + +In these cases, you shouldn't add a Delegate Selector to any step in the Workflow. The Workflow is already using a Selector via its Infrastructure Definition's Cloud Provider. + +If your Workflow Infrastructure Definition's Cloud Provider isn't using a Delegate Selector, and you want this Workflow step to use a specific Delegate, do the following: + +In **Delegate Selector**, select the Selector for the Delegate(s) you want to use. You add Selectors to Delegates to make sure that they're used to execute the command. For more information, see [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors). + +Harness will use Delegates matching the Selectors you add. + +If you use one Selector, Harness will use any Delegate that has that Selector. + +If you select two Selectors, a Delegate must have both Selectors to be selected. That Delegate might also have other Selectors, but it must have the two you selected. + +You can use expressions for Harness built-in variables or Account Default variables in **Delegate Selectors**. When the variable expression is resolved at deployment runtime, it must match an existing Delegate Selector. + +For example, if you have a Delegate Selector **prod** and the Workflow is using an Environment also named **prod**, the Delegate Selector can be `${env.name}`. This is very useful when you match Delegate Selectors to Application component names such as Environments, Services, etc. It's also a way to template the Delegate Selector setting. + +### Option: Skip Dry Run + +By default, Harness uses the `--dry-run` flag on the `kubectl apply` command, which prints the object that would be sent to the cluster without really sending it.
If the **Skip Dry Run** option is selected, Harness will not use the `--dry-run` flag. + +### Option: Skip Steady State Check + +If you select this, Harness will not check to see if the workload has reached steady state. + +### Option: Skip Rendering of Manifest Files + +By default, Harness uses Go templating and a values.yaml for templating manifest files. See [Use Go Templating in Kubernetes Manifests](use-go-templating-in-kubernetes-manifests.md). + +In some cases, you might not want to use Go templating because your manifests use some other formatting. + +Use the **Skip Rendering K8s manifest files** option if you want Harness to skip rendering your manifest files using Go templating. + +### Option: Override YAML Values + +You can override values in the Values YAML file you are using with this **Apply** step. + +For example, if the Apply step is deploying a Kubernetes Job by referencing the jobs.yaml in the Harness Service used by this Workflow, you can override values in the values.yaml used in the Service. + +![](./static/deploy-manifests-separately-using-apply-step-193.png) + +By adding or overriding values in the values.yaml, you can pass parameters into the Job being deployed. + +Another example is traffic splitting. Let's say your Ingress object uses a values.yaml for the `canary-weight` annotation value. Here you can see the values.yaml `canaryWeight` values referenced: + + +``` +nginx.ingress.kubernetes.io/canary-weight: {{ .Values.canaryWeight }} +``` +In the Apply step, you simply need to override this value to set the weight. For example: + +![](./static/deploy-manifests-separately-using-apply-step-194.png) + +You can override values.yaml values using inline or remote values. + +#### Inline Override + +Enable **Override YAML Values**, and then click **Inline**. + +In **Values YAML**, enter the YAML label and value you want to use. 
For example: + + +``` +replicas: 2 +``` +You can use [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) in the value. For example: + + +``` +replicas: ${workflow.variables.replicas} +``` +#### Remote Override + +Enable **Override YAML Values**, and then click **Remote**. + +In **Git Connector**, select the Harness Git Connector that connects Harness with your Git provider. See [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). + +In **Repo Name**, enter the name of the repo in your Git account. + +Select a branch or commit Id. + +In **Branch Name**, enter the repo branch to use. In **Commit ID**, enter the commit Id to use. + +In **File Path**, enter the path to the values file that contains the values you want to use. + +Here's an example: + +![](./static/deploy-manifests-separately-using-apply-step-195.png) + +You can use [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) in the value. For example: + + +``` +replicas: ${workflow.variables.replicas} +``` +### Apply Step Examples + +The Apply Step is primarily used for deploying Job controllers, but it can be used for other resources. Typically, when you want to deploy multiple workloads (Deployment, StatefulSet, and DaemonSet) you will use separate Workflows for each. + +Deploying a resource out of sync with the main resource deployment in a Workflow can be useful if a specific resource requires some external service processing that is orchestrated around your main rollout, such as database migration. + +One reason why a Job controller object is a good use of the Apply step is that it represents a finite task that runs to completion rather than managing an ongoing desired state. You can run a Job to perform work outside of the primary object deployment, such as large computation and batch-oriented tasks.
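For instance, a migration Job used this way might look like the following sketch (all names and the image are hypothetical); the skip comment on the first line keeps it out of the main rollout so only the Apply step deploys it:

```yaml
# harness.io/skip-file-for-deploy
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration   # hypothetical name
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: example/db-migrate:1.0   # hypothetical image
```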
+ +In another example, let's say you have two services, where serviceA calls serviceB to populate a product page. The main Workflow rollout deploys serviceB successfully and then the Apply step deploys serviceA next, ensuring serviceA only calls serviceB after serviceB is deployed successfully. + +Another example of the use of the Apply step is service mesh traffic shifting. Your main Workflow rollout can deploy your services and then an Apply step can apply the resource that modifies the service mesh for the deployed services (for example, in an Istio-enabled cluster, `VirtualService`). + +### Notes + +* The **Apply** step applies to Service Manifests that are local or added remotely using the **Kubernetes Resource Specs in YAML format** option. See [Link Resource Files or Helm Charts in Git Repos](link-resource-files-or-helm-charts-in-git-repos.md). +* The Apply step does not version ConfigMap and Secret objects. ConfigMap and Secret objects are overwritten on each deployment. This is the same as when ConfigMap and Secret objects are marked as unversioned in typical rollouts (`harness.io/skip-versioning: "true"`). See [Kubernetes Versioning and Annotations](https://docs.harness.io/article/ttn8acijrz-versioning-and-annotations). +* You can use Harness variables in your manifests and values.yaml files. For example, Harness [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). +* In some cases, you might want to deploy a Service but skip its versioning. By default, release history is stored in the Kubernetes cluster in a ConfigMap. This ConfigMap is essential for release tracking, versioning, and rollback. If you want Harness to skip versioning for the Service, use the **Skip Versioning for Service** option in **Remote Manifests**. See [Option: Skip Versioning for Service](link-resource-files-or-helm-charts-in-git-repos.md#option-skip-versioning-for-service).
+ +### Next Steps + +* [Delete Kubernetes Resources](delete-kubernetes-resources.md) +* [Ignore a Manifest File During Deployment](ignore-a-manifest-file-during-deployment.md) + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/ignore-a-manifest-file-during-deployment.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/ignore-a-manifest-file-during-deployment.md new file mode 100644 index 00000000000..ecd7178fb23 --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/ignore-a-manifest-file-during-deployment.md @@ -0,0 +1,61 @@ +--- +title: Ignore a Manifest File During Deployment +description: Ignore manifests and then apply them separately using the Harness Apply step. +sidebar_position: 170 +helpdocs_topic_id: vv25jkq4d7 +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You might have manifest files for resources that you do not want to deploy as part of the main deployment. + +Instead, you can tell Harness to ignore these files and then apply them separately using the Harness Apply step. + +Or you can simply ignore them until you wish to deploy them as part of the main deployment. + + +### Before You Begin + +* [Define Kubernetes Manifests](define-kubernetes-manifests.md) +* [Kubernetes Deployments Overview](../concepts-cd/deployment-types/kubernetes-overview.md) +* [Kubernetes Versioning and Annotations](https://docs.harness.io/article/ttn8acijrz-versioning-and-annotations) + +### Visual Summary + +The following image shows how you can ignore a Jobs manifest and then apply it separately using the Apply step. + +![](./static/ignore-a-manifest-file-during-deployment-162.png) + +### Step 1: Ignore a Manifest + +To have a Workflow ignore a resource file in a Service **Manifests** section, you add the comment `# harness.io/skip-file-for-deploy` to the **top** of the file.
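For example, a manifest ignored this way might begin as follows (the resource itself is illustrative; what matters is the comment on the first line):

```yaml
# harness.io/skip-file-for-deploy
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config   # illustrative name
data:
  LOG_LEVEL: info
```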
+ +For more information on `harness.io/skip-file-for-deploy`, see [Kubernetes Versioning and Annotations](https://docs.harness.io/article/ttn8acijrz-versioning-and-annotations). For example, here is a ConfigMap file using the comment: + +![](./static/ignore-a-manifest-file-during-deployment-163.png) + +Now, when this Service is deployed by a Workflow, this ConfigMap resource will not be applied. + +The comment `# harness.io/skip-file-for-deploy` must be at the **top** of the file. If it is on the second line it will not work and the resource will be deployed as part of the main Workflow rollout. + +### Option 1: Apply Ignored Resource + +The Workflow Apply step will explicitly apply any resource in a Service **Manifests** section. You must provide the path and name of the file in **Apply**, and Harness will deploy the resource. + +For details on the Apply step, see [Deploy Manifests Separately using Apply Step](deploy-manifests-separately-using-apply-step.md). + +For example, the following image shows a Jobs resource in a Service **Manifests** section that uses the ignore comment `# harness.io/skip-file-for-deploy` so that the Workflow does not apply it as part of its main **Deploy** steps, and the **Apply** step that specifies the same Jobs resource: + +![](./static/ignore-a-manifest-file-during-deployment-164.png) + +The **File paths** field in the Apply step must include the folder name and the file name. In the above example, the folder **templates** is included with the file name **jobs.yaml**: `templates/jobs.yaml`. + +You can include multiple resource files in the Apply step **File paths** field by separating them with commas, for example: `templates/jobs.yaml, templates/statefulSet.yaml`. + +If you apply the ignore comment `# harness.io/skip-file-for-deploy` to a resource but do not use the resource in an Apply step, the resource is never deployed.
+ +### Next Steps + +* [Delete Kubernetes Resources](delete-kubernetes-resources.md) + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/kubernetes-deployments-overview.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/kubernetes-deployments-overview.md new file mode 100644 index 00000000000..99934345dc9 --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/kubernetes-deployments-overview.md @@ -0,0 +1,32 @@ +--- +title: Kubernetes How-tos +description: List of common Kubernetes How-tos. +sidebar_position: 1 +helpdocs_topic_id: pc6qglyp5h +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/category/qfj6m1k2c4). + +The following How-tos guide you through some common Kubernetes tasks. + +* [Connect to Your Target Kubernetes Platform](connect-to-your-target-kubernetes-platform.md) +* [Add Container Images for Kubernetes Deployments](add-container-images-for-kubernetes-deployments.md) +* [Define Kubernetes Manifests](define-kubernetes-manifests.md) +* [Use a Helm Repository with Kubernetes](use-a-helm-repository-with-kubernetes.md) +* [Link Resource Files or Helm Charts in Git Repos](link-resource-files-or-helm-charts-in-git-repos.md) +* [Define Your Kubernetes Target Infrastructure](define-your-kubernetes-target-infrastructure.md) +* [Create a Kubernetes Canary Deployment](create-a-kubernetes-canary-deployment.md) +* [Create a Kubernetes Rolling Deployment](create-a-kubernetes-rolling-deployment.md) +* [Create a Kubernetes Blue/Green Deployment](create-a-kubernetes-blue-green-deployment.md) + +For more How-tos, see [Kubernetes How-tos](https://docs.harness.io/category/kubernetes-deployments). 
+ +To see a summary of the changes in Harness Kubernetes Deployment Version 2, see [Harness Kubernetes V2 Changes](https://docs.harness.io/article/g3bzgg4rsw-summary-of-changes-in-kubernetes-deployments-version-2). + +### Review: What Workloads Can I Deploy? + +See [What Can I Deploy in Kubernetes?](https://docs.harness.io/article/6ujb3c70fh). + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/link-resource-files-or-helm-charts-in-git-repos.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/link-resource-files-or-helm-charts-in-git-repos.md new file mode 100644 index 00000000000..89106c83eca --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/link-resource-files-or-helm-charts-in-git-repos.md @@ -0,0 +1,151 @@ +--- +title: Link Resource Files or Helm Charts in Git Repos +description: Use your Git repo for configuration files and Helm charts. +sidebar_position: 100 +helpdocs_topic_id: yjkkwi56hl +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/category/qfj6m1k2c4). + +You can use your Git repo for the configuration files in **Manifests** and Harness will use them at runtime. You have two options for remote files: + +* **Standard Kubernetes Resources in YAML** - These files are simply the YAML manifest files stored on a remote Git repo. +* **Helm Chart from Source Repository** - These are Helm chart files stored in standard Helm syntax in YAML on a remote Git repo. + +For steps on other options, see: + +* All options — See [Define Kubernetes Manifests](define-kubernetes-manifests.md). +* **Helm Chart from Helm Repository** — See [Use a Helm Repository with Kubernetes](use-a-helm-repository-with-kubernetes.md). 
+* **Kustomization Configuration** — See [Use Kustomize for Kubernetes Deployments](use-kustomize-for-kubernetes-deployments.md). +* **OpenShift Template** — See [Using OpenShift with Harness Kubernetes](using-open-shift-with-harness-kubernetes.md). + + +### Before You Begin + +* [Define Kubernetes Manifests](define-kubernetes-manifests.md) +* [Use a Helm Repository with Kubernetes](use-a-helm-repository-with-kubernetes.md) + +You can also use a Git repo for your entire Harness Application, and sync it unidirectionally or bidirectionally. For more information, see  [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code). There is no conflict between the Git repo used for remote **Manifests** files and the Git repo used for the entire Harness Application. + +### Step 1: Add a Source Repo Provider + +To use a remote Git repo for your resource files or Helm charts, you must set up a Harness Source Repo Provider to connect to your repo. To set up the connection, see one of the following: + +* [Add a GitHub Repo](https://docs.harness.io/article/sip9rr6ogy-add-github-repo) +* [Add a GitLab Repo](https://docs.harness.io/article/od1u7t4vgq-add-a-gitlab-repo) +* [Add a Bitbucket Repo](https://docs.harness.io/article/etl0yejzsm-add-bitbucket-repo) + +### Step 2: Link Remote Manifests + +In your Harness Kubernetes Service, in **Manifests**, click the vertical ellipsis and click **Link Remote Manifests**. + +![](./static/link-resource-files-or-helm-charts-in-git-repos-203.png) + +The **Remote Manifests** dialog appears. + +![](./static/link-resource-files-or-helm-charts-in-git-repos-204.png) + +### Step 3: Select a Manifest Format + +In **Manifest Format**, select one of the following options: + +* **Kubernetes Resource Specs in YAML format** — Use any manifest and values files from a Git repo. +* **Helm Chart from Source Repository** — Use a Helm chart stored in a Git repo. 
+ +Helm Dependencies are supported with charts in [Helm Chart from Helm Repository](use-a-helm-repository-with-kubernetes.md) (see below), not with **Helm Chart from Source Repository**. For steps on the remaining options: + +* **Helm Chart from Helm Repository** — See [Use a Helm Repository with Kubernetes](use-a-helm-repository-with-kubernetes.md). +* **Kustomization Configuration** — See [Use Kustomize for Kubernetes Deployments](use-kustomize-for-kubernetes-deployments.md). +* **OpenShift Template** — See [Using OpenShift with Harness Kubernetes](using-open-shift-with-harness-kubernetes.md). + +### Step 4: Configure the Repo Settings + +In **Source Repository**, select a Source Repo Provider for the Git repo you added to your Harness account. For more information, see [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). + +In **Commit ID**, select **Latest from Branch** or **Specific Commit ID**. + +For Canary deployments: to ensure that the identical manifest is deployed in both the Canary and Primary phases, use **Specific Commit ID**. If you use **Latest from Branch**, when Harness fetches the manifest for each phase there is the possibility that the manifest could change between fetches for the Canary and Primary phases. In **Branch/Commit ID** (required), enter the branch or commit ID for the remote repo. + +In **File/Folder path(s)**, enter the repo file and folder path. + +If you want to use Go templating in your remote repo for your configuration files in **Manifests**, ensure that the **values.yaml** file is at the root of the folder path you select. When the remote manifests are added, the **Manifests** section displays the connection details. + +### Option: Skip Versioning for Service + +By default, Harness versions ConfigMaps and Secrets deployed into Kubernetes clusters. In some cases, you might want to skip versioning.
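For reference, marking a single object as unversioned uses the `harness.io/skip-versioning` annotation mentioned in the Apply step notes; for example (the resource and its data are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials     # illustrative name
  annotations:
    harness.io/skip-versioning: "true"   # Harness will not version this object
type: Opaque
stringData:
  username: example         # illustrative value
```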
+ +Typically, to skip versioning in your deployments, you add the annotation `harness.io/skip-versioning: "true"` to your manifests. See [Kubernetes Versioning and Annotations](https://docs.harness.io/article/ttn8acijrz-versioning-and-annotations). + +In some cases, such as when using public manifests or Helm charts, you cannot add the annotation. Or you might have 100 manifests and you only want to skip versioning for 50 of them. Adding the annotation to 50 manifests is time-consuming. + +Instead, enable the **Skip Versioning for Service** option in **Remote Manifests**. + +![](./static/link-resource-files-or-helm-charts-in-git-repos-205.png) + +When you enable **Skip Versioning for Service**, Harness will not perform versioning of ConfigMaps and Secrets for the Service. + +If you have enabled **Skip Versioning for Service** for a few deployments and then disable it, Harness will start versioning ConfigMaps and Secrets. + +### Option: Helm Command Flags + +You can extend the Helm commands that Harness runs when deploying your Helm chart. + +Use **Enable Command Flags** to have Harness run specific Helm commands and their options as part of preprocessing. All the commands you select are run before `helm install/upgrade`. + +Click **Enable Command Flags**, and then select commands from the **Command Flag Type** dropdown. + +Next, in **Input**, add any options for the command. + +The `--debug` option is not supported. For Kubernetes deployments using Helm charts, the following commands are supported (more might be added): + +* TEMPLATE: `helm template` to render the Helm template files. +* VERSION: `helm version` to validate Helm on the Delegate. +* FETCH: `helm fetch` (Helm 2) or `helm pull` (Helm 3) to get the Helm chart from its source. + +You will see the outputs for the commands you select in the Harness deployment logs. The output will be part of pre-processing and appear before `helm install/upgrade`.
+ +If you use Helm commands in the Harness Service and in a Workflow deploying that Service, the Helm commands in the Harness Service override the commands in the Workflow. + +#### Harness Variable Expressions are Supported + +You can use [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables) in any of the command options settings. For example, [Service Config variables](https://docs.harness.io/article/q78p7rpx9u-add-service-level-config-variables) and [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). + +### How Does Harness Use the Remote Files? + +At deployment runtime, the Harness Delegate pulls the remote configuration files from the repo and then uses them to create resources via the Kubernetes API. It does not matter if the Delegate runs in the same Kubernetes cluster as the deployed pods. The Kubernetes API is used by the Delegate regardless of the cluster networking topology. + +When you deploy a Workflow or Pipeline that uses this Service, you can see the Delegate fetch the **Manifests** files from the repo in the **Fetch Files** section of the log in Harness **Deployments**: + + +``` +Fetching files from git + +Git connector Url: https://github.com/michaelcretzman/harness-example # remote manifest files + +Branch: example # Git repo branch + +Fetching NGINX/values.yaml # values.yaml file in repo + +Successfully fetched NGINX/values.yaml + +Fetching manifest files at path: NGINX/ # manifest files in repo + +Successfully fetched following manifest [.yaml] files: + +- templates/spec.yaml # manifest file with ConfigMap and Deployment objects + +Done. +``` +If you experience errors fetching the remote files, it is most likely because the wrong branch has been configured in **Branch/Commit ID**. + +To return to local configuration files, click the vertical ellipsis and select **Use Inline Manifests**. + +Your remote files are not copied locally.
You are simply presented with the local configuration files you used last. + +### Next Steps + +* [Use a Helm Repository with Kubernetes](use-a-helm-repository-with-kubernetes.md) + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/override-harness-kubernetes-service-settings.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/override-harness-kubernetes-service-settings.md new file mode 100644 index 00000000000..63dd0b73b55 --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/override-harness-kubernetes-service-settings.md @@ -0,0 +1,135 @@ +--- +title: Override Harness Kubernetes Service Settings +description: Overwrite Harness Kubernetes Service Config Variables, Config Files, Helm charts, and values.yaml settings. +sidebar_position: 140 +helpdocs_topic_id: ycacqs7tlx +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/t57uzu1i41). Harness Kubernetes Service **Config Variables**, **Config Files**, and **values.yaml** settings can be overwritten by Harness Environments **Service Configuration Overrides**. + +This enables you to have a Service keep its settings but change them when the Service is deployed to the Environment. + +For example, you might have a single Service but an Environment for QA and an Environment for Production, and you want to overwrite the `namespace` setting in the Service values.yaml depending on the Environment. + +You can also overwrite Service variables at the Phase level of a multi-Phase Workflow.
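As a sketch of the namespace example, the Service values.yaml and the per-Environment overrides might look like this (the values are illustrative):

```yaml
# Service values.yaml (default)
namespace: default
---
# Values YAML override in the QA Environment
namespace: qa
---
# Values YAML override in the Production Environment
namespace: prod
```

At deployment, the override for the target Environment replaces the Service-level `namespace` value.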
+ +### Before You Begin + +* [Define Kubernetes Manifests](define-kubernetes-manifests.md) +* [Using Harness Config Variables in Manifests](using-harness-config-variables-in-manifests.md) +* [Using Harness Config Files in Manifests](using-harness-config-files-in-manifests.md) + +### Step 1: Select the Service to Override + +1. In the Harness Environment, in the **Service Configuration Overrides** section, click **Add Configuration Overrides**. The **Service Configuration Override** settings appear. + + ![](./static/override-harness-kubernetes-service-settings-22.png) + +2. In **Service**, select the Service you are using for your Kubernetes deployment. + +3. Select one of the **Override Type** options. + +### Option: Variable Override + +1. In **Override Type**, select **Variable Override**. The **Variable Override** options appear. +2. In **Configuration Variable**, select a variable configured in the Service's **Config Variables** settings. +3. In **Type**, select **Text** or **Encrypted Text**. +4. In **Override Value**, enter the value to overwrite the variable value in the Service. If you selected **Encrypted Text** in **Type**, you can select an Encrypted Text value defined in [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management). + +### Option: File Override + +1. In **Override Type**, select **File Override**. +2. In **File Name**, select a file configured in the Service's **Config Files** settings. +3. In **File**, select the file to overwrite the Service's **Config Files** file. + +### Option: Values YAML + +1. In **Override Type**, select **Values YAML**. Click **Local** or **Remote**. +2. **Local** - Enter the values.yaml variables and values just as you would in a Service **Manifests** values.yaml. Ensure the name of the variable you want to overwrite is identical. +3. **Remote** - See [Override Remote Values YAML Files](override-values-yaml-files.md). + +### Option: Helm Chart for Specific Service + +1. 
In **Override Type**, select **Helm Chart**. +2. In **Helm Repository**, select a Helm Chart Repo that you have set up as a [Helm Repository Artifact Server](https://docs.harness.io/article/0hrzb1zkog-add-helm-repository-servers). See [Use a Helm Repository with Kubernetes](use-a-helm-repository-with-kubernetes.md). + +#### Notes + +* You can use some overrides together. For example, you can use both Helm Chart Overrides and Values YAML Overrides for the same Harness Service. Harness will merge the overrides. + +### Option: Helm Chart for All Services + +You can use a specific Helm chart to override all Services deployed to this Environment. + +1. In **Service**, select **All Services**. +2. In **Override Type**, select **Helm Chart Repository**. +3. In **Helm Repository**, select the Helm Repository containing the Helm chart that you want to use to override all Services' Helm charts, and then click **Submit**. + +![](./static/override-harness-kubernetes-service-settings-23.png) + +### Example + +Here is an example of overwriting a Service values.yaml with a **Service Configuration Override**. + +In the Service values.yaml, we have a variable for `replicas`: + + +``` +replicas: 1 +``` +This is used in the manifest file like this: + + +``` +... +spec: + replicas: {{int .Values.replicas}} +... +``` +Now, in **Service Configuration Override**, you can overwrite the Service values.yaml `replicas` value using the **Local** option: + +![](./static/override-harness-kubernetes-service-settings-24.png) + +At deployment runtime to this Environment, the overwritten `replicas` value is used: + + +``` +kind: Deployment +metadata: + name: nginx-deployment + labels: + app: nginx +spec: + replicas: 3 +``` +### Option: OpenShift Template for Specific Service + +1. In **Service**, select the specific service that you want to override. +2. In **Override Type**, select **OpenShift Param**. +3. Select **Inline** or **Remote**. + 1. If you select **Inline**, then enter the value inline. 
If you select **Remote**, perform the following steps.![](./static/override-harness-kubernetes-service-settings-25.png) +4. Select the **Git Connector**. +5. Select either **Use latest commit from branch** or **Use specific commit ID**. +- If you select **Use latest commit from branch**, enter the branch name where the file is located, such as `master`, `dev`, or `myAppName`. Do not provide the full URL to the branch. +- If you select **Use specific commit ID**, enter the commit ID. +6. In **File path**, enter the full path to the file in the repo. +7. Click **Submit**. + +### Option: OpenShift Template for All Services + +1. In **Service**, select **All Services**. +2. In **Override Type**, select **OpenShift Param**. +3. Select **Inline** or **Remote**. + 1. If you select **Inline**, then enter the value inline. If you select **Remote**, perform the following steps.![](./static/override-harness-kubernetes-service-settings-26.png) +4. Select the **Git Connector**. +5. Select either **Use latest commit from branch** or **Use specific commit ID**. +- If you select **Use latest commit from branch**, enter the branch name where the file is located, such as `master`, `dev`, or `myAppName`. Do not provide the full URL to the branch. +- If you select **Use specific commit ID**, enter the commit ID. +6. In **File path**, enter the full path to the file in the repo. +7. Click **Submit**. + +### Next Steps + +* [Kubernetes Deployments Overview](../concepts-cd/deployment-types/kubernetes-overview.md) + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/override-values-yaml-files.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/override-values-yaml-files.md new file mode 100644 index 00000000000..99d317ee0c0 --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/override-values-yaml-files.md @@ -0,0 +1,180 @@ +--- +title: Override Values YAML Files +description: Override a remote values.yaml file. 
sidebar_position: 130 +helpdocs_topic_id: p453sikbqt +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/t57uzu1i41). You can override the inline or remote values.yaml file(s) used in a Harness Kubernetes Service. + +You can override values.yaml at the Harness Service and Environment levels, and also use Workflow variables to replace values.yaml file names or values at deployment runtime. + +You can override using multiple values.yaml files. This is a common scenario where: + +* You use one values.yaml file for defaults and global configuration. +* You use a second values.yaml file for the specific configuration of the service you are deploying. +* You use one or more values.yaml files for deployment environments (QA, PROD, etc.). + +This topic explains how to override using single and multiple values.yaml files at the Harness Service and Environment levels, including the use of Workflow variables, and explains the override hierarchy so that you understand how values.yaml files override each other. + + +### Before You Begin + +* [Link Resource Files or Helm Charts in Git Repos](link-resource-files-or-helm-charts-in-git-repos.md) +* [Upgrade to Helm 3 Charts in Kubernetes Services](upgrade-to-helm-3-charts-in-kubernetes-services.md) + +### Review: values.yaml Support + +Harness Kubernetes Services use values.yaml files in the following scenarios: + +1. **Inline**, as part of the Harness native manifest editor. + + ![](./static/override-values-yaml-files-140.png) + +2. **Remotely**, as part of a Kubernetes spec in a Git repo or a Helm chart in a Helm repository or Git repo: + + ![](./static/override-values-yaml-files-141.png) + +You can override values.yaml files from any of these sources in the Harness Service or in any Harness Environment in the same Harness Application.
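For instance, the common multi-file scenario above might use files like these (the contents are illustrative):

```yaml
# values.yaml — global defaults
replicas: 1
logLevel: info
---
# service-values.yaml — service-specific configuration
image: example/app:1.0
---
# qa-values.yaml — environment-specific overrides (QA)
replicas: 2
logLevel: debug
```

When these are merged, later files win key by key, so a QA deployment would use 2 replicas and debug logging while keeping the `image` value from the service file.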
+ +![](./static/override-values-yaml-files-142.png) + +Environment-level values.yaml overrides are set up in the Environments of the same Application: + +![](./static/override-values-yaml-files-143.png) + +Let's look at an advanced scenario where a user needs to override values.yaml files at multiple levels with a specific hierarchy, represented as highest to lowest here: + +1. Environment-level overrides for a specific Service (Service override specific Service 2) +2. Environment-level overrides for a specific Service (Service override specific Service 1) +3. Environment-level overrides (global) (Environment override all Services) +4. Service-level overrides (Service override 2) +5. Service-level override (common values.yaml—Service override 1) +6. Default values.yaml in Helm Charts (default) + +This scenario can be implemented with a combination of Service-level and Environment-level overrides, as described in this topic. + +#### How are Multiple values.yaml Files Managed by Harness? + +Harness performs a diff and merges all values.yaml files, with the last values.yaml applied overwriting the earlier values.yaml. + +The overwriting is granular. So if your first values.yaml file has a `key:value` that no later values.yaml file has, that `key:value` is not overwritten. + +### Step 1: Pick a Store Type + +1. In the **Values YAML Override** section of a Harness Kubernetes Service, click **Add Values**. +2. In **Store Type**, select **Local** or **Remote**. + + ![](./static/override-values-yaml-files-144.png) + +### Option: Use an Inline Override + +1. Enter the YAML you want to use to override the Values YAML file (values.yaml). + + ![](./static/override-values-yaml-files-145.png) + +2. Enter any new values and click **Submit**. The override is added to **Values YAML Override**. + +### Option: Use a Single Remote Override + +1. 
In **Source Repository**, select the Git repo you added as a [Source Repo Provider](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). +2. For **Commit ID**, select either **Latest from Branch** and enter the branch name, or **Specific Commit ID** and enter the commit ID. +3. In **File Path(s)**, enter the path to the values.yaml file in the repo, including the repo name, like **helm/values.yaml**. This is a mandatory setting. You cannot leave **File Path(s)** empty. + +Values in Services can also be overwritten in Harness Environments. For more information, see [Override Harness Kubernetes Service Settings](override-harness-kubernetes-service-settings.md). + +### Option: Use Multiple Override Files at the Service Level + +You can specify multiple values.yaml files in a remote repo in your override. + +Let's look at an example: + +1. In **Source Repository**, select the Git repo you added as a [Source Repo Provider](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). +2. For **Commit ID**, select either **Latest from Branch** and enter the branch name, or **Specific Commit ID** and enter the commit ID. +3. In **File Path(s)**, enter the paths to the values.yaml files in the repo, including the repo name, like **k8s/values.yaml**. This is a mandatory setting. You cannot leave **File Path(s)** empty. + +Multiple files can be applied, separated by commas, with the later ones taking priority. + +For example, let's say you wanted to include the following files with their override hierarchy represented from highest to lowest: + +1. k8s/multiple-overrides/values-override/service-**override3**.yaml (highest) +2. k8s/multiple-overrides/values-override/service-**override2**.yaml +3. 
k8s/multiple-overrides/values-override/service-**override1**.yaml (lowest) + +In **File Path(s)**, you would enter the following: + +`k8s/multiple-overrides/values-override/service-override1.yaml, k8s/multiple-overrides/values-override/service-override2.yaml, k8s/multiple-overrides/values-override/service-override3.yaml` + +### Option: Use Values YAML from inside the Chart + +Currently, this feature is behind the feature flag `OVERRIDE_VALUES_YAML_FROM_HELM_CHART`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. If you are using the Helm Chart from Helm Repository option in **Manifests**, you can override the chart in **Manifests** using one or more values YAML files inside the Helm chart. + +In **Configuration**, in **Values YAML Override**, click the edit icon. + +In **Store Type**, select **From Helm Repository**. + +In **File Path(s)**, enter the file path to the override values YAML file. + +Multiple files can be used. When you enter the file paths, separate the paths using commas. + +The later paths are given higher priority. + +![](./static/override-values-yaml-files-146.png) + +### Option: Override Files at the Environment Level + +You can override the values.yaml settings of a Service in an Environment's **Service Configuration Overrides** settings. + +1. In an Environment, in **Service Configuration Overrides**, click **Add Configuration Overrides**. +2. In **Service(s)**, click **All Services** or the name of a specific **Service**. +3. In **Override Type**, select **Values YAML**.![](./static/override-values-yaml-files-147.png) +4. Select **Inline** or **Remote**. For **Inline**, enter the YAML you want to use to override the Service-level values.yaml file(s). +5. For **Remote**, in **Git Connector**, select the Git repo you added as a [Source Repo Provider](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). +6. 
Select either **Use latest commit from branch** and enter the branch name, or **Use specific commit ID** and enter the commit ID. +7. In **Branch Name**, enter the branch name where the file(s) is located, such as **master**, **dev**, etc. Do not enter the full URL to the branch. +8. In **File Path(s)**, enter the path(s) to the values.yaml file(s) in the repo, including the repo name, like **k8s/values.yaml**. Multiple files can be applied, separated by commas, with the later ones taking priority. +This is a mandatory setting. You cannot leave **File Path(s)** empty. + +When you're done, the override will look something like this: + +![](./static/override-values-yaml-files-148.png) + +Click **Submit**. The override is listed in the **Service Configuration Overrides** section: + +![](./static/override-values-yaml-files-149.png) + +### Option: Use Variable Expressions in Override File Settings + +You can use [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables), such as [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template), in the names and values of the values.yaml files you use as overrides. + +For example, let's say you have listed three values.yaml files in the Service Configuration **Values YAML Override**. + +In **File Path(s)**, you entered the following: + +`k8s/multiple-overrides/values-override/service-override1.yaml, k8s/multiple-overrides/values-override/service-override2.yaml, k8s/multiple-overrides/values-override/service-override3.yaml` + +But now you want to provide the path for the second file (k8s/multiple-overrides/values-override/service-override2.yaml) at deployment runtime. + +In the Workflow that deploys the Service, you simply create a Workflow variable named **filePath**: + +![](./static/override-values-yaml-files-150.png) + +This Workflow variable has a default value, but you can simply leave it blank. 
+
+Now, in your Service Configuration **Values YAML Override**, in **File Path(s)**, you can replace the second file path with the Workflow variable expression `${workflow.variables.filePath}`:
+
+![](./static/override-values-yaml-files-151.png)
+
+Then, when you deploy the Workflow, you can provide the file path for that values.yaml override:
+
+![](./static/override-values-yaml-files-152.png)
+
+### Next Steps
+
+* [Define Kubernetes Manifests](define-kubernetes-manifests.md)
+* [Pass Variables between Workflows](https://docs.harness.io/article/gkmgrz9shh-how-to-pass-variables-between-workflows)
+* [Passing Variables into Workflows and Pipelines from Triggers](https://docs.harness.io/article/revc37vl0f-passing-variable-into-workflows)
+
diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/override-variables-per-infrastructure-definition.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/override-variables-per-infrastructure-definition.md
new file mode 100644
index 00000000000..802436ab91b
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/override-variables-per-infrastructure-definition.md
@@ -0,0 +1,201 @@
+---
+title: Override Variables at the Infrastructure Definition Level
+description: This topic describes how to override specific sets of variables for Kubernetes at the Infrastructure Definition level.
+sidebar_position: 150
+helpdocs_topic_id: cc59hfou9c
+helpdocs_category_id: n03qfofd5w
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/t57uzu1i41).
+
+This topic describes how to override specific sets of variables for Kubernetes at the Infrastructure Definition level. You can override the `values.yaml` in your Service at the Infrastructure Definition level, and different Services can have different overrides in the same namespace. 
+ +### Before You Begin + +* Your target Environment must have multiple Infrastructure Definitions. +* You must have a Service that needs to be overridden at the Infrastructure Definition level. +* The Application must contain a Service, a Workflow, and an Environment. +* Review the [Override Harness Kubernetes Service Settings](override-harness-kubernetes-service-settings.md) topic to understand Harness variable override and its hierarchy. + +### Step 1: Configure the Service + +Configure and deploy a Harness Kubernetes Service to an Environment that has multiple Infrastructure Definitions. + +The following are the sample service Manifests:  + +**Deployment.yaml** + + +``` +apiVersion: v1 +kind: Namespace +metadata: + name: {{.Values.namespace}} +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{.Values.name}}-{{.Values.track}} + labels: + app: {{.Values.name}} + track: {{.Values.track}} +data: + APP_ENV1: {{.Values.appEnv1}} + APP_ENV2: {{.Values.appEnv2}} +--- +apiVersion: apps/v1beta1 +kind: Deployment +metadata: + name: {{.Values.name}}-{{.Values.track}} + labels: + app: {{.Values.name}} + track: {{.Values.track}} + version: {{.Values.version}} +spec: + replicas: {{.Values.replicas}} + selector: + matchLabels: + app: {{.Values.name}} + track: {{.Values.track}} + template: + metadata: + labels: + app: {{.Values.name}} + track: {{.Values.track}} + version: {{.Values.version}} + spec: + containers: + - name: {{.Values.name}} + image: {{.Values.image}} + imagePullPolicy: Always + resources: + requests: + cpu: 100m + memory: 50Mi + ports: + - name: http + containerPort: 8080 + envFrom: + - configMapRef: + name: {{.Values.name}}-{{.Values.track}} +``` +**Service.yaml** + + +``` +apiVersion: v1 +kind: Service +metadata: + name: {{.Values.name}} + labels: + app: {{.Values.name}} +spec: + type: ClusterIP + ports: + - name: http + port: 9080 + protocol: TCP + targetPort: http + selector: + app: {{.Values.name}} +``` +Perform the following steps to configure the 
Service:
+
+1. In **Service**, add `s3bucketName` and `dnsServer` configuration variables.
+2. Set the configuration variable `appEnv1` to `aaa`.
+3. Set the configuration variable `appEnv2` to `bbb`. ![](./static/override-variables-per-infrastructure-definition-130.png)
+4. In `values.yaml`, reference the Harness variables as:
+
+`appEnv1: ${serviceVariable.appEnv1}`
+`appEnv2: ${serviceVariable.appEnv2}`
+
+**Values.yaml**
+
+
+```
+namespace: ${infra.kubernetes.namespace}
+apiUrl: http://localhost:8080
+replicas: 1
+
+name: infra-override
+image: ${artifact.metadata.image}
+version: ${artifact.metadata.tag}
+track: primary
+endpoint: rpc
+
+appEnv1: ${serviceVariable.appEnv1}
+appEnv2: ${serviceVariable.appEnv2}
+```
+
+### Step 2: Add the Environment Overrides
+
+Environment-level overrides are applied at the Infrastructure Definition level. You can use the Infrastructure Definition's name in the variable name to identify which Infrastructure Definition an override value belongs to.
+
+To set this up, you first add the Environment override variables. Then you enable specific values to be passed in when the Infrastructure Definition mapping is selected.
+
+The Environment variables are the access points for the override variables to be assigned. Ensure that you have multiple Infrastructure Definitions mapped to your Environment. Once the mappings are configured, add Service Configuration Override variables to the Environment.
+
+![](./static/override-variables-per-infrastructure-definition-131.png)
+
+1. Provide an override variable for your Environment.
+You can associate the Infrastructure Definition name with the variable. This helps to identify the overriding variable applied in your Environment.
+2. Configure the `values.yaml` file override.
+3. Create a new key-value pair where the key is the variable value that is overridden at the Infrastructure Definition level, and the value is a Workflow variable (set up later) called `${override.keyNameHere}`. 
+
+When you are done, it will look something like this: ![](./static/override-variables-per-infrastructure-definition-132.png)
+
+### Step 3: Configure the Workflow
+
+You need to configure a shell script to handle the assignment of these variables in the Workflow. The shell script assigns the infrastructure variables as the Environment variables configured in the previous step.
+
+1. In the Workflow, write a script to assign variables based on the infra name.
+
+   Here is a sample shell script:
+
+   ```
+   echo
+   echo Using infrastructure definition [${infra.name}]
+   echo
+
+   appEnv1=${serviceVariable.appEnv1}
+   appEnv2=${serviceVariable.appEnv2}
+
+   if [[ "${infra.name}" == infra2 ]]; then
+
+   appEnv2=${serviceVariable.infra2_appEnv2}
+
+   elif [[ "${infra.name}" == infra3 ]]; then
+
+   appEnv1=${serviceVariable.infra3_appEnv1}
+   appEnv2=${serviceVariable.infra3_appEnv2}
+
+   fi
+
+   echo Setting appEnv1 to [$appEnv1]
+   echo Setting appEnv2 to [$appEnv2]
+   echo
+   ```
+
+2. Export the variables into the context. These variables are used in the override configured [earlier](override-variables-per-infrastructure-definition.md#step-1-configure-the-service). The `${override.appEnv1}` expression references a value based on this shell script.
+3. In **Publish Variable Name**, enter **override**, which is referenced in the `values.yaml` configuration override.
+
+   When you are done, it will look something like this: ![](./static/override-variables-per-infrastructure-definition-133.png)
+
+4. Add the shell script to the **Deploy** steps before the Rollout Deployment.
+
+   ![](./static/override-variables-per-infrastructure-definition-134.png)
+
+5. Deploy the Workflow. Based on the Infrastructure Definition, certain variables are overridden. For InfraDef1, the values were assigned based on the Service configuration variables provided in the Environment. InfraDef1 did not override the Environment-level values.
+
+   ![](./static/override-variables-per-infrastructure-definition-135.png)
+
+6. 
Run this deployment again in InfraDef2. Now the Environment level value is taken for `appEnv1`, but `appEnv2` is overridden with the value specific to InfraDef2. + + ![](./static/override-variables-per-infrastructure-definition-136.png) + +7. Deploy the third Infrastructure Definition. This time both the variables are overridden with values specific to InfraDef3. + +### Next Steps + +Check out the community article on publishing variable outputs: [Publish Variables](https://community.harness.io/t/publish-variables/227) + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/provision-kubernetes-infrastructures.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/provision-kubernetes-infrastructures.md new file mode 100644 index 00000000000..7a1551cdf0e --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/provision-kubernetes-infrastructures.md @@ -0,0 +1,171 @@ +--- +title: Provision Kubernetes Infrastructures +description: Provision the target Kubernetes infrastructure as part of a Workflow. +sidebar_position: 190 +helpdocs_topic_id: huajnezo0r +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can provision the target Kubernetes infrastructure as part of a pre-deployment step in your Workflow. When the Workflow runs, it builds your Kubernetes infrastructure first, and then deploys to the new infrastructure. + +Provisioning involves creating a Harness Infrastructure Provisioner, and then using it in the Infrastructure Definition and Workflow. + +Provisioning Kubernetes is supported with the Google Cloud Platform Cloud Provider, but not the Azure or Kubernetes Cluster Cloud Providers. 
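For orientation, the Terraform script this flow expects looks roughly like the following sketch. Everything here is a hypothetical example (the project, zone, and resource and output names are not from this topic); the important part is that the script declares output variables that you will later map to **Cluster Name** and **Namespace**:

```hcl
# Hypothetical sketch of a GKE provisioning script. All names are examples.
provider "google" {
  project = "my-example-project"
  region  = "us-central1"
}

resource "google_container_cluster" "primary" {
  name               = "harness-provisioned"
  location           = "us-central1-a"
  initial_node_count = 1
}

# Outputs like these are what the Infrastructure Definition maps,
# e.g. as ${terraform.cluster_name} and ${terraform.namespace}.
output "cluster_name" {
  value = google_container_cluster.primary.name
}

output "namespace" {
  value = "default"
}
```

The output names are entirely up to you; whatever you declare here becomes available for mapping in the steps below.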
+
+### Before You Begin
+
+* [Define Your Kubernetes Target Infrastructure](define-your-kubernetes-target-infrastructure.md)
+* Creating a Terraform Infrastructure Provisioner is covered in the topic [Terraform Provisioner](../terraform-category/terrform-provisioner.md). In this topic, we will summarize all the related steps, but focus on the Infrastructure Definition and Workflow step.
+
+### Step 1: Set Up the Delegate for Terraform
+
+1. Install the Kubernetes Delegate where it can connect to the provisioned cluster.
+The Delegate needs to be able to reach the Kubernetes master endpoint of the provisioned cluster and have the necessary credentials, such as the Kubernetes service account token.
+Follow the steps in [Connect to Your Target Kubernetes Platform](connect-to-your-target-kubernetes-platform.md).
+2. Install Terraform on the Delegate using a Delegate Profile.
+Follow the steps in [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) and [Common Delegate Profile Scripts](https://docs.harness.io/article/nxhlbmbgkj-common-delegate-profile-scripts).
+3. Tag the Delegate.
+When you add the Terraform Provision step in your Workflow, you will specify that a specific Delegate perform the operation by using its Delegate Tag.
+Follow the steps in [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation).
+
+### Step 2: Set Up the Cloud Provider
+
+Harness supports provisioning Kubernetes using Google Cloud Platform (GKE) only. The Kubernetes Cluster Cloud Provider, which connects directly to an existing cluster, cannot be used to perform provisioning.
+
+Add a Harness Cloud Provider that connects to your Google Cloud Platform account.
+
+The GCP service account requires the **Kubernetes Engine Admin** (GKE Admin) role to get the Kubernetes master username and password. Harness also requires **Storage Object Viewer** permissions. 
+
+See [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers).
+
+### Step 3: Git Repo Setup
+
+The Terraform script you use with Harness must be available in a Git repo. You connect Harness to the repo using a Harness Source Repo Provider.
+
+![](./static/provision-kubernetes-infrastructures-28.png)
+
+Set up a Harness Source Repo Provider that connects to the Git repo hosting your Terraform script. See [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers).
+
+### Step 4: Set Up a Terraform Infrastructure Provisioner
+
+You set up a Terraform Infrastructure Provisioner to identify your script repo information and script input variables.
+
+Setting up the Terraform Provisioner involves the following:
+
+1. Add your Terraform script via its Git repo so Harness can pull the script.
+2. Map the relevant Terraform output variables from the script to the required Harness fields for the deployment platform (AWS, Kubernetes, etc.).
+
+   Once the Terraform Infrastructure Provisioner is set up, it can be used in:
+
+   * Infrastructure Definitions — To identify the target cluster and namespace.
+   * Workflow Terraform Provisioner Steps — To provision the infrastructure as part of the Workflow.
+
+   Harness supports first-class Terraform Kubernetes provisioning for Google Kubernetes Engine (GKE).
+
+To set up a Terraform Infrastructure Provisioner, do the following:
+
+3. In your Harness Application, click **Infrastructure Provisioners**.
+4. Click **Add Infrastructure Provisioner**, and then click **Terraform**. The **Add Terraform Provisioner** dialog appears.
+
+   ![](./static/provision-kubernetes-infrastructures-29.png)
+
+5. In **Display Name**, enter the name for this provisioner. You will use this name to select this provisioner in Harness Environments and Workflows.
+6. Click **NEXT**. The **Script Repository** section appears. 
This is where you provide the location of your Terraform script in your Git repo.
+7. In **Script Repository**, in **Git Repository**, select the [Source Repo Provider](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers) you added for the Git repo where your script is located.
+8. In **Git Repository Branch**, enter the repo branch to use. For example, **master**. For master, you can also use a dot (`.`).
+9. In **Terraform Configuration Root Directory**, enter the folder where the script is located. Here is an example showing the Git repo on GitHub and the **Script Repository** settings:
+
+   ![](./static/provision-kubernetes-infrastructures-30.png)
+
+10. Click **NEXT**. The **Variables** section is displayed. This is where you will add the script input variables that must be given values when the script is run.
+11. In **Variables**, click **Populate Variables**. The **Populate from Example** dialog appears. Click **SUBMIT** to have the Harness Delegate use the Source Repo Provider you added to pull the variables from your script and populate the **Variables** section.
+
+   ![](./static/provision-kubernetes-infrastructures-31.png)
+   If Harness cannot pull the variables from your script, check your settings and try again. Ensure that your Source Repo Provider is working by clicking its **TEST** button.
+   Once Harness pulls in the variables from the script, it populates the **Variables** section.
+
+12. In the **Type** column for each variable, specify **Text** or **Encrypted Text**.
+
+   When you add the provisioner to a Workflow, you will have to provide text values for **Text** variables, and select Harness Encrypted Text variables for **Encrypted Text** variables. See [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management).
+
+13. Click **NEXT**. The **Backend Configuration (Remote state)** section appears. This is an optional step. 
+ + By default, Terraform uses the local backend to manage state, in a local [Terraform language](https://www.terraform.io/docs/configuration/syntax.html) file named **terraform.tfstate** on the disk where you are running Terraform. With remote state, Terraform writes the state data to a persistent remote data store (such as an S3 bucket or HashiCorp Consul), which can then be shared between all members of a team. You can add the backend configs (remote state variables) for remote state to your Terraform Provisioner in **Backend Configuration (Remote state)**. + +14. In **Backend Configuration (Remote state)**, enter the backend configs from your script. +15. Click **Next** and then **Submit**. The Terraform Provisioner is created. + +![](./static/provision-kubernetes-infrastructures-32.png) + +Now you can use the Terraform Provisioner in Infrastructure Definitions and Workflows. + +### Step 5: Map Outputs in Infrastructure Definition + +Typically, when you add an Environment, you specify the Infrastructure Definition for an *existing* infrastructure. To use your Terraform Provisioner, you add the Terraform Provisioner to the Infrastructure Definition to identify a dynamically provisioned infrastructure *that will exist*. + +Later, when you create a Workflow, you will use a Terraform Provisioner step to provision the infrastructure. During deployment, the Terraform Provisioner step will provision the infrastructure and then the Workflow will deploy to it via the Infrastructure Definition. + +To add the Infrastructure Provisioner to the Infrastructure Definition, do the following: + +1. In your Harness Environment, click **Infrastructure Definition**. The **Infrastructure Definition** settings appear. +2. In **Name**, enter the name for the Infrastructure Definition. You will use this name to select the Infrastructure Definition when you set up Workflows and Workflow Phases. +3. In **Cloud Provider Type**, select **Google Cloud Platform**. 
+
+:::note
+Harness supports first-class Terraform Kubernetes provisioning for Google Kubernetes Engine (GKE).
+:::
+
+4. In **Deployment Type**, select **Kubernetes**.
+5. Click **Map Dynamically Provisioned Infrastructure**.
+6. In **Provisioner**, select your Terraform Infrastructure Provisioner.
+7. In **Cloud Provider**, select the Cloud Provider that you use to connect Harness with GCP.
+8. In **Cluster Name** and **Namespace**, map the required fields to your Terraform script outputs.
+
+You map the Terraform script outputs using this syntax, where `exact_name` is the name of the output:
+
+
+```
+${terraform.exact_name}
+```
+:::note
+When you map a Terraform script output to a Harness field as part of a Service Mapping, the variable for the output, `${terraform.exact_name}`, can be used anywhere in the Workflow that uses that Terraform Provisioner.
+:::
+
+The Kubernetes deployment type requires that you map an output to **Cluster Name**. Mapping an output to **Namespace** is optional.
+
+The following example shows the Terraform script outputs used for the Kubernetes deployment type fields:
+
+![](./static/provision-kubernetes-infrastructures-33.png)
+
+For information on Kubernetes deployments, see [Kubernetes Deployments Overview](../concepts-cd/deployment-types/kubernetes-overview.md).
+
+Click **Submit**. The Infrastructure Definition is created.
+
+![](./static/provision-kubernetes-infrastructures-34.png)
+
+Now that the Infrastructure Definition is created, you must add a **Terraform Provision** step to the Workflows that will use this Infrastructure Definition.
+
+### Step 6: Add Terraform Provisioner to Workflow
+
+The Terraform Provision step lets you customize the Infrastructure Provisioner for a specific deployment. You can specify the inputs, remote state, and targets for the provisioning performed by the Workflow.
+
+1. 
Open or create a Workflow that is configured with an Infrastructure Definition that uses the Kubernetes Infrastructure Provisioner.
+2. In **Pre-deployment Steps**, click **Add Step**.
+3. Select **Terraform Provision**. The **Terraform Provision** dialog appears.
+
+   ![](./static/provision-kubernetes-infrastructures-35.png)
+
+4. In **Provisioner**, select a Kubernetes Terraform Provisioner.
+5. In **Timeout**, enter how long Harness should wait to apply the Terraform Provisioner before failing the Workflow.
+
+The **Inherit following configurations from dry run** setting is described in [Terraform Provisioner](../terraform-category/terrform-provisioner.md).
+
+6. Click **NEXT**. The remaining settings appear.
+
+The remaining settings are not Kubernetes-specific. You can review them in [Terraform Provisioner](../terraform-category/terrform-provisioner.md).
+
+In the **Post-deployment Steps** of the Workflow, you can add a **Terraform Destroy** step to remove any provisioned infrastructure, just like running the `terraform destroy` command. See the [Remove Provisioned Infra with Terraform Destroy](../terraform-category/terraform-destroy.md) How-to and [destroy](https://www.terraform.io/docs/commands/destroy.html) from Terraform.
+
+7. Complete your Workflow and click **Deploy**.
+
+### Next Steps
+
+* [Use Terraform Destroy](../terraform-category/terrform-provisioner.md#terraform-destroy)
+* [Using the Terraform Apply Command](../terraform-category/using-the-terraform-apply-command.md)
+
diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/prune-kubernetes-resources.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/prune-kubernetes-resources.md
new file mode 100644
index 00000000000..a31949a26a0
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/prune-kubernetes-resources.md
@@ -0,0 +1,137 @@
+---
+title: Prune Kubernetes Resources (FirstGen)
+description: Remove old resources in your target Kubernetes cluster by pruning them during deployment. 
sidebar_position: 350
+helpdocs_topic_id: pmndpu61bk
+helpdocs_category_id: n03qfofd5w
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Currently, this feature is behind the Feature Flag `PRUNE_KUBERNETES_RESOURCES`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/t7phv4eowh).
+
+Changes to the manifests used in Harness Kubernetes deployments can result in orphaned resources you are unaware of.
+
+For example, one deployment might deploy resources A and B, but the next deployment deploys A and C. C is the new resource and B was removed from the manifest. Without pruning, resource B will remain in the cluster.
+
+You can manually delete Kubernetes resources using the [Delete](delete-kubernetes-resources.md) step, but Harness will also perform resource pruning automatically during deployment.
+
+Harness uses pruning by default to remove any resources that were present in an old manifest, but are no longer present in the manifest used for the current deployment.
+
+Harness also allows you to identify resources you do not want pruned using the annotation `harness.io/skipPruning`. 
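Conceptually, the pruning decision is a set difference between the previous and the current release. The toy shell sketch below (resource names are invented) mimics that comparison; it is only an illustration of the idea, not Harness's actual implementation:

```shell
# Toy model of pruning: anything declared in the previous release's
# manifests but not in the current ones is a prune candidate.
previous="a b"   # resources in the last successful release
current="a c"    # resources in the current deployment
pruned=""
for r in $previous; do
  case " $current " in
    *" $r "*) ;;                # still declared in the manifests: keep
    *) pruned="$pruned $r" ;;   # no longer declared: prune candidate
  esac
done
echo "prune candidates:$pruned"
```

Here resource `b` is the only prune candidate, matching the A/B/C example above.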
+
+### Before You Begin
+
+* [Kubernetes Quickstart](https://docs.harness.io/article/7in9z2boh6-kubernetes-quickstart)
+* [Create a Kubernetes Rolling Deployment](create-a-kubernetes-rolling-deployment.md)
+* [Create a Kubernetes Blue/Green Deployment](create-a-kubernetes-blue-green-deployment.md)
+* [Delete Kubernetes Resources](delete-kubernetes-resources.md)
+
+### Supported Platforms and Technologies
+
+Pruning is supported for the following deployment strategies and Workflow types:
+
+* Rolling Deployments
+* Blue/Green Deployments
+
+See:
+
+* [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms)
+* [Create a Kubernetes Rolling Deployment](create-a-kubernetes-rolling-deployment.md)
+* [Create a Kubernetes Blue/Green Deployment](create-a-kubernetes-blue-green-deployment.md)
+
+### Limitations
+
+* To prevent pruning using the Harness annotation `harness.io/skipPruning: "true"`, the resource must have been deployed by Harness.
+Harness pruning does not consider resources outside of a Harness deployment.
+If you make any changes to your Kubernetes resources using a tool other than Harness (before or after the deployment), Harness does not track those changes.
+* The maximum manifest/chart size is 0.5MB. When Harness prunes, it stores the full manifest in a ConfigMap to use it as part of release history. While deploying very large manifests/charts through Kubernetes, Harness is limited by ConfigMap capacity.
+* While it is unlikely, if you are using the same entity in two Harness Services, Harness does not know this. So if you prune the resource in one deployment, it might be unavailable in the other deployment. Use the annotation `harness.io/skipPruning: "true"` to avoid issues. 
+
+### Review: Harness Kubernetes Pruning Criteria
+
+Kubernetes pruning in Harness is similar to the `kubectl apply --prune` method provided by [Kubernetes](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#alternative-kubectl-apply-f-directory-prune-l-your-label).
+
+Kubernetes pruning queries the API server for all objects matching a set of labels and attempts to match the returned live object configurations against the object configuration files.
+
+Similarly, Harness compares the objects you are deploying with the objects it finds in the cluster. If Harness finds objects which are not in the current release, it prunes them.
+
+Harness also allows you to identify resources you do not want pruned using the annotation `harness.io/skipPruning`. This is described later in this topic.
+
+#### Rolling Deployments
+
+Kubernetes Rolling deployments manage pruning as follows:
+
+In the Deploy stage of the Workflow, Harness compares resources in the last successful release with the current release.
+
+Harness prunes the resources from the last successful release that are not in the current release.
+
+If a deployment fails, Harness recreates the pruned resources during its Rollback stage.
+
+During rollback, any new resources that were created in the failed deployment stage but were not in the last successful release are also deleted.
+
+#### Blue/Green Deployments
+
+Kubernetes Blue/Green deployments manage pruning as follows:
+
+In the first step of a Blue/Green deployment, the new version of the release is deployed to the stage environment (pod set).
+
+Harness prunes by comparing the new and previous releases in the stage pod set. Harness prunes the resources from the last successful release that are not in the current release.
+
+Let's look at an example.
+
+1. Deployment 1 is successfully deployed. It contained manifests for resources a, b, and c.
+2. Deployment 2 failed. It contained manifests for resources a, c, and d, but not b.
+3. 
Before failure, resource d is created and resource b is pruned.
+4. During rollback, Harness recreates the previously pruned resource b and deletes resource d.
+
+### Review: Pruning Examples
+
+The first time you deploy a resource (Deployment, StatefulSet, ReplicaSet, etc.), no pruning takes place.
+
+In Harness **Deployments**, in the Workflow step, you will see a **Prune** section with the following message:
+
+
+```
+No previous successful deployment found, so no pruning required
+```
+When Harness finds resources that match the pruning criteria, you will see a message like this:
+
+
+```
+kubectl --kubeconfig=config delete Deployment/k8s-orphaned-resource-b --namespace=default
+
+deployment.apps "k8s-orphaned-resource-b" deleted
+
+kubectl --kubeconfig=config delete ConfigMap/k8s-orphaned-resource-configmap-b --namespace=default
+
+configmap "k8s-orphaned-resource-configmap-b" deleted
+
+Pruning step completed
+```
+If a deployment fails, Harness recreates any of the pruned resources it removed as part of the deployment. In the **Rollback Deployment** step, you will see a **Recreate Pruned Resources** section with a message like this:
+
+
+```
+kubectl --kubeconfig=config apply --filename=manifests.yaml --record
+
+deployment.apps/k8s-orphaned-resource-f created
+
+Successfully recreated pruned resources.
+```
+### Step 1: Skip Pruning for a Resource
+
+To ensure that a resource is not pruned, add the annotation `harness.io/skipPruning: "true"`.
+
+When Harness identifies resources from the last successful release that are not in the current release, it searches for the `harness.io/skipPruning` annotation and ignores any resources that have it set to `true`.
+
+You can deploy a resource using the annotation `harness.io/skipPruning: "true"`, and then if the manifest is removed and another deployment occurs, Harness will see the annotation `harness.io/skipPruning: "true"` on the resource previously deployed and skip pruning it. 
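For example, a resource you never want pruned might carry the annotation like this (the ConfigMap name and data below are examples):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-keep-me                 # example name
  annotations:
    harness.io/skipPruning: "true"      # Harness skips pruning this resource
data:
  key: value
```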
+
+As mentioned in **Limitations** above, you cannot add a resource with the annotation outside of a Harness deployment and have Harness skip the pruning of that resource.
+
+### See Also
+
+* [Delete Kubernetes Resources](delete-kubernetes-resources.md)
+
+### Configure As Code
+
+To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button.
+
diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/pull-an-image-from-a-private-registry-for-kubernetes.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/pull-an-image-from-a-private-registry-for-kubernetes.md
new file mode 100644
index 00000000000..8a5340dc432
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/pull-an-image-from-a-private-registry-for-kubernetes.md
@@ -0,0 +1,78 @@
+---
+title: Pull Images from Private Registries for Kubernetes
+description: Import the credentials from the Docker credentials file.
+sidebar_position: 40
+helpdocs_topic_id: g3bw9z659p
+helpdocs_category_id: n03qfofd5w
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/category/qfj6m1k2c4).
+
+Typically, if the Docker artifact source is in a private registry, Harness has access to that registry using the credentials set up in the Harness Artifact Server (see [Add Artifact Servers](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server)).
+
+In some cases, your Kubernetes cluster might not have the permissions needed to access a private Docker registry. For these cases, the default values.yaml file in the Service **Manifests** section contains `dockercfg: ${artifact.source.dockerconfig}`. This key imports the credentials from the Docker credentials file in the artifact. 
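For background, the value that `${artifact.source.dockerconfig}` supplies is a base64-encoded Docker credentials document, which the Secret's `.dockercfg` field consumes. The sketch below only illustrates that encoding; the registry URL, username, and password are invented, and this is not necessarily the exact payload Harness builds:

```shell
# Illustration only: base64-encode a legacy .dockercfg-style credentials
# document (all values here are made up).
cfg='{"https://index.docker.io/v1/":{"username":"demo","password":"example","email":"demo@example.com"}}'
encoded=$(printf '%s' "$cfg" | base64 | tr -d '\n')
echo "$encoded"
```

Decoding the string (`base64 -d`) returns the original JSON, which is the `kubernetes.io/dockercfg` format the Secret type expects.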
+
+### Before You Begin
+
+Ensure you have reviewed and set up the following:
+
+* [Kubernetes Deployments Overview](../concepts-cd/deployment-types/kubernetes-overview.md)
+* [Add Container Images for Kubernetes Deployments](add-container-images-for-kubernetes-deployments.md)
+
+### Step 1: Use the dockercfg Value
+
+1. In your Harness Kubernetes Service, in **Manifests**, click **values.yaml**.
+2. Verify that the `dockercfg` key exists, and uses the `${artifact.source.dockerconfig}` expression to obtain the credentials:
+
+   ```
+   dockercfg: ${artifact.source.dockerconfig}
+   ```
+3. Click the **deployment.yaml** file.
+4. Verify that the Secret object is wrapped in an `if` block that checks `dockercfg`, and uses the `{{.Values.dockercfg}}` value:
+
+   ```
+   {{- if .Values.dockercfg}}
+   apiVersion: v1
+   kind: Secret
+   metadata:
+     name: {{.Values.name}}-dockercfg
+     annotations:
+       harness.io/skip-versioning: "true"
+   data:
+     .dockercfg: {{.Values.dockercfg}}
+   type: kubernetes.io/dockercfg
+   ---
+   {{- end}}
+   ```
+With these requirements met, the cluster imports the credentials from the Docker credentials file in the artifact.
+
+### Notes
+
+* Any secrets in the manifest are sanitized when they are displayed in the deployment logs. See [Secrets and Log Sanitization](https://docs.harness.io/article/o5ec7vvtju-secrets-and-log-sanitization).
+* When you are using a public repo, the `dockercfg: ${artifact.source.dockerconfig}` key in values.yaml is ignored by Harness. You do not need to remove it.
+* If you want to use a private repo and no imagePullSecret, then set `dockercfg` to empty in values.yaml.
+* **Legacy imagePullSecret Method** — Previously, Harness used a `createImagePullSecret` value in values.yaml that could be set to `true` or `false`, and `dockercfg: ${artifact.source.dockerconfig}` to obtain the credentials. 
If `createImagePullSecret` was set to `true`, the following default Secret object in deployment.yaml would be used: + + +``` +{{- if .Values.createImagePullSecret}} +apiVersion: v1 +kind: Secret +metadata: + name: {{.Values.name}}-dockercfg + annotations: + harness.io/skip-versioning: "true" +data: + .dockercfg: {{.Values.dockercfg}} +type: kubernetes.io/dockercfg +--- +{{- end}} +``` +This legacy method is still supported for existing Services that use it, but the current method of using the default values.yaml and deployment.yaml files is recommended. + +### Next Steps + +* [Define Kubernetes Manifests](define-kubernetes-manifests.md) + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/run-kubernetes-jobs.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/run-kubernetes-jobs.md new file mode 100644 index 00000000000..0d0a634a1ab --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/run-kubernetes-jobs.md @@ -0,0 +1,284 @@ +--- +title: Run Kubernetes Jobs +description: Define and execute Kubernetes Jobs in Harness. +sidebar_position: 320 +helpdocs_topic_id: qpv2jfdjgm +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Kubernetes [Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/) create one or more pods to carry out commands. For example, a calculation or a backup operation. + +In Harness Kubernetes deployments, you define Jobs in the Harness Service **Manifests**. Next you add the **Apply** step to your Harness Workflow to execute the Job. + +In this topic, we will show you how to execute a Job in a Harness Kubernetes deployment as part of the main deployment. + +Typically, Jobs are not part of the main deployment. You can exclude them from the main deployment and simply call them at any point in the Workflow using the Apply step. 
For steps on ignoring the Job as part of the main deployment and executing it separately, see [Deploy Manifests Separately using Apply Step](deploy-manifests-separately-using-apply-step.md).
+
+### Before You Begin
+
+* **Kubernetes Jobs** — We assume you are familiar with [Kubernetes Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/).
+* **Apply step** — The Harness Workflow Apply step allows you to deploy any resource you have set up in the Service **Manifests** section at any point in your Workflow. See [Deploy Manifests Separately using Apply Step](deploy-manifests-separately-using-apply-step.md).
+* **Ignoring Manifests** — You can annotate a manifest to have Harness ignore it when performing its main deployment operations. Then you can use the Apply step to execute the manifest wherever you want to run it in the Workflow. See [Ignore a Manifest File During Deployment](ignore-a-manifest-file-during-deployment.md).
+* **Delete Jobs before rerunning deployments** — Once you've deployed the Job, you must delete it before deploying a Job of the same name to the same namespace.
+
+### Visual Summary
+
+In this topic, we will walk through a simple Job deployment. Here is the completed deployment in Harness:
+
+![](./static/run-kubernetes-jobs-61.png)
+
+### Review: Apply Step
+
+Workflows include an **Apply** step that allows you to deploy *any resource* you have set up in the Service **Manifests** section.
+
+For details on what you can deploy in different Harness Workflow types, see [What Can I Deploy in Kubernetes?](https://docs.harness.io/article/6ujb3c70fh-what-can-i-deploy-in-kubernetes).
+
+The Apply step can deploy *all workload types*, including Jobs in any Workflow type.
+
+You can add an Apply step anywhere in your Harness Workflow. This makes the Apply step useful for running Kubernetes Jobs. 
+ +Here are some Job examples: + +* Run independent but related work items in parallel: sending emails, rendering frames, transcoding files, or scanning database keys. +* Create a new pod if the first pod fails or is deleted due to a node hardware failure or node reboot. +* Create a Job that cleans up the configuration of an environment, to create a fresh environment for deployment. +* Use a Job to spin down the replica count of a service, to save on cost. + +Any workload deployed with the **Apply** step is not rolled back by Harness. + +### Step 1: Add Job Manifest + +For this topic, we will create a Service named **Countdown** of the Kubernetes Deployment Type. + +![](./static/run-kubernetes-jobs-62.png) + +The Job manifest is added to the Harness Service **Manifests** section. + +Here is a Job that will countdown from 15 to 1 and print out the countdown when complete: + + +``` +apiVersion: batch/v1 +kind: Job +metadata: + name: {{.Values.name}} +spec: + template: + metadata: + name: {{.Values.name}} + labels: + app: {{.Values.name}} + spec: + containers: + - name: counter + image: {{.Values.image}} + command: + - "bin/bash" + - "-c" + - "for i in $(seq 1 15); do echo $((16-i)); sleep 1s; done" + restartPolicy: Never +``` +In your Harness Service, in **Manifests**, you simply add the Job in a manifest file. Let's walk through adding the manifest and the values.yaml file. + +First, we add a [CentOS Docker Image](https://hub.docker.com/_/centos) as the [Docker Registry Artifact Source](https://docs.harness.io/article/gxv9gj6khz-add-a-docker-image-service) for the Service. + +![](./static/run-kubernetes-jobs-63.png) + +Next, we create the **countdown.yaml** file in a **templates** folder. 
It contains the exact same countdown Job example listed above: + +![](./static/run-kubernetes-jobs-64.png) + +Next, edit **values.yaml** to contain the name and image labels only: + + +``` +name: countdown +image: ${artifact.metadata.image} +``` +Now that the Job is added to the Service, we can select the target cluster where the Job will be deployed. + +### Step 2: Define Target Cluster + +Jobs do not require any changes to the way you specify the target cluster in Harness. + +For steps on setting up the target cluster, see [Define Your Kubernetes Target Infrastructure](define-your-kubernetes-target-infrastructure.md). + +### Step 3: Add the Job to the Workflow + +For this topic, we will create a Harness Rolling Workflow for our Service, named **Countdown**. + +![](./static/run-kubernetes-jobs-65.png) + +1. In the Workflow **Deploy** section, delete the **Rollout Deployment** step. We don't need the Rollout Deployment step because we will simply deploy the Job using the **Apply** step. +2. In the Workflow **Deploy** section, click **Add Step**, and then select the **Apply** step. +3. Set up the Apply step to use the Job manifest in the Service **Manifests**: `templates/countdown.yaml`. + +![](./static/run-kubernetes-jobs-66.png) + +That's all you have to do to add the Job to your Workflow. Next, we'll add some test and clean up steps. + +### Option: Delegate Selector + +The Apply step has the Delegate Selector option. + +If your Workflow Infrastructure Definition's Cloud Provider uses a Delegate Selector (supported in Kubernetes Cluster and AWS Cloud Providers), then the Workflow uses the selected Delegate for all of its steps. + +In these cases, you shouldn't add a Delegate Selector to any step in the Workflow. The Workflow is already using a Selector via its Infrastructure Definition's Cloud Provider. 
+
+If your Workflow Infrastructure Definition's Cloud Provider isn't using a Delegate Selector, and you want this Workflow step to use a specific Delegate, do the following:
+
+In **Delegate Selector**, select the Selector for the Delegate(s) you want to use. You add Selectors to Delegates to make sure that they're used to execute the command. For more information, see [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors).
+
+Harness will use Delegates matching the Selectors you add.
+
+If you use one Selector, Harness will use any Delegate that has that Selector.
+
+If you select two Selectors, a Delegate must have both Selectors to be selected. That Delegate might also have other Selectors, but it must have the two you selected.
+
+You can use expressions for Harness built-in variables or Account Default variables in **Delegate Selectors**. When the variable expression is resolved at deployment runtime, it must match an existing Delegate Selector.
+
+For example, if you have a Delegate Selector **prod** and the Workflow is using an Environment also named **prod**, the Delegate Selector can be `${env.name}`. This is very useful when you match Delegate Selectors to Application component names such as Environments, Services, etc. It's also a way to template the Delegate Selector setting.
+
+### Option: Skip Steady State Check
+
+If you select this option, Harness will not check that the workload (Job) has reached steady state.
+
+### Option: Add Test and Clean Up Steps
+
+1. In **Workflow Variables**, add a new variable named **JobName** and give it the value `countdown`. We will use this variable in a Shell Script step to check if the Job is complete.
+	![](./static/run-kubernetes-jobs-67.png)
+2. In the **Verify** section of the Workflow, click **Add Step**, and then select the **Shell Script** step.
+3. 
In the Shell Script step, in **Script**, add the following script to check if the Job completed: + + ``` + kubectl wait --for=condition=complete --timeout=30s jobs/${workflow.variables.JobName} -n ${infra.kubernetes.namespace} + ``` + You can see the script uses the Workflow variable expression `${workflow.variables.JobName}` to get the name of the Job, **countdown**. + + Next, we'll add a Shell Script step to output the log for the Job. When we deploy, the log will display the countdown from 15 to 1 performed by the Job. + +4. In the **Wrap Up** section of the Workflow, add another Shell Script step. In **Script**, enter the following script: + + ``` + kubectl logs -n ${infra.kubernetes.namespace} $(kubectl get pods -n ${infra.kubernetes.namespace} -l job-name=${workflow.variables.JobName} -o jsonpath='{.items[*].metadata.name}') + ``` + Finally, let's add a **Delete** step to remove the Job. + +5. In the **Wrap Up** section of the Workflow, after the Shell Script step, click **Add Step**. Select the **Delete** step. + +6. In **Resources**, enter the type and name of the resource, `Job/countdown`. + +![](./static/run-kubernetes-jobs-68.png) + +See [Delete Kubernetes Resources](delete-kubernetes-resources.md) for more information on how to reference resources. + +Now that our Job deployment is set up, we can run it. + +### Step 4: Deploy the Job + +1. In the Workflow, click **Deploy**. +2. In **Start New Deployment**, we enter `countdown` for the **JobName** Workflow variable, select a **Build/Version** for our CentOS artifact, and click **Submit**. + +![](./static/run-kubernetes-jobs-69.png) + +Let's look at the results of each step. + +In the **Apply** step, in **Wrap Up**, you can see that the Job is run: + + +``` +Wrapping up.. 
+ + +kubectl --kubeconfig=config describe --filename=manifests.yaml + +Name: countdown +Namespace: default +Selector: controller-uid=aff025af-6ebe-11ea-b052-4201ac10c80b +Labels: app=countdown + controller-uid=aff025af-6ebe-11ea-b052-4201ac10c80b + job-name=countdown +Annotations: kubectl.kubernetes.io/last-applied-configuration: + {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --kubeconfig=config --filenam... + kubernetes.io/change-cause: kubectl apply --kubeconfig=config --filename=manifests.yaml --record=true +Parallelism: 1 +Completions: 1 +Start Time: Wed, 25 Mar 2020 17:33:06 +0000 +Completed At: Wed, 25 Mar 2020 17:33:22 +0000 +Duration: 16s +Pods Statuses: 0 Running / 1 Succeeded / 0 Failed +Pod Template: + Labels: app=countdown + controller-uid=aff025af-6ebe-11ea-b052-4201ac10c80b + job-name=countdown + Containers: + counter: + Image: registry.hub.docker.com/library/centos:6.10 + Port: + Host Port: + Command: + bin/bash + -c + for i in $(seq 1 15); do echo $((16-i)); sleep 1s; done + Environment: + Mounts: + Volumes: +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal SuccessfulCreate 22s job-controller Created pod: countdown-wzph5 + +Done. +``` +In the Shell Script step in **Verify**, we can see that our Job completed: + +![](./static/run-kubernetes-jobs-70.png) + +In the Shell Script step in **Wrap Up**, we can see the log for the Job pod: + +![](./static/run-kubernetes-jobs-71.png) + +Finally, in the **Delete** step, you can see the countdown Job deleted. 
+
+![](./static/run-kubernetes-jobs-72.png)
+
+### Option: Showing Job Output
+
+To view Job output after the Apply step, you can use a simple script in a [Shell Script](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) step:
+
+
+```
+echo
+
+pods=$(kubectl get pods -n ${infra.kubernetes.namespace} --selector=job-name=my-job --output=jsonpath='{.items[*].metadata.name}')
+
+kubectl logs -n ${infra.kubernetes.namespace} $pods
+
+echo
+```
+If you need to show the logs *during* Job execution rather than after the Apply step, then modify the script and run the step in parallel with the Apply step.
+
+Alternatively, if your cluster logs are sent to a logging service, you can generate a URL to that service that shows the Job logs in parallel as well.
+
+### Summary
+
+Using the Apply step, you are able to configure, manage, and deploy a Kubernetes Job.
+
+### One More Thing to Try
+
+As we demonstrated, you can get the status of the Job using a simple script. In addition, you can output that status to a Jira, ServiceNow, or Email step using the Shell Script step **Publish Variable Name**.
+
+For example, let's change the Shell Script that checks the success of the Job. We will add the output to a variable and then publish that variable:
+
+![](./static/run-kubernetes-jobs-73.png)
+
+Now you can obtain the output via the variable expression `${context.checkjob.jobstatus}`. 
Here's an Email step using the published variable: + +![](./static/run-kubernetes-jobs-74.png) + +For information on these collaboration tools, see: + +* [Jira Integration](https://docs.harness.io/article/077hwokrpr-jira-integration) +* [ServiceNow Integration](https://docs.harness.io/article/7vsqnt0gch-service-now-integration) +* [Add Collaboration Providers](https://docs.harness.io/article/cv98scx8pj-collaboration-providers) + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/scale-kubernetes-pods.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/scale-kubernetes-pods.md new file mode 100644 index 00000000000..3158408b69c --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/scale-kubernetes-pods.md @@ -0,0 +1,93 @@ +--- +title: Scale Kubernetes Pods +description: Update the number of Kubernetes pods running. +sidebar_position: 310 +helpdocs_topic_id: va3trqfy49 +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +When you deploy a Kubernetes workload using Harness, you set the number of pods you want in your manifests and in the deployment steps. + +With the Scale step, you can scale this number of running pods up or down, by count or percentage. + + +### Before You Begin + +* [Define Kubernetes Manifests](define-kubernetes-manifests.md) +* [Create a Kubernetes Canary Deployment](create-a-kubernetes-canary-deployment.md) +* [Create a Kubernetes Rolling Deployment](create-a-kubernetes-rolling-deployment.md) +* [Create a Kubernetes Blue/Green Deployment](create-a-kubernetes-blue-green-deployment.md) +* [Kubernetes Workflow Variable Expressions](workflow-variables-expressions.md) + +### Step 1: Add Scale Step + +In your Harness Workflow, click **Add Step**, and select **Scale**. The **Scale** settings appear. + +Name the step and then provide the scaling strategy, described below. 
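For context on what the Scale step acts against, the baseline pod count comes from a workload manifest in the Service (or a prior step). The names in this sketch are hypothetical; with `replicas: 4`, a Scale step of **50 PERCENT** results in 2 pods:

```
# Hypothetical Deployment from the Service's Manifests section -- the
# replicas value here is the baseline the Scale step scales against.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: harness-example-deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: harness-example
  template:
    metadata:
      labels:
        app: harness-example
    spec:
      containers:
        - name: harness-example
          image: harness-example:latest
```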
+
+### Step 2: Pick Count or Percentage
+
+The Scale step updates the number of instances running, either by count or percentage.
+
+In **Instance Unit Type**, select **COUNT** or **PERCENTAGE**.
+
+* **COUNT** — The number is simply the number of pods.
+* **PERCENTAGE** — A percentage of the pods defined in your Harness Service **Manifests** files or a previous Workflow step.
+
+### Step 3: Set the Number of Pods
+
+Enter the number of pods to scale up or down compared to the number of instances specified *before* the Scale step.
+
+The number may come from the Harness Service manifest or a previous Workflow step, whichever set the number of pods right before the Scale step.
+
+For example, if you have `replicas: 4` in a manifest in your Service, and you enter **50** **PERCENT** in **Instances**, then 2 pods are deployed in this step.
+
+If you have an odd number of instances, such as 3 instances, and then enter 50% in Scale, the number of instances is scaled down to 2.
+
+### Step 4: Specify Resources to Scale
+
+Enter the Harness built-in variable `${k8s.canaryWorkload}` or the name of the resource in the format `[namespace/]Kind/Name`, with `namespace` optional. For example: 
+
+`my-namespace/Deployment/harness-example-deployment-canary`
+
+You can scale Deployment, DaemonSet, or StatefulSet.
+
+You can only enter one resource in **Workload**. To scale another resource, add another Scale step.
+
+Here is what a completed step looks like:
+
+![](./static/scale-kubernetes-pods-112.png)
+
+### Option: Delegate Selector
+
+If your Workflow Infrastructure Definition's Cloud Provider uses a Delegate Selector (supported in Kubernetes Cluster and AWS Cloud Providers), then the Workflow uses the selected Delegate for all of its steps.
+
+In these cases, you shouldn't add a Delegate Selector to any step in the Workflow. The Workflow is already using a Selector via its Infrastructure Definition's Cloud Provider. 
+
+If your Workflow Infrastructure Definition's Cloud Provider isn't using a Delegate Selector, and you want this Workflow step to use a specific Delegate, do the following:
+
+In **Delegate Selector**, select the Selector for the Delegate(s) you want to use. You add Selectors to Delegates to make sure that they're used to execute the command. For more information, see [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors).
+
+Harness will use Delegates matching the Selectors you add.
+
+If you use one Selector, Harness will use any Delegate that has that Selector.
+
+If you select two Selectors, a Delegate must have both Selectors to be selected. That Delegate might also have other Selectors, but it must have the two you selected.
+
+You can use expressions for Harness built-in variables or Account Default variables in **Delegate Selectors**. When the variable expression is resolved at deployment runtime, it must match an existing Delegate Selector.
+
+For example, if you have a Delegate Selector **prod** and the Workflow is using an Environment also named **prod**, the Delegate Selector can be `${env.name}`. This is very useful when you match Delegate Selectors to Application component names such as Environments, Services, etc. It's also a way to template the Delegate Selector setting.
+
+### Option: Skip Steady State Check
+
+If you select this, Harness will not check to see if the workload has reached steady state.
+
+### Notes
+
+* You can scale down to **0** to remove all instances. 
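The PERCENTAGE arithmetic described in Step 3 can be sanity-checked with simple integer math. The rounding mode here is an assumption on our part (round-half-up is consistent with both examples above: 50 percent of 4 pods gives 2, and 50 percent of 3 pods also gives 2):

```shell
# Sketch of the PERCENTAGE calculation; the exact rounding Harness applies
# is assumed (round-half-up), not documented.
scale_pods() {
  replicas=$1
  percent=$2
  echo $(( (replicas * percent + 50) / 100 ))
}

scale_pods 4 50    # 2
scale_pods 3 50    # 2
```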
+ +### Next Steps + +* [Delete Kubernetes Resources](delete-kubernetes-resources.md) +* [Kubernetes Workflow Variable Expressions](workflow-variables-expressions.md) + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/set-up-kubernetes-ingress-rules.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/set-up-kubernetes-ingress-rules.md new file mode 100644 index 00000000000..4698a964cab --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/set-up-kubernetes-ingress-rules.md @@ -0,0 +1,69 @@ +--- +title: Set up Kubernetes Ingress Rules +description: Add your Ingress rules to your Harness Kubernetes deployments. +sidebar_position: 300 +helpdocs_topic_id: tot87l7e6k +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Route traffic using the Ingress rules defined in your Harness Kubernetes Service. + +### Before You Begin + +* [Set Up Kubernetes Traffic Splitting](set-up-kubernetes-traffic-splitting.md) +* [Define Kubernetes Manifests](define-kubernetes-manifests.md) +* If you are new to Ingress, see [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) from Kubernetes. + +A Harness Service is different from a Kubernetes service. A Harness Service includes the manifests and container used for deployment. A Kubernetes service enables applications running in a Kubernetes cluster to find and communicate with each other, and the outside world. To avoid confusion, a Harness Service is always capitalized in Harness documentation. A Kubernetes service is not. + +### Step 1: Add a Service Manifest + +For Ingress Rules, you simply add your Kubernetes service and Ingress manifests to your Harness Service, and then refer to the service name in the Ingress manifest. + +Here is the Kubernetes service manifest. 
Note the service name `my-svc`: + + +``` +apiVersion: v1 +kind: Service +metadata: + name: my-svc +spec: + ports: + - name: my-port + port: 8080 + protocol: TCP + targetPort: my-container-port + selector: + app: my-deployment + type: ClusterIP +``` +The service name **my-svc** will be referred to in the Ingress manifest. + +### Step 2: Add an Ingress Manifest + +Add an Ingress manifest that refers to the service name. In our example, the service name is `my-svc`: + + +``` +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + name: my-ingress + annotations: + kubernetes.io/ingress.class: "nginx" +spec: + rules: + - http: + paths: + - path: /my/path + backend: + serviceName: my-svc + servicePort: 8080 +``` +### Notes + +* Using the values.yaml file and Go templating, you would simply add the service name and any other key values to the values.yaml file and then replace them in both manifests with the variable. For examples of using Go templating, see [Use Go Templating in Kubernetes Manifests](use-go-templating-in-kubernetes-manifests.md). + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/set-up-kubernetes-traffic-splitting.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/set-up-kubernetes-traffic-splitting.md new file mode 100644 index 00000000000..b0462ee4bee --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/set-up-kubernetes-traffic-splitting.md @@ -0,0 +1,254 @@ +--- +title: Set Up Kubernetes Traffic Splitting with Istio +description: Gradually migrate traffic between application versions. +sidebar_position: 280 +helpdocs_topic_id: 1qfb4gh9e8 +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness provides traffic splitting to gradually migrate traffic between application versions. + +Harness supports Istio 1.2 and above. + +Not using Istio? No problem. 
See [Traffic Splitting Without Istio](traffic-splitting-without-istio.md).
+
+In a [Kubernetes Canary](create-a-kubernetes-canary-deployment.md) or [Blue/Green](create-a-kubernetes-blue-green-deployment.md) deployment, as the new application is verified, you can shift traffic from the previous version to a new version.
+
+
+### Before You Begin
+
+* [Create a Kubernetes Canary Deployment](create-a-kubernetes-canary-deployment.md)
+* [Define Kubernetes Manifests](define-kubernetes-manifests.md)
+
+### Limitations
+
+Traffic Splitting is supported for Harness Canary and Blue/Green deployment strategies only. It is not supported with the Rolling Update strategy.
+
+### Step 1: Review Istio
+
+In Istio, traffic splitting is set in each VirtualService using route rules:
+
+
+```
+apiVersion: networking.istio.io/v1alpha3
+kind: VirtualService
+metadata:
+  name: reviews
+spec:
+  hosts:
+  - reviews
+  http:
+  - route:
+    - destination:
+        host: reviews
+        subset: v1
+      weight: 75
+    - destination:
+        host: reviews
+        subset: v2
+      weight: 25
+```
+In Harness, you can use a simple DestinationRule and a VirtualService without route rules, and then specify the routing in the Workflow that uses the VirtualService, via the Traffic Split step:
+
+![](./static/set-up-kubernetes-traffic-splitting-182.png)
+
+Setting up Traffic Splitting involves adding a standard traffic management manifest to your Harness Service, and then using the Traffic Split step in your Workflow.
+
+### Step 2: Add DestinationRule Manifest
+
+In your Harness Service, add a manifest for a simple [DestinationRule](https://istio.io/docs/reference/config/networking/v1alpha3/destination-rule/#DestinationRule) without route rules. It will act as a template using the Service values.yaml file for names via its `{{.Values.name}}` placeholder. 
+
+Here is a simple DestinationRule:
+
+
+```
+apiVersion: networking.istio.io/v1alpha3
+kind: DestinationRule
+metadata:
+  annotations:
+    harness.io/managed: "true"
+  name: {{.Values.name}}-destinationrule
+spec:
+  host: {{.Values.name}}-svc
+  trafficPolicy:
+    loadBalancer:
+      simple: RANDOM
+```
+#### harness.io/managed: true Required
+
+Note the use of the `harness.io/managed: "true"` annotation. This is **required** for Harness to identify this DestinationRule as managed.
+
+This annotation is used to identify which DestinationRule or VirtualService Harness should update during traffic splitting when there are more than one.
+
+Harness requires that the managed VirtualService have only one route in the `http` list in order to know which one to update.
+
+If the DestinationRule/VirtualService uses `harness.io/managed: "false"`, that is the same as if `harness.io/managed` were omitted. In this case, Harness will not perform any traffic shifting.
+
+The quotations around "true" and "false" are mandatory.
+
+### Step 3: Add VirtualService Manifest
+
+Next, in your Harness Service, add a manifest for a simple [VirtualService](https://istio.io/docs/reference/config/networking/v1alpha3/virtual-service/) without route rules. As with the DestinationRule manifest, it will act as a template using the Service values.yaml file for names via its `{{.Values.name}}` placeholder.
+
+Here is a simple VirtualService:
+
+
+```
+apiVersion: networking.istio.io/v1alpha3
+kind: VirtualService
+metadata:
+  annotations:
+    harness.io/managed: "true"
+  name: {{.Values.name}}-virtualservice
+spec:
+  gateways:
+  - {{.Values.name}}-gateway
+  hosts:
+  - test.com
+  http:
+  - route:
+    - destination:
+        host: {{.Values.name}}-svc
+```
+#### harness.io/managed: true Required
+
+Note the use of the `harness.io/managed: "true"` annotation. This is **required** for Harness to identify this VirtualService as managed. 
+
+This annotation is used to identify which DestinationRule or VirtualService Harness should update during traffic splitting when there are more than one.
+
+Harness requires that the managed VirtualService have only one route in the `http` list in order to know which one to update.
+
+If the DestinationRule/VirtualService uses `harness.io/managed: "false"`, that is the same as if `harness.io/managed` were omitted. In this case, Harness will not perform any traffic shifting.
+
+The quotations around "true" and "false" are mandatory.
+
+### Step 4: Review Weighting
+
+You do not need to enter weights in the `destination` section.
+
+By default, Harness adds a weight value of 100 for the existing (stable) service and 0 for the (canary) service, regardless of whether you use the Traffic Split step.
+
+Here is a sample from the deployment log of a VirtualService without weights specified, where Harness has applied weights.
+
+
+```
+- destination:
+    host: "anshul-traffic-split-demo-svc"
+    subset: "stable"
+  weight: 100
+- destination:
+    host: "anshul-traffic-split-demo-svc"
+    subset: "canary"
+  weight: 0
+```
+If you do specify weights in your VirtualService, Harness will still use its defaults for Canary deployments and you can use the Traffic Split step to change the weights.
+
+### Step 5: Add Gateway Manifest
+
+The VirtualService includes a reference to a Gateway object.
+
+Typically, you will also add a Gateway manifest to your Service, describing the load balancer for the mesh receiving HTTP/TCP connections. The Gateway manifest can also use the values.yaml placeholder:
+
+
+```
+apiVersion: networking.istio.io/v1alpha3
+kind: Gateway
+metadata:
+  name: {{.Values.name}}-gateway
+spec:
+  selector:
+    istio: ingressgateway
+  servers:
+  - port:
+      number: 80
+      name: http
+      protocol: HTTP
+    hosts:
+    - "*"
+```
+That's all you need to set up simple traffic management in your Harness Kubernetes Service. 
Next, you use the Traffic Split step in your Workflow to control routing weights for the existing service and the new service you are deploying.
+
+### Step 6: Add Traffic Split Step
+
+You perform traffic management in your Workflow using the Traffic Split step.
+
+You can use the Traffic Split step anywhere in your Workflow, but you will typically apply it in the **Verify** section after the **Canary Deployment** step was successful.
+
+1. To add the Traffic Split step, in your Workflow, click **Add Step**.
+2. Select **Traffic Split**. The **Traffic Split** settings appear.
+
+![](./static/set-up-kubernetes-traffic-splitting-183.png)
+
+### Step 7: Define Virtual Service Name
+
+By default, the Traffic Split step includes Harness variables to refer to the VirtualService set up in the Service the Workflow is deploying, and the named destination service subsets Harness deploys.
+
+In **Virtual Service Name**, Traffic Split takes the name of the VirtualService set up in the Service:
+
+![](./static/set-up-kubernetes-traffic-splitting-184.png)
+
+The variable `${k8s.virtualServiceName}` refers to the value of the name label in your VirtualService manifest in the Harness Service.
+
+If you have multiple VirtualService manifests in your Harness Service **Manifests**, you can enter the name of the VirtualService you want to use manually.
+
+### Option: Delegate Selector
+
+If your Workflow Infrastructure Definition's Cloud Provider uses a Delegate Selector (supported in Kubernetes Cluster and AWS Cloud Providers), then the Workflow uses the selected Delegate for all of its steps.
+
+In these cases, you shouldn't add a Delegate Selector to any step in the Workflow. The Workflow is already using a Selector via its Infrastructure Definition's Cloud Provider. 
+ +If your Workflow Infrastructure Definition's Cloud Provider isn't using a Delegate Selector, and you want this Workflow step to use a specific Delegate, do the following: + +In **Delegate Selector**, select the Selector for the Delegate(s) you want to use. You add Selectors to Delegates to make sure that they're used to execute the command. For more information, see [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors). + +Harness will use Delegates matching the Selectors you add. + +If you use one Selector, Harness will use any Delegate that has that Selector. + +If you select two Selectors, a Delegate must have both Selectors to be selected. That Delegate might also have other Selectors, but it must have the two you selected. + +You can use expressions for Harness built-in variables or Account Default variables in **Delegate Selectors**. When the variable expression is resolved at deployment runtime, it must match an existing Delegate Selector. + +For example, if you have a Delegate Selector **prod** and the Workflow is using an Environment also named **prod**, the Delegate Selector can be `${env.name}`. This is very useful when you match Delegate Selectors to Application component names such as Environments, Services, etc. It's also a way to template the Delegate Selector setting. + +### Step 8: Set Destinations and Weights + +In **Destination**, Harness provides two variables: + +* `${k8s.canaryDestination}` - Refers to the new Kubernetes service the Workflow is deploying. +* `${k8s.stableDestination}` - Refers to the previous Kubernetes service. + +Harness will use these variables to initialize destinations and then apply the traffic split by adding destination subsets and weights into the VirtualService it deploys. 
+ +Here is an example of a Traffic Split step and the logs from its deployment showing how the destinations are initialized and then applied by Traffic Split: + +![](./static/set-up-kubernetes-traffic-splitting-185.png) + +That's all that is needed to set up Traffic Splitting. + +### Option 1: Use Subsets + +In cases where you are using multiple subsets in destination rules and you want to assign different weights to them, you can use your own subsets in Traffic Split as well. Here is a simple example: + +![](./static/set-up-kubernetes-traffic-splitting-186.png) + +The only requirements for Destination field values are that they contain a host and subset and are valid YAML. See [Destination](https://istio.io/docs/reference/config/networking/v1alpha3/virtual-service/#Destination) from Istio for details. + +### Option 2: Use Multiple Traffic Split Steps + +You can use multiple Traffic Split steps in your Workflow to change the routing to your old and new service versions. Here is an example with Approval steps between each Traffic Split step: + +![](./static/set-up-kubernetes-traffic-splitting-187.png) + +### Notes + +* **Canary Delete and Traffic Management** — If you are using the **Traffic Split** step or doing Istio traffic shifting using the **Apply step**, move the **Canary Delete** step from the **Wrap Up** section of the **Canary** phase to the **Wrap Up** section of the **Primary** phase (the phase containing the Rollout Deployment step). +Moving the **Canary Delete** step to the **Wrap Up** section of the Primary phase prevents traffic from being routed to deleted pods before traffic is routed to stable pods in the Primary phase. +* **HTTP in the VirtualService** — For traffic management using the Traffic Split step, Harness only supports HTTP in the VirtualService manifest. If you want to use HTTPS or TLS, you can use another manifest and apply it using the Apply Step. 
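+
+For reference, a custom **Destination** value that meets the host/subset requirement described in Option 1 could look like this (the host and subset names are illustrative assumptions):
+
+```yaml
+# Hypothetical Destination value for a Traffic Split step.
+# Any valid YAML containing a host and a subset is accepted.
+host: harness-example-svc
+subset: canary-v2
+```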
+ +For more information, see [Create a Kubernetes Canary Deployment](create-a-kubernetes-canary-deployment.md), [Deploy Manifests Separately using Apply Step](deploy-manifests-separately-using-apply-step.md), and [Delete Kubernetes Resources](delete-kubernetes-resources.md). + +### Next Steps + +* [Traffic Splitting Without Istio](traffic-splitting-without-istio.md) + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/skip-harness-label-selector-tracking-on-kubernetes-deployments.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/skip-harness-label-selector-tracking-on-kubernetes-deployments.md new file mode 100644 index 00000000000..baa95620e4e --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/skip-harness-label-selector-tracking-on-kubernetes-deployments.md @@ -0,0 +1,57 @@ +--- +title: Skip Harness Label Selector Tracking on Kubernetes Deployments +description: Prevent Harness from using its default Kubernetes label selector harness.io/track -- stable during Canary deployments. +sidebar_position: 410 +helpdocs_topic_id: nce6e8s725 +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the feature flag `SKIP_ADDING_TRACK_LABEL_SELECTOR_IN_ROLLING`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. You can prevent Harness from using its default Kubernetes label selector `harness.io/track: stable` during Canary deployments. Skipping this label can help when you have existing non-Harness-deployed services or naming conflicts. + +### Review: Labels in Harness Kubernetes Canary Phases + +Default Harness Kubernetes Canary deployments are a two-phase process that relies on Deployment object labels for tracking. + +First, let's review the two-phase Canary deployment: + +1. **Canary Phase:** + 1. 
Harness creates a Canary version of the Kubernetes Deployment object defined in your Service Definition **Manifests** section. + 2. All pods in the Canary Phase have the label `harness.io/track: canary`. Traffic can be routed to these pods using this label. + 3. Once that Deployment is verified, the Canary Delete step deletes it by default. + Using this method, Harness provides a Canary group as a way to test the new build, run your verification, and then roll out to the following Primary group. +2. **Primary Phase:** + 1. Runs the actual Deployment using a Kubernetes Rolling Update with the number of pods you specify in the **Manifests** files (for example, `replicas: 3`). + 2. All pods in the Primary Phase have the label `harness.io/track: stable`. + +### Review: Invalid Value LabelSelector + +Kubernetes Deployment label selectors are immutable. If a specific Deployment object already exists in the cluster, you cannot change its selector labels. + +This can cause problems with the default Harness Kubernetes Canary deployment's use of `harness.io/track` if the Deployment object already exists before the Harness deployment. + +If you are deploying different Harness Deployment objects to the same cluster, you might encounter a Selector error such as this: + + +``` +The Deployment "harness-example-deployment" is invalid: spec.selector: + Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"harness-example"}, + MatchExpressions:[]v1.LabelSelectorRequirement{}}: field is immutable +``` +Most often, you can simply delete or rename the Deployment object. In some cases, this can be a problem because you will have downtime while the object is recreated. + +As an alternative, you can force Harness to skip the `harness.io/track: stable` label in the Canary deployment. 
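+
+To make the immutability issue concrete, here is an illustrative Deployment fragment (the `app` label matches the error message above; everything else is an assumption for this sketch). If the object was first created without `harness.io/track` in its selector, a later apply that adds it is rejected:
+
+```yaml
+# Illustrative fragment: spec.selector is immutable once the
+# Deployment exists, so adding the track label later fails.
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: harness-example-deployment
+spec:
+  selector:
+    matchLabels:
+      app: harness-example
+      harness.io/track: stable  # rejected if added after creation
+  template:
+    metadata:
+      labels:
+        app: harness-example
+        harness.io/track: stable
+    # container spec omitted for brevity
+```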
+ + +### Skipping the Tracking Label + +Once the relevant feature flag is enabled, a Harness Kubernetes Canary deployment will work like this: + +* If the Deployment object already exists in the cluster without the `harness.io/track: stable` label, Harness will not add the `harness.io/track: stable` label to the Deployment object. +* If the Deployment object already exists with the `harness.io/track: stable` label, Harness will not delete it. +* For any new Deployment object, Harness will not add the `harness.io/track: stable` label. + + + + + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/_openshift.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/_openshift.png new file mode 100644 index 00000000000..e7340c0e4c4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/_openshift.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/add-container-images-for-kubernetes-deployments-137.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/add-container-images-for-kubernetes-deployments-137.png new file mode 100644 index 00000000000..e3f9dee4978 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/add-container-images-for-kubernetes-deployments-137.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/add-container-images-for-kubernetes-deployments-138.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/add-container-images-for-kubernetes-deployments-138.png new file mode 100644 index 00000000000..ae235ce7872 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/add-container-images-for-kubernetes-deployments-138.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/add-container-images-for-kubernetes-deployments-139.png 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/add-container-images-for-kubernetes-deployments-139.png new file mode 100644 index 00000000000..ae235ce7872 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/add-container-images-for-kubernetes-deployments-139.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/adding-and-editing-inline-kubernetes-manifest-files-55.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/adding-and-editing-inline-kubernetes-manifest-files-55.png new file mode 100644 index 00000000000..d07606af032 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/adding-and-editing-inline-kubernetes-manifest-files-55.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/adding-and-editing-inline-kubernetes-manifest-files-56.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/adding-and-editing-inline-kubernetes-manifest-files-56.png new file mode 100644 index 00000000000..d07606af032 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/adding-and-editing-inline-kubernetes-manifest-files-56.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/adding-and-editing-inline-kubernetes-manifest-files-57.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/adding-and-editing-inline-kubernetes-manifest-files-57.png new file mode 100644 index 00000000000..eb4226c49f0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/adding-and-editing-inline-kubernetes-manifest-files-57.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/adding-and-editing-inline-kubernetes-manifest-files-58.png 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/adding-and-editing-inline-kubernetes-manifest-files-58.png new file mode 100644 index 00000000000..eb4226c49f0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/adding-and-editing-inline-kubernetes-manifest-files-58.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/adding-and-editing-inline-kubernetes-manifest-files-59.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/adding-and-editing-inline-kubernetes-manifest-files-59.png new file mode 100644 index 00000000000..94de8f7c177 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/adding-and-editing-inline-kubernetes-manifest-files-59.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/adding-and-editing-inline-kubernetes-manifest-files-60.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/adding-and-editing-inline-kubernetes-manifest-files-60.png new file mode 100644 index 00000000000..94de8f7c177 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/adding-and-editing-inline-kubernetes-manifest-files-60.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/connect-to-your-target-kubernetes-platform-53.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/connect-to-your-target-kubernetes-platform-53.png new file mode 100644 index 00000000000..09c729572bc Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/connect-to-your-target-kubernetes-platform-53.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/connect-to-your-target-kubernetes-platform-54.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/connect-to-your-target-kubernetes-platform-54.png new file mode 
100644 index 00000000000..09c729572bc Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/connect-to-your-target-kubernetes-platform-54.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-217.gif b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-217.gif new file mode 100644 index 00000000000..b5651987214 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-217.gif differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-218.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-218.png new file mode 100644 index 00000000000..e3f9dee4978 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-218.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-219.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-219.png new file mode 100644 index 00000000000..e3f9dee4978 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-219.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-220.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-220.png new file mode 100644 index 00000000000..56233885fa5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-220.png differ diff --git 
a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-221.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-221.png new file mode 100644 index 00000000000..45495332c2a Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-221.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-222.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-222.png new file mode 100644 index 00000000000..45495332c2a Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-222.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-223.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-223.png new file mode 100644 index 00000000000..3aa42f67576 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-223.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-224.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-224.png new file mode 100644 index 00000000000..3aa42f67576 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-224.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-225.png 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-225.png new file mode 100644 index 00000000000..07fec41ac5c Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-225.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-226.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-226.png new file mode 100644 index 00000000000..1362945bf0e Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-226.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-227.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-227.png new file mode 100644 index 00000000000..1362945bf0e Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-227.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-228.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-228.png new file mode 100644 index 00000000000..bfd0a21893b Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-228.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-229.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-229.png new file mode 100644 index 00000000000..bfd0a21893b Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-229.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-230.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-230.png new file mode 100644 index 00000000000..366cdcb681a Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-230.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-231.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-231.png new file mode 100644 index 00000000000..366cdcb681a Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-231.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-232.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-232.png new file mode 100644 index 00000000000..7d1021fee71 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-232.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-233.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-233.png new file mode 100644 index 00000000000..7d1021fee71 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-233.png differ diff --git 
a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-234.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-234.png new file mode 100644 index 00000000000..d8634e02c1a Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-234.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-235.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-235.png new file mode 100644 index 00000000000..d8634e02c1a Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-235.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-236.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-236.png new file mode 100644 index 00000000000..7aada17a545 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-236.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-237.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-237.png new file mode 100644 index 00000000000..7aada17a545 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-237.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-238.png 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-238.png new file mode 100644 index 00000000000..2009a1df716 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-238.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-239.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-239.png new file mode 100644 index 00000000000..2009a1df716 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-239.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-240.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-240.png new file mode 100644 index 00000000000..f7388d84c0d Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-240.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-241.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-241.png new file mode 100644 index 00000000000..f7388d84c0d Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-blue-green-deployment-241.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-02.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-02.png new file mode 100644 index 00000000000..840d0720620 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-02.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-03.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-03.png new file mode 100644 index 00000000000..e260bf1f253 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-03.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-04.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-04.png new file mode 100644 index 00000000000..c867fcd800f Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-04.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-05.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-05.png new file mode 100644 index 00000000000..c867fcd800f Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-05.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-06.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-06.png new file mode 100644 index 00000000000..ee98cd4cbf1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-06.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-07.png 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-07.png new file mode 100644 index 00000000000..ec5901bf4f8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-07.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-08.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-08.png new file mode 100644 index 00000000000..9fcb368566d Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-08.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-09.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-09.png new file mode 100644 index 00000000000..538a3d5e581 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-09.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-10.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-10.png new file mode 100644 index 00000000000..4f6b2bf954f Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-10.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-11.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-11.png new file mode 100644 index 00000000000..4f6b2bf954f Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-11.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-12.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-12.png new file mode 100644 index 00000000000..272a8ec5245 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-12.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-13.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-13.png new file mode 100644 index 00000000000..272a8ec5245 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-13.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-14.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-14.png new file mode 100644 index 00000000000..95b9d10ce3f Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-14.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-15.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-15.png new file mode 100644 index 00000000000..95b9d10ce3f Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-15.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-16.png 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-16.png new file mode 100644 index 00000000000..bc17b6371a6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-16.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-17.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-17.png new file mode 100644 index 00000000000..bc17b6371a6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-17.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-18.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-18.png new file mode 100644 index 00000000000..a397af86c62 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-18.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-19.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-19.png new file mode 100644 index 00000000000..a397af86c62 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-19.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-20.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-20.png new file mode 100644 index 00000000000..971b7645bd3 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-20.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-21.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-21.png new file mode 100644 index 00000000000..971b7645bd3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-canary-deployment-21.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-104.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-104.png new file mode 100644 index 00000000000..8dcb8634460 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-104.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-105.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-105.png new file mode 100644 index 00000000000..cb61943d4fd Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-105.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-106.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-106.png new file mode 100644 index 00000000000..418de480c95 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-106.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-107.png 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-107.png new file mode 100644 index 00000000000..418de480c95 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-107.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-108.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-108.png new file mode 100644 index 00000000000..bfd0a21893b Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-108.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-109.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-109.png new file mode 100644 index 00000000000..bfd0a21893b Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-109.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-110.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-110.png new file mode 100644 index 00000000000..366cdcb681a Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-110.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-111.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-111.png new file mode 100644 index 00000000000..366cdcb681a Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-a-kubernetes-rolling-deployment-111.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-kubernetes-crd-deployments-215.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-kubernetes-crd-deployments-215.png new file mode 100644 index 00000000000..69c379f3aa3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-kubernetes-crd-deployments-215.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-kubernetes-crd-deployments-216.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-kubernetes-crd-deployments-216.png new file mode 100644 index 00000000000..8e208504a25 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-kubernetes-crd-deployments-216.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-kubernetes-namespaces-based-on-infra-mapping-27.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-kubernetes-namespaces-based-on-infra-mapping-27.png new file mode 100644 index 00000000000..89a23f35504 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-kubernetes-namespaces-based-on-infra-mapping-27.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-kubernetes-namespaces-with-workflow-variables-207.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-kubernetes-namespaces-with-workflow-variables-207.png new file mode 100644 index 00000000000..10cad9e67f1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-kubernetes-namespaces-with-workflow-variables-207.png differ diff --git 
a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-kubernetes-namespaces-with-workflow-variables-208.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-kubernetes-namespaces-with-workflow-variables-208.png new file mode 100644 index 00000000000..aafd3744917 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-kubernetes-namespaces-with-workflow-variables-208.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-kubernetes-namespaces-with-workflow-variables-209.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-kubernetes-namespaces-with-workflow-variables-209.png new file mode 100644 index 00000000000..9ba36e9809f Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-kubernetes-namespaces-with-workflow-variables-209.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-kubernetes-namespaces-with-workflow-variables-210.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-kubernetes-namespaces-with-workflow-variables-210.png new file mode 100644 index 00000000000..abfda20cd8b Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/create-kubernetes-namespaces-with-workflow-variables-210.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-kubernetes-manifests-180.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-kubernetes-manifests-180.png new file mode 100644 index 00000000000..e3f9dee4978 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-kubernetes-manifests-180.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-kubernetes-manifests-181.png 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-kubernetes-manifests-181.png new file mode 100644 index 00000000000..7ad2c5c4105 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-kubernetes-manifests-181.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-your-kubernetes-target-infrastructure-170.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-your-kubernetes-target-infrastructure-170.png new file mode 100644 index 00000000000..4f8a9edce0a Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-your-kubernetes-target-infrastructure-170.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-your-kubernetes-target-infrastructure-171.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-your-kubernetes-target-infrastructure-171.png new file mode 100644 index 00000000000..d758483acd6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-your-kubernetes-target-infrastructure-171.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-your-kubernetes-target-infrastructure-172.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-your-kubernetes-target-infrastructure-172.png new file mode 100644 index 00000000000..0ea6be56635 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-your-kubernetes-target-infrastructure-172.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-your-kubernetes-target-infrastructure-173.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-your-kubernetes-target-infrastructure-173.png new file mode 100644 index 00000000000..6a5e60b5abd Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-your-kubernetes-target-infrastructure-173.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-your-kubernetes-target-infrastructure-174.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-your-kubernetes-target-infrastructure-174.png new file mode 100644 index 00000000000..8c025f30c23 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-your-kubernetes-target-infrastructure-174.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-your-kubernetes-target-infrastructure-175.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-your-kubernetes-target-infrastructure-175.png new file mode 100644 index 00000000000..73847402b66 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-your-kubernetes-target-infrastructure-175.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-your-kubernetes-target-infrastructure-176.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-your-kubernetes-target-infrastructure-176.png new file mode 100644 index 00000000000..e231ef4d181 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/define-your-kubernetes-target-infrastructure-176.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/delete-kubernetes-resources-113.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/delete-kubernetes-resources-113.png new file mode 100644 index 00000000000..07745fac82a Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/delete-kubernetes-resources-113.png differ diff --git 
a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/delete-kubernetes-resources-114.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/delete-kubernetes-resources-114.png new file mode 100644 index 00000000000..773d4b66fab Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/delete-kubernetes-resources-114.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/delete-kubernetes-resources-115.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/delete-kubernetes-resources-115.png new file mode 100644 index 00000000000..6017fb0f26b Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/delete-kubernetes-resources-115.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/delete-kubernetes-resources-116.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/delete-kubernetes-resources-116.png new file mode 100644 index 00000000000..6017fb0f26b Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/delete-kubernetes-resources-116.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/delete-kubernetes-resources-117.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/delete-kubernetes-resources-117.png new file mode 100644 index 00000000000..c35f8a2feb2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/delete-kubernetes-resources-117.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/delete-kubernetes-resources-118.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/delete-kubernetes-resources-118.png new file mode 100644 index 00000000000..7bc189cd92a Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/delete-kubernetes-resources-118.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/delete-kubernetes-resources-119.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/delete-kubernetes-resources-119.png new file mode 100644 index 00000000000..1ef4457c27c Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/delete-kubernetes-resources-119.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-36.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-36.png new file mode 100644 index 00000000000..e148f876213 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-36.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-37.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-37.png new file mode 100644 index 00000000000..e148f876213 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-37.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-38.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-38.png new file mode 100644 index 00000000000..ffa03db8c9d Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-38.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-39.png 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-39.png new file mode 100644 index 00000000000..7556fae991d Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-39.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-40.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-40.png new file mode 100644 index 00000000000..c9a6a1b64d6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-40.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-41.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-41.png new file mode 100644 index 00000000000..c13210d3dc1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-41.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-42.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-42.png new file mode 100644 index 00000000000..5bc5d8fcabc Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-42.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-43.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-43.png new file mode 100644 index 00000000000..2bf87a5fa8b Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-43.png differ 
diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-44.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-44.png new file mode 100644 index 00000000000..92d07d33d83 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-44.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-45.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-45.png new file mode 100644 index 00000000000..4f3b27f0c2d Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-45.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-46.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-46.png new file mode 100644 index 00000000000..839e65d2206 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-a-helm-chart-as-an-artifact-46.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-manifests-packaged-with-artifacts-177.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-manifests-packaged-with-artifacts-177.png new file mode 100644 index 00000000000..de4094d0df5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-manifests-packaged-with-artifacts-177.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-manifests-packaged-with-artifacts-178.png 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-manifests-packaged-with-artifacts-178.png new file mode 100644 index 00000000000..d77b38f8ebc Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-manifests-packaged-with-artifacts-178.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-manifests-packaged-with-artifacts-179.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-manifests-packaged-with-artifacts-179.png new file mode 100644 index 00000000000..f4fe24b48fe Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-manifests-packaged-with-artifacts-179.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-196.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-196.png new file mode 100644 index 00000000000..549aa4b6ffe Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-196.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-197.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-197.png new file mode 100644 index 00000000000..9f623210688 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-197.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-198.png 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-198.png new file mode 100644 index 00000000000..bf3537df7a6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-198.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-199.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-199.png new file mode 100644 index 00000000000..1f66976e0ef Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-199.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-200.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-200.png new file mode 100644 index 00000000000..549aa4b6ffe Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-200.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-201.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-201.png new file mode 100644 index 00000000000..0f856d978db Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-201.png differ diff --git 
a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-202.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-202.png new file mode 100644 index 00000000000..da597fb001c Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-kubernetes-service-to-multiple-clusters-using-rancher-202.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-188.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-188.png new file mode 100644 index 00000000000..2a29fa400b9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-188.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-189.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-189.png new file mode 100644 index 00000000000..cbd4043cfea Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-189.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-190.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-190.png new file mode 100644 index 00000000000..1ddbe5627d5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-190.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-191.png 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-191.png new file mode 100644 index 00000000000..1ddbe5627d5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-191.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-192.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-192.png new file mode 100644 index 00000000000..b4fc2ee1c45 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-192.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-193.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-193.png new file mode 100644 index 00000000000..8bce4197d19 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-193.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-194.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-194.png new file mode 100644 index 00000000000..fbdb0a3e8c2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-194.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-195.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-195.png new file mode 100644 index 00000000000..15bc7859088 
Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/deploy-manifests-separately-using-apply-step-195.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/ignore-a-manifest-file-during-deployment-162.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/ignore-a-manifest-file-during-deployment-162.png new file mode 100644 index 00000000000..1ddbe5627d5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/ignore-a-manifest-file-during-deployment-162.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/ignore-a-manifest-file-during-deployment-163.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/ignore-a-manifest-file-during-deployment-163.png new file mode 100644 index 00000000000..a9c450e7988 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/ignore-a-manifest-file-during-deployment-163.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/ignore-a-manifest-file-during-deployment-164.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/ignore-a-manifest-file-during-deployment-164.png new file mode 100644 index 00000000000..1ddbe5627d5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/ignore-a-manifest-file-during-deployment-164.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/link-resource-files-or-helm-charts-in-git-repos-203.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/link-resource-files-or-helm-charts-in-git-repos-203.png new file mode 100644 index 00000000000..3e8a2cd2f9f Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/link-resource-files-or-helm-charts-in-git-repos-203.png differ diff --git 
a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/link-resource-files-or-helm-charts-in-git-repos-204.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/link-resource-files-or-helm-charts-in-git-repos-204.png new file mode 100644 index 00000000000..e7340c0e4c4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/link-resource-files-or-helm-charts-in-git-repos-204.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/link-resource-files-or-helm-charts-in-git-repos-205.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/link-resource-files-or-helm-charts-in-git-repos-205.png new file mode 100644 index 00000000000..391227de76c Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/link-resource-files-or-helm-charts-in-git-repos-205.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-harness-kubernetes-service-settings-22.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-harness-kubernetes-service-settings-22.png new file mode 100644 index 00000000000..f77f1f0d236 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-harness-kubernetes-service-settings-22.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-harness-kubernetes-service-settings-23.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-harness-kubernetes-service-settings-23.png new file mode 100644 index 00000000000..aa80416395a Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-harness-kubernetes-service-settings-23.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-harness-kubernetes-service-settings-24.png 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-harness-kubernetes-service-settings-24.png new file mode 100644 index 00000000000..9c91222d866 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-harness-kubernetes-service-settings-24.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-harness-kubernetes-service-settings-25.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-harness-kubernetes-service-settings-25.png new file mode 100644 index 00000000000..8f3bf320d77 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-harness-kubernetes-service-settings-25.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-harness-kubernetes-service-settings-26.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-harness-kubernetes-service-settings-26.png new file mode 100644 index 00000000000..728cb642927 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-harness-kubernetes-service-settings-26.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-140.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-140.png new file mode 100644 index 00000000000..65cec43f1e7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-140.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-141.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-141.png new file mode 100644 index 00000000000..bd8a88c027c Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-141.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-142.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-142.png new file mode 100644 index 00000000000..8d096968a51 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-142.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-143.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-143.png new file mode 100644 index 00000000000..586c1f1df08 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-143.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-144.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-144.png new file mode 100644 index 00000000000..f65eada69da Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-144.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-145.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-145.png new file mode 100644 index 00000000000..ce5c6479dd2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-145.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-146.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-146.png new file mode 100644 index 00000000000..9580a8ace5a Binary 
files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-146.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-147.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-147.png new file mode 100644 index 00000000000..4ad01f8b728 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-147.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-148.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-148.png new file mode 100644 index 00000000000..dfe04ac9c3f Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-148.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-149.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-149.png new file mode 100644 index 00000000000..ebf44aa1d3f Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-149.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-150.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-150.png new file mode 100644 index 00000000000..586e00dde23 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-150.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-151.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-151.png new file mode 100644 index 
00000000000..07927fa877c Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-151.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-152.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-152.png new file mode 100644 index 00000000000..154415d8c1b Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-values-yaml-files-152.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-variables-per-infrastructure-definition-130.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-variables-per-infrastructure-definition-130.png new file mode 100644 index 00000000000..59faff40527 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-variables-per-infrastructure-definition-130.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-variables-per-infrastructure-definition-131.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-variables-per-infrastructure-definition-131.png new file mode 100644 index 00000000000..290c7807087 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-variables-per-infrastructure-definition-131.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-variables-per-infrastructure-definition-132.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-variables-per-infrastructure-definition-132.png new file mode 100644 index 00000000000..798c0ec5767 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-variables-per-infrastructure-definition-132.png differ diff --git 
a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-variables-per-infrastructure-definition-133.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-variables-per-infrastructure-definition-133.png new file mode 100644 index 00000000000..8f9b823ee24 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-variables-per-infrastructure-definition-133.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-variables-per-infrastructure-definition-134.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-variables-per-infrastructure-definition-134.png new file mode 100644 index 00000000000..f7e8b51fdb8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-variables-per-infrastructure-definition-134.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-variables-per-infrastructure-definition-135.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-variables-per-infrastructure-definition-135.png new file mode 100644 index 00000000000..b62e5570424 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-variables-per-infrastructure-definition-135.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-variables-per-infrastructure-definition-136.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-variables-per-infrastructure-definition-136.png new file mode 100644 index 00000000000..2686a7cbfec Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/override-variables-per-infrastructure-definition-136.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-28.png 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-28.png new file mode 100644 index 00000000000..7897808af8b Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-28.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-29.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-29.png new file mode 100644 index 00000000000..7de21eac715 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-29.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-30.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-30.png new file mode 100644 index 00000000000..f929bb579af Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-30.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-31.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-31.png new file mode 100644 index 00000000000..1683ec3a033 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-31.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-32.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-32.png new file mode 100644 index 00000000000..6b82720b8cf Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-32.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-33.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-33.png new file mode 100644 index 00000000000..df75dd67a26 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-33.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-34.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-34.png new file mode 100644 index 00000000000..a02e7ddf9b5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-34.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-35.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-35.png new file mode 100644 index 00000000000..f71262a5b0a Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/provision-kubernetes-infrastructures-35.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-61.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-61.png new file mode 100644 index 00000000000..71d9815e905 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-61.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-62.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-62.png new file mode 
100644 index 00000000000..7de8e774026 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-62.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-63.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-63.png new file mode 100644 index 00000000000..c986484d98b Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-63.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-64.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-64.png new file mode 100644 index 00000000000..b6eb04b19d2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-64.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-65.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-65.png new file mode 100644 index 00000000000..c5fb8939a8f Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-65.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-66.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-66.png new file mode 100644 index 00000000000..37510b41e8a Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-66.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-67.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-67.png new file mode 100644 index 00000000000..4a147c09f63 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-67.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-68.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-68.png new file mode 100644 index 00000000000..8e0a99c86e4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-68.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-69.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-69.png new file mode 100644 index 00000000000..618cdd50c54 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-69.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-70.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-70.png new file mode 100644 index 00000000000..ce2612c9303 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-70.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-71.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-71.png new file mode 100644 index 00000000000..75e3ba5e826 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-71.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-72.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-72.png new file mode 100644 index 00000000000..ef489391089 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-72.png differ 
diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-73.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-73.png new file mode 100644 index 00000000000..5243269db33 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-73.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-74.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-74.png new file mode 100644 index 00000000000..c911a7e947a Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/run-kubernetes-jobs-74.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/scale-kubernetes-pods-112.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/scale-kubernetes-pods-112.png new file mode 100644 index 00000000000..c37ffa80813 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/scale-kubernetes-pods-112.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/set-up-kubernetes-traffic-splitting-182.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/set-up-kubernetes-traffic-splitting-182.png new file mode 100644 index 00000000000..696feb278a0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/set-up-kubernetes-traffic-splitting-182.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/set-up-kubernetes-traffic-splitting-183.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/set-up-kubernetes-traffic-splitting-183.png new file mode 100644 index 00000000000..ed17f6ee4fa Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/set-up-kubernetes-traffic-splitting-183.png 
differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/set-up-kubernetes-traffic-splitting-184.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/set-up-kubernetes-traffic-splitting-184.png new file mode 100644 index 00000000000..696feb278a0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/set-up-kubernetes-traffic-splitting-184.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/set-up-kubernetes-traffic-splitting-185.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/set-up-kubernetes-traffic-splitting-185.png new file mode 100644 index 00000000000..f6d2c961a40 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/set-up-kubernetes-traffic-splitting-185.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/set-up-kubernetes-traffic-splitting-186.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/set-up-kubernetes-traffic-splitting-186.png new file mode 100644 index 00000000000..df99ff2f998 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/set-up-kubernetes-traffic-splitting-186.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/set-up-kubernetes-traffic-splitting-187.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/set-up-kubernetes-traffic-splitting-187.png new file mode 100644 index 00000000000..e88299f0c16 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/set-up-kubernetes-traffic-splitting-187.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-122.gif b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-122.gif new file mode 100644 index 
00000000000..c89a7052a05 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-122.gif differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-123.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-123.png new file mode 100644 index 00000000000..14cb953d60a Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-123.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-124.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-124.png new file mode 100644 index 00000000000..8827da86229 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-124.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-125.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-125.png new file mode 100644 index 00000000000..edfa2e15308 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-125.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-126.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-126.png new file mode 100644 index 00000000000..e375d6b0be0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-126.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-127.png 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-127.png new file mode 100644 index 00000000000..16fffac872e Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-127.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-128.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-128.png new file mode 100644 index 00000000000..8827da86229 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-128.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-129.gif b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-129.gif new file mode 100644 index 00000000000..c89a7052a05 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/traffic-splitting-without-istio-129.gif differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/upgrade-to-helm-3-charts-in-kubernetes-services-120.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/upgrade-to-helm-3-charts-in-kubernetes-services-120.png new file mode 100644 index 00000000000..5ed78811409 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/upgrade-to-helm-3-charts-in-kubernetes-services-120.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/upgrade-to-helm-3-charts-in-kubernetes-services-121.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/upgrade-to-helm-3-charts-in-kubernetes-services-121.png new file mode 100644 index 00000000000..1cfe46e4470 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/upgrade-to-helm-3-charts-in-kubernetes-services-121.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/upload-kubernetes-resource-files-165.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/upload-kubernetes-resource-files-165.png new file mode 100644 index 00000000000..7c44bde1297 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/upload-kubernetes-resource-files-165.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/upload-kubernetes-resource-files-166.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/upload-kubernetes-resource-files-166.png new file mode 100644 index 00000000000..82c8ab59034 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/upload-kubernetes-resource-files-166.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/upload-kubernetes-resource-files-167.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/upload-kubernetes-resource-files-167.png new file mode 100644 index 00000000000..82c8ab59034 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/upload-kubernetes-resource-files-167.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/upload-kubernetes-resource-files-168.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/upload-kubernetes-resource-files-168.png new file mode 100644 index 00000000000..e00f92191e8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/upload-kubernetes-resource-files-168.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/upload-kubernetes-resource-files-169.png 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/upload-kubernetes-resource-files-169.png new file mode 100644 index 00000000000..89a23f35504 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/upload-kubernetes-resource-files-169.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-153.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-153.png new file mode 100644 index 00000000000..b51fc7a6faf Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-153.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-154.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-154.png new file mode 100644 index 00000000000..9580a8ace5a Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-154.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-155.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-155.png new file mode 100644 index 00000000000..e40f47ca578 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-155.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-156.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-156.png new file mode 100644 index 00000000000..3c3f182d772 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-156.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-157.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-157.png new file mode 100644 index 00000000000..f651046281c Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-157.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-158.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-158.png new file mode 100644 index 00000000000..f651046281c Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-158.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-159.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-159.png new file mode 100644 index 00000000000..a8e78b3d157 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-159.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-160.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-160.png new file mode 100644 index 00000000000..a8e78b3d157 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-160.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-161.png 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-161.png new file mode 100644 index 00000000000..d6f132d3c2d Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-a-helm-repository-with-kubernetes-161.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-go-templating-in-kubernetes-manifests-206.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-go-templating-in-kubernetes-manifests-206.png new file mode 100644 index 00000000000..7ad2c5c4105 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-go-templating-in-kubernetes-manifests-206.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-helm-chart-hooks-in-kubernetes-deployments-47.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-helm-chart-hooks-in-kubernetes-deployments-47.png new file mode 100644 index 00000000000..35df5ea26a6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-helm-chart-hooks-in-kubernetes-deployments-47.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-helm-chart-hooks-in-kubernetes-deployments-48.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-helm-chart-hooks-in-kubernetes-deployments-48.png new file mode 100644 index 00000000000..83879b114d6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-helm-chart-hooks-in-kubernetes-deployments-48.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-helm-chart-hooks-in-kubernetes-deployments-49.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-helm-chart-hooks-in-kubernetes-deployments-49.png new file mode 100644 index 00000000000..24e7502d40f Binary files 
/dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-helm-chart-hooks-in-kubernetes-deployments-49.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-helm-chart-hooks-in-kubernetes-deployments-50.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-helm-chart-hooks-in-kubernetes-deployments-50.png new file mode 100644 index 00000000000..474f6bb0b79 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-helm-chart-hooks-in-kubernetes-deployments-50.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-helm-chart-hooks-in-kubernetes-deployments-51.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-helm-chart-hooks-in-kubernetes-deployments-51.png new file mode 100644 index 00000000000..1b77c750143 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-helm-chart-hooks-in-kubernetes-deployments-51.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-100.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-100.png new file mode 100644 index 00000000000..a8133a30135 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-100.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-101.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-101.png new file mode 100644 index 00000000000..414055a7e18 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-101.png differ diff --git 
a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-75.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-75.png new file mode 100644 index 00000000000..e16394434d6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-75.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-76.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-76.png new file mode 100644 index 00000000000..93c45c2b215 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-76.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-77.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-77.png new file mode 100644 index 00000000000..12a03096d5b Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-77.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-78.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-78.png new file mode 100644 index 00000000000..192f4292599 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-78.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-79.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-79.png 
new file mode 100644 index 00000000000..02e45471cbe Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-79.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-80.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-80.png new file mode 100644 index 00000000000..e8b5f4e1751 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-80.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-81.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-81.png new file mode 100644 index 00000000000..5278775fff0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-81.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-82.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-82.png new file mode 100644 index 00000000000..82f0691ea08 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-82.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-83.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-83.png new file mode 100644 index 00000000000..ef1a0254f44 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-83.png differ diff --git 
a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-84.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-84.png new file mode 100644 index 00000000000..49e44eb8096 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-84.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-85.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-85.png new file mode 100644 index 00000000000..bcb6bc0f0c0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-85.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-86.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-86.png new file mode 100644 index 00000000000..c728a02b6b6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-86.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-87.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-87.png new file mode 100644 index 00000000000..8298f80c21e Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-87.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-88.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-88.png 
new file mode 100644 index 00000000000..91b4eaca671 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-88.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-89.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-89.png new file mode 100644 index 00000000000..4f78b62c31c Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-89.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-90.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-90.png new file mode 100644 index 00000000000..9ebe4e0b826 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-90.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-91.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-91.png new file mode 100644 index 00000000000..9600457da8d Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-91.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-92.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-92.png new file mode 100644 index 00000000000..0ba3c3ecce0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-92.png differ diff --git 
a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-93.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-93.png new file mode 100644 index 00000000000..1fe485283d1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-93.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-94.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-94.png new file mode 100644 index 00000000000..380be8e5b1f Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-94.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-95.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-95.png new file mode 100644 index 00000000000..fec6a9e603d Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-95.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-96.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-96.png new file mode 100644 index 00000000000..e510eb3b5bf Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-96.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-97.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-97.png 
new file mode 100644 index 00000000000..e2088b95cd6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-97.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-98.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-98.png new file mode 100644 index 00000000000..d6463d8988e Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-98.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-99.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-99.png new file mode 100644 index 00000000000..7f90d293be5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/use-kustomize-for-kubernetes-deployments-99.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-harness-config-files-in-manifests-102.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-harness-config-files-in-manifests-102.png new file mode 100644 index 00000000000..852bd83c7c2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-harness-config-files-in-manifests-102.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-harness-config-files-in-manifests-103.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-harness-config-files-in-manifests-103.png new file mode 100644 index 00000000000..31ca6c18dca Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-harness-config-files-in-manifests-103.png differ diff --git 
a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-harness-config-variables-in-manifests-211.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-harness-config-variables-in-manifests-211.png new file mode 100644 index 00000000000..91636a591f4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-harness-config-variables-in-manifests-211.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-harness-config-variables-in-manifests-212.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-harness-config-variables-in-manifests-212.png new file mode 100644 index 00000000000..9a9a63c71e5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-harness-config-variables-in-manifests-212.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-harness-config-variables-in-manifests-213.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-harness-config-variables-in-manifests-213.png new file mode 100644 index 00000000000..a14c6f96e8d Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-harness-config-variables-in-manifests-213.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-harness-config-variables-in-manifests-214.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-harness-config-variables-in-manifests-214.png new file mode 100644 index 00000000000..36fea227eb1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-harness-config-variables-in-manifests-214.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-open-shift-with-harness-kubernetes-00.png 
b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-open-shift-with-harness-kubernetes-00.png new file mode 100644 index 00000000000..93c45c2b215 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-open-shift-with-harness-kubernetes-00.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-open-shift-with-harness-kubernetes-01.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-open-shift-with-harness-kubernetes-01.png new file mode 100644 index 00000000000..684521d6ffe Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/using-open-shift-with-harness-kubernetes-01.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/static/workflow-variables-expressions-52.png b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/workflow-variables-expressions-52.png new file mode 100644 index 00000000000..b76c9de0422 Binary files /dev/null and b/docs/first-gen/continuous-delivery/kubernetes-deployments/static/workflow-variables-expressions-52.png differ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/traffic-splitting-without-istio.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/traffic-splitting-without-istio.md new file mode 100644 index 00000000000..9a4ba2d470c --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/traffic-splitting-without-istio.md @@ -0,0 +1,445 @@ +--- +title: Traffic Splitting Without Istio +description: Progressively increase traffic to new application versions using Ingress. 
+sidebar_position: 290 +helpdocs_topic_id: tkubd954r6 +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes how to progressively increase traffic to new application versions using Ingress resources, Harness annotations, and the [Apply](deploy-manifests-separately-using-apply-step.md) step. + +**Using Istio already?** Follow the steps in [Set Up Kubernetes Traffic Splitting](set-up-kubernetes-traffic-splitting.md). + +For standard Canary and Blue/Green Kubernetes deployments, see [Create a Kubernetes Canary Deployment](create-a-kubernetes-canary-deployment.md) and [Create a Kubernetes Blue/Green Deployment](create-a-kubernetes-blue-green-deployment.md). + +[Kubernetes Annotations](https://docs.harness.io/article/ttn8acijrz-versioning-and-annotations) are used to ignore the Ingress manifests during the main deployment, and to specify weights for each Ingress resource (0, 25, 50). Each Ingress resource is then applied using a separate Apply step. + +This technique can be used with [Blue/Green](create-a-kubernetes-blue-green-deployment.md) and [Canary](create-a-kubernetes-canary-deployment.md) deployments. For this topic, we will modify a standard Harness Kubernetes Blue/Green deployment. + +For applications that are implemented using a service mesh, such as Istio, see [Set Up Kubernetes Traffic Splitting](set-up-kubernetes-traffic-splitting.md). + + +### Before You Begin + +* You will need a Kubernetes cluster with an Ingress controller deployed that supports traffic splitting, such as the [NGINX Ingress Controller](https://kubernetes.github.io/ingress-nginx/deploy/). 
+* [Create a Kubernetes Canary Deployment](create-a-kubernetes-canary-deployment.md) +* [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) +* [Set up Kubernetes Ingress Rules](set-up-kubernetes-ingress-rules.md) + +### Visual Summary + +Here's a recording of what your completed deployment will look like, including approval steps: + +![](./static/traffic-splitting-without-istio-122.gif) + +### Step 1: Review Blue/Green Service Swap + +When you create a Harness Service for a Blue/Green deployment, you need to include a manifest for each of the Kubernetes services used in Blue/Green. + +Harness refers to the two services as primary and stage Kubernetes services, distinguished using the following **mandatory** annotations: + +* **Primary** - `annotations: harness.io/primary-service: "true"` +* **Stage** - `annotations: harness.io/stage-service: "true"` + +When the Workflow is deployed, Harness modifies the `selector` at runtime to add `harness.io/color` with values `blue` and `green`. + +Harness uses these to redirect traffic from the stage service to the primary service (current version). + +After the route update where the primary and stage service is swapped, the primary service routes requests to the new app version based on the `harness.io/color` selector. Here is a log of the swap: + + +``` +Begin execution of command Kubernetes Swap Service Selectors + +Selectors for Service One : [name:colors-blue-green-primary] +app: colors-blue-green +harness.io/color: blue + +Selectors for Service Two : [name:colors-blue-green-stage] +app: colors-blue-green +harness.io/color: green + +Swapping Service Selectors.. 
+ +Updated Selectors for Service One : [name:colors-blue-green-primary] +app: colors-blue-green +harness.io/color: green + +Updated Selectors for Service Two : [name:colors-blue-green-stage] +app: colors-blue-green +harness.io/color: blue + +Done +``` +With this method, only one version of the application is servicing the requests at any time. + +For more information, see [Create a Kubernetes Blue/Green Deployment](create-a-kubernetes-blue-green-deployment.md). + +### Step 2: Review Traffic Splitting with Ingress + +In a Traffic Splitting Without Istio deployment, we first deploy the new version of the app and use the **stage service** to send traffic to it. + +Next, we increase the **stage service** traffic to 25% and then 50%, so it receives half the traffic. Now the new and old versions are sharing traffic equally. + +Once the swap occurs and the stage service is routing to the old app version, we decrease the traffic to the stage service (and old version) to 25% and then 0%. + +To increase and decrease traffic, we are using the weight-based traffic splitting of the [NGINX Ingress controller](https://www.nginx.com/products/nginx/kubernetes-ingress-controller). To control the weights, we use this controller's [nginx.ingress.kubernetes.io/canary-weight](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary) annotation in our Ingress manifests. + +Here's how the Ingress weights will be used on the Kubernetes **stage** service: + +1. Before the swap: + 1. Stage service receives weight of 25%. + 2. Stage service receives weight of 50%. +2. After the swap: + 1. Stage service receives weight of 25%. + 2. Stage service receives weight of 0%. + +### Step 3: Create the Harness Service + +1. Create the Harness Kubernetes Service you will use for your artifact and manifests. +2. Add a container artifact using the steps in [Add a Docker Artifact Source](https://docs.harness.io/article/gxv9gj6khz-add-a-docker-image-service). 
+ + In the following steps we will supplement the default files generated by Harness. The following files will be used but will not be changed: + + * deployment.yaml + * namespace.yaml + * values.yaml + +3. Remove the default **service.yaml** file. We will be replacing this file in the steps below. + +### Step 4: Create the Primary Service Manifest and Ingress + +1. In the templates folder, create a file named **service-primary.yaml**. +2. Add the following YAML to service-primary.yaml: + + +``` +apiVersion: v1 +kind: Service +metadata: + name: {{.Values.name}}-svc-primary + annotations: + harness.io/primary-service: "true" +spec: + type: {{.Values.serviceType}} + ports: + - port: {{.Values.servicePort}} + targetPort: {{.Values.serviceTargetPort}} + protocol: TCP + selector: + app: {{.Values.name}} +``` +Note the `-primary` suffix in the name and the `harness.io/primary-service: "true"` annotation. + +Next, create the Ingress manifest for the primary service. Add a new file named ingress.yaml and add the following: + + +``` +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + name: {{.Values.name}}-svc-primary + labels: + app: {{.Values.name}} + annotations: + kubernetes.io/ingress.class: "nginx" +spec: + rules: + - host: {{.Values.name}}.com + http: + paths: + - backend: + serviceName: {{.Values.name}}-svc-primary + servicePort: 80 +``` +Next, we'll create the Kubernetes manifest for the stage service. + +### Step 5: Create the Stage Service Manifest + +1. In the templates folder, create a file named **service-stage.yaml**. +2. 
Add the following YAML to service-stage.yaml: + + +``` +apiVersion: v1 +kind: Service +metadata: + name: {{.Values.name}}-svc-stage + annotations: + harness.io/stage-service: "true" +spec: + type: {{.Values.serviceType}} + ports: + - port: {{.Values.servicePort}} + targetPort: {{.Values.serviceTargetPort}} + protocol: TCP + selector: + app: {{.Values.name}} +``` +Note the `-stage` suffix in the name and the `harness.io/stage-service: "true"` annotation. + +### Step 6: Add the Ingress Manifests + +There are three Ingress manifests to add. Harness will ignore them in the main deployment step (**Stage Deployment** step) because they start with the comment: + +`# harness.io/skip-file-for-deploy` + +See [Ignore a Manifest File During Deployment](ignore-a-manifest-file-during-deployment.md) for more information on ignoring manifests. + +Each Ingress manifest will also contain the [nginx.ingress.kubernetes.io/canary-weight](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary) annotation. Here is an example from one Ingress manifest: + + +``` +# Enable canary and send 0% of traffic to version 2 +nginx.ingress.kubernetes.io/canary: "true" +nginx.ingress.kubernetes.io/canary-weight: "0" +``` +Copy and paste the following three Ingress manifests into three new files in **templates**. + +1. 
For each new file, click the **templates** folder, and then click **Add File**: + +![](./static/traffic-splitting-without-istio-123.png) + +Add the following three files: + +#### ingress-traffic-split0.yaml + + +``` +# harness.io/skip-file-for-deploy +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + name: {{.Values.name}}-svc-stage + labels: + app: {{.Values.name}} + annotations: + kubernetes.io/ingress.class: "nginx" + + # Enable canary and send 0% of traffic to version 2 + nginx.ingress.kubernetes.io/canary: "true" + nginx.ingress.kubernetes.io/canary-weight: "0" +spec: + rules: + - host: {{.Values.name}}.com + http: + paths: + - backend: + serviceName: {{.Values.name}}-svc-stage + servicePort: 80 +``` +#### ingress-traffic-split25.yaml + + +``` +# harness.io/skip-file-for-deploy +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + name: {{.Values.name}}-svc-stage + labels: + app: {{.Values.name}} + annotations: + kubernetes.io/ingress.class: "nginx" + + # Enable canary and send 25% of traffic to version 2 + nginx.ingress.kubernetes.io/canary: "true" + nginx.ingress.kubernetes.io/canary-weight: "25" +spec: + rules: + - host: {{.Values.name}}.com + http: + paths: + - backend: + serviceName: {{.Values.name}}-svc-stage + servicePort: 80 +``` +#### ingress-traffic-split50.yaml + + +``` +# harness.io/skip-file-for-deploy +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + name: {{.Values.name}}-svc-stage + labels: + app: {{.Values.name}} + annotations: + kubernetes.io/ingress.class: "nginx" + + # Enable canary and send 50% of traffic to version 2 + nginx.ingress.kubernetes.io/canary: "true" + nginx.ingress.kubernetes.io/canary-weight: "50" +spec: + rules: + - host: {{.Values.name}}.com + http: + paths: + - backend: + serviceName: {{.Values.name}}-svc-stage + servicePort: 80 +``` +That's all the configuration needed in the Harness Service. 
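+ +The service and Ingress manifests above all reference values such as `{{.Values.name}}`, `{{.Values.serviceType}}`, `{{.Values.servicePort}}`, and `{{.Values.serviceTargetPort}}`. These come from the default **values.yaml** file generated by Harness (listed in Step 3 as a file we use but do not change). As a reference, here is a minimal sketch of the relevant entries only; the values shown are illustrative assumptions, and your generated file will contain additional settings: + + +``` +# values.yaml (sketch; illustrative values only) +name: colors-blue-green +serviceType: ClusterIP +servicePort: 80 +serviceTargetPort: 8080 +``` 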
+ +### Step 7: Define Your Kubernetes Target Infrastructure + +There are no Harness Infrastructure Definition settings specific to Kubernetes Blue/Green deployments. Create or use the Infrastructure Definition that targets your cluster, as described in [Define Your Kubernetes Target Infrastructure](define-your-kubernetes-target-infrastructure.md). + +Ensure that the Kubernetes cluster includes an Ingress controller that supports traffic splitting, such as the [NGINX Ingress controller](https://kubernetes.github.io/ingress-nginx/deploy/) we are using as an example. + +### Step 8: Create the Blue/Green Workflow + +1. In your Application, click **Workflows**, and then click **Add Workflow**. The Workflow settings appear. Enter the following settings: + +* **Name:** Enter a name for your Workflow. +* **Workflow Type:** Select **Blue/Green Deployment**. +* **Environment:** Select the Environment that contains your target Infrastructure Definition. +* **Service:** Select the Service containing your Ingress and service manifests. +* **Infrastructure Definition:** Select the Infrastructure Definition for your target Kubernetes cluster. + + +The Workflow is created with the default Blue/Green steps. + +When we are done with the following steps, the Workflow will look like this: + +![](./static/traffic-splitting-without-istio-124.png) + +### Step 9: Add Workflow Sections + +Workflow sections help you organize your steps. We'll add one section before the **Verify** section, and one after the **Route Update** section. + +To add a section, click the options button (**︙**) next to any section and then click **Add Section**. + +![](./static/traffic-splitting-without-istio-125.png) + +Enter the name **Shift 50% Traffic Before Switch** and click **Submit**. + +The new section is added to the bottom of the Workflow. Use the Reorder option to move the section so it's right above **Verify**. 
+ +![](./static/traffic-splitting-without-istio-126.png) + +Add another section named **Shift Remaining Traffic After Switch** and move it so it's right before the **Wrap Up** step. + +When you're done, the Workflow will look like this: + +![](./static/traffic-splitting-without-istio-127.png) + +### Step 10: Add Apply Steps + +Next we'll add the Apply steps for the Ingress objects defined in your Harness Service. + +1. In the new **Shift 50% Traffic Before Switch** section, click **Add Step**, and select the **Apply** step. +2. Enter the following settings to apply the `ingress-traffic-split25.yaml` file from the Harness Service and click **Submit**: + + * **Name:** Enter **Configure Stage 25%**. + * **File Paths:** Enter `templates/ingress-traffic-split25.yaml`. + * **Delegate Selector:** see [Option: Delegate Selector Setting](#option_delegate_selector_setting). + * For the rest of the settings, you can leave the defaults. + + This step will increase the traffic routed to the stage service and the new app version to 25%. + + Next, you will add a step to increase the traffic routed to the stage service and the new app version to 50%. + +3. Below this step, add another Apply step to apply the `ingress-traffic-split50.yaml` file from the Harness Service and click **Submit:** + + * **Name:** Enter **Configure Stage 50%**. + * **File Paths:** Enter `templates/ingress-traffic-split50.yaml`. + * **Delegate Selector:** see [Option: Delegate Selector Setting](#option_delegate_selector_setting). + * For the rest of the settings, you can leave the defaults. + + Now you can add the steps for decreasing the traffic routed to the stage service and the old version of the app. The first step decreases the traffic to 25%: + +4. In the **Shift Remaining Traffic After Switch** section, add an Apply step for the `templates/ingress-traffic-split25.yaml` file from the Harness Service and click **Submit:** + + * **Name:** Enter **Configure Stage 25%**. 
+ * **File Paths:** Enter `templates/ingress-traffic-split25.yaml`. + * **Delegate Selector:** see [Option: Delegate Selector Setting](#option_delegate_selector_setting). + * For the rest of the settings, you can leave the defaults. + + Finally, you add a step to decrease the traffic routed to the stage service and the old app version to 0%: + +5. Below this step, add another Apply step to apply the `ingress-traffic-split0.yaml` file from the Harness Service and click **Submit:** + + * **Name:** Enter **Configure Stage 0%**. + * **File Paths:** Enter `templates/ingress-traffic-split0.yaml`. + * **Delegate Selector:** see [Option: Delegate Selector Setting](#option_delegate_selector_setting). + * For the rest of the settings, you can leave the defaults. + +You can run the deployment now. The next option adds Approval steps between each traffic increase and decrease so you can approve the changes. + +### Option: Add Approval Steps + +Add Approve steps in between each Apply step to ensure that traffic is not increased or decreased without your approval. + +Here is an example of the Approve step for the 25% increase: + +* **Name:** Enter **Approve 25%**. +* **Ticketing System:** Select **Harness UI**. +* **User Groups:** Select a group in which you are a member, such as **Account Administrator**. + +Create Approve steps before each of the remaining Apply steps. When you're done, the Workflow will look like this: + +![](./static/traffic-splitting-without-istio-128.png) + +### Step 11: Deploy the Workflow + +Now that your Workflow is complete, you can deploy it. If you added the Approve steps, be sure to approve each one. + +Here's a recording of what your deployment will look like: + +![](./static/traffic-splitting-without-istio-129.gif) + +### Option: Rollback Steps + +To make the Workflow more robust, you can add an Apply step for ingress-traffic-split0.yaml in the Workflow Rollback Steps. 
+ +If there is a failure and rollback, Harness returns the services to the original state. + +When thinking about rollback, it's important to keep in mind whether, at a given point in the deployment, the primary service is routing to the new version or the old version. On deployment success, it is the new version; on rollback, it is the old version. + +In all rollback cases, we want stage traffic to end up at 0% and primary traffic to end up at 100%. + +When using the ingress-traffic-split0.yaml in Rollback steps, the following happens: + +* **Failure before the swap** — Stage traffic goes back to 0% and primary to 100%. This is because stage was deployed and primary has not been touched. +* **Failure after the swap** — The new deployment is now primary, so Harness swaps back. This makes the old deployment primary. Stage traffic is now 0% and primary is 100%. + +### Option: Delegate Selector Setting + +If your Workflow Infrastructure Definition's Cloud Provider uses a Delegate Selector (supported in Kubernetes Cluster and AWS Cloud Providers), then the Workflow uses the selected Delegate for all of its steps. + +In these cases, you shouldn't add a Delegate Selector to any step in the Workflow. The Workflow is already using a Selector via its Infrastructure Definition's Cloud Provider. + +If your Workflow Infrastructure Definition's Cloud Provider isn't using a Delegate Selector, and you want this Workflow step to use a specific Delegate, do the following: + +In **Delegate Selector**, select the Selector for the Delegate(s) you want to use. You add Selectors to Delegates to make sure that they're used to execute the command. For more information, see [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors). + +Harness will use Delegates matching the Selectors you add. + +If you use one Selector, Harness will use any Delegate that has that Selector. + +If you select two Selectors, a Delegate must have both Selectors to be selected. 
That Delegate might also have other Selectors, but it must have the two you selected. + +You can use expressions for Harness built-in variables or Account Default variables in **Delegate Selectors**. When the variable expression is resolved at deployment runtime, it must match an existing Delegate Selector. + +For example, if you have a Delegate Selector **prod** and the Workflow is using an Environment also named **prod**, the Delegate Selector can be `${env.name}`. This is very useful when you match Delegate Selectors to Application component names such as Environments, Services, etc. It's also a way to template the Delegate Selector setting. + +### Notes + +* **Conflicting Service Error:** If you deployed the standard Blue/Green Workflow before configuring it with the Ingress steps, and then deployed it with the Ingress steps, you might get the following error: + +``` +Found conflicting service [harness-example-svc] in the cluster. For blue/green deployment, the label [harness.io/color] is required in service selector. Delete this existing service to proceed +``` + +This is because the services changed between deployments. You can delete the first service using a Delete step or you can use a different name for your app. +* **Traffic splitting Apply steps must be in the same Workflow phase as the Canary deployment step:** Do not add the Apply steps to separate phases. 
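+ +The conflicting service error in the Notes above involves the `harness.io/color` selector that Harness adds at runtime (see Step 1). As a sketch, a service managed by the Blue/Green deployment ends up with a selector like this; the app name and color value here are illustrative, and the color is swapped between `blue` and `green` by Harness at runtime: + + +``` +# Sketch of a Harness-managed blue/green service selector (illustrative) +spec: + selector: + app: harness-example + harness.io/color: blue +``` 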
+ +### Next Steps + +* [Set Up Kubernetes Traffic Splitting](set-up-kubernetes-traffic-splitting.md) +* [Create a Kubernetes Blue/Green Deployment](create-a-kubernetes-blue-green-deployment.md) +* [Set up Kubernetes Ingress Rules](set-up-kubernetes-ingress-rules.md) + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/upgrade-to-helm-3-charts-in-kubernetes-services.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/upgrade-to-helm-3-charts-in-kubernetes-services.md new file mode 100644 index 00000000000..01fa155ec4c --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/upgrade-to-helm-3-charts-in-kubernetes-services.md @@ -0,0 +1,71 @@ +--- +title: Upgrade to Helm 3 Charts in Kubernetes Services +description: Upgrade your Harness Kubernetes Service to use Helm 3 charts. +sidebar_position: 380 +helpdocs_topic_id: lk57k7irla +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes how to upgrade your Harness Kubernetes Service to use Helm 3 charts. + +You can use Helm 3 charts in both Kubernetes and native Helm Services. For information on upgrading native Helm Services, see [Upgrade Native Helm 2 Deployments to Helm 3](../helm-deployment/upgrade-native-helm-2-deployments-to-helm-3.md). + +**What's a native Helm deployment in Harness?** Harness provides Kubernetes deployments that use Helm charts without requiring Helm or Tiller to be installed in your target environment. These are called Harness Kubernetes deployments. This is the recommended method. If you want to deploy to a Kubernetes cluster using Helm explicitly, you can use native Helm deployments. You simply choose **Helm** as the **Deployment Type** when you create a Harness Service. + + +### Review: Custom Helm Binaries + +Harness ships Helm binaries with all Harness Delegates. + +If you want the Delegate to use a specific Helm binary, do the following: + +1. 
In the installed Harness Delegate folder **harness-delegate**, open the file **config-delegate.yml**. + 1. For a Helm 2 binary, add `helmPath`. + 2. For a Helm 3 binary, add `helm3Path`. +2. Next, enter the path to the binary. For example: `helm3Path: /usr/local/bin/helm_3`. + +### Option 1: Upgrade Helm Version Number in the Harness UI + +1. In Harness, locate a Kubernetes Service that uses Helm charts, or where you plan on using Helm charts. +2. In **Manifests**, click **Link Remote Manifests**. The **Remote Manifests** settings appear. +3. In **Manifest Format**, **Helm Chart from Helm Repository** will be selected already if you are upgrading an existing Service. For steps on configuring all of these settings, see [Use a Helm Repository with Kubernetes](use-a-helm-repository-with-kubernetes.md). +4. In **Helm Version**, select the Helm version of your chart, such as **v3**.![](./static/upgrade-to-helm-3-charts-in-kubernetes-services-120.png) +5. Click **Submit**. + +You can now use Helm 3 charts. + +### Option 2: Upgrade Helm Version Number in YAML + +1. In Harness, locate a Service that uses Helm charts, or where you plan on using Helm charts. +2. In your Harness Kubernetes Service, click the **Configure As Code** button. The index.yaml for your Service contains the current Helm version used as `helmVersion: V2`:![](./static/upgrade-to-helm-3-charts-in-kubernetes-services-121.png) If `helmVersion` is not listed, do not worry. You can add it. +3. Click **Edit**. +4. Do one of the following: + * Change `helmVersion: V2` to `helmVersion: V3`. + * Add `helmVersion: V3`. +5. Click **Save**. + +You can now use Helm 3 charts. + +### Option 3: Use Helm v3.8.0 Binary + +Currently, this feature is behind the feature flag `HELM_VERSION_3_8_0`. Contact [Harness Support](mailto:support@harness.io) to have it enabled for your account. 
+ +When this feature flag is enabled and you have selected **V3** in **Helm Version** in **Remote Manifests**, Harness will use Helm version v3.8.0 by default. + +### Add Your Helm 3 Charts + +You can add charts using one of the following options: + +* [Link Resource Files or Helm Charts in Git Repos](link-resource-files-or-helm-charts-in-git-repos.md) +* [Use a Helm Repository with Kubernetes](use-a-helm-repository-with-kubernetes.md) + +### Notes + +* You do not need to add a Delegate Profile for Helm 3. Harness includes Helm 3 support in any Delegate that can connect to the target Kubernetes cluster. + +### Next Steps + +* [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code) + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/upload-kubernetes-resource-files.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/upload-kubernetes-resource-files.md new file mode 100644 index 00000000000..7ef781656e2 --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/upload-kubernetes-resource-files.md @@ -0,0 +1,81 @@ +--- +title: Upload Kubernetes Resource Files +description: Upload Kubernetes files and folders and manage them in Harness. +sidebar_position: 80 +helpdocs_topic_id: 2vcxg26xiu +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/category/qfj6m1k2c4). + +Harness includes default Kubernetes resource files you can edit and add to, and the ability to [link to remote Git and Helm repo files](link-resource-files-or-helm-charts-in-git-repos.md), but you might also have resource files you want to upload into Harness. + +Harness enables you to upload files and folders and manage them in the Service **Manifests** section. 
+ +### Before You Begin + +* [Adding and Editing Inline Kubernetes Manifest Files](adding-and-editing-inline-kubernetes-manifest-files.md) + +### Step 1: Delete the Default Files + +1. In your Harness Kubernetes Service, in **Manifests**, click the more options button (**︙**) and click **Delete All Manifest Files**. +2. Click **Confirm**. + +### Step 2: Upload Resource Files + +1. Click the more options button (**︙**) again, and then click **Upload Inline Manifest Files**. The **Upload Inline Manifest Files** settings appear. + ![](./static/upload-kubernetes-resource-files-165.png) +2. Click **Choose** and select folders and files or drag and drop files into the dialog. If you select a folder, all subordinate folders and files are copied. +3. Click **SUBMIT**. Your files are added to the **Manifests** section of the Service. + +The files are added to the root folder. You can create folders and add the files into them. Simply select a file, click the more options button (︙) and click **Rename File**. Add the folder name before the file name, using a forward slash, like **folderA/filename.yaml**. The file is moved into the folder. + +### Option 1: Use Namespace Alternatives + +You might have existing manifest files from other deployments that you want to add to your **Manifests** section, but you do not want to use the same namespace settings, such as the namespace in a Service object: + + +``` +apiVersion: v1 +kind: Service +metadata: + name: my-service + namespace: prod +spec: + type: ExternalName + externalName: my.database.example.com +``` +You can remove existing `namespace` settings in the files you upload into Manifests by selecting the **Remove hard-coded namespaces from resource metadata** option in the **Upload Inline Manifest Files** dialog. + +[![](./static/upload-kubernetes-resource-files-166.png)](./static/upload-kubernetes-resource-files-166.png) + +The uploaded files will have their `namespace` key and value removed. 
Using our Service example, you can see `namespace: prod` is gone: + + +``` +apiVersion: v1 +kind: Service +metadata: + name: my-service +spec: + type: ExternalName + externalName: my.database.example.com +``` +Harness will use the namespace you enter in the Infrastructure Definition **Namespace** field as the namespace for these manifests. + +Here is an Infrastructure Definition with its **Namespace** field: + +![](./static/upload-kubernetes-resource-files-168.png) + +You can also use the expression `namespace: ${infra.kubernetes.namespace}` in your manifest files; at runtime, Harness replaces the expression with the value of the Infrastructure Definition **Namespace** field. + +![](./static/upload-kubernetes-resource-files-169.png) + +Another option is to add `namespace: ${infra.kubernetes.namespace}` in the **values.yaml** file and reference it in your manifest with `namespace: {{ .Values.namespace }}`. + +### Next Steps + +* [Link Resource Files or Helm Charts in Git Repos](link-resource-files-or-helm-charts-in-git-repos.md) + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/use-a-helm-repository-with-kubernetes.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/use-a-helm-repository-with-kubernetes.md new file mode 100644 index 00000000000..d8d4c6e126f --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/use-a-helm-repository-with-kubernetes.md @@ -0,0 +1,141 @@ +--- +title: Use a Helm Repository with Kubernetes +description: Link remote Helm charts in a Helm Repository. +sidebar_position: 90 +helpdocs_topic_id: hddm3rgf1y +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). 
Switch to [NextGen](https://docs.harness.io/article/cifa2yb19a). You can link your Harness Kubernetes Service to remote Helm charts in a Helm Repository, such as AWS S3, Google Cloud Storage (GCS), or a chart repo such as Bitnami. + +You can also use Helm charts in a Git repo. For more information, see [Link Resource Files or Helm Charts in Git Repos](link-resource-files-or-helm-charts-in-git-repos.md). + +### Before You Begin + +* [Override Harness Kubernetes Service Settings](override-harness-kubernetes-service-settings.md) +* [Define Kubernetes Manifests](define-kubernetes-manifests.md) + +### Migrating from a Harness Helm Deployment Type? + +If you are migrating from a Harness Helm deployment type to the Kubernetes deployment type, be aware that Helm charts in Kubernetes V2 require that you set up a Harness [Helm Artifact Server](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server) to connect to your remote Helm chart repo. + +### Step 1: Helm Repository Artifact Server + +Before you can link a Helm Repository to your Harness Kubernetes Service, you create a Harness Artifact Server to connect Harness with a Helm Repository. + +See [Add Helm Repository Artifact Servers](https://docs.harness.io/article/0hrzb1zkog-add-helm-repository-servers). + +### Step 2: Link the Service with a Helm Repository + +1. In your Harness Kubernetes Service, in **Manifests**, click **Link Remote Manifests**. The **Remote Manifests** settings appear. +2. In **Manifest Format**, select **Helm Chart from Helm Repository**. +3. In **Helm Repository**, select the Helm Chart Repository you added as a Harness Artifact Server. For more information, see [Add Helm Repository Artifact Servers](https://docs.harness.io/article/0hrzb1zkog-add-helm-repository-servers). If you are using GCS or a storage service for your Helm Repository, you will see a **Base Path** setting. +4. 
In **Base Path** (GCS or a storage service only), enter the path to the bucket folder containing the charts, or a Workflow variable expression. + 1. If you use a bucket folder, simply enter the name of the folder. Whether you need to specify a single folder (e.g. `charts`) or a folder path (e.g. `helm/charts`) depends on the Helm Chart Repository you added as a Harness Artifact Server. + 2. If you use a Workflow variable expression, you can enter the expression as part of the path. For example, `/Myservice/Chart/${workflow.variables.branchName}/` or simply `${workflow.variables.chartFolder}`. For more information, see [Kubernetes Workflow Variable Expressions](workflow-variables-expressions.md) and [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables). + 3. If the chart is in the **root** folder of the repository location set in the Helm Chart Repository you added as a Harness Artifact Server, leave **Base Path** empty. +5. In **Chart Name**, enter the name of the chart in that repo. For example, we use **nginx**. + In some cases, you might have different charts in different repos, and you do not want to create a new Harness Service for each chart. To address this, you have the following options: + * You can use a Service variable for the **Chart Name** setting. You can then supply the value at deployment runtime. See [Add Service Config Variables](https://docs.harness.io/article/q78p7rpx9u-add-service-level-config-variables). You can also override this setting using an Environment Service Override Variable. See [Override a Service Configuration in an Environment](https://docs.harness.io/article/4m2kst307m-override-service-files-and-variables-in-environments). + * You can also override the Helm chart in the Service using a Helm chart override in an Environment. See [Override Harness Kubernetes Service Settings](override-harness-kubernetes-service-settings.md). +6. In **Chart Version**, enter the chart version to use. 
This is the **version** value in the chart's **Chart.yaml** file. For this guide, we will use **1.0.1**. If you leave this field empty, Harness gets the latest chart. +7. In **Helm Version**, select the Helm version of your chart, such as **v3**. + +When you are finished, the dialog will look like this: + +![](./static/use-a-helm-repository-with-kubernetes-153.png) + +### Option: Skip Versioning for Service + +By default, Harness versions ConfigMaps and Secrets deployed into Kubernetes clusters. In some cases, you might want to skip versioning. + +Typically, to skip versioning in your deployments, you add the annotation `harness.io/skip-file-for-deploy` to your manifests. See [Deploy Manifests Separately using Apply Step](deploy-manifests-separately-using-apply-step.md). + +In some cases, such as when using public manifests or Helm charts, you cannot add the annotation. Or you might have 100 manifests and you only want to skip versioning for 50 of them. Adding the annotation to 50 manifests is time-consuming. + +Instead, enable the **Skip Versioning for Service** option in **Remote Manifests**. + +When you enable **Skip Versioning for Service**, Harness will not perform versioning of ConfigMaps and Secrets for the Service. + +If you have enabled **Skip Versioning for Service** for a few deployments and then disable it, Harness will start versioning ConfigMaps and Secrets. + +### Option: Helm Command Flags + +You can extend the Helm commands that Harness runs when deploying your Helm chart. + +Use **Enable Command Flags** to have Harness run specific Helm commands and their options as part of preprocessing. All the commands you select are run before `helm install/upgrade`. + +Click **Enable Command Flags**, and then select commands from the **Command Flag Type** dropdown. + +Next, in **Input**, add any options for the command. 
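+ +As a sketch, if you select the TEMPLATE command type, the **Input** field might contain an option such as the following (`templates/deployment.yaml` is a hypothetical path inside your chart; `--show-only` is a standard `helm template` flag that renders a single file): + + +``` +--show-only templates/deployment.yaml +``` 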
+ +The `--debug` option is not supported. For Kubernetes deployments using Helm charts from a Helm Repository, the following commands are supported (more might be added): + +* TEMPLATE: `helm template` to render the Helm template files. +* VERSION: `helm version` to validate Helm on the Delegate. +* FETCH: `helm fetch` (Helm v2) or `helm pull` (Helm v3) to get the Helm chart. + +You will see the outputs for the commands you select in the Harness deployment logs. The output will be part of pre-processing and appear before `helm install/upgrade`. + +If you use Helm commands in the Harness Service and in a Workflow deploying that Service, the Helm commands in the Harness Service override the commands in the Workflow. + +#### Harness Variable Expressions are Supported + +You can use [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables) in any of the command options settings. For example, [Service Config variables](https://docs.harness.io/article/q78p7rpx9u-add-service-level-config-variables) and [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). + +### Option: Override Helm Chart Values YAML + +For the **Inline** and **Remote** options, see [Override Values YAML Files](override-values-yaml-files.md). If you are using the Helm Chart from Helm Repository option in **Manifests**, you can override the chart in **Manifests** using one or more values YAML files inside the Helm chart. + +In **Configuration**, in **Values YAML Override**, click the edit icon. + +In **Store Type**, select **From Helm Repository**. + +In **File Path(s)**, enter the file path to the override values YAML file. + +Multiple files can be used. When you enter the file paths, separate the paths using commas. + +Paths listed later take priority over paths listed earlier. + +![](./static/use-a-helm-repository-with-kubernetes-154.png) + +See [Override Values YAML Files](override-values-yaml-files.md). 
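+ +For example, a **File Path(s)** entry listing two hypothetical values files inside the chart might look like this, with keys set in `values-production.yaml` overriding any keys also set in `values.yaml`: + + +``` +values.yaml,values-production.yaml +``` 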
+ +### Example 1: Google GCS and AWS S3 + +![](./static/use-a-helm-repository-with-kubernetes-155.png) + +### Example 2: Workflow Variable Expression + +Here is an example using a Workflow variable expression. You can see the variable created in the Workflow's **Workflow Variables** section, referenced using an expression in **Remote Manifests**, and then a value provided for the variable in the deployment dialog that matches the chart folder's name. + +![](./static/use-a-helm-repository-with-kubernetes-156.png) + +Click **Submit**. The Helm repo is added to **Manifests**. + +### Example 3: Deploying Kubernetes Service Linked to a Helm Repository + +When you deploy a Workflow using a Harness Kubernetes Service set up with a Helm Repository, you will see Harness fetch the chart: + +[![](./static/use-a-helm-repository-with-kubernetes-157.png)](./static/use-a-helm-repository-with-kubernetes-157.png) + +Next, you will see Harness initialize using the chart: + +[![](./static/use-a-helm-repository-with-kubernetes-159.png)](./static/use-a-helm-repository-with-kubernetes-159.png) + +Harness does not support the following objects when using Helm charts in Harness Kubernetes deployments: ClusterRoleBindingList, RoleBindingList, RoleList. +If you use these objects in your chart, Harness will consider the chart invalid and fail the deployment. +See the Kubernetes API docs for information on these objects. The Helm version info is displayed in the Service dashboard: + +![](./static/use-a-helm-repository-with-kubernetes-161.png) + +### Notes + +Helm Dependencies are supported with charts in Helm Repositories, not with Helm charts in Git repos. + +### Next Steps + +You can override the Helm Repository in a Harness Environment **Service Configuration Overrides** section. See [Override Harness Kubernetes Service Settings](override-harness-kubernetes-service-settings.md). 
+ diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/use-go-templating-in-kubernetes-manifests.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/use-go-templating-in-kubernetes-manifests.md new file mode 100644 index 00000000000..a2f3acb6e36 --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/use-go-templating-in-kubernetes-manifests.md @@ -0,0 +1,314 @@ +--- +title: Use Go Templating in Kubernetes Manifests +description: Templatize your manifests. +sidebar_position: 60 +helpdocs_topic_id: mwy6zgz8gu +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/qvlmr4plcp). To make your Kubernetes manifests reusable and dynamic, you can use [Go templating](https://godoc.org/text/template) and Harness built-in variables in combination in your **Manifests** files. + +The inline values.yaml file used in a Harness Service does not support Helm templating, only Go templating. Helm templating is fully supported in the remote Helm charts you add to your Harness Service. + +### Before You Begin + +Ensure you are familiar with the following: + +* [Define Kubernetes Manifests](define-kubernetes-manifests.md) + +### Step 1: Review the Default Values File + +Harness [variable expressions](https://docs.harness.io/article/9dvxcegm90-variables) may be added to values.yaml, not the manifests themselves. This provides more flexibility. + +1. 
Look at the default values.yaml file to see the variables used in the default configuration files: + + ``` + # This will be used as {{.Values.name}} + name: harness-example + + # This will be used as {{int .Values.replicas}} + replicas: 1 + + # This will be used as {{.Values.image}} + image: ${artifact.metadata.image} + ``` + The variable `${artifact.metadata.image}` is a Harness variable for referencing the metadata of the Artifact Source. For more information about Harness variables, see [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables). + +2. Look at the default object descriptions to understand how easy it is to use Kubernetes in Harness. + + ``` + apiVersion: v1 # for versions before 1.9.0 use apps/v1beta2 + kind: ConfigMap # store non-confidential data in key-value pairs + metadata: + name: {{.Values.name}}-config # name is taken from values.yaml + data: + key: value # example key-value pair + --- + apiVersion: apps/v1 + kind: Deployment # describe the desired state of the cluster + metadata: + name: {{.Values.name}}-deployment # name is taken from values.yaml + spec: + replicas: {{int .Values.replicas}} # tells deployment to run pods matching the template + selector: + matchLabels: + app: {{.Values.name}} # name is taken from values.yaml + template: + metadata: + labels: + app: {{.Values.name}} # name is taken from values.yaml + spec: + containers: + - name: {{.Values.name}} # name is taken from values.yaml + image: {{.Values.image}} # image is taken from values.yaml + envFrom: + - configMapRef: + name: {{.Values.name}}-config # name is taken from values.yaml + ports: + - containerPort: 80 + ``` + + +### Step 2: Use Expression Builder + +When you edit manifests in the Harness Service, you can enter expressions by entering `{{.` and Harness will fetch the values available in the values.yaml file. 
+ +![](./static/use-go-templating-in-kubernetes-manifests-206.png) + +This expression builder helps to ensure that you do not accidentally enter an incorrect value in your manifests. + +### Example 1: Use a Harness Variable in a Manifest + +Harness built-in variables can be used in the values.yaml file, and are evaluated at runtime. For a list of Harness variables, see [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables). + +In the values.yaml file, it will look like this: + + +``` +name: ${serviceVariable.serviceName} +``` +In a manifest file, it will be used like this: + + +``` +apiVersion: apps/v1 +kind: Deployment +metadata: + name: {{.Values.name}} # ${serviceVariable.serviceName} +spec: + selector: + matchLabels: + app: nginx + replicas: 1 + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:latest + ports: + - containerPort: 80 +``` +### Option: Skip Rendering of Manifest Files + +By default, Harness uses Go templating and a values.yaml for templating manifest files. See [Use Go Templating in Kubernetes Manifests](use-go-templating-in-kubernetes-manifests.md). + +In some cases, you might not want to use Go templating because your manifests use some other formatting. + +To skip rendering your manifest files using Go templating, use the **Apply** step instead of the default Kubernetes Workflow steps (Rollout, Canary Deployment, Stage Deployment, etc) and its **Skip Rendering K8s manifest files** option. + +See [Deploy Manifests Separately using Apply Step](deploy-manifests-separately-using-apply-step.md). + +### Go Templating Examples + +You can use piping and Go actions, arguments, pipelines, and variables in your manifests. + +Let's look at some examples. + +#### Quotation Marks + +The following example puts quotation marks around whatever string is in the `something` value. 
This can handle values that could otherwise be interpreted as numbers, or empty values, which would cause an error. + + +``` +{{.Values.something | quote}} +``` +You should use single quotes if you are using a value that might contain a YAML-like structure that could cause issues for the YAML parser. + +For example, suppose you use a [Service Config variable](using-harness-config-variables-in-manifests.md) or [Environment Service Override variable](override-harness-kubernetes-service-settings.md) to replace a value in the values.yaml file, and the evaluated replacement value has a `"`, `:`, or `'` in it: + +`TOPIC_MAP: "foo:foo-1:foo-1-trigger"` + +In this case, put single quotes around the value: + +`TOPIC_MAP: '${serviceVariable.TOPIC_MAP}'` + +#### Verbatim + +Use `indent` and `toYaml` to put something from the values file into the manifest verbatim. + + +``` +{{.Values.env.config | toYaml | indent 2}} +``` +#### Indexing Structures in Templates + +If the data passed to the template is a map, slice, or array, it can be indexed from the template. + +You can use `{{index x number}}` where `index` is the keyword, `x` is the data, and `number` is an integer index. + +For example, `{{index names 2}}` is equivalent to `names[2]`. We can add more integers to index deeper into data. `{{index names 2 3 4}}` is equivalent to `names[2][3][4]`. 
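+ +As a minimal sketch, given a hypothetical values.yaml entry: + + +``` +tracks: + - stable + - canary +``` +`{{index .Values.tracks 1}}` renders as `canary`, because `index` retrieves the element at position 1 of the `tracks` list (indexes start at 0). 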
+ +Let's look at an example: + + +``` +{{- if .Values.env.config}} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{.Values.name}}-{{.Values.track}} + labels: + app: {{.Values.name}} + track: {{.Values.track}} + annotations: + harness.io/skip-versioning: "true" +data: +{{- if hasKey .Values.env .Values.track}} +{{index .Values.env .Values.track "config" | mergeOverwrite .Values.env.config | toYaml | indent 2}} +{{- else }} +{{.Values.env.config | toYaml | indent 2}} +{{- end }} +--- +{{- end}} + +{{- if .Values.env.secrets}} +apiVersion: v1 +kind: Secret +metadata: + name: {{.Values.name}}-{{.Values.track}} + labels: + app: {{.Values.name}} + track: {{.Values.track}} +stringData: +{{- if hasKey .Values.env .Values.track}} +{{index .Values.env .Values.track "secrets" | mergeOverwrite .Values.env.secrets | toYaml | indent 2}} +{{- else }} +{{.Values.env.secrets | toYaml | indent 2}} +{{- end }} +--- +{{- end}} +``` +#### Iterate Over Existing Items + +Here is an example that inserts an element into an existing list in a manifest, for an Istio VirtualService and DestinationRule. + +The critical line is: + +`{{- range $track := split " " .Values.nonPrimary }}` + +This line iterates over a list of existing items, where the list was computed with a simple [Shell Script](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) command and output to the context prior to the rollout. 
+ +VirtualService: + + +``` +apiVersion: networking.istio.io/v1alpha3 +kind: VirtualService +metadata: + name: {{ .Values.name }}-gateway-vs + labels: + app: {{ .Values.name }} + group: multiservice +spec: + hosts: + - "*" + gateways: + - ingressgateway + http: +{{- if .Values.nonPrimary }} +{{- range $track := split " " .Values.nonPrimary }} +{{- range $uri := $.Values.uri }} + - name: {{ $track }} + match: + - headers: + x-pcln-track: + exact: {{ $track }} + uri: + {{ $uri.matchType }}: {{ $uri.matchString }} +{{- if $.Values.rewrite }} + rewrite: + uri: {{ $.Values.rewrite }} +{{- end }} + route: + - destination: + host: {{ $.Values.name }} + subset: {{ $.Values.name }}-{{ $track }} +{{- end }} +{{- end }} +{{- end }} +{{- if .Values.hasPrimary }} + - name: primary + match: +{{- range $uri := .Values.uri }} + - uri: + {{ $uri.matchType }}: {{ $uri.matchString }} +{{- end }} +{{- if .Values.rewrite }} + rewrite: + uri: {{ .Values.rewrite }} +{{- end }} + route: + - destination: + host: {{ .Values.name }} + subset: {{ .Values.name }}-primary +{{- end }} +``` +DestinationRule: + + +``` +apiVersion: networking.istio.io/v1alpha3 +kind: DestinationRule +metadata: + name: {{ .Values.name }} + labels: + app: {{ .Values.name }} + group: multiservice +spec: + host: {{ .Values.name }} + subsets: +{{- if .Values.nonPrimary }} +{{- range $track := split " " .Values.nonPrimary }} + - name: {{ $.Values.name }}-{{ $track }} + labels: + track: {{ $track }} +{{- end }} +{{- end }} +{{- if .Values.hasPrimary }} + - name: {{ .Values.name }}-primary + labels: + track: primary +{{- end }} +``` +For more information, see the [Go text template documentation](https://golang.org/pkg/text/template/). + +### Notes + +* Harness uses Go template version 0.4. If you are used to Helm templating, you can download Go template and try it out locally to find out if your manifests will work. This can help you avoid issues when adding your manifests to Harness. 
+You can install Go template version 0.4 locally to test your manifests. + + Mac OS: curl -O https://app.harness.io/public/shared/tools/go-template/release/v0.4/bin/darwin/amd64/go-template + + Linux: curl -O https://app.harness.io/public/shared/tools/go-template/release/v0.4/bin/linux/amd64/go-template + + Windows: curl -O https://app.harness.io/public/shared/tools/go-template/release/v0.4/bin/windows/amd64/go-template + +For steps on doing local Go templating, see [Harness Local Go-Templating](https://community.harness.io/t/harness-local-go-templating/460) on Harness Community. +* Harness uses an internal build of Go templating. It cannot be upgraded. Harness uses [Sprig template functions](http://masterminds.github.io/sprig/), excluding those functions that provide access to the underlying OS (env, expandenv) for security reasons. +In addition, Harness uses the functions ToYaml, FromYaml, ToJson, FromJson. + +### Next Steps + +* [Adding and Editing Inline Kubernetes Manifest Files](adding-and-editing-inline-kubernetes-manifest-files.md) +* [Link Resource Files or Helm Charts in Git Repos](link-resource-files-or-helm-charts-in-git-repos.md) + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/use-helm-chart-hooks-in-kubernetes-deployments.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/use-helm-chart-hooks-in-kubernetes-deployments.md new file mode 100644 index 00000000000..15a4e507edb --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/use-helm-chart-hooks-in-kubernetes-deployments.md @@ -0,0 +1,161 @@ +--- +title: Use Helm Chart Hooks in Kubernetes Deployments +description: Use your Helm Chart Hooks in Harness deployments. 
+sidebar_position: 390 +helpdocs_topic_id: qk178jyns7 +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can use [Helm chart hooks](https://helm.sh/docs/topics/charts_hooks/) in your Kubernetes deployments to intervene at specific points in a release cycle. + +Harness provides a few ways to integrate your Helm chart hooks into your Harness deployments. You can use a native Helm deployment in Harness or use a Harness Kubernetes deployment which supports Canary and Blue/Green strategies. + +This topic describes the available options. + +### Before You Begin + +* **Helm chart hooks** — We assume you are familiar with Helm chart hooks (sometimes called *lifecycle hooks*). If you are new to them, review Helm's [docs](https://helm.sh/docs/topics/charts_hooks/). +* **Harness Kubernetes and Helm differences** — Harness includes both Kubernetes and Helm deployments, and you can use Helm charts in both. Here's the difference: + + Harness [Kubernetes Deployments](kubernetes-deployments-overview.md) allow you to use your own Kubernetes manifests (remote or local) or a Helm chart, and Harness executes the Kubernetes API calls to build everything without Helm and Tiller needing to be installed in the target cluster. + + For Harness [Helm Deployments](../helm-deployment/helm-deployments-overview.md), you must always have Helm and Tiller running on one pod in your target cluster. Tiller makes the API calls to Kubernetes in these cases. +* **Apply step** — The Harness Workflow Apply step allows you to deploy any resource you have set up in the Service **Manifests** section at any point in your Workflow. See [Deploy Manifests Separately using Apply Step](deploy-manifests-separately-using-apply-step.md). + +### Option 1: Kubernetes and the Apply Step + +This is the recommended method. It allows you to use Harness Kubernetes Canary and Blue/Green deployments and to apply the hooks flexibly with the Apply step. 
+
+A Harness Kubernetes deployment runs `kubectl apply` for manifest files. There is no Tiller involved in this process because Harness is not running any Helm commands. The Harness native Helm implementation can only perform [Basic deployments](../concepts-cd/deployment-types/deployment-concepts-and-strategies.md).
+
+Let's implement a hook using the Harness Kubernetes implementation, with no Helm or Tiller.
+
+Here is a typical example of a Kubernetes Job using a Helm chart hook:
+
+
+```
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: "{{.Release.Name}}"
+  labels:
+    app.kubernetes.io/managed-by: {{.Release.Service | quote }}
+    app.kubernetes.io/instance: {{.Release.Name | quote }}
+    app.kubernetes.io/version: {{ .Chart.AppVersion }}
+    helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
+  annotations:
+    # This is what defines this resource as a hook. Without this line, the
+    # job is considered part of the release.
+    "helm.sh/hook": post-install
+    "helm.sh/hook-weight": "-5"
+    "helm.sh/hook-delete-policy": hook-succeeded
+spec:
+  template:
+    metadata:
+      name: "{{.Release.Name}}"
+      labels:
+        app.kubernetes.io/managed-by: {{.Release.Service | quote }}
+        app.kubernetes.io/instance: {{.Release.Name | quote }}
+        helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
+    spec:
+      restartPolicy: Never
+      containers:
+      - name: post-install-job
+        image: "alpine:3.3"
+        command: ["/bin/sleep","{{default "10" .Values.sleepyTime}}"]
+```
+The hook is a `post-install` hook. It executes the Job after all other resources are loaded into Kubernetes.
+
+If you are using a *pre-install* hook, ensure its `hook-weight` is **less than** the `hook-weight` of the Job.
+
+To implement Helm chart hooks in Harness Kubernetes deployments, you remove the hook annotations and split out the Kubernetes Job as a separate YAML file in your Harness Service **Manifests**.
+
+Next, you set when the Job is to be executed using the **Apply** step in your Workflow. 
Where you place the Apply step in the Workflow takes the place of the Helm hook annotation values (pre-install, post-delete, and so on).
+
+Here is an example of a Phase in a Harness Canary Workflow showing where all of the Helm chart hooks can be applied using Apply steps:
+
+![](./static/use-helm-chart-hooks-in-kubernetes-deployments-47.png)
+
+Using the example Job above, the hook annotations are removed and the Kubernetes Job is set as a separate YAML file. In your Harness Service **Manifests** section, the Job manifest should look something like this:
+
+
+```
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: "{{.Release.Name}}"
+  labels:
+    app.kubernetes.io/managed-by: {{.Release.Service | quote }}
+    app.kubernetes.io/instance: {{.Release.Name | quote }}
+    app.kubernetes.io/version: {{ .Chart.AppVersion }}
+spec:
+  template:
+    metadata:
+      name: "{{.Release.Name}}"
+      labels:
+        app.kubernetes.io/managed-by: {{.Release.Service | quote }}
+        app.kubernetes.io/instance: {{.Release.Name | quote }}
+    spec:
+      restartPolicy: Never
+      containers:
+      - name: post-install-job
+        image: "alpine:3.3"
+```
+To apply the Job in a Workflow, you add the [Apply](deploy-manifests-separately-using-apply-step.md) step to your Workflow and reference the job.yaml in Service **Manifests**:
+
+![](./static/use-helm-chart-hooks-in-kubernetes-deployments-48.png)
+
+Since the original Helm chart hook was a `post-install`, you simply place the **Apply** step after the **Canary Deployment** step in your Workflow.
+
+![](./static/use-helm-chart-hooks-in-kubernetes-deployments-49.png)
+
+You can see the flexibility available for deploying your manifests in any order you want.
+
+For example, if there is a `pre-install` Helm chart hook, you can use Apply to place this job.yaml *before* the Canary Deployment step.
+
+With this method, you can integrate your hooks using the Canary and Blue/Green strategies.
+
+### Option 2: Use Native Helm
+
+You can also use a Harness native Helm implementation. 
This utilizes Helm and Tiller capabilities. + +As noted, you cannot use Canary deployment or Blue/Green deployments. Native Helm deployments can only leverage a Basic deployment. + +For a Harness native Helm implementation, you simply link to your remote Helm chart in your Harness Service. + +![](./static/use-helm-chart-hooks-in-kubernetes-deployments-50.png) + +And then deploy the chart using a Harness Basic Workflow: + +![](./static/use-helm-chart-hooks-in-kubernetes-deployments-51.png) + +You Helm chart hooks are implement by Helm and Tiller in your target cluster. + +### Option: Delegate Selector + +The Apply step has the **Delegate Selector** option. + +If your Workflow Infrastructure Definition's Cloud Provider uses a Delegate Selector (supported in Kubernetes Cluster and AWS Cloud Providers), then the Workflow uses the selected Delegate for all of its steps. + +In these cases, you shouldn't add a Delegate Selector to any step in the Workflow. The Workflow is already using a Selector via its Infrastructure Definition's Cloud Provider. + +If your Workflow Infrastructure Definition's Cloud Provider isn't using a Delegate Selector, and you want this Workflow step to use a specific Delegate, do the following: + +In **Delegate Selector**, select the Selector for the Delegate(s) you want to use. You add Selectors to Delegates to make sure that they're used to execute the command. For more information, see [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors). + +Harness will use Delegates matching the Selectors you add. + +If you use one Selector, Harness will use any Delegate that has that Selector. + +If you select two Selectors, a Delegate must have both Selectors to be selected. That Delegate might also have other Selectors, but it must have the two you selected. + +You can use expressions for Harness built-in variables or Account Default variables in **Delegate Selectors**. 
When the variable expression is resolved at deployment runtime, it must match an existing Delegate Selector. + +For example, if you have a Delegate Selector **prod** and the Workflow is using an Environment also named **prod**, the Delegate Selector can be `${env.name}`. This is very useful when you match Delegate Selectors to Application component names such as Environments, Services, etc. It's also a way to template the Delegate Selector setting. + +### Related + +* [Deploy Manifests Separately using Apply Step](deploy-manifests-separately-using-apply-step.md) +* [Create a Kubernetes Blue/Green Deployment](create-a-kubernetes-blue-green-deployment.md) +* [Create a Kubernetes Canary Deployment](create-a-kubernetes-canary-deployment.md) +* [Create a Kubernetes Rolling Deployment](create-a-kubernetes-rolling-deployment.md) +* [Deployment Concepts and Strategies](../concepts-cd/deployment-types/deployment-concepts-and-strategies.md) + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/use-kustomize-for-kubernetes-deployments.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/use-kustomize-for-kubernetes-deployments.md new file mode 100644 index 00000000000..a2a84e581aa --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/use-kustomize-for-kubernetes-deployments.md @@ -0,0 +1,672 @@ +--- +title: Use Kustomize for Kubernetes Deployments (FirstGen) +description: Use kustomizations in your Kubernetes deployments. +sidebar_position: 330 +helpdocs_topic_id: zrz7nstjha +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness supports [Kustomize](https://kustomize.io/) kustomizations in your Kubernetes deployments. You can use overlays, multibase, plugins, sealed secrets, etc, just as you would in any native kustomization. + +**New to Kustomize?** In a nutshell, kustomizations let you create specific Kubernetes deployments while leaving the original manifests untouched. 
You drop a kustomization.yaml file next to your Kubernetes YAML files and it defines new behavior to be performed during deployment. +Please review the video [Kustomize: Deploy Your App with Template Free YAML](https://youtu.be/ahMIBxufNR0) (30min), the [Kustomize Glossary](https://kubectl.docs.kubernetes.io/references/kustomize/glossary/), and the [Declarative Management of Kubernetes Objects Using Kustomize](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/). + +### Before You Begin + +* [Connect to Your Target Kubernetes Platform](connect-to-your-target-kubernetes-platform.md) +* [Kubernetes Quickstart](https://docs.harness.io/article/7in9z2boh6-kubernetes-quickstart) + +Kustomize is supported in Harness Kubernetes v2 Services only. This is the default type, but some Harness users might be using a legacy Kubernetes v1 Service. + +### Visual Summary + +The following diagram shows a very simple topology for implementing Kustomize. + +![](./static/use-kustomize-for-kubernetes-deployments-75.png) + +The Harness Kubernetes Delegate runs in the target cluster with Kustomize pre-installed. The Delegate obtains kustomization.yaml and resource files from a Git repo. The Delegate deploys the Kubernetes objects declared using Kustomize in the target pods. + +In this diagram we use Google GCP, but Harness deploys to any Kubernetes cluster vendor. + +### Video Summary + + + + +### Limitations + +Currently, Harness support for Kustomize has the following limitations: + +* Harness variables and secrets are not supported. + + Harness variables are not supported because Kustomize follows a template-free methodology. + + Use [sealed secrets](https://github.com/bitnami-labs/sealed-secrets) instead. See **Sealed Secrets** in [How to keep your Kubernetes secrets secure in Git](https://learnk8s.io/kubernetes-secrets-in-git) by Omer Levi Hevroni. 
+* Harness artifacts are not supported, as described in [Review: Artifact Sources and Kustomization](#review_artifact_sources_and_kustomization).
+* Harness does not use Kustomize for rollback. Harness renders the templates using Kustomize and then passes them onto kubectl. A rollback works exactly as it does for native Kubernetes.
+
+### Review: Kustomize and Harness Delegates
+
+All Harness Delegates include kustomize by default. There is no installation required.
+
+Your Delegate hosts, typically a pod in the target cluster, require outbound HTTPS/SSH connectivity to Harness and your Git repo.
+
+The Delegate you use for Kustomize deployments must have access to the Git repo containing your kustomize and resource files.
+
+The remainder of this topic assumes you have a running Harness Delegate and Cloud Provider connection. For details on setting those up, see [Connect to Your Target Kubernetes Platform](connect-to-your-target-kubernetes-platform.md).
+
+### Step 1: Connect to Your Kustomize Repo
+
+You add a connection to the repo containing your kustomize and resource files as a Harness Source Repo Provider.
+
+For details on adding a Source Repo Provider, see [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers).
+
+Here is a quick summary:
+
+1. In Harness, click **Setup**, and then **Connectors**.
+2. Click **Source Repo Providers**, and then click **Add Source Repo Provider**.
+3. Provide the following settings and click **Submit**:
+
+
+
+| | |
+| --- | --- |
+| | * **Display Name:** You will use this name to select the repo in your Harness Service.
+* **URL:** Provide the Git repo URL.
+* **Username/password:** Enter your Git credentials.
+* **Branch:** Enter the name of the branch you want to use, such as **master**.
+ |
+
+The Delegate you use for Kustomize deployments must have access to the Git repo containing your kustomize and resource files.
+
+Now you have a connection to your kustomize and resource files. 
Next, you can identify these files as Remote Manifests in a Harness Service.
+
+The following steps assume you have created a Harness Service for your Kubernetes deployment. For details, see [Create the Harness Kubernetes Service](define-kubernetes-manifests.md#step-1-create-the-harness-kubernetes-service).
+
+### Review: Artifact Sources and Kustomization
+
+Typically, Harness Services are configured with an Artifact Source. This is the container image or other artifact that Harness will deploy. For Kustomize, you do not specify an Artifact Source in your Harness Service.
+
+The artifact you want to deploy must be specified in a spec (for example, deployment.yaml). If the image is in a public Docker Hub repo, you can just list its name:
+
+
+```
+...
+spec:
+  containers:
+  - name: app
+    image: pseudo/your-image:latest
+...
+```
+If your image is hosted in a private Docker Hub repo, you need to specify an [imagePullSecrets](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) in the Pod spec:
+
+
+```
+...
+spec:
+  containers:
+  - name: app
+    image: pseudo/your-image:latest
+  imagePullSecrets:
+  - name: dockerhub-credential
+...
+```
+### Step 2: Add Manifests and Kustomization
+
+1. In your Harness Service, in **Manifests**, click **Link Remote Manifests**.
+
+![](./static/use-kustomize-for-kubernetes-deployments-76.png)
+
+2. In **Remote Manifests**, in **Manifest Format**, click **Kustomization Configuration**.
+3. Enter the following settings and click **Submit**.
+
+
+
+| | |
+| --- | --- |
+| | * **Source Repository:** Select the Source Repo Provider connection to your repo.
+* **Commit ID:** Select **Latest from Branch** or **Specific Commit ID**. Do one of the following:
+ + **Branch:** Enter the branch name, such as **master**.
+ + **Commit ID:** Enter the Git commit ID.
+* **Path to kustomization directory:** This setting is discussed below. 
+* **Path to Kustomize plugin on Delegate:** Enter the path to the plugin installed on the Delegate. This setting and using plugins are discussed later in this topic. + | + +Once you have set up **Kustomization Configuration**, you can use the Service in a Harness Workflow. There are no other Kustomize-specific settings to configure in Harness. + +#### Path to Kustomization Directory + +You can manually enter the file path to your [kustomization root](https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#kustomization-root): The directory that contains a kustomization.yaml file in your repo. You do not need to enter the filename. + +![](./static/use-kustomize-for-kubernetes-deployments-77.png) + +If you are using overlays, enter the path to the overlay kustomization.yaml. + +As explained below, you can use Harness variable expressions in **Path to kustomization directory** to dynamically select bases for overlays. + +#### Skip Versioning for Service + +By default, Harness versions ConfigMaps and Secrets deployed into Kubernetes clusters. In some cases, you might want to skip versioning. + +Typically, to skip versioning in your deployments, you add the annotation `harness.io/skip-file-for-deploy` to your manifests. See [Deploy Manifests Separately using Apply Step](deploy-manifests-separately-using-apply-step.md). + +In some cases, such as when using public manifests or Helm charts, you cannot add the annotation. Or you might have 100 manifests and you only want to skip versioning for 50 of them. Adding the annotation to 50 manifests is time-consuming. + +Instead, enable the **Skip Versioning for Service** option in **Remote Manifests**. + +When you enable **Skip Versioning for Service**, Harness will not perform versioning of ConfigMaps and Secrets for the Service. + +If you have enabled **Skip Versioning for Service** for a few deployments and then disable it, Harness will start versioning ConfigMaps and Secrets. 
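+For the annotation approach described above, a sketch of what a manifest with the skip annotation might look like follows. The resource name and data are illustrative, and the annotation value shown assumes a boolean-style string:
+
+
+```
+apiVersion: v1
+kind: Secret
+metadata:
+  name: example-creds
+  annotations:
+    # Tells Harness to skip this file during the main deployment,
+    # so it is not versioned. Deploy it with an Apply step instead.
+    harness.io/skip-file-for-deploy: "true"
+type: Opaque
+stringData:
+  username: example-user
+```
+See [Deploy Manifests Separately using Apply Step](deploy-manifests-separately-using-apply-step.md) for how skipped files are deployed.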
+
+#### Review: Artifact Sources and Kustomization
+
+You can list artifacts in two ways:
+
+* Artifacts can be hardcoded in the deployment YAML file deployed using your Kustomization files.
+* You can add artifacts to the Service **Artifact Source** section and reference them in Kustomize Patch files using the Harness variable `${artifact.metadata.image}`. See [Option: Kustomize Patches](#option_kustomize_patches) below, and [Built-in Variables List](https://docs.harness.io/article/aza65y4af6-built-in-variables-list).
+
+### Option: Kustomize Patches
+
+Currently, this feature is behind the Feature Flag `KUSTOMIZE_PATCHES_CG`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. When you enable this Feature Flag, you will be able to use Kustomize version 4.0.0.
+
+You cannot use Harness variables in the base manifest or kustomization.yaml. You can only use Harness variables in kustomize patches you add in **Kustomize Patches**.
+
+Kustomize patches override values in the base manifest. Harness supports the `patchesStrategicMerge` patch type.
+
+For example, let's say you have a simple kustomization.yaml for your **application** folder like this:
+
+
+```
+resources:
+  - namespace.yaml
+  - deployment.yaml
+  - service.yaml
+  - configmap.yaml
+```
+And you have an overlay for a production environment that points to the **application** folder like this:
+
+
+```
+resources:
+  - ../../application
+namePrefix: nonpro-
+configMapGenerator:
+- name: example-config
+  namespace: default
+  #behavior: replace
+  files:
+    - configs/config.json
+patchesStrategicMerge:
+  - env.yaml
+```
+The `patchesStrategicMerge` label identifies the location of the patch **env.yaml**, which looks like this:
+
+
+```
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: example-deploy
+spec:
+  template:
+    spec:
+      containers:
+        - name: example-app
+          env:
+          - name: ENVIRONMENT
+            value: Production
+```
+As you can see, it patches in a new environment variable, `ENVIRONMENT`. 
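+If you build this overlay locally, the patch is merged into the base Deployment and the `namePrefix` is applied. Under these assumptions, the rendered Deployment might look like this (a sketch showing only the affected fields):
+
+
+```
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  # namePrefix from the overlay is applied to the base name
+  name: nonpro-example-deploy
+spec:
+  template:
+    spec:
+      containers:
+      - name: example-app
+        env:
+        # added by the env.yaml strategic merge patch
+        - name: ENVIRONMENT
+          value: Production
+```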
+
+Here's what the patching looks like side-by-side:
+
+![](./static/use-kustomize-for-kubernetes-deployments-78.png)
+
+When the kustomization.yaml is deployed, the patch is rendered and the environment variable is added to the deployment.yaml that is deployed.
+
+#### Adding Kustomize Patches
+
+You cannot use Harness variables in the base manifest or kustomization.yaml. You can only use Harness variables in kustomize patches you add in **Kustomize Patches**.
+
+In **Service**, in **Configuration**, in **Kustomize Patches**, click **Add Patches**.
+
+![](./static/use-kustomize-for-kubernetes-deployments-79.png)
+
+You can add multiple files by using **Add Patches** multiple times.
+
+In **Store Type**, select **Inline** or **Remote**.
+
+For **Inline**, enter the patch YAML, and click **Submit**.
+
+For **Remote**, in **Source Repository**, select your Source Repo Connector. See [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers).
+
+For **Commit ID**, select whether to use the latest branch or a specific commit ID/Git tag.
+
+In **Branch**/**Commit ID**, enter the branch or commit ID/Git tag.
+
+In **File/Folder Path**, enter the path to the patch file(s) from the root of the repo. The file you add should be the same file listed in `patchesStrategicMerge` of the main kustomize file in your Service.
+
+The order in which you add file paths for patches in **File/Folder Path** is the same order that Harness applies the patches during the kustomization build.
+
+Small patches that do one thing are recommended. For example, create one patch for increasing the deployment replica number and another patch for setting the memory limit.
+
+Click **Submit**. The patch file(s) are added to **Kustomize Patches**.
+
+When the main kustomization.yaml is deployed, the patch is rendered and its overrides are added to the deployment.yaml that is deployed. 
+
+##### How Harness Uses patchesStrategicMerge
+
+If the `patchesStrategicMerge` label is missing from the kustomization YAML file, but you have added Kustomize Patches to your Harness Service, Harness will add the Kustomize Patches you added in Harness to the `patchesStrategicMerge` in the kustomization file.
+
+If you have hardcoded patches in `patchesStrategicMerge` but did not add these patches to Harness as Kustomize Patches, Harness will ignore them.
+
+#### Using Harness Variables in Patches
+
+Currently, this feature is behind the Feature Flag `KUSTOMIZE_PATCHES_CG`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+Kustomize does not natively support variable substitution, but Harness supports variable substitution using [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables) in Kustomize patches.
+
+This allows you to configure any patch YAML labels as Harness variable expressions and replace those values at Pipeline runtime.
+
+Let's look at an example.
+
+Here is the deployment.yaml used by our kustomization:
+
+
+```
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: example-deploy
+  namespace: default
+  labels:
+    app: example-app
+  annotations:
+spec:
+  selector:
+    matchLabels:
+      app: example-app
+  replicas: 1
+  strategy:
+    type: RollingUpdate
+    rollingUpdate:
+      maxSurge: 1
+      maxUnavailable: 0
+  template:
+    metadata:
+      labels:
+        app: example-app
+    spec:
+      containers:
+        - name: example-app
+          image: harness/todolist-sample:latest
+          imagePullPolicy: Always
+          ports:
+            - containerPort: 5000
+```
+You cannot use Harness variables in the base manifest or kustomization.yaml. You can only use Harness variables in kustomize patches you add in **Kustomize Patches**.
+
+You add the patch files that will patch deployment.yaml to **Kustomize Patches**. Only these patch files can use Harness variables.
+
+We're going to use variables for `replicas` and `image`. 
+ +Let's look at the Harness variables in our Service. Here are two Service [Config Variables](https://docs.harness.io/article/q78p7rpx9u-add-service-level-config-variables): + +![](./static/use-kustomize-for-kubernetes-deployments-80.png) + +One variable is for the `image` and another for the `replicas` count. + +A patch using these variables will look like this: + + +``` +apiVersion: apps/v1 +kind: Deployment +metadata: + name: example-deploy + namespace: default +spec: + template : + spec: + containers: + - name: example-app + image: ${serviceVariable.image} + +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: example-deploy + namespace: default +spec: + replicas: ${serviceVariable.replica} +``` +Add this patch in the Service **Kustomize Patches**. + +Now, when the Pipeline is run, the values for the two variables are rendered in the patch YAML and then the patch is applied to the deployment.yaml. + +If you look at the Initialize phase of the deployment step (in Rolling, Canary, etc), you can see the variable values rendered in the Deployment manifest. + +#### Using Harness Secrets in Patches + +You can also use [Harness secrets](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets) in patches. + +For example, let's say we have two secrets, one for `image` and one for `app`: + +![](./static/use-kustomize-for-kubernetes-deployments-81.png) + +The following patch uses these secrets for `image` and `app`, referencing them using the expression `${secrets.getValue("[secret name]")}`. + + +``` +apiVersion: apps/v1 +kind: Deployment +metadata: + name: example-deploy + namespace: default +spec: + template : + spec: + containers: + - name: example-app + image: ${secrets.getValue("image-example")} + +--- +apiVersion: v1 +kind: Service +metadata: + name: example-service + namespace: default +spec: + selector: + app: ${secrets.getValue("app")} +``` +The secret output in the manifest will be asterisks (\*). The secret value is not displayed. 
+
+#### Override Patches in Environments
+
+You can override the Service settings for **Kustomize Patches** in a Harness Environment using **Service Configuration Overrides**.
+
+Click **Service Configuration Overrides**.
+
+In **Service**, select the Service for your kustomization that has **Kustomize Patches** configured.
+
+In **Override Type**, click **Kustomize Patches**.
+
+![](./static/use-kustomize-for-kubernetes-deployments-82.png)
+
+In **Store Type**, select **Inline** or **Remote**.
+
+For **Inline**, enter the patch YAML, and click **Submit**.
+
+For **Remote**, in **Source Repository**, select your Source Repo Connector. See [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers).
+
+For **Commit ID**, select whether to use the latest branch or a specific commit ID/Git tag.
+
+In **Branch**/**Commit ID**, enter the branch or commit ID/Git tag.
+
+In **File/Folder Path**, enter the path to the patch file(s) from the root of the repo. The file you add should be the same file listed in `patchesStrategicMerge` of the main kustomize file in your Service.
+
+You can add multiple files by using **Add Patches** multiple times. The order in which you add file paths for patches in **File/Folder Path** is the same order that Harness applies the patches during the kustomization build.
+
+Small patches that do one thing are recommended. For example, create one patch for increasing the deployment replica number and another patch for setting the memory limit.
+
+Click **Submit**. The patch file(s) are added to **Service Configuration Overrides**.
+
+### Option: Overlays and Multibases using Variable Expressions
+
+An overlay is a kustomization that depends on another kustomization, creating variants of the common base. In simple terms, overlays change pieces of the base kustomization.yaml, most commonly through patches. 
+
+A multibase is a type of overlay in which each variant uses the common base but makes additions, like adding a namespace.yaml. Basically, you are declaring that the overlays aren't just changing pieces of the base, but are new bases.
+
+In both overlays and multibases, the most common example is staging and production variants that use a common base but make changes/additions for their environments. A staging overlay could add a configMap and a production overlay could have a higher replica count and persistent disk.
+
+To execute a staging overlay, you would run the following command, selecting the overlay's root:
+
+
+```
+kubectl apply -k $DEMO_HOME/overlays/staging
+```
+To deploy each overlay in Harness, you could create a Service for each overlay and configure the **Path to kustomization directory** setting in **Remote Manifests** to point to the overlay root:
+
+![](./static/use-kustomize-for-kubernetes-deployments-83.png)
+
+A better method is to use a single Service for all bases and manually or dynamically identify which base to use at deployment runtime.
+
+You can accomplish this using Harness variable expressions in **Path to kustomization directory**.
+
+![](./static/use-kustomize-for-kubernetes-deployments-84.png)
+
+##### Environment Name Variables
+
+Using Environment name variables is the simplest method of using one Service and selecting from multiple bases.
+
+First, in your repo, create separate folders for each environment's kustomization.yaml. Here we have folders for **dev**, **production**, and **staging**:
+
+![](./static/use-kustomize-for-kubernetes-deployments-85.png)
+
+The kustomization.yaml file in the root references these folders:
+
+
+```
+resources:
+- dev
+- staging
+- production
+```
+We are only concerned with staging and production in this example. 
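+Assuming the folder structure described above, the repo layout might look like this (a sketch; the root path is illustrative):
+
+
+```
+kustomize/multibases
+├── kustomization.yaml      # root, lists dev, staging, production as resources
+├── dev
+│   └── kustomization.yaml
+├── staging
+│   └── kustomization.yaml
+└── production
+    └── kustomization.yaml
+```
+Each environment folder is a base of its own, so pointing Harness at a different folder selects a different variant.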
Here we have two Environments named **production** and **staging** for the corresponding repo folders named **production** and **staging**. + +![](./static/use-kustomize-for-kubernetes-deployments-86.png) + +Next, use the built-in Harness variable expression `${env.name}` in **Path to kustomization directory** to use the Environment names. The `${env.name}` expression resolves to the name of the Harness Environment used by a Workflow. + +For example, if you have two Environments named **production** and **staging**, at deployment runtime the `${env.name}` expression resolves to whichever Environment is used by the Workflow. + +![](./static/use-kustomize-for-kubernetes-deployments-87.png) + +Now, to use the `${env.name}` expression in **Path to kustomization directory**, and reference the Environments and corresponding folders, you would enter `kustomize/multibases/${env.name}`. + +![](./static/use-kustomize-for-kubernetes-deployments-88.png) + +Each time a Workflow runs, it will replace the `${env.name}` expression with the name of the Environment selected for the Workflow. + +For example, if the Workflow uses the Environment **production**, the **Path to kustomization directory** setting will become `kustomize/multibases/production`. Now Harness looks in the **production** folder in your repo for the kustomization.yaml file. + +Once you have created a Workflow, you can templatize its Service setting so that you can select the Environment and its corresponding repo folder: + +![](./static/use-kustomize-for-kubernetes-deployments-89.png) + +You can also select the Environment in a Trigger than executes the Workflow: + +![](./static/use-kustomize-for-kubernetes-deployments-90.png) + +For more information, see [Triggers](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2) and [Passing Variables into Workflows and Pipelines from Triggers](https://docs.harness.io/article/revc37vl0f-passing-variable-into-workflows). 
+ +##### Service Variables + +You can also use Service variables in **Path to kustomization directory**. This allows you to templatize the **Path to kustomization directory** setting and overwrite it at the Harness Environment level. Let's look at an example. + +Here is an example of using a Service variable in **Path to kustomization directory**: + +![](./static/use-kustomize-for-kubernetes-deployments-91.png) + +If you have Service **Config Variables** set up, you will see the variable expressions displayed when you enter `$`. For details on Service variables, see [Services](https://docs.harness.io/article/eb3kfl8uls-service-configuration). + +Service variables can be overwritten at the Harness Environment level. This allows you to use a variable for the **Path to kustomization directory** setting and then override it for each Harness Environment you use with this Service. + +For example, if you have two Environments, staging and production, you can supply different values in each Environment for **Path to kustomization directory**. + +For details on overriding Service settings, see [Override Harness Kubernetes Service Settings](override-harness-kubernetes-service-settings.md). + +##### Workflow Variables + +For Workflow variables, you need to create the variable in the Workflow and then enter the variable name manually in **Path to kustomization directory**, following the format `${workflow.variable.variable_name}`. + +Here is an example of using a Workflow variable for **Path to kustomization directory**: + +![](./static/use-kustomize-for-kubernetes-deployments-92.png) + +If you use Workflow variables for **Path to kustomization directory**, you can provide a value for **Path to kustomization directory** when you deploy the Workflow (standalone or as part of a Pipeline). + +![](./static/use-kustomize-for-kubernetes-deployments-93.png) + +Typically, when you deploy a Workflow, you are prompted to select an artifact for deployment. 
If a Workflow is deploying a Service that uses a remote **Kustomization Configuration**, you are not prompted to provide an artifact for deployment.
+
+See [Workflows](https://docs.harness.io/article/m220i1tnia-workflow-configuration) and [Kubernetes Workflow Variable Expressions](https://docs.harness.io/article/9dvxcegm90-variables).
+
+### Option: Use Plugins in Deployments
+
+Kustomize offers a plugin framework to generate and/or transform a Kubernetes resource as part of a kustomization.
+
+You can add your plugins to the Harness Delegate(s) and then reference them in the Harness Service you are using for the kustomization.
+
+When Harness deploys, it will apply the plugin you reference just like you would with the `--enable_alpha_plugins` parameter. See [Extending Kustomize](https://kubectl.docs.kubernetes.io/guides/extending_kustomize/) from Kustomize.
+
+#### Add Plugins to Delegate using a Delegate Profile
+
+To add a plugin to a Delegate, you create a Delegate Profile and apply it to the Delegates.
+
+1. In Harness, click **Setup**, and click **Harness Delegates**.
+2. Click **Manage Delegate Profiles**, and then click **Add Delegate Profile**. The Delegate Profile settings appear.
+3. Enter a name and the script for the plugin and click **Submit**.
+
+For example, here is a ConfigMap generator plugin script:
+
+
+```
+MY_PLUGIN_DIR=$HOME/K_PLUGINS/kustomize/plugin/myDevOpsTeam/sillyconfigmapgenerator
+mkdir -p $MY_PLUGIN_DIR
+cat <<'EOF' >$MY_PLUGIN_DIR/SillyConfigMapGenerator
+#!/bin/bash
+# Skip the config file name argument. 
+shift
+today=`date +%F`
+echo "
+kind: ConfigMap
+apiVersion: v1
+metadata:
+  name: the-map
+data:
+  today: $today
+  altGreeting: "$1"
+  enableRisky: "$2"
+"
+EOF
+cat $MY_PLUGIN_DIR/SillyConfigMapGenerator
+chmod +x $MY_PLUGIN_DIR/SillyConfigMapGenerator
+readlink -f $MY_PLUGIN_DIR/SillyConfigMapGenerator
+```
+Each plugin is added to its own directory, following this convention:
+
+
+```
+$XDG_CONFIG_HOME/kustomize/plugin
+    /${apiVersion}/LOWERCASE(${kind})
+```
+The default value of `XDG_CONFIG_HOME` is `$HOME/.config`. See [Placement](https://kubectl.docs.kubernetes.io/guides/extending_kustomize/go_plugins/#placement) from Kustomize.
+
+In the script example above, you can see that the plugin is added to its own folder following the plugin convention:
+
+
+```
+$HOME/K_PLUGINS/kustomize/plugin/myDevOpsTeam/sillyconfigmapgenerator
+```
+Note the location of the plugin, because you will use that location in the Harness Service to indicate where the plugin is located (described below).
+
+Plugins can only be applied to Harness Kubernetes Delegates. Next, apply the Profile to Kubernetes Delegate(s):
+
+1. Click the Profile menu in the Delegate list and choose your Profile.
+
+   ![](./static/use-kustomize-for-kubernetes-deployments-94.png)
+
+2. Click **Confirm**.
+
+Wait a few minutes for the Profile to install the plugin. Next, click **View Logs** to see the output of the Profile.
+
+#### Select Plugin in Service
+
+Once the plugin is added to the Delegate(s), you can reference it in the Remote Manifests **Path to Kustomize plugin on Delegate** setting in the Harness Service. Indicate the same location where your Delegate Profile script installed the plugin:
+
+![](./static/use-kustomize-for-kubernetes-deployments-95.png)
+
+Click **Submit**. Harness is now configured to use the plugin when it deploys using Kustomize.
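If you want to sanity-check the generator before wiring it into a Service, you can invoke it by hand the same way Kustomize does: the first argument is the plugin's config file (which this generator skips), followed by the generator's own arguments. This sketch recreates the script in a scratch directory purely for local testing; the paths and arguments are illustrative, not the convention path used above:

```shell
# Recreate the generator in a scratch directory (not the plugin convention
# path) and run it by hand to confirm it emits a ConfigMap.
MY_PLUGIN_DIR=$(mktemp -d)
cat <<'EOF' >"$MY_PLUGIN_DIR/SillyConfigMapGenerator"
#!/bin/bash
# Skip the config file name argument.
shift
today=`date +%F`
echo "
kind: ConfigMap
apiVersion: v1
metadata:
  name: the-map
data:
  today: $today
  altGreeting: "$1"
  enableRisky: "$2"
"
EOF
chmod +x "$MY_PLUGIN_DIR/SillyConfigMapGenerator"

# Argument 1 stands in for the plugin config file; the rest are plugin args.
"$MY_PLUGIN_DIR/SillyConfigMapGenerator" config.yaml howdy false
```

The printed YAML is what Kustomize would splice into the build output when the plugin runs.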
+
+### Example 1: Multibase Rolling Deployment
+
+For this example, we will deploy the [multibases example for Kustomize](https://github.com/kubernetes-sigs/kustomize/tree/master/examples/multibases) using a Rolling Update strategy. You can set up a [Harness Source Repo Provider](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers) to connect to that repo.
+
+We will use Harness Environment names that match the base folder names in the repo.
+
+In the Harness Service, we will use the `${env.name}` expression in the **Path to kustomization directory** setting.
+
+When we deploy, the Workflow will use the name of the Environment in **Path to kustomization directory**, and the corresponding repo folder's kustomization.yaml will be used.
+
+Here is what the repo looks like:
+
+![](./static/use-kustomize-for-kubernetes-deployments-96.png)
+
+Here are the Harness Environments whose names correspond to the dev, stage, and production repo folders:
+
+![](./static/use-kustomize-for-kubernetes-deployments-97.png)
+
+Here are the Harness Service **Remote Manifests** settings. The **Path to kustomization directory** setting uses the `${env.name}` expression that will be replaced with a Harness Environment name at deployment runtime.
+
+![](./static/use-kustomize-for-kubernetes-deployments-98.png)
+
+Next we'll create a Workflow using the Rolling Deployment strategy. Here we select the Service we set up.
+
+When you first create the Workflow, you cannot set the **Environment** setting as a variable expression. Create the Workflow using any of the Environments, and then edit the Workflow settings and turn the **Environment** and **Infrastructure Definition** settings into variable expressions by clicking their **[T]** icons.
+
+When you are done, the Workflow settings will look like this:
+
+![](./static/use-kustomize-for-kubernetes-deployments-99.png)
+
+There is nothing to set up in the Workflow. 
Harness automatically adds the Rollout Deployment step that performs the Kubernetes Rolling Update.
+
+In the Workflow, click **Deploy**. In **Start New Deployment**, select the name of the Environment that corresponds to the repo folder containing the base you want to use:
+
+![](./static/use-kustomize-for-kubernetes-deployments-100.png)
+
+In this example, we select the **stage** Environment. Once deployment is complete, you can see the stage repo folder's base used and the `staging-myapp-pod` created:
+
+![](./static/use-kustomize-for-kubernetes-deployments-101.png)
+
+### Review: What Workloads Can I Deploy?
+
+See [What Can I Deploy in Kubernetes?](https://docs.harness.io/article/6ujb3c70fh).
+
+### Change the Default Path for the Kustomize Binary
+
+The Harness Delegate ships with the 3.5.4 [release](https://github.com/kubernetes-sigs/kustomize/releases) of Kustomize.
+
+If you want to use a different release of Kustomize, add it to a location on the Delegate, update the following Delegate files, and restart the Delegate.
+
+See [Manage Harness Delegates](https://docs.harness.io/category/gyd73rp7np-manage-delegates) for details on each Delegate type.
+
+#### Shell Script Delegate
+
+Add `kustomizePath: ""` to config-delegate.yml.
+
+#### Kubernetes Delegate
+
+Update the `value` environment variable in harness-delegate.yaml:
+
+
+```
+...
+name: KUSTOMIZE_PATH
+value: ""
+...
+```
+#### Helm Delegate
+
+Add `kustomizePath: ""` to harness-delegate-values.yaml.
+
+
+```
+kustomizePath: ""
+```
+#### Docker Delegate
+
+Set the Kustomize path in launch-harness-delegate.sh:
+
+
+```
+-e KUSTOMIZE_PATH= \
+```
+#### ECS Delegate
+
+Update the following in ecs-task-spec.json:
+
+
+```
+...
+{
+  "name": "KUSTOMIZE_PATH",
+  "value": ""
+}
+...
+``` +### Next Steps + +* [Create a Kubernetes Rolling Deployment](create-a-kubernetes-rolling-deployment.md) +* [Create a Kubernetes Canary Deployment](create-a-kubernetes-canary-deployment.md) +* [Create a Kubernetes Blue/Green Deployment](create-a-kubernetes-blue-green-deployment.md) + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/using-harness-config-files-in-manifests.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/using-harness-config-files-in-manifests.md new file mode 100644 index 00000000000..777243f0217 --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/using-harness-config-files-in-manifests.md @@ -0,0 +1,75 @@ +--- +title: Use Harness Config Files in Manifests +description: Use Kubernetes Service Config Files in your manifests. +sidebar_position: 120 +helpdocs_topic_id: q71d8kurhz +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/category/qfj6m1k2c4). + +You can use files added to the **Config Files** section in your Kubernetes Service in your manifests, such as in a ConfigMap. You can reference unencrypted and encrypted files, and they can be single or multiline. + + +### Before You Begin + +* [Using Harness Config Variables in Manifests](using-harness-config-variables-in-manifests.md) +* [Define Kubernetes Manifests](define-kubernetes-manifests.md) + +### Review: Config File Reference Scope + +You cannot reference a Service's Config File in the Pre-Deployment Phase of a Workflow. Canary and Multi-Service Workflows are the only Workflow types with the Pre-Deployment Phase. + +The Pre-Deployment Phase does not use a Service and so it has no access to Service Config Files (or Config variables). + +You can reference a Service's Config File in the Deployment Phase of the Workflow. 
+ +### Review: Config Files Encoding and References + +Files added in the **Config Files** section are referenced using the `configFile.getAsString("fileName")` Harness expression: + +* `configFile.getAsString("fileName")` - Plain text file contents. +* `configFile.getAsBase64("fileName")` - Base64-encoded file contents. + +### Review: Use Base64 to Avoid New Lines + +If you are going to use a Config File in a manifest, be aware that `${configFile.getAsString()}` can cause problems by adding new lines to your manifest (unless you have formatted the file very carefully). + +Instead, use `${configFile.getAsBase64()}`. This will ensure that the contents of the file are rendered as a single line. + +### Step 1: Add the File to Config Files + +In this example, we will use a file in a ConfigMap object. + +1. Add the unencrypted file to **Config Files**. In this example, the file is a base64 encoded file named `myFile`. + + Make sure you have the **update** permission on the Service or the Environment before you try to add the Service Config File. See [Managing Users and Groups (RBAC)](https://docs.harness.io/article/ven0bvulsj-users-and-permissions) for more information about assigning permissions. + + ![](./static/using-harness-config-files-in-manifests-102.png) + +The base64 encoded file will be decoded when added to the manifest, as shown below. + +### Step 2: Reference Config File + +1. In the **values.yaml** in the Harness Service **Manifests** section, reference the Config File using `my_file: ${configFile.getAsBase64("myFile")}`. + + ![](./static/using-harness-config-files-in-manifests-103.png) + +### Step 3: Decode the File + +1. In the manifest (in our example, a ConfigMap), decode the base64 Config File and indent it for the YAML syntax: + + ``` + data: + keyname: | + {{.Values.my_file | b64dec | indent 4}} + ``` + +At runtime, the Config File is decoded and used as plaintext. 
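To see why the Base64 route is safe, you can reproduce the round trip in a shell. This is only an illustration of the encoding that `getAsBase64()` applies and the decoding that Helm's `b64dec` performs at render time (it assumes a GNU `base64` binary):

```shell
# A multi-line file would break YAML if embedded as-is.
f=$(mktemp)
printf 'line one\nline two\n' > "$f"

# Encode: the result is a single line, safe to assign to a values.yaml key.
encoded=$(base64 < "$f" | tr -d '\n')
echo "$encoded"

# Decode: recovers the original multi-line contents.
echo "$encoded" | base64 -d
```

Because the encoded form contains no newlines, it renders cleanly into the values.yaml key no matter how the original file is formatted.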
+ +### Limitations + +* Do not use Harness variables within the file used as a Config File. Harness does not do variable substitution of content within an uploaded Harness Config File. + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/using-harness-config-variables-in-manifests.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/using-harness-config-variables-in-manifests.md new file mode 100644 index 00000000000..75e78ce4094 --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/using-harness-config-variables-in-manifests.md @@ -0,0 +1,65 @@ +--- +title: Using Harness Config Variables in Manifests +description: Use Service Config Variables in your Manifests files. +sidebar_position: 110 +helpdocs_topic_id: qy6zw1u0y2 +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/category/qfj6m1k2c4). + +You can create a variable in the Harness Kubernetes Service **Config Variables** section and then use it in your **Manifests** files, such as in the ConfigMap definition. + + +### Before You Begin + +* [Services](https://docs.harness.io/article/eb3kfl8uls-service-configuration) +* [Define Kubernetes Manifests](define-kubernetes-manifests.md) + +In Harness Kubernetes version 1 implementation, Harness would create the ConfigMap automatically using the `${CONFIG_MAP_NAME}` expression and all unencrypted Service **Config Variables** and **Config Files**. In the current Harness Kubernetes implementation, you define your ConfigMap manually using the values.yaml and **Config Variables** and **Config Files**. + +### Review: Config Variable Reference Scope + +You cannot reference a Service's Config Variable in the Pre-Deployment Phase of a Workflow. 
Canary and Multi-Service Workflows are the only Workflow types with the Pre-Deployment Phase.
+
+The Pre-Deployment Phase does not use a Service, and so it has no access to Service Config Variables (or Config Files).
+
+You can reference a Service's Config Variable in the Deployment Phase of the Workflow.
+
+### Step 1: Create the Service Variable in Config Variables
+
+For this explanation, we'll create a variable that indicates the database to use for a ConfigMap.
+
+1. In **Config Variables**, click **Add Variable**.
+2. In **Config Variable**, add a variable named `database` with the value `mongodb`.
+
+   ![](./static/using-harness-config-variables-in-manifests-211.png)
+
+3. Click **Submit**. The variable is added to the **Config Variables** section.
+
+   ![](./static/using-harness-config-variables-in-manifests-212.png)
+
+### Step 2: Reference the Service Variable in values.yaml
+
+1. In **values.yaml**, create a new variable named `databaseType` that references the Service variable `database`:
+
+   ![](./static/using-harness-config-variables-in-manifests-213.png)
+
+### Step 3: Reference the Variable in the Manifest
+
+1. In the manifest file containing your object (in this example, a ConfigMap), reference the values.yaml variable in the ConfigMap `data` section.
+
+   ![](./static/using-harness-config-variables-in-manifests-214.png)
+
+When the Service is deployed, the Service variable will be used to provide the value `mongodb` to the `data` label in the ConfigMap.
+
+### Notes
+
+* You can also overwrite the Service variable in an Environment **Service Configuration Override**. When a Workflow using that Service and Environment deploys, the Service variable for the ConfigMap `data` value will be overwritten. See [Override Harness Kubernetes Service Settings](override-harness-kubernetes-service-settings.md). 
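As a recap of Steps 1 through 3, the chain of references looks like this. These snippets are illustrative sketches: the ConfigMap name is a placeholder, and `${serviceVariable.database}` is the FirstGen expression format for referencing a Service Config Variable:

```
# values.yaml — databaseType references the Service Config Variable "database"
databaseType: ${serviceVariable.database}
```

```
# ConfigMap manifest — renders as databaseType: mongodb at deployment
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  databaseType: {{.Values.databaseType}}
```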
+
+### Next Steps
+
+* [Define Kubernetes Manifests](define-kubernetes-manifests.md)
+
diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/using-open-shift-with-harness-kubernetes.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/using-open-shift-with-harness-kubernetes.md
new file mode 100644
index 00000000000..6e0e7383119
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/using-open-shift-with-harness-kubernetes.md
@@ -0,0 +1,124 @@
+---
+title: Using OpenShift with Harness Kubernetes
+description: This topic reviews OpenShift support in the Harness Delegate and Workflows.
+sidebar_position: 370
+helpdocs_topic_id: p756zrn9vc
+helpdocs_category_id: n03qfofd5w
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness supports OpenShift for Kubernetes deployments. This topic reviews OpenShift support in the Harness Delegate and Workflows.
+
+### Before You Begin
+
+* [Connect to Your Target Kubernetes Platform](connect-to-your-target-kubernetes-platform.md)
+* [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation)
+* [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers)
+
+### Review: Kubernetes Delegate and OpenShift
+
+Harness supports OpenShift using a Delegate running externally to the Kubernetes cluster. For steps on connecting, see Kubernetes Cluster in [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers).
+
+Harness does support running Delegates internally for OpenShift 3.11 or greater, but the cluster must be configured to allow images to run as root inside the container in order to write to the filesystem.
+
+Typically, OpenShift is supported through an external Delegate installation (shell script installation of the Delegate outside of the Kubernetes cluster) and a service account token, entered in the **Kubernetes Service Account Token** field. 
You only need to use the **Master URL** and **Kubernetes Service Account Token** fields in the **Kubernetes Cloud Provider** dialog.
+
+The following shell script is a quick method for obtaining the service account token. Run this script wherever you run kubectl to access the cluster.
+
+Set the `SERVICE_ACCOUNT_NAME` and `NAMESPACE` values to the values in your infrastructure. The script uses `base64 -D`, which is the macOS flag; on Linux, use `base64 -d` instead.
+
+
+```
+SERVICE_ACCOUNT_NAME=default
+NAMESPACE=mynamespace
+SECRET_NAME=$(kubectl get sa "${SERVICE_ACCOUNT_NAME}" --namespace "${NAMESPACE}" -o json | jq -r '.secrets[].name')
+TOKEN=$(kubectl get secret "${SECRET_NAME}" --namespace "${NAMESPACE}" -o json | jq -r '.data["token"]' | base64 -D)
+echo $TOKEN
+```
+Once configured, OpenShift is used by Harness as a typical Kubernetes cluster.
+
+### Review: Deployment Strategy Support
+
+In order to successfully deploy the workloads in your **Manifests** section of the Harness Service, they must meet the *minimum* requirements of the type of deployment you are performing.
+
+* [Canary](create-a-kubernetes-canary-deployment.md) and [Blue/Green](create-a-kubernetes-blue-green-deployment.md) Workflow Type - Deployment workloads only.
+* [Rolling Workflow Type](create-a-kubernetes-rolling-deployment.md) - All workload types except Jobs. Jobs will be added soon.
+* [Apply Step](deploy-manifests-separately-using-apply-step.md) - All workload types, including Jobs.
+* **OpenShift:** Harness supports [DeploymentConfig](https://docs.openshift.com/container-platform/4.1/applications/deployments/what-deployments-are.html), [Route](https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/routes.html), and [ImageStream](https://docs.openshift.com/enterprise/3.2/architecture/core_concepts/builds_and_image_streams.html#image-streams) across Canary, Blue/Green, and Rolling deployment strategies. Please use `apiVersion: apps.openshift.io/v1` and not `apiVersion: v1`. 
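For reference, here is a minimal DeploymentConfig skeleton using the supported API group. The name, labels, and image are placeholders, not values from this topic:

```
apiVersion: apps.openshift.io/v1  # not apiVersion: v1
kind: DeploymentConfig
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: example-registry/example-app:latest
```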
+
+### Review: Harness Supports List Objects
+
+You can leverage Kubernetes list objects as needed without modifying your YAML for Harness.
+
+When you deploy, Harness will render the lists and show all the templated and rendered values in the log.
+
+Harness supports:
+
+* List
+* NamespaceList
+* ServiceList
+* For Kubernetes deployments, these objects are supported for all deployment strategies (Canary, Rolling, Blue/Green).
+* For Native Helm, these objects are supported for Basic deployments.
+
+If you run `kubectl api-resources`, you should see a list of resources, and `kubectl explain` will work with any of these.
+
+### Step: Add Manifests and OpenShift Template
+
+1. In your Harness Service, in **Manifests**, click **Link Remote Manifests**.
+
+   ![](./static/using-open-shift-with-harness-kubernetes-00.png)
+
+2. In **Remote Manifests**, in **Manifest Format**, click **OpenShift Template**.
+
+3. Enter the following settings and click **Submit**.
+
+* **Source Repository:** Select the Source Repo Provider connection to your repo.
+* **Commit ID:** Select **Latest from Branch** or **Specific Commit ID**. Do one of the following:
+  * **Branch:** Enter the branch name, such as **master**.
+  * **Commit ID:** Enter the Git commit ID.
+* **Template File Path:** Enter the OpenShift template file path.
+
+  ![](./static/_openshift.png)
+
+### Option: Define Service Variables
+
+You can define Service variables in **OpenShift Param File**, after adding the OpenShift Template file. **OpenShift Param File** is visible only after you have selected an OpenShift Template in **Remote Manifests**.
+
+1. In the Harness Service, in the **Configuration** section, click **Add Param**.
+2. Select **Inline** or **Remote** Store Type.
+   a. If you select **Inline**, then enter the value inline. If you select **Remote**, perform the following steps.
+   ![](./static/using-open-shift-with-harness-kubernetes-01.png)
+3. Select **Source Repository** from the drop-down menu.
+4. 
Select **Latest from Branch** or **Specific Commit ID**. Do one of the following: + * **Branch:** Enter the branch name, such as **master**. + * **Commit ID:** Enter the Git commit ID. +5. Enter the file path in **Params File Path**. + +Service variables can be overwritten at the Harness Environment level. For details on overriding Service settings, see [Override Harness Kubernetes Service Settings](override-harness-kubernetes-service-settings.md). + +### Option: Skip Versioning for Service + +By default, Harness versions ConfigMaps and Secrets deployed into Kubernetes clusters. In some cases, you might want to skip versioning. + +Typically, to skip versioning in your deployments, you add the annotation `harness.io/skip-file-for-deploy` to your manifests. See [Deploy Manifests Separately using Apply Step](deploy-manifests-separately-using-apply-step.md). + +In some cases, such as when using public manifests or Helm charts, you cannot add the annotation. Or you might have 100 manifests and you only want to skip versioning for 50 of them. Adding the annotation to 50 manifests is time-consuming. + +Instead, enable the **Skip Versioning for Service** option in **Remote Manifests**. + +When you enable **Skip Versioning for Service**, Harness will not perform versioning of ConfigMaps and Secrets for the Service. + +If you have enabled **Skip Versioning for Service** for a few deployments and then disable it, Harness will start versioning ConfigMaps and Secrets. + +### Notes + +* Make sure that you update your version to `apiVersion: apps.openshift.io/v1` and not `apiVersion: v1`. +* The token does not need to have global read permissions. The token can be scoped to the namespace. +* The Kubernetes containers must be OpenShift-compatible containers. If you are already using OpenShift, then this is already configured. But be aware that OpenShift cannot simply deploy any Kubernetes container. You can get OpenShift images from the following public repos: and . 
+* Useful articles for setting up a local OpenShift cluster for testing: [How To Setup Local OpenShift Origin (OKD) Cluster on CentOS 7](https://computingforgeeks.com/setup-openshift-origin-local-cluster-on-centos/), [OpenShift Console redirects to 127.0.0.1](https://chrisphillips-cminion.github.io/kubernetes/2019/07/08/OpenShift-Redirect.html). + +### Next Steps + +* [Delegate Installation](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) + diff --git a/docs/first-gen/continuous-delivery/kubernetes-deployments/workflow-variables-expressions.md b/docs/first-gen/continuous-delivery/kubernetes-deployments/workflow-variables-expressions.md new file mode 100644 index 00000000000..cebac1ba74b --- /dev/null +++ b/docs/first-gen/continuous-delivery/kubernetes-deployments/workflow-variables-expressions.md @@ -0,0 +1,86 @@ +--- +title: Kubernetes Workflow Variables and Expressions +description: Learn about the Kubernetes-specific Harness variables you can use in your Workflows. +sidebar_position: 360 +helpdocs_topic_id: 7bpdtvhq92 +helpdocs_category_id: n03qfofd5w +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The following Kubernetes-specific Harness variables are available as expressions you can use in your Workflows. + +### ${k8s.primaryServiceName} + +`${k8s.primaryServiceName}` - The service in your Harness Service **Manifests** section that uses the `annotations: harness.io/primary-service: "true"` annotation. See [Create a Kubernetes Blue/Green Deployment](create-a-kubernetes-blue-green-deployment.md). + +Boolean annotation values must use quotes (`"true"|"false"`). + +### ${k8s.stageServiceName} + +`${k8s.stageServiceName}` - The service in your Harness Service **Manifests** section uses the `annotations: harness.io/stage-service: "true"` annotation. See [Create a Kubernetes Blue/Green Deployment](create-a-kubernetes-blue-green-deployment.md). 
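For example, in a Blue/Green setup these two expressions resolve to whichever Kubernetes Services in your **Manifests** section carry the corresponding annotations. The Service names below are illustrative:

```
apiVersion: v1
kind: Service
metadata:
  name: example-svc-primary
  annotations:
    harness.io/primary-service: "true"  # resolved by ${k8s.primaryServiceName}
spec:
  selector:
    app: example-app
---
apiVersion: v1
kind: Service
metadata:
  name: example-svc-stage
  annotations:
    harness.io/stage-service: "true"  # resolved by ${k8s.stageServiceName}
spec:
  selector:
    app: example-app
```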
+ +### ${k8s.canaryWorkload} + +`${k8s.canaryWorkload}` - The Kubernetes workload set up in the Canary Workflow. Workflows only deploy one workload per deployment. Workloads include the Deployment, StatefulSet, DaemonSet, and DeploymentConfig objects. + +### ${k8s.virtualServiceName} + +`${k8s.virtualServiceName}` - The name in the [Virtual Service](https://istio.io/docs/reference/config/networking/v1alpha3/virtual-service/) manifest deployed by the Workflow. This is the manifest in the Service **Manifests** section that uses `kind: VirtualService` for Istio. See [Set Up Kubernetes Traffic Splitting](set-up-kubernetes-traffic-splitting.md). + +### Canary Destinations + +`${k8s.canaryDestination}` and `${k8s.stableDestination}` - The names in the [Destination Rule](https://istio.io/docs/reference/config/networking/v1alpha3/destination-rule/) subsets deployed by the Canary Workflow. See [Set Up Kubernetes Traffic Splitting](set-up-kubernetes-traffic-splitting.md). + +This is the manifest Harness generates for Istio traffic splitting: + + +``` +Found VirtualService with name anshul-traffic-split-demo-virtualservice + +Found following destinations +${k8s.canaryDestination} +weight: 50 + +${k8s.stableDestination} +weight: 50 + +... + +http: +- route: + - destination: + host: "anshul-traffic-split-demo-svc" + subset: "canary" + weight: 50 + - destination: + host: "anshul-traffic-split-demo-svc" + subset: "stable" + weight: 50 +``` +### ${infra.kubernetes.namespace} + +`${infra.kubernetes.namespace}` - The Harness variable `${infra.kubernetes.namespace}` refers to the namespace entered in the Harness Environment Infrastructure Definition settings **Namespace** field: + +![](./static/workflow-variables-expressions-52.png) + +You can use `${infra.kubernetes.namespace}` in your Harness Service **Manifests** definition of a Kubernetes Namespace to reference the name you entered in the Infrastructure Definition **Namespace** field. 
When the Harness Service is deployed to that Infrastructure Definition, it will create a Kubernetes namespace using the value you entered in the Infrastructure Definition **Namespace** field. + +In the values.yaml file, it will look like this: + + +``` +namespace: ${infra.kubernetes.namespace} +``` +In a manifest file for the Kubernetes Namespace object, it will be used like this: + + +``` +apiVersion: v1 +kind: Namespace +metadata: + name: {{.Values.namespace}} +``` +When this manifest is used by Harness to deploy a Kubernetes Namespace object, it will replace `${infra.kubernetes.namespace}` with the value entered in the Infrastructure Definition **Namespace** field, creating a Kubernetes Namespace object using the name. Next, Harness will deploy the other Kubernetes objects to that namespace. + +If you omit the `namespace` key and value from a manifest in your Service, Harness automatically uses the namespace you entered in the Harness Environment Infrastructure Definition settings' **Namespace** field. 
\ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/_category_.json b/docs/first-gen/continuous-delivery/model-cd-pipeline/_category_.json new file mode 100644 index 00000000000..4276bb38026 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/_category_.json @@ -0,0 +1,15 @@ +{ + "label": "Model Your CD Pipeline", + "position": 500, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "Model Your CD Pipeline" + }, + "customProps": { + "helpdocs_category_id": "ywqzeje187", + "helpdocs_parent_category_id": "1qtels4t8p" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/_category_.json b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/_category_.json new file mode 100644 index 00000000000..6185826981b --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "Add Applications", + "position": 10, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "Add Applications" + }, + "customProps": { + "helpdocs_category_id": "fwal42867c" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/application-configuration.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/application-configuration.md new file mode 100644 index 00000000000..d8662a5f162 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/application-configuration.md @@ -0,0 +1,112 @@ +--- +title: Create an Application +description: How to create a Harness Application. 
+# sidebar_position: 2
+helpdocs_topic_id: bucothemly
+helpdocs_category_id: fwal42867c
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+A Harness **Application** represents a group of microservices, their deployment pipelines, and all the building blocks for those pipelines. Harness represents your microservice using a logical group of one or more entities: Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CI/CD.
+
+The following procedure creates a new Application. Once you are done, you can add components to the Application, such as Services and Environments.
+
+### Visual Summary
+
+The following diagram displays how an Application organizes Services, Workflows, and Environments into components that can be selected and deployed using Pipelines (although you can also deploy a Workflow by itself). The Artifact Servers and Cloud Providers you connect to your Harness account are used to obtain your microservices/applications and deploy them to your deployment environments.
+
+![](./static/application-configuration-07.png)
+
+Keep this diagram in mind when setting up your Harness Application.
+
+
+### Before You Begin
+
+* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts)
+
+
+### Step 1: Set up the Application
+
+1. Click **Setup**, and then click **Add Application**. The **Application** dialog appears.![](./static/application-configuration-08.png)
+
+
+2. Enter the name for your Application.
+
+#### Option: Authorize Manual Triggers
+
+Select **Authorize Manual Triggers** to make API keys mandatory for authorizing Manual Trigger invocations. For more information, see [Manual Triggers](../triggers/trigger-a-deployment-on-git-event.md#option-manual-triggers). 
+
+![](./static/application-configuration-09.png)
+
+#### Option: Mandate Webhook Secrets for GitHub Triggers
+
+Select **Mandate Webhook Secrets for Github Triggers** to make Webhook Secrets mandatory for all your GitHub Triggers. Once you make Webhook secrets mandatory, you must always supply the secret token configured in your Git provider to authenticate the Webhook. For more information on how to authenticate the Webhook, see [Authenticate the Webhook](../triggers/trigger-a-deployment-on-git-event.md#option-authenticate-the-webhook).
+
+![](./static/application-configuration-10.png)
+
+Currently, this feature is behind the Feature Flag `GITHUB_WEBHOOK_AUTHENTICATION`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+#### Option: Set up Git Sync
+
+Select **Set up Git Sync** to sync your Application to a Git repo. You must also select a Git Connector and Branch Name that you want to use for Git Sync. For more information, see [Harness Application-Level Git Sync](https://docs.harness.io/article/6mr74fm55h-harness-application-level-sync).
+
+![](./static/application-configuration-11.png)
+
+1. Click **SUBMIT**. Your new Application appears.
+2. Click your Application’s name. The Application appears.![](./static/application-configuration-12.png)
+
+
+### Step 2: Add Services to the Application
+
+Services represent your microservices/apps. You define where the artifacts for those microservices come from, and the container specs, configuration variables, and files for those microservices.
+
+Add your microservices, including their artifact sources, container types, configuration variables, and YAML files.
+
+For more information, see [Service Configuration](../setup-services/service-configuration.md).
+
+
+### Step 3: Add Environments to the Application
+
+Environments represent one or more of your deployment infrastructures, such as Dev, QA, Stage, Production, etc. 
+
+Add deployment Environments for the Services in your Application. These Environments map to the Cloud Providers you added as connectors.
+
+For more information, see [Environment Configuration](../environments/environment-configuration.md).
+
+
+### Step 4: Add Workflows to the Application
+
+Workflows define how a Service is deployed, verified, and rolled back, among other important phases. There are many different types of Workflows, from Basic to Canary and Blue/Green.
+
+Add Workflows to manage the stages of Service deployments.
+
+For more information, see [Workflow Configuration](../workflows/workflow-configuration.md).
+
+
+### Step 5: Add Pipelines to the Application
+
+A Pipeline is a collection of one or more stages, containing Workflows for one or more Services and other deployment and verification steps.
+
+Add a Pipeline to define the Workflows used in deployment and verification.
+
+For more information, see [Pipeline Configuration](../pipelines/pipeline-configuration.md).
+
+
+### Step 6: Add Triggers to the Application
+
+Triggers automate deployments using a variety of conditions, such as Git events, new artifacts, schedules, and the success of other Pipelines.
+
+Add a Trigger to define when a Workflow or Pipeline is executed.
+
+For more information, see [Trigger Configuration](../triggers/add-a-trigger-2.md).
+
+
+### Step 7: Add Infrastructure Provisioners to the Application
+
+Infrastructure Provisioners define blueprints from known Infrastructure-as-Code technologies (Terraform, CloudFormation, etc.) and map the output (such as load balancers, VPCs, etc.). They enable Workflows to provision infrastructure on the fly when deploying Services.
+
+Add an Infrastructure Provisioner such as CloudFormation or Terraform as a blueprint for the system, networking, and security infrastructure for the Service deployment. 
+ +For more information, see [Infrastructure Provisioner Configuration](../infrastructure-provisioner/add-an-infra-provisioner.md). + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/set-default-application-directories-as-variables.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/set-default-application-directories-as-variables.md new file mode 100644 index 00000000000..c3de8f9633e --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/set-default-application-directories-as-variables.md @@ -0,0 +1,84 @@ +--- +title: Create Default Application Directories and Variables +description: You can define Application-wide variables that can be referenced in any entity within a Harness Application. The Application Defaults include the paths for runtime, staging, and backup used by the sc… +# sidebar_position: 2 +helpdocs_topic_id: lgg12f0yry +helpdocs_category_id: fwal42867c +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can define Application-wide variables that can be referenced in any entity within a Harness Application. The **Application Defaults** include the paths for runtime, staging, and backup used by the scripts in the Service, and can also be used in your Workflow steps. 
+ +### Before You Begin + +* [What is a Harness Variable Expression?](https://docs.harness.io/article/9dvxcegm90-variables) +* [Run Shell Scripts in Workflows](../workflows/capture-shell-script-step-output.md) +* [Add Artifacts and App Stacks for Traditional (SSH) Deployments](https://docs.harness.io/article/umpe4zfnac-add-artifacts-for-ssh-deployments) +* [Add Scripts for Traditional (SSH) Deployments](https://docs.harness.io/article/ih779z9kb6-add-deployment-specs-for-traditional-ssh-deployments) + +### Review: Application Defaults + +For example, here is the **Application Defaults** dialog, the **Copy Artifact** script in the Service using the `RUNTIME_PATH` variable and the Tomcat application stack **webapps** folder, and the resulting file path for the deployed artifact on the target host: + +![](./static/set-default-application-directories-as-variables-00.png) + +For more information, see  [Application Default Variables](https://docs.harness.io/article/9dvxcegm90-variables#application_default_variables). + +### Limitations + +To create or edit Application Defaults, you must be logged into Harness as a member of a User Group that has create or update permissions for that Application. + +The User Group must also have the **Administer Other Account Functions** setting enabled. + +See [Managing Users and Groups (RBAC)](https://docs.harness.io/article/ven0bvulsj-users-and-permissions). + +### Step 1: Create an Application Default + +To create an Application Default, do the following: + +1. In Harness, open a Harness Application. +2. Click **Application Defaults**. + + ![](./static/set-default-application-directories-as-variables-01.png) + + The **Application Defaults** dialog appears, displaying several built-in variables. + + ![](./static/set-default-application-directories-as-variables-02.png) +3. To add a new default, click **Add Row**. A new row appears. + + ![](./static/set-default-application-directories-as-variables-03.png) + +4. 
In **Name**, enter a name for the default. Ensure that the name is descriptive, as users will be looking at a list of variable names and will need to distinguish between them. +5. In **Type**, select **STRING**. +6. In **Value**, enter the value for the variable. + + For example, if you added an **Application Default** variable for a product name, the dialog would look like this: + + ![](./static/set-default-application-directories-as-variables-04.png) + +7. Click **SUBMIT**. The new variable is added. Now, let's reference the variable. + +### Step 2: Reference an Application Default + +You can reference an Application Default anywhere in your Application. Here is an example using a Service. + +1. Open a Harness Application, and then open a Service within the Application. +2. In the service, under **Configuration**, click **Add Variable**. The **Config Variable** dialog appears. + + ![](./static/set-default-application-directories-as-variables-05.png) + +3. In **Value**, enter `${app.defaults` to see the Application variables displayed. + + ![](./static/set-default-application-directories-as-variables-06.png) + +4. All Application Defaults variables begin with `app.defaults` to identify the namespace of the variable. +5. Click the Application variable name to enter it. It is entered as `${app.defaults.variable_name}`. 
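All `${app.defaults.*}` references follow the same pattern, so the resolution behavior is easy to picture. As an illustration only (Harness performs this substitution internally; the function and sample values below are hypothetical), here is a sketch of how such expressions map to their configured values:

```python
import re

# Hypothetical Application Defaults map; names and values are examples only.
app_defaults = {
    "RUNTIME_PATH": "$HOME/apps/runtime",
    "PRODUCT_NAME": "my-product",
}

def resolve_app_defaults(text, defaults):
    """Replace each ${app.defaults.NAME} expression with its configured value.

    Unknown names are left untouched so the unresolved expression stays visible.
    """
    pattern = re.compile(r"\$\{app\.defaults\.(\w+)\}")
    return pattern.sub(lambda m: defaults.get(m.group(1), m.group(0)), text)

print(resolve_app_defaults("cp app.war ${app.defaults.RUNTIME_PATH}/webapps", app_defaults))
# → cp app.war $HOME/apps/runtime/webapps
```

The `app.defaults` prefix acts as a namespace, which is why all of these variables surface together in the autocomplete list when you type `${app.defaults` in a **Value** field.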
+ +### See Also + +* [What is a Harness Variable Expression?](https://docs.harness.io/article/9dvxcegm90-variables) +* [Run Shell Scripts in Workflows](../workflows/capture-shell-script-step-output.md) +* [Add Artifacts and App Stacks for Traditional (SSH) Deployments](https://docs.harness.io/article/umpe4zfnac-add-artifacts-for-ssh-deployments) +* [Add Scripts for Traditional (SSH) Deployments](https://docs.harness.io/article/ih779z9kb6-add-deployment-specs-for-traditional-ssh-deployments) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/application-configuration-07.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/application-configuration-07.png new file mode 100644 index 00000000000..0f3f3563f17 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/application-configuration-07.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/application-configuration-08.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/application-configuration-08.png new file mode 100644 index 00000000000..c57c44563ac Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/application-configuration-08.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/application-configuration-09.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/application-configuration-09.png new file mode 100644 index 00000000000..cbb2492e1b8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/application-configuration-09.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/application-configuration-10.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/application-configuration-10.png new file 
mode 100644 index 00000000000..328fc9e0cca Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/application-configuration-10.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/application-configuration-11.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/application-configuration-11.png new file mode 100644 index 00000000000..59df10da3c0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/application-configuration-11.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/application-configuration-12.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/application-configuration-12.png new file mode 100644 index 00000000000..015efa8c953 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/application-configuration-12.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/set-default-application-directories-as-variables-00.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/set-default-application-directories-as-variables-00.png new file mode 100644 index 00000000000..3add0162e37 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/set-default-application-directories-as-variables-00.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/set-default-application-directories-as-variables-01.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/set-default-application-directories-as-variables-01.png new file mode 100644 index 00000000000..134c06d6f93 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/set-default-application-directories-as-variables-01.png 
differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/set-default-application-directories-as-variables-02.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/set-default-application-directories-as-variables-02.png new file mode 100644 index 00000000000..a1d280b6903 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/set-default-application-directories-as-variables-02.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/set-default-application-directories-as-variables-03.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/set-default-application-directories-as-variables-03.png new file mode 100644 index 00000000000..f7618e517ac Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/set-default-application-directories-as-variables-03.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/set-default-application-directories-as-variables-04.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/set-default-application-directories-as-variables-04.png new file mode 100644 index 00000000000..cdf6754cebd Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/set-default-application-directories-as-variables-04.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/set-default-application-directories-as-variables-05.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/set-default-application-directories-as-variables-05.png new file mode 100644 index 00000000000..193179a1182 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/set-default-application-directories-as-variables-05.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/set-default-application-directories-as-variables-06.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/set-default-application-directories-as-variables-06.png new file mode 100644 index 00000000000..21501cd8c4b Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/applications/static/set-default-application-directories-as-variables-06.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/_category_.json b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/_category_.json new file mode 100644 index 00000000000..f3453034d65 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "Add Approvals", + "position": 80, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "Add Approvals" + }, + "customProps": { + "helpdocs_category_id": "4edbfn50l8" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/approvals.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/approvals.md new file mode 100644 index 00000000000..8275382f77a --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/approvals.md @@ -0,0 +1,118 @@ +--- +title: Harness UI Approvals +description: Add Approval steps to Workflows or Pipelines, so that deployments must receive approval before they can proceed. +sidebar_position: 10 +helpdocs_topic_id: 0ajz35u2hy +helpdocs_category_id: 4edbfn50l8 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can specify Harness User Group(s) to approve or reject a Pipeline or Workflow. During deployment, the User Group members use the Harness Manager to approve or reject the deployment. 
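The three approval actions described below (Approve, Reject and Follow Failure Strategy, Reject and Rollback) map to deployment outcomes in a straightforward way. A conceptual sketch of that mapping (the action and outcome names are illustrative, not Harness identifiers):

```python
def handle_approval_action(action, workflow_failure_strategy="ROLLBACK"):
    """Map a UI approval action to a deployment outcome.

    Illustrative names only: REJECT_FOLLOW_FAILURE_STRATEGY defers to
    whatever Failure Strategy the Workflow configures (rollback by default),
    while REJECT_AND_ROLLBACK rolls back regardless of that strategy.
    """
    if action == "APPROVE":
        return "PROCEED"
    if action == "REJECT_FOLLOW_FAILURE_STRATEGY":
        return workflow_failure_strategy
    if action == "REJECT_AND_ROLLBACK":
        return "ROLLBACK"
    raise ValueError(f"unknown action: {action}")
```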
+ +The other approval mechanisms are: + +* [Jira Approvals](jira-based-approvals.md) +* [ServiceNow Approvals](service-now-ticketing-system.md) +* [Custom Shell Script Approvals](shell-script-ticketing-system.md) + +### Before You Begin + +* [Workflows](../workflows/workflow-configuration.md) +* [Pipelines](../pipelines/pipeline-configuration.md) +* [Create Pipeline Templates](../pipelines/templatize-pipelines.md) +* [User Notifications and Alert Settings](https://docs.harness.io/article/kf828e347t-notification-groups) +* [Managing Users and Groups (RBAC)](https://docs.harness.io/article/ven0bvulsj-users-and-permissions) + +### Review: Approval Options + +When you run a Workflow with an Approval step, there are three options available: + +![](./static/approvals-08.png) + +* **Approve:** the step is marked as approved and the Workflow execution proceeds. +* **Reject and Follow Failure Strategy:** initiates the Failure Strategy for the Workflow. By default, any Workflow you create has rollback as the default Failure Strategy. But in many cases you will have a different Failure Strategy. +* **Reject and Rollback:** rolls back the Workflow regardless of the Workflow Failure Strategy. + +See [Define Workflow Failure Strategy](../workflows/define-workflow-failure-strategy-new-template.md) for details on **Rollback Workflow** and **Rollback Provisioners after Phases** options. + +Currently, **Rollback Provisioners after Phases** is behind the feature flag `ROLLBACK_PROVISIONER_AFTER_PHASES`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. + +### Add an Approval Step in a Pipeline + +In your Pipeline, in **Pipeline Stages**, click **+**. The following settings appear. + +![](./static/approvals-09.png) + +Select **Approval Step**. + +Select **Harness UI** in the **Ticketing System**. + +Select one or more **User Group(s)** to notify for the approval requests. + +You can template this setting, turning it into a parameter. 
Just click the template button: + +![](./static/approvals-10.png) + +In a Pipeline or Workflow Approval step, clicking this button creates a variable named `${User_Group}`. This is the default variable, but you can edit its name or replace the variable with a Service or Workflow variable, or even an [Application Default](../applications/set-default-application-directories-as-variables.md) variable. + +When you deploy the Pipeline or Workflow, you are prompted to select a User Group: + +![](./static/approvals-11.png) + +You cannot pass in a value for this templated setting from another Workflow. + +Ensure that the User Groups you select have **Action:** **read**, **Permission Type: Deployments**, and **Application:** the current Application or **All Applications**. + +![](./static/approvals-12.png) + +Enter the time duration that Harness should wait for the approval or rejection before killing the deployment process. You can use `w` for weeks, `d` for days, `h` for hours, `m` for minutes, `s` for seconds, and `ms` for milliseconds. For example, `1d` for one day. + +The maximum is 3w 3d 20h 30m. + +Select the **Execute in Parallel with Previous Step** checkbox to execute the steps in parallel. + +Click **Advanced Settings** to set the additional settings. + +![](./static/approvals-13.png) + +Select either **Do not skip**, **Skip always**, or **Skip Based on Assertion Expression** for setting the skip option. For more information, see [Pipeline Skip Conditions](../pipelines/skip-conditions.md). + +Click **Add Variable**. These variables serve as inputs to later stages of the same Pipeline, where they support conditional execution or user overrides. See [Pipeline Skip Conditions](../pipelines/skip-conditions.md) and [Using Variables in Workflow Approvals](use-variables-for-workflow-approval.md). + +In **Publish Variable Name**, enter a unique parent name for all of the output variables. 
+ +For example, if you add `info` in **Publish Variable Name**, and you have an Input Variable named `foo`, you can reference it in subsequent steps using the expression `${context.info.foo}`. + +Do not use reserved words in **Publish Variable Name**, such as `var`. See [Variable Expression Limitations and Restrictions](https://docs.harness.io/article/9ob3r6v9tg-variable-expression-name-restrictions). + +Select **Auto-Reject previous deployments paused in this stage on approval** to reject previous deployments of this Pipeline with the same Services before this approval stage. This prevents older Pipelines from being approved and older builds from being deployed in the Environment. + +Currently, Auto-Reject previous deployments is behind the Feature Flag `AUTO_REJECT_PREVIOUS_APPROVALS`. Contact Harness Support to enable the feature. + +Click **Submit**. + +Deploy the Pipeline. When you deploy your Pipeline, the **Approval Stage** notifies the selected User Group(s), via their configured [notification settings](https://docs.harness.io/article/kf828e347t-notification-groups#notification_settings_for_user_groups), to approve or reject the deployment. + +On the **Deployments** page, the **Approval Stage** displays the following information: + +* **Started At**: The time at which the Pipeline was triggered. +* **Time Remaining**: Time remaining to complete the Pipeline deployment. +* **Approval User Groups**: The user group(s) that you have specified to notify for the approval requests. +* **Timeout**: The time duration that Harness should wait for the approval or rejection before killing the deployment process. The maximum timeout duration is 24 days. +* **Will Expire At**: The time at which the Pipeline will expire. +* **Triggered By**: The user who triggered the Pipeline deployment. It can be triggered using a [Pipeline](../pipelines/pipeline-configuration.md) or [Trigger](../triggers/add-a-trigger-2.md) process. 
+* **Variables**: Details of the variable inputs that you have specified for the conditional execution of later Pipeline stages. For more information, see [Skip Based on Assertion Expression](../pipelines/skip-conditions.md#skip-based-on-assertion-expression). +From this page, you can also **Approve** or **Reject** the Pipeline deployment, or approve/reject it with a note. + +### Add an Approval Step in a Workflow + +You can add UI Approval steps in a Workflow. Configuring the Workflow Approval step is similar to Pipeline UI Approvals, although the Workflow Approval steps only have the **Additional Input Variables** in **Advanced Settings**. + +See [Using Variables in Workflow Approvals](use-variables-for-workflow-approval.md). + +### Needs Approval + +Here is an example of a Pipeline Approval Stage: + +![](./static/approvals-14.png) + +### Auto-Reject Previous Deployments + +Currently, this feature is behind the Feature Flag `AUTO_REJECT_PREVIOUS_APPROVALS`. Contact Harness Support to enable the feature. You will see this option only if there are other Pipelines/Workflows queued for approval at the same stage. + +While approving a Workflow or Pipeline deployment, you can auto-reject previous deployments of the same Pipeline/Workflow (for the same Service and Infrastructure Definition combination) paused at the same approval stage. + +![](./static/approvals-15.png) \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/jira-based-approvals.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/jira-based-approvals.md new file mode 100644 index 00000000000..d255abca2b9 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/jira-based-approvals.md @@ -0,0 +1,61 @@ +--- +title: Jira Approvals +description: Describes how to add Jira based approvals for a Pipeline or a Workflow. 
+sidebar_position: 20 +helpdocs_topic_id: qxki6o7y31 +helpdocs_category_id: 4edbfn50l8 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can use Jira to approve or reject a Workflow or Pipeline step. + +In your Harness Workflow or Pipeline, you define a Jira ticket and approval and rejection criteria. During deployment, a Jira ticket is created and its approval/rejection determines if the Pipeline (and Workflow) deployment may proceed. For details on integrating Jira with Harness, see [Jira Integration](../workflows/jira-integration.md). + +The other approval mechanisms are: + +* [Harness UI Approvals](approvals.md) +* [ServiceNow Approvals](service-now-ticketing-system.md) +* [Custom Shell Script Approvals](shell-script-ticketing-system.md) + +### Before You Begin + +* [Workflows](../workflows/workflow-configuration.md) +* [Pipelines](../pipelines/pipeline-configuration.md) +* [Create Pipeline Templates](../pipelines/templatize-pipelines.md) +* [Use Variable Expressions](https://docs.harness.io/category/use-variable-expressions) + +### Step: Add an Approval Step + +1. In your Pipeline, in **Pipeline Stages**, click **+**. The following settings appear. + + ![](./static/jira-based-approvals-16.png) + +2. Select **Approval Step**. +3. Select **Jira** in the **Ticketing System**. +4. Select the Jira account in **Jira Connector** that you want to use by selecting the Collaboration Provider you added for the account. For more information, see [Add Jira Collaboration Provider](https://docs.harness.io/article/bhpffyx0co-add-jira-collaboration-provider). +5. Select the Jira **Project** containing the Jira issue you want to use for approval. You can enter text and expressions together. +6. Enter the **Key/Issue ID**. It is the output variable for a Jira issue created in a Workflow, for example `${Jiravar.issueId}`. You can enter the Jira Key/Issue ID for any Jira issue in the Jira project. You can also enter text and expressions together. +7. 
Enter the time duration in **Timeout** that Harness should wait for the approval or rejection before failing the deployment. You can use `w` for weeks, `d` for days, `h` for hours, `m` for minutes, `s` for seconds, and `ms` for milliseconds. For example, `1d` for one day. The maximum is 3w 3d 20h 30m. +8. Enter a value for approving the Pipeline. You can select a status from the **Approved Pipeline if Jira Status is** drop-down list or enter an expression. You can enter text and expressions together. +9. Enter a value for rejecting the Pipeline. You can select a status from the **Rejected Pipeline if Jira Status is** drop-down list or enter an expression. You can enter text and expressions together. +10. Select the **Execute in Parallel with Previous Step** checkbox to execute the steps in parallel. +11. Select either **Do not skip** or **Skip always** for setting the skip option. For more information, see [Skip Execution](../pipelines/skip-conditions.md#skip-execution). +12. Click **Submit**. +13. Deploy your Pipeline and go to the **Deployments** page. The **Approval Stage** displays the following information: + * **Message**: The "Message" appears only when the stage of a Pipeline is completed, and there is no action pending from the user or system. It displays the completed status of the process. For example, approval provided, approval rejected, or Pipeline aborted. + * **Started At**: The time at which the Pipeline was triggered. + * **Ended At**: The time at which the system or a user completed the approval process. + * **Issue URL**: Link to the Jira issue. + * **Timeout**: The time duration that Harness should wait for the approval or rejection before killing the deployment process. + * **Triggered By**: The user who triggered the Pipeline deployment. It can be triggered using a [Pipeline](../pipelines/pipeline-configuration.md) or [Trigger](../triggers/add-a-trigger-2.md) process. 
+ * **Approval Criteria**: Criterion set for approving the request. + * **Current Value**: Current status of the Jira issue. + * **Rejection Criteria**: Criterion set for rejecting the request. + + ![](./static/jira-based-approvals-17.png) + +You can click the **Issue URL** link to see the Jira issue in Jira's UI and set the approval or rejection status. In this example, the status **Done** fulfills the Approval Criteria. + +![](./static/jira-based-approvals-18.png) + +Once the Jira issue is approved, the Approval stage turns green in **Deployments**, and the deployment continues. + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/service-now-ticketing-system.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/service-now-ticketing-system.md new file mode 100644 index 00000000000..c4d1d5fc068 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/service-now-ticketing-system.md @@ -0,0 +1,75 @@ +--- +title: ServiceNow Approvals +description: Describes how to add ServiceNow based approvals for a Pipeline or a Workflow. +sidebar_position: 40 +helpdocs_topic_id: 9nkuhm8moo +helpdocs_category_id: 4edbfn50l8 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can use ServiceNow to approve or reject a Workflow or Pipeline step. + +In your Harness Workflow or Pipeline, you define a ServiceNow ticket and the approval and rejection criteria. During deployment, a ServiceNow ticket is created and its approval/rejection determines if the Pipeline (and Workflow) deployment may proceed. For details on integrating ServiceNow with Harness, see [ServiceNow Integration](../workflows/service-now-integration.md). 
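Conceptually, a ticket-based approval like this behaves as a polling loop: the ticket is created, then its status is repeatedly checked against the approval and rejection criteria until one matches or the step's timeout expires. A simplified sketch of that loop (not Harness's implementation; `get_status` is a stand-in for a ticket-status lookup):

```python
import time

def wait_for_approval(get_status, approved_statuses, rejected_statuses,
                      timeout_s=60, poll_s=5):
    """Poll a ticket's status until it matches the approval or rejection
    criteria, or until the timeout expires (which fails the deployment)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status()
        if status in approved_statuses:
            return "APPROVED"
        if status in rejected_statuses:
            return "REJECTED"
        time.sleep(poll_s)
    return "EXPIRED"
```

For example, with `approved_statuses={"Closed"}` and `rejected_statuses={"Canceled"}`, a ticket moving through `New → Implement → Closed` would return `"APPROVED"`, while one that never reaches either set would return `"EXPIRED"` after the timeout.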
+ +The other approval mechanisms are: + +* [Jira Approvals](jira-based-approvals.md) +* [Harness UI Approvals](approvals.md) +* [Custom Shell Script Approvals](shell-script-ticketing-system.md) + +### UTC Timezone Only + +The ServiceNow API only accepts date/time values in the UTC timezone. Consequently, input for any date/time fields in Harness ServiceNow steps must be provided in UTC, irrespective of the time zone settings in your ServiceNow account. + +The time zone settings govern only how the values are displayed, not their actual value. + +The display values in the Harness UI depend on the ServiceNow time zone settings. + +### Step: Add an Approval Step + +The following steps are for a Pipeline Approval stage, but the same settings apply to Workflow Approval steps. + +1. In your Pipeline, in **Pipeline Stages**, click **+**. The following settings appear. +2. Select **Approval Step**. +3. Select **ServiceNow** in the **Ticketing System**. +4. Select the ServiceNow account in **ServiceNow Connector** that you want to use by selecting the Collaboration Provider you added for the account, as described in [Add ServiceNow as a Collaboration Provider](../workflows/service-now-integration.md#add-service-now-as-a-collaboration-provider). Use the same provider you used to create the ticket in the Workflow. +5. Select the ServiceNow **Ticket Type** from the drop-down list. Use the same type as the ticket you created in the Workflow. +6. Enter the **Issue Number**. It is an output variable for a ServiceNow issue created in a Workflow, such as `${snow.issueId}`. +7. Enter the time duration in **Timeout** that Harness should wait for the approval or rejection before failing the deployment. You can use `w` for weeks, `d` for days, `h` for hours, `m` for minutes, `s` for seconds, and `ms` for milliseconds. For example, `1d` for one day. The maximum is 3w 3d 20h 30m. +8. 
In **Approval**, define the approval criteria using the ServiceNow status items. +9. In **Rejection**, define the rejection criteria using the ServiceNow status items. + For both Approval and Rejection, the criteria you see depend on the Ticket Type you selected: + + ![](./static/service-now-ticketing-system-19.png) + +10. In **Enable Approval Change Window**, use the **Window Start** and **Window End** values to specify the window in which Harness will proceed with the deployment. Once this step is approved, Harness will not proceed with deployment unless the current time is within this window. The values that appear depend on the type selected in **Ticket Type**. + + ![](./static/service-now-ticketing-system-20.png) + + The start and end times use the time zone set in the ServiceNow account selected in **ServiceNow Connector**. This is available in Approvals only. +11. Select the **Execute in Parallel with Previous Step** checkbox to execute the steps in parallel. +12. Select either **Do not skip** or **Skip always** for setting the skip option. For more information, see [Skip Execution](../pipelines/skip-conditions.md#skip-execution). +13. Click **Submit**. +14. Deploy your Workflow or Pipeline and go to the **Deployments** page. The **Approval Stage** displays the following information: +* **Message**: The "Message" appears only when the stage of a Pipeline is completed, and there is no action pending from the user or system. It displays the completed status of the process. For example, approval provided, approval rejected, or Pipeline aborted. +* **Started At**: The time at which the Pipeline was triggered. +* **Ended At**: The time at which the system or a user completed the approval process. +* **Timeout**: The time duration that Harness should wait for the approval or rejection before killing the deployment process. +* **Triggered By**: The user who triggered the Pipeline deployment. 
It can be triggered using a [Pipeline](../pipelines/pipeline-configuration.md) or [Trigger](../triggers/add-a-trigger-2.md) process. +* **Approval Criteria**: Criterion set for approving the request. +* **Current Value**: Current status of ServiceNow ticket. +* **Rejection Criteria:** Criterion set for rejecting the request. + +![](./static/service-now-ticketing-system-21.png) + +### Option: State Model and Transitions + +If you select **Change** in **Ticket Type**, you enable ServiceNow's state model to move and track change requests through several states. + +![](./static/service-now-ticketing-system-22.png) + +After the change request is authorized by all the approvers, it transitions into Scheduled state by default. + +For more information, see [State model and transitions](https://docs.servicenow.com/) from ServiceNow. + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/shell-script-ticketing-system.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/shell-script-ticketing-system.md new file mode 100644 index 00000000000..d5993910c3c --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/shell-script-ticketing-system.md @@ -0,0 +1,48 @@ +--- +title: Custom Shell Script Approvals +description: Describes how to add custom shell script based approvals for a Pipeline or a Workflow. +sidebar_position: 30 +helpdocs_topic_id: lf79ixw2ge +helpdocs_category_id: 4edbfn50l8 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can add approval steps in Pipelines and Workflows using a custom shell script ticketing system. 
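Conceptually, this mechanism re-runs your script at a configured retry interval until it signals approval or the step times out. The sketch below models that loop under one simplifying assumption, namely that an exit code of 0 signals approval; it is not Harness's actual implementation:

```python
import subprocess
import time

def run_approval_script(script, retry_interval_s, timeout_s):
    """Re-run a shell approval script until it succeeds or time runs out.

    Assumption for this sketch: exit code 0 means "approved"; any other
    exit code means "not yet", so we wait for the retry interval and retry.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = subprocess.run(["sh", "-c", script])
        if result.returncode == 0:
            return "APPROVED"
        time.sleep(retry_interval_s)
    return "EXPIRED"

print(run_approval_script("true", retry_interval_s=1, timeout_s=5))
# → APPROVED
```

A real approval script would typically query an external ticketing system (for example, with `curl`) and exit nonzero until the ticket reaches an approved state.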
+ +The other approval options are: + +* [Jira Approvals](jira-based-approvals.md) +* [Harness UI Approvals](approvals.md) +* [ServiceNow Approvals](service-now-ticketing-system.md) + +### Before You Begin + +* [Workflows](../workflows/workflow-configuration.md) +* [Pipelines](../pipelines/pipeline-configuration.md) +* [Create Pipeline Templates](../pipelines/templatize-pipelines.md) + +### Step: Add an Approval Step for a Pipeline + +1. In your Pipeline, in **Pipeline Stages**, click **+**. The following settings appear. + + ![](./static/shell-script-ticketing-system-00.png) + +2. Select **Approval Step**. +3. Select **Custom Shell Script** in the **Ticketing System**. +4. Enter a custom shell script to approve or reject the Pipeline deployment request. +5. Enter the time duration in **Retry Interval (sec)** that Harness should wait between attempts to successfully execute the script. +6. Enter the time duration that Harness should wait for the approval or rejection before killing the deployment process. You can use `w` for weeks, `d` for days, `h` for hours, `m` for minutes, `s` for seconds, and `ms` for milliseconds. For example, `1d` for one day. +7. In **Delegate Selector**, select the Selector for the Delegate(s) you want to use. Harness will use Delegates matching the Selector(s) for this approval step. For more information, see [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors). +8. Select the **Execute in Parallel with Previous Step** checkbox to execute the steps in parallel. +9. Select either **Do not skip** or **Skip always** for setting the skip option. For more information, see [Skip Execution](../pipelines/skip-conditions.md#skip-execution). +10. Click **Submit**. + +Deploy your Pipeline and go to the **Deployments** page. 
The **Approval Stage** displays the following information: + +* **Message**: The "Message" appears only when the stage of a Pipeline is completed, and there is no action pending from the user or system. It displays the completed status of the process. For example, approval provided, approval rejected, or Pipeline aborted. +* **Started At**: The time at which the Pipeline was triggered. +* **Ended At**: The time at which the system or a user completed the approval process. +* **Timeout**: The time duration that Harness should wait for the approval or rejection before killing the deployment process. The maximum is 3w 3d 20h 30m. +* **Triggered By**: The user who triggered the Pipeline deployment. It can be triggered using a [Pipeline](../pipelines/pipeline-configuration.md) or [Trigger](../triggers/add-a-trigger-2.md) process. + +Once the deployment is completed, the Details panel also displays a log of the script's execution. + +![](./static/shell-script-ticketing-system-01.png) \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/slack-approvals.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/slack-approvals.md new file mode 100644 index 00000000000..dcb2a973869 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/slack-approvals.md @@ -0,0 +1,110 @@ +--- +title: Slack Approvals in Workflows and Pipelines +description: Approve or reject Workflows and Pipelines directly from Slack. +sidebar_position: 60 +helpdocs_topic_id: mtr398a9cl +helpdocs_category_id: 4edbfn50l8 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Feature Flags can only be removed for Harness Professional and Essentials editions. 
Once the feature is released to a general audience, it's available for Trial and Community Editions. Harness users can approve or reject Workflows and Pipelines directly from Slack. + +To configure Harness notifications to Slack *without* enabling deployment approvals from Slack, see [Send Notifications Using Slack](https://docs.harness.io/article/4blpfqwfdc-send-notification-using-slack) and [Send Slack Messages from Workflows](https://docs.harness.io/article/4zd81qhhiu-slack-notifications). + +### Step 1: Create Slack App + +Create and configure an app in your Slack developer account at [https://api.slack.com](https://api.slack.com/). + +For details on these steps, see Slack's [Incoming Webhooks](https://api.slack.com/incoming-webhooks) documentation. + +When you create the app, do the following: + +1. Name your app, assign it to a Workspace, and select **Create App**. + + ![](./static/slack-approvals-02.png) + +2. Select **Incoming Webhooks**. +3. Enable the **Activate Incoming Webhooks** slider: + + ![](./static/slack-approvals-03.png) + +4. Click **Add New Webhook to Workspace**. +5. At the **Confirm your identity** challenge, select the Slack channel to receive approval notifications. (To select an appropriate channel, see [Configure Harness User Notifications](#configure_notifications) below.) + + ![](./static/slack-approvals-04.png) + +6. Click **Install** (or **Authorize**) to confirm this setup. + +### Step 2: Configure Harness API Endpoint in Slack + +Configure the API endpoint from Harness for interactivity. + +For details, see Slack's [Making Messages Interactive](https://api.slack.com/interactive-messages) documentation. + +1. In Slack, select **Interactive Components**. +2. Enable the **Interactivity** slider:![](./static/slack-approvals-05.png) +3. Enter the **Request URL**, as either `http` or `https`. The **Request URL** is independent of the approval channel that we'll configure [below](#configure_notifications). 
+ + Harness SaaS users should enter a URL of the following form: + `https://app.harness.io/gateway/api/slack/approval?accountId=` + + On-prem users will instead have a URL of the form: + `https://.harness.io/gateway/api/slack/approval?accountId=` + +4. Click **Save**. +5. Return to the Slack app's **Incoming Webhooks** tab. +6. From the resulting app settings, copy the **Webhook URL** to your clipboard.![](./static/slack-approvals-06.png) + +You've configured API endpoints in both your Slack app and Harness. + +Now you're ready to set up notifications for the Harness User Group and the Slack channel where you want to enable approvals. + +### Step 3: Configure Harness User Notifications for Slack + +Enabling Slack notifications in Harness is straightforward, but there are some important considerations: + +* Select a Harness User Group that has appropriate permissions. These permissions must include **Application: Execute Workflow** and **Execute Pipeline** permissions for all relevant Applications. +See [Application Permissions](https://docs.harness.io/article/ven0bvulsj#application_permissions) in [Managing Users and Groups (RBAC)](https://docs.harness.io/article/ven0bvulsj-users-and-permissions). +* You must create a Slack channel that has the same members as the User Group. +* You must manually keep the Slack channel's membership synchronized with the User Group's membership. +* You should normally configure this as a private Slack channel. + +Any member of the Slack channel that you configure will be able to approve or reject deployments. This is why Harness emphasizes the importance of using a private channel, and of manually maintaining synchronization between your Harness User Group's membership and the Slack channel's membership. Once you have selected the appropriate User Group and Slack channel to use: + +1. Open the User Group's notification settings. 
See [Add Notification Settings for User Groups](https://docs.harness.io/article/kf828e347t#step_add_notification_settings_for_user_groups). +2. In **Notification Settings**, specify the **Slack Channel Name**. +3. Paste in the **Slack Webhook URL** you copied above in [Step 2: Configure Harness API Endpoint in Slack](slack-approvals.md#step-2-configure-harness-api-endpoint-in-slack).![](./static/slack-approvals-07.png) + +### Option: Approve Workflows via Slack + +Once you've configured the URLs and channel notification settings above, members of the configured channel can approve or reject Workflow and Pipeline deployments directly from that channel. + +To set up Slack approval within a Workflow: + +1. Within a Workflow section, select **Add Step**. +2. Click **Approval**. +3. In **Ticketing System**, select **Harness UI**. +4. In the **User Groups** drop-down, select the group you enabled above in [Step 3: Configure Harness User Notifications for Slack](slack-approvals.md#step-3-configure-harness-user-notifications-for-slack). +5. Click **Submit**. + +To see how approval of this step works in practice, see [Harness UI Approvals](approvals.md). + +### Option: Approve Pipelines via Slack + +To set up Slack approval within a Pipeline: + +1. Add a stage to the Pipeline. +2. Click **Approval Step**. +3. In **Ticketing System**, select **Harness UI**. +4. Select the User Group you enabled above in [Step 3: Configure Harness User Notifications for Slack](slack-approvals.md#step-3-configure-harness-user-notifications-for-slack). +5. Set a **Timeout** and click **Submit**. + +To see how approval of this step works in practice, see [Harness UI Approvals](approvals.md). 
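As a quick sanity check of the webhook you pasted in the User Group's Notification Settings, you can post a test message to the approval channel. The webhook URL below is a placeholder (use the one copied from your Slack app); Slack Incoming Webhooks accept a simple JSON body with a `text` field.

```shell
# Placeholder webhook URL; substitute the Webhook URL copied from your
# Slack app's Incoming Webhooks settings.
WEBHOOK_URL="https://hooks.slack.com/services/T000/B000/XXXX"

# Slack Incoming Webhooks accept a JSON payload with a "text" field.
payload='{"text": "Harness approval channel test"}'
echo "$payload"

# Uncomment to actually post the test message to the channel:
# curl -X POST -H 'Content-Type: application/json' --data "$payload" "$WEBHOOK_URL"
```

If the message appears in the channel, the webhook is wired to the right place and channel members will see approval notifications there.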
+ +### See Also + +* [Send Notifications Using Slack](https://docs.harness.io/article/4blpfqwfdc-send-notification-using-slack) +* [Send Slack Messages from Workflows](https://docs.harness.io/article/4zd81qhhiu-slack-notifications) +* [Set Up Slack Notifications for CE](https://docs.harness.io/article/5xiwejal3p-set-up-slack-notifications) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-08.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-08.png new file mode 100644 index 00000000000..e076dfe4bc0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-08.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-09.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-09.png new file mode 100644 index 00000000000..782ed338219 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-09.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-10.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-10.png new file mode 100644 index 00000000000..f39befba217 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-10.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-11.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-11.png new file mode 100644 index 00000000000..3c8fffa399f Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-11.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-12.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-12.png new file 
mode 100644 index 00000000000..c52bb05aa81 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-12.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-13.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-13.png new file mode 100644 index 00000000000..93efebf2b89 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-13.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-14.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-14.png new file mode 100644 index 00000000000..0e589dafdcb Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-14.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-15.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-15.png new file mode 100644 index 00000000000..2cea95fdd19 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/approvals-15.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/jira-based-approvals-16.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/jira-based-approvals-16.png new file mode 100644 index 00000000000..c73625ed276 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/jira-based-approvals-16.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/jira-based-approvals-17.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/jira-based-approvals-17.png new file mode 100644 index 00000000000..75ccc67591f Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/jira-based-approvals-17.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/jira-based-approvals-18.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/jira-based-approvals-18.png new file mode 100644 index 00000000000..9ac343f3356 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/jira-based-approvals-18.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/service-now-ticketing-system-19.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/service-now-ticketing-system-19.png new file mode 100644 index 00000000000..1d49ce9a708 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/service-now-ticketing-system-19.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/service-now-ticketing-system-20.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/service-now-ticketing-system-20.png new file mode 100644 index 00000000000..9204240f9fd Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/service-now-ticketing-system-20.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/service-now-ticketing-system-21.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/service-now-ticketing-system-21.png new file mode 100644 index 00000000000..dfffc134102 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/service-now-ticketing-system-21.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/service-now-ticketing-system-22.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/service-now-ticketing-system-22.png new 
file mode 100644 index 00000000000..ddc2d42d77f Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/service-now-ticketing-system-22.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/shell-script-ticketing-system-00.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/shell-script-ticketing-system-00.png new file mode 100644 index 00000000000..316ca7f960a Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/shell-script-ticketing-system-00.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/shell-script-ticketing-system-01.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/shell-script-ticketing-system-01.png new file mode 100644 index 00000000000..bf9f474cb89 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/shell-script-ticketing-system-01.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/slack-approvals-02.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/slack-approvals-02.png new file mode 100644 index 00000000000..9f4d88f33ab Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/slack-approvals-02.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/slack-approvals-03.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/slack-approvals-03.png new file mode 100644 index 00000000000..2f15ba51c1f Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/slack-approvals-03.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/slack-approvals-04.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/slack-approvals-04.png new file mode 100644 index 00000000000..d1269222665 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/slack-approvals-04.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/slack-approvals-05.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/slack-approvals-05.png new file mode 100644 index 00000000000..98c33cc0ae4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/slack-approvals-05.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/slack-approvals-06.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/slack-approvals-06.png new file mode 100644 index 00000000000..11693c0b940 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/slack-approvals-06.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/slack-approvals-07.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/slack-approvals-07.png new file mode 100644 index 00000000000..182fb4f8ff4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/slack-approvals-07.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/use-variables-for-workflow-approval-23.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/use-variables-for-workflow-approval-23.png new file mode 100644 index 00000000000..9cf290d6be7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/use-variables-for-workflow-approval-23.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/use-variables-for-workflow-approval-24.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/use-variables-for-workflow-approval-24.png new file mode 100644 index 00000000000..38afb173c21 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/use-variables-for-workflow-approval-24.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/use-variables-for-workflow-approval-25.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/use-variables-for-workflow-approval-25.png new file mode 100644 index 00000000000..667123443b9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/use-variables-for-workflow-approval-25.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/use-variables-for-workflow-approval-26.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/use-variables-for-workflow-approval-26.png new file mode 100644 index 00000000000..bada785826e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/use-variables-for-workflow-approval-26.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/use-variables-for-workflow-approval-27.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/use-variables-for-workflow-approval-27.png new file mode 100644 index 00000000000..10a0669b396 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/use-variables-for-workflow-approval-27.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/use-variables-for-workflow-approval-28.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/use-variables-for-workflow-approval-28.png new file mode 100644 index 00000000000..d80062fb4fc Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/static/use-variables-for-workflow-approval-28.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/use-variables-for-workflow-approval.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/use-variables-for-workflow-approval.md new file mode 100644 index 00000000000..f08ac6a5127 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/approvals/use-variables-for-workflow-approval.md @@ -0,0 +1,126 @@ +--- +title: Using Variables in Workflow Approvals +description: Describes how to use variables in a Workflow approval step. +sidebar_position: 50 +helpdocs_topic_id: 5pspec1apl +helpdocs_category_id: 4edbfn50l8 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can add predefined or user-defined variables in a Workflow Approval step. When the Workflow is deployed, these variables become available as inputs for logging and auditing and can be overwritten by approvers. + +The available approval mechanisms are: + +* [Jira Approvals](jira-based-approvals.md) +* [ServiceNow Approvals](service-now-ticketing-system.md) +* [Harness UI Approvals](approvals.md) +* [Custom Shell Script Approvals](shell-script-ticketing-system.md) + +### Before You Begin + +* [Workflows](../workflows/workflow-configuration.md) +* [Pipelines](../pipelines/pipeline-configuration.md) +* [Create Pipeline Templates](../pipelines/templatize-pipelines.md) + +### Step 1: Add an Approval Step in a Workflow + +1. In your Workflow, click **Add Step**. + + ![](./static/use-variables-for-workflow-approval-23.png) + +2. Select **Approval**. You can search or click **Flow Control** and select **Approval**. +3. Click **Next**. + + The Add Step settings appear. + + ![](./static/use-variables-for-workflow-approval-24.png) + +4. In **Configure Approval**, enter **Name**. +5. 
Select **Harness UI** in the **Ticketing System**. You can use **Jira Service Desk**, **ServiceNow**, and **Custom Shell Script** based approvals as well. For more information on how to add these approvals, see [Jira Approvals](jira-based-approvals.md), [ServiceNow Approvals](service-now-ticketing-system.md), and [Custom Shell Script Approvals](shell-script-ticketing-system.md). +6. Select one or more **User Group(s)** to notify for the approval requests. +7. Enter the time duration that Harness should wait for the approval or rejection before killing the deployment process. You can use `w` for weeks, `d` for days, `h` for hours, `m` for minutes, `s` for seconds, and `ms` for milliseconds. For example, `1d` for one day. +8. Click **Advanced Settings** to set the additional settings. + + ![](./static/use-variables-for-workflow-approval-25.png) + +9. Click **Add Variable** to define variables. To define a new variable, enter a **Name** and a **Default Value**. You can use this variable combined with its parent variable name to reference the variable elsewhere. +10. In **Publish Variable Name**, enter a parent name for the collection of subordinate variables that you can reference with its child variables (as you defined in the previous step). This parent name helps to avoid conflicts when there are subordinate variables of the same name within the same scope. +11. Set the **Scope** for the variables that you defined in the previous step. You can choose **Phase**, **Workflow**, or **Pipeline**. +12. Click **Submit** to save this Approval step, along with its variables. + +### Step 2: Approve Workflows + +When it deploys, the Workflow uses the Approval variables defined in **Add an Approval Step in a Workflow** above. Deployment pauses at the Approval step. + +![](./static/use-variables-for-workflow-approval-26.png) + +1. Click the **Approval** step. + + A user (in one of the User Groups configured as approvers) can click the Approval step. 
The **Needs Approval** settings appear. + + ![](./static/use-variables-for-workflow-approval-27.png) + +2. Enter your comments, and click **Approve**. + +Once this step is approved, the Workflow can continue deployment. + +![](./static/use-variables-for-workflow-approval-28.png) + +### Option: Use Approval Variables in Other Workflow Steps + +Approval variables can be defined only within Workflow Approval steps, but they can be referenced in other Workflow steps, such as the [Shell Script](../workflows/capture-shell-script-step-output.md) step. + +For all of the following examples, `published_name` refers to the name you entered in the **Publish Variable Name** setting in the Approval step. + +`${published_name.variables.var_name}` + +* Use the `.variables.` prefix when referring to an **Additional Input Variable** that was defined in a Workflow Approval step. + +`${approvedBy.name}` — (Deprecated) + +* The name of the Harness user that approved a Workflow approval step. + +`${approvedBy.email}` — (Deprecated) + +* The email address of the Harness user that approved a Workflow approval step. + +`${published_name.approvedBy.name}` + +* The name of the Harness user that approved a Workflow approval step. +* As of December 2019, this—and the other Approval variables below—must be preceded by a published output variable name (`published_name`). + +`${published_name.approvedBy.email}` + +* The email address of the Harness user that approved a Workflow approval step. + +`${published_name.approvedOn}` + +* The epoch time at which a Workflow approval step was approved. + +`${published_name.comments}` + +* Free-text comments that a user entered when approving (or rejecting) a Workflow approval step. + +`${published_name.timeoutMillis}` + +* Timeout (in milliseconds) set for this approval step. + +`${published_name.approvalStateType}` + +* The ticketing system used for this approval: USER\_GROUP, JIRA, SERVICENOW, and SHELL\_SCRIPT. 
+ +`${published_name.approvalStatus}` + +* The approval's outcome. Can take the values SUCCESS or REJECTED. + +`${published_name.userGroups[].name}` + +* An array of User Groups that were added in a Workflow's approval step. +* For example, if two User Groups were added, you can access those groups' names as `${published_name.userGroups[0].name}` and `${published_name.userGroups[1].name}`. + +### See Also + +* [Pass Variables between Workflows](../expressions/how-to-pass-variables-between-workflows.md) +* [Passing Variables into Workflows and Pipelines from Triggers](../expressions/passing-variable-into-workflows.md) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/environments/_category_.json b/docs/first-gen/continuous-delivery/model-cd-pipeline/environments/_category_.json new file mode 100644 index 00000000000..744ffb650c0 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/environments/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "Add Environments", + "position": 30, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "Add Environments" + }, + "customProps": { + "helpdocs_category_id": "1eqg76ac72" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/environments/environment-configuration.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/environments/environment-configuration.md new file mode 100644 index 00000000000..405ebb9b43e --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/environments/environment-configuration.md @@ -0,0 +1,53 @@ +--- +title: Add an Environment +description: Define environments where your Service can be deployed. +sidebar_position: 10 +helpdocs_topic_id: n39w05njjv +helpdocs_category_id: 1eqg76ac72 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You define your target deployment infrastructure using a Harness Environment. 
Environments represent your deployment infrastructures, such as Dev, QA, Stage, Production, etc. + + +### Before You Begin + +* [Add Services](https://docs.harness.io/category/add-services) + +### Step 1: Add an Environment + +To add an environment, do the following: + +1. Click **Setup**. +2. Click an Application. +3. Click **Environments**. +4. Click **Add Environment**. The **Environment** dialog appears. + + ![](./static/environment-configuration-00.png) + +5. Enter a name and description for the Environment. For example, the name **DEV** with the description: **This is the development environment for example.com**. +6. In **Environment Type**, choose **Production** or **Non-Production**. +7. Click **Submit**. The **Environment Overview** appears. Here you can add the Infrastructure Definition and overrides to the configurations of the Services that use this Environment. + +### Step 2: Add Infrastructure Definition + +The Infrastructure Definition is where you specify the target infrastructure for your deployment. The target infrastructure can be an existing infrastructure or an infrastructure provisioner, such as Terraform or CloudFormation. For detailed information on adding Infrastructure Definition, see [Add an Infrastructure Definition](infrastructure-definitions.md). + +### Step 3: Override a Service Configuration + +For information about how a Service configuration is overwritten in a Kubernetes deployment, see [Override Harness Kubernetes Service Settings](https://docs.harness.io/article/ycacqs7tlx-override-harness-kubernetes-service-settings). You can configure your Environment to override settings of the Services that use the Environment. For example, a Service might use a specific values.yaml file, but your Environment might need to change the name and namespace of the Deployment object because it is deploying the Service to a QA Environment. 
+ +To override a Service configuration, see [Override a Service Configuration in an Environment](override-service-files-and-variables-in-environments.md). + +### Step 4: Add Service Verification + +24/7 Service Guard applies Harness Continuous Verification's unsupervised machine learning to detect regressions and anomalies across transactions and events for the service, and displays the results in the 24/7 Service Guard dashboard. + +For detailed information on adding Service Verification, see [24/7 Service Guard Overview](../../continuous-verification/continuous-verification-overview/concepts-cv/24-7-service-guard-overview.md). + +### Next Steps + +* [Restrict Deployment Access to Specific Environments](https://docs.harness.io/article/twlzny81xl-restrict-deployment-access-to-specific-environments) +* [Add Workflows](https://docs.harness.io/category/add-workflows) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/environments/environment-level-variables-for-all-services.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/environments/environment-level-variables-for-all-services.md new file mode 100644 index 00000000000..ef64fa37366 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/environments/environment-level-variables-for-all-services.md @@ -0,0 +1,114 @@ +--- +title: Create Environment-level Variables and Files for All Services +description: This topic describes how to create a Harness Environment-level variable that is not set in a Service, but is available in any Workflow using the Environment. This is helpful when you want Enviro… +sidebar_position: 60 +helpdocs_topic_id: ki525qfbs0 +helpdocs_category_id: 1eqg76ac72 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes how to create a Harness Environment-level variable that is not set in a Service, but is available in any Workflow using the Environment. 
+ +This is helpful when you want Environment-specific variables that will apply to all Services, but don't want to set up Config Vars in every Service. + +For example, let's say you have many Services using Tomcat that all connect to a backend database. The JDBC port will be different for different Environments, such as QA/SIT/UAT/PROD. Instead of defining a Service-level JDBC\_PORT variable for every Service, you can just create one JDBC\_PORT variable in each Environment. When each Environment is used in a Workflow, it supplies a different value for the JDBC\_PORT variable. + +### Before You Begin + +* [Built-in Variables List](https://docs.harness.io/article/aza65y4af6-built-in-variables-list) +* [Override a Service Configuration in an Environment](override-service-files-and-variables-in-environments.md) +* [Add Service Config Variables](../setup-services/add-service-level-config-variables.md) +* [Add Service Config Files](../setup-services/add-service-level-configuration-files.md) + +### Supported Platforms and Technologies + +See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). + +### Step 1: Create a Service Configuration Override for All Services + +1. In a Harness Application, click **Environments**. +2. In an **Environment**, in **Service Configuration Overrides**, click **Add Configuration Overrides**. +The **Service Configuration Override** settings appear. +3. In **Service**, select **All Services** (at the bottom of the list). You can also select a specific Service. You simply aren't overriding its Service Config Variables or Files. +4. In **Override Type**, select **Variable Override** or **File Override**. + +### Option: Variable Override + +1. In **Configuration Variables**, enter a name for the variable. +This is the name you will use to reference this variable later using the expression `${serviceVariable.ConfigurationVariablesName}` or `${environmentVariable.ConfigurationVariablesName}`. 
+2. In **Type**, select **Text** or **Encrypted Text**.
+3. In **Override Value**, enter the value for the variable or select/add a new Encrypted Text variable.
+4. Click **Submit**.
+
+### Option: File Override
+
+1. Click **Choose File**, and then select the file to add.
+To select a Harness [Encrypted Text file](https://docs.harness.io/article/nt5vchhka4-use-encrypted-file-secrets), click **Encrypt File**, and then select the file.
+2. In **Relative File Path**, enter the name of the file. You can also enter the path where it will be placed on the target host(s).
+This is the name you will use to reference this file later using the expression `${configFile.getAsString("RelativeFilePathName")}`.
+3. Click **Submit**.
+
+### Step 2: Use the Variable or File in a Workflow
+
+1. In a new or existing Workflow, select the Environment in the Workflow's settings.
+2. Add a step to the Workflow that will use the Environment. For example, a [Shell Script](../workflows/capture-shell-script-step-output.md) step.
+3. Reference the Environment variable using an expression.
+
+For Variable Overrides, use the following for unencrypted or encrypted variables: `${serviceVariable.ConfigurationVariablesName}` or `${environmentVariable.ConfigurationVariablesName}`.
+
+For File Overrides, use the following for unencrypted or encrypted files:
+
+* `${configFile.getAsString("RelativeFilePathName")}` — standard text string.
+* `${configFile.getAsBase64("RelativeFilePathName")}` — Base64 encoded.
+
+### Review: Example Deployment
+
+Let's look at an example that uses encrypted and unencrypted variables and files. 
+
+Here is an Environment with 4 Environment-level variables and files:
+
+![](./static/environment-level-variables-for-all-services-23.png)
+
+Here is a Shell Script step in a Workflow referencing them:
+
+
+```
+echo "Encrypted text Env var: " ${serviceVariable.encryptedText}
+
+echo "Unencrypted text Env var: " ${serviceVariable.footext}
+
+echo "Encrypted text Env var: " ${environmentVariable.encryptedText}
+
+echo "Unencrypted text Env var: " ${environmentVariable.footext}
+
+cat < "$PROVISIONER_OUTPUT_PATH"
+```
+The Harness environment variable `"$PROVISIONER_OUTPUT_PATH"` is initialized by Harness and stores the JSON collection returned by your script.
+
+Currently, Harness supports Bash shell scripts. PowerShell will be added soon. This script returns a JSON array describing the instances:
+
+
+```
+{
+  "Instances": [
+    {
+      ...
+      "Status": "online",
+      "InstanceId": "4d6d1710-ded9-42a1-b08e-b043ad7af1e2",
+      "SshKeyName": "US-West-2",
+      "InfrastructureClass": "ec2",
+      "RootDeviceVolumeId": "vol-d08ec6c1",
+      "InstanceType": "t1.micro",
+      "CreatedAt": "2015-02-24T20:52:49+00:00",
+      "AmiId": "ami-35501205",
+      "PublicDnsName": "ec2-192-0-2-0.us-west-2.compute.amazonaws.com",
+      "Hostname": "ip-192-0-2-0",
+      "Ec2InstanceId": "i-5cd23551",
+      "SubnetId": "subnet-b8de0ddd",
+      "SecurityGroupIds": [
+        "sg-c4d3f0a1"
+      ...
+    },
+  ]
+}
+```
+Next, in Harness, you map the keys from the JSON host objects to Shell Script Provisioner fields to tell Harness where to obtain the values for your infrastructure settings, such as hostname and subnet.
+
+![](./static/shell-script-provisioner-00.png)
+
+At runtime, Harness queries your provisioner using your script and stores the returned JSON collection on the Harness Delegate as a file. Harness then uses the JSON key values to define the infrastructure for your deployment environment as it creates that environment in your target platform.
+
+Here is a high-level summary of the setup steps involved:
+
+1. 
**Delegate and Cloud Provider** - Install a Harness Delegate where it can connect to your infrastructure provisioner and query it for the JSON infrastructure information. Add a Harness Cloud Provider that connects to the platform where the infrastructure will be deployed.
+2. **Application and Service** - Create a Harness Application to manage your deployment. Add a Service to your Application. The type of Service you select determines how you map JSON keys in the Shell Script Provisioner **Service Mappings**. For example, an ECS Service will require different mapping settings than a Kubernetes Service.
+3. **JSON and Script Prep** - Prepare the JSON file to be retrieved by Harness. Prepare the shell script to pull the JSON to Harness.
+4. **Shell Script Provisioner** - Add a Shell Script provisioner to your Application.
+	1. Add the shell script to the Shell Script provisioner to query your provisioner and retrieve the JSON infrastructure information.
+	2. Add Service Mappings. The mapping method depends on the Service and Deployment Type you select.
+5. **Environment** - Add an Environment to your Application that uses the Shell Script Provisioner in its Infrastructure Definition.
+6. **Workflow** - Add a Workflow to your Application that applies the Shell Script Provisioner.
+
+### Delegate and Cloud Provider Setup
+
+To use the Harness Shell Script Provisioner, you must perform certain Delegate and Cloud Provider setup steps.
+
+#### Delegate Requirements
+
+To deploy using a Shell Script Provisioner, ensure the following Delegate configuration is set up:
+
+* Install a Harness Delegate on a host that can connect to the provisioner your shell script will query. Once you have installed your Delegate, open a terminal on its host and run your shell script to ensure that it will execute at runtime.
+* Ensure the same Delegate, or another Delegate, can connect to your target deployment environment. 
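The Delegate requirements above boil down to a quick pre-flight check you can run on the Delegate host. The following is a minimal sketch, not part of Harness itself; the tool names are placeholders for whatever CLI your own provisioner script calls:

```shell
# Pre-flight check to run on the Delegate host before trusting the
# provisioner script at runtime. Verifies the CLI a script depends on
# is installed; swap in the tools your own script uses.
preflight() {
  local cmd="$1"
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "OK: $cmd found at $(command -v "$cmd")"
  else
    # The provisioner script would fail at runtime without this tool.
    echo "MISSING: install $cmd on this Delegate host" >&2
    return 1
  fi
}

preflight sh              # always present on a POSIX host
preflight aws || true     # e.g. the AWS CLI used in the example scripts
```

Running the check once by hand is cheaper than discovering a missing CLI during a deployment.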
+ +For information on setting up the Harness Delegate, see [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation). + +#### Cloud Provider Requirements + +Harness Cloud Providers are used in the Harness Environment, in **Infrastructure Definition**. You will select the **Cloud Provider** to use when deploying your Infrastructure Definition. + +To ensure that your Cloud Provider and Infrastructure Provisioner are in sync, this topic will show you how to do the following: + +1. Set up a Cloud Provider (AWS, Physical Data Center, etc) for your connection to your deployment environment. For more information, see [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers). +2. Later, when you set up your Environment Infrastructure Definition, select the same Cloud Provider. + +#### Delegate and Cloud Provider Setup + +The simplest method to ensure that your Delegate and Cloud Provider support your Infrastructure Provisioner is to install a Delegate in your deployment environment, verify that its host can connect to the provisioner you plan to query, and then use the same Delegate for the Cloud Provider authentication credentials. This method uses Delegate Selectors or the Delegate name. For more information, see [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation). + +To set up your Delegate and Cloud Provider, do the following: + +1. Install the Delegate. + 1. In Harness, click **Setup**, and then click **Harness Delegates**. + 2. Click **Download Delegate** and select the Delegate type.![](./static/shell-script-provisioner-01.png) + 3. There are different installation steps depending on which Delegate type you select. For details on setting up each type, see [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation). + 4. 
Once the Delegate is installed, open a terminal on its host and test the shell script you plan to use to pull the provisioner JSON collection. For example, the following script obtains the JSON for AWS EC2 instances:
+	```
+	apt-get -y install awscli
+	aws configure set aws_access_key_id $access_key
+	aws configure set aws_secret_access_key $secret_key
+	aws configure set region us-east-1
+	aws ec2 describe-instances --filters Name=tag:Name,Values=harness-provisioner
+	```
+	5. Verify that the script returns the JSON collection. If it does, then the Delegate will be successful when executing the script at runtime. If the script fails, troubleshoot the network connection between the Delegate host and the provisioner host or service.
+
+2. Add the Cloud Provider.
+	1. In Harness, click **Setup**, and then click **Cloud Providers**.
+	2. Click **Add Cloud Provider**. The **Cloud Provider** dialog appears.
+	3. Select the Cloud Provider type you want to use.
+	4. In **Display Name**, enter the name to identify the Cloud Provider when you select it in your Harness Environment later.
+	For a Physical Data Center Cloud Provider, no credentials are required here. Instead, you add an SSH secret in Harness Secrets Management, and select that later in your Harness Environment in **Connection Attributes**. For more information, see [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management).
+	5. Click **SUBMIT** when you are done.
+
+##### SSH Connection Credentials
+
+When you set up a Physical Data Center Cloud Provider in Harness, you do not enter SSH credentials. Instead, you add SSH credentials in Harness Secrets Management. For example, here is an SSH Configuration from Secrets Management.
+
+![](./static/shell-script-provisioner-02.png)
+
+For steps on adding SSH credentials, see [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management). 
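When verifying the script on the Delegate host, it can also help to confirm that the collection it returns is well-formed JSON and contains the keys you plan to map later. This is an illustrative sketch: `output.json` stands in for whatever your script wrote, and `PublicDnsName` is just the key used in this topic's examples; Python 3 is assumed to be on the host:

```shell
# Stand-in for the file your provisioner script actually wrote.
cat > output.json <<'EOF'
{
  "Instances": [
    {
      "Hostname": "ip-192-0-2-0",
      "PublicDnsName": "ec2-192-0-2-0.us-west-2.compute.amazonaws.com"
    }
  ]
}
EOF

# Fail fast if the collection is not valid JSON, the host array is
# empty, or a key you plan to map to Hostname is absent.
python3 - output.json <<'PY'
import json, sys

with open(sys.argv[1]) as f:
    data = json.load(f)

instances = data.get("Instances", [])
assert instances, "no host objects returned"
for host in instances:
    assert "PublicDnsName" in host, "missing key to map to Hostname"
print(f"{len(instances)} host object(s) ready for mapping")
PY
```

Catching a malformed collection here is much easier than debugging a failed Service Mapping at deployment time.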
+ +### Application and Service Setup + +Any Harness Application and Service setup can be used with a Harness Infrastructure Provisioner. If you do not already have an Application and Service set up, use the following articles: + +* [Application Checklist](../../applications/application-configuration.md) +* [Services](../../setup-services/service-configuration.md) + +### Shell Script Provisioner Setup + +This section will walk you through a detailed setup of a Shell Script Provisioner for a deployment to a Physical Data Center, and provide examples of the other supported platforms. + +For all of the supported platforms, setting up the Shell Script Infrastructure Provisioner involves the following steps: + +1. Add your shell script to pull the JSON collection from your provisioner. +2. Map the relevant JSON keys from the JSON to your Harness fields. + +To set up a Shell Script Infrastructure Provisioner, do the following: + +1. In your Harness Application, click **Infrastructure Provisioners**. +2. Click **Add Infrastructure Provisioner**, and then click **Shell Script**. + + ![](./static/shell-script-provisioner-03.png) + + In this dialog, you will enter the shell script to pull the JSON collection from your provisioner. + +3. In **Name**, enter a name for the Shell Script Provisioner, such as **Example Shell Script Provisioner**. You will use this name later when you select this Shell Script Provisioner in your Harness Environment and Workflow. +4. Click **NEXT**. The **Script** section appears. + + ![](./static/shell-script-provisioner-04.png) + +5. In **Script**, enter the shell script to pull the JSON collection from your provisioner. This shell script will be executed at runtime by the Harness Delegate on its host. This should be a shell script you have run on the Delegate host to ensure that the host can connect to your provisioner. 
+
+Let's look at an example script:
+
+
+```
+apt-get -y install awscli
+aws configure set aws_access_key_id $access_key
+aws configure set aws_secret_access_key $secret_key
+aws configure set region us-west-1
+aws ec2 describe-instances --instance-ids i-0beacf0f260edd19f > "$PROVISIONER_OUTPUT_PATH"
+```
+The script should return a JSON array containing the host information Harness needs to provision, such as:
+
+
+```
+{
+  "Instances": [
+    {
+      ...
+      "Status": "online",
+      "InstanceId": "4d6d1710-ded9-42a1-b08e-b043ad7af1e2",
+      "SshKeyName": "US-West-2",
+      "InfrastructureClass": "ec2",
+      "RootDeviceVolumeId": "vol-d08ec6c1",
+      "InstanceType": "t1.micro",
+      "CreatedAt": "2015-02-24T20:52:49+00:00",
+      "AmiId": "ami-35501205",
+      "PublicDnsName": "ec2-192-0-2-0.us-west-2.compute.amazonaws.com",
+      "Hostname": "ip-192-0-2-0",
+      "Ec2InstanceId": "i-5cd23551",
+      "SubnetId": "subnet-b8de0ddd",
+      "SecurityGroupIds": [
+        "sg-c4d3f0a1"
+      ...
+    },
+  ]
+}
+```
+The environment variable `"$PROVISIONER_OUTPUT_PATH"` is initialized by Harness and stores the JSON collection returned by your script. You are simply writing a file to `"$PROVISIONER_OUTPUT_PATH"`.
+
+Put quotes around `$PROVISIONER_OUTPUT_PATH` as a best practice. The quotes are only required if the value of the variable will have spaces in it, but they cause no problem in any case. The above example uses AWS, but it is included here simply to demonstrate a script that obtains a JSON collection.
+
+There are two access key variables in the script example, `$access_key` and `$secret_key`. You can set these variables here and, when this Infrastructure Provisioner is added to a Workflow, a user will select the Harness Encrypted Text secrets to use for each variable. We'll do this next.
+
+1. Once you have entered your script, click **NEXT**. The **Variables** section appears.
+
+	![](./static/shell-script-provisioner-05.png)
+
+2. 
Click in the **Name** column, and enter the key name without the `$`, such as `access_key`.
+3. Click in the **Type** column, and choose **Encrypted Text**.
+4. Repeat the steps for the other variable, `secret_key`. When you are done, the **Variables** section will look something like this:
+
+![](./static/shell-script-provisioner-06.png)
+
+When you select the Provisioner in a Harness Workflow, you will be prompted to provide the values for the variables. You can select secrets from Harness Secrets Management. See [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management). Selecting the Provisioner in a Harness Workflow is covered later in this topic. As an alternative, you can reference secrets directly in your script using the Harness variable, `${secrets.getValue("")}`. For example:
+
+
+```
+apt-get -y install awscli
+aws configure set aws_access_key_id ${secrets.getValue("access_key")}
+aws configure set aws_secret_access_key ${secrets.getValue("secret_key")}
+aws configure set region us-west-1
+aws ec2 describe-instances --instance-ids i-0beacf0f260edd19f > "$PROVISIONER_OUTPUT_PATH"
+```
+Ensure that the Usage Scope for any Harness Secret you use is set to the Application using the Infrastructure Provisioner. For more information, see [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management).
+
+When you have entered your variables in the Shell Script Provisioner **Variables** section, click **NEXT**, and then click **SUBMIT**. The Shell Script Provisioner appears.
+
+![](./static/shell-script-provisioner-07.png)
+
+### Environment Setup and Infrastructure Definitions
+
+With Infrastructure Definitions, you map Shell Script script outputs as part of the Infrastructure Definition in a Harness Environment. 
+
+![](./static/shell-script-provisioner-08.png)
+
+In this section we will show you how to set up an Infrastructure Definition. For Service mappings in an Infrastructure Definition, you map the keys from the JSON collection your provisioner script returns to the platform-specific keys needed by Harness for deployment.
+
+For example, to deploy to a Physical Data Center, Harness requires that you provide a key from the JSON collection and map it to a Harness **Hostname** field. You can add any additional mappings that will help your deployment.
+
+The following Service mapping maps a `PublicDnsName` key to the **Hostname** field in Harness, and a `SubnetId` key from the JSON to a **SubnetId** field in Harness.
+
+![](./static/shell-script-provisioner-09.png)
+
+The type of Service mapping required depends on the Deployment Type and Cloud Provider Type you use in the Infrastructure Definition. A Physical Data Center requires different mappings than an AWS Cloud Provider.
+
+In this section, we provide examples of mappings for different Deployment Types and Cloud Providers.
+
+##### Physical Data Center
+
+This section describes how to configure a Service Mapping that uses a **Physical Data Center** Cloud Provider.
+
+The following information is required for the Service Mapping:
+
+* **Hostname** - Harness requires the JSON key that indicates the hostname value.
+
+To set up a Service Mapping for a Physical Data Center Cloud Provider, do the following:
+
+1. In your Harness Application, in your Environment, click **Add Infrastructure Definition**. The **Infrastructure Definition** dialog appears.
+
+	![](./static/shell-script-provisioner-10.png)
+
+2. In **Name**, enter the name of the Infrastructure Definition. This is the name you will select when you create a Workflow or Workflow Phase.
+3. 
In **Cloud Provider Type**, select **Physical Data Center**. +4. In **Deployment Type**, select **Secure Shell (SSH)**. +5. Select **Map Dynamically Provisioned Infrastructure** to use the Shell Script Infrastructure Provisioner you created. +6. In **Provisioner**, select the Shell Script Infrastructure Provisioner you created. +7. In **Cloud Provider**, select the Cloud Provider you set up to connect Harness to your Physical Data Center. +8. In **Host Connection Attributes**, select the SSH credentials you set up in [SSH Connection Credentials](shell-script-provisioner.md#ssh-connection-credentials). +9. In **Host Object Array Path**, enter the JSON path to the JSON array object for the host. + + For example, the following JSON object contains an Instances array with two items (the JSON is abbreviated): + + + ``` + { + "Instances": [ + { + "StackId": "71c7ca72-55ae-4b6a-8ee1-a8dcded3fa0f", + ... + "InfrastructureClass": "ec2", + "RootDeviceVolumeId": "vol-d08ec6c1", + "SubnetId": "subnet-b8de0ddd", + "InstanceType": "t1.micro", + "CreatedAt": "2015-02-24T20:52:49+00:00", + "AmiId": "ami-35501205", + "Hostname": "ip-192-0-2-0", + "Ec2InstanceId": "i-5cd23551", + "PublicDns": "ec2-192-0-2-0.us-west-2.compute.amazonaws.com", + "SecurityGroupIds": [ + "sg-c4d3f0a1" + ], + ... + }, + { + "StackId": "71c7ca72-55ae-4b6a-8ee1-a8dcded3fa0f", + ... + "InfrastructureClass": "ec2", + "RootDeviceVolumeId": "vol-e09dd5f1", + "SubnetId": "subnet-b8de0ddd", + "InstanceProfileArn": "arn:aws:iam::123456789102:instance-profile/aws-opsworks-ec2-role", + "InstanceType": "c3.large", + "CreatedAt": "2015-02-24T21:29:33+00:00", + "AmiId": "ami-9fc29baf", + "SshHostDsaKeyFingerprint": "fc:87:95:c3:f5:e1:3b:9f:d2:06:6e:62:9a:35:27:e8", + "Ec2InstanceId": "i-8d2dca80", + "PublicDns": "ec2-192-0-2-1.us-west-2.compute.amazonaws.com", + "SecurityGroupIds": [ + "sg-b022add5", + "sg-b122add4" + ], + ... 
+      }
+    ]
+  }
+  ```
+  We want to point to the array of host objects in the JSON file, and so we use `Instances`.
+
+  To ensure that you are referring to the correct item in your array, test your **Host Object Array Path** using your JSON collection and an online validator such as [JSON Editor Online](https://jsoneditoronline.org/). In **Host Object Array Path**, the path will look like this:
+
+  ![](./static/shell-script-provisioner-11.png)
+
+  Now that you have provided a path to the host object, you can map its JSON keys in **Host Attributes**. For Physical Data Center, only the **Hostname** field is mandatory.
+
+  ![](./static/shell-script-provisioner-12.png)
+
+10. In the row for **Hostname**, click **Enter JSON Path**, and enter the name of the key in the JSON array that lists the hostname you want to use. For example, you could use the key name **PublicDnsName** from the earlier example:
+
+	```
+	{
+	  "Instances": [
+	    {
+	      "StackId": "71c7ca72-55ae-4b6a-8ee1-a8dcded3fa0f",
+	      ...
+	      "SubnetId": "subnet-b8de0ddd",
+	      "InstanceType": "t1.micro",
+	      "CreatedAt": "2015-02-24T20:52:49+00:00",
+	      "AmiId": "ami-35501205",
+	      "Hostname": "ip-192-0-2-0",
+	      "Ec2InstanceId": "i-5cd23551",
+	      "PublicDnsName": "ec2-192-0-2-0.us-west-2.compute.amazonaws.com",
+	      "SecurityGroupIds": [
+	        "sg-c4d3f0a1"
+	      ],
+	      ...
+	    },
+	```
+
+11. Map any other key names you want to use when creating the host(s) in the infrastructure. The following image shows how you can map multiple keys to **Host Attributes**.
+
+	![](./static/shell-script-provisioner-13.png)
+
+	You can reference any mapped Field Name after the **Select Nodes** step in your Workflow using the expression `${host.properties.}`, such as `${host.properties.SubnetId}`. For example, you could add a Shell Script step to a Workflow that outputs the values for all the mapped Fields.
+
+12. Click **NEXT**, and then click **SUBMIT**. 
The Infrastructure Definition and its Service mapping are listed:
+
+	![](./static/shell-script-provisioner-14.png)
+
+Now that the Infrastructure Provisioner and an Infrastructure Definition with a Service mapping are created, you can use them in the Environment and Workflow of your Harness Application.
+
+##### AWS ECS
+
+You can set up mappings with other Harness Cloud Providers, such as AWS ECS, just as you did with the Physical Data Center. In every case, you simply need to provide the path to the JSON key you want to map to the required Harness fields.
+
+Here is an example of an AWS ECS EC2 mapping where each field contains a JSON path to a specific key.
+
+![](./static/shell-script-provisioner-15.png)
+
+##### Kubernetes on Google Cloud
+
+Mapping to Kubernetes on Google Cloud (GKE) simply requires the Kubernetes cluster name and namespace.
+
+![](./static/shell-script-provisioner-16.png)
+
+### Workflow Setup
+
+The Shell Script Provisioner is supported in Canary and Basic Deployment type Workflows. For AMI/ASG and ECS deployments, it is also supported in Blue/Green Deployment type Workflows. Once your Shell Script Infrastructure Provisioner has been added to an Environment in your Harness Application, it can be used in Workflows.
+
+For Canary Deployments, you add the Shell Script Infrastructure Provisioner as a pre-deployment step in the Workflow.
+
+In this section we will look at how to use a Shell Script Infrastructure Provisioner in a Canary Workflow.
+
+#### Canary Workflow
+
+Using the Shell Script Infrastructure Provisioner in a Canary Workflow involves adding it as a pre-deployment step before the phases of the Workflow, or within each phase.
+
+In this section, we'll create a Canary Workflow and add the Shell Script Infrastructure Provisioner as a pre-deployment step before the first phase of the Workflow.
+
+To use the Shell Script Infrastructure Provisioner in a Canary Workflow, do the following:
+
+1. 
In the Harness Application containing your Shell Script Infrastructure Provisioner, click **Workflows**.
+2. Click **Add Workflow**.
+3. In the **Workflow** dialog, add a name, and then, in **Workflow Type**, select **Canary Deployment**. The dialog fields change for a Canary deployment.
+4. In **Environment**, select the Environment where you used your Shell Script Infrastructure Provisioner to dynamically provision the Infrastructure Definition. When you are done, the dialog will look something like this:
+
+	![](./static/shell-script-provisioner-17.png)
+
+5. Click **SUBMIT**. The Workflow appears. At the top of the Workflow steps is the **Pre-deployment Steps** section, where we will add the Shell Script Provisioner.![](./static/shell-script-provisioner-18.png)
+6. In **Pre-deployment Steps**, click **Add Step**. The **Add Command** dialog appears. In the dialog, under **Provisioners**, the Shell Script Provisioner is listed.
+
+	![](./static/shell-script-provisioner-19.png)
+
+7. Click **Shell Script Provision**. The **Shell Script Provision** settings appear.
+
+	![](./static/shell-script-provisioner-20.png)
+
+8. In **Provisioner**, select the Shell Script Provisioner you created.
+9. In **Timeout**, enter how long you want Harness to attempt to use the provisioner before failing the deployment. If the Delegate cannot reach the provisioner at all, or if the script does not work, the step fails immediately.
+10. In **Delegate Selectors**, enter the Selectors of the Delegate(s) you want to execute this step. See [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors).
+11. Click **Output in the Context**. The **Output in the Context** settings appear.
+
+	![](./static/shell-script-provisioner-21.png)
+
+	The **Output in the Context** settings let you take the shell script output from your Shell Script Provisioner and assign it to a variable. 
Next, you can scope the variable to Pipeline, Workflow, or Phase.
+12. To use the **Output in the Context** settings, in **Variable Name**, enter a name such as **demo**, and in **Scope**, select **Workflow**, and then click **NEXT**.
+Now you can display the output of the shell script within its scope by using the format `${context.var_name}`, such as `${context.demo}`.
+
+For example, here is the setup for the **Output in the Context** settings, the use of the variable `${context.demo}` in a Shell Script step elsewhere in the Workflow, and the output in the deployed Shell Script step.![](./static/shell-script-provisioner-22.png)
+
+
+13. Click **NEXT**. The **Variables** section appears. If you used variables in your Shell Script Provisioner, the variables are listed in the **Variables** section. You must provide values for the variables.
+
+For example, the following image shows a Shell Script Provisioner with two variables, `access_key` and `secret_key`, on the right, and their corresponding settings in the **Variables** section of the Canary Workflow step on the left:![](./static/shell-script-provisioner-23.png)
+
+
+14. For each variable, click in the **Value** column and add or select a value. If the variable is just text, enter a value. If the variable is encrypted text, the available values in the dropdown are taken from the Encrypted Text entries in Harness Secrets Management. For more information, see [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management). When you are finished, the Variables section will look something like this:![](./static/shell-script-provisioner-24.png)
+
+You can use Workflow variables in the **Value** settings. See [Set Workflow Variables](../../workflows/add-workflow-variables-new-template.md).
+
+15. Click **NEXT**, and then click **SUBMIT**. 
The Shell Script Provisioner step is added to the Workflow.![](./static/shell-script-provisioner-25.png) + +The Shell Script Provisioner is now added as a step in the Workflow. + +For each Phase in a Canary Workflow, you specify a Service and Infrastructure Definition for the Phase execution. You can specify the same Infrastructure Definition that uses your Shell Script Provisioner. + +For example, in the Workflow, in **Deployment Phases**, click **Add Phase**. The **Workflow Phase** dialog appears. + +![](./static/shell-script-provisioner-26.png) + +In **Service**, select the Service to be deployed in this Phase. The Service must be the same Service type that is used in the Infrastructure Definition that uses your Shell Script Provisioner. + +In **Infrastructure Definition**, select the Infrastructure Definition that uses your Shell Script Provisioner. + +![](./static/shell-script-provisioner-27.png) + +Click **SUBMIT**. The Phase is created using the Infrastructure Definition that uses your Shell Script Provisioner. + +Add any other Canary Workflow Phases you require, and then Deploy your Workflow. The Workflow will use the Shell Script Provisioner to create the Service Mappings it requires and create the infrastructure for your deployment. + +### Deployment Example + +The following Canary Workflow deployment uses the Shell Script Provisioner as part of its deployment. + +![](./static/shell-script-provisioner-28.png) + +Let's look at each stage of the deployment. + +In the **Pre-Deployment** phase, you can see the **Shell Script Provision** step using the Shell Script Provisioner script to obtain the JSON array. + +![](./static/shell-script-provisioner-29.png) + +In this example, the **Shell Script Provision** step setting, **Output in the Context**, was used to put the Shell Script Provisioner script JSON output into a variable, and echo that variable in a **Shell Script** step in the Workflow. 
If you click the **Shell Script** step in the Workflow, the JSON obtained by the Shell Script Provisioner script is displayed:
+
+![](./static/shell-script-provisioner-30.png)
+
+Next, in **Phase 1** of the Canary Workflow, we can see the result of the **Service Mapping** from our Shell Script Provisioner (or [Infrastructure Definition](../../environments/environment-configuration.md#add-an-infrastructure-definition)) in the **Select Nodes** step. The following images show how the JSON key `PublicDnsName` was mapped to the Harness field `Hostname`, which is then used to select the node for deployment.
+
+Here is an example using Service Mapping in the Infrastructure Provisioner:
+
+![](./static/shell-script-provisioner-31.png)
+
+Here is an example using the [Infrastructure Definition](../../environments/environment-configuration.md#add-an-infrastructure-definition) Service mapping:
+
+![](./static/shell-script-provisioner-32.png)
+
+Lastly, in the **Install** step of the Workflow, you can see that the same hostname identifies the target host where the artifact was deployed successfully.
+
+![](./static/shell-script-provisioner-33.png)
+
+Now you have seen an example of how the Shell Script Provisioner was used to provision the deployment environment and target host using a simple JSON array. 
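The whole flow can be dry-run on a workstation with a stub provisioner. Everything in this sketch is illustrative: the file name, the stub JSON, and the `PublicDnsName` key simply mirror the examples in this topic, standing in for the real provisioner query and for what Harness does with the Host Object Array Path `Instances` and the Hostname mapping:

```shell
# Harness initializes PROVISIONER_OUTPUT_PATH at runtime; here we
# point it at a local file for the dry run.
export PROVISIONER_OUTPUT_PATH="./provisioner-output.json"

# Stub for the real provisioner query (e.g. aws ec2 describe-instances).
cat > "$PROVISIONER_OUTPUT_PATH" <<'EOF'
{
  "Instances": [
    { "Hostname": "ip-192-0-2-0", "PublicDnsName": "ec2-192-0-2-0.us-west-2.compute.amazonaws.com" }
  ]
}
EOF

# Resolve the target host the way the Select Nodes step would, using
# the PublicDnsName key mapped to the Harness Hostname field.
python3 - "$PROVISIONER_OUTPUT_PATH" <<'PY'
import json, sys

data = json.load(open(sys.argv[1]))
for host in data["Instances"]:
    print("Target host:", host["PublicDnsName"])
PY
```

Walking through this locally makes it clear which part of the work is your script (producing the JSON) and which part is Harness (reading the mapped keys out of it).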
+ +### Next Steps + +* [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables) +* [Continuous Verification](https://docs.harness.io/article/myw4h9u05l-verification-providers-list) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-00.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-00.png new file mode 100644 index 00000000000..6249c0fb0e0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-00.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-01.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-01.png new file mode 100644 index 00000000000..a1b1ea3f81e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-01.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-02.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-02.png new file mode 100644 index 00000000000..7389dfef30f Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-02.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-03.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-03.png new file mode 100644 index 00000000000..b12dcb593c3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-03.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-04.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-04.png new file mode 100644 index 00000000000..ee32f6dea94 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-04.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-05.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-05.png new file mode 100644 index 00000000000..9637b1053a7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-05.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-06.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-06.png new file mode 100644 index 00000000000..5a2169ba896 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-06.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-07.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-07.png new file mode 100644 index 00000000000..91eaa77511e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-07.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-08.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-08.png new file mode 100644 index 00000000000..b175567a854 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-08.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-09.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-09.png new file mode 100644 index 00000000000..568bbe540ae Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-09.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-10.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-10.png new file mode 100644 index 00000000000..1847b072d97 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-10.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-11.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-11.png new file mode 100644 index 00000000000..839d73a8206 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-11.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-12.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-12.png new file mode 100644 index 00000000000..323f37c0236 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-12.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-13.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-13.png new file mode 100644 index 00000000000..568bbe540ae Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-13.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-14.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-14.png new file mode 100644 index 00000000000..6dadfe64a6b Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-14.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-15.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-15.png new file mode 100644 index 00000000000..c8d9a56f35e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-15.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-16.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-16.png new file mode 100644 index 00000000000..b6f172f1ffc Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-16.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-17.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-17.png new file mode 100644 index 00000000000..028c973917b Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-17.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-18.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-18.png new file mode 100644 index 00000000000..20d6508fd59 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-18.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-19.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-19.png new file mode 100644 index 00000000000..b6104e4e833 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-19.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-20.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-20.png new file mode 100644 index 00000000000..4c788ce8050 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-20.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-21.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-21.png new file mode 100644 index 00000000000..8f06e003bb5 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-21.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-22.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-22.png new file mode 100644 index 00000000000..4a924e01c8f Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-22.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-23.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-23.png new file mode 100644 index 00000000000..e451004a4de Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-23.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-24.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-24.png new file mode 100644 index 00000000000..c69ebee36e3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-24.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-25.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-25.png new file mode 100644 index 00000000000..eded21b4071 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-25.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-26.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-26.png new file mode 100644 index 00000000000..e36bcba218e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-26.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-27.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-27.png new file mode 100644 index 00000000000..6f666799b55 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-27.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-28.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-28.png new file mode 100644 index 00000000000..0c54eb0e174 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-28.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-29.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-29.png new file mode 100644 index 00000000000..d9f7e3c37d1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-29.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-30.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-30.png new file mode 100644 index 00000000000..20ee6a54c89 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-30.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-31.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-31.png new file mode 100644 index 00000000000..83a50fab308 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-31.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-32.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-32.png new file mode 100644 index 00000000000..d9c402ab84a Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-32.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-33.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-33.png new file mode 100644 index 00000000000..eca3c376122 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/ssh-provisioner-category/static/shell-script-provisioner-33.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/add-an-infra-provisioner-10.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/add-an-infra-provisioner-10.png new file mode 100644 index 00000000000..3c9f0d975d1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/add-an-infra-provisioner-10.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-00.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-00.png new file mode 100644 index 00000000000..7f6171f20df Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-00.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-01.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-01.png new file mode 
100644 index 00000000000..2dffb13a7e8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-01.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-02.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-02.png new file mode 100644 index 00000000000..8072aa943f8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-02.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-03.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-03.png new file mode 100644 index 00000000000..2dffb13a7e8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-03.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-04.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-04.png new file mode 100644 index 00000000000..b16195de2d5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-04.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-05.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-05.png new file mode 100644 index 00000000000..de8c38b128d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-05.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-06.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-06.png new file mode 100644 index 00000000000..4dfdfb1d21c Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-06.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-07.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-07.png new file mode 100644 index 00000000000..6c239e8af53 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-07.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-08.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-08.png new file mode 100644 index 00000000000..2b0c7953edd Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-08.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-09.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-09.png new file mode 100644 index 00000000000..b27468ee60f Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/infrastructure-provisioner/static/provision-infrastructure-without-deploying-to-it-09.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/_category_.json b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/_category_.json new file mode 100644 index 00000000000..cbc727b25b2 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "Add Pipelines", + "position": 50, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "Add Pipelines" + }, + "customProps": { + "helpdocs_category_id": "aa3bkrzgqi" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/pipeline-configuration.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/pipeline-configuration.md new file mode 100644 index 00000000000..5c2c5b5bc19 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/pipeline-configuration.md @@ -0,0 +1,170 @@ +--- +title: Create a Pipeline +description: Describes how to create a Pipeline. +sidebar_position: 10 +helpdocs_topic_id: zc1u96u6uj +helpdocs_category_id: aa3bkrzgqi +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Pipelines define your release process using multiple Workflows and Approvals in sequential and/or parallel stages. Pipelines can involve multiple Services, Environments, and Workflows. 
A Pipeline can be triggered either manually or using [Triggers](../triggers/add-a-trigger-2.md). + +### Before You Begin + +* [Create an Application](../applications/application-configuration.md) +* [Add a Service](../setup-services/service-configuration.md) +* [Add an Environment](../environments/environment-configuration.md) +* [Add a Workflow](../workflows/workflow-configuration.md) + +### Visual Summary + + + + + +### Step 1: Add a Pipeline + +To add a Pipeline, perform the following steps: + +1. Click **Setup** and then select the Application that you want to deploy. +2. Click **Pipelines** and then click **Add Pipeline**. The **Add Pipeline** settings appear. + + ![](./static/pipeline-configuration-00.png) + + For the **Rollback on failure or approval rejection** option, see [Rollback on failure or approval rejection](#rollback_on_failure_or_approval_rejection) below. + +3. Enter a **Name** for your Pipeline. This name is used for selecting the Pipeline on the Deployments page. +4. Enter **Description** for your Pipeline and click **Submit**. The following settings appear.![](./static/pipeline-configuration-01.png) +5. Select the **Execution Step** to execute this stage when the Pipeline runs. + + 1. In **Step Name**, enter the name for this stage. This name acts as a subheading to the stage name. You can also select the **Auto Generate Name** checkbox to generate the name automatically. + + 2. In **Execute Workflow**, select the Workflow to execute in this Pipeline. The Workflows in your Application are listed. + + 3. Select **Execute in Parallel with Previous Step** checkbox to execute steps in parallel. + + 4. Select **Do not skip**, **Skip always**, or **Skip based on assertion expression** for setting the skip option. For more information, see [Skip Execution](skip-conditions.md#skip-execution). + + 5. Enter Assertion Expression. It enables you to specify an expression that determines whether the stage should execute. 
For more information, see [Assertion Expression](skip-conditions.md#skip-based-on-assertion-expression). + + 6. For **Runtime Input Settings**, see [Option: Runtime Input Settings](#option_runtime_input_settings). + +6. Select **Approval Step** to require approval before the next stage executes. You can use Harness UI, Jira, Custom Shell Script, or ServiceNow Approval mechanisms. For more information on Approvals, see [Add Approvals](https://docs.harness.io/category/add-approvals). +7. Click **Submit**. The Pipeline Stage and its steps are added to the Pipeline.![](./static/pipeline-configuration-02.png) +8. You can add multiple Stages, and insert new Stages between the existing Stages. Pipeline Stages can be Workflows or Approvals. To add another stage to the Pipeline, in **Pipeline Stages**, click **+** and then follow the same [Steps](https://docs.harness.io/article/8j8yd7hky4-create-a-pipeline#step_1_add_a_pipeline). + +#### Rollback on failure or approval rejection + +Currently, this feature is behind the feature flag `SPG_PIPELINE_ROLLBACK`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. When you create a new Pipeline, you have the option of enabling the **Rollback on failure or approval rejection** setting. + +![](./static/pipeline-configuration-03.png) + +When **Rollback on failure or approval rejection** is enabled, if any stage in the Pipeline fails to execute, all previous stages are also rolled back. + +For example, in the following Pipeline, there are two stages, **deploy** and an **Approval** stage: + +![](./static/pipeline-configuration-04.png) + +During execution, the Approval stage was rejected, so the previous stage, **deploy**, was rolled back by stage 3, **Rollback-deploy**. + +![](./static/pipeline-configuration-05.png) + +If the Workflows in previous stages have Rollback steps, those steps will be run. 
+ +### Step 2: Deploy a Pipeline + +Once you have set up a Pipeline, you can deploy it, running all the stages within the Pipeline. + +1. In your Application, select the Pipeline that you want to deploy. The **Pipeline Overview** page appears. +2. Click **Deploy** to run the Pipeline. The **Start New Deployment** dialog appears. + + When you set up the Pipeline stage(s), you picked the Workflow(s) the Pipeline will execute. The Workflow you selected is linked to a Service, Environment, and Infrastructure Definition. + + **Start New Deployment** is configured with the settings linked to the Workflows included in the Pipeline. + + If you have [Workflow Variables](../workflows/add-workflow-variables-new-template.md) or [templatized settings](templatize-pipelines.md), you are prompted to provide values for them. + + ![](./static/pipeline-configuration-06.png) + + You can enter a value for the variable or use a Harness expression. See [What is a Harness Variable Expression?](https://docs.harness.io/article/9dvxcegm90-variables). + +3. Click **Submit**. The Pipeline is deployed and the deployment details are displayed in the **Deployments** page. You can click each phase in the process to see logs and other details of the process. + + ![](./static/pipeline-configuration-07.png) + +### Option: Runtime Input Settings + +Sometimes, the inputs and settings for all of the Workflows in a Pipeline are not known before you deploy. Some inputs and settings can depend on the execution of the previous stages in the Pipeline. + +For example, you might have an Approval step as part of the Workflow or Pipeline. Once the approval is received, you want to resume the next stage of the Pipeline execution by providing new inputs. + +To do this, when you add an Execution Step stage to your Pipeline, use **Runtime Input Settings**. + +1. Select **Runtime Input** for each Workflow variable that you want to make a runtime input. +2. 
For each variable value, enter a variable name or use a Harness expression. See [What is a Harness Variable Expression?](https://docs.harness.io/article/9dvxcegm90-variables). When the Pipeline is deployed, you can add runtime values. +3. In **Timeout**, enter how long Harness should wait for you to submit values before applying the selection in **Action after Timeout**. +4. In **Action after Timeout**, select what Harness should do if the timeout occurs: + 1. **End Execution:** This option will fail the stage of the deployment and initiate rollback. + 2. **Continue with Default Values:** This option will use the values you entered in the Workflow Variable **Default Value** setting. If there are no default values, you must provide values. +5. In **User Groups**, select the [Harness User Group(s)](https://docs.harness.io/article/ven0bvulsj-users-and-permissions) to notify of the timeout. + +During Pipeline execution, you are prompted for any Workflow variables you selected as a **Runtime Input**. + +![](./static/pipeline-configuration-08.png) + +You can enter a value for the variable or use a Harness expression. + +You will be prompted for each Workflow in the Pipeline where you have used **Runtime Inputs**. + +Later, if you choose to rerun a Pipeline, the Pipeline will run using the runtime inputs you provided the last time it ran. + +### Step 3: Abort or Rollback a Running Deployment + +If you deploy a Pipeline and choose the **Abort** option during the running deployment, the **Rollback Steps** for the Workflow(s) in the Pipeline are not executed. Abort stops the deployment execution without rollback or cleanup. To execute the **Rollback Steps**, click the **Rollback** button. 
+ +| | | +| --- | --- | +| **Abort Button** | **Rollback Button** | +| ![](../workflows/static/_abort-button-left.png) | ![](../workflows/static/_rollback-button-right.png) | + + +### Incomplete Pipelines + +If a Workflow in a Pipeline is incomplete (missing information in a Workflow step), then the Pipeline Stage for that Workflow will indicate that the Workflow is incomplete: + +![](./static/pipeline-configuration-09.png) + +Open the incomplete Workflow, complete it, and then return to the Pipeline. + +#### Workflow Variables and Incomplete Pipelines + +Another example of an incomplete Pipeline is a Workflow stage with Workflow variables that have not been given default values. + +Also, if you add a new Workflow variable to a Workflow that is already in a Pipeline, that Pipeline is marked incomplete. You must open the Workflow in the Pipeline, enter a value for the new Workflow variable, and then submit the stage. + +If you try to deploy a Pipeline with an incomplete Workflow, Harness will prevent you from deploying it. + +![](./static/pipeline-configuration-10.png) + +Simply fix the Workflow and then deploy the Pipeline. + +### Review: RBAC for Pipelines + +Pipelines follow standard Harness RBAC as described in [Managing Users and Groups (RBAC)](https://docs.harness.io/article/ven0bvulsj-users-and-permissions). + +Your ability to read, create, delete, and update Pipelines depends on the **Pipeline** Application Permissions of your User Group. + +![](./static/pipeline-configuration-11.png) + +Your ability to deploy Pipelines also depends on your **Deployments** Application Permissions. 
+ +![](./static/pipeline-configuration-12.png) + +### Next Steps + +* [Create Pipeline Templates](templatize-pipelines.md) +* [Resume Pipeline Deployments](https://docs.harness.io/article/4dvyslwbun-resume-a-pipeline-deployment) +* [Pipeline Governance](https://docs.harness.io/article/zhqccv0pff-pipeline-governance) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/skip-conditions.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/skip-conditions.md new file mode 100644 index 00000000000..352cdee73c1 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/skip-conditions.md @@ -0,0 +1,151 @@ +--- +title: Pipeline Skip Conditions +description: Skip Pipeline stages based on conditions and variable expressions. +sidebar_position: 30 +helpdocs_topic_id: 6kefu7s7ne +helpdocs_category_id: aa3bkrzgqi +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You might want to skip the execution of specific Pipeline Stages based on different conditions, such as a script output, a specific Git branch, or a specific release. + +You can use skip conditions to control how Pipeline stages execute. For example, you could evaluate a branch name to determine which Stage to run. If the branch name is not **master**, you could skip a Workflow that deploys from the master branch. + + +You can also apply skip conditions to steps in a Workflow. See [Skip Workflow Steps](../workflows/skip-workflow-steps.md). + +### Before You Begin + +* [Pipelines](pipeline-configuration.md) +* [Approvals](../approvals/approvals.md) + +### Limitations + +* Harness does not support the Ternary conditional `?:` operator from [JEXL](http://commons.apache.org/proper/commons-jexl/reference/syntax.html#Operators). +* You can use the following variable expressions in **Assertion Expression**: + + Approval input variables from earlier Approval Stages. This is described below. 
+ + Published output variables from Workflow [Shell Script](../workflows/capture-shell-script-step-output.md) and [HTTP](../workflows/using-the-http-command.md) steps that are scoped to **Pipeline**. + + [Built-in Harness Application and Pipeline variable expressions](https://docs.harness.io/article/9dvxcegm90-variables). These are available because they are readable before deploying the Pipeline. +* In **Assertion Expression**, you can only use variable expressions that Harness can resolve before Workflow deployment. Pipeline Skip Conditions are evaluated before the Workflow is deployed. +For example, artifact information is only readable when you select, or Harness pulls, the artifact at Workflow deployment runtime. Therefore, artifact expressions should not be used in the **Assertion Expression**. +* The **Assertion Expression** setting allows you to test using multiple operators, not just equality. For example, `${release.test} > 1` would also be a valid **Assertion Expression** entry. + +### Review: Skip Conditions + +Skip conditions enable you to disable the execution of individual Pipeline Stages (such as Workflows). + +You can set a Stage to be skipped or conditionally execute Stages based on expressions evaluated when the Pipeline runs. You can use them in scenarios like these: + +* Set a Stage to never execute. This is useful when you want to add new Stages to a Pipeline’s structure, but prevent those Stages from executing while they are still under development. +* Set a Stage to execute conditionally, based on the value assigned to a variable in an earlier stage of the same Pipeline. + +Use the **Option to skip execution** setting to select the condition under which this Pipeline Stage should execute: + +![](./static/skip-conditions-26.png) + +The options are: + +* **Do not skip** — This is the default behavior. Harness will not override execution. The Stage will simply execute. +* **Skip always** — The stage will never execute. 
With this option, you can maintain a Stage within a Pipeline, but disable it temporarily or indefinitely. +* **Skip based on assertion expression** — This option enables you to specify an expression that determines whether the Stage should execute. + +The **Do not skip** and **Skip always** options are easy to understand. The rest of this topic will focus on the **Skip based on assertion expression** option. + +#### Skip Always and Workflow Variables + +Normally, when you execute a Pipeline containing a Workflow that uses [Workflow variables](../workflows/add-workflow-variables-new-template.md), such as a [templated Workflow](../workflows/templatize-a-workflow-new-template.md), the Workflow variables are mandatory: values for the variables must be provided in order to deploy the Pipeline. + +If you select the **Skip always** option in **Option to skip execution** for a Workflow in a Pipeline, then Workflow variables in the skipped Workflow are no longer mandatory. The Pipeline may be deployed without providing variables. + +### Skip Based on Assertion Expression + +Use the **Skip based on assertion expression** option to conditionally skip a Pipeline stage based on an expression. + +Harness supports the [Java Expression Language (JEXL)](https://commons.apache.org/proper/commons-jexl/reference/syntax.html). You can use JEXL operators and regex in your expressions. + +To enable this option, you can define a new variable in an Approval stage, or use the variables listed in [Limitations](#limitations) above. + +Next, in any subsequent stage, you evaluate the expression to determine whether to execute the stage. + +Remember: when an assertion evaluates to **false**, the Stage is not skipped. 
When it evaluates to **true**, it is skipped. + +For example, let's look at a Pipeline with a variable defined in Stage 1 and then evaluated in Stage 3: + +![](./static/skip-conditions-27.png) + +In the following image, you can see the variable defined in Stage 1, and then in Stage 3 you can see an expression using the variable: + +![](./static/skip-conditions-28.png) + +Let's walk through the steps of setting up this variable and using it in **Skip based on assertion expression**. + +#### Step 1: Create Input Variable + +In an [Approval](../approvals/approvals.md) stage within a Pipeline, you define an input variable as follows: + +1. Click **Advanced Settings**. + + ![](./static/skip-conditions-29.png) + +2. In **Additional Input Variables**, click **Add**. +3. Give your new variable a **Name** and **Default Value**. You will use the name to reference the variable in one or more subsequent Execution or Approval Stages in the Pipeline. You will use the value in an expression to determine whether the Stage should execute. + + ![](./static/skip-conditions-30.png) + +4. In **Publish Variable Name**, enter a parent name for the **Additional Input Variables**. The parent name helps to avoid conflicts when referencing variables of the same name within the same Pipeline. + + In this example, the parent name is set to `releaseInfo`, and the input variable's name is `releaseTarget`. So, in subsequent Pipeline Stages, you can reference this variable as `releaseInfo.releaseTarget`. Its default value is set as `PROD`. + + ![](./static/skip-conditions-31.png) + + :::note + If you use multiple Approval Steps, ensure that the names entered in **Publish Variable Name** are unique in each Approval Step. If you use the same name in **Publish Variable Name** in multiple Approval Steps, an error will occur and the step will fail. + ::: + +5. To add more input variables, click **Add Variable**. 
All variables defined in this Stage share the same parent name, prepended from the **Publish Variable Name** field. +6. Click **Submit** to save this Stage, along with its variables. Now you can use these variables in assertion expressions within Execution and Approval stages, or modify the variables, as described below. + +#### Step 2: Modify Value During Deployment + +When you deploy this Pipeline, the Approval Stage lets authorized approvers modify the variable value. Authorized approvers are members of User Groups selected in the Approval Stage's **User Group(s)** setting. + +In the example below, the `releaseTarget` variable appears with its **Default Value** of `PROD`. Approvers can change the value here before clicking **Approve**. + +![](./static/skip-conditions-32.png) + +#### Step 3: Evaluate Assertion Expression + +To use the variable value to determine whether a Stage executes, in a subsequent Stage in the same Pipeline, in **Option to skip execution**, select **Skip based on assertion expression**. + +In **Assertion Expression**, reference the `releaseInfo.releaseTarget` variable that we defined earlier in the Approval Stage using the expression `${releaseInfo.releaseTarget}`. + +In this example, the value `QA` is used in the assertion: + +![](./static/skip-conditions-33.png) + +When we first created this input variable, we set its value to `PROD`. + +During deployment, if we assume that this value has not been changed in any intermediate Stage—remember that users change a variable's value in an Approval Stage—the assertion expression will be evaluated as *false*. + +In this case, this Pipeline stage will *not* be skipped, because the skip condition (the assertion expression) has not been met. Therefore, Harness will attempt to execute the stage. + +Remember: when an assertion evaluates to **false**, the Stage is not skipped. When it evaluates to **true**, it is skipped. 
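
To make these semantics concrete, here is a small illustrative sketch. It uses Python as a stand-in for JEXL (which Harness evaluates server-side), with the hypothetical `releaseInfo.releaseTarget` values from this example:

```python
def stage_is_skipped(assertion_is_true: bool) -> bool:
    # Harness skip-condition semantics: a TRUE assertion means the
    # Stage is skipped; a FALSE assertion means the Stage executes.
    return assertion_is_true

# Default published value from the Approval Stage in this example:
release_target = "PROD"

# The assertion ${releaseInfo.releaseTarget} == "QA", in Python terms:
print(stage_is_skipped(release_target == "QA"))  # False: the Stage runs

# If an approver changes the value to QA during deployment:
release_target = "QA"
print(stage_is_skipped(release_target == "QA"))  # True: the Stage is skipped
```

The key point is the inverted intuition: the assertion expresses the *skip* condition, not the *run* condition.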
+ +### Review: Skip Condition Info In Deployments Page + +On a Harness deployment's page, when you hover over a Pipeline stage that uses a skip condition, the skip condition's expressions are displayed: + +![](./static/skip-conditions-34.png) + +### Notes + +If a skip condition uses the templated variable of a User Group or New Relic setting, the condition should use the ID of the User Group or New Relic setting, not the name. + +For example, in the following [templated New Relic Workflow step](../../continuous-verification/new-relic-verification/4-verify-deployments-with-new-relic.md#review-templatize-new-relic-verification), the **New Relic Server** (Harness Verification Provider) and **Application Name** settings are templated: + +![](./static/skip-conditions-35.png) + +If you reference those settings in a Pipeline skip condition, you must use the ID of the server (Harness Verification Provider) or Application in your condition expression. Do not use the Harness Verification Provider or Application name. 
+ +### See Also + +* [Using Variables in Workflow Approvals](../approvals/use-variables-for-workflow-approval.md) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-00.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-00.png new file mode 100644 index 00000000000..545de48dbc2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-00.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-01.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-01.png new file mode 100644 index 00000000000..6be0b9cbead Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-01.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-02.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-02.png new file mode 100644 index 00000000000..66683298a13 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-02.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-03.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-03.png new file mode 100644 index 00000000000..8b6cd325825 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-03.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-04.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-04.png new file mode 100644 index 00000000000..57f53728f7c 
Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-04.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-05.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-05.png new file mode 100644 index 00000000000..6641c6d8e41 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-05.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-06.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-06.png new file mode 100644 index 00000000000..561487940e3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-06.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-07.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-07.png new file mode 100644 index 00000000000..e800db63cb5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-07.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-08.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-08.png new file mode 100644 index 00000000000..6effd42d7b5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-08.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-09.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-09.png new file mode 100644 index 
00000000000..4356bb4ee52 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-09.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-10.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-10.png new file mode 100644 index 00000000000..c355a4124b0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-10.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-11.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-11.png new file mode 100644 index 00000000000..b9812b7738c Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-11.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-12.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-12.png new file mode 100644 index 00000000000..691426fed2d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/pipeline-configuration-12.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-26.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-26.png new file mode 100644 index 00000000000..65866130384 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-26.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-27.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-27.png new file mode 100644 index 
00000000000..35aa89a859d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-27.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-28.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-28.png new file mode 100644 index 00000000000..b5780e6c4e4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-28.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-29.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-29.png new file mode 100644 index 00000000000..7b6512a4cac Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-29.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-30.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-30.png new file mode 100644 index 00000000000..0b1b5eed331 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-30.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-31.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-31.png new file mode 100644 index 00000000000..1b9ae66d23d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-31.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-32.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-32.png new file mode 100644 index 00000000000..ab005a0c723 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-32.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-33.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-33.png new file mode 100644 index 00000000000..a0acfb93c53 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-33.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-34.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-34.png new file mode 100644 index 00000000000..1b151f844c8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-34.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-35.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-35.png new file mode 100644 index 00000000000..880482ab916 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/skip-conditions-35.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-13.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-13.png new file mode 100644 index 00000000000..59b6c1672da Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-13.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-14.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-14.png new file mode 100644 index 00000000000..202a6839649 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-14.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-15.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-15.png new file mode 100644 index 00000000000..044112abff5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-15.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-16.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-16.png new file mode 100644 index 00000000000..bc472bef654 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-16.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-17.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-17.png new file mode 100644 index 00000000000..df8177d75ee Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-17.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-18.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-18.png new file mode 100644 index 00000000000..feb00cc1679 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-18.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-19.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-19.png new file mode 100644 index 00000000000..5375851ec6b Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-19.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-20.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-20.png new file mode 100644 index 00000000000..e7d285b0de7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-20.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-21.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-21.png new file mode 100644 index 00000000000..fc82de28965 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-21.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-22.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-22.png new file mode 100644 index 00000000000..6ad5c7e9db9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-22.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-23.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-23.png new file mode 100644 index 00000000000..c266c050bcd Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-23.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-24.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-24.png new file mode 100644 index 00000000000..f45b102c10b Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-24.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-25.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-25.png new file mode 100644 index 00000000000..4f96f5371d7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/static/templatize-pipelines-25.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/templatize-pipelines.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/templatize-pipelines.md new file mode 100644 index 00000000000..75a126d7fa9 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/pipelines/templatize-pipelines.md @@ -0,0 +1,167 @@ +--- +title: Create Pipeline Templates +description: Create Pipeline templates for multiple Services, Environments, and Infrastructure Definitions. +sidebar_position: 20 +helpdocs_topic_id: 60j7391eyy +helpdocs_category_id: aa3bkrzgqi +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Pipeline templates allow you to use one Pipeline with multiple Services and Infrastructure Definitions and a single Environment. + +You template a Pipeline by replacing these settings with variable expressions. Each time you run the Pipeline, you provide values for these expressions. + +### Before You Begin + +* [Pipelines](pipeline-configuration.md) +* [Workflows](../workflows/workflow-configuration.md) +* [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables) + +### Visual Summary + +Here's an example of two Pipeline Stages that have their Services, Environments, and Infrastructure Definitions settings replaced by variable expressions. + +At the bottom, the **Start New Deployment** settings for the Pipeline show how the expressions are replaced with values. 
+ +![](./static/templatize-pipelines-13.png) + +During deployment, these values will be applied to all of the Workflows in the Pipeline. + +### Limitations + +You cannot template multi-value (drop-down) **Allowed Values** Workflow variables. + +### Step 1: Review Workflow Variables in Deployment + +To template a Pipeline, you have to template one or more Workflows in the Pipeline. Let's review this Workflow template requirement. + +To template a Workflow, you replace its Service, Environment, and Infrastructure Definition settings with variable expressions. Here's an example: + +![](./static/templatize-pipelines-14.png) + +When you deploy a Workflow template, you are prompted to provide values for the variables. This is true whether you deploy a Workflow alone, as part of a Pipeline, or by a Trigger. + +Here is an example of a Pipeline Stage that deploys the templated Workflow above: + +![](./static/templatize-pipelines-15.png) + +As you can see, values are provided for each Workflow individually, even in a Pipeline where all the Workflows use the same Service, Environment, and Infrastructure Definition. + +A better solution is to make a template out of the Pipeline. Then you can template the variables for **all the Workflows** in the Pipeline. + +With Pipeline templates, you can use one variable for each type of Workflow setting across the Pipeline. For example, you can use a single variable expression for all Services deployed in the Pipeline. When you run the Pipeline, you only need to supply one Service value. + +### Step 2: Template the Workflows + +To template a Pipeline, first you need to template settings in the Workflows the Pipeline will execute. + +You can template all of the settings for the Workflows in your Pipeline. Alternatively, you can template only some of the Workflow settings. + +When deciding what settings to template, consider how the Pipeline template will be used. 
If it will always deploy to the same Environment and Infrastructure Definition for every Workflow, then perhaps the **Service** setting is the only setting you need to template. + +To template a Workflow, do the following: + +1. Once you create the Workflow, open the Workflow settings. +2. Click the **[T]** button next to the **Service**, **Environment**, and **Infrastructure Definition** settings. + + ![](./static/templatize-pipelines-16.png) + + If you are running a Canary Workflow, you can template the **Environment** setting in the Workflow settings and the **Service** and **Infrastructure Definition** settings in the Phases of the Workflow. + + ![](./static/templatize-pipelines-17.png) + +If your Workflow contains Continuous Verification steps, you can also template several of their settings. + +For example, here is an AppDynamics step in the Verify section of a Workflow: + +![](./static/templatize-pipelines-18.png) + +You will only add and configure Continuous Verification steps after your Workflow has been deployed once. This allows the step to compare deployments and flag anomalies. + +Once the settings are templated, their related variables are listed in **Workflow Variables**. Here is an example where all Rolling Workflow and AppDynamics step settings are templated, and there is also a Workflow variable named **wfvar**: + +![](./static/templatize-pipelines-19.png) + +### Step 3: Template the Pipeline + +To template the Pipeline, as you add your Workflows in Pipeline Stages, you will enter variable expressions as values for one or more of the templated Workflow settings. + +For example, here is a templated Workflow added to a Pipeline Stage with each of its templated settings replaced by a Pipeline variable expression, such as `${env}`. + +![](./static/templatize-pipelines-20.png) + +The variable expression name `${env}` is arbitrary. You can use any name. + +Variable names may only contain `a-z, A-Z, 0-9, _`. 
They cannot contain hyphens or dots (`.`). The following keywords are reserved, and cannot be used as a variable name or property: `or and eq ne lt gt le ge div mod not null true false new var return`. + +Again, not all of the Workflow variables in each Stage need to be replaced by a Pipeline variable expression. For any Workflow variables you want to make static, simply provide a value. + +To template the entire Pipeline, use variable expressions for all Workflow variables in all Pipeline Stages. Use the same expression names for the same settings. + +Here is an example of a Pipeline where both the Dev and Prod Stages use the same expressions for the same settings: + +![](./static/templatize-pipelines-21.png) + +When this Pipeline is deployed, only three values need to be provided for both Stages of the Pipeline. This can be done with Pipelines of any size. + +Alternatively, you can have one or more Stages in the Pipeline use different variable expressions for the same settings. + +For example, below are two Pipeline Stages. The **Dev Rolling Deployment** Stage uses the expression `${infraDev}` for its Infrastructure Definition setting. The **Prod Rolling Deployment** Stage uses `${infraProd}` for its Infrastructure Definition setting. + +![](./static/templatize-pipelines-22.png) + +When the Pipeline is deployed, both Infrastructure Definitions must be given a value. + +![](./static/templatize-pipelines-23.png) + +The same is true if you execute this Pipeline using a Harness Trigger. + +### Step 4: Deploy the Pipeline + +When you deploy a templated Pipeline, you supply values for any expressions. + +Below are the **Dev** and **Prod** Stages in a Pipeline along with two **Start New Deployment** settings. The **Start New Deployment** settings show different deployments of the same Pipeline template using different values. + +One **Start New Deployment** replaces the expressions with values for the **Development** infrastructure. 
Another **Start New Deployment** replaces the same expressions for the **Production** infrastructure. + +![](./static/templatize-pipelines-24.png) + +Every deployment of the same Pipeline template can deploy to a different infrastructure by simply replacing the expressions. + +Now more Services, Environments, and Infrastructure Definitions can be added to the Application and used in the same Pipeline template. + +### Limitations + +Pipeline templates have the following limitations. + +#### Workflow Settings + +You can only template the following Workflow settings in a Pipeline Stage: + +* Service +* Environment +* Infrastructure Definition +* Some Verification step settings — For example, here are several AppDynamics settings templated: + + ![](./static/templatize-pipelines-25.png) + +* Custom Workflow variables — If your Workflows contain other Workflow variables, you can use expressions for those variables when you add the Workflow to a Stage in the Pipeline. When you deploy, you must provide values. + +#### Variable Name Restrictions + +Variable names may only contain `a-z, A-Z, 0-9, _`. They cannot contain hyphens or dots (`.`) between names. The following keywords are reserved, and cannot be used as a variable name or property: `or and eq ne lt gt le ge div mod not null true false new var return`. + +#### Single Environment Across Pipeline + +A Pipeline template may only have one Environment expression across all Stages. It must be the same expression. + +If you attempt to use an expression in a Stage's **Environment** setting that is different from the other Stage(s) **Environment** setting, an error will occur: + +`Error: Invalid argument(s): A Pipeline may only have one Environment expression across all Workflows.` + +Simply change the Environment expression to match the other Environment expressions. 
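
As a quick sanity check, the variable-name restrictions above can be expressed in a few lines. This is only an illustration of the documented rules (the function name is hypothetical; Harness performs its own validation when you save the variable):

```python
import re

# Reserved JEXL keywords listed above; none may be used as a variable name.
RESERVED = {"or", "and", "eq", "ne", "lt", "gt", "le", "ge", "div", "mod",
            "not", "null", "true", "false", "new", "var", "return"}

def is_valid_variable_name(name: str) -> bool:
    """Return True if name uses only a-z, A-Z, 0-9, _ and is not reserved."""
    return bool(re.fullmatch(r"[A-Za-z0-9_]+", name)) and name not in RESERVED

print(is_valid_variable_name("infraDev"))   # True
print(is_valid_variable_name("infra-dev"))  # False: hyphens are not allowed
print(is_valid_variable_name("return"))     # False: reserved keyword
```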
+ +### Next Steps + +* [Passing Variables into Workflows and Pipelines from Triggers](../expressions/passing-variable-into-workflows.md) +* [Deploy a Workflow to Multiple Infrastructures Simultaneously](https://docs.harness.io/article/bc65k2imoi-deploy-to-multiple-infrastructures) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/_category_.json b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/_category_.json new file mode 100644 index 00000000000..b1c5b23930d --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "Add Services", + "position": 20, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "Add Services" + }, + "customProps": { + "helpdocs_category_id": "u4eimxamd3" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/add-a-docker-image-service.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/add-a-docker-image-service.md new file mode 100644 index 00000000000..c027fa2229c --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/add-a-docker-image-service.md @@ -0,0 +1,136 @@ +--- +title: Add a Docker Image +description: Outlines how to add a Docker Artifact Source to a Harness Service. +sidebar_position: 80 +helpdocs_topic_id: gxv9gj6khz +helpdocs_category_id: u4eimxamd3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes how to add a Docker Artifact Source in a Harness Service. + +Do not use Docker Registry to connect to a Docker resource on Artifactory. 
See [Artifactory](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server#artifactory). For details on using Docker with Kubernetes, see [Add Container Images for Kubernetes Deployments](https://docs.harness.io/article/6ib8n1n1k6-add-container-images-for-kubernetes-deployments). For details on using Docker with Helm, see [Helm Deployments Overview](https://docs.harness.io/article/583ojfgg49-helm-deployments-overview). + +### Before You Begin + +* Read the [Create an Application](../applications/application-configuration.md) topic to get an overview of how Harness organizes Services. +* Read the [Add a Service](service-configuration.md) topic to understand the process to add a Service to an Application. +* Read [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code) to see how you can quickly configure your Harness Service using your existing YAML in Git. +* [Add Artifact Servers](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server) + +### Limitations + +For pulling Docker images from Docker repos, Harness has the following limits: + +* 10000 for private Docker repos +* 250 for public (no username or password required) Docker repos + +### Review: Metadata Used for Docker Image + +If the Artifact Source is **Docker**, at runtime the Harness Delegate will use the metadata to initiate a pull command from the deployment target host(s), and pull the artifact from the registry (Docker Registry, ECR, etc.) to the target host. + +To ensure that this artifact pull succeeds, the target host(s) **must** have network connectivity to the registry. + +See [Service Types and Artifact Sources](service-types-and-artifact-sources.md).
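Because the pull is initiated from the target host, it is worth probing connectivity from that host before a deployment depends on it. A minimal sketch (assumption: `registry.example.com` is a placeholder for your registry host; `/v2/` is the standard Docker Registry API base path, and a 200 or 401 response both prove reachability):

```shell
# Probe the registry's Docker Registry API endpoint from the target host.
REGISTRY="registry.example.com"   # placeholder -- use your registry host

if curl -sS --max-time 10 -o /dev/null "https://${REGISTRY}/v2/" 2>/dev/null; then
  echo "registry reachable"
else
  echo "registry NOT reachable from this host"
fi
```
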
+ +### Review: Sorting of Artifacts + +You add a Docker image to Harness by connecting to your repo using a [Harness Artifact Server](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server), and then by entering its name in a Harness Service's Artifact Source **Docker Image Name** setting. + +Once you enter the name and click **Submit**, Harness checks to make sure the artifact exists. To verify the name, Harness collects all of the artifacts from the repo you entered. + +Many of the APIs do not support sorted results, and so the artifacts are returned in a random order. Consequently, Harness has to sort the artifacts after it collects them. + +Harness sorts the artifacts alphanumerically and then displays them in the Harness Manager with the most recent artifacts listed first. + +If you push a new artifact to your repo, or rename an artifact, and then refresh the list using the **Artifact History** feature in the Harness Service, the new artifact will be listed first. + +![](./static/add-a-docker-image-service-05.png) + +### Step: Add a Docker Artifact Source + +A Docker Image artifact can be used in a number of different Harness Service types (Kubernetes, Helm, etc.). You can specify container commands for the artifact, enter configuration variables and files, and use YAML for specific Service types. + +To specify a Docker Image Artifact Source for your Harness Service, do the following: + +1. Ensure you have set up an Artifact Server. See [Add Artifact Servers](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server). +2. In the **Service Overview**, click **Add Artifact Source** and select the type of artifact source for your service. +The **Artifact Source** dialog appears with settings specific to the artifact source type you selected. Follow the instructions in the option below that matches your artifact source. + +### Option 1: Docker Registry Artifact Source + +Do not use Docker Registry to connect to a Docker resource on Artifactory.
Use [Artifactory](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server#artifactory). The Docker Registry Artifact Source has the following fields. + +| Field | Description | +| --- | --- | +| **Name** | You can enter a name or have Harness generate one for you automatically. | +| **Source Server** | In **Source Server**, select the name of the artifact source server you added in [Add an Artifact Server](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server). | +| **Docker Image Name** | Click in **Docker Image Name** and enter the name of the artifact you want to deploy, such as **library/tomcat**. Wildcards are not supported. | + +If you click **Artifact History,** you will see the build history that Harness pulled from the source server. + +### Option 2: ECR Artifact Source + +The ECR Artifact Source has the following fields. + +| Field | Description | +| --- | --- | +| **Name** | You can enter a name or have Harness generate one for you automatically. | +| **Cloud Provider** | Select the name of the artifact source server you added in [Cloud Providers](https://docs.harness.io/article/whwnovprrb-infrastructure-providers). | +| **Region** | Select the region where the artifact source is located. | +| **Docker Image Name** | Click in **Docker Image Name** and select or enter the name of the artifact you want to deploy. By default, Harness automatically populates the field with the artifacts available from the ECR source server. Often, images in repos need to reference a path, for example: **app/myImage**. | + +If you click **Artifact History,** you will see the build history that Harness pulled from the source server. + +### Option 3: Azure Container Registry Artifact Source + +The Azure Container Registry Artifact Source has the following fields. + +| Field | Description | +| --- | --- | +| **Name** | You can enter a name or have Harness generate one for you automatically.
| **Cloud Provider** | Select the name of the artifact source server you added in [Cloud Providers](https://docs.harness.io/article/whwnovprrb-infrastructure-providers). | +| **Subscription** | Harness will automatically pull the available GUIDs. Select an Azure Subscription GUID. If you don't see it, the API might have timed out. Enter the GUID and Harness will query for it. | +| **Azure Registry Name** | Harness will automatically pull the available names. Select a name. If you don't see it, the API might have timed out. Enter its name and Harness will query for it. | +| **Repository Name** | Harness will automatically pull the available repository names. Select a name. If you don't see it, the API might have timed out. Enter its name and Harness will query for it. | + +If you click **Artifact History,** you will see the build history that Harness pulled from the source server. + +### Option 4: GCR Artifact Source + +The Google Cloud Container Registry (GCR) Artifact Source has the following fields. + +| Field | Description | +| --- | --- | +| **Display Name** | You can enter a name or have Harness generate one for you automatically. | +| **Cloud Provider** | Select the name of the artifact source server you added in [Cloud Providers](https://docs.harness.io/article/whwnovprrb-infrastructure-providers). | +| **Registry Host Name** | Once you select a Cloud Provider, the list of registries is populated automatically. Select the registry where the artifact source is located. | +| **Docker Image Name** | Enter the name of the artifact you want to deploy. Images in repos need to reference a path starting with the project ID that the artifact is in, for example: **myproject-id/image-name**. | + +Here is an example: + +![](./static/add-a-docker-image-service-06.png) + +If you click **Artifact History,** you will see the build history that Harness pulled from the source server.
+ +### Option 5: Artifactory Artifact Source + +See [Artifactory Artifact Sources](https://docs.harness.io/article/63gnfa6i8z-artifactory-artifact-sources). + +### Option 6: Nexus Artifact Source + +See [Nexus Artifact Sources](https://docs.harness.io/article/rdhndux2ab-nexus-artifact-sources). + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/add-an-azure-dev-ops-artifact-source.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/add-an-azure-dev-ops-artifact-source.md new file mode 100644 index 00000000000..8c93f958f20 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/add-an-azure-dev-ops-artifact-source.md @@ -0,0 +1,61 @@ +--- +title: Add an Azure DevOps Artifact Source +description: Add an Azure DevOps Artifact Source to a Harness Service. +sidebar_position: 90 +helpdocs_topic_id: rbfjmko1og +helpdocs_category_id: u4eimxamd3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +To add an Azure DevOps Artifact source to a Harness Service, you add an Azure DevOps Artifact Server as a Harness Connector, and then use that Connector in your Service to add the Azure DevOps organization, project, feed, and package name. + + +### Before You Begin + +* [Azure DevOps Artifacts](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server#azure_dev_ops_artifacts) + +### Step 1: Ensure an Azure Artifacts Connector is Set Up + +Before you can add an Azure DevOps artifact feed to your Harness Service, you need to add a Harness Connector for your Azure DevOps organization. + +Use the information in [Azure DevOps Artifacts](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server#azure_dev_ops_artifacts) to set up the Connector in Harness.
+ +### Step 2: Add Azure Artifacts Package Feed to the Harness Service + +Azure Artifacts are available for Secure Shell (SSH), AWS CodeDeploy, WinRM, and Pivotal Cloud Foundry (PCF) Harness Service deployment types. + +The package types supported currently are NuGet and Maven. If you choose the Maven package type, you can also use ZIP or WAR. If you use ZIP or WAR, then select ZIP or WAR as the type in your Harness Service Artifact Type. To use a Docker image on Azure, you can use the Azure Container Registry Artifact Source. See [Add a Docker Artifact Source](add-a-docker-image-service.md). + +In your Harness Service, do the following: + +1. In **Service Overview**, click **Add Artifact Source**, and then click **Azure Artifacts**. + + ![](./static/add-an-azure-dev-ops-artifact-source-65.png) + + **Azure Artifacts** appears. + + ![](./static/add-an-azure-dev-ops-artifact-source-66.png) + +4. In **Name**, enter a name that identifies the artifact feed you are adding. +5. In **Source Server**, select the Azure DevOps Artifact Server you added to connect Harness to your Azure DevOps Artifacts. For more information, see [Azure DevOps Artifacts](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server#azure_dev_ops_artifacts). +6. In **Package Type**, select the package type. Only supported types are listed. +7. In **Scope**, select **Project** or **Organization**. If you selected Organization, Harness uses the organization specified in the **Azure DevOps URL** setting in the **Source Server** you selected. + + a. If you selected **Project**, in **Project**, select the name of the Azure Artifact project containing the feed you want to add. + +8. In **Feed Name**, select the name of the feed for your artifact. +9. In **Package Name**, select the name of the package for your artifact. + + Here is an example of a completed Azure Artifacts setup. + + ![](./static/add-an-azure-dev-ops-artifact-source-67.png) + +10. Click **Submit**.
The Artifact Source is added to the Service. + +You can use Artifact History to manually pull a list of builds and versions. + +### Next Steps + +* [Service Types and Artifact Sources](service-types-and-artifact-sources.md) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/add-service-level-config-variables.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/add-service-level-config-variables.md new file mode 100644 index 00000000000..4ecdaa12bb8 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/add-service-level-config-variables.md @@ -0,0 +1,95 @@ +--- +title: Add Service Config Variables +description: Add Service Config variables to use throughout your Service configuration settings. +sidebar_position: 40 +helpdocs_topic_id: q78p7rpx9u +helpdocs_category_id: u4eimxamd3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can add Service Config variables to use throughout your Service configuration settings, and reference them anywhere the Service is used in a Workflow. + +During deployment, the Service Config variables are created as environment variables wherever the commands are executed (on the target hosts or Delegate hosts). + +Only Service Config Variables are added as environment variables and can be output with `env`. Workflow and other variables are not added as environment variables. Service variables can be overridden at the Environment level. See [Override a Service Configuration](../environments/environment-configuration.md#override-a-service-configuration). + +For information about how configuration variables and files are used in a Kubernetes deployment, see [Kubernetes Deployments Overview](https://docs.harness.io/article/pc6qglyp5h-kubernetes-deployments-overview). For information on using Harness variables and expressions, see [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables).
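To make the environment-variable behavior above concrete, here is a simulation of what happens at execution time (a sketch; the `jarName` variable and its value are hypothetical, and the `export` stands in for the injection Harness performs before running your command):

```shell
# Harness exports each Service Config variable into the environment of the
# host where the command runs (target host or Delegate host). Simulated:
export jarName="todolist-1.0.jar"   # hypothetical variable and value

# Inside the command, the variable behaves like any environment variable:
echo "Deploying ${jarName}"
env | grep '^jarName='
```
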
+ +### Before You Begin + +* Read the [Create an Application](../applications/application-configuration.md) topic to get an overview of how Harness organizes Services. +* Read the [Add a Service](service-configuration.md) topic to understand the process to add a Service to an Application. +* Read [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code) to see how you can quickly configure your Harness Service using your existing YAML in Git. + +### Limitations + +* You cannot use an empty value for a Service Config Variable. + +### Step 1: Use Config Variables + +To use Service-level variables, do the following: + +1. In **Configuration**, in **Config Variables**, click **Add Variable**. The **Config Variable** dialog appears. +2. In **Name**, enter a name for your variable. This is the name you will use to reference your variable anywhere in your service. + + :::note + Config variable names may only contain `a-z, A-Z, 0-9, _`. They cannot contain hyphens. The following keywords are reserved, and cannot be used as a variable name or property when using the dot operator: `or and eq ne lt gt le ge div mod not null true false new var return`. + ::: + +3. In **Type**, select **Text** or **Encrypted Text**. + + If you selected **Text**, enter a string in **Value**. When you reference this variable in the Service configuration, you use `${serviceVariable.var_name}`. + + If you select **Encrypted Text**, the Value field becomes a drop-down and you can select any of the Encrypted Text you have configured in Secrets Management. For more information, see [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management). + You can also click **Add New Encrypted Text** to create a new secret. This brings up the option of selecting the Secret Manager to use when storing the new encrypted variable. 
+ + ![](./static/add-service-level-config-variables-28.png) + + When you reference this encrypted variable in the Service configuration, you use `${serviceVariable.var_name}`. + + You can also use Harness variables in the **Value** field. To use a variable, enter **$** and see the available variables. + + ![](./static/add-service-level-config-variables-29.png) + + For example, to add Workflow variables, enter `${workflow` and the available Workflow variables are displayed. For more information about variables, see [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables). + +4. Click **SUBMIT**. The variable is added to **Config Variables**. + +5. To reference the variable in any text field in your Service configuration, type **$** and Harness will provide a drop-down of all variables. Begin typing the name of your variable to find it and select it. + +### Step 2: Reference Service Variables and Secrets + +Use a Service Variable either as text or as a secret (using one of the secrets in [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management)) in other Harness Application components such as Workflows by entering `${serviceVariable.` and selecting the variables. + +![](./static/add-service-level-config-variables-30.png) + +:::note +* Service variables cannot be referenced in the **Pre-deployment Steps** of a Workflow. Services are not used in pre-deployment and, consequently, their variables are not available. Service variables can be used in **Prepare Infra** steps in a Basic deployment Workflow. + +* Build Workflows do not use Harness Services. Consequently, Service variables and Service variable overrides cannot be used in a Build Workflow. +::: + +### Referencing Service Config Variables as Environment Variables + +The Service Config Variables are added as environment variables wherever the commands using them are executed.
+ +Whether the command is executed on the Delegate host or on the target host, the Service Config Variables are added as environment variables on that host. + +:::note +Only Service Config Variables are added as environment variables and can be output with `env`. Workflow and other variables are not added as environment variables. +::: + +For example, if you have a Service Config Variable named **jarName**, Harness creates an environment variable named **jarName** that can be referenced in two ways: + +* As a Service variable: `${serviceVariable.jarName}`. There is no escaping done by Harness for Service variables. +* As an environment variable: `${jarName}`. Escaping is done by Harness automatically. + +### Override Values YAML + +For information on Helm Values YAML, see [Override Harness Kubernetes Service Settings](https://docs.harness.io/article/ycacqs7tlx-override-harness-kubernetes-service-settings) and [Helm Deployments Overview](https://docs.harness.io/article/ii558ppikj-helm-deployments-overview). + +### Notes + +If you create a new secret from within Service Config Variables, Harness automatically sets the scope of the secret to your Harness Application. + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/add-service-level-configuration-files.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/add-service-level-configuration-files.md new file mode 100644 index 00000000000..f3751072e95 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/add-service-level-configuration-files.md @@ -0,0 +1,103 @@ +--- +title: Add Service Config Files +description: As a part of managing the Services you created, add Service Config Files to use throughout your configuration settings.
+sidebar_position: 50 +helpdocs_topic_id: iwtoq9lrky +helpdocs_category_id: u4eimxamd3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +On the Services page, as a part of managing the Services you created, you can add Service Config Files that can be used throughout your configuration settings. + +For information about how configuration files are used in a Kubernetes deployment, see [Kubernetes Deployments Overview](https://docs.harness.io/article/pc6qglyp5h-kubernetes-deployments-overview). + +### Before You Begin + +* Read the [Create an Application](../applications/application-configuration.md) topic to get an overview of how Harness organizes Services. +* Read the [Add a Service](service-configuration.md) topic to understand the process to add a Service to an Application. +* Read [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code) to see how you can quickly configure your Harness Service using your existing YAML in Git. + +### Limitations + +Files must be 1MB or less. + +All file types are supported. + +### Review: Required Permissions + +Make sure you have the **update** permission on the Service before you try to add the Service Config File. See [Managing Users and Groups (RBAC)](https://docs.harness.io/article/ven0bvulsj-users-and-permissions) for more information about assigning permissions. + +### Review: String and Base64 Options + +Harness can encode your file as a string or Base64. + +When selecting an option, consider the content of the file and which encoding method will work best. + +Base64 encoding schemes are commonly used when there is a need to encode binary data (credentials are a common example), but they are also preferable when the file content contains special characters which cannot be parsed as a string. + +If you try string encoding and receive an error like the following, try Base64: + + +``` +Invalid values file. Error tokenizing YAML. 
Line 6, column 15: Found a mapping value where it is not allowed +``` +### Review: Use Base64 to Avoid New Lines + +If you are going to use a Config File in a spec or manifest, be aware that `${configFile.getAsString()}` can cause problems by adding new lines to your spec or manifest (unless you have formatted the file very carefully). + +Instead, use `${configFile.getAsBase64()}`. This will ensure that the contents of the file are rendered as a single line. + +### Step 1: Add Config Files + +Files added in the **Config Files** section are referenced using the `configFile.getAsString("fileName")` Harness functor: + +* `${configFile.getAsString("fileName")}` – standard text string. +* `${configFile.getAsBase64("fileName")}` – Base64 encoded. + +For example, let's add a **Config Files** file named **config-file-example.txt**. + +![](./static/add-service-level-configuration-files-00.png) + +You would reference this file in a Workflow that uses this Service like this: + + +``` +${configFile.getAsString("config-file-example.txt")} +``` +For example, here is a config file named **example.txt** containing the string `This is a config file from the Service` added to a Service and then referenced in the Workflow that uses the Service. Finally, in the completed Workflow deployment, you can see the contents of the file output. + +![](./static/add-service-level-configuration-files-01.png) + +### Step 2: Use Copy Configs Command + +In most cases, use the **Copy Configs** command to copy the Config Files to your target hosts. You can add Copy Configs from the command menu in the Service: + +![](./static/add-service-level-configuration-files-02.png) + +In Copy Configs, you can change the location on the target host(s) where the files are added: + +![](./static/add-service-level-configuration-files-03.png) + +By default, it uses the [Application Defaults](https://docs.harness.io/article/9dvxcegm90-variables#application_default_variables) path `$WINGS_RUNTIME_PATH`.
+ +When the Workflow using this Service is deployed, the Copy Configs command copies the Service Config File to the target host: + +![](./static/add-service-level-configuration-files-04.png) + +### Review: Use Config Files in Delegate Profiles + +The `${configFile.getAsString("fileName")}` and `${configFile.getAsBase64("fileName")}` expressions are not supported in [Delegate Profiles](https://docs.harness.io/article/yd4bs0pltf-run-scripts-on-the-delegate-using-profiles). + +Instead, encode the file in base64 and then add the file to Harness as an [Encrypted File Secret](https://docs.harness.io/article/nt5vchhka4-use-encrypted-file-secrets). + +Next, in the Delegate Profile script, reference the secret, pipe it to base64 and output it to the path where you need it: + + +``` +echo ${secrets.getValue("secret_name")} | base64 -d > /path/to/file +``` +### Notes + +* If you sync your Harness Application as described in [Harness Application-Level Git Sync](https://docs.harness.io/article/6mr74fm55h-harness-application-level-sync), the Config Files are also synced to your remote repo. + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/custom-artifact-source.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/custom-artifact-source.md new file mode 100644 index 00000000000..ed4c41d2a5d --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/custom-artifact-source.md @@ -0,0 +1,190 @@ +--- +title: Using Custom Artifact Sources +description: Define a Custom Artifact Source to collect artifacts from your custom repository. +sidebar_position: 60 +helpdocs_topic_id: jizsp5tsms +helpdocs_category_id: u4eimxamd3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +For enterprises that use a custom repository, Harness provides the Custom Artifact Source so they can add their custom repository to the Service.
+ +To use this artifact source, you provide a script to query your artifact server via its API (for example, REST) and then Harness stores the output on the Harness Delegate in the Harness-initialized variable `${ARTIFACT_RESULT_PATH}`. The output must be a JSON array, with a mandatory key for a Build Number. You then map a key from your JSON output to the Build Number. + +### Before You Begin + +* Read the [Create an Application](../applications/application-configuration.md) topic to get an overview of how Harness organizes Services. +* Read the [Add a Service](service-configuration.md) topic to understand the process to add a Service to an Application. +* Read [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code) to see how you can quickly configure your Harness Service using your existing YAML in Git. + +### Step 1: Add an Artifact Source + +To add a Custom Artifact Source, do the following: + +1. In your Harness Application, open the Service where you want to use a Custom Artifact Source. +2. Click **Add Artifact Source**, and then click **Custom Repository**. ![](./static/custom-artifact-source-44.png) + +The **Add Custom Artifact Source** dialog appears. ![](./static/custom-artifact-source-45.png) + +Fill out the Add Custom Artifact Source dialog fields. + +### Step 2: Source Type + +Select **Custom**. + +### Step 3: Display Name + +Enter a name to identify this custom artifact source. You will use this name when picking the artifact builds and versions during deployment. + +### Step 4: Script + +Enter a shell script that pulls the artifact from the custom repo to a file path on the Harness Delegate host. + +You can leave the **Script** empty by not selecting **Auto Collect Artifacts**.
+ +![](./static/custom-artifact-source-46.png) + +You cannot use Harness [Service Configuration](add-service-level-config-variables.md) variables in the script.The shell script you enter will query the custom artifact repository for your artifact, and output the result to a file on the Harness Delegate host using the environment variable `ARTIFACT_RESULT_PATH`, initialized by Harness. `ARTIFACT_RESULT_PATH` is a random, unique file path created on the Delegate by Harness. + +You must delete the Artifact Source and re-add it to re-collect the Artifacts if the Artifact Source or its script information has been changed.![](./static/custom-artifact-source-47.png) + +The script you enter should result in a JSON array, for example: + + +``` +{ + "items" : [ { + "id" : "bWF2ZW4tcmVsZWFzXXXXXXXXXXXXXkOGVmMzU2YWE0ZTliMmZlNDY", + "repository" : "maven-releases", + "format" : "maven2", + "group" : "mygroup", + "name" : "myartifact", + "version" : "1.0", + "assets" : [ { + "downloadUrl" : "http://nexus3.harness.io:8081/repository/maven-releases/mygroup/myartifact/1.0/myartifact-1.0.war", + "path" : "mygroup/myartifact/1.0/myartifact-1.0.war", + "id" : "bWF2ZW4tcmVsZWFzXXXXXXXXXXXXXkOGVmMzU2YWE0ZTliMmZlNDY", + "repository" : "maven-releases", + "format" : "maven2", + "checksum" : { + "sha1" : "da39a3eXXXXXXXXXXXXX95601890afd80709", + "md5" : "d41d8cdXXXXXXXXXXXXX998ecf8427e" + } + }, { + "downloadUrl" : "http://nexus3.harness.io:8081/repository/maven-releases/mygroup/myartifact/1.0/myartifact-1.0.war.md5", + "path" : "mygroup/myartifact/1.0/myartifact-1.0.war.md5", + "id" : "bWF2ZW4tcmVXXXXXXXXXXXXXYmE3YTE1OTYwNzUxZTE4ZjQ", + "repository" : "maven-releases", + "format" : "maven2", + "checksum" : { + "sha1" : "67a74306XXXXXXXXXXXXX570f4d093747", + "md5" : "74beXXXXXXXXXXXXX56088456" + } +.... +``` +Harness will read the file, process it, and make the artifacts available for deployment in your Workflows and Pipelines. 
+ +The following example pulls an artifact from a repo and outputs it to the `ARTIFACT_RESULT_PATH`: + + +``` +curl -X GET "http://nexus3.harness.io:8081/service/rest/v1/components?repository=maven-releases" \ +-H "accept: application/json" > ${ARTIFACT_RESULT_PATH} +``` +Here is an example using a Harness encrypted text secret for credentials: + + +``` +curl -u 'harness' ${secrets.getValue("repo_password")} https://myrepo.example.io/todolist/json/ > ${ARTIFACT_RESULT_PATH} +``` +For more information, see [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management). + +The expected output from the shell script is a JSON structure with an array, where every element represents an artifact object. You map the array object using **Query Result to Artifact Mapping**. + +### Step 5: Delegate Selectors + +Enter Delegate Selector names of one or more Harness Delegates to use when executing the script. The Delegates you identify should have network access to the custom repo in order to obtain any artifacts. See [Delegate Selectors](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#delegate_selectors). + +### Step 6: Timeout + +Enter a time limit for the script to execute before failing the artifact retrieval. + +### Step 7: Query Result to Artifact Mapping + +Use the **Query Result to Artifact Mapping** section to map the JSON key from your script to the artifact Build Number. + +![](./static/custom-artifact-source-48.png) + +In **Artifacts Array Path**, enter the root object of the array. For example, if your array object starts with `{"items" : [ {...},{...}` then enter `$.items` in **Artifacts Array Path**. + +Next, in **Build No. Path**, enter the key to use as the buildNo, such as `version`. + +Once mapped, you can reference the build number using the expression `${artifact.buildNo}`. + +#### Additional Attributes + +In **Additional Attributes**, you can map any additional values from your JSON array. 
+ +![](./static/custom-artifact-source-49.png) + +For example, if a subsection of your array contains the download URL, such as `"assets":[ {"downloadUrl"...` you enter `assets[0].downloadUrl` in **Path** and `URL` in **Name**. Later in your Workflow you can reference the URL with `${artifact.metadata.URL}`. + +### Option 1: Use the Artifact Collection Command + +Typically, a Build Workflow is used in a Pipeline to perform standard CI (continuous integration) of an artifact and is followed by another Workflow to perform the deployment of the built artifact. + +The way the Build Workflow performs CI is to run a **Jenkins** or **Shell Script** command to execute the build and store a variable with the build information, and then the **Artifact Collection** command collects the built artifact using the output variable and deposits it to the artifact repo. For information on the Jenkins and Shell Script commands, see [Using the Jenkins Command](../workflows/using-the-jenkins-command.md) and [Using the Shell Script Command](../workflows/capture-shell-script-step-output.md). + +The following image shows the Jenkins step building and storing a variable named **Jenkins**, and then the Artifact Collection step using the Custom Artifact Source and the `${Jenkins.description}` variable to reference the new build. + +![](./static/custom-artifact-source-50.png) + +Now that the artifact is collected, a second Workflow can deploy the artifact. + +The `${Jenkins.description}` variable requires that Jenkins have the [Description Setter](https://wiki.jenkins.io/display/JENKINS/Description+Setter+Plugin) plugin installed. See [Harness Built-in Parameter Variables](../workflows/using-the-jenkins-command.md#harness-built-in-parameter-variables) for Jenkins.
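Before relying on the **Query Result to Artifact Mapping** settings described above in a Workflow, you can sanity-check your script's output locally. A sketch using only POSIX tools (the sample JSON is hypothetical but mirrors the Nexus-style structure shown in Step 4, with `version` as the **Build No. Path**):

```shell
# Hypothetical script output, shaped like the JSON array shown in Step 4.
cat > /tmp/artifact-result.json <<'EOF'
{"items":[{"name":"myartifact","version":"1.0"},{"name":"myartifact","version":"1.1"}]}
EOF

# Every value extracted here would become a selectable build number
# (the key mapped in "Build No. Path").
grep -o '"version":"[^"]*"' /tmp/artifact-result.json | sed 's/.*:"\(.*\)"/\1/'
```
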
+
+### Option 2: Use the Shell Script Command
+
+For any Workflow type, you can reference the Custom Artifact Source using the Shell Script command and a variable reference to the build number, `${artifact.buildNo}`:
+
+![](./static/custom-artifact-source-51.png)
+
+### Step 8: Select the Build for Deployment
+
+When you deploy the Workflows with a Service that uses a Custom Artifact Source, you can select which artifact build to deploy.
+
+If the [Custom artifact source script](#step_4_script) is empty, the deployment proceeds with the version you enter, and the same details are available in the artifact variable, which can be accessed using `${artifact.*}`.
+
+![](./static/custom-artifact-source-52.png)
+
+### Artifactory Example
+
+This topic used Nexus for its examples, but another example might be helpful.
+
+For example, here is a script for Artifactory:
+
+
+```
+curl -X POST \
+  https://harness.jfrog.io/harness/api/search/aql \
+  -H 'Authorization: Basic xxxxxxx=' \
+  -H 'Content-Type: text/plain' \
+  -H 'cache-control: no-cache' \
+  -d 'items.find({"repo":{"$eq":"example-maven"}})' | jq '.' > ${ARTIFACT_RESULT_PATH}
+```
+For the remaining settings, you would use the following:
+
+* Artifacts Array Path: `$.results`
+* Build No. Path: `name`
+* Additional Attributes:
+	+ Path (Relative): `path`
+	+ Name (Optional): `${path}`
+
+This example uses jq, which is available in most operating system package repositories. If you need to install it, see your operating system instructions. 
For Ubuntu, installation is:
+
+
+```
+sudo apt update
+sudo apt install jq
+```

diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/disable-artifact-collection.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/disable-artifact-collection.md
new file mode 100644
index 00000000000..ca40dd93d5e
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/disable-artifact-collection.md
@@ -0,0 +1,77 @@
+---
+title: Enable and Disable Artifact Auto Collection
+description: Enable or disable automatic artifact collection whenever you need.
+sidebar_position: 100
+helpdocs_topic_id: etpaokj9iv
+helpdocs_category_id: u4eimxamd3
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Currently, this feature is behind the feature flag `ARTIFACT_COLLECTION_CONFIGURABLE`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+In some cases, automatic collection of artifacts can cause issues like rate limit increases because of too many calls from the Harness Delegate to your artifact servers.
+
+You can enable or disable automatic artifact collection whenever you need. This topic will walk you through the process.
+
+### Before You Begin
+
+* [Add Specs and Artifacts using a Harness Service](service-configuration.md)
+
+### Supported Platforms and Technologies
+
+See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms).
+
+### Review: Artifact Auto Collection
+
+Here's a summary of how the Artifact Auto Collection works in Harness:
+
+* **Enabled:** If auto collection is enabled, Harness collects the artifacts using metadata when you set up the Artifact Source in a Harness Service.
+* **Disabled:** If auto collection is disabled, Harness collects the artifact metadata from the repo on demand when you select the Artifact Source during deployment. Harness then stores the metadata in the Harness database. 
+ + For example, when you deploy a Workflow/Pipeline and select an artifact that has auto collection disabled, Harness will fetch the artifact metadata (tags/versions) on demand at that moment. + +### Step 1: Set Artifact Auto Collection on the Service + +In the Service settings, in **Artifact Source**, you can enable/disable the **Auto Collect Artifact** setting. + +![](./static/disable-artifact-collection-39.png) + +When **Auto Collect Artifact** is disabled for an artifact, a stop icon is listed next to the artifact: + +![](./static/disable-artifact-collection-40.png) + +If **Auto Collect Artifact** is disabled for an artifact, you will see the following: + +* No artifacts listed for that artifact in the **Artifact History** in that Service. +* If **Auto Collect Artifact** is disabled for all artifacts, **Artifact History** does not appear at all. +* No artifacts listed for that artifact in the **Manually Select an Artifact** settings. + +#### Artifact Source YAML + +The **Auto Collect Artifact** setting is described in the Artifact Servers YAML using the `collectionEnabled` label: + +* Enabled: `collectionEnabled: true` +* Disabled: `collectionEnabled: false` + +![](./static/disable-artifact-collection-41.png) + +### Step 2: Select the Artifact Build/Version on New Deployments + +When you deploy a Workflow/Pipeline that uses a Service with an artifact source that has **Auto Collect Artifact** disabled, you can select the artifact source in **Start New Deployment** (or when re-running a Workflow/Pipeline), and Harness will fetch the artifacts on demand. + +![](./static/disable-artifact-collection-42.png) + +If an artifact tag/version is not listed, but you know it is in the repo, you can enter its name in the **Build/Version** setting and Harness will fetch it during deployment. 
If Harness cannot find the artifact, you will see an error in the following format in the Pre-deployment stage of the Workflow: `Invalid request: Could not find requested build number [artifact name] for artifact stream type [artifact type]`. + +### Review: Artifact Collection Workflow Step + +In the deployed **Artifact Collection** Workflow step, you can see the metadata collected: + +![](./static/disable-artifact-collection-43.png) + +### Option: Triggers + +In the **On New Artifact** and **On Webhook Event** Trigger types, only artifact sources with the **Auto Collect Artifact** setting enabled are listed. + +### See Also + +* [Using Custom Artifact Sources](custom-artifact-source.md) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/service-configuration.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/service-configuration.md new file mode 100644 index 00000000000..b0c7ea6c205 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/service-configuration.md @@ -0,0 +1,249 @@ +--- +title: Add Specs and Artifacts using a Harness Service +description: Add a Harness Service to represent your microservices/apps. +sidebar_position: 10 +helpdocs_topic_id: eb3kfl8uls +helpdocs_category_id: u4eimxamd3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Services represent your microservices/apps. You define where the artifacts for those services come from, and you define the container specs, configuration variables, and files for those services. + +This is a general overview of a Harness Service. For detailed deployment information for each type of Service, see [Deployments Overview](https://docs.harness.io/article/i3n6qr8p5i-deployments-overview) and select your deployment type. + + +### Before You Begin + +* Read the [Create an Application](../applications/application-configuration.md) topic to get an overview of how Harness organizes Services. 
+* Read the [Add a Service](service-configuration.md) topic to understand the process to add a Service to an Application. +* Read [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code) to see how you can quickly configure your Harness Service using your existing YAML in Git. + +### Step 1: Add a Service to a Harness Application + +To add a service, do the following: + +1. Click **Setup**, and then click the name of your application. The application entities appear. + + ![](./static/service-configuration-53.png) + +2. Click **Services**, and then click **Add Service**. The **Service** dialog appears. + + ![](./static/service-configuration-54.png) + +3. Select a **Deployment Type**. Each deployment type will set up a Service for its deployment platform. For example, selecting **Kubernetes** creates a Service with default Kubernetes manifest files. + +4. Click **SUBMIT**. The **Service Overview** appears. + + ![](./static/service-configuration-55.png) + + In this new Service you will set up artifacts, container specs, and configuration files and variables. + +### Step 2: Select Deployment Type + +There are multiple Service types used in Harness, listed in the **Deployment Type** field, such as Kubernetes and Helm. + +![](./static/service-configuration-56.png) + +These Service types are discussed in detail in the deployment guides for those platforms. For more information, see [Deployments Overview](https://docs.harness.io/article/i3n6qr8p5i-deployments-overview). + +### Step 3: Add an Artifact Source + +Different Service types support different artifact sources, such as Docker or Amazon AMI. For details on different service types and artifact sources, see [Service Types and Artifact Sources](service-types-and-artifact-sources.md). + +For most deployment types, you add an artifact source and then reference it in the Service or Workflow. 
+
+If you do not reference the artifact source somewhere in your Harness entities, via a Service spec or Shell Script Workflow step, for example, Harness does not use it.
+
+For example, in a Kubernetes Service, you add an Artifact Source for the Docker image you want to deploy. Here's an example using an nginx image on Docker Hub:
+
+![](./static/service-configuration-57.png)
+
+Next, you reference the artifact **image** and **dockercfg** secret in the Kubernetes manifest **values.yaml** file. Here's the default that is created automatically when you add a new Kubernetes Service:
+
+![](./static/service-configuration-58.png)
+
+And here are the values referenced using the Go templating expressions `{{.Values.image}}` and `{{.Values.dockercfg}}` in the Secret and Deployment manifests in the same Service:
+
+
+```
+{{- if .Values.createImagePullSecret}}
+apiVersion: v1
+kind: Secret
+metadata:
+  name: {{.Values.name}}-dockercfg
+  annotations:
+    harness.io/skip-versioning: true
+data:
+  .dockercfg: {{.Values.dockercfg}}
+type: kubernetes.io/dockercfg
+---
+{{- end}}
+...
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: {{.Values.name}}-deployment
+spec:
+  replicas: {{int .Values.replicas}}
+  selector:
+    matchLabels:
+      app: {{.Values.name}}
+  template:
+    metadata:
+      labels:
+        app: {{.Values.name}}
+    spec:
+      {{- if .Values.createImagePullSecret}}
+      imagePullSecrets:
+      - name: {{.Values.name}}-dockercfg
+      {{- end}}
+      containers:
+      - name: {{.Values.name}}
+        image: {{.Values.image}}
+        {{- if or .Values.env.config .Values.env.secrets}}
+        envFrom:
+        {{- if .Values.env.config}}
+        - configMapRef:
+            name: {{.Values.name}}
+        {{- end}}
+        {{- if .Values.env.secrets}}
+        - secretRef:
+            name: {{.Values.name}}
+        {{- end}}
+        {{- end}}
+```
+The above referencing of the artifact source is done automatically when you create a new Kubernetes Service, but it is important to remember that you must reference the artifact when you create your own manifests in Harness. 
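+
+For reference, the **values.yaml** file that these expressions read from looks approximately like the following. This is a sketch of the default file Harness generates for a new Kubernetes Service; the exact contents can differ by version:
+
+
+```
+# Sketch of the default values.yaml (contents vary by Harness version)
+name: harness-example
+replicas: 1
+
+image: ${artifact.metadata.image}
+dockercfg: ${artifact.source.dockerconfig}
+
+createImagePullSecret: false
+
+env:
+  config: {}
+  secrets: {}
+```
+The `image` and `dockercfg` values are populated from the Artifact Source at runtime, which is what connects the Artifact Source you added to the manifests above.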
+
+### Option: Configure Artifact Collection
+
+Currently, this feature is behind the Feature Flag `ARTIFACT_COLLECTION_CONFIGURABLE`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+You can control how Harness collects artifacts using the **Auto Collect Artifact** option in **Artifact Source**.
+
+![](./static/service-configuration-59.png)
+
+By default, artifact collection runs every 3 minutes. If you have shared resources such as artifact servers and secret managers, you might notice a lot of API calls. This number of calls can lead to an increase in load and theoretically impact system stability. Also, some services have rate limiting that can be impacted.
+
+The **Auto Collect Artifact** option in **Artifact Source** allows you to enable and disable artifact collection to prevent any issues. Also, Harness provides the following artifact collection information:
+
+* **Failed Attempts:** the number of times artifact collection failed.
+* **Last Polled At:** the last time Harness attempted to collect the artifact.
+* **Last Collected At:** the last time Harness successfully collected the artifact.
+* **Artifact Collection Status:** shows if the collection is successful, retrying, or failed.
+
+**Auto Collect Artifact** is automatically disabled if the number of failed collections exceeds 3500. You can reset artifact collection using the **Reset Artifact Collection** button in the status message.
+
+Let's look at some examples.
+
+#### Disabled
+
+When the **Auto Collect Artifact** option in **Artifact Source** is not selected, artifact collection is disabled.
+
+Hover over the Artifact Source to see the status.
+
+![](./static/service-configuration-60.png)
+
+**Auto Collect Artifact** is automatically disabled if the number of failed collections exceeds 3500. 
You can reset artifact collection using the **Reset Artifact Collection** button in the status message.
+
+#### Enabled
+
+When the **Auto Collect Artifact** option in **Artifact Source** is selected, artifact collection is enabled.
+
+Hover over the Artifact Source to see the status.
+
+![](./static/service-configuration-61.png)
+
+#### Enabled and Retrying
+
+When the **Auto Collect Artifact** option in **Artifact Source** is selected but collection failed, Harness will retry collection.
+
+Hover over the Artifact Source to see the status.
+
+![](./static/service-configuration-62.png)
+
+#### Reset Artifact Collection when Maximum Failed Attempts Reached
+
+If artifact collection failures have reached the maximum of 3500 attempts, you can reset artifact collection using the **Reset Artifact Collection** button in the status message.
+
+![](./static/service-configuration-63.png)
+
+#### Artifact Collection with Harness GraphQL
+
+You can enable/disable artifact collection for an Artifact Source using the Harness GraphQL API.
+
+First, you'll need the Artifact Source Id. Obtain the Artifact Source Id by querying the Service:
+
+
+```
+query{
+  service(serviceId: "60HIgOKdTayQUh3F2dndVw"){
+    artifactSources{
+      id
+      name
+      artifacts(limit:10, offset:0){
+        nodes {
+          id
+          buildNo
+        }
+      }
+    }
+  }
+}
+```
+This will give you a response with all Artifact Sources for the Service. Save the Id of the Artifact Source you want to enable/disable collections for.
+
+Next, use `setArtifactCollectionEnabled` to enable or disable the Artifact Source collection.
+
+You set `artifactCollectionEnabled` to `true` to enable collection or `false` to disable collection.
+
+
+```
+mutation {
+  setArtifactCollectionEnabled (input:{
+    clientMutationId:"abc",
+    appId:"d2GfddtSRHSmW2TpFhIreA",
+    artifactStreamId:"pXESnvkXSA210zrAlbCxPw",
+    artifactCollectionEnabled:true
+  })
+  {
+    clientMutationId
+    message
+  }
+}
+```
+The Artifact Source Id you collected earlier is used by `artifactStreamId`. 
The Application Id is entered in `appId`.
+
+The response to the mutation will be something like this:
+
+
+```
+{
+  "data": {
+    "setArtifactCollectionEnabled": {
+      "clientMutationId": "abc",
+      "message": "Successfully set artifact collection enabled"
+    }
+  }
+}
+```
+### Option: Manually Select an Artifact
+
+You can have Harness pull a list of builds and versions from an Artifact Source. Click **Artifact History**, and then **Manually pull artifact**. The **Manually Select An Artifact** dialog appears.
+
+![](./static/service-configuration-64.png)
+
+Select the artifact source in **Artifact Stream**, and then the artifact build in **Artifact**.
+
+#### Deleted Artifacts Refreshed in Harness
+
+Currently, Harness will update the list of artifacts from the following repo types and remove deleted items:
+
+* Docker
+* AMI
+* Artifactory
+* ECR
+* GCR
+* ACR
+* Nexus
+* Azure Machine Image
+
+If you delete artifacts in another repo, such as S3, the list of artifacts in Harness is not updated.
+
+### Container Specs and Manifests
+
+You can use container specifications and manifests to configure a Service. For information on the specs and manifests available for the different Service types, see [Deployments Overview](https://docs.harness.io/article/i3n6qr8p5i-deployments-overview).
+
diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/service-types-and-artifact-sources.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/service-types-and-artifact-sources.md
new file mode 100644
index 00000000000..ae6262e303b
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/service-types-and-artifact-sources.md
@@ -0,0 +1,258 @@
+---
+title: Service Types and Artifact Sources
+description: Provides a matrix of Harness artifact types and their artifact sources, displaying which types support metadata-only sources, and which types support both metadata and file sources. 
+sidebar_position: 70
+helpdocs_topic_id: qluiky79j8
+helpdocs_category_id: u4eimxamd3
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This topic provides a matrix of Harness Service artifact types and their artifact sources. The matrix shows which Service artifact types support metadata-only sources, and which types support both metadata and file sources.
+
+It also includes information on how to manage the different source types, and on how to copy, download, and install artifacts.
+
+### Metadata and File Artifact Sources
+
+Harness Services allow you to identify artifacts using their metadata. At deployment runtime, Harness uses the metadata to copy or download the artifact to the target hosts/containers.
+
+![](./static/service-types-and-artifact-sources-07.png)
+
+Artifact files without metadata were previously supported and were downloaded to the Harness store, but are no longer supported.
+
+#### Switching to Metadata Only
+
+If you have file-based Artifact Sources currently set up in your Harness Services (meaning, without the Metadata Only option selected), these are still supported.
+
+In the near future, only metadata sources will be supported. You can switch your current file-based sources to metadata in advance of this change.
+
+If you switch to **Metadata Only**, it is not applied to previously collected artifacts.
+
+#### Artifact Sizes and Limitations
+
+Harness limits file uploads to 1GB. However, Harness streams directly from the artifact server if the file size is larger (even larger than 25GB).
+
+For artifacts larger than 1GB, use the **Metadata Only** option in the Harness Service **Artifact Source** settings.
+
+### Artifact Sources and Artifact Type Matrix
+
+The following table lists the artifact types, such as Docker Image and AMI, and the Artifact Sources' support for metadata.
+
+Legend:
+
+* **M** - Metadata. This includes Docker image and registry information. For AMI, this means AMI ID-only. 
+* **Blank** - Not supported.
+
+
+
+| **Sources** | **Docker Image** (Kubernetes/Helm/TAS) | **AWS AMI** | **AWS CodeDeploy** | **AWS Lambda** | **JAR** | **RPM** | **TAR** | **WAR** | **ZIP** | **TAS** | **IIS** |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Jenkins | | | M | M | M | M | M | M | M | M | M |
+| Docker Registry | M | | | | | | | | | M | |
+| Amazon S3 | | | M | M | M | M | M | M | M | M | M |
+| Amazon AMI | | M | | | | | | | | | |
+| Elastic Container Registry (ECR) | M | | | | | | | | | M | |
+| Azure Reg | M | | | | | | | | | | |
+| Azure DevOps Artifact | | | M | | M | M | M | M | M | M | M |
+| GCS | | | M | | | M | M | M | M | M | M |
+| GCR | M | | | | | | | | | M | |
+| Artifactory | M | | M | M | M | M | M | M | M | M | M |
+| Nexus | M | | M | M | M | M | M | M | M | M | M |
+| Bamboo | | | M | | M | M | M | M | M | M | M |
+| SMB | | | M | | M | M | M | M | M | M | M |
+| SFTP | | | M | | M | M | M | M | M | M | M |
+
+### Docker Image Artifacts
+
+If the Artifact Source is **Docker**, at runtime the Harness Delegate will use the metadata to initiate a pull command from the deployment target host(s), and pull the artifact from the registry (Docker Registry, ECR, etc.) to the target host.
+
+To ensure the success of this artifact pull, the target host(s) **must** have network connectivity to the registry.
+
+The **Copy Artifact** script in a Service does not apply to Docker artifacts. For Services using Docker artifacts, at runtime the Harness Delegate will execute a Docker pull on the target host to pull the artifact. Ensure the target host has network connectivity to the Docker artifact server.
+
+### Copy and Download of Metadata Artifact Sources
+
+When copying or downloading artifacts, Harness uses metadata in the following ways. 
+
+![](./static/service-types-and-artifact-sources-08.png)
+
+#### Copy Artifact
+
+The Copy Artifact command is supported for Artifact Sources that use Artifactory, Amazon S3, Jenkins, Bamboo, and Nexus.
+
+The Copy Artifact command is added by default when you create a Harness Service using the deployment type **Secure Shell (SSH)**.
+
+![](./static/service-types-and-artifact-sources-09.png)
+
+During deployment runtime, Harness uses the metadata to download the artifact to the Harness Delegate. The Delegate then copies the artifact to the target host(s).
+
+Ensure that the Delegate has network connectivity to the Artifact Server.
+
+#### Download Artifact
+
+For SSH and WinRM Service types, the Download Artifact script is supported for many artifact sources, such as Amazon S3, Artifactory, Azure, and SMB and SFTP (Powershell-only) artifact sources.
+
+The Download Artifact command is added by default when you create a Harness Service using the deployment type **Windows Remote Management (WinRM)**.
+
+![](./static/service-types-and-artifact-sources-10.png)
+
+At deployment runtime, the Harness Delegate executes commands on the target host(s) to download the artifact directly to the target host(s). This is the process for file-based and metadata Artifact Sources. **The target host(s) must have network connectivity to the artifact server.**
+
+Ensure that the target host has network connectivity to the Artifact Server.
+
+### Exec Script
+
+For all Service types, the Exec script can be added to the Service to use the artifact source metadata and copy or download the artifact.
+
+#### Add an Exec Script
+
+1. In the Service, click **Add Command**. The **Add Command** dialog appears.
+
+	In **Command Type**, select **Install**.
+
+	![](./static/service-types-and-artifact-sources-11.png)
+
+	The command is added to the Service. There is also an **Install** command in the Template Library that is preset with common variables. 
+ + ![](./static/service-types-and-artifact-sources-12.png) + + For information on using the Template Library, see [Use Templates](https://docs.harness.io/article/ygi6d8epse-use-templates). + +2. Hover over the **Add** button to see the available scripts. + + ![](./static/service-types-and-artifact-sources-13.png) + +3. Under **Scripts**, click **Exec**. The **Exec** dialog appears. + + ![](./static/service-types-and-artifact-sources-14.png) + +4. In **Working Directory**, enter the working directory on the target host from which the Harness Delegate will run the Bash or PowerShell script, such as **/tmp** on Linux and **%TEMP%** on Windows. By default, and if **Working Directory** is left empty, the script is executed in the home directory. + +5. Add the commands needed to install your microservice using the metadata and click **SUBMIT**. The **Exec** script is added to the Service. + + ![](./static/service-types-and-artifact-sources-15.png) + +To build your **Exec** script, for example a cURL script, you can use the built-in Harness variables to refer to the Artifact Sources. For example, the built-in variable `${artifact.url}`. Simply enter `${` in the **Command** field to see the list of variables. + +![](./static/service-types-and-artifact-sources-16.png) + +When you create a Workflow using this Service, such as a Basic Workflow, it will include an **Install** step that will execute the **Exec** script. + +![](./static/service-types-and-artifact-sources-17.png) + +For a list of artifact-related built-in variables, see **Artifact** in the table in [Variables List](https://docs.harness.io/article/9dvxcegm90-variables#variables_list). + +### Copy Artifact vs Download Artifact + +The difference between the Copy Artifact and Download Artifact scripts is important to understand to ensure your deployment is successful: + +* **Copy Artifact:** With Copy Artifact, the Delegate downloads the artifact (e.g. 
JAR file) and then copies it via SCP (secure copy) to the target host(s). The Delegate **must** have connectivity with the Artifact Source (e.g. Nexus, Jenkins, Bamboo, etc).
+* **Download Artifact:** With Download Artifact, Harness directly downloads the file onto the target host(s).
+	+ **Connectivity:** The target host **must** have connectivity with the Artifact Source.
+	+ **Credentials:** Since the target host performs the download and not the Delegate, the Harness AWS Cloud Provider used for the connection must use Access/Secret keys for its credentials. It cannot use the **Inherit IAM Role from Delegate** option. If you must use the **Inherit IAM Role from Delegate** option for your Connector, then use **Copy Artifact**.
+
+![](./static/service-types-and-artifact-sources-18.png)
+
+For WinRM, always use Metadata-only.
+
+![](./static/service-types-and-artifact-sources-19.png)
+
+### Copy Artifact Script
+
+The Copy Artifact command is supported for Artifact Sources that use Artifactory, Amazon S3, Jenkins, Bamboo, and Nexus. The command behaves as follows:
+
+1. During deployment runtime, Harness uses the metadata to download the artifact to the Harness Delegate.
+2. The Delegate then copies the artifact to the target host(s), such as Amazon S3.
+To ensure the success of this download, the host(s) running the Harness Delegate(s) **must** have network connectivity to the artifact server and the target deployment host(s).
+
+For Jenkins and Bamboo to work with Copy Artifact, ensure that in the Jenkins or Bamboo artifact source settings, you enter the exact path to the *artifact* and not just a path to a folder. For example, `artifacts/target` will not work, but `artifacts/target/todolist.war` will work:
+
+![](./static/service-types-and-artifact-sources-20.png)
+
+#### Adding a Copy Artifact Script
+
+To add a Copy Artifact script in a Service, do the following:
+
+1. Click **Add Command** and in **Command Type**, select **Install**. 
+ + ![](./static/service-types-and-artifact-sources-21.png) + + The command is added to the Service. + +2. Hover over the **Add** step to see the **Copy** script, and click **Copy**. + + ![](./static/service-types-and-artifact-sources-22.png) + + The **Copy** script appears. + + ![](./static/service-types-and-artifact-sources-23.png) + + The `$WINGS_RUNTIME_PATH` is the destination path for the artifact. The variable is a constant used at runtime. For more information, see [Constants](https://docs.harness.io/article/9dvxcegm90-variables#constants). + +3. In **Source**, select **Application Artifacts**, and click **SUBMIT**. The **Copy** script is added to the Service. + + ![](./static/service-types-and-artifact-sources-24.png) + +### Download Artifact Script + +For SSH and WinRM Service types, the **Download Artifact** script is supported for many artifact sources, such as Amazon S3, Artifactory, Azure, and SMB and SFTP (Powershell-only) artifact sources. + +For other Service types and artifact sources, add a new command and use the Exec script to download the artifact. + +For SSH and WinRM Services that use artifact sources that are supported by Download Artifact, at deployment, the Harness Delegate will run the Download Artifact script on the deployment target host and download the artifact. + +Here are the Download Artifact script settings: + +![](./static/service-types-and-artifact-sources-25.png) + +In **Artifact Download Directory**, a path using [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables) points to the **Application Defaults** variables. + +![](./static/service-types-and-artifact-sources-26.png) + +You can add an Application Defaults variable for the artifact sources that will be referenced by the Download Artifact script, and then use it wherever Download Artifact is used. For more information, see [Application Defaults Variables](https://docs.harness.io/article/9dvxcegm90-variables#application_default_variables). 
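+
+For example, assuming your Application still has the standard default variable `RUNTIME_PATH` (it is created by default but can be renamed or edited), the **Artifact Download Directory** can reference it with a variable expression such as:
+
+
+```
+${app.defaults.RUNTIME_PATH}
+```
+At runtime, Harness resolves the expression to the path defined in the Application's **Application Defaults**.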
+
+You can also enter a path in **Artifact Download Directory** manually.
+
+If the path in **Artifact Download Directory** is missing from the target host, then the script will fail.
+
+![](./static/service-types-and-artifact-sources-27.png)
+
+### Copy and Download Artifact Provider Support
+
+The following table lists the providers supported by the Copy and Download Artifact commands in a Service.
+
+Legend:
+
+* Y: Yes
+* N: No
+* N/A: Not Applicable
+
+
+
+| **Provider** | **Repository/Package Types** | **Download Artifact** (WinRM or SSH Service types only) | **Copy Artifact** (SSH Service type only) |
+| --- | --- | --- | --- |
+| AWS S3 | All | Y | Y |
+| Artifactory (JFrog) | Non-Docker | Y | Y |
+| | Docker | N/A | N/A |
+| SMB | IIS related | Y | N/A |
+| SFTP | IIS related | Y | N/A |
+| Jenkins:Metadata-only | All. You must specify an artifact in the **Artifact Paths** setting. See [Jenkins and Bamboo Metadata and Artifact Paths](#jenkins_and_bamboo_metadata_and_artifact_paths). | Y | Y |
+| Jenkins:Artifact is saved to Harness | All | Y | Y |
+| Docker Registry | Docker | N/A | N/A |
+| AWS AMI | AMI | N/A | N/A |
+| AWS ECR | Docker | N/A | N/A |
+| Google Cloud Storage | All | N/A | N/A |
+| Google Container Registry | Docker | N/A | N/A |
+| Nexus 2.x/ 3.x:Artifact is saved to Harness | Maven 2.0 | Y | Y |
+| | NPM | Y | Y |
+| | NuGet | Y | Y |
+| | Docker | N/A | N/A |
+| Nexus 2.x/ 3.x:Metadata-only | Maven 2.0 | Y | Y |
+| | NPM | Y | Y |
+| | NuGet | Y | Y |
+| | Docker | N/A | N/A |
+| Bamboo:Metadata-only | All. You must specify an artifact in the **Artifact Paths** setting. See [Jenkins and Bamboo Metadata and Artifact Paths](#jenkins_and_bamboo_metadata_and_artifact_paths). 
| Y | Y | +| Bamboo:Artifact is saved to Harness | All | Y | Y | +| Azure Artifacts | Maven 2.0, NuGet | Y | Y | +| Custom Repository | All | N/A | N (use the Exec script to use the metadata to copy artifact to target host) | + +### Jenkins and Bamboo Metadata and Artifact Paths + +If you want to download the Jenkins or Bamboo artifact using the Download Artifact script, you must select an artifact path in the **Artifact Path** setting. This enables Harness to obtain the exact URLs for downloading the artifacts. + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-a-docker-image-service-05.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-a-docker-image-service-05.png new file mode 100644 index 00000000000..b1cf268abad Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-a-docker-image-service-05.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-a-docker-image-service-06.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-a-docker-image-service-06.png new file mode 100644 index 00000000000..ea0fbbf5609 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-a-docker-image-service-06.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-an-azure-dev-ops-artifact-source-65.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-an-azure-dev-ops-artifact-source-65.png new file mode 100644 index 00000000000..2c9d7c58ad6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-an-azure-dev-ops-artifact-source-65.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-an-azure-dev-ops-artifact-source-66.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-an-azure-dev-ops-artifact-source-66.png new file mode 100644 index 00000000000..0e61a5ba5fb Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-an-azure-dev-ops-artifact-source-66.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-an-azure-dev-ops-artifact-source-67.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-an-azure-dev-ops-artifact-source-67.png new file mode 100644 index 00000000000..ff560d35b5d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-an-azure-dev-ops-artifact-source-67.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-config-variables-28.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-config-variables-28.png new file mode 100644 index 00000000000..103fe1c1e3d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-config-variables-28.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-config-variables-29.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-config-variables-29.png new file mode 100644 index 00000000000..67e174860d5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-config-variables-29.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-config-variables-30.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-config-variables-30.png new file mode 100644 index 00000000000..c315f5df7ef 
Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-config-variables-30.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-configuration-files-00.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-configuration-files-00.png new file mode 100644 index 00000000000..f02e156bfe1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-configuration-files-00.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-configuration-files-01.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-configuration-files-01.png new file mode 100644 index 00000000000..8a0e68fab6d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-configuration-files-01.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-configuration-files-02.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-configuration-files-02.png new file mode 100644 index 00000000000..8cff45c70d4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-configuration-files-02.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-configuration-files-03.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-configuration-files-03.png new file mode 100644 index 00000000000..bacefc71497 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-configuration-files-03.png 
differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-configuration-files-04.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-configuration-files-04.png new file mode 100644 index 00000000000..21d3ff1cb0d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/add-service-level-configuration-files-04.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-44.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-44.png new file mode 100644 index 00000000000..ae184e7d765 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-44.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-45.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-45.png new file mode 100644 index 00000000000..fa0441d5f3a Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-45.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-46.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-46.png new file mode 100644 index 00000000000..94a6d29a37c Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-46.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-47.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-47.png new file mode 100644 index 
00000000000..c4f3496e796 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-47.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-48.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-48.png new file mode 100644 index 00000000000..953b41dd921 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-48.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-49.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-49.png new file mode 100644 index 00000000000..7bbb99689c9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-49.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-50.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-50.png new file mode 100644 index 00000000000..4b699341eda Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-50.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-51.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-51.png new file mode 100644 index 00000000000..dce7a6bc169 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-51.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-52.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-52.png new file mode 100644 index 00000000000..7547d89af7f Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/custom-artifact-source-52.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/disable-artifact-collection-39.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/disable-artifact-collection-39.png new file mode 100644 index 00000000000..82e80bae3f3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/disable-artifact-collection-39.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/disable-artifact-collection-40.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/disable-artifact-collection-40.png new file mode 100644 index 00000000000..8b5500f84ac Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/disable-artifact-collection-40.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/disable-artifact-collection-41.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/disable-artifact-collection-41.png new file mode 100644 index 00000000000..676c64aa866 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/disable-artifact-collection-41.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/disable-artifact-collection-42.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/disable-artifact-collection-42.png new file mode 100644 index 00000000000..b75f77964d8 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/disable-artifact-collection-42.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/disable-artifact-collection-43.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/disable-artifact-collection-43.png new file mode 100644 index 00000000000..1885f2cff7d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/disable-artifact-collection-43.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-53.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-53.png new file mode 100644 index 00000000000..4168a03606e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-53.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-54.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-54.png new file mode 100644 index 00000000000..deb29635805 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-54.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-55.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-55.png new file mode 100644 index 00000000000..383eaaf6b75 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-55.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-56.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-56.png new file mode 100644 index 00000000000..dc164afe381 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-56.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-57.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-57.png new file mode 100644 index 00000000000..2ad590e4a27 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-57.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-58.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-58.png new file mode 100644 index 00000000000..1bb4cf871b2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-58.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-59.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-59.png new file mode 100644 index 00000000000..18a45d7fd56 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-59.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-60.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-60.png new file mode 100644 index 00000000000..12d5b3c82d4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-60.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-61.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-61.png new file mode 100644 index 00000000000..fb5c66130d7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-61.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-62.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-62.png new file mode 100644 index 00000000000..64b8388e469 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-62.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-63.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-63.png new file mode 100644 index 00000000000..a113e4be182 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-63.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-64.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-64.png new file mode 100644 index 00000000000..3a8c94bf96d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-configuration-64.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-07.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-07.png new file mode 100644 index 00000000000..73017ee1370 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-07.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-08.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-08.png new file mode 100644 index 00000000000..5b64e3ab869 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-08.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-09.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-09.png new file mode 100644 index 00000000000..9c8b562cbd2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-09.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-10.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-10.png new file mode 100644 index 00000000000..e4cfb41bed5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-10.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-11.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-11.png new file mode 100644 index 00000000000..99ee954c8cf Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-11.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-12.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-12.png new file mode 100644 index 00000000000..91e98d8914c Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-12.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-13.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-13.png new file mode 100644 index 00000000000..5245c52d454 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-13.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-14.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-14.png new file mode 100644 index 00000000000..1a56112d43d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-14.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-15.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-15.png new file mode 100644 index 00000000000..95ac80dfd33 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-15.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-16.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-16.png new file mode 100644 index 00000000000..d5c885c0b5d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-16.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-17.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-17.png new file mode 100644 index 00000000000..fcb12302f8c Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-17.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-18.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-18.png new file mode 100644 index 00000000000..c69cda9bf21 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-18.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-19.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-19.png new file mode 100644 index 00000000000..5b64e3ab869 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-19.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-20.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-20.png new file mode 100644 index 00000000000..6d4ba2054b1 Binary files 
/dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-20.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-21.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-21.png new file mode 100644 index 00000000000..99ee954c8cf Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-21.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-22.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-22.png new file mode 100644 index 00000000000..e11c9bdfbeb Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-22.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-23.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-23.png new file mode 100644 index 00000000000..d0926dc8795 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-23.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-24.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-24.png new file mode 100644 index 00000000000..df74c3e771e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-24.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-25.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-25.png new file mode 100644 index 00000000000..7e3651f6b05 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-25.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-26.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-26.png new file mode 100644 index 00000000000..f1ff4df3fca Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-26.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-27.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-27.png new file mode 100644 index 00000000000..4626890a2af Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/service-types-and-artifact-sources-27.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-31.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-31.png new file mode 100644 index 00000000000..9af633849d8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-31.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-32.gif 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-32.gif new file mode 100644 index 00000000000..3ac556ddede Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-32.gif differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-33.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-33.png new file mode 100644 index 00000000000..710ca0d91eb Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-33.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-34.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-34.png new file mode 100644 index 00000000000..a900401e8c0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-34.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-35.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-35.png new file mode 100644 index 00000000000..7f45c5d8953 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-35.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-36.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-36.png new file mode 100644 index 00000000000..26d8d618a02 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-36.png differ 
diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-37.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-37.png new file mode 100644 index 00000000000..570e3ac94c7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-37.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-38.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-38.png new file mode 100644 index 00000000000..914a9b3ce8f Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/static/use-script-based-service-38.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/use-script-based-service.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/use-script-based-service.md new file mode 100644 index 00000000000..3ba96c181f7 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup-services/use-script-based-service.md @@ -0,0 +1,78 @@ +--- +title: Use Script Based Services +description: Create and add Bash and PowerShell scripts as Services using the options and templates available in Harness. +sidebar_position: 20 +helpdocs_topic_id: 1329n00z5e +helpdocs_category_id: u4eimxamd3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can create and add Bash and PowerShell scripts as Services. You can use the options and templates available in Harness to use these scripts in your deployment. + + +### Before You Begin + +* Read the [Create an Application](../applications/application-configuration.md) topic to get an overview of how Harness organizes Services. 
+* Read the [Add a Service](service-configuration.md) topic to understand the process to add a Service to an Application. +* Read [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code) to see how you can quickly configure your Harness Service using your existing YAML in Git. + +### Option 1: Edit Default Commands + +When you create a script-based Service, Harness automatically generates the commands needed to install and run the application in the Artifact Source on the target host(s). You can edit these default commands. + +![](./static/use-script-based-service-31.png)You can also add templates from the template library or add commands as shown in the following options. + +#### Moving Commands + +![](./static/use-script-based-service-32.gif)As part of the Deployment Specification steps (**Start**, **Install**, or **Stop**), you can drag a command into the place of another command, or position it before or after another command. The other commands are reordered accordingly and the script is updated automatically. + +### Option 2: Use Template Library + +You can use the wealth of scripts available in the Template Library to rapidly develop your script. For more information, see [Use Templates](https://docs.harness.io/article/ygi6d8epse-use-templates). + +1. In the **Script** section of **Deployment Specification**, click **Add Command**. The **Add Command** dialog appears. +2. Click **From Template Library**. The Template Library is displayed. Find the template you need.![](./static/use-script-based-service-33.png) +3. Click **Link** or click the drop-down menu and choose **Copy**. You can link to a template or copy a template to your service. If you link to the template, when that version of the template is updated in the Template Library, your script is also updated. If you copy a template, there is no link to the Template Library. If you link to a template, you may only edit the template from the template dialog. 
You cannot edit the template in your service. +4. Edit the script if needed and click **SUBMIT**. The script is added to your service. Every time you edit a template, you create a new version of it. To switch versions, click the three dots ⋮ on the script title bar and choose **Manage Versions**. +5. To edit the variables used in the script, click **Variables**. The **Edit Command** dialog opens. Edit the variables and click **SUBMIT**. + +### Option 3: Create a New Command + +Harness provides multiple commands to manage the deployment of your application. + +1. In the **Script** section of **Deployment Specification**, click **Add Command**. The **Add Command** dialog appears. +2. Click **Create New**. +3. In **Name**, enter a name for the new command that describes its function. +4. From **Command Type**, select the function of the command, such as **Start**, **Install**, **Disable**, etc. +5. Click **Submit**. The new command is added under **Script**. +6. Hover over the Add button to see the available commands.![](./static/use-script-based-service-34.png) +7. Click a command from the list. The dialog for the command appears. For example, here is the dialog for the Docker Start command. +The dialog contains a default script relating to its type. The script is preconfigured with variables for common application information. +8. From **Script Type**, select **BASH** or **POWERSHELL**. +9. Modify the script as needed and click **SUBMIT**. The command is added to the Script section:![](./static/use-script-based-service-35.png) +10. Repeat the above steps to add more commands to your script. + +### Run Service Commands in a Workflow + +One of the steps you can include in a Harness Workflow is a **Service Command** step. With the Service Command step, you can execute a Service Command on a Harness Delegate. You can use Delegate Selectors to identify which Harness Delegate to use. 
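As a rough illustration of the kind of Bash script you might add as an **Install**-type command in Option 3 above, here is a minimal sketch. The application name, runtime directory, and artifact file below (`APP_NAME`, `RUNTIME_DIR`, `ARTIFACT`) are hypothetical placeholders, not Harness-generated variables or defaults.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a custom Install command script for a
# script-based Service. The paths and variable names below are
# illustrative placeholders, not Harness-provided defaults.
set -euo pipefail

APP_NAME="myapp"                                 # placeholder application name
RUNTIME_DIR="${RUNTIME_DIR:-/tmp/${APP_NAME}}"   # staging path on the target host
ARTIFACT="${ARTIFACT:-${APP_NAME}.tar.gz}"       # artifact file name

# Create the runtime directory if it is missing, then unpack the
# artifact into it when the artifact file is present.
mkdir -p "$RUNTIME_DIR"
if [ -f "$ARTIFACT" ]; then
  tar -xzf "$ARTIFACT" -C "$RUNTIME_DIR"
fi

echo "Installed ${APP_NAME} into ${RUNTIME_DIR}"
```

In a real Service script you would typically reference Harness built-in expressions for the artifact and application instead of hard-coded placeholders.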
+
+This topic describes how to publish a Service Command output in a variable and use the published variable in a subsequent Workflow step.
+
+1. In a Harness Application, open a Workflow and then click **Add Step**.
+2. Select **New Step** to select the Service Command.![](./static/use-script-based-service-36.png)
+3. Click **Submit**. The step is added.
+4. Click on the step.![](./static/use-script-based-service-37.png)
+5. Set the **Timeout** period for your Service Command. If the command execution hangs beyond the timeout, Harness will fail the step.![](./static/use-script-based-service-38.png)
+6. Select the **Execute on Delegate** option if you wish to execute the Service Command on a Harness Delegate. This option allows you to select Delegates for Service Commands. If you do not select the **Execute on Delegate** option, the node is selected by the Select Nodes step. The Select Nodes step selects the target hosts from the [Infrastructure Definition](../environments/infrastructure-definitions.md) you defined. For more information, see [Select Nodes Workflow Step](https://docs.harness.io/article/9h1cqaxyp9-select-nodes-workflow-step).
+7. In **Delegate Selector**, enter the Selectors of the Delegates you want to use.
+
+You can use Selectors to select which Harness Delegates to use when executing the Service Command step. For more information, see [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors).
+ * Harness will use Delegates matching the Selectors you select.
+ * If you use one Selector, Harness will use any Delegate that has that Selector.
+ * If you select two Selectors, a Delegate must have both Selectors to be selected. That Delegate might also have other Selectors, but it must have the two you selected.
+ * Selectors can be used whether **Execute on Delegate** is enabled or not.
The Shell Script command honors the Selector and executes the SSH connection to the specified target host via the selected Delegate.
+ An example of where Selectors might be useful when **Execute on Delegate** is disabled: When you specify an IP address in **Target Host**, but you have two VPCs with the same subnet and duplicate IP addresses exist in both. Using Selectors, you can scope the shell session to the Delegate in a specific VPC.
+ * You can also use [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables). For example, if you have a Workflow variable named `delegate`, you can enter `${workflow.variables.delegate}`. When you deploy the Workflow, you can provide a value for the variable that matches a Delegate Selector.
+8. Click **Submit**.
+
diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/setup/_category_.json b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup/_category_.json
new file mode 100644
index 00000000000..36a0e61d6f0
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/setup/_category_.json
@@ -0,0 +1 @@
+{"label": "Harness CLI", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Harness CLI"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "zpuqjaa3kf", "helpdocs_parent_category_id": "ywqzeje187"}}
\ No newline at end of file
diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/_category_.json b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/_category_.json
new file mode 100644
index 00000000000..8c73dcb9d6f
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/_category_.json
@@ -0,0 +1,14 @@
+{
+ "label": "Add Triggers",
+ "position": 70,
+ "collapsible": "true",
+ "collapsed": "true",
+ "className": "red",
+ "link": {
+ "type": "generated-index",
+ "title": "Add
Triggers"
+ },
+ "customProps": {
+ "helpdocs_category_id": "weyg86m5qp"
+ }
+}
\ No newline at end of file
diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/add-a-trigger-2.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/add-a-trigger-2.md
new file mode 100644
index 00000000000..1889e9f569d
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/add-a-trigger-2.md
@@ -0,0 +1,42 @@
+---
+title: Trigger Workflows and Pipelines (FirstGen)
+description: Triggers automate deployments using conditions like Git events, new artifacts, schedules, or the success of other Pipelines.
+sidebar_position: 10
+helpdocs_topic_id: xerirloz9a
+helpdocs_category_id: weyg86m5qp
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Triggers automate deployments using a variety of conditions, such as Git events, new artifacts, schedules, and the success of other Pipelines.
+
+### Important Notes
+
+* To trigger Workflows and Pipelines using the Harness GraphQL API, see [Trigger Workflows or Pipelines Using GraphQL API](https://docs.harness.io/article/s3leksekny-trigger-workflow-or-a-pipeline-using-api).
+* Currently, [YAML-based Triggers](https://docs.harness.io/article/21kgaw4h86-harness-yaml-code-reference#triggers) are behind the feature flag `TRIGGER_YAML`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+
+You can always execute a Workflow or Pipeline manually, and a Trigger does not change any approval requirements in a Workflow or Pipeline.
+
+When you configure a Trigger, you set the condition that executes the Trigger, whether to execute a Workflow or Pipeline, and then the specific actions of the Trigger, such as what artifact source to use.
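For instance, a Webhook-based Trigger is ultimately invoked with a plain HTTP POST. The sketch below assembles such a call with placeholder values (the real webhook URL and payload come from the Trigger's settings in Harness Manager); the command is echoed rather than executed, so nothing is actually deployed:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Placeholder values for illustration; copy the real ones from the
# Trigger's Show Curl Command option in Harness Manager.
WEBHOOK_URL="https://app.harness.io/api/webhooks/REPLACE_ME"
PAYLOAD='{"application":"APP_ID","artifacts":[{"service":"my-service","buildNumber":"1.0.0"}]}'

CMD="curl -X POST -H 'content-type: application/json' --url $WEBHOOK_URL -d '$PAYLOAD'"

# Echoed as a dry run; running the command would invoke the Trigger.
echo "$CMD"
```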
+ +For a list of the different Triggers and options in Harness, see the following: + +* [Passing Variables into Workflows from Triggers](../expressions/passing-variable-into-workflows.md) +* [Trigger Deployments When a New Artifact is Added to a Repo](trigger-a-deployment-on-new-artifact.md) +* [Trigger Deployments when Pipelines Complete](trigger-a-deployment-on-pipeline-completion.md) +* [Schedule Deployments using Triggers](trigger-a-deployment-on-a-time-schedule.md) +* [Trigger Deployments using Git Events](trigger-a-deployment-on-git-event.md) +* [Trigger a Deployment using cURL](trigger-a-deployment-using-c-url.md) +* [Trigger a Deployment when a File Changes](trigger-a-deployment-when-a-file-changes.md) +* [Get Deployment Status using REST](get-deployment-status-using-rest.md) +* [Pause All Triggers using Deployment Freeze](freeze-triggers.md) +* [Disable Triggers for an entire Application](disable-triggers-for-an-entire-application.md) + +For information on using Triggers as part of Harness Git integration, see [Onboard Teams Using Git](../../harness-git-based/onboard-teams-using-git-ops.md). + +To prevent too many Workflows or Pipelines from being deployed to the same infrastructure at the same time, Harness uses Workflow queuing. See [Workflow Queuing](../workflows/workflow-queuing.md). + +### Troubleshooting Trigger Permissions + +See [Triggers and RBAC](https://docs.harness.io/article/su0wpdarqi-triggers-and-rbac) and [Troubleshooting](https://docs.harness.io/article/g9o2g5jbye-troubleshooting-harness). 
\ No newline at end of file
diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/disable-triggers-for-an-entire-application.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/disable-triggers-for-an-entire-application.md
new file mode 100644
index 00000000000..f13a56601d3
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/disable-triggers-for-an-entire-application.md
@@ -0,0 +1,80 @@
+---
+title: Disable Triggers for an entire Application
+description: Disable Triggers across the entire Application to ensure that none of its Workflows or Pipelines are run.
+sidebar_position: 110
+helpdocs_topic_id: 73vuic0j0l
+helpdocs_category_id: weyg86m5qp
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Currently, this feature is behind the feature flag `SPG_ALLOW_DISABLE_TRIGGERS`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. By default, [Triggers](add-a-trigger-2.md) can be added and run on Harness Application Workflows and Pipelines.
+
+In some cases, you might want to disable Triggers across the entire Application to ensure that none of its Workflows or Pipelines are run.
+
+This topic describes how to disable all Triggers for an Application.
+
+### Disable Triggers in the Harness Manager
+
+To disable all Triggers, you simply enable the **Disable Triggers** setting in the Application settings:
+
+1. In your Application, click more options (⋮).
+2. Select **Disable Triggers**.
+
+ ![](./static/disable-triggers-for-an-entire-application-10.png)
+
+3. Click **Submit**.
+
+The Triggers section of the Application is now hidden.
+
+![](./static/disable-triggers-for-an-entire-application-11.png)
+
+If another User in your account tries to create a new Trigger before their UI is updated, they will see the error `Invalid request:` `Triggers are disabled for the application [Application Name]`.
+ +For example: + +![](./static/disable-triggers-for-an-entire-application-12.png) + +Any Triggers can still be viewed in the Application YAML, but they cannot be run in the manager. + +![](./static/disable-triggers-for-an-entire-application-13.png) + +### Disable Triggers using Harness GraphQL API + +You can use the `disableTriggers: Boolean` field of the `Application` object to enable or disable Triggers for an Application. + +![](./static/disable-triggers-for-an-entire-application-14.png) + +For example, here is the mutation and variables for a new Application that disables Triggers. + +Mutation: + + +``` +mutation createapp($app: CreateApplicationInput!) { + createApplication(input: $app) { + clientMutationId + application { + name + id + disableTriggers + } + } +} +``` +Query variables: + + +``` +{ + "app": { + "clientMutationId": "myapp22", + "name": "AMI App", + "description": "Application to deploy AMIs", + "disableTriggers": true + } +} +``` +You can see the new Application does not have Triggers enabled: + +![](./static/disable-triggers-for-an-entire-application-15.png) \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/freeze-triggers.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/freeze-triggers.md new file mode 100644 index 00000000000..362188b2b46 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/freeze-triggers.md @@ -0,0 +1,16 @@ +--- +title: Pause All Triggers using Deployment Freeze +description: You can stop all of your Harness Triggers from executing deployments using Harness Deployment Freeze. Deployment Freeze is a Harness Governance feature that stops all Harness deployments, including t… +sidebar_position: 100 +helpdocs_topic_id: 6vlut5qvlf +helpdocs_category_id: weyg86m5qp +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can stop all of your Harness Triggers from executing deployments using Harness Deployment Freeze. 
+ +Deployment Freeze is a Harness Governance feature that stops all Harness deployments, including their Triggers. A deployment freeze helps ensure stability during periods of low engineering and support activity, such as holidays, trade shows, or company events. + +For details on how to add and enable a deployment freeze window, see [Deployment Freeze](https://docs.harness.io/article/wscbhd20ca-deployment-freeze). + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/get-deployment-status-using-rest.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/get-deployment-status-using-rest.md new file mode 100644 index 00000000000..9c6d1064bad --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/get-deployment-status-using-rest.md @@ -0,0 +1,140 @@ +--- +title: Get Deployment Status using REST (FirstGen) +description: For Build Workflows or a Build and Deploy Pipeline , you can trigger deployments in response to a Git event using Webhooks. This is described in Trigger Deployments using Git Events. Once you have cr… +sidebar_position: 70 +helpdocs_topic_id: uccck6kq5m +helpdocs_category_id: weyg86m5qp +helpdocs_is_private: false +helpdocs_is_published: true +--- + +For [Build Workflows](https://docs.harness.io/article/wqytbv2bfd-ci-cd-with-the-build-workflow) or a  [Build and Deploy Pipeline](https://docs.harness.io/article/0tphhkfqx8-artifact-build-and-deploy-pipelines-overview), you can trigger deployments in response to a Git event using Webhooks. This is described in [Trigger Deployments using Git Events](trigger-a-deployment-on-git-event.md). + +Once you have created a Harness [On Webhook Event](trigger-a-deployment-on-git-event.md) Trigger, Harness creates a Manual Trigger for it. + +You can do the following with a Manual Trigger: + +* Start a deployment using a URL provided by Harness. See [Trigger a Deployment using a URL](trigger-a-deployment-using-a-url.md). 
+* Start a deployment using a curl command. See [Trigger a Deployment using cURL](trigger-a-deployment-using-c-url.md).
+* Use a REST call to get deployment status.
+
+In this topic, we will cover using a REST call to get deployment status.
+
+### Before You Begin
+
+* [API Keys](https://docs.harness.io/article/smloyragsm-api-keys)
+* [Build Workflows](https://docs.harness.io/article/wqytbv2bfd-ci-cd-with-the-build-workflow)
+* [Build and Deploy Pipeline](https://docs.harness.io/article/0tphhkfqx8-artifact-build-and-deploy-pipelines-overview)
+* [Add a Service](../setup-services/service-configuration.md)
+* [Workflows](../workflows/workflow-configuration.md)
+* [Add Environment](../environments/environment-configuration.md)
+* [Create a Pipeline](../pipelines/pipeline-configuration.md)
+
+### Step 1: Create Harness API Key
+
+To use a REST call to get deployment status, you need to generate a Harness API key first.
+
+The API key is used in the cURL command GET call for deployment status, described below.
+
+1. In Harness Manager, click **Security**, and then click **Access Management**.
+2. Click **API Keys**.
+3. Click **Add API Key**.
+4. In the **Add API Key** settings, enter a name and select your User Group.
+5. Click **Submit**. The new API key is created.
+6. To copy the API key, first click the Eye icon to reveal the key's value.
+7. Next, click the Copy icon beside the key. This copies the key's value to your clipboard.
+8. To delete an API key, click the **Delete** icon.
+
+### Step 2: Show cURL Command
+
+The cURL command for executing a deployment is provided by every Trigger of type **On Webhook Event**.
+
+In **Triggers**, locate the Trigger you want to run.
+
+Click **Manual Trigger**.
+
+In the **Manual Trigger** settings, click **Show Curl Command**. The cURL command is displayed.
+ +When you created a Trigger, if you selected values for parameters that are represented by placeholders in the cURL command, you do not need to add values for the cURL placeholders. + +If you add values for the cURL placeholders, you will override manual settings in the Trigger. + +This is also true for Triggers that execute templated Workflows and Pipelines. If you create a Trigger that executes a templated Workflow or Pipeline, you can select values for the templated settings in the Trigger, but you can still override them in the cURL command. + +Let's look at a placeholder example: + + +``` +curl -X POST -H 'content-type: application/json' \ + --url https://app.harness.io/api/webhooks/xxxxxx \ + -d '{"application":"xxxxxx","artifacts":[{"service":"micro-service","buildNumber":"micro-service_BUILD_NUMBER_PLACE_HOLDER"}]}' +``` +For `service`, enter the name of the Harness Service. + +For `buildNumber`, enter the artifact build number from the Artifact History in the Service. + +[![](./static/get-deployment-status-using-rest-01.png)](./static/get-deployment-status-using-rest-01.png) + +For example: + + +``` +curl -X POST -H 'content-type: application/json' \ + --url https://app.harness.io/api/webhooks/xxxxxx \ + -d '{"application":"xxxxxx","artifacts":[{"service":"Service-Example","buildNumber":"1.17.8-perl"}]}' +``` +### Step 3: Run cURL Command + +Once you have replaced the placeholders, run the cURL command. + +The output will be something like this (private information has been replaced with **xxxxxx**): + + +``` +{ + "requestId":"-tcjMxQ_RJuDUktfl4AY0A", + "status":"RUNNING", + "error":null, + "uiUrl":"https://app.harness.io/#/account/xxxxxx/app/xxxxxx/pipeline-execution/-xxxxxx/workflow-execution/xxxxxx/details", + "apiUrl":"https://app.harness.io/gateway/api/external/v1/executions/-xxxxxx/status?accountId=xxxxxx&appId=xxxxxx" + } + +``` +The **uiUrl** can be used directly in a browser. 
**apiUrl** can be used to track deployment status programmatically, such as using a REST call.
+
+### Step 4: Use the API URL
+
+To get deployment status using a REST call (in this example, cURL), use the following cURL command, replacing **API\_URL** with the URL from **apiUrl**, and **API\_KEY** with the API key you generated in Harness:
+
+
+```
+curl -X GET -H 'X-Api-Key:API_KEY' --url "API_URL"
+```
+For example (private information has been replaced with **xxxxxx**):
+
+
+```
+curl -X GET -H 'X-Api-Key:a1b2c3' --url "https://app.harness.io/gateway/api/external/v1/executions/xxxxxx/status?accountId=xxxxxx&appId=xxxxxx"
+```
+The output from the curl command will contain the status of the deployment. These are the same status messages you can see in the **Continuous Deployments** dashboard, such as:
+
+
+```
+{"status":"SUCCESS"}, {"status":"FAILED"}, {"status":"ABORTED"}, {"status":"QUEUED"}.
+```
+### Configure As Code
+
+To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button.
+
+### Related Topics
+
+* [Passing Variables into Workflows from Triggers](../expressions/passing-variable-into-workflows.md)
+* For information on using Triggers as part of Harness Git integration, see [Onboard Teams Using Git](../../harness-git-based/onboard-teams-using-git-ops.md).
+* [Trigger Deployments When a New Artifact is Added to a Repo](trigger-a-deployment-on-new-artifact.md) +* [Schedule Deployments using Triggers](trigger-a-deployment-on-a-time-schedule.md) +* [Trigger Deployments when Pipelines Complete](trigger-a-deployment-on-pipeline-completion.md) +* [Trigger a Deployment using cURL](trigger-a-deployment-using-c-url.md) +* [Trigger a Deployment when a File Changes](trigger-a-deployment-when-a-file-changes.md) +* [Trigger Deployments using Git Events](trigger-a-deployment-on-git-event.md) +* [Pause All Triggers using Deployment Freeze](freeze-triggers.md) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/_push-event-left.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/_push-event-left.png new file mode 100644 index 00000000000..c75925e9b19 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/_push-event-left.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/_push-pull-request-right.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/_push-pull-request-right.png new file mode 100644 index 00000000000..2c1c0cce997 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/_push-pull-request-right.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/disable-triggers-for-an-entire-application-10.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/disable-triggers-for-an-entire-application-10.png new file mode 100644 index 00000000000..0a6b91679a2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/disable-triggers-for-an-entire-application-10.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/disable-triggers-for-an-entire-application-11.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/disable-triggers-for-an-entire-application-11.png new file mode 100644 index 00000000000..b4d182643b2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/disable-triggers-for-an-entire-application-11.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/disable-triggers-for-an-entire-application-12.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/disable-triggers-for-an-entire-application-12.png new file mode 100644 index 00000000000..f83d73cb231 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/disable-triggers-for-an-entire-application-12.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/disable-triggers-for-an-entire-application-13.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/disable-triggers-for-an-entire-application-13.png new file mode 100644 index 00000000000..80274998950 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/disable-triggers-for-an-entire-application-13.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/disable-triggers-for-an-entire-application-14.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/disable-triggers-for-an-entire-application-14.png new file mode 100644 index 00000000000..d1e0abd6bdc Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/disable-triggers-for-an-entire-application-14.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/disable-triggers-for-an-entire-application-15.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/disable-triggers-for-an-entire-application-15.png new file mode 100644 index 
00000000000..220f65493ea Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/disable-triggers-for-an-entire-application-15.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/get-deployment-status-using-rest-01.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/get-deployment-status-using-rest-01.png new file mode 100644 index 00000000000..2811cc25da6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/get-deployment-status-using-rest-01.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/get-deployment-status-using-rest-02.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/get-deployment-status-using-rest-02.png new file mode 100644 index 00000000000..2811cc25da6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/get-deployment-status-using-rest-02.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-git-event-19.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-git-event-19.png new file mode 100644 index 00000000000..c53258e6c0e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-git-event-19.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-git-event-20.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-git-event-20.png new file mode 100644 index 00000000000..27219272598 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-git-event-20.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-git-event-21.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-git-event-21.png new file mode 100644 index 00000000000..7e79dc3da64 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-git-event-21.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-git-event-22.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-git-event-22.png new file mode 100644 index 00000000000..5de5d90f209 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-git-event-22.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-git-event-23.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-git-event-23.png new file mode 100644 index 00000000000..97969a5fed4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-git-event-23.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-new-artifact-06.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-new-artifact-06.png new file mode 100644 index 00000000000..493d99689d9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-new-artifact-06.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-new-artifact-07.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-new-artifact-07.png new file mode 100644 index 
00000000000..f99d0601ff0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-new-artifact-07.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-new-artifact-08.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-new-artifact-08.png new file mode 100644 index 00000000000..9e831a760c1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-new-artifact-08.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-new-artifact-09.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-new-artifact-09.png new file mode 100644 index 00000000000..942f50f65ff Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-on-new-artifact-09.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-using-a-url-16.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-using-a-url-16.png new file mode 100644 index 00000000000..af11c2a388d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-using-a-url-16.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-using-a-url-17.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-using-a-url-17.png new file mode 100644 index 00000000000..2811cc25da6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-using-a-url-17.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-using-a-url-18.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-using-a-url-18.png new file mode 100644 index 00000000000..2811cc25da6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-using-a-url-18.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-using-c-url-03.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-using-c-url-03.png new file mode 100644 index 00000000000..af11c2a388d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-using-c-url-03.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-using-c-url-04.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-using-c-url-04.png new file mode 100644 index 00000000000..2811cc25da6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-using-c-url-04.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-using-c-url-05.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-using-c-url-05.png new file mode 100644 index 00000000000..38c5b45210c Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-using-c-url-05.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-when-a-file-changes-00.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-when-a-file-changes-00.png new file mode 100644 index 
00000000000..a856fc7267e
Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/static/trigger-a-deployment-when-a-file-changes-00.png differ
diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/trigger-a-deployment-on-a-time-schedule.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/trigger-a-deployment-on-a-time-schedule.md
new file mode 100644
index 00000000000..5c07c22e7d3
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/trigger-a-deployment-on-a-time-schedule.md
@@ -0,0 +1,112 @@
+---
+title: Schedule Deployments using Triggers
+description: You can trigger Harness Workflow and Pipeline deployments on a time schedule. You can select how often to execute the Trigger by hour, days, etc. All the cron jobs are executed in Universal Time Coor…
+sidebar_position: 40
+helpdocs_topic_id: tb66fmh4iz
+helpdocs_category_id: weyg86m5qp
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+You can trigger Harness Workflow and Pipeline deployments on a time schedule. You can select how often to execute the Trigger, such as hourly or daily.
+
+All the cron jobs are executed in Universal Time Coordinated (UTC). You can also apply the time condition to new artifacts only.
+
+## Before You Begin
+
+* [Add a Service](../setup-services/service-configuration.md)
+* [Workflows](../workflows/workflow-configuration.md)
+* [Add Environment](../environments/environment-configuration.md)
+* [Create a Pipeline](../pipelines/pipeline-configuration.md)
+
+
+:::note
+The interval for On Time Schedule Triggers must be equal to or greater than 5 minutes. This includes CRON expressions. If the CRON expression uses a schedule less than 5 minutes, you will see a warning such as:
+`Deployments must be triggered at intervals greater than or equal to 5 minutes.`
+:::
+
+## Step 1: Add a Trigger
+
+Typically, Triggers are set up after you have successfully deployed and tested a Workflow or Pipeline.
+ +To add a trigger, do the following: + +1. Ensure that you have a Harness Service, Environment, and Workflow set up. If you want to Trigger a Pipeline, you'll need one set up as well. +2. In your Harness Application, click **Triggers**. +3. Click **Add Trigger**. The **Trigger** settings appear. +4. In **Name**, enter a name for the Trigger. This name will appear in the **Deployments** page to indicate the Trigger that initiated a deployment. +5. Click **Next**. + +## Step 2: Schedule Trigger Execution + +You set the schedule for the Trigger using a quartz expression. The Harness Manager uses the schedule you set to execute the Trigger. The Coordinated Universal Time (UTC) time zone is used. + +1. In **Condition**, select **On Time Schedule**. +2. In **Trigger Every**, select the schedule. +3. Click **Next**. + +If you select **Custom CRON Expression**, the time format must be a cron **quartz** expression. + +Harness implicitly adds a prefix for seconds so it does not have to be specified explicitly. + +For example, to execute the Trigger every 12 hours, the quartz expression would be `0 0 0/12 ? * * *`, but you would enter `0 0/12 ? * * *` because Harness adds the `0` prefix. + +Let's look at another example. If you want to invoke a trigger at a specific time, say at **03:10 UTC on February 4, 2022**, then you can provide the custom CRON expression **10 3 4 FEB ? 2022**. + +Harness does not support seconds-level granularity in cron expressions when firing Triggers. For a quartz expression calculator and examples, see [Cron Expression Generator & Explainer](https://www.freeformatter.com/cron-expression-generator-quartz.html). + +### Option: On New Artifact Only + +If you want the scheduled Trigger to execute with a new artifact, select **On New Artifact Only**. + +If you enable this setting, the Trigger will continue to be executed on schedule, and it will use the last artifact collected when it runs. 
+ +There must be at least one successful deployment with the specific artifact for it to be qualified as the new artifact for the Trigger. Then, Harness checks every new artifact against the last deployed artifact. + +If the last artifact failed to deploy, Harness will use the last successfully deployed artifact. + +Artifact metadata is collected automatically every minute by Harness. + +You can also manually collect artifact metadata using the Service's **Manually pull artifact** feature. + +## Step 3: Select the Workflow or Pipeline to Deploy + +1. In **Execution Type**, select **Workflow** or **Pipeline**. +2. In **Execute Workflow**/**Pipeline**, select the Workflow or Pipeline to deploy. + +## Step 4: Provide Values for Workflow Variables + +If the Workflow or Pipeline you selected to deploy uses Workflow variables, you will need to provide values for these variables. + +You can also use variable expressions for these values. See [Passing Variables into Workflows from Triggers](../expressions/passing-variable-into-workflows.md). + +## Step 5: Select the Artifact to Deploy + +Since Workflows deploy Harness Services, you are also prompted to provide the Artifact Source for the Service(s) the Workflow(s) will deploy. + +### Last Collected + +Select this option to use the last artifact collected by Harness in the Harness Service. Artifact metadata is collected automatically every minute by Harness. + +You can also manually collect artifact metadata using the Service's **Manually pull artifact** feature. + +### Last Successfully Deployed + +The last artifact that was deployed by the Workflow you select. + +## Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. 
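Returning to the quartz-prefix convention from Step 2, a small shell sketch makes it concrete (using the 12-hour expression from the example above):

```shell
# The value entered in the Harness Trigger (6 fields, no seconds field):
harness_expr='0 0/12 ? * * *'

# Harness implicitly prefixes the seconds field ("0"), so the quartz
# expression that is actually evaluated has the full 7 fields:
# seconds minutes hours day-of-month month day-of-week year
effective_expr="0 ${harness_expr}"
echo "${effective_expr}"
```

Running this prints `0 0 0/12 ? * * *`, the full quartz form shown earlier.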
+ +## Related Topics + +* [Passing Variables into Workflows from Triggers](../expressions/passing-variable-into-workflows.md) +* For information on using Triggers as part of Harness Git integration, see [Onboard Teams Using Git](../../harness-git-based/onboard-teams-using-git-ops.md). +* [Trigger Deployments When a New Artifact is Added to a Repo](trigger-a-deployment-on-new-artifact.md) +* [Trigger Deployments when Pipelines Complete](trigger-a-deployment-on-pipeline-completion.md) +* [Trigger Deployments using Git Events](trigger-a-deployment-on-git-event.md) +* [Trigger a Deployment using cURL](trigger-a-deployment-using-c-url.md) +* [Trigger a Deployment when a File Changes](trigger-a-deployment-when-a-file-changes.md) +* [Get Deployment Status using REST](get-deployment-status-using-rest.md) +* [Pause All Triggers using Deployment Freeze](freeze-triggers.md) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/trigger-a-deployment-on-git-event.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/trigger-a-deployment-on-git-event.md new file mode 100644 index 00000000000..f2733f6a637 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/trigger-a-deployment-on-git-event.md @@ -0,0 +1,211 @@ +--- +title: Trigger Deployments using Git Events (FirstGen) +description: For GitHub, GitLab, and Bitbucket, you can trigger Build Workflows or a Build and Deploy Pipeline in response to a Git event using Webhooks using a Harness On Webhook Event Trigger. 
For example, the… +sidebar_position: 50 +helpdocs_topic_id: ys3cvwm5gc +helpdocs_category_id: weyg86m5qp +helpdocs_is_private: false +helpdocs_is_published: true +--- + +For GitHub, GitLab, and Bitbucket, you can trigger [Build Workflows](https://docs.harness.io/article/wqytbv2bfd-ci-cd-with-the-build-workflow) or a [Build and Deploy Pipeline](https://docs.harness.io/article/0tphhkfqx8-artifact-build-and-deploy-pipelines-overview) in response to a Git event, using Webhooks and a Harness On Webhook Event Trigger. + +For example, the first stage of the Build and Deploy Pipeline is a Build Workflow that builds the artifact from a Git repo. You can set a Harness Trigger to run once the artifact is built. + +For Custom Git providers, you can trigger any type of Harness Workflow using a Harness On Webhook Event Trigger. + +For GitHub, GitLab, and Bitbucket, this option is used to execute a Build Workflow or a Build Pipeline only. GitHub, GitLab, and Bitbucket Webhook-based Triggers are not intended for Workflows and Pipelines that **deploy** artifacts. They are designed for Build Workflows and Pipelines that build artifacts in response to Git events. + + +## Before You Begin + +* [Build Workflows](https://docs.harness.io/article/wqytbv2bfd-ci-cd-with-the-build-workflow) +* [Build and Deploy Pipeline](https://docs.harness.io/article/0tphhkfqx8-artifact-build-and-deploy-pipelines-overview) +* [Add a Service](../setup-services/service-configuration.md) +* [Workflows](../workflows/workflow-configuration.md) +* [Add Environment](../environments/environment-configuration.md) +* [Create a Pipeline](../pipelines/pipeline-configuration.md) + +## Important Notes + +* In the **Actions** section of the Trigger, the **Deploy only if files have changed** option is available for Workflows deploying Kubernetes or Native Helm Services only. +* Data retention for Webhook event details is 3 days. 
+To see the event details, note that the response to a Webhook request contains the Id of the registered Webhook event in its `data` field. You can use the following API with that `eventId` to get the details of the Webhook event: +``` +curl -i -X GET \ + 'https://app.harness.io/gateway/pipeline/api/webhook/triggerProcessingDetails?accountIdentifier=&eventId=' \ + -H 'x-api-key: ' +``` + +## Review: Git Webhook Triggers + +You can create Harness Triggers that respond to certain Git events, and then add the Harness Trigger Git Webhook to your repo. When the specified event happens in your repo, the Harness Trigger is run. + +Let's review Git Webhooks. + +For GitHub, GitLab, and Bitbucket, this option is used to execute a Build Workflow or a Build Pipeline only. GitHub, GitLab, and Bitbucket Webhook-based Triggers are not intended for Workflows and Pipelines that **deploy** artifacts. They are designed for Build Workflows and Pipelines that build artifacts in response to Git events. + +For Custom Git providers, you can trigger any type of Workflow using a Harness On Webhook Event Trigger. Git Webhooks allow you to build or set up apps that subscribe to certain events in your git repo on github.com, bitbucket.org, and gitlab.com. + +Each event corresponds to a certain set of actions that can happen to your organization and/or repository. + +When one of those events is triggered, Git sends an HTTP POST payload to the Webhook's configured URL. + +You can use a Harness Trigger **GitHub/Bitbucket/Gitlab Webhook** URL and execute a Harness deployment in response to a Git event. + +The most common example: a Git event that merges code initiates the Trigger for a Harness Build and Deploy Pipeline. + +The first stage of the Pipeline is a Build Workflow that builds and collects the artifact from the Artifact Source (which is linked to the Git repo). The final stage deploys the newly built artifact from the artifact source. 
+ +For details on the payloads of the different repo Webhooks, see GitHub [Event Types & Payloads](https://developer.github.com/v3/activity/events/types/), Bitbucket [Event Payloads](https://confluence.atlassian.com/bitbucket/event-payloads-740262817.html), and GitLab [Events](https://docs.gitlab.com/ee/user/project/integrations/webhooks.html#events). + +## Step 1: Add a Trigger + +Typically, Triggers are set up after you have successfully deployed and tested a Workflow or Pipeline. + +To add a trigger, do the following: + +1. Ensure that you have a Harness Service, Environment, and Workflow set up. If you want to Trigger a Pipeline, you'll need one set up as well. +2. In your Harness Application, click **Triggers**. +3. Click **Add Trigger**. The **Trigger** settings appear. +4. In **Name**, enter a name for the Trigger. This name will appear in the **Deployments** page to indicate the Trigger that initiated a deployment. +5. Click **Next**. + +## Step 2: Select Repo and Event Type + +A Git event that merges code initiates the Trigger for the Build and Deploy Pipeline. + +The first stage of the Pipeline is a Build Workflow that builds and collects the artifact from the Artifact Source (which is linked to the Git repo). + +In the Actions section, you select the Git repo provider and the event type that initiates the Trigger. + +1. In **Condition**, select **On Webhook Event**. +2. In **Payload Type**, select the repository type (GitHub, Bitbucket, GitLab). +3. In **Event Type**, select the event type for the Webhook event. +There are different options depending on the repo selected in **Payload Type**. See [Review: Payload and Event Type Matrix](#review_payload_and_event_type_matrix) below.![](./static/trigger-a-deployment-on-git-event-19.png) + +If you are using a repo other than GitHub, Bitbucket, or GitLab (such as Jenkins or Bamboo), leave the **Payload Type** menu blank. + +Harness will still generate a Webhook that you can use in your repo. 
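For such custom providers, your own job is what delivers the payload to the generated Webhook URL. A minimal sketch, assuming a hypothetical URL and payload shape (copy the real URL from the Trigger's Webhook dialog; the field names are illustrative only):

```shell
# Placeholder URL and payload -- substitute the generated Harness Webhook URL
# and whatever JSON fields your Trigger's payload expressions expect.
WEBHOOK_URL='https://app.harness.io/api/webhooks/<generated-token>'
PAYLOAD='{"ref":"refs/heads/main","commit_id":"abc123"}'
echo "${PAYLOAD}"

# The delivery itself would be:
#   curl -X POST "${WEBHOOK_URL}" -H 'Content-Type: application/json' -d "${PAYLOAD}"
```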
+ +### Review: Payload and Event Type Matrix + +The following table displays the payload and event types supported when you select the **On Webhook Event** option in a Trigger's **Condition** setting. + +This option is used to execute a [Build Workflow](https://docs.harness.io/article/wqytbv2bfd-ci-cd-with-the-build-workflow) or a [Build Pipeline](https://docs.harness.io/article/181zspq0b6-build-and-deploy-pipelines-overview) only. For details on each event type and its actions, please consult the provider's documentation. + +| Payload Type | Event Type | +| --- | --- | +| Bitbucket | On Pull Request<br/>On Repository<br/>On Issue | +| GitHub | On Pull Request<br/>On Push<br/>On Delete<br/>On Release<br/>On Package | +| GitLab | On Pull Request<br/>On Push | +| Custom or no selection.<br/>This option is for repos other than the default Git providers, for example, Bamboo or Jenkins. | On Pull Request<br/>On Push | + +## Option: Authenticate the Webhook + +Currently, this feature is behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Feature Flags can only be removed for Harness Professional and Essentials editions. Once the feature is released to a general audience, it's available for Trial and Community Editions. + +See [New features added to Harness](https://changelog.harness.io/?categories=fix,improvement,new) and [Features behind Feature Flags](https://changelog.harness.io/?categories=early-access) (Early Access) for Feature Flag information. Your Git provider includes secret tokens that enable you to validate requests. + +You can use a Harness secret in your Webhook secret setting. When the Git provider sends a POST request to the Harness URL in the Webhook, Harness will use the secret to validate the request. + +In **Select Encrypted Webhook Secret**, create or select a Harness secret. See [Use Encrypted Text Secrets](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets). + +Later, when you set up the Webhook for this Trigger in your Git provider, enter the value of the Harness secret in the Webhook secret's settings. Do not enter the secret name. + +![](./static/trigger-a-deployment-on-git-event-20.png) + +If the secret value in the Webhook does not match the secret value in the Trigger, you will get a 400 response in your Git provider: + +![](./static/trigger-a-deployment-on-git-event-21.png) + +For more information on Webhook secrets, see the following Git provider docs: + +* [GitHub](https://developer.github.com/webhooks/securing/) +* [GitLab](https://docs.gitlab.com/ee/user/project/integrations/webhooks.html) +* [Bitbucket](https://confluence.atlassian.com/bitbucketserver/managing-webhooks-in-bitbucket-server-938025878.html) + +For details on using the Harness API to set up Trigger authentication, see [Use Trigger APIs](https://docs.harness.io/article/u21rkuzfod-use-trigger-apis). 
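To illustrate what this validation involves: GitHub, for example, sends an HMAC-SHA256 digest of the request body, computed with the shared secret, in the `X-Hub-Signature-256` header, and the receiver recomputes the digest and compares. A sketch with placeholder values:

```shell
# Placeholder payload and secret value (the secret's *value*, not its name).
payload='{"ref":"refs/heads/main"}'
secret='my-webhook-secret'

# Recompute the digest that the provider sends alongside the POST body.
sig=$(printf '%s' "${payload}" | openssl dgst -sha256 -hmac "${secret}" | sed 's/^.* //')
echo "X-Hub-Signature-256: sha256=${sig}"
```

If the recomputed digest does not match the header, the request is rejected; this is the 400 response shown above.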
+ +### Authentication and Delegate Scoping + +When Harness authenticates the Trigger, the Harness Delegate you have installed in your environment connects to the Secrets Manager you have set up in Harness. + +If you have scoped the Delegate in any way (such as to specific Applications), it might be too limited to retrieve the secret. + +Either remove the limitation (remove all scopes) or map the **Key Management Service** task to the Delegate. See [Delegate Task Category Mapping](https://docs.harness.io/article/nzuhppobyg-map-tasks-to-delegates-and-profiles). + +The **Task Category Map** feature replaces the **Command** setting in Delegate Scopes, which is deprecated and will be removed soon. + +## Step 3: Select the Workflow or Pipeline to Deploy + +For GitHub, GitLab, and Bitbucket, you can trigger [Build Workflows](https://docs.harness.io/article/wqytbv2bfd-ci-cd-with-the-build-workflow) or a [Build and Deploy Pipeline](https://docs.harness.io/article/0tphhkfqx8-artifact-build-and-deploy-pipelines-overview) in response to a Git event, using Webhooks and a Harness On Webhook Event Trigger. + +For Custom Git providers, you can trigger any type of Workflow using a Harness Trigger. + +1. In **Execution Type**, select **Workflow** or **Pipeline**. +2. In **Execute Workflow**/**Pipeline**, select the Workflow or Pipeline to deploy. + +## Step 4: Provide Values for Workflow Variables + +If the Workflow or Pipeline you selected to deploy uses Workflow variables, you will need to provide values for these variables. + +You can also use variable expressions for these values. See [Passing Variables into Workflows from Triggers](../expressions/passing-variable-into-workflows.md). + +## Option: Manual Triggers + +You can manually deploy a Harness Workflow or Pipeline using a Manual Trigger. You can run a Trigger manually in the following ways: + +* Using a URL provided by Harness. +* Using a curl command. +* Using a REST call to get deployment status. 
+ +See the following: + +* [Get Deployment Status using REST](get-deployment-status-using-rest.md) +* [Trigger a Deployment using cURL](trigger-a-deployment-using-c-url.md) + +## Step 5: Set Up the GitHub Webhook + +Once your On Webhook Event Trigger is completed, the next step is to integrate it into your Git repo so that the Trigger is executed in response to the Git event. + +1. In your Harness application, click **Triggers**. +2. At the bottom of each listed Trigger is a link for **GitHub Webhook**. + + ![](./static/trigger-a-deployment-on-git-event-22.png) + +3. For the Trigger you want to use, click **GitHub Webhook**. The Trigger dialog appears. +4. Copy the Webhook and use it in GitHub to trigger the deployment. +5. Add the webhook to your Git repo. +In GitHub and the other Git repos, when configuring the Webhook, you can choose which events you would like to receive payloads for. You can even opt in to all current and future events. +6. In Content type, ensure you select **application/json**.![](./static/trigger-a-deployment-on-git-event-23.png) + +When you set up the Webhook in GitHub, modify the **Content type** to **application/json**. In **Which events would you like to trigger this webhook?**, you can select **Push events** and/or **Pull requests**. + +| Just the Push Event | Pushes and Pull Requests | +| --- | --- | +| ![](./static/_push-event-left.png) | ![](./static/_push-pull-request-right.png) | + +Harness will examine any incoming payload to ensure that it meets the **Action** you set. You do not need to use the repo's Webhook event settings to match the **Action**. Simply use the Harness Webhook URL in the repo Webhook URL field. + +## Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. 
+ +## Related Topics + +* [Use Trigger APIs](https://docs.harness.io/article/u21rkuzfod-use-trigger-apis) +* [Passing Variables into Workflows from Triggers](../expressions/passing-variable-into-workflows.md) +* For information on using Triggers as part of Harness GitOps, see [Harness GitOps](../../harness-git-based/harness-git-ops.md). +* [Trigger Deployments When a New Artifact is Added to a Repo](trigger-a-deployment-on-new-artifact.md) +* [Schedule Deployments using Triggers](trigger-a-deployment-on-a-time-schedule.md) +* [Trigger Deployments when Pipelines Complete](trigger-a-deployment-on-pipeline-completion.md) +* [Trigger a Deployment using cURL](trigger-a-deployment-using-c-url.md) +* [Trigger a Deployment when a File Changes](trigger-a-deployment-when-a-file-changes.md) +* [Get Deployment Status using REST](get-deployment-status-using-rest.md) +* [Pause All Triggers using Deployment Freeze](freeze-triggers.md) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/trigger-a-deployment-on-new-artifact.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/trigger-a-deployment-on-new-artifact.md new file mode 100644 index 00000000000..0b822c33a0c --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/trigger-a-deployment-on-new-artifact.md @@ -0,0 +1,153 @@ +--- +title: Trigger Deployments When a New Artifact is Added to a Repo +description: You can trigger Harness Workflow and Pipeline deployments in response to a new artifact being added to a repository. For example, every time a new Docker image is uploaded to your Docker hub account,… +sidebar_position: 20 +helpdocs_topic_id: s2m2ksxn6a +helpdocs_category_id: weyg86m5qp +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can trigger Harness Workflow and Pipeline deployments in response to a new artifact being added to a repository. 
+ +For example, every time a new Docker image is uploaded to your Docker Hub account, it triggers a Workflow that deploys it automatically. + +### Before You Begin + +* [Add a Service](../setup-services/service-configuration.md) +* [Workflows](../workflows/workflow-configuration.md) +* [Add Environment](../environments/environment-configuration.md) +* [Create a Pipeline](../pipelines/pipeline-configuration.md) +* [Add Artifact Servers](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server) + +### Important Notes + +#### One Artifact Triggers Deployment + +If more than one artifact is collected during the polling interval (two minutes), only one deployment will be started and will use the last artifact collected. + +#### All Artifacts Trigger Deployment + +Currently, this feature is behind the feature flag `TRIGGER_FOR_ALL_ARTIFACTS`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. All the artifacts collected during the polling interval will trigger a deployment, with one deployment triggered for each artifact collected. + +#### Trigger is Based on File Name + +The Trigger is executed based on **file names** and not metadata changes. + +### Step 1: Add a Trigger + +Typically, Triggers are set up after you have successfully deployed and tested a Workflow or Pipeline. + +To add a trigger, do the following: + +1. Ensure that you have a Harness Service, Environment, and Workflow set up. If you want to Trigger a Pipeline, you'll need one set up as well. +2. In your Harness Application, click **Triggers**. +3. Click **Add Trigger**. The **Trigger** settings appear. +4. In **Name**, enter a name for the Trigger. This name will appear in the **Deployments** page to indicate the Trigger that initiated a deployment. +5. Click **Next**. + +### Step 2: Select the Artifact Source that Triggers Deployments + +Next you can identify the Artifact Source that will initiate the Trigger when a new artifact is added. 
+ +You will select an Artifact Source from one of the Harness Services in the Application. + +![](./static/trigger-a-deployment-on-new-artifact-06.png) + +In addition, you can specify a build or tag to filter artifacts in the Artifact Source. For example, the above Artifact Source uses a publicly available Docker image of NGINX, and its tags can be seen on Docker Hub: + +![](./static/trigger-a-deployment-on-new-artifact-07.png) + +The simplest way to see the build names for your artifacts is to use **Artifact History** in the Harness Service. + +![](./static/trigger-a-deployment-on-new-artifact-08.png) + +Here you can see the build names. + +:::note +Selecting the **Manually pull artifact** option in a Harness Service does not initiate a Trigger set up with **On New Artifact**. +::: + +To specify the triggering artifact source, do the following: + +1. In your Trigger, in **Condition**, in **Type**, click **On New Artifact**. The **Artifact Source** and **Build/Tag Filter** settings appear. +2. In **Artifact Source**, select the Harness Service Artifact Source that points to your Artifact repo. The names are listed with Artifact Source name (Service name). The Trigger is executed based on **file names**, not metadata changes. For example, the Artifact Source `library_nginx (k8sv2)` references a Service's Artifact Source like this: + + ![](./static/trigger-a-deployment-on-new-artifact-09.png) + +3. In **Build/Tag Filter**, you can enter the build name or tag to use to identify the artifact. Look at your repo to see the tags applied to artifacts. + +#### Wildcards and Regex + +You can use wildcards in the Build/Tag Filter, and you can enable **Regex** to enter a build name or filter using [regex](https://regexr.com/). 
+ +For example, if the build is `todolist-v2.0.zip` : + +* With **Regex** not enabled, `todolist*` or `*olist*` +* or, with **Regex** enabled, the regex `todolist-v\d.\d.zip` + +If the regex expression does not result in a match, Harness ignores the value. + +Harness supports standard Java regex. For example, if **Regex** is enabled and the intent is to match any branch, the wildcard should be `.*` instead of simply a wildcard `*`. If you wanted to match all of the files that end in `-DEV.tar` you would enter `.*-DEV\.tar`.When you are done, click **Next**. + +Now you can select the Workflow or Pipeline to deploy whenever the Artifact Source you selected receives a new artifact matching your criteria. + +### Step 3: Select the Workflow or Pipeline to Deploy + +You can select the Workflow or Pipeline to execute when the Trigger's criteria is met (a new artifact is posted to the Artifact Source you selected in **Condition**). + +When you select the Workflow or Pipeline, you are prompted to provide values for any required parameters. + +1. In **Execution Type**, select **Workflow** or **Pipeline**. +2. In **Execute Workflow** or **Execute Pipeline**, select the Workflow or Pipeline to run. + +If the Workflow or Workflows in the Pipeline you selected have Workflow variables, you are prompted to provided values for them. + +### Step 4: Provide Values for Workflow Variables + +If the Workflow or Pipeline you selected to deploy uses Workflow variables, you will need to provide values for these variables. + +You can also use variable expressions for these values. See [Passing Variables into Workflows from Triggers](../expressions/passing-variable-into-workflows.md). + +### Step 5: Select the Artifact to Deploy + +Since Workflows deploy Harness Services, you are also prompted to provide the Artifact Source for the Service(s) the Workflow(s) will deploy. 
+ +There are three main settings: + +#### From Triggering Artifact Source + +Select this option to use the artifact identified in Artifact Source you selected in **Condition**. + +Harness ties Artifact Sources to their Harness Services. If you have a Pipeline with three Workflows that deploy three Services that use the same Artifact Source (say, an artifact in a Nexus repo), you should not create three Triggers. You can simply create a Trigger on the first Service deployed. You can then get the `${artifact.buildNo}` [expression](https://docs.harness.io/article/aza65y4af6-built-in-variables-list#artifact) and [pass it on to the subsequent Workflows](../expressions/how-to-pass-variables-between-workflows.md). + +#### Last Collected + +Select this option to use the last artifact collected by Harness in the Harness Service. Artifact metadata is collected automatically every two minutes by Harness. + +You can also manually collect artifact metadata using the Service's **Manually pull artifact** feature. + +#### Last Successfully Deployed + +The last artifact that was deployed by the Workflow you select. + +### Best Practices + +Do not trigger on the **latest** tag of an artifact, such as a Docker image. With **latest**, Harness only has metadata, such as the tag name, which has not changed, and so Harness does not know if anything has changed. The Trigger will not be executed. Do not use a static tag for Triggers, use an artifact source instead. Create a Trigger that deploys On New Artifact. + +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. + +### Related Topics + +* [Passing Variables into Workflows from Triggers](../expressions/passing-variable-into-workflows.md) +* For information on using Triggers as part of Harness GitOps, see [Harness GitOps](../../harness-git-based/harness-git-ops.md). 
+* [Trigger Deployments when Pipelines Complete](trigger-a-deployment-on-pipeline-completion.md) +* [Schedule Deployments using Triggers](trigger-a-deployment-on-a-time-schedule.md) +* [Trigger Deployments using Git Events](trigger-a-deployment-on-git-event.md) +* [Trigger a Deployment using cURL](trigger-a-deployment-using-c-url.md) +* [Trigger a Deployment when a File Changes](trigger-a-deployment-when-a-file-changes.md) +* [Get Deployment Status using REST](get-deployment-status-using-rest.md) +* [Pause All Triggers using Deployment Freeze](freeze-triggers.md) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/trigger-a-deployment-on-pipeline-completion.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/trigger-a-deployment-on-pipeline-completion.md new file mode 100644 index 00000000000..cf666fbf3fa --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/trigger-a-deployment-on-pipeline-completion.md @@ -0,0 +1,90 @@ +--- +title: Trigger Deployments when Pipelines Complete +description: You can trigger Harness Workflow and Pipeline deployments when specific Harness Pipelines complete their deployments. For example, you might create a Pipeline to test a deployment in one environment.… +sidebar_position: 30 +helpdocs_topic_id: nihs2y2z61 +helpdocs_category_id: weyg86m5qp +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can trigger Harness Workflow and Pipeline deployments when specific Harness Pipelines complete their deployments. + +For example, you might create a Pipeline to test a deployment in one environment. When it completes its deployment, a Trigger executes a second Pipeline to deploy to your stage environment. 
+ +### Before You Begin + +* [Add a Service](../setup-services/service-configuration.md) +* [Workflows](../workflows/workflow-configuration.md) +* [Add Environment](../environments/environment-configuration.md) +* [Create a Pipeline](../pipelines/pipeline-configuration.md) + +### Supported Platforms and Technologies + +See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). + +### Step 1: Add a Trigger + +Typically, Triggers are set up after you have successfully deployed and tested a Workflow or Pipeline. + +To add a trigger, do the following: + +1. Ensure that you have a Harness Service, Environment, and Workflow set up. If you want to Trigger a Pipeline, you'll need one set up as well. +2. In your Harness Application, click **Triggers**. +3. Click **Add Trigger**. The **Trigger** settings appear. +4. In **Name**, enter a name for the Trigger. This name will appear in the **Deployments** page to indicate the Trigger that initiated a deployment. +5. Click **Next**. + +### Step 2: Select the Pipeline that Initiates this Trigger + +1. In **Condition**, select **On New Pipeline**. +2. In **Pipeline**, select the Pipeline that will initiate this Trigger when the Pipeline completes its deployment. +3. Click **Next**. + +### Step 3: Select the Workflow or Pipeline to Deploy + +1. In **Execution Type**, select **Workflow** or **Pipeline**. +2. In **Execute Workflow**/**Pipeline**, select the Workflow or Pipeline to deploy. + +### Step 4: Provide Values for Workflow Variables + +If the Workflow or Pipeline you selected to deploy uses Workflow variables, you will need to provide values for these variables. + +You can also use variable expressions for these values. See [Passing Variables into Workflows from Triggers](../expressions/passing-variable-into-workflows.md). 
+ +### Step 5: Select the Artifact to Deploy + +Since Workflows deploy Harness Services, you are also prompted to provide the Artifact Source for the Service(s) the Workflow(s) will deploy. + +There are three main settings: + +#### From Triggering Artifact Source + +Select this option to use the artifact identified in Artifact Source you selected in **Condition**. + +#### Last Collected + +Select this option to use the last artifact collected by Harness in the Harness Service. Artifact metadata is collected automatically every minute by Harness. + +You can also manually collect artifact metadata using the Service's **Manually pull artifact** feature. + +#### Last Successfully Deployed + +The last artifact that was deployed by the Workflow you select. + +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. + +### Related Topics + +* [Passing Variables into Workflows from Triggers](../expressions/passing-variable-into-workflows.md) +* For information on using Triggers as part of Harness Git integration, see [Onboard Teams Using Git](../../harness-git-based/onboard-teams-using-git-ops.md). 
+* [Trigger Deployments When a New Artifact is Added to a Repo](trigger-a-deployment-on-new-artifact.md) +* [Schedule Deployments using Triggers](trigger-a-deployment-on-a-time-schedule.md) +* [Trigger Deployments using Git Events](trigger-a-deployment-on-git-event.md) +* [Trigger a Deployment using cURL](trigger-a-deployment-using-c-url.md) +* [Trigger a Deployment when a File Changes](trigger-a-deployment-when-a-file-changes.md) +* [Get Deployment Status using REST](get-deployment-status-using-rest.md) +* [Pause All Triggers using Deployment Freeze](freeze-triggers.md) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/trigger-a-deployment-using-a-url.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/trigger-a-deployment-using-a-url.md new file mode 100644 index 00000000000..15a491fbbb5 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/trigger-a-deployment-using-a-url.md @@ -0,0 +1,128 @@ +--- +title: Trigger a Deployment using a URL +description: For Build Workflows or a Build and Deploy Pipeline , you can trigger deployments in response to a Git event using Webhooks. This is described in Trigger Deployments using Git Events. Once you have cr… +sidebar_position: 90 +helpdocs_topic_id: 3key6nybou +helpdocs_category_id: weyg86m5qp +helpdocs_is_private: false +helpdocs_is_published: true +--- + +For [Build Workflows](https://docs.harness.io/article/wqytbv2bfd-ci-cd-with-the-build-workflow) or a  [Build and Deploy Pipeline](https://docs.harness.io/article/0tphhkfqx8-artifact-build-and-deploy-pipelines-overview), you can trigger deployments in response to a Git event using Webhooks. This is described in [Trigger Deployments using Git Events](trigger-a-deployment-on-git-event.md). + +Once you have created a Harness [On Webhook Event](trigger-a-deployment-on-git-event.md) Trigger, Harness creates a Manual Trigger for it. 
+
+You can do the following with a Manual Trigger:
+
+* Start a deployment using a cURL command. See [Trigger a Deployment using cURL](trigger-a-deployment-using-c-url.md).
+* Use a REST call to get deployment status. See [Get Deployment Status using REST](get-deployment-status-using-rest.md).
+* Start a deployment using a URL provided by Harness.
+
+In this topic, we will cover triggering a deployment using a URL provided by Harness.
+
+This option is used to execute a Build Workflow or a Build Pipeline only.
+
+### Before You Begin
+
+* [API Keys](https://docs.harness.io/article/smloyragsm-api-keys)
+* [Build Workflows](https://docs.harness.io/article/wqytbv2bfd-ci-cd-with-the-build-workflow)
+* [Build and Deploy Pipeline](https://docs.harness.io/article/0tphhkfqx8-artifact-build-and-deploy-pipelines-overview)
+* [Add a Service](../setup-services/service-configuration.md)
+* [Workflows](../workflows/workflow-configuration.md)
+* [Add Environment](../environments/environment-configuration.md)
+* [Create a Pipeline](../pipelines/pipeline-configuration.md)
+
+### Limitations
+
+In the **Actions** section of the Trigger, the **Deploy only if files have changed** option is available for Workflows deploying Kubernetes or Native Helm Services only.
+
+### Step 1: Create the Git Webhook Trigger
+
+Follow the steps in [Trigger Deployments using Git Events](trigger-a-deployment-on-git-event.md).
+
+When you are finished, the Trigger is displayed in the Triggers list, and includes a Manual Trigger option.
+
+![](./static/trigger-a-deployment-using-a-url-16.png)
+
+### Step 2: Show the cURL Command
+
+The cURL command for executing a deployment is provided by every Trigger of type **On Webhook Event**.
+
+In Triggers, locate the Trigger you want to run.
+
+Click **Manual Trigger**.
+
+In the Manual Trigger settings, click **Show Curl Command**. The cURL command is displayed. 
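Once the command is displayed, you can script the placeholder substitution instead of editing the command by hand. The sketch below only builds the JSON payload; the webhook URL and application Id are the masked placeholders from the examples in this topic, and the Service name and build number are illustrative values you would replace with your own.

```shell
#!/usr/bin/env sh
# Placeholder values -- replace with your own webhook URL, application Id,
# Service name, and artifact build number (from the Service's Artifact History).
WEBHOOK_URL="https://app.harness.io/api/webhooks/xxxxxx"
APP_ID="xxxxxx"
SERVICE="Service-Example"
BUILD_NUMBER="1.17.8-perl"

# Fill the placeholders in the payload shown by Show Curl Command.
PAYLOAD=$(printf '{"application":"%s","artifacts":[{"service":"%s","buildNumber":"%s"}]}' \
  "$APP_ID" "$SERVICE" "$BUILD_NUMBER")
echo "$PAYLOAD"

# Then pass the payload to the Trigger (quote the URL in case it includes a '?'):
#   curl -X POST -H 'content-type: application/json' --url "$WEBHOOK_URL" -d "$PAYLOAD"
```

Keeping the substitution in variables makes it easy to reuse the same script for different Services and build numbers.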
+
+When you created a Trigger, if you selected values for parameters that are represented by placeholders in the cURL command, you do not need to add values for the cURL placeholders.
+
+If you add values for the cURL placeholders, you will override manual settings in the Trigger.
+
+This is also true for Triggers that execute templated Workflows and Pipelines. If you create a Trigger that executes a templated Workflow or Pipeline, you can select values for the templated settings in the Trigger, but you can still override them in the cURL command.
+
+Let's look at a placeholder example:
+
+```
+curl -X POST -H 'content-type: application/json' \
+  --url https://app.harness.io/api/webhooks/xxxxxx \
+  -d '{"application":"xxxxxx","artifacts":[{"service":"micro-service","buildNumber":"micro-service_BUILD_NUMBER_PLACE_HOLDER"}]}'
+```
+
+For `service`, enter the name of the Harness Service.
+
+For `buildNumber`, enter the artifact build number from the Artifact History in the Service.
+
+[![](./static/trigger-a-deployment-using-a-url-17.png)](./static/trigger-a-deployment-using-a-url-17.png)
+
+For example:
+
+```
+curl -X POST -H 'content-type: application/json' \
+  --url https://app.harness.io/api/webhooks/xxxxxx \
+  -d '{"application":"xxxxxx","artifacts":[{"service":"Service-Example","buildNumber":"1.17.8-perl"}]}'
+```
+
+### Step 3: Run the cURL Command
+
+Once you have replaced the placeholders, run the cURL command.
+
+The output will be something like this (private information has been replaced with **xxxxxx**):
+
+```
+{
+  "requestId":"-tcjMxQ_RJuDUktfl4AY0A",
+  "status":"RUNNING",
+  "error":null,
+  "uiUrl":"https://app.harness.io/#/account/xxxxxx/app/xxxxxx/pipeline-execution/-xxxxxx/workflow-execution/xxxxxx/details",
+  "apiUrl":"https://app.harness.io/gateway/api/external/v1/executions/-xxxxxx/status?accountId=xxxxxx&appId=xxxxxx"
+}
+```
+
+The **uiUrl** can be used directly in a browser. 
**apiUrl** can be used to track deployment status programmatically, such as using a REST call. + +### Step 4: View Deployment Using the URL + +The **uiUrl** from the cURL command output can be used directly in a browser. + +To run a deployment from a browser, paste the URL from **uiUrl** into the browser location field and hit **ENTER**. + +The browser will open **app.harness.io** and display the running deployment. + +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. + +### Related Topics + +* [Passing Variables into Workflows from Triggers](../expressions/passing-variable-into-workflows.md) +* For information on using Triggers as part of Harness Git integration, see [Onboard Teams Using Git](../../harness-git-based/onboard-teams-using-git-ops.md). +* [Trigger Deployments When a New Artifact is Added to a Repo](trigger-a-deployment-on-new-artifact.md) +* [Schedule Deployments using Triggers](trigger-a-deployment-on-a-time-schedule.md) +* [Trigger Deployments when Pipelines Complete](trigger-a-deployment-on-pipeline-completion.md) +* [Get Deployment Status using REST](get-deployment-status-using-rest.md) +* [Trigger a Deployment when a File Changes](trigger-a-deployment-when-a-file-changes.md) +* [Trigger Deployments using Git Events](trigger-a-deployment-on-git-event.md) +* [Trigger a Deployment using cURL](trigger-a-deployment-using-c-url.md) +* [Pause All Triggers using Deployment Freeze](freeze-triggers.md) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/trigger-a-deployment-using-c-url.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/trigger-a-deployment-using-c-url.md new file mode 100644 index 00000000000..d42c787f13f --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/trigger-a-deployment-using-c-url.md @@ -0,0 +1,246 @@ +--- +title: Trigger a Deployment using 
cURL (FirstGen)
+description: Once you have an On Webhook Event Trigger, you can use a Manual Trigger to start a deployment using a cURL command.
+sidebar_position: 60
+helpdocs_topic_id: mc2lxsas4c
+helpdocs_category_id: weyg86m5qp
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+For GitHub, GitLab, and Bitbucket, you can trigger [Build Workflows](https://docs.harness.io/article/wqytbv2bfd-ci-cd-with-the-build-workflow) or a [Build and Deploy Pipeline](https://docs.harness.io/article/0tphhkfqx8-artifact-build-and-deploy-pipelines-overview) in response to a Git event, using a Harness [On Webhook Event](trigger-a-deployment-on-git-event.md) Trigger.
+
+For Custom Git providers, you can trigger any type of Workflow using a Harness [On Webhook Event](trigger-a-deployment-on-git-event.md) Trigger. This is described in [Trigger Deployments using Git Events](trigger-a-deployment-on-git-event.md).
+
+Once you have created a Harness On Webhook Event Trigger, Harness creates a **Manual Trigger** for it.
+
+You can do the following with a Manual Trigger:
+
+* Start a deployment using a cURL command.
+* Use a REST call to get deployment status. See [Get Deployment Status using REST](get-deployment-status-using-rest.md).
+* Start a deployment using a URL provided by Harness. See [Trigger a Deployment using a URL](trigger-a-deployment-using-a-url.md).
+
+This topic describes how to obtain and use the cURL command.
+
+:::note
+For GitHub, GitLab, and Bitbucket, this option is used to execute a Build Workflow or a Build Pipeline only. 
+:::
+
+### Before You Begin
+
+* [Build Workflows](https://docs.harness.io/article/wqytbv2bfd-ci-cd-with-the-build-workflow)
+* [Build and Deploy Pipeline](https://docs.harness.io/article/0tphhkfqx8-artifact-build-and-deploy-pipelines-overview)
+* [Add a Service](../setup-services/service-configuration.md)
+* [Workflows](../workflows/workflow-configuration.md)
+* [Add Environment](../environments/environment-configuration.md)
+* [Create a Pipeline](../pipelines/pipeline-configuration.md)
+
+:::note
+In the **Actions** section of the Trigger, the **Deploy only if files have changed** option is available for Workflows deploying Kubernetes or Native Helm Services only.
+:::
+
+#### Data Retention
+
+Data retention for Webhook event details is 3 days.
+
+#### Trigger Processing Details
+
+In the response to a Webhook request, the `data` field contains the Id of the registered Webhook event. You can use the following API with that `eventId` to get the details of the Webhook event:
+
+```
+curl -i -X GET \
+  'https://app.harness.io/gateway/pipeline/api/webhook/triggerProcessingDetails?accountIdentifier=&eventId=' \
+  -H 'x-api-key: '
+```
+
+If you want to query the `triggerProcessingDetails` endpoint, sleep your cURL command for up to 1 minute after the Trigger is fired. The result should be non-null quickly, but sleeping the cURL command ensures that you receive the data.
+
+The process happens asynchronously: upon receiving the Trigger call, Harness registers it to a queue, which is consumed every 5 seconds by one of its iterators. Only after the event is consumed by the iterator is the data returned in `triggerProcessingDetails` populated.
+
+### Step 1: Create the Git Webhook Trigger
+
+Follow the steps in [Trigger Deployments using Git Events](trigger-a-deployment-on-git-event.md).
+
+When you are finished, the Trigger is displayed in the Triggers list, and includes a Manual Trigger option. 
+
+![](./static/trigger-a-deployment-using-c-url-03.png)
+
+### Step 2: Get and Run the cURL Command
+
+:::note
+If you add an artifact source to the custom trigger payload that is different from the artifact source configured in the Trigger, it will not be honored. Harness will only honor the artifact source configured in the Trigger.
+:::
+
+Click **Manual Trigger**.
+
+In the Manual Trigger, click **Show Curl Command**.
+
+The cURL command is displayed. It will look something like this (private information has been replaced with **xxxxxx**):
+
+```
+curl -X POST -H 'content-type: application/json' \
+  --url https://app.harness.io/api/webhooks/xxxxxx \
+  -d '{"application":"xxxxxx","artifacts":[{"service":"micro-service","buildNumber":"micro-service_BUILD_NUMBER_PLACE_HOLDER"}]}'
+```
+
+Copy the cURL command, replace the placeholders with actual values, and run it in a terminal.
+
+To avoid a `zsh: no matches found` error, add quotes to the URL if it includes a `?`. For example:
+
+```
+curl -X POST -H 'content-type: application/json' --url 'https://app.harness.io/gateway/api/webhooks/2LdxLfQ71nZaOV4P3kluS3EBJb4TnFW7tS3KMMNM?accountId=xxxx' -d '{"application":"SgF_NViyTSKf74WkhGd0ZA"}'
+```
+
+### Review: Placeholders and Manual Settings
+
+When you created a Trigger, if you selected values for parameters that are represented by placeholders in the cURL command, you do not need to add values for the cURL placeholders.
+
+If you add values for the cURL placeholders, you will override manual settings in the Trigger.
+
+This is also true for Triggers that execute templated Workflows and Pipelines. If you create a Trigger that executes a templated Workflow or Pipeline, you can select values for the templated settings in the Trigger, but you can still override them in the cURL command. 
+
+Let's look at a placeholder example:
+
+```
+curl -X POST -H 'content-type: application/json' \
+  --url https://app.harness.io/api/webhooks/xxxxxx \
+  -d '{"application":"xxxxxx","artifacts":[{"service":"micro-service","buildNumber":"micro-service_BUILD_NUMBER_PLACE_HOLDER"}]}'
+```
+
+:::note
+The `artifacts` setting is optional. If you have an artifact hardcoded in your manifest and do not use a Harness Artifact Source, you will not need `artifacts`. Remove the **entire** `artifacts` section: `"artifacts":[{"service":"micro-service","buildNumber":"micro-service_BUILD_NUMBER_PLACE_HOLDER"}]}'`
+:::
+
+For `service`, enter the name of the Harness Service.
+
+For `buildNumber`, enter the artifact build number from the Artifact History in the Service.
+
+![](./static/trigger-a-deployment-using-c-url-04.png)
+
+For example:
+
+```
+curl -X POST -H 'content-type: application/json' \
+  --url https://app.harness.io/api/webhooks/xxxxxx \
+  -d '{"application":"xxxxxx","artifacts":[{"service":"Service-Example","buildNumber":"1.17.8-perl"}]}'
+```
+
+The output will be something like this (private information has been replaced with **xxxxxx**):
+
+```
+{
+  "requestId":"-tcjMxQ_RJuDUktfl4AY0A",
+  "status":"RUNNING",
+  "error":null,
+  "uiUrl":"https://app.harness.io/#/account/xxxxxx/app/xxxxxx/pipeline-execution/-xxxxxx/workflow-execution/xxxxxx/details",
+  "apiUrl":"https://app.harness.io/gateway/api/external/v1/executions/-xxxxxx/status?accountId=xxxxxx&appId=xxxxxx"
+}
+```
+
+:::note
+If the Service has multiple Artifact Sources, you must specify the artifact you want to use.
+:::
+
+The **uiUrl** can be used directly in a browser.
+
+See [Trigger a Deployment using a URL](trigger-a-deployment-using-a-url.md).
+
+**apiUrl** can be used to track deployment status programmatically, such as using a REST call.
+
+See [Get Deployment Status using REST](get-deployment-status-using-rest.md). 
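To track status from a script, you can pull the `status` and `apiUrl` fields out of the JSON response and re-query **apiUrl** until the status settles. A minimal sketch, using the sample response above (identifiers are masked placeholders) and a naive `sed`-based extractor rather than a real JSON parser:

```shell
#!/usr/bin/env sh
# Sample response from the Trigger call (identifiers are masked placeholders).
RESPONSE='{"requestId":"-tcjMxQ_RJuDUktfl4AY0A","status":"RUNNING","error":null,"uiUrl":"https://app.harness.io/#/account/xxxxxx/app/xxxxxx/pipeline-execution/-xxxxxx/workflow-execution/xxxxxx/details","apiUrl":"https://app.harness.io/gateway/api/external/v1/executions/-xxxxxx/status?accountId=xxxxxx&appId=xxxxxx"}'

# Naive extractor for a top-level string field; adequate for this flat response.
json_field() {
  printf '%s' "$2" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p"
}

STATUS=$(json_field status "$RESPONSE")
API_URL=$(json_field apiUrl "$RESPONSE")
echo "status=$STATUS"

# A polling loop would then re-query apiUrl until the status is no longer RUNNING:
#   while [ "$STATUS" = "RUNNING" ]; do
#     sleep 30
#     STATUS=$(json_field status "$(curl -s "$API_URL")")
#   done
```

For anything beyond flat responses like this one, a proper JSON tool is a better choice than `sed`.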
+
+#### Placeholders and Workflow Variables
+
+In the cURL command, placeholder values are added for any Workflow variables rather than the variables selected when creating the Trigger.
+
+The placeholder values added to the cURL command override any existing values, including Workflow variables selected in the Trigger.
+
+You can choose to ignore the placeholders and not pass values for any variables in the cURL command.
+
+### Review: Workflow and Pipeline Links in Response
+
+In the JSON response of a Workflow/Pipeline executed by a Webhook Trigger, the `uiSetupUrl` label displays the URL of the Workflow/Pipeline that was run.
+
+Workflow example:
+
+```
+{
+  "requestId":"wmuS6c2pQBaiX38eOcN5fg",
+  "status":"RUNNING",
+  "error":null,
+  "uiUrl":"https://qa.harness.io/#/account/xxxxx/app/jL7IEwaKTPmfTlAVWIlHKQ/env/emINy3NOS-ONA1iTqf7wyQ/executions/wmuS6c2pQBaiX38eOcN5fg/details",
+  "uiSetupUrl":"https://qa.harness.io/#/account/xxxxx/app/jL7IEwaKTPmfTlAVWIlHKQ/workflows/X8rbeQ2oTDWkCyza3DR3Hg/details",
+  "apiUrl":"https://qa.harness.io/api/external/v1/executions/wmuS6c2pQBaiX38eOcN5fg/status?accountId=xxxxx&appId=jL7IEwaKTPmfTlAVWIlHKQ",
+  "message":null
+}
+```
+
+Pipeline example:
+
+```
+{
+  "requestId":"KmbgijMHQ76qyoRhtW-oBA",
+  "status":"RUNNING",
+  "error":null,
+  "uiUrl":"https://qa.harness.io/#/account/xxxxx/app/jL7IEwaKTPmfTlAVWIlHKQ/pipeline-execution/KmbgijMHQ76qyoRhtW-oBA/workflow-execution/undefined/details",
+  "uiSetupUrl":"https://qa.harness.io/#/account/xxxxx/app/jL7IEwaKTPmfTlAVWIlHKQ/pipelines/oMD0__89TpWAUQiru0jSpw/edit",
+  "apiUrl":"https://qa.harness.io/api/external/v1/executions/KmbgijMHQ76qyoRhtW-oBA/status?accountId=xxxxx&appId=jL7IEwaKTPmfTlAVWIlHKQ",
+  "message":null
+}
+```
+
+### Option: Enforce API Keys for Manual Triggers
+
+Currently, this feature is behind the feature flag `WEBHOOK_TRIGGER_AUTHORIZATION`. 
Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+Once you have an On Webhook Event Trigger, you can use a Manual Trigger to start a deployment using a cURL command provided by Harness.
+
+By default, the cURL command does not require that you include Harness API keys. Harness provides the option to enforce the use of API keys in Manual Trigger cURL commands per Application.
+
+By using Harness API keys in your cURL commands, you can enforce **authorization** on manual execution of your Triggers. Harness API keys let you select which Harness User Groups can run the Trigger.
+
+#### Permissions Required
+
+* **Account Permissions:** to enforce API keys for manual triggers, your Harness User account must belong to a User Group with the **Manage Applications** Account Permissions option enabled. See [Managing Users and Groups (RBAC)](https://docs.harness.io/article/ven0bvulsj-users-and-permissions).
+* **Application Permissions:** to initiate any manual triggers (with or without using API keys), the Harness User account must belong to a User Group with the **Deployments** Permission Type and the **Execute Workflow** and/or **Execute Pipeline** Application Permissions.
+
+#### Enforce API Keys for Manual Triggers
+
+To enforce API keys for manual triggers, do the following:
+
+In Harness, in **Security**, click **Access Management**.
+
+Click **API Keys**, and then follow the steps in [API Keys](https://docs.harness.io/article/smloyragsm-api-keys) to create an API key. Or you can select an existing key.
+
+Make sure your API key is assigned a User Group that contains only the Harness Users who should use this API key to run the Manual Trigger cURL command.
+
+In the new or existing Harness Application containing your Triggers, click more options (︙) and select **Edit**.
+
+![](./static/trigger-a-deployment-using-c-url-05.png)
+
+Select **Authorize Manual Triggers**. 
Harness presents a warning:
+
+> Warning: When you select "Authorize Manual Triggers" it will become mandatory to provide API keys in headers to authorize Manual Triggers invocation.
+
+This means that every Manual Trigger cURL command now includes a placeholder for the API key (`x-api-key_placeholder`):
+
+```
+curl -X POST -H 'content-type: application/json' \
+  -H 'x-api-key: x-api-key_placeholder' \
+  --url 'https://app.harness.io/trigger-authorization/api/webhooks/o5w0h6H8Xmllbdp?accountId=wAwixrZwvnhuGVQOJ3sD' \
+  -d '{"application":"T6fPlBO4TWKT-fCPXxamxA"}'
+```
+
+Replace the `x-api-key_placeholder` with the Harness API key.
+
+That's it. Now you have authorization set up on your Manual Trigger.
+
+Let's look at the two use cases:
+
+* **Authorize Manual Triggers** is disabled: the standard Manual Trigger cURL command configuration is used. Manual Trigger cURL commands will not contain the API key placeholder or require an API key. If an API key is still provided, Harness checks whether the provided key is authorized to invoke the Trigger; that is, it checks whether the User Group assigned to the key has the required Application Permissions (listed above). If the User Group assigned to the key does not have the required Application Permissions, the Trigger cannot be invoked and an error message appears.
+* **Authorize Manual Triggers** is enabled: if the User Group assigned to the API key has the required Application Permissions (listed above), the Trigger is invoked.
+
+### Configure As Code
+
+To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button.
+
+### Related Topics
+
+* [Passing Variables into Workflows from Triggers](../expressions/passing-variable-into-workflows.md)
+* For information on using Triggers as part of Harness GitOps, see [Harness GitOps](../../harness-git-based/harness-git-ops.md). 
+* [Trigger Deployments When a New Artifact is Added to a Repo](trigger-a-deployment-on-new-artifact.md) +* [Schedule Deployments using Triggers](trigger-a-deployment-on-a-time-schedule.md) +* [Trigger Deployments when Pipelines Complete](trigger-a-deployment-on-pipeline-completion.md) +* [Get Deployment Status using REST](get-deployment-status-using-rest.md) +* [Trigger a Deployment when a File Changes](trigger-a-deployment-when-a-file-changes.md) +* [Trigger Deployments using Git Events](trigger-a-deployment-on-git-event.md) +* [Trigger a Deployment using a URL](trigger-a-deployment-using-a-url.md) +* [Pause All Triggers using Deployment Freeze](freeze-triggers.md) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/trigger-a-deployment-when-a-file-changes.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/trigger-a-deployment-when-a-file-changes.md new file mode 100644 index 00000000000..11f225dcdb3 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/triggers/trigger-a-deployment-when-a-file-changes.md @@ -0,0 +1,101 @@ +--- +title: Trigger a Deployment when a File Changes +description: File-based repo Triggers are currently supported only for Native Helm and Helm-based Kubernetes deployments. For more information, see Kubernetes or Helm?. For Build Workflows or a Build and Deploy P… +sidebar_position: 80 +helpdocs_topic_id: zr4tgwrzlb +helpdocs_category_id: weyg86m5qp +helpdocs_is_private: false +helpdocs_is_published: true +--- + +File-based repo Triggers are currently supported only for Native Helm and Helm-based Kubernetes deployments. 
For more information, see [Kubernetes or Helm?](https://docs.harness.io/article/i3n6qr8p5i-deployments-overview#kubernetes_or_helm).
+
+For [Build Workflows](https://docs.harness.io/article/wqytbv2bfd-ci-cd-with-the-build-workflow) or a [Build and Deploy Pipeline](https://docs.harness.io/article/0tphhkfqx8-artifact-build-and-deploy-pipelines-overview), you can trigger deployments in response to a Git event using Webhooks. This is described in [Trigger Deployments using Git Events](trigger-a-deployment-on-git-event.md).
+
+In some Webhook Trigger scenarios, you might set a Webhook on your repo to trigger a Workflow or Pipeline when a Push event occurs in the repo. However, you might want to initiate the Trigger only when **specific files** in the repo are changed.
+
+For example, if you have a Trigger that executes a Helm Deployment Workflow, and the Workflow uses a `values.yaml` file from a Git repo, you might want to initiate the Trigger only when that `values.yaml` file is changed.
+
+This topic describes how to set up and run a file-based Trigger.
+
+### Before You Begin
+
+* [Build Workflows](https://docs.harness.io/article/wqytbv2bfd-ci-cd-with-the-build-workflow)
+* [Build and Deploy Pipeline](https://docs.harness.io/article/0tphhkfqx8-artifact-build-and-deploy-pipelines-overview)
+* [Add a Service](../setup-services/service-configuration.md)
+* [Workflows](../workflows/workflow-configuration.md)
+* [Add Environment](../environments/environment-configuration.md)
+* [Create a Pipeline](../pipelines/pipeline-configuration.md)
+
+### Limitations
+
+In the **Actions** section of the Trigger, the **Deploy only if files have changed** option is displayed only if the following conditions are met:
+
+* The Workflow selected deploys a Harness Kubernetes or Native Helm Service.
+* The **On Push** event is selected.
+
+For more information, see [Kubernetes or Helm?](https://docs.harness.io/article/i3n6qr8p5i-deployments-overview#kubernetes_or_helm). 
+
+### Step 1: Create the Git Webhook Trigger
+
+Follow the steps in [Trigger Deployments using Git Events](trigger-a-deployment-on-git-event.md).
+
+In **Condition**, select **On Webhook Event**.
+
+In **Payload Type**, select the repository type (GitHub, Bitbucket, GitLab).
+
+In **Event Type**, select the **On Push** Webhook event.
+
+When you get to the **Actions** section, click **Deploy only if files have changed**.
+
+The file-based, repo-related settings appear.
+
+### Step 2: Select the Files to Watch
+
+1. In **Git Connector**, select which of the SourceRepo Providers set up in Harness to use. These are the connections between Harness and your Git repos. For more information, see [Add SourceRepo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers).
+2. In **Branch Name**, enter the name of the branch to use.
+3. In **File Path**, enter the file name for the file that, when changed and Pushed, will execute this Trigger.
+
+   For multiple file paths, use commas or line breaks as separators. For example, `sample-manifests/values.yaml, index.yaml`.
+
+   :::note
+   Wildcards are not supported for **Branch Name** or **File Path**.
+   :::
+
+   When you are done, the **Skip deployment if file(s) not changed** section will look something like this:
+
+   ![](./static/trigger-a-deployment-when-a-file-changes-00.png)
+4. Click **Next** and then **Submit**.
+
+Now you can add the Webhook for that Trigger to the repo you selected, as described in [Trigger Deployments using Git Events](trigger-a-deployment-on-git-event.md).
+
+When the file you entered is changed and Pushed, the Trigger will execute.
+
+### Review: Branch Regex Push and Pull
+
+For merging changes, use the **On Pull Request** event type and not **On Push**. A push event has no source or destination branch; it has only an old state and a new state. See [Event Payloads](https://support.atlassian.com/bitbucket-cloud/docs/event-payloads/) from Atlassian. 
+
+An **On Pull Request** event has a source and a destination branch. See [Pull Request](https://confluence.atlassian.com/bitbucket/event-payloads-740262817.html#EventPayloads-entity_pullrequest) from Atlassian.
+
+The following is a list of exactly which keys the **On Pull Request** and **On Push** events refer to:
+
+* GitHub — push branch ref: `${ref.split('refs/heads/')[1]}`
+* GitHub — pull branch ref: `${pull_request.head.ref}`
+* GitLab — push branch ref: `${ref.split('refs/heads/')[1]}`
+* GitLab — pull branch ref: `${object_attributes.source_branch}`
+* Bitbucket — push branch ref: `${push.changes[0].'new'.name}`
+* Bitbucket — push branch ref on-premises: `${changes[0].refId.split('refs/heads/')[1]}`
+
+  You must also select `Refs_changed` as the event type.
+* Bitbucket — pull branch ref: `${pullrequest.source.branch.name}`
+* Bitbucket — pull branch ref on-premises: `${pullRequest.fromRef.displayId}`
+
+### Related Topics
+
+* [Passing Variables into Workflows from Triggers](../expressions/passing-variable-into-workflows.md)
+* For information on using Triggers as part of Harness Git integration, see [Onboard Teams Using Git](../../harness-git-based/onboard-teams-using-git-ops.md). 
+* [Trigger Deployments When a New Artifact is Added to a Repo](trigger-a-deployment-on-new-artifact.md) +* [Schedule Deployments using Triggers](trigger-a-deployment-on-a-time-schedule.md) +* [Trigger Deployments when Pipelines Complete](trigger-a-deployment-on-pipeline-completion.md) +* [Get Deployment Status using REST](get-deployment-status-using-rest.md) +* [Trigger a Deployment using a URL](trigger-a-deployment-using-a-url.md) +* [Trigger Deployments using Git Events](trigger-a-deployment-on-git-event.md) +* [Trigger a Deployment using cURL](trigger-a-deployment-using-c-url.md) +* [Pause All Triggers using Deployment Freeze](freeze-triggers.md) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/_category_.json b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/_category_.json new file mode 100644 index 00000000000..5859e3d6bb7 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/_category_.json @@ -0,0 +1,14 @@ +{ + "label": "Add Workflows", + "position": 40, + "collapsible": "true", + "collapsed": "true", + "className": "red", + "link": { + "type": "generated-index", + "title": "Add Workflows" + }, + "customProps": { + "helpdocs_category_id": "a8jhf8hizv" + } +} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/add-notification-strategy-new-template.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/add-notification-strategy-new-template.md new file mode 100644 index 00000000000..f3b055d81cf --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/add-notification-strategy-new-template.md @@ -0,0 +1,77 @@ +--- +title: Add a Workflow Notification Strategy +description: Specify a notification strategy for a Workflow (or for a Workflow phase in a Canary or Multi-Service Workflow) that send notifications. 
+sidebar_position: 50
+helpdocs_topic_id: duu0gbhejn
+helpdocs_category_id: a8jhf8hizv
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+You can specify a notification strategy for a Workflow (or for a Workflow phase in a Canary or Multi-Service Workflow) that sends notifications using different criteria.
+
+## Before You Begin
+
+* [Add a Workflow](tags-how-tos.md)
+
+## Step: Add Notification
+
+To add a notification strategy, do the following:
+
+1. In a Workflow, click **Notification Strategy**. The default notification step appears.
+
+   ![](./static/add-notification-strategy-new-template-83.png)
+
+2. To edit the default strategy, click the pencil icon next to the strategy item.
+3. To add a new notification strategy, click **Add Notification Strategy**. The **Notification Strategy** settings appear.
+
+   ![](./static/add-notification-strategy-new-template-84.png)
+
+### Condition
+
+Click the drop-down to select the conditions that will execute the notification strategy (Failure, Success, Paused). Click all the conditions that apply.
+
+### Scope
+
+Select **Workflow** or **Workflow Phase** (for Canary or Multi-Service) as the scope for the condition(s) you selected.
+
+* If you select **Workflow**, it applies to all Workflow settings, including Pre-deployment, Post-deployment, and all Workflow Phases.
+* If you select **Workflow Phase**, it applies to Phases only. Pre-deployment, Post-deployment, and any settings outside of the Phases are not used.
+
+### User Group
+
+:::note
+The default User Group is Account Administrator. You can change this, but it is always the default when a new Workflow is created.
+:::
+
+Select the User Group to notify when the condition is met within the scope. For information on setting up the notification channels for a User Group, see [User Notifications and Alert Settings](https://docs.harness.io/article/kf828e347t-notification-groups).
+
+You can also enter variable expressions for Workflow variables. 
+
+For example, you could create a Workflow variable named `StageOpsAdmin` and use that in **User Group**.
+
+For the **Workflow** and **Workflow Phase** scopes, you can select [Workflow variables](add-workflow-variables-new-template.md) using a `${workflow.variables.varName}` expression.
+
+:::note
+You cannot use Service or Environment **Service Variables Overrides** in **User Group**.
+:::
+
+#### Slack Notification Example
+
+Once Slack has been configured in the Harness User Group [Notification Settings](https://docs.harness.io/article/kf828e347t-notification-groups), you can add the User Group in the Workflow **Notification Settings**:
+
+![](./static/add-notification-strategy-new-template-85.png)
+
+When the Workflow deployment is completed, the Slack channel is notified:
+
+![](./static/add-notification-strategy-new-template-86.png)
+
+In the event of a failure, the Slack channel is notified because we selected **Failure** as a **Condition**.
+
+![](./static/add-notification-strategy-new-template-87.png)
+
+Notice that the error message is included in the Slack message. If multiple steps failed, they are included in the message:
+
+![](./static/add-notification-strategy-new-template-88.png)
\ No newline at end of file
diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/add-steps-for-different-tasks-in-a-wor-kflow.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/add-steps-for-different-tasks-in-a-wor-kflow.md
new file mode 100644
index 00000000000..25b8c6b2e1d
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/add-steps-for-different-tasks-in-a-wor-kflow.md
@@ -0,0 +1,38 @@
+---
+title: Use Steps for Different Workflow Tasks
+description: Outline steps for adding different Workflow tasks. 
+sidebar_position: 90
+helpdocs_topic_id: oq0p41g19m
+helpdocs_category_id: a8jhf8hizv
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Different Workflow deployment types involve different Workflow steps. Each type of step, such as **Deploy Containers** or **Verify Service**, has specific commands and options available to it.
+
+### Before You Begin
+
+* [Add a Workflow](tags-how-tos.md)
+* [Workflows](workflow-configuration.md)
+* [Skip Workflow Steps](skip-workflow-steps.md)
+
+
+### Step: Add Step for Different Workflow Tasks
+
+To add steps, do the following:
+
+1. Under the Workflow step, click **Add Step**. The Add Step options appear:
+
+   ![](./static/add-steps-for-different-tasks-in-a-wor-kflow-101.png)
+
+2. To use a template, click **Add Step**, and then click **Template Library**. For more information, see [Use Templates](https://docs.harness.io/article/ygi6d8epse-use-templates).
+3. If a step has multiple commands, you can arrange them. Mouse over the command, and then click the down arrow.
+4. You can also control how multiple commands and steps are executed. Click the vertical ellipsis next to a step with multiple steps. The execution drop-down appears.
+
+   ![](./static/add-steps-for-different-tasks-in-a-wor-kflow-102.png)
+
+### Next Steps
+
+* [Add Phases to a Workflow](add-workflow-phase-new-template.md)
+* [Verify Workflow](verify-workflow-new-template.md)
+
+sidebar_position: 100
+helpdocs_topic_id: nq3ugixwle
+helpdocs_category_id: a8jhf8hizv
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+In multi-phase deployments, such as a Canary Deployment, Workflow steps are grouped into phases.
+
+### Before You Begin
+
+* [Workflows](workflow-configuration.md)
+* [Add a Workflow](tags-how-tos.md)
+
+
+### Step: Add a Phase to a Multi-phase Workflow
+
+To add a phase to a multi-phase deployment, do the following:
+
+1. In **Deployment Phases**, click **Add Phase**. The **Workflow Phase** settings appear.
+
+   ![](./static/add-workflow-phase-new-template-89.png)
+
+2. In **Service**, select the Service to use for this phase.
+3. In **Infrastructure Definition**, select the Infrastructure Definition where you want the Workflow Phase to deploy the Service.
+
+   ![](./static/add-workflow-phase-new-template-90.png)
+
+4. If you want to override a variable that is defined in the Service you selected, in **Service Variable Overrides**, click **Add**. Enter the name of the variable to override, and the override value.
+
+5. Click **Submit**. The new phase Workflow appears. Complete the phase Workflow as you would any other Workflow. You can also define the **Rollback Steps** for this phase.
+
+6. When you are done, click the name of the Workflow in the breadcrumbs to return to the Workflow overview and see the phase added to **Deployment Phases**.
+ + +### Next Steps + +* [Verify Workflow](verify-workflow-new-template.md) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/add-workflow-variables-new-template.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/add-workflow-variables-new-template.md new file mode 100644 index 00000000000..a21720ea9b7 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/add-workflow-variables-new-template.md @@ -0,0 +1,102 @@ +--- +title: Set Workflow Variables +description: Set variables in the Workflow Variables section, and use them in the Workflow step commands and settings. +sidebar_position: 80 +helpdocs_topic_id: 766iheu1bk +helpdocs_category_id: a8jhf8hizv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can set variables in the **Workflow Variables** section of your Workflow, and use them in the Workflow step commands and settings. + +New to Harness Variables? See [What is a Harness Variable Expression?](https://docs.harness.io/article/9dvxcegm90-variables) + +You can also use them in some Harness Service settings in order to have their values replaced at the Workflow level. + +You provide values for Workflow variables at deployment runtime for the Workflow or a Pipeline executing the Workflow. + +### Before You Begin + +* [Workflows](workflow-configuration.md) +* [What is a Harness Variable Expression?](https://docs.harness.io/article/9dvxcegm90-variables) +* [Variable Override Priority](https://docs.harness.io/article/benvea28uq-variable-override-priority) + +### Review: Variable Types + +There are several different types of Workflow variables you can add: + +* **Entity**—If you have parameterized a Workflow Service, Environment, or Infrastructure Definition setting, it will appear in Workflow Variables with the type **ENTITY**. See [Templatize a Workflow](templatize-a-workflow-new-template.md). +* **Text**—A text-based variable. +* **Email**—Email addresses. 
+* **Number**—Numeric variable.
+
+Only [Service Config Variables](../setup-services/add-service-level-config-variables.md) are added as environment variables and can be output with `env`. Workflow and other variables are not added as environment variables.
+
+### Review: Multi-Select Values
+
+Currently, this feature is behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Feature Flags can only be removed for Harness Professional and Essentials editions. Once the feature is released to a general audience, it's available for Trial and Community Editions.
+
+See [New features added to Harness](https://changelog.harness.io/?categories=fix,improvement,new) and [Features behind Feature Flags](https://changelog.harness.io/?categories=early-access) (Early Access) for Feature Flag information.
+
+For **Text** and **Email** types, in **Allowed Values**, you can enter multiple options by separating them with commas.
+
+For example:
+
+![](./static/add-workflow-variables-new-template-234.png)
+
+When you deploy the Workflow, the values appear as a dropdown, with the Default Value already selected:
+
+![](./static/add-workflow-variables-new-template-235.png)
+
+The same options will be available when the Workflow is used in a Pipeline and Trigger.
+
+You can also search for values by typing in the dropdown.
+
+When you type into the dropdown to search for Allowed Values, be sure to select one or more values rather than pressing Enter. If you press Enter, your search term itself is used, and if it does not exactly match an Allowed Value, the deployment will fail during execution.
+
+### Review: Using a Workflow Variable across Pipeline Stages
+
+If you use Workflow variables with the same name across a Pipeline's stages, the Workflow variables will always have the same value. The value is assigned when the first instance of the variable is evaluated at runtime.
+
+When Harness runs a Pipeline, it will encounter the Workflow variable in the first stage where it is used. Harness will resolve that variable to a value. Each subsequent use of a Workflow variable with the same name in the same Pipeline will now also use that value.
+
+It's important to understand how Workflow variables with the same name work in a Pipeline because you might want to change the value of the Workflow variable from stage to stage. Harness will not do this, because it resolves the variable only once and then uses that value for each instance of the Workflow variable with the same name.
+
+### Step: Add Workflow Variables
+
+To use Workflow variables, do the following:
+
+1. In a Workflow, click the pencil icon next to **Workflow Variables**. The **Workflow Variables** dialog appears.
+
+   ![](./static/add-workflow-variables-new-template-236.png)
+
+   **Workflow Variables** have the following settings.
+
+   | Field | Description |
+   | --- | --- |
+   | **Variable Name** | Enter a name for the variable. When the variable is referenced elsewhere in your Harness application, the variable name is used. |
+   | **Type** | Select **Text**, **Email**, or **Number**. If you have parameterized a Workflow Service, Environment, or Infrastructure Definition setting, it will appear in Workflow Variables with the type **ENTITY**. |
+   | **Allowed Values** | Enter a comma-separated list of values that users can select. The list will appear as a drop-down menu when the Workflow is deployed. See [Review: Multi-Select Values](add-workflow-variables-new-template.md#review-multi-select-values) above. You cannot template multi-value (drop-down) **Allowed Values** Workflow variables. |
+   | **Default Value** | Enter a value for the variable. A value is not mandatory. |
+   | **Required** | Select this option to enforce that a value for the variable is provided before the Workflow is executed.
| **Fixed** | Select this option if the value of the variable specified here must not be changed. |
+   | **Description** | Provide a description of the variable that lets others know its purpose and requirements. |
+
+2. In a Workflow step, use your variable by typing a dollar sign ($) and the first letter of your variable. The syntax for variable names is `${workflow.variables.name}`, where `name` is the name of your variable.
+
+Harness will load matching variable names.
+
+For example, if you created a variable named **Url**, the variable name is `${workflow.variables.Url}`.
+
+When you deploy the Workflow, by itself or as part of a Pipeline, the variables are displayed in the Workflow execution step.
+
+If the variables require values, you will enter the values when you add the Workflow to a Pipeline in the **Stage** settings, or in the **New Deployment** settings when you deploy the Workflow individually.
+
+### Notes
+
+**Workflow variable expressions in Services** — You can use Workflow variable expressions in a Harness Service, but Harness does not autocomplete Workflow variables in a Service like it does in a Workflow. You will need to manually enter the Workflow variable expression in the Service: `${workflow.variables.name}`.
+
+### Next Steps
+
+* [Workflows](workflow-configuration.md)
+
diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/capture-shell-script-step-output.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/capture-shell-script-step-output.md
new file mode 100644
index 00000000000..658a948a2d5
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/capture-shell-script-step-output.md
@@ -0,0 +1,335 @@
+---
+title: Run Shell Scripts in Workflows
+description: With the Shell Script command, you can execute scripts in the shell session of the Workflow.
+sidebar_position: 150
+helpdocs_topic_id: 1fjrjbau7x
+helpdocs_category_id: a8jhf8hizv
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+One of the steps you can include in a Harness Workflow is a **Shell Script** step.
+
+With the Shell Script step, you can execute scripts in the shell session of the Workflow in the following ways:
+
+* Execute bash scripts on the host running a Harness Delegate. You can use Delegate Selectors to identify which Harness Delegate to use.
+* Execute bash or PowerShell scripts on a remote target host in the deployment Infrastructure Definition.
+
+This topic provides a simple demonstration of how to create a bash script in a Shell Script step, publish its output in a variable, and use the published variable in a subsequent Workflow step.
+
+### Before You Begin
+
+* [Add a Workflow](workflow-configuration.md)
+* [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables)
+* [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management) (for shell session execution credentials)
+
+### Limitations
+
+* Shell Script step names cannot contain dots. This is true for names entered in the Harness Manager UI or YAML via [Configure as Code](https://docs.harness.io/article/r5vya3dlt0-edit-the-code-in-harness) or [Git Sync](https://docs.harness.io/article/6mr74fm55h-harness-application-level-sync).
+* If you add a trailing space at the end of any line in your script, Harness YAML transforms that into a single-line value with all control characters visible. For example, this script has a trailing space:
+
+  ![](./static/capture-shell-script-step-output-91.png)
+
+Here is what the resulting YAML looks like:
+
+![](./static/capture-shell-script-step-output-92.png)
+
+Remove any trailing spaces from your script to avoid this limitation.
Here's what the YAML should look like when there are no trailing spaces:
+
+![](./static/capture-shell-script-step-output-93.png)
+
+### Review: Shell Script Step Overview
+
+With the Shell Script command, you can execute scripts in the shell session of the Workflow in the following ways:
+
+* Execute bash scripts on the host running a Harness Delegate. You can use Delegate Selectors to identify which Harness Delegate to use.
+* Execute bash or PowerShell scripts on a remote target host in the deployment Infrastructure Definition.
+
+You can run PowerShell scripts on a Harness Delegate, even though the Delegate must be run on Linux. Linux supports PowerShell using [PowerShell core](https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-core-on-windows?view=powershell-7). You must install PowerShell on the Delegate using a Delegate Profile. See [PowerShell](https://docs.harness.io/article/nxhlbmbgkj-common-delegate-profile-scripts#power_shell) in Common Profile Scripts.
+
+When executing a script, you can also **dynamically capture** the execution output from the script, providing runtime variables based on the script execution context, and export those to another step in the same workflow or another workflow in the same pipeline.
+
+For example, you could use the Shell Script step to capture instance IDs in the deployment environment and then pass those IDs downstream to future workflow steps or phases, or even to other workflows executed in the same pipeline.
+
+If you do not publish the output variables, you can still identify which ones you want to be displayed in the deployment details and logs.
+
+The Shell Script step uses Bash and PowerShell. This might cause an issue if your target operating system uses a different shell. For example, bash uses `printenv` while KornShell (ksh) has `setenv`. For other shells, like ksh, create command aliases.
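As a small sketch of this capture-and-export idea (the instance IDs below are made-up placeholders, and the lookup command is a stand-in for whatever real query you would run):

```shell
#!/bin/bash
# Illustrative only: run a command, store its output in a variable, and
# list that variable in the step's Script Output so later steps can use it.

# Hypothetical stand-in for a real lookup (for example, asking your cloud
# provider for the instance IDs behind a load balancer).
INSTANCE_IDS="$(printf 'i-0abc123,i-0def456')"

export INSTANCE_IDS
echo "Captured: ${INSTANCE_IDS}"
```

Listing `INSTANCE_IDS` in **Script Output**, and publishing it as described later in this topic, would then make the value available downstream.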
+
+#### Shell Script Steps and Failures
+
+A failed Shell Script step does not prevent a Workflow deployment from succeeding.
+
+The Shell Script step succeeds or fails based on the exit value of the script. A failed command in a script does not fail the script, unless you call `set -e` at the top.
+
+#### What Information is Available to Capture?
+
+Any information in the particular shell session of the workflow can be set, captured, and exported using one or more Shell Script steps in that workflow. In addition, you can set and capture information available using the built-in Harness variables. For more information, see [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables).
+
+A good example of information you can capture and export is the Harness variable `${instance.name}`, which gives you the name of the target host on which this script is executed at runtime.
+
+Capturing and exporting script output in the Shell Script step can be very powerful. For example, a Harness trigger could pass a variable (like a Git commit ID) into a workflow, the Shell Script step could use that value and info from its session in a complex function, and then export the output down the pipeline for further evaluation.
+
+### Step 1: Add Your Script
+
+When the script in the Shell Script command is run, Harness executes the script on the target host's operating system. Consequently, the behavior of the script depends on that host's system settings.
+
+For this reason, you might wish to begin your script with a shebang line that identifies the shell language, such as `#!/bin/sh` (shell), `#!/bin/bash` (bash), or `#!/bin/dash` (dash). For more information, see the [Bash manual](https://www.gnu.org/software/bash/manual/html_node/index.html#SEC_Contents) from the GNU project.
+
+To capture the shell script output in a variable, do the following:
+
+1. In a Harness application, open a workflow. For this example, we will use a **Build** workflow.
+2. 
In a workflow section, click **Add Command**. The **Add Command** dialog opens.
+3. In **Add Command**, in **Others**, click **Shell Script**. The **Shell Script** settings appear.
+4. In **Name**, enter a name for the step.
+
+   Shell Script step names cannot contain a dot. This is true for names entered in the Harness Manager UI or YAML via Configure as Code or Git Sync.
+
+5. In **Script Type**, select **BASH** or **POWERSHELL**. In this example, we will use **BASH**.
+6. In **Script**, enter a bash or PowerShell script. In this example, we will use a script that includes an export. For example, export the variable names `BUILD_NO` and `LANG`:
+
+```
+export BUILD_NO="345"
+export LANG="en-us"
+```
+
+You do not need to export the variables to use them with **Script Output** and **Publish output in the context**. You can simply declare them, like `BUILD_NO="345"`.
+
+For PowerShell, you could set an environment variable using `$Env:`
+
+```
+$Env:BUILD_NO="345"
+```
+
+You must use quotes around the value because environment variables are Strings.
+
+If you use Harness variable expressions in comments in your script, Harness will still attempt to evaluate and render the variable expressions. Do not use variable expressions that Harness cannot evaluate.
+
+### Step 2: Specify Output Variables
+
+In **Script Output**, enter the list of variables you want to use.
+
+In **Name**, enter the names of the variables. In our example, we would enter **BUILD\_NO,LANG**.
+
+In **Type**, you can select **String** or **Secret**.
+
+Select **Secret** if you want to mask the output variable value in the deployment logs. The output variable value is masked in the logs using asterisks (\*\*\*\*).
+
+The output variable value is masked in the log for the Shell Script step where you created the output variable, and in any step where you reference the output variable (`${context.publish_var_name.output_var_name}`).
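Pulling Steps 1 and 2 together, a complete bash script for this example might look like the following (the values are illustrative; in a real script they might come from a build tool or an API response):

```shell
#!/bin/bash
set -e   # optional fail-fast behavior; see Stopping Scripts After Failures below

# Illustrative values only.
BUILD_NO="345"
LANG="en-us"

# Export is optional here: declaring the variables is enough for Script Output.
export BUILD_NO LANG

echo "BUILD_NO=${BUILD_NO} LANG=${LANG}"
```

With **Script Output** set to `BUILD_NO,LANG`, both values are captured when the step runs.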
+ +### Step 3: Specify Where to Run the Script + +If you wish to execute the script on the host running the Harness Delegate, enable **Execute on Delegate**. + +Often, you will want to execute the script on a target host. If so, ensure **Execute on Delegate** is disabled. + +If the Shell Script is executed on a target host (**Execute on Delegate** is disabled), then the Delegate that can reach the target host is used. + +**For Kubernetes Workflows:** If the Shell Script is executed on the Delegate (**Execute on Delegate** is enabled), then Harness checks to see if `kubectl` is being used. If `kubectl` is being used, Harness checks that the Delegate can reach the Kubernetes cluster. If `kubectl` is not being used, any Delegate is used for the script. + +In **Target Host**, enter the IP address or hostname of the remote host where you want to execute the script. The target host must be in the **Infrastructure Definition** selected when you created the workflow, and the Harness Delegate must have network access to the target host. You can also enter the variable `${instance.name}` and the script will execute on whichever target host is used during deployment. + +#### Include Infrastructure Selectors + +If your Workflow Infrastructure Definition's Cloud Provider uses a Delegate Selector, you can select this option so that its selected Delegate(s) are also used for this step. + +If you have selected Delegates in the Shell Script **Delegate Selector** setting and enabled **Include Infrastructure Selectors**, then Harness will use the Delegates selected in both the Cloud Provider and Shell Script step. + +See [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors). + +### Step 4: Provide Execution Credentials + +If you selected **BASH** in **Script Type**, **Connection Type** will contain **SSH**. If you selected **POWERSHELL** in **Script Type**, **Connection Type** will contain **WINRM**. 
+
+In **SSH Connection Attribute** (or **WinRM Connection Attribute**), select the execution credentials to use for the shell session. For information on setting up execution credentials, see [Add SSH Keys](https://docs.harness.io/article/gsp4s7abgc-add-ssh-keys) and [Add WinRM Connection Credentials](https://docs.harness.io/article/9fqa1vgar7-add-win-rm-connection-credentials).
+
+**Template the SSH Connection Attribute** — Click the **[T]** button to template the **SSH Connection Attribute**. This will create a Workflow variable for the SSH Connection Attribute. When you deploy, you will provide a value for the variable. This enables you to select the SSH Connection Attribute at deployment runtime.
+
+### Step 5: Specify Working Directory on Remote Host
+
+In **Working Directory**, specify the full folder path where the script is executed on the remote host, for example **/tmp** or **/home/ubuntu** for Linux or **%TEMP%** for Windows.
+
+This Working Directory is assumed to exist. Harness won't create the directory as part of this Shell Script step's execution.
+
+If **Working Directory** is empty, Harness uses the Application Defaults setting for `RUNTIME_PATH` or `WINDOWS_RUNTIME_PATH`. If this Application Defaults setting is not present, the Workflow will fail.
+
+See [Create Default Application Directories and Variables](../applications/set-default-application-directories-as-variables.md).
+
+### Option 1: Select the Harness Delegate to Use
+
+If your Workflow Infrastructure Definition's Cloud Provider uses a Delegate Selector (supported in Kubernetes Cluster and AWS Cloud Providers), then the Workflow uses the selected Delegate for all of its steps.
+
+In some cases, you might want this Workflow step to use a specific Delegate. If so, do the following:
+
+In **Delegate Selectors**, select the Selector for the Delegate(s) you want to use. You add Selectors to Delegates in order to ensure that they are used to execute the command.
For more information, see [Select Delegates for Specific Tasks with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors).
+
+Selectors can be used whether **Execute on Delegate** is enabled or not. The Shell Script command honors the Selector and executes the SSH connection to the specified target host via the selected Delegate.
+
+Here is an example where Selectors might be useful when **Execute on Delegate** is disabled: you specify an IP address in **Target Host**, but you have two VPCs with the same subnet, and duplicate IP addresses exist in both. Using Selectors, you can scope the shell session to the Delegate in a specific VPC.
+
+Harness will use Delegates matching the Selectors you select.
+
+If you use one Selector, Harness will use any Delegate that has that Selector.
+
+If you select two Selectors, a Delegate must have both Selectors to be selected. That Delegate might also have other Selectors, but it must have the two you selected.
+
+You can use expressions for Harness built-in variables or Account Default variables in **Delegate Selectors**. When the variable expression is resolved at deployment runtime, it must match an existing Delegate Selector.
+
+For example, if you have a Delegate Selector **prod** and the Workflow is using an Environment also named **prod**, the Delegate Selector can be `${env.name}`. This is very useful when you match Delegate Selectors to Application component names, such as Environments, Services, etc. It is also a way to template the Shell Script step.
+
+### Step 6: Publish Output Variables
+
+An error can occur if you are publishing output via the **Publish Variable Name** setting and your Shell Script step exits early from its script. See [Troubleshooting](#troubleshooting) below.
+
+To export the output variable(s) you entered in **Script Output** earlier, enable **Publish output in the context**.
If you do not enable this, the variables you entered in **Script Output** will still be displayed in the deployment details and logs for the workflow.
+
+In **Publish Variable Name**, enter a unique parent name for all of the output variables. You will use this name to reference the variables elsewhere.
+
+The reference follows the format: `${context.publish_var_name.output_var_name}`.
+
+![](./static/capture-shell-script-step-output-94.png)
+
+For example, if the **Publish Variable Name** is **region**, you would reference **BUILD\_NO** with `${context.region.BUILD_NO}` or `${region.BUILD_NO}`.
+
+In **Scope**, select **Pipeline**, **Workflow**, or **Phase**. The output variables are available within the scope you set here.
+
+The scope you select is useful for preventing variable name conflicts. You might use a workflow with published variables in multiple pipelines, so scoping the variable to **Workflow** will prevent conflicts with other workflows in the pipeline.
+
+Here is an example of a complete Shell Script command:
+
+![](./static/capture-shell-script-step-output-95.png)
+
+Click **SUBMIT**. The Shell Script step is added to your workflow.
+
+Next, use the output variables you defined in another command in your phase, workflow, or pipeline, as described below.
+
+### Step 7: Use Published Output Variables
+
+The following procedure demonstrates how to use the output variables you captured and published in the Shell Script command above.
+
+Remember that where you can reference your published output variables depends on the scope you set in **Scope** in the **Shell Script** command.
+
+To use published output variables, do the following:
+
+In your Harness workflow, add a new command. For this example, we will use an **HTTP** command.
+
+Click **Add Command**. In the **Add Step** dialog, click **HTTP**. The **HTTP** command dialog opens.
+
+![](./static/capture-shell-script-step-output-96.png)
+
+In **URL**, enter a URL that references the **Publish Variable Name** `region` and the **Script Output** variable names `BUILD_NO` and `LANG` you published in the Shell Script command. For example, here is a search using the variables:
+
+`https://www.google.com/?q=${context.region.BUILD_NO}&${context.region.LANG}`
+
+Note that the use of `context` is optional.
+
+Fill out the rest of the **HTTP** dialog and click **SUBMIT**.
+
+When you deploy your workflow, you will see both the Shell Script and HTTP steps using the output variables.
+
+![](./static/capture-shell-script-step-output-97.png)
+
+In the log for the **Shell Script** step, you can see the output variables:
+
+```
+INFO 2018-10-22 14:03:36 Executing command ...
+INFO 2018-10-22 14:03:37 Script output:
+INFO 2018-10-22 14:03:37 BUILD_NO=345
+INFO 2018-10-22 14:03:37 LANG=en-us
+INFO 2018-10-22 14:03:37 Command completed with ExitCode (0)
+```
+
+In the log for the **HTTP** step, you can see that the published variables used to create this URL:
+
+`https://www.google.com/?q=${context.region.BUILD_NO}&${context.region.LANG}`
+
+are now substituted with the output variable values to form the final URL:
+
+`https://www.google.com/?q=345&en-us`
+
+![](./static/capture-shell-script-step-output-98.png)
+
+### Option 2: Harness Expressions in Publish Variable Name
+
+The **Publish Variable Name** setting is used to assign a unique parent name to all of the output variables.
+
+In some cases, you might want to use one of the Harness built-in expressions in **Publish Variable Name**. For example, `${instance.name}`:
+
+![](./static/capture-shell-script-step-output-99.png)
+
+When you reference the output variable later in your Workflow, you need to nest it in a `${context.get()}` method.
For example, `${context.get(${instance.name}).var1}`:
+
+![](./static/capture-shell-script-step-output-100.png)
+
+### Notes
+
+#### Reserved Keywords
+
+The word `var` is a reserved word for Output and Publish Variable names in the Shell Script step.
+
+If you must use `var`, you can use single quotes and `get()` when referencing the published output variable.
+
+Instead of using `${test.var}`, use `${test.get('var')}`.
+
+#### Reserved Words for Export Variable Names
+
+The following words cannot be used for names in **Publish Variable Name:**
+
+* arm
+* ami
+* aws
+* host
+* setupSweepingOutputAppService
+* terragrunt
+* terraform
+* shellScriptProvisioner
+* deploymentInstanceData
+* setupSweepingOutputEcs
+* deploySweepingOutputEcs
+* runTaskDeploySweepingOutputEcs
+* setupSweepingOutputAmi
+* setupSweepingOutputAmiAlb
+* ecsAllPhaseRollbackDone
+* Azure VMSS all phase rollback
+* k8s
+* pcfDeploySweepingOutput
+* CloudFormationCompletionFlag
+* terraformPlan
+* terraformApply
+* terraformDestroy
+* Elastigroup all phase rollback
+* setupSweepingOutputSpotinst
+* setupSweepingOutputSpotinstAlb
+
+#### Stopping Scripts After Failures
+
+The Shell Script command continues executing the script even if a command in the script fails. To prevent this, include instructions to stop on failure in your script. For example:
+
+* `set -e` - Exit immediately when a command fails.
+* `set -o pipefail` - Sets the exit code of a pipeline to that of the rightmost command to exit with a non-zero status, or to a zero status if all commands of the pipeline exit successfully.
+* `set -u` - Treat unset variables as an error and exit immediately.
+
+For more information, see this article: [Writing Robust Bash Shell Scripts](https://www.davidpashley.com/articles/writing-robust-shell-scripts/).
+
+#### Using Secrets in Scripts
+
+You can use Harness secrets in your Shell Script steps.
+
+See [Use Encrypted Text Secrets](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets).
+
+Basically, you use `${secrets.getValue("secret_name")}` to refer to the secret.
+
+You must pay attention to the **Usage Scope** of the secret to ensure it is available where you need it. See **Review: Secret Scope** in the same topic.
+
+### Troubleshooting
+
+This section covers common problems experienced when using the Shell Script command.
+
+#### Published Variables Not Available
+
+This error happens when you are publishing output via the **Publish Variable Name** setting and your Shell Script step exits early from its script.
+
+There are many errors that can result from this situation. For example, you might see an error such as:
+
+```
+FileNotFoundException inside shell script execution task
+```
+
+If you exit from the script early (even with `exit 0`), values for the context cannot be read.
+
+Instead, if you publish output variables in your Shell Script command, structure your script with `if...else` blocks to ensure it always runs to the end of the script.
+
+### Shell Scripts and Security
+
+Harness assumes that you trust your Harness users to add safe scripts to your Shell Script steps.
+
+Please ensure that users adding scripts, as well as executing deployments that run the scripts, are trusted.
+
diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/clone-a-workflow.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/clone-a-workflow.md
new file mode 100644
index 00000000000..05d52034ec5
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/clone-a-workflow.md
@@ -0,0 +1,44 @@
+---
+title: Clone a Workflow
+description: Use a Workflow as a template for future use by cloning the Workflow.
+sidebar_position: 130
+helpdocs_topic_id: teg9oq8e59
+helpdocs_category_id: a8jhf8hizv
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+You can use a Workflow as a template for future Workflows by cloning the Workflow. When you clone a Workflow, you can select which Application to clone the Workflow into. Cloning a Workflow can speed up Workflow development and helps ensure consistency in Workflows across Applications.
+
+### Before You Begin
+
+* [Add a Workflow](tags-how-tos.md)
+
+### Limitations
+
+* [Harness Tags](https://docs.harness.io/article/mzcpqs3hrl-manage-tags) are not copied when you clone a Workflow.
+
+### Step: Clone a Workflow
+
+To clone a Workflow, do the following:
+
+1. In the **Workflows** page, or in the page for an individual Workflow, click the More Options ⋮ menu.
+
+   ![](./static/clone-a-workflow-55.png)
+
+2. Click **Clone**. The **Clone Workflow** dialog appears.
+3. Give the Workflow a new name and description.
+4. In **Target Application**, select the Application where you want the Workflow cloned. If you do not select an Application, the Workflow is cloned into its current Application.
+
+   ![](./static/clone-a-workflow-56.png)
+
+   The Environment, Infrastructure Definition, and nodes used by the Workflow are not cloned into the target Application. You will need to set these up in the target Application.
+
+5. In **Service Mapping**, select the Service in the target Application to use with the cloned Workflow.
+
+   ![](./static/clone-a-workflow-57.png)
+
+6. Click **Submit**.
+7. Navigate to the target Application and click the Workflow. It will have the **-clone** suffix.
+8. Add the Environment, Service Infrastructure (or Infrastructure Definition), and nodes needed for the Workflow.
+ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/configure-workflow-using-yaml.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/configure-workflow-using-yaml.md new file mode 100644 index 00000000000..24f989795d9 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/configure-workflow-using-yaml.md @@ -0,0 +1,37 @@ +--- +title: Configure Workflows Using YAML +description: Outline steps on how to configure a Workflow as code using YAML. +sidebar_position: 140 +helpdocs_topic_id: 0svkm9v7vr +helpdocs_category_id: a8jhf8hizv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can configure a Workflow using YAML. You can view the YAML using the main Code editor, as described in [Configuration as Code](https://docs.harness.io/article/htvzryeqjw-configuration-as-code), or you can jump directly to the YAML of a specific Workflow in the **Workflows** page. + +In this topic: + +* [Before You Begin](#before_you_begin) +* [Step: Configure a Workflow as Code](#configure_yaml) +* [Next Steps](#next_steps) + + +### Before You Begin + +* [Workflows](workflow-configuration.md) +* [Add a Workflow](tags-how-tos.md) + + +### Step: Configure a Workflow as Code + +To configure a Workflow as code, do the following: + +1. In the **Workflows** page, click the code icon. The code editor appears, displaying your Workflow YAML.![](./static/configure-workflow-using-yaml-12.png) +2. Modify the YAML of the Workflow as needed, and then click **Save**. If you like, you can verify your change in the Harness Manager interface. 
+ + +### Next Steps + +* [Troubleshooting a Workflow](https://docs.harness.io/article/y00dt1l4jl-troubleshooting-a-workflow) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/define-workflow-failure-strategy-new-template.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/define-workflow-failure-strategy-new-template.md new file mode 100644 index 00000000000..c628385df2f --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/define-workflow-failure-strategy-new-template.md @@ -0,0 +1,258 @@ +--- +title: Define Workflow Failure Strategy +description: Define how your Workflow handles different failure conditions. +sidebar_position: 70 +helpdocs_topic_id: vfp0ksdzg3 +helpdocs_category_id: a8jhf8hizv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +A Failure Strategy defines how your Workflow handles different failure conditions. + +There is no Failure Strategy in a [Build Workflow](https://docs.harness.io/article/wqytbv2bfd-ci-cd-with-the-build-workflow) because there is no rollback in a Build Workflow. A Build Workflow simply runs a build job and collects an artifact. + +### Before You Begin + +* [Add a Workflow](tags-how-tos.md) +* [Add Phases to a Workflow](add-workflow-phase-new-template.md) +* [Skip Workflow Steps](skip-workflow-steps.md) +* [Review: Multiple Failure Strategies in a Workflow](#review_multiple_failure_strategies_in_a_workflow) + +### Review: Workflow and Phase Priority + +You can define a Failure Strategy at the Workflow step and Phase level. + +![](./static/define-workflow-failure-strategy-new-template-208.png) + +**What is a Phase?** Unless you add multiple phases to your Workflow, the Workflow is considered a single Phase. Canary Workflows use multiple phases, and other Workflow types, such as Blue/Green and Rolling, are considered single Phase Workflows. 
+
+The Failure Strategy applied to a Workflow step takes precedence over the Failure Strategy applied to the Phase.
+
+The Workflow step Failure Strategy does not propagate to the parent Phase.
+
+### Step: Add Workflow Failure Strategy
+
+To define the failure strategy for the entire Workflow, do the following:
+
+1. In a Workflow, click **Failure Strategy**. The default failure strategy appears.
+
+   The default failure strategy is to fail the Workflow if there is any application error, and to roll back the Workflow execution. You can modify the default strategy or add additional strategies.
+
+2. Click **Add Failure Strategy**. The Failure Strategy settings appear.
+
+   ![](./static/define-workflow-failure-strategy-new-template-209.png)
+
+The dialog has the following fields:
+
+#### Failure
+
+Select the type of error:
+
+##### Application Error
+
+Harness encountered an application error during deployment.
+
+##### Unsupported Types
+
+The following types are listed but not supported at this time:
+
+* **Connectivity Error:** Harness is unable to connect to the target host/cluster/etc, or a provider, such as a Git repo.
+* **Authentication Error:** Harness is unable to authenticate using the credentials you supplied in the Cloud Provider, Artifact Source, Source Repo Provider, and other connectors.
+* **Verification Error:** If you have set up verification steps in your Workflow and a deployment event is flagged as an error by the step, Harness will fail the deployment.
+
+#### Scope
+
+Select the scope of the strategy. If you select **Workflow**, the Action is applied to the entire Workflow. If you select **Workflow Phase**, then the Action is applied to the **Workflow Phase** only.
+
+For example, if you selected **Workflow Phase** and then selected the Action **Rollback Phase Execution**, and a failure occurred in the second Phase of the Workflow, then the second Phase of the Workflow would be rolled back but the first Phase of the Workflow would not be rolled back.
+
+#### Action
+
+Select the action for Harness to take in the event of a failure, such as a retry or a rollback:
+
+##### Manual Intervention
+
+Applies to Workflow steps only, but not Approval steps or [Resource Lock](workflow-queuing.md#acquiring-resource-locks).
+
+You will be prompted to approve or reject the deployment on the Deployments page.
+
+##### Timeout (Manual Intervention)
+
+If you select **Manual Intervention** in **Action**, enter a timeout in **Timeout** and an action in **Action after timeout** (such as **Ignore**). Once the timeout is reached, the action is executed.
+
+![](./static/define-workflow-failure-strategy-new-template-210.png)
+
+The default value for **Timeout** is 14 days (**14d**).
+
+The available actions in **Action after timeout** are:
+
+* Ignore
+* Mark as Success
+* End Execution
+* Abort Workflow
+* Rollback Workflow
+* Rollback Provisioner after Phases
+
+**Abort Workflow** and **Rollback Workflow** are different. When you use Abort Workflow, Harness does not clean up any deployed resources or roll back to a previous release and infrastructure.
+
+When a Workflow is paused on manual intervention, you can choose the action by clicking on the Workflow step.
+
+![](./static/define-workflow-failure-strategy-new-template-211.png)
+
+If a manual intervention has occurred, you can see it in the Workflow step details in **Deployments**. Here is an example using **End Execution**:
+
+![](./static/define-workflow-failure-strategy-new-template-212.png)
+
+##### Rollback Workflow Execution
+
+**Abort Workflow** and **Rollback Workflow** are different.
When you use Abort Workflow, Harness does not clean up any deployed resources or roll back to a previous release and infrastructure.
+
+(Applies to Workflow Phase only)
+
+Harness will initiate rollback.
+
+Failure strategies can be applied at both the Workflow step and Phase level:
+
+![](./static/define-workflow-failure-strategy-new-template-213.png)
+
+You can choose **Rollback Provisioner after Phases** or **Rollback Workflow**.
+
+Currently, this feature is behind the Feature Flag `ROLLBACK_PROVISIONER_AFTER_PHASES`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+![](./static/define-workflow-failure-strategy-new-template-214.png)
+
+**Rollback Workflow Execution** is not applicable for Workflow steps, presently. It applies to Workflow Phases.
+
+##### Rollback Phase Execution
+
+Harness will initiate rollback of the Phase.
+
+##### Rollback Provisioner After Phases
+
+Currently, this feature is behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Feature Flags can only be removed for Harness Professional and Essentials editions. Once the feature is released to a general audience, it's available for Trial and Community Editions.
+
+See [New features added to Harness](https://changelog.harness.io/?categories=fix,improvement,new) and [Features behind Feature Flags](https://changelog.harness.io/?categories=early-access) (Early Access) for Feature Flag information.
+
+This option is for Canary and Multiservice Workflows that use Infrastructure Provisioners in their **Pre-deployment** steps to provision the target infrastructure.
+
+This failure strategy should be used in Workflows that support **Pre-deployment Steps** only (Canary and Multiservice).
+
+By default, provisioners are rolled back before deployment phases and all provisioners are rolled back in the same order in which they were deployed.
+
+When the **Rollback Provisioner After Phases** failure strategy is used, rollback will happen as follows:
+
+1. Deployment phases are rolled back before the Infrastructure Provisioners in **Pre-deployment steps**.
+2. All Infrastructure Provisioners in the **Pre-deployment Steps** are rolled back in the reverse order in which they were deployed.
+
+##### Ignore
+
+Harness ignores any failure and continues with deployment. The deployment does not fail on this step and the status is Passed.
+
+##### Retry
+
+(Applies to Workflow steps only)
+
+Harness will retry the step where the failure occurred.
+
+##### End Execution
+
+Harness will end the Workflow (fail the state) and roll back any deployed resources. The status of the Workflow will be Failed. Typically, End Execution is used with Manual Intervention.
+
+If a step or Phase needs rollback (meaning, it deployed something), End Execution will not prevent rollback. For example, in a strategy using **End Execution**, a provisioning step (Terraform, CloudFormation, etc.) will be rolled back, but an HTTP step will not be rolled back since it doesn't deploy anything.
+
+If you want to prevent rollback, use **Abort Workflow**.
+
+##### Abort Workflow
+
+**Abort Workflow** and **Rollback Workflow** are different. When you use Abort Workflow, Harness does not clean up any deployed resources or roll back to a previous release and infrastructure.
+
+Harness will abort the Workflow without rolling back. The status of the Workflow will be Aborted.
+
+If you want a failure to also initiate rollback, use **Rollback Workflow**, **End Execution**, **Rollback Phase Execution**, or **Rollback Provisioner After Phases**.
+
+To abort a Workflow, the Harness User must belong to a User Group with the Abort Workflow Application Permission enabled.
+ +![](./static/define-workflow-failure-strategy-new-template-215.png) + +Without this permission enabled, the User will not see an **Abort** option in the running Workflow deployment page. + +### Step: Step-level Failure Strategy + +To define the failure strategy for the step section of a Workflow, do the following: + +1. Next to the step section title, click more options (**⋮**). The step-level settings appear. + + ![](./static/define-workflow-failure-strategy-new-template-216.png) + +2. In **Failure Strategy**, click **Custom**. The **Failure Strategy** settings appear. + + ![](./static/define-workflow-failure-strategy-new-template-217.png) + +3. Click **Add Failure Strategy**. +4. Fill out the strategy. The dialog has the following fields: + * **Failure** - Select the type of error, such as Verification, Application, etc. The step-level Failure Strategy has the same options as the Phase-level Failure Strategy, with the exception of [Timeout Error](#timeout_error). + * **Action** - Select the action for Harness to take in the event of a failure, such as a retry or a rollback. + * **Specific Steps** - Select any specific Workflow steps that you want to target for the Failure Strategy. + The criteria for the strategy will be applied to those steps only. + + :::note + If you do not select steps, then the strategy is applied to all steps in that Workflow section. + ::: + + There is no **Scope** setting, like the **Scope** setting in the Workflow-level Failure Strategy, because the scope of this strategy is the step section. + +5. Click **Submit**. The failure strategy is added to the step section. + + ![](./static/define-workflow-failure-strategy-new-template-218.png) + +#### Timeout Error + +Currently, this feature is behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Feature Flags can only be removed for Harness Professional and Essentials editions. 
Once the feature is released to a general audience, it's available for Trial and Community Editions.
+
+The Timeout Error condition in a Workflow step-level Failure Strategy helps you manage ECS step timeouts gracefully when you are deploying many containers in a Workflow or Pipeline.
+
+In **Specific Steps**, you can select one or more of the following step types:
+
+* ECS Service Setup
+* ECS Run Task
+* ECS Daemon Service Setup
+* Setup Load Balancer
+* Setup Route 53
+* ECS Upgrade Containers
+* ECS Steady State Check
+* Swap Target Groups
+* Swap Route 53 DNS
+* Rollback ECS Setup
+* ECS Rollback Containers
+* Rollback Route 53 Weights
+* Rollback Swap Target Groups
+* HTTP
+* Shell Script
+
+### Delegate Error
+
+Currently, this feature is behind the Feature Flag `FAIL_TASKS_IF_DELEGATE_DIES`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+During Workflow execution, if a Delegate goes down, the tasks are marked as failed.
+
+With the **Delegate Error** condition, you can identify such situations faster and retrigger the steps.
+
+A Delegate times out if it fails to send heartbeats within 3 minutes. You can apply a failure strategy in such scenarios and retrigger the tasks.
+
+### Review: Multiple Failure Strategies in a Workflow
+
+When using multiple Failure Strategies in a Workflow, consider the following:
+
+* For failure strategies that do not overlap (different types of failures selected), they will behave as expected.
+* Two failures cannot occur at the same time, so whichever error occurs, that Failure Strategy will be used.
+
+#### Conflicts
+
+Conflicts might arise between failure strategies on the same level or different levels.
By *level*, we mean the step-level or the Workflow level:
+
+![](./static/define-workflow-failure-strategy-new-template-219.png)
+
+##### Same level
+
+If there is a conflict between multiple failures in strategies on the same level, the first applicable strategy is used, and the remaining strategies are ignored.
+
+For example, consider these two strategies:
+
+1. Abort Workflow on Verification Failure or Authentication Failure.
+2. Ignore on Verification Failure or Connectivity Error.
+
+Here's what will happen:
+
+* On a verification failure, the Workflow is aborted.
+* On an authentication failure, the Workflow is aborted.
+* On a connectivity error, the error is ignored.
+
+##### Different levels
+
+If there is a clash of selected failures in strategies on different levels, the step-level strategy will be used and the Workflow-level strategy will be ignored.
+
diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/deploy-a-workflow.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/deploy-a-workflow.md
new file mode 100644
index 00000000000..11daf547d18
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/deploy-a-workflow.md
@@ -0,0 +1,56 @@
+---
+title: Deploy Individual Workflows
+description: Deploy your individual Workflow using the Deploy option within each Workflow.
+sidebar_position: 30
+helpdocs_topic_id: 5ffpvrohi3
+helpdocs_category_id: a8jhf8hizv
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Typically, deployments are performed in [Pipelines](../pipelines/pipeline-configuration.md), which are collections of one or more Workflows, but you can also deploy individual Workflows.
+ +In this topic: + +* [Before You Begin](#before_you_begin) +* [Step: Deploy a Workflow](#deploy_a_workflow) +* [Next Steps](#next_steps) + + +### Before You Begin + +* [Add a Workflow](tags-how-tos.md) + + +### Step: Deploy a Workflow + +You can deploy individual Workflows using the **Deploy** button within each Workflow, as follows: + +1. In a Workflow, click the **Deploy** button.![](./static/deploy-a-workflow-10.png) +2. Provide values for any required variables. These are Workflow variables created in the **Workflow Variables** section of the Workflow. +3. Complete the **Start New Deployment** dialog and click **SUBMIT**. The Workflow is run according to the Workflow steps. +4. After a Workflow is deployed, its deployment information is available on the **Workflows** page.![](./static/deploy-a-workflow-11.png) +5. Click a **Timestamp** for a Workflow to view a deployment. + +For information about using Triggers to deploy Workflows, see [Triggers](../triggers/add-a-trigger-2.md) and [Triggers and Queued Workflows](../triggers/add-a-trigger-2.md#triggers-and-queued-workflows). + +#### Abort or Rollback a Running Deployment + +If you deploy a Workflow and choose the **Abort** option during the running deployment, the **Rollback Steps** for the Workflow are not executed. Abort stops the deployment execution without rollback or cleanup. To execute the **Rollback Steps**, click the **Rollback** button. + + + +| | | +| --- | --- | +| **Abort Button** | **Rollback Button** | +| ![](./static/_abort-button-left.png) | ![](./static/_rollback-button-right.png) | + +#### Rollback of a Completed Deployment + +With certain combinations of deployment type and platform, you have the option to roll back the most recent *successful* deployment to your Production Environment. For details, see [Post-Deployment Rollback](post-deployment-rollback.md). 
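As noted above, Workflows are often run by Triggers rather than the **Deploy** button. If you use a Webhook Trigger, a CI job or script can start the deployment with a single HTTP call; a hedged sketch, where the token, account ID, and payload values are placeholders to be replaced with the URL and payload shown in your Trigger's details in Harness Manager:

```shell
#!/bin/bash
# Placeholders below stand in for the real webhook URL and payload that
# Harness Manager displays for the Trigger; this is a sketch, not a copyable URL.
WEBHOOK_URL="https://app.harness.io/api/webhooks/REPLACE_TOKEN?accountId=REPLACE_ACCOUNT"
PAYLOAD='{"application":"REPLACE_APP_ID","artifacts":[{"service":"my-service","buildNumber":"1.0.17"}]}'

# Echoed rather than executed here, since the URL is a placeholder:
echo curl -X POST -H "content-type: application/json" \
  --url "$WEBHOOK_URL" -d "$PAYLOAD"
```

Remove the `echo` once you have pasted in the real URL and payload from the Trigger.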
+ + +### Next Steps + +* [Clone a Workflow](clone-a-workflow.md) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/deploy-multiple-services-simultaneously-using-barriers.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/deploy-multiple-services-simultaneously-using-barriers.md new file mode 100644 index 00000000000..836dacc2b46 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/deploy-multiple-services-simultaneously-using-barriers.md @@ -0,0 +1,134 @@ +--- +title: Deploy Multiple Services Simultaneously using Barriers +description: This topic covers a very common Barriers use case -- deploying multiple microservices simultaneously (multi-service deployments). In this scenario, each microservice is deployed by a different Workflow… +sidebar_position: 260 +helpdocs_topic_id: dr1srl937n +helpdocs_category_id: a8jhf8hizv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic covers a very common Barriers use case: deploying multiple microservices simultaneously (multi-service deployments). + +In this scenario, each microservice is deployed by a different Workflow. For example, before executing integration tests, you might want to ensure all microservices for the application are deployed successfully. + +Using Barriers you can handle the synchronization of multi-service deployments in a simple and uniform way. + +### Before You Begin + +If you want to reproduce this scenario, ensure that you have the following setup: + +* Your target environment must have a Kubernetes cluster with two namespaces. +* Your Harness Application must have two Services—one for the frontend service and one for the backend service—that can be deployed to a Kubernetes cluster. 
+* Configure the Harness Kubernetes Cluster Cloud Provider, Environment, Infrastructure Definitions, and a [Rolling Deployment Workflow](https://docs.harness.io/article/dl0l34ge8l-create-a-kubernetes-rolling-deployment) to test each service deployment.
+* Deploy the microservices in different namespaces. This is typically set up using separate Infrastructure Definitions in the Environment.
+
+## Visual Summary
+
+This example uses two Harness Services: a **Frontend Service** and a **Backend Service**. The goal is to have the Frontend Service deployment start only after the Backend Service is deployed successfully.
+
+Similarly, if there is any error in deploying the frontend service, the backend service should roll back.
+
+The picture below shows the frontend deployment Workflow using a Barrier named **Wait for Backend service deployment**.
+
+![](./static/deploy-multiple-services-simultaneously-using-barriers-42.png)
+
+The next screenshot shows the backend deployment Workflow using two Barriers:
+
+* **Backend service deployment done**
+* **Wait for FrontEnd service deployment**
+
+![](./static/deploy-multiple-services-simultaneously-using-barriers-43.png)
+
+Notice the order of the Barriers. For example, if you switch the sequence, the execution will result in a cyclic dependency and a timeout error.
+
+Always ensure that there is no cyclical reference for Barriers.
+
+You can use multiple Barriers to synchronize the execution flow when the outcome of dependent steps runs in a different Workflow executing in parallel.
+
+Here is a picture of a failed deployment with rollback of the backend caused by the frontend failure. Notice the error message in the details section.
+
+![](./static/deploy-multiple-services-simultaneously-using-barriers-44.png)
+
+## Review: Barriers
+
+[Barriers](synchronize-workflows-in-your-pipeline-using-barrier.md) enable you to control the execution of multiple Workflows running in parallel.
Using Barriers, complex release management processes can be easily implemented using Harness Pipelines.
+
+Barriers are available in **Flow Control** when adding a new Workflow step.
+
+![](./static/deploy-multiple-services-simultaneously-using-barriers-45.png)
+
+Here are the Barrier settings:
+
+![](./static/deploy-multiple-services-simultaneously-using-barriers-46.png)
+
+Be sure to use a descriptive name for the Barrier. Common phrases used are **Waiting for…**, **Completed…**, etc.
+
+Define an **Identifier** that describes the purpose of the Barrier. The identifier is used to synchronize between the Workflows.
+
+In this topic's example, the identifiers used were **BE** and **FE** to distinguish between backend and frontend.
+
+For more complex deployments, identifiers such as **integration-test-started**, **integration-test-done**, etc., can be used.
+
+Let's look at some common Barrier scenarios.
+
+### Scenario 1: Multi-Service
+
+This is the scenario we will describe in this topic. It is a very common scenario for testing an application consisting of many microservices.
+
+As you will see, each microservice is deployed by a different Workflow. Before you execute any integration tests, you want to ensure all application microservices are deployed successfully. Using Barriers, you sync the microservice deployments in a simple and uniform way.
+
+### Scenario 2: Multi-Environment
+
+Deploy the same application version across multiple regions. If there is any failure in any one region, you would like to roll back all regions to the previous version automatically. Without this feature, you'll need complex, custom scripts.
+
+### Scenario 3: Application Compatibility
+
+This scenario uses multiple environments to test applications where each environment varies in the OS patch level, Java patch version, or the infrastructure uses different cloud providers (AWS and GCP).
+
+The ability to deploy the same version of the application across these environments successfully allows you to verify the portability of the application changes.
+
+If these tasks were performed sequentially, or done selectively for every major release, the cost of fixing issues discovered in any one environment would be too high. This restricts your deployment options and often means paying a higher support cost for running older versions of the OS or stack, increasing the overall cost of operation.
+
+Let us see how we can use Barriers for the most common scenario, Scenario 1.
+
+## Step 1: Add Barrier to Frontend Workflow
+
+Start with a [Kubernetes Rolling Workflow](https://docs.harness.io/article/dl0l34ge8l-create-a-kubernetes-rolling-deployment) and modify it to add Barriers as follows:
+
+![](./static/deploy-multiple-services-simultaneously-using-barriers-47.png)
+
+## Step 2: Add Barrier to Backend Workflow
+
+Use a second Kubernetes Rolling Workflow and modify it to add Barriers as follows:
+
+![](./static/deploy-multiple-services-simultaneously-using-barriers-48.png)
+
+## Step 3: Create and Test Pipeline
+
+Create a [Pipeline](../pipelines/pipeline-configuration.md) containing both the frontend and backend Workflows.
+
+Configure the Workflows to run in parallel by enabling the **Execute in Parallel with Previous Step** setting on the second Workflow.
+
+![](./static/deploy-multiple-services-simultaneously-using-barriers-49.png)
+
+The frontend-deploy Workflow deploys to the frontend namespace and the backend-deploy Workflow deploys to the backend namespace.
+
+Note that the order of the parallel Workflows in the Pipeline does not matter.
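The rendezvous semantics that Barriers provide can be pictured with a small shell analogy, where a flag file stands in for a Barrier Identifier. This is only an illustration of the waiting behavior, not how Harness implements Barriers:

```shell
#!/bin/bash
# Illustration only: two "workflows" run as background jobs; the frontend
# blocks at a sync point until the backend signals it, just as two parallel
# Workflows rendezvous at a Barrier with the same Identifier.
flag="$(mktemp -u)"

# "backend-deploy" finishes its work, then signals the barrier.
( echo "backend: deployed"; touch "$flag" ) &

# "frontend-deploy" waits at the barrier until the backend signals.
( while [ ! -e "$flag" ]; do sleep 0.1; done
  echo "frontend: proceeding after backend" ) &

wait
rm -f "$flag"
```

However the scheduler interleaves the two jobs, the backend line always prints first, because the frontend job cannot pass the flag check until the backend has created the file.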
+
+![](./static/deploy-multiple-services-simultaneously-using-barriers-50.png)
+
+## Next Steps
+
+* [Continuous Verification: Wait Before Execution](../../continuous-verification/continuous-verification-overview/concepts-cv/cv-strategies-and-best-practices.md#wait-before-execution)
+* [Synchronize Workflow Deployments using Barriers](synchronize-workflows-in-your-pipeline-using-barrier.md)
+
diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/integrate-tests-into-harness-workflows.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/integrate-tests-into-harness-workflows.md
new file mode 100644
index 00000000000..4114a7d707b
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/integrate-tests-into-harness-workflows.md
@@ -0,0 +1,427 @@
+---
+title: Integrate Tests into Harness Workflows
+description: Integrate unit, functional, and smoke and sanity tests into Harness Workflows.
+sidebar_position: 250
+helpdocs_topic_id: yaojicax61
+helpdocs_category_id: a8jhf8hizv
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+You can integrate testing into Harness Workflows, such as unit, functional, and smoke and sanity tests.
+
+In this topic we will walk you through common test integrations.
+
+### Before You Begin
+
+* **Application used in this topic** — This topic uses a simple Java application that exposes a REST endpoint: the [Hello World Quarkus app](https://quarkus.io/guides/getting-started). It includes unit tests using the [REST Assured](http://rest-assured.io/) Java DSL.
+* [Artifact Build and Deploy Pipelines](https://docs.harness.io/category/j1q21aler1-build-deploy) — Review these How-tos to learn about CI/CD integration in Harness.
+
+### Visual Summary
+
+Here is a completed Harness Pipeline execution. The Pipeline incorporates the Jenkins unit tests, third-party functional tests, and HTTP smoke tests. In this topic we will be exploring the different tests used in the Pipeline.
+
+![](./static/integrate-tests-into-harness-workflows-58.png)
+
+The Pipeline skips approval gates and other stages, such as creating a change ticket and updating Jira. These are typically added to production Pipelines.
+
+See [Approvals](../approvals/approvals.md), [Jira Integration](jira-integration.md), and [ServiceNow Integration](service-now-integration.md).
+
+### Review: Harness CI Connectors
+
+You can use Harness to run a build or test process via Jenkins, Bamboo, Shell Script, or any CI tool.
+
+First, you need to connect Harness with Jenkins, Bamboo, or another CI tool.
+
+For Jenkins and Bamboo connections, see [Add Artifact Servers](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server).
+
+For integrating CI using Shell Scripts, see [Using the Shell Script Command](capture-shell-script-step-output.md).
+
+### Use Case 1: Use an Existing Jenkins Pipeline
+
+Many Harness customers use the Harness Jenkins integration to build and collect artifacts as part of their Pipelines.
+
+Customers also reuse existing Jenkins pipelines to run tests against deployments in different environments, such as QA, UAT, SIT, etc.
+
+Let's look at a Jenkins pipeline execution that uses parameters to skip the build stage and run tests.
+
+![](./static/integrate-tests-into-harness-workflows-59.png)
+
+In Harness, we'll execute this Jenkins pipeline as part of a deployment Workflow.
+
+The first step is to add a Jenkins Artifact Server in Harness, as described in [Add Artifact Servers](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server).
+
+In this example, we are running Jenkins locally:
+
+![](./static/integrate-tests-into-harness-workflows-60.png)
+
+We will create a Harness SSH deployment. This is also called a [Traditional deployment](https://docs.harness.io/article/6pwni5f9el-traditional-deployments-overview).
+
+The same approach works for other types of deployments, such as Kubernetes, ECS, Helm, Pivotal, and so on.
For deployments, you would use the corresponding Cloud Providers, and Service and Workflow deployment types.
+
+Next, we create the Harness Application with the following components:
+
+* **Cloud Provider** — We are using the Physical Data Center Cloud Provider to connect to the VMs where we will deploy our artifact and run our tests:
+
+  ![](./static/integrate-tests-into-harness-workflows-61.png)
+
+* **Service** — We are using a Service with a **Secure Shell (SSH)** deployment type and a **Java Archive (JAR)** artifact. It simply copies the artifact to the target hosts, and installs and runs the application.
+
+  ![](./static/integrate-tests-into-harness-workflows-62.png)
+
+* **Environment** and **Infrastructure Provisioner** — Our Environment has an Infrastructure Provisioner that uses the Physical Data Center Cloud Provider to connect to the target VMs.
+
+  ![](./static/integrate-tests-into-harness-workflows-63.png)
+
+* **Workflow** — We created a Workflow to deploy our artifact and run our tests. The Workflow deployment type is Basic. A Basic Workflow simply selects the nodes defined in the Infrastructure Provisioner and deploys the Service.
+
+  ![](./static/integrate-tests-into-harness-workflows-64.png)
+
+1. Click **Select Nodes**. In Select Nodes, you select the target hosts where the application will be deployed.
+
+   ![](./static/integrate-tests-into-harness-workflows-65.png)
+
+The **Host Name(s)** IP address is one of the hosts you identified in the Infrastructure Definition **Host Name(s)** setting.
+
+Typically, you would have many Environments such as Dev, QA, UAT, SIT. Each Environment contains Infrastructure Definitions that can be used for any Service.
+
+This Environment configuration enables the same Workflow to run using different Environments without any additional effort.
+
+**Use Infrastructure Definitions for Testing** — Another Environment setup variation is to use two Infrastructure Definitions per Environment.
For example, a QA-API Infrastructure Definition to deploy the micro-service, and a QA-Test Infrastructure Definition to run the test client. Using a Harness Pipeline, you could run two different Workflows in sequence to deploy the Service on all the hosts configured in QA-API and run the tests on all hosts configured in QA-Test. + +In the example in this topic, we are using a simple Workflow that deploys and runs the test on the same test host. + +1. In step **3. Deploy Service**, add a Jenkins step. +2. To configure the Jenkins step with the Jenkins job for your test, select the Jenkins Artifact Server you added in **Jenkins Server**. +3. In **Job Name**, select the name of the job for your tests. Harness populates the **Job Parameters** automatically. + + ![](./static/integrate-tests-into-harness-workflows-66.png) + +4. Add values for the job parameters Harness automatically populates. + +You can use Harness variable expressions for values, such as Service or Workflow variables. Users can assign values when the Workflow is deployed. See [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables). + +Let's look at the Workflow execution. + +As you can see, the test client is run on the selected test host using the Jenkins pipeline. + +![](./static/integrate-tests-into-harness-workflows-67.png) + +The result of the Jenkins execution is displayed in **Details**. The pipeline output is also displayed in **Console Output**. + +The Job Status is displayed as **SUCCESS**. You can verify the status in the Jenkins Blue Ocean UI and check the Test Results. + +![](./static/integrate-tests-into-harness-workflows-68.png) + +Using Jenkins in Workflows allows you to leverage your existing Jenkins infrastructure as part of your deployment. You can now run tests and take advantage of Jenkins' distributed testing using nodes/agents (shared test infrastructure resources).
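Under the hood, the Jenkins step triggers the job over Jenkins' remote access API and polls for the result. If you want to confirm outside Harness that your job accepts the parameters you plan to pass, a minimal sketch follows; the Jenkins URL, job name, and the `SKIP_BUILD`/`TARGET_ENV` parameter names are assumptions for illustration, not part of the example above:

```shell
# Build the parameterized trigger URL for Jenkins' buildWithParameters endpoint.
# The job name and parameter names are hypothetical; substitute your own.
build_url() {
  echo "$1/job/$2/buildWithParameters?SKIP_BUILD=true&TARGET_ENV=$3"
}

JENKINS_URL="http://localhost:8080"
URL=$(build_url "$JENKINS_URL" "deploy-and-test" "qa")
echo "$URL"

# To actually trigger the job (requires Jenkins credentials):
# curl -X POST -u "$JENKINS_USER:$JENKINS_API_TOKEN" "$URL"
```

The Harness Jenkins step performs this call for you; the sketch is only a quick way to sanity-check the job and its parameters before wiring it into a Workflow.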
+ +### Use Case 2: Use Third Party Testing Tools + +Various testing tools perform functional, load, and stress tests as part of a release. Most of the tests are automated, but some tests can be performed manually. + +Let's look at a Workflow using the [newman CLI for Postman](https://github.com/postmanlabs/newman) to run automated tests. + +Usually, the test client hosts already have the required tools installed, but in this section we will demonstrate how to use a Docker image so that any test client host can run these tests. The test client hosts must be running Docker. + +Let's look at a Basic **Secure Shell (SSH)** Workflow that uses the same Service and Infrastructure Definition we used in the previous section. + +1. In the Workflow, in step **3. Deploy Service**, add a Shell Script step. + +The Shell Script step runs the Postman collection to execute the tests. Here is the Postman collection script: + + +Postman collection script + +``` +cat <<_EOF_ > /tmp/getting-started.json +{ +  "info": { +    "_postman_id": "ddf9e653-87d9-4d6c-8b45-fa5772929169", +    "name": "getting-started", +    "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json" +  }, +  "item": [ +    { +      "name": "quarkus-test", +      "item": [ +        { +          "name": "quarkus-loadtest", +          "request": { +            "method": "GET", +            "header": [], +            "url": { +              "raw": "http://172.28.128.3:8080/hello/greeting/1234", +              "protocol": "http", +              "host": ["172.28.128.3"], +              "port": "8080", +              "path": ["hello", "greeting", "1234"] +            } +          }, +          "response": [] +        } +      ], +      "protocolProfileBehavior": {} +    }, +    { +      "name": "postman-echo.com/get", +      "request": { +        "method": "GET", +        "header": [], +        "url": { +          "raw": "http://172.28.128.3:8080/hello", +          "protocol": "http", +          "host": ["172.28.128.3"], +          "port": "8080", +          "path": ["hello"] +        }, +        "description": "Initial" +      },
+      "response": [] +    } +  ], +  "protocolProfileBehavior": {} +} +_EOF_ + +docker run -v /tmp:/tmp -t postman/newman:alpine run /tmp/getting-started.json +``` + +You can see the command to run the collection at the bottom: + + +``` +docker run -v /tmp:/tmp -t postman/newman:alpine run /tmp/getting-started.json +``` +When the Workflow is executed, the console output displays the results of the newman CLI. + +![](./static/integrate-tests-into-harness-workflows-69.png) + +You can use the same method to run other third-party automated testing tools *that can be packaged as a Docker image*, such as JMeter. + +#### Test Execution Host Options + +In some cases, the third-party tool or home-grown testing framework is installed and configured on a specific host instead of packaged as a Docker image. + +In these cases, the Shell Script step can be run on the designated host. For example, you could have a designated test host that runs load testing, and another for performance testing. + +Here's how the Shell Script step we used is set up: + +![](./static/integrate-tests-into-harness-workflows-70.png) + +You have the following options for where to run the Shell Script step: + +* Run on one or more hosts as defined by the **Select Nodes** step of the Workflow. +This could be all the hosts configured in the Infrastructure Definition, a percentage, such as 50%, or a count, such as 2 nodes. Here is an example using 50%: + + ![](./static/integrate-tests-into-harness-workflows-71.png) + +* Run the tests on any Delegate or a specific Delegate using Delegate **Tags**. + + ![](./static/integrate-tests-into-harness-workflows-72.png) + +* Run the tests on a specific test node using the **Target Host** option and the built-in Harness `${instance.hostName}` expression. See [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables).
+ + ![](./static/integrate-tests-into-harness-workflows-73.png) + +When selecting the target nodes to run a Docker-based test, ensure the nodes are pre-configured with Docker. + +**Integrate Manual Testing** — To integrate manual testing in your Harness Pipeline, you simply add an Approval stage in the Pipeline. The Approval stage notifies the test team to execute tests. The team can approve or reject the release based on the test results. For more information about Approvals, see [Approvals](../approvals/approvals.md). + +### Use Case 3: Run HTTP Tests using HTTP Step + +A new application can pass system tests, load tests, and so on, but these tests are typically not executed on the production deployment. + +You still need a way to perform basic smoke or sanity tests before routing production traffic to the new version of the application. + +The Harness HTTP Workflow step allows you to perform basic smoke or sanity tests without any dependency on third-party tools or having test code in the production environment. + +To demonstrate, create a Basic Workflow using the same Service and Infrastructure Definition we've been using so far. + +1. In the Workflow, in step **3. Deploy Service**, add an [HTTP command](using-the-http-command.md). +2. Configure the HTTP step to invoke a REST endpoint using GET and assert/verify the response: + +![](./static/integrate-tests-into-harness-workflows-74.png) + +The benefit of this simple test is clear when there is a production deployment failure. + +If a failure occurs, the Harness Workflow will roll back to the previous successful deployment automatically and immediately. + +You can also run multiple HTTP steps in parallel by selecting the **Execute steps in Parallel** option.
+ +![](./static/integrate-tests-into-harness-workflows-75.png) + +Let's look at the Workflow deployment to see the details of the test: + +![](./static/integrate-tests-into-harness-workflows-76.png) + +### Use Case 4: Run Docker-based Tests + +This use case is a variation of [Use Case 2: Use Third Party Testing Tools](#use_case_2_use_third_party_testing_tools). + +In this case, a Docker image containing the tests is pushed to a Docker registry or artifactory server from a CI tool such as Jenkins. This Docker image is then used to run tests as a container in a Kubernetes cluster or using the Docker CLI on a test node. + +For this example, the Docker image was built using an Apache Maven base image. The Maven project with test sources and the pom.xml was copied into the image. + +We use two Build Workflows in this Pipeline, each with a separate Service: one Workflow to deploy the application and the second to run the Docker-based test. + +![](./static/integrate-tests-into-harness-workflows-77.png) + +The Harness Service used by the test Workflow is set up with the Docker test image and a Harness Exec command that uses the Docker CLI. + +![](./static/integrate-tests-into-harness-workflows-78.png) + +The **run maven test** Exec command contains the Docker CLI commands to invoke the tests. + + +``` +echo "Running Maven Test on ${instance.hostName}" + +# Create the Maven repository cache volume if it doesn't exist yet +if ! docker volume inspect maven-repo > /dev/null 2>&1; then +  docker volume create maven-repo +fi + +# Run the tests, mounting the cache volume and pointing the test client at the deployed REST API +docker run -it -v maven-repo:/root/.m2 -w /build ${artifact.source.repositoryName} mvn -Dserver.host=http://${demo.MYHOST} -Dserver.port=8080 test +``` +This enables any test host in the infrastructure to execute the tests if it's running Docker and has connectivity to the node hosting the REST API. + +The Artifact Source in the Service is named **quarkus-test-image**. It uses a Harness Docker Hub Artifact Server connection to a Docker registry.
+ +![](./static/integrate-tests-into-harness-workflows-79.png) + +The Docker image is referenced in the **run maven test** Exec command script as `${artifact.source.repositoryName}`. + +You can modify the script to specify an image tag/version using `${artifact.buildNo}`. See [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables). + +We are using the latest image (the default). The REST API endpoint listed in the script is at `${demo.MYHOST}`. This is a variable published from a Shell Script step in the previous Workflow in the Pipeline. `${demo.MYHOST}` is the hostname of the instance where the application was deployed. + +![](./static/integrate-tests-into-harness-workflows-80.png) + +Also, note the use of the Docker volume `maven-repo`. This allows subsequent runs of the test to use the Maven repository cached on the test node. + +Let's look at the Workflow execution. + +![](./static/integrate-tests-into-harness-workflows-81.png) + +You can see the output of the Exec command in the Details: + + +``` +Running Maven Test on 172.28.128.6 +[INFO] Scanning for projects... +[INFO] +[INFO] ------------------------------------------------------------------------ +[INFO] Building getting-started 1.0-SNAPSHOT +[INFO] ------------------------------------------------------------------------ +[INFO] +[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ getting-started --- +[INFO] Using 'UTF-8' encoding to copy filtered resources. +[INFO] Copying 2 resources +[INFO] +[INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ getting-started --- +[INFO] Changes detected - recompiling the module! +[INFO] Compiling 2 source files to /build/target/classes +[INFO] +[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ getting-started --- +[INFO] Using 'UTF-8' encoding to copy filtered resources.
+[INFO] skip non existing resourceDirectory /build/src/test/resources +[INFO] +[INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ getting-started --- +[INFO] Changes detected - recompiling the module! +[INFO] Compiling 1 source file to /build/target/test-classes +[INFO] +[INFO] --- maven-surefire-plugin:2.22.1:test (default-test) @ getting-started --- +[INFO] +[INFO] ------------------------------------------------------- +[INFO] T E S T S +[INFO] ------------------------------------------------------- +[INFO] Running org.acme.quickstart.GreetingResourceTest +[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.494 s - in org.acme.quickstart.GreetingResourceTest +[INFO] +[INFO] Results: +[INFO] +[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0 +``` +With Docker-based tests, provisioning and setting up shared test infrastructure is simple: you only need to install Docker. + +The Docker image includes the necessary tools and libraries, so any test node can execute the tests, whether they are implemented in Java, Node.js, or another stack. + +Since the tests are run as Docker containers, environment variables or files from previous runs do not persist between runs, resulting in repeatable tests. + +### Review + +This article showed you some of the benefits of integrated testing in Harness: + +* Harness supports the reuse of existing scripts. +* Harness supports tools that integrate tests with the deployment release process. +* You can reuse and leverage existing testing tools and scripts, but also simplify your custom scripts to reduce maintenance. +* DevOps teams can track execution and view the results from the single Harness UI.
+ +### Next Steps + +* [Artifact Build and Deploy Pipelines Overview](https://docs.harness.io/article/0tphhkfqx8-artifact-build-and-deploy-pipelines-overview) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/jira-integration.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/jira-integration.md new file mode 100644 index 00000000000..845a42bf595 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/jira-integration.md @@ -0,0 +1,274 @@ +--- +title: Jira Integration +description: Create and update Jira issues from Harness, and approve deployment stages using Jira issues as part of Workflow and Pipeline approvals. +sidebar_position: 160 +helpdocs_topic_id: 077hwokrpr +helpdocs_category_id: a8jhf8hizv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +[Atlassian Jira](https://www.atlassian.com/software/jira) provides dev teams with project management and issue tracking throughout the software delivery lifecycle. Harness integrates with Jira to enable you to track the progress of deployments (by creating and updating Jira issues from Harness), and to approve deployment stages (using Jira issues as part of Workflow and Pipeline approvals). + +For example, you could create a Jira issue when your deployment Pipeline execution starts, and update the same Jira issue when each Workflow in the Pipeline completes successfully. And you can use the Jira issue's status to approve or reject a deployment. + +### Add Jira as a Collaboration Provider + +To use Jira integration in your Workflow or Pipeline, you need to add a Jira account as a Harness Collaboration Provider. For instructions and required permissions, see [Add Jira Collaboration Provider](https://docs.harness.io/article/bhpffyx0co-add-jira-collaboration-provider). + +![](./static/jira-integration-121.png) + +Once you have added a Jira account to Harness, you can integrate Jira into your Workflows and Pipelines.
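Harness validates the connection when you add the Collaboration Provider, but it can be useful to sanity-check the Jira account's credentials yourself first. A sketch against Jira's REST API `/rest/api/2/myself` endpoint; the URL, email, and API token below are placeholder assumptions:

```shell
# Build the basic-auth header Jira Cloud expects: "<email>:<api token>", base64-encoded.
auth_header() {
  printf 'Authorization: Basic %s' "$(printf '%s:%s' "$1" "$2" | base64 | tr -d '\n')"
}

# Placeholder values; substitute your own account details.
JIRA_URL="https://yourcompany.atlassian.net"
EMAIL="user@example.com"
API_TOKEN="your-api-token"

echo "would call $JIRA_URL/rest/api/2/myself"
# Uncomment to run against a real instance; a 200 response with your profile
# confirms the credentials Harness will use:
# curl -s -H "$(auth_header "$EMAIL" "$API_TOKEN")" "$JIRA_URL/rest/api/2/myself"
```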
+ +### Create a Jira Issue Using a Workflow + +You can create a Jira issue as a step in the execution of a Workflow. You can add a Jira step to any of the Workflow deployment steps (for example, Setup Container, Deploy Containers, Verify Service, and Wrap Up) and Rollback steps. + +When you add a Workflow step that *creates* a Jira issue (as opposed to [updating](#update_a_jira_issue_using_a_workflow) an issue), the Workflow will create a new, independent Jira issue every time it is run. + +To create a Jira issue as part of a Workflow, do the following: + +1. Add a Jira account as a Harness Collaboration Provider, as described in [Add Jira Collaboration Provider](https://docs.harness.io/article/bhpffyx0co-add-jira-collaboration-provider). +2. Create a new Harness Workflow, or open an existing Workflow. +3. In any of the deployment or rollback steps, click **Add Step**. The **Add Step** settings appear. + + ![](./static/jira-integration-122.png) + +4. In **Collaboration**, click **Jira**. The **Jira** settings appear. + + ![](./static/jira-integration-123.png) + +The Jira dialog has the following fields: + +* **Title** – By default, the step is titled **Jira**. If you are creating a Jira issue, you might want to rename the step **Jira Creation**, for example. +* **Request Type** – Select **Create an Issue**. **Update an Issue** is discussed in [Update a Jira Issue Using a Workflow](#update_a_jira_issue_using_a_workflow). +* **Jira Connector** – Select the Jira account to use by selecting the Jira Collaboration Provider set up for that account. For more information, see [Add Jira Collaboration Provider](https://docs.harness.io/article/bhpffyx0co-add-jira-collaboration-provider). +* **Project** – Select a Jira project from the list. The Jira project is used to create the issue key and ID when the issue is created. The unique issue number is created automatically by Jira. +* **Issue Type** – Select a Jira issue type from the list of types in the Jira project you selected.
+* **Priority** – Select a priority for the Jira issue. The list is generated from the Jira project you selected. +* **Labels** – Add labels to the issue. These will be added to the Jira project you selected. +* **Summary** – Required. Add a summary for the Jira issue you are creating. This is commonly called an issue title. +* **Description** – Add the issue description. You can enter text and expressions together. +* **Output in the Context** – Click this option to create and pass variables from this Jira issue to another step in this Workflow or to a Pipeline step. For more information, see [Jira Issue Variables](#jira_issue_variables). +* **Issue Fields** – You can enable this control to target specific fields within a Jira issue. For more information, see [Jira Custom Fields](#jira_custom_fields). + +Harness supports the following for Jira settings: + +– Enter all values in English only. Summary, Description, and all other fields must be in English. +– You can use expressions for all the fields except **Jira Connector**. + +When the **Jira** dialog is complete, it will look something like this: + +![](./static/jira-integration-124.png) + +Note the use of Harness variables in the **Summary** and **Description** fields: + + +``` +Deployment url : ${deploymentUrl} +Artifact: ${artifact.displayName} +Build number: ${artifact.buildNo} +``` +The variables will be replaced at runtime with deployment information. For general information on Harness variables, see [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables). For specifics, see [Jira Issue Variables](#jira_issue_variables) below. + +When the Workflow is deployed, in the Harness Manager **Deployments** page, you can see a link to the Jira issue created by the Jira step: + +![](./static/jira-integration-125.png) + +Click the link to see the Jira issue that's been created: + +![](./static/jira-integration-126.png) + +Note how the Harness variables were replaced with values.
Try it yourself. + +### Jira Issue Variables + +When you add a Jira step to a Workflow to create a Jira issue, you can create an output variable that can be used to reference that Jira issue in other Workflow steps or Pipelines that use that Workflow. + +Using Jira issue variables involves three steps: + +1. Identify a new or existing Jira issue using a Jira step in a Workflow. +2. Create an output variable in the step for the Jira issue. +3. Reference the Jira issue in a Workflow, Phase, or Pipeline using the output variable and a Jira issue property, such as an issue ID. + +For example, to reference a Jira issue, in the Jira step in a Workflow, you create an output variable, such as **Jiravar**. + +![](./static/jira-integration-127.png) + +The output variable identifies the Jira issue you created. When you reference the Jira issue in another Workflow, Phase, or Pipeline step, you use the `issueId` property. For example, `${Jiravar.issueId}`. + +Presently, the only Jira issue elements you can reference are the Jira issue Key or ID (using `issueId` or `issueKey`) and the Jira **Description** (using `issue.description`). Harness plans to add variables for more Jira issue elements later. + +To create a Jira issue variable, do the following: + +1. In a Workflow, in any of the deployment or rollback steps, click **Add Step**. The **Add Step** settings appear. +2. In **Collaboration**, click **Jira**. The **Jira** dialog appears. +3. Click **Create an Issue** and fill out the Jira dialog. In this example, we create a new Jira issue, but you can also create an output variable if the **Update an Issue** settings are used. +4. Click **Output in the Context**. The **Variable Name** and **Scope** settings appear.![](./static/jira-integration-128.png) +5. In **Variable Name**, enter a name for your variable, such as **Jiravar**. +6. In **Scope**, select **Pipeline**, **Workflow**, or **Phase**. + +The **Pipeline** scope is the widest, and includes the **Workflow** and **Phase** scopes.
Next widest is **Workflow**, which includes any phases within that Workflow. And finally, **Phase** scopes the variable to the Workflow phase in which it was created. + +For example, a variable with a **Pipeline** scope can be used by any step in the Pipeline that uses this Workflow, such as an **Approval Step**. For this example, we will use the **Pipeline** scope. +7. Click **Submit**. The variable is created. + +Next, let's reference the variable, named **Jiravar**, in another Workflow step, and then a Pipeline step. + +1. In the same Workflow, in any of the deployment or rollback steps, click **Add Step**. The **Add Step** settings appear. +2. In **Collaboration**, click **Jira**. The **Jira** dialog appears. We will update the Jira issue in this step, so you might want to rename the step title to **Jira Update**. +3. Click **Update an Issue**. The settings for updating a Jira issue appear.![](./static/jira-integration-129.png) +4. Select the same **Jira Connector** and **Project** that you used when you added the Jira step to create the issue. +5. In **Key/Issue ID**, enter the variable that references the Key or ID of the issue you created. You reference the **Key/ID** using the property `issueId`. In our example, we created a variable named **Jiravar**, so the variable reference is `${Jiravar.issueId}`. +6. Next, enter text to update the Jira issue referenced by `${Jiravar.issueId}` using the **Update Summary** and **Update Status** fields. +7. In **Description**, you can reference the **Description** of the Jira issue you output using `${Jiravar.issue.description}`. + +Next, let's use the `${Jiravar.issueId}` variable in a Pipeline. For information on creating Pipelines, see [Pipelines](../pipelines/pipeline-configuration.md). + +1. Create or use an existing Pipeline. +2. Add the Workflow using the `${Jiravar.issueId}` variable to the Pipeline. +3.
Click the **+** button to insert a stage into the Pipeline.![](./static/jira-integration-130.png)The new stage settings appear. +4. Click **Approval Step**. The dialog changes to display the approval settings. +5. In **Ticketing System**, select **Jira Service Desk**. The **Jira** settings appear.![](./static/jira-integration-131.png) +6. Select the **Jira Connector** and **Project** that you used when creating and updating the issue. +7. In **Key/Issue ID**, enter the Jira issue variable you created, `${Jiravar.issueId}`. +8. Fill out the rest of the **Approval Step** as described in [Jira Approvals](../approvals/jira-based-approvals.md). +9. Click **Submit**. + + +### Jira Custom Fields + +Once you've filled in the **Jira** dialog's required fields for **Jira Connector**, **Project**, and **Issue Type**, the dialog's **Issue Fields** control is enabled. + +1. Open the **Issue Fields** pop-up to select additional fields within the Jira issue you're creating or updating. +![](./static/jira-integration-132.png) +2. As you select fields, they're added below the pop-up. You can then configure these fields with the values you want to apply to the Jira issue you're targeting.![](./static/jira-integration-133.png) +3. When your Project or Issue Type is templatized, **Issue Fields** lists **Name** and **Value**. Click **Add** and enter the details in the Name and Value fields. You can enter text and expressions together. If you enter a variable expression in **Project**, the project name is converted to a key. For example, `CD NextGen` is converted to `${workflow.variables.CDNG}`.![](./static/jira-integration-134.png) + +Harness supports only Jira fields of type `Option`, `Array`, `Any`, `Number`, `Date`, and `String`. Harness does not integrate with Jira fields that manage users, issue links, or attachments. This means that Jira fields like Assignee and Sprint are not accessible in Harness' Jira integration.
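These field types map onto the `fields` object of Jira's REST API, which is what a create or update ultimately populates. A hedged sketch of what such a payload looks like; the project key, issue type, and `customfield_*` IDs here are hypothetical, since every Jira project assigns its own custom field IDs:

```shell
# Write an illustrative create-issue payload showing the supported field types:
# Option (object with "value"), Array, Number, and Date (ISO-formatted string).
cat > /tmp/issue.json <<'EOF'
{
  "fields": {
    "project": { "key": "TJI" },
    "issuetype": { "name": "Task" },
    "summary": "Deployment of build 123",
    "customfield_10001": { "value": "Green" },
    "customfield_10002": ["qa", "release"],
    "customfield_10003": 42,
    "customfield_10004": "2021-06-01"
  }
}
EOF
echo "payload written to /tmp/issue.json"

# To create the issue directly against Jira (Harness does this for you):
# curl -s -X POST -H 'Content-Type: application/json' \
#   -u "$EMAIL:$API_TOKEN" -d @/tmp/issue.json "$JIRA_URL/rest/api/2/issue"
```

User-type fields such as Assignee are deliberately absent here, matching the limitation described above.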
+ +#### User Fields + +Currently, this feature is behind the feature flag `ALLOW_USER_TYPE_FIELDS_JIRA`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. + +When you select user-based fields from **Issue Fields**, such as **Reporter**, you can click in the fields produced and enter the first letters of a name. Harness automatically searches Jira for the name. Click the name to add it. + +![](./static/jira-integration-135.png) + +You can use a Jira user's username or email. + +User-based fields also support [Workflow Variables](add-workflow-variables-new-template.md). + +#### Harness Expression Support in Issue Fields + +In the **Issue Fields** section, all select and multi-select settings can use [Service](../setup-services/add-service-level-config-variables.md) and [Workflow variable expressions](add-workflow-variables-new-template.md). + +Whether a field setting is select or multi-select depends on how you designed the field in Jira. + +![](./static/jira-integration-136.png) + +The variables must already exist and be resolvable at this point in the Workflow. + +For example, if you use a Service variable in the Jira step in the **Pre-deployment** section of a Canary Workflow, the variable is not available. The Service is selected in the Phases of the Canary Workflow, so that Service variable can only be used in those Phases. + +Workflow variables can be used anywhere in the Workflow. + +When you deploy your Workflow, you will provide values for any variables. If you are using variables for Custom Fields, you can enter a comma-separated list as your Workflow variable value, such as `test, hello, goodbye`.
+ +See: + +* [What is a Harness Variable Expression?](https://docs.harness.io/article/9dvxcegm90-variables) +* [Add Service Config Variables](../setup-services/add-service-level-config-variables.md) +* [Set Workflow Variables](add-workflow-variables-new-template.md) + +### Jira Date Field Support + +Among the custom fields Harness supports are Baseline End Date and Start Date Time. If your Jira project uses these fields, they are available in Custom Fields: + +![](./static/jira-integration-137.png) + +Once you have selected these fields, the **Baseline end date** and **Start Date Time** settings appear. + +Click in these fields to use the date and time selectors. + +![](./static/jira-integration-138.png) + +You can also build advanced dates using Workflow variables and the `current()` function: + +![](./static/jira-integration-139.png) + +And you can use these together with operators: + +![](./static/jira-integration-140.png) + +### Update a Jira Issue Using a Workflow + +You can update an existing Jira issue in two ways: + +* [Update with Jira Keys/IDs](#update_by_ids) +* [Update with a Variable](#update_by_variable) + + +#### Update with Jira Keys/IDs + +To update by explicit Keys/IDs, simply copy one or more Jira issue Keys/IDs from Jira. Then, in the Workflow Jira step's **Update an Issue** settings, paste them into the **Key/Issue ID** field. + +![](./static/jira-integration-141.png) + +To enter multiple issue IDs, you can use the characters `, : ;` or spaces as separators. Optionally, you can also enclose a group of issue IDs in the character pairs `{}` or `()` or `[]`. + +#### Update with a Variable + +As an alternative, you can use the `issueId` variable property to update one or more Jira issues that you created with Harness.
Using the variable we [created earlier](#jira_issue_variables), this example shows the field entry `${Jiravar.issueId}`: + +![](./static/jira-integration-142.png) +#### Example: Update by Variable + +To update a Jira issue in a Workflow using the `issueId` variable property, do the following: + +1. Create a new Jira issue using the **Jira** step in a Workflow, as described in [Create an Issue in a Workflow](#create_a_jira_issue_using_a_workflow). +2. In the **Jira** step, click **Output in the Context**. The **Variable Name** and **Scope** settings appear. +3. In **Variable Name**, enter a name for your variable, such as **Jiravar**. +4. In **Scope**, select **Pipeline**, **Workflow**, or **Phase**. +5. Click **Submit** to complete the Jira step. +6. Within the scope you set—which, in a Pipeline, can include another Workflow—click **Add Step**. The **Add Step** settings appear. +7. In **Collaboration**, click **Jira**. The **Jira** dialog appears. We will update the Jira issue in this step, so you might want to rename the step title to **Jira Update**. +8. Click **Update an Issue**. The settings for updating a Jira issue appear:![](./static/jira-integration-143.png) +9. Select the same **Jira Connector** and **Project** that you used when you added the Jira step to create the issue. +10. In **Key/Issue ID**, enter the variable expression that references the **Key/ID**(s) of the issue(s) you created. You reference a **Key/ID** using the property `issueId` or `issueKey`. In our example, when you created an output variable (in the Workflow's Jira step) that created the Jira issue, the variable was named **Jiravar**. So the variable reference is `${Jiravar.issueId}`. + +Now this Jira step will update the same Jira issue(s). +11. Next, enter text to update the Jira issue(s) referenced by `${Jiravar.issueId}`, using the **Update Summary** and **Update Status** fields. +12. 
In **Description**, you can reference the description field of the same issue using the variable `${Jiravar.issue.description}`, and append information to it, like this: + +![](./static/jira-integration-144.png) + +Here's an example where the Jira step creates a new Jira issue, relying on the output variable **Jiravar**. The variable is scoped to **Pipeline**, to enable another Workflow to update the Jira issue. + +Here's the Jira step in Harness that creates the Jira issue: + +![](./static/jira-integration-145.png) + +Here's the newly created ticket in Jira: + +![](./static/jira-integration-146.png) + +Here's the Jira step to *update* the Jira issue. Note the comment in the **Add Comment** field: `Prod Workflow Complete`. + +![](./static/jira-integration-147.png) + +**Updating the issue Description:** If you leave the **Description** setting empty, the description from the issue referenced using `${Jiravar.issueId}` is maintained. If you add text to the **Description** field, that text overwrites the text in the issue. You can use the variable `${Jiravar.issue.description}` to reference the original description, and then append information. The `issue.description` expression references the original description in the issue referenced with `${Jiravar.issueId}`. + +During deployment, Harness Manager's **Deployments** page contains a link to the updated Jira issue: + +![](./static/jira-integration-148.png) + +Here's the updated Jira issue in Jira. Note the Comment: `Prod Workflow Complete`. + +![](./static/jira-integration-149.png) + +#### Example: Update by Artifact Tag + +This second Workflow example uses a variable, combined with an artifact tag, to update a pair of Jira issues during deployment. In this example, the artifact is an AMI.
+ +See [AMI (Amazon Machine Image) Deployment](https://docs.harness.io/article/rd6ghl00va-ami-deployment) for background information about AMI deployment in Harness. + +In the AWS Console, the highlighted AMI (`ui-integration-test-v5`) has been assigned a Tag consisting of a key/value pair. The key is named `jiraIds`, and its value is the pair of Jira issue IDs we want to update: `TJI-1234, TJI-1235`. + +![](./static/jira-integration-150.png) + +In Harness, our Workflow to deploy this AMI contains a Jira step, which is executed after **Setup AutoScaling Group**: + +![](./static/jira-integration-151.png) + +In the Jira step, the **Key/Issue ID** field references the AMI's Tag key using the variable `${artifact.metadata.jiraIds}`. The **Add Comment** field specifies a simple test message to write to the linked Jira issues: + +![](./static/jira-integration-152.png) + +Once we run the deployment, upon successful setup of the Auto Scaling Group, the **Details** panel shows that Harness has updated the two intended Jira issues: + +![](./static/jira-integration-153.png) + +### Jira Resolution Support + +You can update the resolution setting of a Jira issue in the Jira step: + +![](./static/jira-integration-154.png) + +To add the resolution setting, in the Jira step, click **Issue Fields**, and then select **Resolution**. + +The **Resolution** setting is added. It is populated by the resolution settings in the Jira project you selected in **Project**. + +Once you deploy the Workflow, the Jira issue's **Resolution** status is updated with the value you selected: + +![](./static/jira-integration-155.png) + +You can also add a Harness Service or Workflow variable expression. See [Add Service Config Variables](../setup-services/add-service-level-config-variables.md) and [Set Workflow Variables](add-workflow-variables-new-template.md).
+
+### Jira Time Tracking Support
+
+Harness does not support Jira Time Tracking **legacy mode**. You can update the time tracking setting of a Jira Issue in the Jira step. To add the Time tracking setting, do the following:
+
+1. In the **Jira** step, click **Issue Fields**.![](./static/jira-integration-156.png)
+2. Select **Time tracking**.![](./static/jira-integration-157.png)
+3. Enter **Original** and **Remaining Estimate**.
+4. Click **Submit**.
+
+The Time tracking setting is added. Once you deploy the Workflow, the Jira issue's Time tracking status is updated with the value you selected. You can also add a Harness Service or Workflow variable expression. See [Add Service Config Variables](../setup-services/add-service-level-config-variables.md) and [Set Workflow Variables](add-workflow-variables-new-template.md).
+
+### Jira-based Approvals
+
+You can add approval steps in both Workflows and Pipelines, and can use Jira issues' Workflow statuses for approval and rejection criteria. For more information, see [Jira Approvals](../approvals/jira-based-approvals.md).
+
+### Create Jira Issues from Verification Events
+
+When a deployment fails, you can create a Jira issue directly from a deployment's verify step, or from 24/7 Service Guard. For details, see [File Jira Tickets on Verification Events](../../continuous-verification/tuning-tracking-verification/jira-cv-ticket.md).
+
diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/post-deployment-rollback.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/post-deployment-rollback.md
new file mode 100644
index 00000000000..53c2820d970
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/post-deployment-rollback.md
@@ -0,0 +1,123 @@
+---
+title: Rollback Deployments
+description: Use the Rollback Deployment option to undo your most-recent successful deployment. 
+sidebar_position: 220 +helpdocs_topic_id: 2f36rsbrve +helpdocs_category_id: a8jhf8hizv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The **Rollback Deployment** option initiates a rollback of your most-recent successful deployment. This allows rapid, predictable recovery from a deployment that succeeded on technical criteria, but that you want to undo for other reasons. + +![](./static/post-deployment-rollback-02.png) + +### Limitations + +* Post-deployment rollback is only supported in Workflows and Pipelines that deploy container images (Docker), AMI/ASG images, and traditional artifacts (ZIP, TAR, etc). Workflows and Pipelines that deploy only manifests or Helm charts with hardcoded artifacts in their specs are not supported. + +### Review: Platform and Workflow Support + +Rollback Deployment currently supports the following platforms and strategies: + +* **Kubernetes** deployments: Basic, Blue/Green, Canary, Rolling Workflows. +* **SSH** deployments: Blue/Green, Canary, and Basic Workflows. +* **PCF (Pivotal Cloud Foundry)** deployments: Blue/Green, Canary, and Basic Workflows. +* **WinRM (IIS and .NET)** deployments: Blue/Green, Canary, and Basic Workflows. +* **ECS** deployments: all Workflow types, and both EC2 and Fargate clusters. +* **AMI/ASG** deployments: Blue/Green, Canary, and Basic Workflows. + +Harness anticipates expanding this feature to other deployment platforms. 
+
+### Review: Required Permissions
+
+The Rollback Deployment option requires the following User Group Account and Application permissions:
+
+* **Account:** `Manage Applications`
+* **Application:** `Rollback Workflow`
+
+![](./static/post-deployment-rollback-03.png)
+
+You can also add the **Rollback Workflow** Application permission via the GraphQL API:
+
+
+```
+mutation {
+  updateUserGroupPermissions(input: {
+    clientMutationId: "123"
+    userGroupId: "Gh9IDnVrQOSjckFbk_NJWg"
+    permissions: {
+      appPermissions: {
+        actions:[ROLLBACK_WORKFLOW]
+        permissionType: ALL
+        applications: {
+          filterType: ALL
+        }
+        deployments: {
+          filterTypes: NON_PRODUCTION_ENVIRONMENTS
+        }
+      }
+    }
+  }) {
+    clientMutationId
+  }
+}
+```
+#### Rollback Workflow Added if Execute Workflow Was Used Previously
+
+All User Groups that had the **Execute Workflow** permission enabled will now have **Rollback Workflow** enabled as well. You can disable it if needed.
+
+### Step 1: Rollback a Deployment
+
+Before you begin, please review [Requirements and Warnings](post-deployment-rollback.md#requirements-and-warnings). To initiate a post-deployment rollback:
+
+1. Open your [Services Dashboard](https://docs.harness.io/article/c3s245o7z8-main-and-services-dashboards#services_dashboard).
+2. In the **Current Deployment Status** panel, click the More Options ⋮ menu beside the most-recent deployment. Then select **Rollback Deployment**.![](./static/post-deployment-rollback-04.png)The **Rollback Deployment** option appears only for the current deployment.
+3. In the resulting confirmation dialog, verify the deployment's details. 
If everything looks correct, click **Rollback**.![](./static/post-deployment-rollback-05.png)Harness then invokes the Workflow's configured [Rollback Strategy](workflow-configuration.md#rollback-steps), executing the same Rollback Steps as if the deployment had failed.![](./static/post-deployment-rollback-06.png)Once the rollback completes, your deployed instances will be returned to the state they were in before the most-recent deployment.![](./static/post-deployment-rollback-07.png)
+
+### Requirements and Warnings
+
+**Rollback Deployment** will execute Rollback Steps on your deployment according to the Workflow's current configuration. Make sure the Workflow (including any variables) has not been reconfigured since this most-recent deployment, or the rollback can have unpredictable results. A deployment's **Rerun** option will be unavailable during, and following, a post-deployment rollback. To use Post-Deployment rollback, the following requirements must be met:
+
+* There must be at least two successful deployments of the Workflow.
+* The **Workflow Type** cannot be a **Multi-Service Deployment.**![](./static/post-deployment-rollback-08.png)
+* A user's ability to invoke the **Rollback Deployment** option is based on their [User Group](https://docs.harness.io/article/ven0bvulsj-users-and-permissions) membership, and on corresponding role-based permissions.
+
+### AWS ASG Deployments
+
+Currently, this feature is behind the feature flag `WINRM_ASG_ROLLBACK`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. By default, Harness rolls back the instances that were initially deployed. When using Post-Deployment rollback, this list of instances could differ from the initial list.
+
+When the feature flag `WINRM_ASG_ROLLBACK` is enabled, Harness goes through the list of all hosts specified in the ASG selected in the Infrastructure Definition and checks whether each host has an active instance (triggered for rollback). 
If the host has an active instance, Harness includes that host in the rollback.
+
+This method ensures that all instances are rolled back, including those initially deployed and those added later by the ASG.
+
+Let's look at an example.
+
+If a Workflow uses an Infrastructure Definition that includes an AWS Cloud Provider and an ASG, then its Post-Deployment rollback is handled in the following way.
+
+![](./static/post-deployment-rollback-09.png)For each deployment, Harness calculates the percentage of instances that are deployed in every Workflow phase relative to the number of all instances that have been deployed. Harness uses this percentage to calculate the number of instances for rollback relative to all active instances at that moment.
+
+Let’s say before/during deployment, we have `i1,i2,...,i8` (8 instances) and we have a 4 phase Canary Workflow defined with these percentages:
+
+1. [Split: 10%] phase 1 (10%) - `i1`
+2. [Split: 20%] phase 2 (30%) - `i2,i3`
+3. [Split: 30%] phase 3 (60%) - `i4,i5`
+4. [Split: 40%] phase 4 (100%) - `i6,i7,i8`
+
+During Post-Deployment rollback we have 15 instances, some old and some new.
+
+`i1, i2, i3, i4, i7, i8, i9, i10, i11, i12, i13, i14, i15, i16, i17`
+
+Here's how rollback works:
+
+1. **Rollback phase 4,** Harness rolls back `i7, i8, i9, i10, i11, i12`. For phase 4, Harness deployed 3 instances of 8, which is 37.5%. Harness will roll back 37.5% of all active instances:
+n = 15 \* 37.5% = 5.625
+This is rounded to 6 (two old and four new).
+2. **Rollback phase 3,** Harness rolls back `i4, i13, i14, i15`. For phase 3, Harness deployed 2 instances of 8, which is 25%. Harness will roll back 25% of all active instances:
+n = 15 \* 25% = 3.75
+This is rounded to 4 (one old and three new).
+3. **Rollback phase 2,** Harness rolls back `i2, i3, i16, i17`. For phase 2, Harness deployed 2 instances of 8, which is 25%. 
Harness will roll back 25% of all active instances:
+n = 15 \* 25% = 3.75
+This is rounded to 4 (two old and two new).
+4. **Rollback phase 1,** Harness rolls back `i1`. In the last rollback phase, Harness rolls back all instances that are left (only `i1`).
+
diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/resource-restrictions.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/resource-restrictions.md
new file mode 100644
index 00000000000..b6ba6535e41
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/resource-restrictions.md
@@ -0,0 +1,85 @@
+---
+title: Control Resource Usage
+description: To queue resources, you can place capacity limits on the resources Harness requests during deployments.
+sidebar_position: 180
+helpdocs_topic_id: nxtsta7d3t
+helpdocs_category_id: a8jhf8hizv
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+To queue the number of resources Harness requests during a deployment, and prevent multiple Workflows, Workflow Phases, or Pipelines from requesting the same deployment environment resources at the same time, you can place capacity limits on the resources Harness requests.
+
+For example, a deployment cloud environment (AWS, GCP, Azure, etc) might limit host access to 5 at a time. You can use Resource Guard to ensure that the Workflow only requests access to 5 at the same time during deployment.
+
+Another common example is deploying multiple artifacts to a single Kubernetes namespace because the same Workflow is deployed by two people simultaneously. To avoid collisions, and to queue deployments, you can set a maximum capacity of one request to the namespace at a time.
+
+Resource Guards are Account-wide. This ensures that if a Resource Guard is placed in one Workflow, it will restrict any other Workflow in the Account from using the resource until it is safe to use. 
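Conceptually, a capacity-limited Resource Guard behaves like a counting semaphore with a wait queue: requests that fit within the capacity proceed, and the rest queue until running work releases the resource. The following Python sketch is an illustrative model of that queueing behavior, not Harness's implementation (the class and method names are invented for illustration):

```python
from collections import deque

class ResourceConstraint:
    """Illustrative model of a capacity-limited resource constraint.

    Requests acquire units up to `capacity`; excess requests queue
    in arrival (FIFO) order until running work releases units.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.in_use = 0
        self.waiting = deque()  # queued (name, units) requests

    def acquire(self, name, units=1):
        # Grant the request only if it fits within the remaining capacity.
        if self.in_use + units <= self.capacity:
            self.in_use += units
            return True   # request proceeds immediately
        self.waiting.append((name, units))
        return False      # request is queued

    def release(self, units=1):
        self.in_use -= units
        # Wake queued requests in arrival order while they fit.
        while self.waiting and self.in_use + self.waiting[0][1] <= self.capacity:
            _, needed = self.waiting.popleft()
            self.in_use += needed

# With capacity 1, two simultaneous deployments to the same namespace
# are serialized: the second waits until the first releases the resource.
namespace = ResourceConstraint(capacity=1)
print(namespace.acquire("deploy-A"))  # True: runs now
print(namespace.acquire("deploy-B"))  # False: queued behind deploy-A
namespace.release()                   # deploy-A finishes; deploy-B starts
print(namespace.in_use)               # 1
```

With a capacity of one and the namespace as the unit, this is exactly the "one request to the namespace at a time" behavior described above.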
+
+### Control Resource Usage
+
+Resource Guards are set up at the Harness Account level, and may be used throughout all Applications.
+
+To create a Resource Guard, do the following:
+
+1. In Harness, click **Setup**.
+2. In **Account**, click the vertical ellipsis, and then click **Resource Guard**.
+
+   The **Resource Restrictions** settings appear.
+
+   ![](./static/resource-restrictions-00.png)
+
+3. Click **Add Constraint**.
+
+4. Fill out the constraint fields:
+
+   * **Resource Constraint** - Enter a name for your resource constraint. You will use this name to select the Resource Constraint in a Workflow.
+   * **Capacity** - The maximum number of resources that may be consumed simultaneously.
+   * **Strategy** - Choose **FIFO** if you want the resource requests to be selected in the order they arrived. Choose **ASAP** if you want Harness to select the first pending resource request that matches the number of available resources.
+   As an **ASAP** example, imagine you have two resource requests: the first needs 2 hosts, and the second request needs 1 host. When 1 host becomes available, the second request is given the host because its request matches the number of available resources first.
+
+The **Current Usage** value identifies if the resource restriction is applied to a Workflow.
+
+### Apply Constraints in Workflows
+
+You can use your Resource Guards anywhere they are needed in a Workflow to queue resource requests.
+
+Only add Resource Guards to Workflow sections where the resource you reference can be obtained. For example, if you add Resource Guards to the **Pre-deployment Steps**, then Services and Infrastructure Definitions cannot be referenced, as Harness has not selected those at this stage of the Workflow. To apply a Resource Guard to a Workflow, do the following:
+
+1. In a Workflow, click **Add Step**. The **Resource Constraint** dialog appears.
+2. Fill out the constraint settings and click **SUBMIT**. The settings are described below. 
+
+#### Resource Constraint ID
+
+Select the **Resource Guard** to use at this stage of the Workflow. Note the capacity of the restriction. This is the maximum number of resources that may be consumed simultaneously.
+
+#### Required Resource Usage
+
+Enter the number of resources that are required at this step in the Workflow. If the number is more than the Capacity of the Resource Guard you selected, the Workflow will remain at this step until the required resource usage has been reached. This is how you queue resource usage.
+
+For example, if the **Capacity** is 4 and the **Required Resource Usage** is 5, Harness will use 4 resources simultaneously and wait until 1 resource becomes free so that it can use it for the 5th required resource.
+
+#### Unit
+
+The resource unit is any value that can identify a resource uniquely. You can use Harness built-in variables by entering `$` and selecting from the list.
+
+If you leave **Unit** empty, the Resource Constraint will be applied to all resources across your account. For example, to queue the services deployed to a Kubernetes namespace, you can enter `${infra.kubernetes.namespace}` in **Unit** and set the Resource Constraint's **Capacity** to **1**.
+
+To queue the services deployed to a particular Harness Infrastructure Definition that uses the namespace, you would enter `${infra.kubernetes.infraId}-${infra.kubernetes.namespace}` in **Unit**.
+
+The following image shows the **Resource Constraint** step in the Workflow with **Unit** set to `${infra.kubernetes.infraId}-${infra.kubernetes.namespace}` and the results of the deployed Workflow:
+
+![](./static/resource-restrictions-01.png)
+
+#### Scope
+
+Currently, the **Pipeline** option is behind the feature flag `RESOURCE_CONSTRAINT_SCOPE_PIPELINE_ENABLED`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+Scope this constraint to a Workflow, Workflow Phase, or Pipeline. 
+
+The scope determines what must be completed before the resource can be used by another Workflow, Workflow Phase, or Pipeline.
+
+For example, let's look at selecting **Pipeline** in **Resource Constraint**. Once selected, this Workflow is added to Pipeline A and Pipeline B.
+
+Pipeline A is run and then Pipeline B is run immediately afterwards. Pipeline B cannot use the same deployment environment resources as Pipeline A until Pipeline A has fully executed; Pipeline B is queued until Pipeline A completes, and then proceeds.
+
+#### Timeout
+
+Enter how long the Resource Guard should run before failing the step. If the timeout is reached before the Resource Guard is able to process all resource requests, it will fail the Workflow and rollback will occur.
+
diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/select-nodes-in-a-rolling-deployment-workflow.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/select-nodes-in-a-rolling-deployment-workflow.md
new file mode 100644
index 00000000000..08fe2d98ba9
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/select-nodes-in-a-rolling-deployment-workflow.md
@@ -0,0 +1,68 @@
+---
+title: Select Nodes in a Rolling Deployment Workflow
+description: Configure the Select Nodes step in a Rolling deployment Workflow.
+sidebar_position: 230
+helpdocs_topic_id: ax6mntmp3s
+helpdocs_category_id: a8jhf8hizv
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This topic explains how to configure the Select Nodes step in a Rolling deployment Workflow.
+
+Rolling deployments are supported for most platforms. This topic is only concerned with Rolling deployment Workflows that include the **Select Nodes** step. For a list of all platforms that support Rolling deployments, see [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). 
+
+
+### Before You Begin
+
+* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts)
+* [Deployment Concepts and Strategies](https://docs.harness.io/article/325x7awntc-deployment-concepts-and-strategies)
+
+### Step 1: Create a Rolling Workflow
+
+A Rolling Workflow with Select Nodes is supported for the following Harness Service deployment types:
+
+* Secure Shell (SSH)
+* WinRM (IIS .NET)
+
+When you have created a Rolling Workflow for one of these Service types, the **Select Nodes** step is added to the **Prepare Infra** section automatically.
+
+![](./static/select-nodes-in-a-rolling-deployment-workflow-51.png)
+
+### Step 2: Enter the Number of Rolling Instances
+
+In **Select Nodes**, the **Instance Count** setting refers to the number (count or percentage) of nodes to use when performing the rolling deployment.
+
+For example, if you enter **1 Count**, Harness will deploy to 1 node, and then roll onto the next node.
+
+If you enter **2 Count**, Harness will deploy to 2 nodes, and then roll onto the next 2 nodes.
+
+If you select **Percentage** in **Unit Type**, Harness calculates the percentage of available target hosts at deployment runtime.
+
+The number of available nodes is determined by the Infrastructure Definition used by the Workflow.
+
+For example, in the Infrastructure Definition for the **Secure Shell (SSH)** deployment type, you enter the hostnames/IP addresses for the target nodes in **Host Names**:
+
+![](./static/select-nodes-in-a-rolling-deployment-workflow-52.png)
+
+In the Infrastructure Definition for the **Windows Remote Management (WinRM)** deployment type, you can identify the hosts using **Tags**:
+
+![](./static/select-nodes-in-a-rolling-deployment-workflow-53.png)
+
+In both types, you need at least two nodes to perform a proper Rolling deployment. 
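The count-versus-percentage arithmetic above can be sketched as follows. This is an illustrative calculation only, not Harness's implementation; in particular, the exact rounding Harness applies at runtime is an assumption here:

```python
import math

def rolling_batch_size(total_hosts, value, unit_type):
    """Illustrative: number of nodes per rolling batch for a Count or Percentage setting."""
    if unit_type == "PERCENTAGE":
        # Assumed rounding: round down, but always roll at least one host per batch.
        return max(1, math.floor(total_hosts * value / 100))
    # COUNT: never exceed the number of available target hosts.
    return min(value, total_hosts)

# 4 target hosts with a 50 Percent setting rolls 2 nodes at a time;
# a 1 Count setting rolls one node at a time.
print(rolling_batch_size(4, 50, "PERCENTAGE"))  # 2
print(rolling_batch_size(4, 1, "COUNT"))        # 1
```

The total number of target hosts comes from the Infrastructure Definition, as described above.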
+
+### Option: Select Host Not in Infrastructure Definition
+
+Currently, this feature is behind the feature flag `DEPLOY_TO_INLINE_HOSTS` and available in SSH and WinRM deployments only. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+The nodes that appear in the Select Nodes **Host Name(s)** setting are taken from the Workflow's Infrastructure Definition, but you can enter alternate or additional nodes.
+
+In the following example, `host1` and `host2` are from the Workflow's Infrastructure Definition, and the remaining hosts are entered manually.
+
+![](./static/select-nodes-in-a-rolling-deployment-workflow-54.png)
+
+You can also enter [Workflow variable expressions](add-workflow-variables-new-template.md) that are resolved at runtime. The Workflow variables can be a list of hosts.
+
+### Next Steps
+
+* [Traditional Deployments Overview](https://docs.harness.io/article/6pwni5f9el-traditional-deployments-overview)
+* [IIS (.NET) Quickstart](https://docs.harness.io/article/2oo63r9rwb-iis-net-quickstart)
+
diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/send-an-email-in-your-workflow.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/send-an-email-in-your-workflow.md
new file mode 100644
index 00000000000..31fa0c03d43
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/send-an-email-in-your-workflow.md
@@ -0,0 +1,78 @@
+---
+title: Send an Email from Your Workflow
+description: You can use the Email Workflow step to send an email to registered Harness User email addresses as part of your Workflow. The Email step is different than the Workflow Notification Strategy, which no…
+sidebar_position: 270
+helpdocs_topic_id: q1xblriy6d
+helpdocs_category_id: a8jhf8hizv
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+You can use the Email Workflow step to send an email to registered Harness User email addresses as part of your Workflow. 
+
+The Email step is different from the Workflow Notification Strategy, which notifies Harness User Groups of different Workflow conditions. See [Add a Workflow Notification Strategy](add-notification-strategy-new-template.md).
+
+### Before You Begin
+
+* [Workflows](workflow-configuration.md)
+* [Add a Workflow Notification Strategy](add-notification-strategy-new-template.md)
+* [Add SMTP Collaboration Provider](https://docs.harness.io/article/8nkhcbjnh7-add-smtp-collaboration-provider)
+
+### Limitations
+
+* You can only use email addresses that are **registered** to Harness User accounts. This helps to ensure secure communication from your Workflow. If the Email step uses an unregistered email address, the email is not sent to that address.
+* If the email is addressed to registered and unregistered addresses, only the registered addresses receive the email.
+* The received email does not identify the Harness User who executed the Workflow in its **from** or **reply-to** fields.
+
+### Supported Platforms and Technologies
+
+See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms).
+
+### Option: Use Your Own SMTP Server
+
+By default, emails are sent using the built-in Harness default SMTP server.
+
+To use your own SMTP server, follow the steps in [Add SMTP Collaboration Provider](https://docs.harness.io/article/8nkhcbjnh7-add-smtp-collaboration-provider).
+
+Configuring your SMTP server is required only if you are using [Harness On-Prem](https://docs.harness.io/article/gng086569h-harness-on-premise-versions).
+
+### Step 1: Add the Email Step
+
+You can add the Email step to any section of a Workflow that allows steps.
+
+1. In your Harness Workflow, click **Add Step**.
+2. Click **Email** and then click **Next**.
+3. In **Name**, enter a name for the step.
+
+### Step 2: Enter Addresses and Message
+
+1. In **To** and **CC**, enter one or more email addresses of Harness Users. 
Email addresses are comma-separated. +2. Provide a **Subject** and **Body** message. + +The **Body** setting supports HTML. The **Subject** setting does not. + +### Option: Use Variable Expressions in Body + +You can use [Harness variables expressions](https://docs.harness.io/article/9dvxcegm90-variables) in the Body of the message. + +You can use built-in Harness expressions to display information about the deployment: + + +``` +Testing email variables: +
  • ${app.name}
  • ${workflow.displayName}
  • ${deploymentUrl}
+```
+Which are displayed in the delivered message:
+
+![](./static/send-an-email-in-your-workflow-237.png)
+
+You can use [Workflow variables](add-workflow-variables-new-template.md) also. You can use [Service Config variables](../setup-services/add-service-level-config-variables.md) but the Email step must be in a Workflow phase where the Service is used, and not in a Pre-deployment section of the Workflow.
+
+### Configure As Code
+
+To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button.
+
diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/service-now-integration.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/service-now-integration.md
new file mode 100644
index 00000000000..f580e100169
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/service-now-integration.md
@@ -0,0 +1,201 @@
+---
+title: ServiceNow Integration (FirstGen)
+description: Integrate Harness with ServiceNow (SNOW) to track and audit the progress of Harness deployments, and to approve or reject Pipeline stages from within ServiceNow.
+sidebar_position: 170
+helpdocs_topic_id: 7vsqnt0gch
+helpdocs_category_id: a8jhf8hizv
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Using [ServiceNow](https://docs.servicenow.com/) (SNOW) tickets from one or more ServiceNow instances, you can track and audit the progress of Harness deployments and Pipelines, and can approve or reject Pipeline stages.
+
+### ServiceNow Collaboration Provider
+
+To use ServiceNow integration in your Workflow or Pipeline, you must first add a ServiceNow account as a Harness Collaboration Provider. For instructions and required permissions, see [Add ServiceNow Collaboration Provider](https://docs.harness.io/article/vftxcr51xx-add-service-now-collaboration-provider). 
+
+![](./static/service-now-integration-159.png)Once you have added a ServiceNow account to Harness, you can integrate ServiceNow into your Workflows and Pipelines.
+
+### UTC Timezone Only
+
+The ServiceNow API only allows date and time values in the UTC timezone. Consequently, input for any datetime/time fields in Harness ServiceNow steps must be provided in UTC format, irrespective of time zone settings in your ServiceNow account.
+
+The timezone settings govern the display value of the settings, not their actual value.
+
+The display values in the Harness UI depend on ServiceNow timezone settings.
+
+### Create ServiceNow Tickets in a Workflow
+
+You can create and update ServiceNow tickets during the execution of a Harness Pipeline Stage or Workflow. This section will cover creating and updating tickets in a Workflow. We will create a ServiceNow ticket in an existing Workflow's **Pre-deployment Steps**, and then update the same ticket in the **Post-deployment Steps** as the Workflow completes.
+
+To use a ServiceNow ticket as part of a Workflow, do the following:
+
+1. Ensure that you have added a ServiceNow account as a Harness Collaboration Provider, as described in [Add ServiceNow Collaboration Provider](https://docs.harness.io/article/vftxcr51xx-add-service-now-collaboration-provider).
+2. Open an existing Harness Workflow, or create a new one. In this example, we are using a Canary Deployment Workflow.
+3. In **Pre-deployment Steps**, click **Add Step**.
+4. In the **Add Command** dialog, click **ServiceNow**.
+
+   ![](./static/service-now-integration-160.png)
+
+   The **ServiceNow** dialog appears.
+
+   ![](./static/service-now-integration-161.png)
+
+   The **ServiceNow** dialog has the following fields:
+
+   * **Title** – By default, the step is titled ServiceNow. If you are creating a ServiceNow ticket, you might want to rename the step ServiceNow Creation, for example.
+   * **Request Type** – Select **Create an Issue**. 
(The [Update an Issue](#update) and [Import Set](#import_set) options are discussed below.) + * **Connector** – Select the ServiceNow account to use, by selecting the ServiceNow Collaboration Provider set up for that account. For more information, see [ServiceNow](https://docs.harness.io/article/cv98scx8pj-collaboration-providers#service_now). + + When you select the ServiceNow Collaboration Provider, any account-specific fields in that ServiceNow account are pulled into the dialog. For example, here are the **Impact** and **Urgency fields**, with their values displayed: + + ![](./static/service-now-integration-162.png) + + * **Ticket Type** – Select a ServiceNow ticket type from the list of types. + * **Short Description** – Add a description for the ticket you are creating. This will be the ticket title. (You can use Harness variables in the **Short Description** and **Description** fields. Simply type **$** and a list of available variables appears, as shown below.) + + ![](./static/service-now-integration-163.png) + + * **Description** – Add the ticket description. + * **Output in the Context** – Select this option to create a variable for the ServiceNow issue. You can reference this variable in another step in this Workflow, or in a Pipeline. + +5. Select **Output in the Context**, and in **Variable Name**, enter a name, such as **snow**. +6. In **Scope**, select **Pipeline**, **Workflow**, or **Phase**. The **Pipeline** scope is the widest, and includes the **Workflow** and **Phase** scopes. Next widest is **Workflow**, which includes any phases within that Workflow. And finally, **Phase** scopes the variable to the Workflow phase in which it was created. + +For example, a variable with a **Pipeline** scope can be used by any step in the Pipeline which uses this Workflow, such as an **Approval Step**. For this example, we will use the **Pipeline** scope. 
+
+Now that there is an output variable, you can add activity to the same ServiceNow ticket using the variable name. We will use this variable when we [update](#update) this ticket, but for now, note that the syntax to reference the variable is `${variable_name.issueNumber}`. For example, `${snow.issueNumber}`.
+
+Presently, the only ServiceNow issue element you can reference is the issue ID, using `issueNumber`. Harness will be adding more issue element variables in the near future. When the **ServiceNow** dialog is complete, it will look something like this:
+
+![](./static/service-now-integration-164.png)Note the use of Harness variables in the **Description** field:
+
+
+```
+Deploying Workflow: ${workflow.name}
+Deployment URL: ${deploymentUrl}
+```
+The variables will be replaced at runtime with deployment information. For more information on Harness variables, see [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables).
+
+Once the Workflow is deployed, the Harness Manager **Deployments** page displays a link to the ServiceNow ticket that the ServiceNow step created:
+
+![](./static/service-now-integration-165.png)Click this link to view the ticket in ServiceNow.
+
+To see the variables you used in the **Description** field, click the **Details** section's More Options ⋮ menu, and select **View Execution Context**.
+
+![](./static/service-now-integration-166.png)The runtime output is displayed:
+
+![](./static/service-now-integration-167.png)
+
+
+#### Configure Custom Fields
+
+A Workflow's **ServiceNow** dialog includes the **Configure Fields** option shown below. This multi-select drop-down enables you to access multiple custom fields from your ServiceNow integration.
+
+Harness only shows English (EN) language fields fetched from your ServiceNow server. It does not support fields in other languages. Type the first few characters of a field name to quickly jump to that field. You can also scroll the list. 
+
+![](./static/service-now-integration-168.png)Select each field that you want to add. You can also toggle the **Select All** check box to add, or remove, all custom fields.
+
+![](./static/service-now-integration-169.png)When you close the drop-down, all of your selected fields are added to the dialog. You can now write static values, or variables, to these fields.
+
+![](./static/service-now-integration-170.png)
+
+
+### Update ServiceNow Tickets
+
+This section shows how to update the ServiceNow ticket in the **Post-deployment Steps** of the Workflow we used above.
+
+1. In **Post-deployment Steps**, click **Add Step**, and click **ServiceNow**. The **ServiceNow** dialog appears.
+2. In **Request Type**, click **Update an Issue**. The dialog changes to display the update settings.
+
+   ![](./static/service-now-integration-171.png)
+
+3. In **Connector**, select the same ServiceNow Collaboration Provider you used to create the ticket.
+4. In **Ticket Type**, select the same type.
+5. In **Issue Number**, use the output variable expression to identify the ticket created in the earlier ServiceNow step. The output variable name was **snow**, so the variable is `${snow.issueNumber}`.
+
+   ![](./static/service-now-integration-172.png)
+
+6. In **Update State**, update the ticket information.
+7. In **Add work notes**, add any notes. You can also use Harness variables, for example: **Started progress on Workflow:** `${deploymentUrl}`.
+8. You can also output this issue number using **Output in the Context** and change the scope.
+
+When you are done, the dialog will look something like this:
+
+![](./static/service-now-integration-173.png)
+
+Click **SUBMIT**. Your Workflow now has a ServiceNow step in its **Pre** and **Post-deployment Steps**:
+
+![](./static/service-now-integration-174.png)
+
+When the Workflow is deployed, the ServiceNow steps are displayed. 
+ +![](./static/service-now-integration-175.png) + +Clicking the link to the ServiceNow ticket takes you directly to the ticket in ServiceNow, and reveals all the activity logged during the Workflow steps: + +![](./static/service-now-integration-176.png) + +Now you know how to create and update ServiceNow tickets in your Workflows. The following sections cover additional ticket creation and update options. Later, this topic will cover how to use ServiceNow tickets for [Pipeline Approval stages](#approvals). + + +### Import Set + +The ServiceNow (Workflow) dialog's **Import Set** option is an alternative to **Create an Issue** or **Update an Issue**. This option invokes ServiceNow's Import Set API to create or update tickets via a ServiceNow staging table. + +![](./static/service-now-integration-177.png) + +As shown in the Workflow step above, enabling the **Import Set** radio button exposes the following Import Set–specific fields in the dialog's body: + +#### Staging Table Name + +Specifies the staging table that will be used to import Harness data into ServiceNow. These intermediate tables' names are prefixed with **u\_**. + +#### JSON Body + +Contains the JSON that this Workflow step will pass when it makes a call to ServiceNow's Import Set API. The example above creates a Change Request with a comment, based on the **u\_harness** table's transformation map. + +You can use [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables) in the JSON of **JSON Body**. For example, you could create a Workflow variable named `importset` and then reference it in **JSON Body** as `{"u_example":"${workflow.variables.importset}"}`. + +For details on the table requirements and naming, see [Create a table](https://docs.servicenow.com/en-US/bundle/sandiego-platform-administration/page/administer/table-administration/task/t_CreateATable.html) from ServiceNow. 
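To make the **JSON Body** format concrete, here is a hedged sketch of a payload that writes a description and a comment through the staging table (the `u_short_description` and `u_comments` column names are assumptions for illustration; use the columns defined on your own staging table):

```
{
  "u_short_description": "Deploying Workflow: ${workflow.name}",
  "u_comments": "Deployment URL: ${deploymentUrl}"
}
```

The `${...}` expressions are resolved at runtime before the payload is sent, and the staging table's transform map then writes the resolved values to the target ticket.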
+ +#### Transform Table Maps + +Within ServiceNow, Transform Table Maps determine how data will be written from the staging table to existing target tables. Map rows determine whether the transformation will create or update a ticket, and determine the ticket type. + +![](./static/service-now-integration-178.png) + +Here, Harness' Deployments page shows successful execution of the Workflow step configured above: + +![](./static/service-now-integration-179.png) + +The highlighted **Transformation Values** field indicates the import set's output values. These values are available in the Harness variable `myVar.transformationValues[0]`. The same Workflow's steps 1–4 will access this variable, under [Change Requests/Change Tasks](#changes) below. + + +### Change Requests/Change Tasks + +Harness incorporates ServiceNow's hierarchy of Change Requests and linked Change Tasks. Shown below is a Workflow step that creates a new ticket of type **Change Task**, linked to an existing Change Request. + +We've specified the **Change Request Number** using the `myVar` variable shown earlier in the [Import Set](#import_set) example, and have also created a new output variable, `myVar2`: + +![](./static/service-now-integration-180.png) + +Once the Change Task is created in ServiceNow, we can update it in a later Workflow step by selecting **Update an Issue** and **Ticket Type: Change Task**. To identify the Change Task ticket to update, we can either enter an **Issue Number**, or (as shown in this example) we can use the output variable `myVar.transformationValues[0]` created in the [previous Workflow step](#import_set). + +![](./static/service-now-integration-181.png) + +For Change Tasks only, you can select the **Update Multiple** check box. This exposes the **Change Task Type** drop-down shown below. In this example, all Change Tasks of type **Planning** will be updated with the selected **Update State**, and with the contents of the **Add Work Notes** field. 
+ +![](./static/service-now-integration-182.png) + +Once all Change Tasks have been resolved, you can close the associated Change Request by selecting **Update an Issue** and **Ticket Type: Change**. Here, we're specifying the **Issue Number** using our original variable. + +![](./static/service-now-integration-183.png) + +### ServiceNow-based Approvals + +You can use a ServiceNow ticket as an approval stage in a Harness Pipeline and, when the ticket is updated, the Pipeline stage can be approved or rejected. For example, changing the **State** attribute of a ServiceNow ticket from **Resolved** to **Approved** could automatically approve the execution of a Pipeline stage or Workflow deployment in Harness. + +For more information, see [ServiceNow Approvals](../approvals/service-now-ticketing-system.md). + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/skip-workflow-steps.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/skip-workflow-steps.md new file mode 100644 index 00000000000..587a08ea15d --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/skip-workflow-steps.md @@ -0,0 +1,200 @@ +--- +title: Skip Workflow Steps +description: Skip one or more steps based on different conditions. +sidebar_position: 210 +helpdocs_topic_id: 02i1k7nvsj +helpdocs_category_id: a8jhf8hizv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can configure Workflows to skip one or more steps based on different conditions. + +Harness users often have Workflows that are very similar, but differ in settings such as Service, Environment, or artifact. Instead of managing multiple Workflows, you can merge these Workflows and add conditions that skip specific steps based on these different conditions. + +:::note +You can add Workflow step skip conditions to any Harness Workflow type. 
+::: + +### Before You Begin + +* [Workflows](workflow-configuration.md) +* [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables) + +### Option 1: Skip All Steps in a Section + +This example uses Harness Workflow templates and Harness built-in variable expressions. See [Template a Workflow](workflow-configuration.md#template-a-workflow) and [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables).Open a Workflow that contains multiple steps that you want skipped under a specific condition. + +For example, here is a Workflow that deploys a WAR file to physical servers. + +It contains two **Prepare Infra** steps to select nodes: + +![](./static/skip-workflow-steps-184.png) + +For this example, you want to skip all of these steps if the name of the Environment used by the Workflow is **testing**. + +Click the more options button (**︙**) next to **Prepare Infra** to see its options. + +![](./static/skip-workflow-steps-185.png) + +In **Execution**, click **Conditional**. The **Conditional Execution** settings appear. + +![](./static/skip-workflow-steps-186.png) + +Click **All Steps**. + +In **Skip condition**, enter the condition to evaluate. Enter `$` to see the available Harness built-in variables. See [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables). + +Skip conditions are boolean. If the condition you enter in **Skip condition** evaluates to **true**, all of the steps in **Prepare Infra** are skipped. + +Skip conditions use [Java Expression Language (JEXL)](https://commons.apache.org/proper/commons-jexl/reference/syntax.html). You could use simply use a condition like `myvalue == true`. + +Skip conditions support advanced expressions. For example: + +`!${workflow.variables.A} == 1 || ${workflow.variables.B} == 2)` + +Review the operators in the [JEXL reference](https://commons.apache.org/proper/commons-jexl/reference/syntax.html#Operators). 
+ +In our example, we want the steps skipped if the name of the Environment used by the Workflow is **testing**. The **Conditional Execution** settings are: `${env.name} == 'testing'` + +![](./static/skip-workflow-steps-187.png) + +The variable expression `${env.name}` is one of the many built-in Harness expressions. + +Click **Submit**. The Workflow indicates that the section is using skip conditions, and displays the skip condition you set up: + +![](./static/skip-workflow-steps-188.png)In this example, we deploy the Workflow and select **testing** for the Environment. This means that the skip condition evaluates to true, and the steps are skipped. + +![](./static/skip-workflow-steps-189.png) + +As you can see, the deployment results indicate the skip condition. + +![](./static/skip-workflow-steps-190.png) + +If you wish to remove the defined conditional expressions, click the **Default** button and confirm. + +![](./static/skip-workflow-steps-191.png) + +### Option 2: Skip Specific Steps in a Section + +You can skip specific steps based on a condition. + +Open a Workflow, and click the more options button (**︙**) next to a section heading, such as **Prepare Infra**. + +![](./static/skip-workflow-steps-192.png) + +In **Execution**, click **Conditional**. The **Conditional Execution** settings appear. + +![](./static/skip-workflow-steps-193.png) + +Click **Selected steps**. In **Skip Conditions**, click **Add**. The settings for adding multiple conditions appear. + +![](./static/skip-workflow-steps-194.png) + +Click in **Skip** and select the step(s) you want to set a skip condition on. + +![](./static/skip-workflow-steps-195.png) + +In **Skip condition**, enter the condition to evaluate. Enter `$` to see the available Harness built-in variables. + +![](./static/skip-workflow-steps-196.png) + +Click **Add** to add conditions for other steps. Here's an example with conditions for two steps using the variable expression `${infra.name}`. 
+ +The expression `${infra.name}` evaluates to the name of the Infrastructure Definition used by this Workflow at deployment runtime. See [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables) for other built-in expressions. + +In this example, two conditions are added: + +* The Infrastructure Definition named **QA** is selected as the condition value, and the step **Select\_Nodes\_Dev** is selected as the skip step. +* The Infrastructure Definition named **DEV** is selected as the condition value, and the step **Select\_Nodes\_QA** is selected as the skip step. + +![](./static/skip-workflow-steps-197.png) + +Click **Submit**. The Workflow indicates that each step is using skip conditions, and displays the skip condition you set up for each step: + +![](./static/skip-workflow-steps-198.png) + +In this example, we deploy the Workflow and select **DEV** for the Infrastructure Definition. This means that the skip condition evaluates to true, and the **Select\_Nodes\_QA** step is skipped. + +![](./static/skip-workflow-steps-199.png) + +As you can see, the deployment results indicate the skip condition. + +![](./static/skip-workflow-steps-200.png) + +### Option 3: Skip Rollback Steps + +You can add skip conditions to steps in a Workflow **Rollback Steps** section. + +For example, in a Harness Kubernetes Canary Workflow there is a default rollback step, **Rollback Deployment**. + +![](./static/skip-workflow-steps-201.png) + +Perhaps you are testing and do not want to roll back the deployment if it fails. You can click the more options button next to **Deploy** and then in **Execution** click **Conditional**. + +![](./static/skip-workflow-steps-202.png) + +In **Conditional Execution**, you can set a condition to skip the **Rollback Deployment** step if the name of the Environment used by the Workflow is **testing**: + +![](./static/skip-workflow-steps-203.png) + +### Review: What Conditions Can I Use? 
+ +You cannot use Harness secrets in conditions. + +The conditions you set in **Conditional Execution** are boolean. If the condition evaluates to **true**, the selected step is skipped. + +The conditions follow the syntax `val1 == val2`. + +You can use boolean values `true` and `false`, or any other value that can be evaluated. + +Skip conditions use [Java Expression Language (JEXL)](https://commons.apache.org/proper/commons-jexl/reference/syntax.html). You could simply use a condition like `myvalue == true`. + +Skip conditions support advanced expressions. For example: + +`!(${workflow.variables.A} == 1 || ${workflow.variables.B} == 2)` + +Review the operators in the [JEXL reference](https://commons.apache.org/proper/commons-jexl/reference/syntax.html#Operators). + +For variable expressions, you can use: + +* Harness built-in variable expressions. +* Expressions that reference Service or Workflow variables you created. +* Expressions that reference variables published as part of an Approval Stage in a Pipeline. + +Ensure the variable referenced by the expression is available in the Workflow section where you have applied the skip condition. + +For example, Service variables are not available in Canary or Multi-Service Workflow **Pre-deployment Steps**. This is because in these Workflow types, the Service is selected in the Phases that follow the **Pre-deployment Steps**. + +If the variable expression you select cannot be evaluated, the Workflow will fail. + +![](./static/skip-workflow-steps-204.png) + +#### Regex + +Harness supports the [Java Expression Language (JEXL)](https://commons.apache.org/proper/commons-jexl/reference/syntax.html). You can use JEXL regex in your skip conditions. + +For example, if the artifact the Workflow is deploying is named **todolist.war**, you could use the following regex in your skip condition: `${artifact.fileName} =~ '^todo.*'`. 
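As a further hedged sketch, JEXL regex can also match on suffixes, or combine with other boolean operators (the file and Service names here are assumptions):

```
${artifact.fileName} =~ '.*\.war$'
${service.name} =~ '^todo.*' && ${env.name} == 'testing'
```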
+ +#### Operators + +Harness supports [JEXL operators](https://commons.apache.org/proper/commons-jexl/reference/syntax.html#Operators). Since skip conditions are boolean, you may only use operators that evaluate to true or false. An expression that evaluates to a numeric value like `${artifact.buildNo}*10` will not work. + +As an example of other operators, you can use the Starts With `=^` operator like this: `${infra.name} =^ 'Q'` + +![](./static/skip-workflow-steps-205.png) + +If you select the **QA** Infrastructure Definition at deployment, the condition is matched because **QA** starts with **Q**, and the **Select\_Nodes\_Dev** step is skipped: + +![](./static/skip-workflow-steps-206.png) + +Here is another example using the Inequality `!=` operator: + +![](./static/skip-workflow-steps-207.png) + +### Limitations + +* Skip conditions do not support the ternary conditional `?:` operator. + +### Next Steps + +* [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/specific-hosts.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/specific-hosts.md new file mode 100644 index 00000000000..14b7054cf7f --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/specific-hosts.md @@ -0,0 +1,103 @@ +--- +title: Target Specific Hosts During Deployment +description: Dynamically select specific target hosts at deploy time, when starting or rerunning a traditional (SSH) deployment. +sidebar_position: 200 +helpdocs_topic_id: 3ecctnq3p9 +helpdocs_category_id: a8jhf8hizv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the feature flag `DEPLOY_TO_SPECIFIC_HOSTS`. 
Contact [Harness Support](mailto:support@harness.io) to enable the feature. + +You can choose to deploy to specific hosts when you start or rerun a Workflow whose Service uses the [Secure Shell (SSH)](https://docs.harness.io/article/6pwni5f9el-traditional-deployments-overview) or [Windows Remote Management (WinRM)](https://docs.harness.io/article/2oo63r9rwb-iis-net-quickstart) deployment type. + +![](./static/specific-hosts-13.png) + +### Review: Target Specific Hosts + +By default, when you deploy an SSH or WinRM Service, Harness automatically selects target hosts in the VPC (Virtual Private Cloud) that you've specified in your Infrastructure Definition. + +You can use the **Tags** field here to identify specific hosts, but you'll need to have these hosts available for every deployment, and to ensure that the Tags are applied to them. Harness cannot query the hosts before deployment. + +Also, if you are doing frequent ad-hoc deployments, managing the Tags for your target hosts can be time-consuming, and you'll need to repeatedly update your Infrastructure Definitions. + +With the **Target to specific hosts only** alternative outlined below, when you deploy a Workflow for an SSH or WinRM Service, you can select specific target hosts in the dialog that starts or restarts the deployment. + +![](./static/specific-hosts-14.png)This option enables you to manually select hosts even within templatized Workflows, and to redeploy specific Services without creating new Infrastructure Definition mappings. 
+ +### Review: Restrictions + +The **Target to specific hosts only** option is subject to the following restrictions: + +* Available only in Workflows that deploy SSH Services (see [Traditional Deployments](https://docs.harness.io/article/6pwni5f9el-traditional-deployments-overview)) or WinRM Services (see [IIS (.NET) Quickstart](https://docs.harness.io/article/2oo63r9rwb-iis-net-quickstart)), and that therefore contain a [Select Nodes](https://docs.harness.io/article/9h1cqaxyp9-select-nodes-workflow-step) step. +* Available only in direct Workflow execution—not in Pipeline or Trigger execution. +* Available only in Basic, Canary, or Rolling Workflows that deploy a *single* Harness SSH or WinRM Service. +* Unavailable with [dynamically provisioned](../infrastructure-provisioner/add-an-infra-provisioner.md) Infrastructure Definitions, relying on Terraform or CloudFormation (where selecting nodes/hosts is not possible). +* In a multi-phase Workflow, your selected hosts will override only the first phase. Harness will skip the remaining phases. +* Overrides any [Select Nodes](https://docs.harness.io/article/9h1cqaxyp9-select-nodes-workflow-step) count, percentage, or specific hosts statically configured in the Workflow. +* Overrides the option to **Skip instances with the same artifact version already deployed.** +* Defaults to no selected hosts each time you start or rerun a Workflow deployment. + +### Review: Skip instances with the same artifact version already deployed + +Currently, this feature is behind the feature flag `DEPLOY_TO_SPECIFIC_HOSTS`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. + +If you want to rerun the Workflow and only deploy to instances that do not have the same artifact version, enable the **Skip instances with the same artifact version already deployed** option. 
+ +![](./static/specific-hosts-15.png) + +### Step 1: Selecting Hosts + +When you click the **Deploy**, **Start New Deployment**, or **Rerun Workflow** option for a Workflow set up with an SSH Service, the resulting dialog displays the option highlighted below, labeled **Target to specific hosts only**: + +![](./static/specific-hosts-16.png)Clicking this check box overrides the adjacent option labeled **Skip instances with the same artifact version already deployed**. (This option grays out, as shown below.) + +More importantly, clicking the check box displays a new **Select Hosts** drop-down: + +![](./static/specific-hosts-17.png)Click **Select Hosts** to open the controls shown below. + +Each time you start or rerun the Workflow, the upper selection field will be empty. Regardless of your selections during prior deployments, you must manually select target hosts for each new deployment.![](./static/specific-hosts-18.png)These controls provide the following options for selecting target hosts: + +* Scroll the drop-down list to select individual hosts' check boxes. +* Click **Select All** to select all hosts in the list. (This is a toggle: Empty the check box to deselect all hosts.) +* Type substrings into the **Search** box to scroll directly to individual hosts. + +#### Searching for Hosts + +In this example, we've searched on a substring to locate—and select—one matching host: + +![](./static/specific-hosts-19.png) + +#### Confirming Hosts + +After you've selected your desired target hosts, click or tab out of the selection field. The field's label now reads **Selected Hosts**, and summarizes the number of hosts you've selected: + +![](./static/specific-hosts-20.png)To double-check that you've selected individual hosts, you can reopen the [Search](#search) box, and find these hosts by substring. + +When you're satisfied with your selections, click **Submit** to deploy the Workflow. 
+ +### Option: Select Host Not in Infrastructure Definition + +Currently, this feature is behind the feature flag `DEPLOY_TO_INLINE_HOSTS` and available in SSH or WinRM deployments only. Contact [Harness Support](mailto:support@harness.io) to enable the feature. + +When your Workflow deploys an SSH or WinRM Service using an Infrastructure Definition of Deployment Type **Secure Shell (SSH)** or **Windows Remote Management (WinRM)**, you can select target hosts that were not selected in the Workflow's Infrastructure Definition. + +In **Start New Deployment**, in the **Target to specific hosts only** option, enter additional hosts. + +![](./static/specific-hosts-21.png)Enter the target host name or IP address and click **Create option**. + +**Target to specific hosts only** allows you to select hosts that are listed in the Workflow's Infrastructure Definition, enter additional hosts manually, or use Workflow variable expressions. + +### Review: Selected Hosts in Deployment + +Using the example configuration above, deployment proceeds as normal to the **Select Nodes** step: + +![](./static/specific-hosts-22.png)As that step completes, the **Details** panel confirms that our execution-time selections have overridden the Workflow's **Select Nodes** defaults: + +![](./static/specific-hosts-23.png)Once the **Install** step executes, its log confirms connections to the host(s) we've specified: + +![](./static/specific-hosts-24.png)Assuming that all selected hosts are available, your deployment should conclude successfully: + +![](./static/specific-hosts-25.png) + +### Review: Rerunning Workflows + +When you select the Workflow's **Rerun Workflow** link, the resulting dialog will display the same **Target to specific hosts only** check box and selection controls, in the same location: + +![](./static/specific-hosts-26.png)As with the first deployment, the selection drop-down list will initially be empty. 
It will not pre-populate with any of the hosts selected in the previous run of the Workflow. + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_abort-button-left.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_abort-button-left.png new file mode 100644 index 00000000000..abd20548c02 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_abort-button-left.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_add-cmd-cur-left.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_add-cmd-cur-left.png new file mode 100644 index 00000000000..1794421632d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_add-cmd-cur-left.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_add-cmd-new-right.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_add-cmd-new-right.png new file mode 100644 index 00000000000..5a569599399 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_add-cmd-new-right.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_cevfjb.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_cevfjb.png new file mode 100644 index 00000000000..0e69e7e340b Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_cevfjb.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_http-headers.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_http-headers.png new file mode 100644 index 00000000000..8f65454d1f9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_http-headers.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_http-shell-script.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_http-shell-script.png new file mode 100644 index 00000000000..6d8a6be9ecc Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_http-shell-script.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_http-step.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_http-step.png new file mode 100644 index 00000000000..eb5c78550ea Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_http-step.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_jenkins-expr.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_jenkins-expr.png new file mode 100644 index 00000000000..5017539ca3e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_jenkins-expr.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_jenkins-srvr2.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_jenkins-srvr2.png new file mode 100644 index 00000000000..6c5dce15287 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_jenkins-srvr2.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_rollback-button-right.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_rollback-button-right.png new file mode 100644 index 00000000000..8e3bd58979e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_rollback-button-right.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_wf-vars-prod-right.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_wf-vars-prod-right.png new file mode 100644 index 00000000000..ce729375db7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_wf-vars-prod-right.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_wf-vars-qa-left.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_wf-vars-qa-left.png new file mode 100644 index 00000000000..17219d2cbf4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/_wf-vars-qa-left.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-notification-strategy-new-template-83.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-notification-strategy-new-template-83.png new file mode 100644 index 00000000000..2906bac2930 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-notification-strategy-new-template-83.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-notification-strategy-new-template-84.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-notification-strategy-new-template-84.png new file mode 100644 index 00000000000..e41e930f2f0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-notification-strategy-new-template-84.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-notification-strategy-new-template-85.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-notification-strategy-new-template-85.png new file mode 100644 index 00000000000..f2c96f4c7cc Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-notification-strategy-new-template-85.png 
differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-notification-strategy-new-template-86.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-notification-strategy-new-template-86.png new file mode 100644 index 00000000000..638df7b8197 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-notification-strategy-new-template-86.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-notification-strategy-new-template-87.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-notification-strategy-new-template-87.png new file mode 100644 index 00000000000..8a71c077f9e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-notification-strategy-new-template-87.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-notification-strategy-new-template-88.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-notification-strategy-new-template-88.png new file mode 100644 index 00000000000..a1ce202ef6e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-notification-strategy-new-template-88.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-steps-for-different-tasks-in-a-wor-kflow-101.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-steps-for-different-tasks-in-a-wor-kflow-101.png new file mode 100644 index 00000000000..4d4d7c11d37 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-steps-for-different-tasks-in-a-wor-kflow-101.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-steps-for-different-tasks-in-a-wor-kflow-102.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-steps-for-different-tasks-in-a-wor-kflow-102.png new file mode 100644 index 00000000000..6fc2d70757b Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-steps-for-different-tasks-in-a-wor-kflow-102.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-workflow-phase-new-template-89.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-workflow-phase-new-template-89.png new file mode 100644 index 00000000000..3004f814695 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-workflow-phase-new-template-89.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-workflow-phase-new-template-90.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-workflow-phase-new-template-90.png new file mode 100644 index 00000000000..bf791b16827 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-workflow-phase-new-template-90.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-workflow-variables-new-template-234.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-workflow-variables-new-template-234.png new file mode 100644 index 00000000000..ec5d72eb9db Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-workflow-variables-new-template-234.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-workflow-variables-new-template-235.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-workflow-variables-new-template-235.png new file mode 100644 index 00000000000..d5e6a1232fe Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-workflow-variables-new-template-235.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-workflow-variables-new-template-236.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-workflow-variables-new-template-236.png new file mode 100644 index 00000000000..efc5745ecd5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/add-workflow-variables-new-template-236.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-100.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-100.png new file mode 100644 index 00000000000..50d6b5e5441 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-100.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-91.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-91.png new file mode 100644 index 00000000000..c90c643df2e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-91.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-92.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-92.png new file mode 100644 index 00000000000..0f0f5114cba Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-92.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-93.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-93.png new file mode 100644 index 00000000000..1aae0fbdc6c Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-93.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-94.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-94.png new file mode 100644 index 00000000000..bea1115f476 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-94.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-95.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-95.png new file mode 100644 index 00000000000..d2f2020d05d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-95.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-96.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-96.png new file mode 100644 index 00000000000..e284b23e540 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-96.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-97.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-97.png new file mode 100644 index 00000000000..5aece0e5547 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-97.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-98.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-98.png new file mode 100644 index 00000000000..ef9d6ff70de Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-98.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-99.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-99.png new file mode 100644 index 00000000000..a6b805cbf9e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/capture-shell-script-step-output-99.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/clone-a-workflow-55.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/clone-a-workflow-55.png new file mode 100644 index 00000000000..2bc14fa72d2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/clone-a-workflow-55.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/clone-a-workflow-56.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/clone-a-workflow-56.png new file mode 100644 index 00000000000..2e7924363e3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/clone-a-workflow-56.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/clone-a-workflow-57.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/clone-a-workflow-57.png new file mode 100644 index 
00000000000..2d55888adb0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/clone-a-workflow-57.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/configure-workflow-using-yaml-12.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/configure-workflow-using-yaml-12.png new file mode 100644 index 00000000000..56126b16678 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/configure-workflow-using-yaml-12.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-208.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-208.png new file mode 100644 index 00000000000..1dda1512a76 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-208.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-209.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-209.png new file mode 100644 index 00000000000..ddfde4a6152 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-209.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-210.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-210.png new file mode 100644 index 00000000000..db19d0b7ec8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-210.png 
differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-211.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-211.png new file mode 100644 index 00000000000..bfea4735557 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-211.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-212.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-212.png new file mode 100644 index 00000000000..6bc081c3ce2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-212.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-213.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-213.png new file mode 100644 index 00000000000..1dda1512a76 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-213.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-214.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-214.png new file mode 100644 index 00000000000..21b45ef0af4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-214.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-215.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-215.png new file mode 100644 index 00000000000..0f1094f8a45 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-215.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-216.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-216.png new file mode 100644 index 00000000000..94e59411859 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-216.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-217.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-217.png new file mode 100644 index 00000000000..bb31b4f33bd Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-217.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-218.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-218.png new file mode 100644 index 00000000000..30a236397cb Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-218.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-219.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-219.png new file mode 100644 index 00000000000..a3f16855744 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/define-workflow-failure-strategy-new-template-219.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-a-workflow-10.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-a-workflow-10.png new file mode 100644 index 00000000000..abcdf8d0635 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-a-workflow-10.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-a-workflow-11.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-a-workflow-11.png new file mode 100644 index 00000000000..840957a158a Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-a-workflow-11.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-42.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-42.png new file mode 100644 index 00000000000..40151554f57 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-42.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-43.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-43.png new file mode 100644 index 00000000000..dedf9b1bce5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-43.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-44.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-44.png new file mode 100644 index 00000000000..9a2a1b1203b Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-44.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-45.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-45.png new file mode 100644 index 00000000000..265713af1e5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-45.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-46.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-46.png new file mode 100644 index 00000000000..5852d3e9805 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-46.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-47.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-47.png new file mode 100644 index 00000000000..161c482618f Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-47.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-48.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-48.png new file mode 100644 index 00000000000..ece056e7d95 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-48.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-49.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-49.png new file mode 100644 index 00000000000..c8162fb0023 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-49.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-50.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-50.png new file mode 100644 index 00000000000..f20f4eb4dfe Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/deploy-multiple-services-simultaneously-using-barriers-50.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-58.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-58.png new file mode 100644 index 00000000000..47e5bf9d4da Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-58.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-59.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-59.png new file mode 100644 index 00000000000..f4c30fdb42f Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-59.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-60.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-60.png new file mode 100644 index 00000000000..28bbbaee2ab Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-60.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-61.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-61.png new file mode 100644 index 00000000000..d5786446ad3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-61.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-62.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-62.png new file mode 100644 index 00000000000..3e6a0784907 Binary files /dev/null 
and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-62.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-63.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-63.png new file mode 100644 index 00000000000..df8ff234cc0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-63.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-64.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-64.png new file mode 100644 index 00000000000..b5943f8e323 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-64.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-65.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-65.png new file mode 100644 index 00000000000..2952579d6d0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-65.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-66.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-66.png new file mode 100644 index 00000000000..b47e7ed3454 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-66.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-67.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-67.png new file mode 100644 index 00000000000..24bee245203 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-67.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-68.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-68.png new file mode 100644 index 00000000000..a786684003d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-68.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-69.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-69.png new file mode 100644 index 00000000000..434605911d5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-69.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-70.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-70.png new file mode 100644 index 00000000000..79c348bc01e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-70.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-71.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-71.png new file mode 100644 index 00000000000..021b6ae8cae Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-71.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-72.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-72.png new file mode 100644 index 00000000000..00ffe1f4428 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-72.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-73.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-73.png new file mode 100644 index 00000000000..0b45dc1c8e4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-73.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-74.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-74.png new file mode 100644 index 00000000000..65c405cd06d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-74.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-75.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-75.png new file mode 100644 index 00000000000..8be8fdebe16 Binary files /dev/null 
and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-75.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-76.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-76.png new file mode 100644 index 00000000000..d26fd5e9f77 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-76.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-77.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-77.png new file mode 100644 index 00000000000..242fc00fa4f Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-77.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-78.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-78.png new file mode 100644 index 00000000000..55906b07d21 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-78.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-79.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-79.png new file mode 100644 index 00000000000..7499b609e13 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-79.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-80.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-80.png new file mode 100644 index 00000000000..1db250d664b Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-80.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-81.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-81.png new file mode 100644 index 00000000000..bb6c56ce8cf Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/integrate-tests-into-harness-workflows-81.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-121.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-121.png new file mode 100644 index 00000000000..6609e567c9b Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-121.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-122.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-122.png new file mode 100644 index 00000000000..47422dc3176 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-122.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-123.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-123.png new file mode 100644 index 00000000000..1d9b8f8318a Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-123.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-124.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-124.png new file mode 100644 index 00000000000..c3621ec9cba Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-124.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-125.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-125.png new file mode 100644 index 00000000000..bf5d5132451 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-125.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-126.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-126.png new file mode 100644 index 00000000000..1b86db21593 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-126.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-127.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-127.png new file mode 100644 index 00000000000..ca5dfebf61f Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-127.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-128.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-128.png new file mode 100644 index 00000000000..fed5124f3be Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-128.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-129.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-129.png new file mode 100644 index 00000000000..9c0b385f245 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-129.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-130.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-130.png new file mode 100644 index 00000000000..b245172cd88 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-130.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-131.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-131.png new file mode 100644 index 00000000000..da026c0988e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-131.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-132.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-132.png new file mode 100644 index 00000000000..3103f00cfb4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-132.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-133.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-133.png new file mode 100644 index 00000000000..425e149cb6e Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-133.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-134.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-134.png new file mode 100644 index 00000000000..b3ac84efb86 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-134.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-135.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-135.png new file mode 100644 index 00000000000..e6611bfeffc Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-135.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-136.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-136.png new file mode 100644 index 00000000000..36401a27437 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-136.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-137.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-137.png new file mode 100644 index 00000000000..28f87ea9b63 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-137.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-138.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-138.png new file mode 100644 index 00000000000..4a0682ec776 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-138.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-139.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-139.png new file mode 100644 index 00000000000..6126cea69b3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-139.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-140.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-140.png new file mode 100644 index 00000000000..363415f4f6a Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-140.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-141.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-141.png new file mode 100644 index 00000000000..96c29d131ed Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-141.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-142.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-142.png new file mode 100644 index 00000000000..652235a03fa Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-142.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-143.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-143.png new file mode 100644 index 00000000000..d0bd3e10bfe Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-143.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-144.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-144.png new file mode 100644 index 00000000000..abb399e92fa Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-144.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-145.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-145.png new file mode 100644 index 00000000000..cb4a815d8a3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-145.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-146.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-146.png new file mode 100644 index 00000000000..3663c3fc3eb Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-146.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-147.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-147.png new file mode 100644 index 00000000000..fc4033f27f6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-147.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-148.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-148.png new file mode 100644 index 00000000000..cdca67c2637 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-148.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-149.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-149.png new file mode 100644 index 00000000000..d70c018affb Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-149.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-150.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-150.png new file mode 100644 index 00000000000..0dd1f36426c Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-150.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-151.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-151.png new file mode 100644 index 00000000000..e5a9bf5832b Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-151.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-152.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-152.png new file mode 100644 index 00000000000..9b411a1543d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-152.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-153.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-153.png new file mode 100644 index 00000000000..5c75733a728 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-153.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-154.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-154.png new file mode 100644 index 00000000000..ffaf2e2749c Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-154.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-155.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-155.png new file mode 100644 index 00000000000..15041030864 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-155.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-156.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-156.png new file mode 100644 index 00000000000..ef29f4888e6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-156.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-157.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-157.png new file mode 100644 index 00000000000..f48d34af781 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/jira-integration-157.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-02.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-02.png new file mode 100644 index 00000000000..d1f0f67917a Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-02.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-03.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-03.png new file mode 100644 index 00000000000..f758d97610a Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-03.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-04.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-04.png new file mode 100644 index 00000000000..5af69a018f3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-04.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-05.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-05.png new file mode 100644 index 00000000000..2d6b42c3647 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-05.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-06.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-06.png new file mode 100644 index 00000000000..5d403d2cbdc Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-06.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-07.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-07.png new file mode 100644 index 
00000000000..6f7a25c4c0a Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-07.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-08.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-08.png new file mode 100644 index 00000000000..992c07c156d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-08.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-09.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-09.png new file mode 100644 index 00000000000..1e6a724e07a Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/post-deployment-rollback-09.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/resource-restrictions-00.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/resource-restrictions-00.png new file mode 100644 index 00000000000..caa38fc479d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/resource-restrictions-00.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/resource-restrictions-01.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/resource-restrictions-01.png new file mode 100644 index 00000000000..7483cb0d04b Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/resource-restrictions-01.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/select-nodes-in-a-rolling-deployment-workflow-51.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/select-nodes-in-a-rolling-deployment-workflow-51.png new file mode 100644 index 00000000000..828a04a6d56 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/select-nodes-in-a-rolling-deployment-workflow-51.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/select-nodes-in-a-rolling-deployment-workflow-52.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/select-nodes-in-a-rolling-deployment-workflow-52.png new file mode 100644 index 00000000000..0e056b08cc4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/select-nodes-in-a-rolling-deployment-workflow-52.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/select-nodes-in-a-rolling-deployment-workflow-53.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/select-nodes-in-a-rolling-deployment-workflow-53.png new file mode 100644 index 00000000000..0d519a77b00 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/select-nodes-in-a-rolling-deployment-workflow-53.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/select-nodes-in-a-rolling-deployment-workflow-54.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/select-nodes-in-a-rolling-deployment-workflow-54.png new file mode 100644 index 00000000000..06ff05e7b72 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/select-nodes-in-a-rolling-deployment-workflow-54.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/send-an-email-in-your-workflow-237.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/send-an-email-in-your-workflow-237.png new file mode 
100644 index 00000000000..a6a2d49d13a Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/send-an-email-in-your-workflow-237.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-159.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-159.png new file mode 100644 index 00000000000..6aff56b91c3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-159.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-160.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-160.png new file mode 100644 index 00000000000..480cf61cca3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-160.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-161.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-161.png new file mode 100644 index 00000000000..667ae990d38 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-161.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-162.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-162.png new file mode 100644 index 00000000000..f0ff48b0b3a Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-162.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-163.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-163.png new file mode 100644 index 00000000000..9def91a8e35 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-163.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-164.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-164.png new file mode 100644 index 00000000000..246c75148a4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-164.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-165.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-165.png new file mode 100644 index 00000000000..af5be4cc81c Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-165.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-166.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-166.png new file mode 100644 index 00000000000..1aca2e01ff2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-166.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-167.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-167.png new file mode 100644 index 00000000000..54b4e18d651 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-167.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-168.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-168.png new file mode 100644 index 00000000000..6d94069a829 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-168.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-169.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-169.png new file mode 100644 index 00000000000..740afd0dc3d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-169.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-170.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-170.png new file mode 100644 index 00000000000..ce394f1f177 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-170.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-171.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-171.png new file mode 100644 index 00000000000..2a4dd5ce307 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-171.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-172.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-172.png new file mode 100644 index 00000000000..29ec832a793 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-172.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-173.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-173.png new file mode 100644 index 00000000000..fb20811b231 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-173.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-174.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-174.png new file mode 100644 index 00000000000..c7d9e07cb30 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-174.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-175.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-175.png new file mode 100644 index 00000000000..e916226006d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-175.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-176.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-176.png new file mode 100644 index 00000000000..69402064b0d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-176.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-177.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-177.png new file mode 100644 index 
00000000000..b2a59ba7e9f Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-177.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-178.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-178.png new file mode 100644 index 00000000000..606f76f89af Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-178.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-179.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-179.png new file mode 100644 index 00000000000..a3267d0035d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-179.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-180.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-180.png new file mode 100644 index 00000000000..544814805d8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-180.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-181.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-181.png new file mode 100644 index 00000000000..96cf2fe7eb2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-181.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-182.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-182.png new file mode 100644 index 00000000000..6487df5fc1f Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-182.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-183.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-183.png new file mode 100644 index 00000000000..b5f49bc993f Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/service-now-integration-183.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-184.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-184.png new file mode 100644 index 00000000000..afc0e3a9b87 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-184.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-185.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-185.png new file mode 100644 index 00000000000..eeae9d22a9a Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-185.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-186.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-186.png new file mode 100644 index 00000000000..d345ac1d7f4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-186.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-187.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-187.png new file mode 100644 index 00000000000..cf97b0a6541 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-187.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-188.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-188.png new file mode 100644 index 00000000000..5118d1ee506 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-188.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-189.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-189.png new file mode 100644 index 00000000000..13fe9b40416 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-189.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-190.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-190.png new file mode 100644 index 00000000000..7d227c33bf3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-190.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-191.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-191.png new file mode 100644 index 00000000000..7d9670eb8dc Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-191.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-192.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-192.png new file mode 100644 index 00000000000..eeae9d22a9a Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-192.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-193.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-193.png new file mode 100644 index 00000000000..d345ac1d7f4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-193.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-194.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-194.png new file mode 100644 index 00000000000..708e62b9d63 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-194.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-195.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-195.png new file mode 100644 index 00000000000..2a459e73777 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-195.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-196.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-196.png new file mode 100644 index 00000000000..5b937559b9a Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-196.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-197.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-197.png new file mode 100644 index 00000000000..a0ad4c0fc13 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-197.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-198.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-198.png new file mode 100644 index 00000000000..96f0c9ca21c Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-198.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-199.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-199.png new file mode 100644 index 00000000000..8dbf8e62835 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-199.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-200.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-200.png new file mode 100644 index 00000000000..4841f79e037 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-200.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-201.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-201.png new file mode 100644 index 00000000000..8c0df77fa2e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-201.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-202.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-202.png new file mode 100644 index 00000000000..0336d7619c4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-202.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-203.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-203.png new file mode 100644 index 00000000000..48400dca001 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-203.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-204.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-204.png new file mode 100644 index 00000000000..c3570ba121b Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-204.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-205.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-205.png new file mode 100644 index 00000000000..3c4828102a1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-205.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-206.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-206.png new file mode 100644 index 00000000000..435f24919ce Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-206.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-207.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-207.png new file mode 100644 index 00000000000..0db29840213 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/skip-workflow-steps-207.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-13.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-13.png new file mode 100644 index 00000000000..04614d0de1f Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-13.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-14.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-14.png new file mode 100644 index 00000000000..c1aaf729e8e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-14.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-15.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-15.png new file mode 100644 index 00000000000..a6086efd509 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-15.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-16.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-16.png new file mode 100644 index 00000000000..e16d24ed9c4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-16.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-17.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-17.png new file mode 100644 index 00000000000..8635b4bd597 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-17.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-18.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-18.png new file mode 100644 index 00000000000..dce6c4f6a24 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-18.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-19.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-19.png new file mode 100644 index 00000000000..7e0af94bfc0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-19.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-20.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-20.png new file mode 100644 index 00000000000..f89e96289d2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-20.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-21.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-21.png new file mode 100644 index 00000000000..629beadea68 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-21.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-22.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-22.png new file mode 100644 index 00000000000..144419f4679 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-22.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-23.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-23.png new file mode 100644 index 00000000000..a1169f3a4b1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-23.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-24.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-24.png new file mode 100644 index 00000000000..19383f14c20 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-24.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-25.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-25.png new file mode 100644 index 00000000000..fd8cfd2ec76 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-25.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-26.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-26.png new file mode 100644 index 00000000000..4ef4601c23e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/specific-hosts-26.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/synchronize-workflows-in-your-pipeline-using-barrier-232.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/synchronize-workflows-in-your-pipeline-using-barrier-232.png new file mode 100644 index 00000000000..6c2a53dc3f8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/synchronize-workflows-in-your-pipeline-using-barrier-232.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/synchronize-workflows-in-your-pipeline-using-barrier-233.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/synchronize-workflows-in-your-pipeline-using-barrier-233.png new file mode 100644 index 00000000000..d67fded1445 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/synchronize-workflows-in-your-pipeline-using-barrier-233.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/tags-how-tos-158.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/tags-how-tos-158.png new file mode 100644 index 00000000000..48d3426a6b3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/tags-how-tos-158.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/templatize-a-workflow-new-template-238.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/templatize-a-workflow-new-template-238.png new file mode 100644 index 00000000000..25f5607938d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/templatize-a-workflow-new-template-238.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/templatize-a-workflow-new-template-239.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/templatize-a-workflow-new-template-239.png new file mode 100644 index 00000000000..1ee6eccb9e6 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/templatize-a-workflow-new-template-239.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/templatize-a-workflow-new-template-240.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/templatize-a-workflow-new-template-240.png new file mode 100644 index 00000000000..2972537f95d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/templatize-a-workflow-new-template-240.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-http-command-82.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-http-command-82.png new file mode 100644 index 00000000000..deb758125a0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-http-command-82.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-27.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-27.png new file mode 100644 index 00000000000..b3ea244161c Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-27.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-28.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-28.png new file mode 100644 index 00000000000..098345fa27e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-28.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-29.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-29.png new file mode 100644 index 00000000000..64c9b0c9af6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-29.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-30.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-30.png new file mode 100644 index 00000000000..b3ea244161c Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-30.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-31.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-31.png new file mode 100644 index 00000000000..f505ba70012 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-31.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-32.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-32.png new file mode 100644 index 00000000000..81594d3cbc9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-32.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-33.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-33.png new file mode 100644 index 00000000000..81594d3cbc9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-33.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-34.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-34.png new file mode 100644 index 00000000000..7867c5033f4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-34.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-35.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-35.png new file mode 100644 index 00000000000..7551615c1a7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-35.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-36.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-36.png new file mode 100644 index 00000000000..ff4b442f3b1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-36.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-37.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-37.png new file mode 100644 index 00000000000..354c9596b6d Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-37.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-38.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-38.png new file mode 100644 index 00000000000..f505ba70012 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-38.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-39.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-39.png new file mode 100644 index 00000000000..524ab5812cd Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-39.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-40.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-40.png new file mode 100644 index 00000000000..3bacd21a3fb Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-40.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-41.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-41.png new file mode 100644 index 00000000000..7e10353a080 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/using-the-jenkins-command-41.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/verify-workflow-new-template-220.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/verify-workflow-new-template-220.png new file mode 100644 index 00000000000..e7da634214c Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/verify-workflow-new-template-220.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-221.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-221.png new file 
mode 100644 index 00000000000..a8948508329 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-221.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-222.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-222.png new file mode 100644 index 00000000000..808d2282da5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-222.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-223.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-223.png new file mode 100644 index 00000000000..6c2a53dc3f8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-223.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-224.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-224.png new file mode 100644 index 00000000000..a77a1f7f2e8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-224.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-225.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-225.png new file mode 100644 index 00000000000..eb33dd6079e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-225.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-226.png 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-226.png new file mode 100644 index 00000000000..3d581e505fd Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-226.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-227.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-227.png new file mode 100644 index 00000000000..eec522cb557 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-227.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-228.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-228.png new file mode 100644 index 00000000000..15b962dafe9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-228.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-229.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-229.png new file mode 100644 index 00000000000..f758d97610a Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-configuration-229.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-103.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-103.png new file mode 100644 index 00000000000..16c5da2566e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-103.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-104.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-104.png new file mode 100644 index 00000000000..58171eac243 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-104.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-105.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-105.png new file mode 100644 index 00000000000..9cf1aec4f2a Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-105.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-106.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-106.png new file mode 100644 index 00000000000..c80c8e170e5 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-106.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-107.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-107.png new file mode 100644 index 00000000000..190af0dc876 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-107.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-108.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-108.png new file mode 100644 index 00000000000..48267a6437e Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-108.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-109.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-109.png new file mode 100644 index 00000000000..b96b7997b86 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-109.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-110.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-110.png new file mode 100644 index 00000000000..39a119a7b95 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-110.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-111.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-111.png new file mode 100644 index 00000000000..41f26d101a3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-111.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-112.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-112.png new file mode 100644 index 00000000000..bc9bf295a47 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-112.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-113.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-113.png new file mode 100644 index 00000000000..64ecf09ca8c Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-113.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-114.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-114.png new file mode 100644 index 00000000000..0e1d1565a1a Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-114.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-115.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-115.png new file mode 100644 index 00000000000..425512fd645 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-115.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-116.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-116.png new file mode 100644 index 00000000000..88c054f8c3c Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-116.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-117.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-117.png new file mode 100644 index 00000000000..ff963af5952 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-117.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-118.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-118.png new file mode 100644 index 00000000000..c55ed6b7cca Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-118.png differ diff --git 
a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-119.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-119.png new file mode 100644 index 00000000000..79bd9727f74 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-119.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-120.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-120.png new file mode 100644 index 00000000000..100b610239b Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-queuing-120.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-steps-ui-changes-230.png b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-steps-ui-changes-230.png new file mode 100644 index 00000000000..6e4098a461f Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-steps-ui-changes-230.png differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-steps-ui-changes-231.gif b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-steps-ui-changes-231.gif new file mode 100644 index 00000000000..70ec39b6ef9 Binary files /dev/null and b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/static/workflow-steps-ui-changes-231.gif differ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/synchronize-workflows-in-your-pipeline-using-barrier.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/synchronize-workflows-in-your-pipeline-using-barrier.md new file mode 100644 index 00000000000..7059984afc7 --- /dev/null +++ 
b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/synchronize-workflows-in-your-pipeline-using-barrier.md @@ -0,0 +1,46 @@ +--- +title: Synchronize Workflow Deployments using Barriers +description: Synchronize different Workflows in your Pipeline, and control the flow of your deployment systematically. +sidebar_position: 110 +helpdocs_topic_id: 7tg1s7du0d +helpdocs_category_id: a8jhf8hizv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Barriers allow you to synchronize different Workflows in your Pipeline, and control the flow of your deployment systematically. + +### Before You Begin + +* [Barriers](workflow-configuration.md#barriers) +* [Add a Workflow](tags-how-tos.md) + +### Review: Barriers and Synchronization + +When deploying interdependent services, such as microservices or a large and complicated application, there might be a need to coordinate the timing of the different components' deployments. A common example is the need to verify a group of services only after *all the services* are deployed successfully. + +Harness Workflows address this scenario using barriers. Barriers allow you to synchronize different Workflows in your Pipeline, and control the flow of your deployment systematically. + +Barriers have an effect only when two or more Workflows use the same barrier name, and are executed in parallel in a Pipeline. When executed in parallel, both Workflows will cross the barrier at the same time. + +If a Workflow fails before reaching its barrier point, the Workflow signals the other Workflows that have the same barrier, and the other Workflows will react as if they failed as well. At that point, each Workflow will act according to its [Failure Strategy](#failure_strategies). + +![](./static/synchronize-workflows-in-your-pipeline-using-barrier-232.png) + +### Step: Configure Barrier + +To use a barrier, do the following: + +1. In your Workflow, click **Add Step**. +2. Click **Barrier**. 
The **Configure Barrier** settings appear.![](./static/synchronize-workflows-in-your-pipeline-using-barrier-233.png) +3. In **Identifier**, enter a name for the barrier. This name must be identical to all the other related barriers in the other Workflows you want to impact. + +You cannot use a Harness variable expression in **Identifier**. +4. In **Timeout**, enter the timeout period, in milliseconds. For example, 600000 milliseconds is 10 minutes. The timeout period determines how long each Workflow with a barrier must wait for the other Workflows to reach their barrier point. When the timeout expires, the deployment is considered a failure. + +Barrier timeouts are not hard timeouts. A Barrier can fail at any time between the timeout and the timeout plus one minute. + +#### Notes + +* You can have multiple barriers in a Workflow. Every Barrier in the same Workflow must use a unique Identifier. +* Ensure the identifier string for each related barrier across the different Workflows matches. + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/tags-how-tos.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/tags-how-tos.md new file mode 100644 index 00000000000..66740c4438d --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/tags-how-tos.md @@ -0,0 +1,49 @@ +--- +title: Add a Workflow +description: Outline the steps involved in setting up a Workflow. +sidebar_position: 20 +helpdocs_topic_id: o86qyexcab +helpdocs_category_id: a8jhf8hizv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic outlines how to set up a Workflow. 
+ +### Before You Begin + +Before adding a workflow, you should have an understanding of the following: + +* [Workflows](workflow-configuration.md) +* [Application Components](../applications/application-configuration.md) +* [Add a Service](../setup-services/service-configuration.md) +* [Add an Environment](../environments/environment-configuration.md) + + +### Visual Summary + + +### Step: Workflow Setup + +The following steps cover the common Workflow setup. To add a Workflow, do the following: + +1. Click **Setup**. +2. Click the Application where you want to put the Workflow. +3. Click **Workflows**. +4. Click **Add Workflow**. The **Workflow** dialog appears. +5. Give your Workflow a name and description that tells users its purpose. For example, if the Workflow takes a service with a Docker artifact and deploys it to a Kubernetes environment in GCP, you might name the workflow **docker-to-k8s-GCP**. +6. In **Workflow Type**, select the type of Workflow you want to perform. For a summary of the types, see **Workflow Types** below. +7. In **Environment**, select the environment where you want to deploy the service. Select from the environments you added in [Add an Environment](../environments/environment-configuration.md). +8. In **Service**, select the service you want to deploy. +9. In **Infrastructure Definition**, select the Infrastructure Definition where you want the Workflow to deploy the Service. +10. Click **SUBMIT**. The Workflow is created. Here is an example of a Basic Deployment. + +![](./static/tags-how-tos-158.png) + +If this is a Basic Deployment, you might need to update one step before using the Workflow. For example, in the figure above, the **Upgrade Containers** step requires attention. In the case of other deployment types, there might be additional steps to configure. 
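The settings you select in the steps above are stored in the Workflow's YAML, which you can view and edit as described in [Configure Workflow Using YAML](configure-workflow-using-yaml.md). A rough sketch of what a Basic Workflow's YAML might contain follows; the values, and some of the field names, are illustrative assumptions rather than an exact export:

```yaml
# Hypothetical sketch of a Basic Workflow's YAML. Values are examples only;
# export your own Workflow to see the exact fields Harness generates.
harnessApiVersion: '1.0'
type: BASIC
description: Deploys a Docker artifact to a Kubernetes environment in GCP
envName: my-gcp-env                       # the Environment you selected
templatized: false
phases:
  - type: KUBERNETES
    name: Phase 1
    serviceName: my-docker-service        # the Service you selected
    infraDefinitionName: my-k8s-infra     # the Infrastructure Definition you selected
```

Editing this YAML and syncing it back is equivalent to changing the same settings in the Workflow dialog.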
+ + +### Next Steps + +For platform and strategy-specific Workflow configurations, see the topics in [Continuous Deployments](https://docs.harness.io/category/1qtels4t8p-cd-category). + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/templatize-a-workflow-new-template.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/templatize-a-workflow-new-template.md new file mode 100644 index 00000000000..28d5acf8ce9 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/templatize-a-workflow-new-template.md @@ -0,0 +1,43 @@ +--- +title: Templatize a Workflow +description: Turn a Workflow into a Workflow template ("templatize it") by using variables for important settings such as Environment, Service, and Infrastructure Definition. +sidebar_position: 120 +helpdocs_topic_id: bov41f5b7o +helpdocs_category_id: a8jhf8hizv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can turn a Workflow into a Workflow template ("templatize it") by using variables for important settings such as Environment, Service, and Infrastructure Definition. + +### Before You Begin + +* [Add a Workflow](tags-how-tos.md) + + +### Step: Turn a Workflow into a Template + +To turn a Workflow into a template, do the following: + +1. In a Workflow, click the More Options ⋮ menu next to the **Deploy** button, and click **Edit**. The **Workflow** settings appear. The Workflow settings that may be turned into variables have a **[T]** button in their fields. +2. For each setting you want to turn into a variable, click the **[T]** button in its field. The field values are replaced by variables. + + ![](./static/templatize-a-workflow-new-template-238.png) + + You can pass in variables from Triggers to set values for these Workflow variables. For more information, see [Passing Variables into Workflows and Pipelines from Triggers](../expressions/passing-variable-into-workflows.md). +3. Click **SUBMIT**. 
The new variables are displayed under **Workflow Variables**.
+4. To see how the Workflow variables are used, click **Deploy**. The **Start New Deployment** dialog appears, displaying the variables you created in the **Workflow Variables** section.
+
+   ![](./static/templatize-a-workflow-new-template-239.png)
+
+   When this deployment is run, users can select a value for each setting from the **Value** drop-down. Now the Workflow is a template that can be used by multiple services.
+
+In the **Workflows** page, Workflow templates are identified with a template icon.
+
+![](./static/templatize-a-workflow-new-template-240.png)
+
+The template option for Workflow settings is available in the main Workflow settings in all Workflow types. In multi-phase deployment types (Canary), some settings may also be templated in the Phase settings.
+
+### Next Steps
+
+* [Template a Workflow](workflow-configuration.md#template-a-workflow)
+
diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/using-the-http-command.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/using-the-http-command.md
new file mode 100644
index 00000000000..d5472a13a01
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/using-the-http-command.md
@@ -0,0 +1,172 @@
+---
+title: Using the HTTP Command
+description: Add HTTP commands to a Harness Workflow to run HTTP methods that contain URLs, headers, assertions, and variables.
+# sidebar_position: 2
+helpdocs_topic_id: m8ksas9f71
+helpdocs_category_id: a8jhf8hizv
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+You can use the HTTP step to run HTTP methods containing URLs, methods, headers, assertions, and variables. 
+
+![](./static/using-the-http-command-82.png)
+
+In this topic:
+
+* [Create HTTP Command](using-the-http-command.md#create-http-command)
+* [Header Capability Check](using-the-http-command.md#header-capability-check)
+* [Reserved Words for Export Variable Names](using-the-http-command.md#reserved-words-for-export-variable-names)
+* [Next Steps](using-the-http-command.md#next-steps)
+
+### Create HTTP Command
+
+1. In your **Workflow**, click **Add Step**. The **Add Step** settings appear.
+2. To create a new HTTP command, click **HTTP**. To use an existing HTTP command, select a Template Library and link the HTTP command template to your Workflow. For this example, we will create a new HTTP command.
+
+The **HTTP** settings appear.
+
+Harness supports the following HTTP command options.
+
+#### Name
+
+Enter the name for the HTTP command.
+
+#### URL
+
+Enter the URL for the HTTP call.
+
+#### Method
+
+Select the [HTTP method](https://restfulapi.net/http-methods/#summary).
+
+#### Headers
+
+Enter the headers for the message, such as its media type. For example, if you are using the GET method, the headers can specify the response body content type Harness will check for.
+1. In **Headers**, click **Add**.
+2. In **Key**, enter the key. For example, `Token, Variable:`.
+3. In **Value**, enter the value. For example, `${secrets.getValue("aws-playground_AWS_secret_key")}`, `var1,var2:var3`.
+
+You can enter multiple header entries. Click **Add** to add **Key** and **Value** fields.
+
+![](./static/_http-headers.png)
+
+#### Body
+
+Enter the message body (if any) of the HTTP message.
+
+#### Assertion
+
+The assertion is used to validate the incoming response. For example, if you wanted to check the health of an HTTP connection, you could use the assertion **${httpResponseCode}==200**.
+
+To see the available expressions, enter `${` in the Assertions field. 
The HTTP expressions are described in the [HTTP](https://docs.harness.io/article/9dvxcegm90-variables#http) section of [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables).
+
+You can also use JSON and XML functors as described in [JSON and XML Functors](https://docs.harness.io/article/wfvecw3yod-json-and-xml-functors). For example:
+
+`json.select("status", ${httpResponseBody}) == "success"`
+
+#### Timeout
+
+Enter a value, in seconds, for how long Harness should wait for a response from the server you specified in **URL**.
+
+#### Process Additional Variables
+
+Create variables using built-in Harness expressions. You can then publish these as output variables using the **Publish output in the context** settings. Whatever is configured in **Process Additional Variables** can be made available in the context defined in **Publish output in the context**.
+
+In **Name**, enter a name for the variable. In **Expression**, enter an expression that obtains some value. Harness supports these functors and methods:
+
+* JSON Path:
+  + `select()`. Example: `${json.select("path-in-response", httpResponseBody)}`
+  + `object()`. Example: `${json.object(httpResponseBody).item}`
+  + `list()`. Example: `${json.list("store.book", httpResponseBody).get(2).isbn}`
+* XPath:
+  + `select()`. Example: `${xml.select("/bookstore/book[1]/title", httpResponseBody)}`
+
+For details, see [JSON and XML Functors](https://docs.harness.io/article/wfvecw3yod-json-and-xml-functors).
+
+See [Variable Expression Name Restrictions](https://docs.harness.io/article/9dvxcegm90-variables#variable_expression_name_restrictions).
+
+#### **Publish output in the context**
+
+Select this option to create a variable containing the content of a variable specified in **Process Additional Variables**.
+
+In **Publish Variable Name**, enter a unique name to define the output context. You will use this name to reference the variable elsewhere. 
+
+For example, if the name of a variable in **Process Additional Variables** is `name` and the **Publish Variable Name** is `httpoutput`, you would reference it with `${context.httpoutput.name}`.
+
+Here is the HTTP step:
+
+![](./static/_http-step.png)
+
+And here is a Shell Script step referencing the published variable using `${context.httpoutput.name}`:
+
+![](./static/_http-shell-script.png)
+
+In **Scope**, select **Pipeline**, **Workflow**, or **Phase**. The output variables are available within the scope you set here.
+
+The scope you select is useful for preventing variable name conflicts. You might use a Workflow with published variables in multiple pipelines, so scoping the variable to **Workflow** will prevent conflicts with other Workflows in the Pipeline.
+
+#### Delegate Selectors
+
+You can use Selectors to select which Harness Delegates to use when executing the HTTP step. Enter the Selectors of the Delegates you want to use.
+
+You can also use [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables). For example, if you have a Workflow variable named `delegate`, you can enter `${workflow.variables.delegate}`. When you deploy the Workflow, you can provide a value for the variable that matches a Delegate Selector.
+
+Harness will use Delegates matching the Selectors you select.
+
+If you use one Selector, Harness will use any Delegate that has that Selector. If you select two Selectors, a Delegate must have both Selectors to be selected. That Delegate might also have other Selectors, but it must have the two you selected.
+
+:::danger
+If your Workflow Infrastructure Definition's Cloud Provider is a Harness [Kubernetes Cluster Cloud Provider](https://docs.harness.io/article/l68rujg6mp-add-kubernetes-cluster-cloud-provider) or [AWS Cloud Provider](https://docs.harness.io/article/wt1gnigme7-add-amazon-web-services-cloud-provider) that uses Delegate Selectors, do not add a Selector to the Workflow step. 
The Workflow is already targeted to a specific Delegate.
+:::
+
+#### Use Delegate Proxy
+
+Select this option to explicitly use the delegate proxy settings. For details, see [Delegate Proxy Settings](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#delegate_proxy_settings).
+* If the Delegate is not using any proxy, selecting this option does not enable the proxy settings.
+* If you have specified a URL that is set up to bypass proxy settings on the Delegate, an error is thrown.
+
+
+### Header Capability Check
+
+Currently, this feature is behind the Feature Flag `HTTP_HEADERS_CAPABILITY_CHECK`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+When Harness runs an HTTP step and connects to a service, it checks to make sure that an HTTP connection can be established.
+
+Some services require that HTTP headers are included in connections. Without the headers, the HTTP connections fail and simple HTTP verification cannot be performed.
+
+Harness performs an HTTP header capability check for any header requirements on the target service.
+
+If the target host server requires headers and you do not include headers in the **Headers** setting of the HTTP step, the Harness Delegate will fail the deployment with the error `No eligible Delegates could perform this task` (`error 400`).
+
+Add the required headers in **Headers**, and then run the deployment. Adding the headers will prevent the 400 error. 
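Taken together, the checks in this topic boil down to an assertion on the response code plus a JSON lookup on the response body. A rough Python analogue of combining the assertion `${httpResponseCode}==200` with `json.select("status", ${httpResponseBody}) == "success"` — illustrative only, not Harness's actual expression-evaluation engine:

```python
import json

def evaluate_http_assertion(response_code: int, response_body: str) -> bool:
    """Illustrative analogue of an HTTP step assertion; not Harness code."""
    # Analogue of: ${httpResponseCode}==200
    if response_code != 200:
        return False
    # Analogue of: json.select("status", ${httpResponseBody}) == "success"
    body = json.loads(response_body)
    return body.get("status") == "success"

print(evaluate_http_assertion(200, '{"status": "success"}'))  # True
print(evaluate_http_assertion(503, '{"status": "success"}'))  # False
```

An assertion that evaluates to false fails the HTTP step, just as a non-matching **Assertion** expression fails the step in a deployment.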
+
+### Reserved Words for Export Variable Names
+
+The following words cannot be used for names in **Publish Variable Name**:
+
+* arm
+* ami
+* aws
+* host
+* setupSweepingOutputAppService
+* terragrunt
+* terraform
+* deploymentInstanceData
+* setupSweepingOutputEcs
+* deploySweepingOutputEcs
+* runTaskDeploySweepingOutputEcs
+* setupSweepingOutputAmi
+* setupSweepingOutputAmiAlb
+* ecsAllPhaseRollbackDone
+* Azure VMSS all phase rollback
+* k8s
+* pcfDeploySweepingOutput
+* CloudFormationCompletionFlag
+* terraformPlan
+* terraformApply
+* terraformDestroy
+* Elastigroup all phase rollback
+* setupSweepingOutputSpotinst
+* setupSweepingOutputSpotinstAlb
+
+### Next Steps
+
+* [Use Templates](https://docs.harness.io/article/ygi6d8epse-use-templates)
+* [Using the Shell Script Command](capture-shell-script-step-output.md)
+
diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/using-the-jenkins-command.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/using-the-jenkins-command.md
new file mode 100644
index 00000000000..e50bda391de
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/using-the-jenkins-command.md
@@ -0,0 +1,277 @@
+---
+title: Run Jenkins Jobs in Workflows
+description: With the Jenkins command, you can execute Jenkins jobs in the shell session of the Workflow.
+sidebar_position: 150
+helpdocs_topic_id: 5fzq9w0pq7
+helpdocs_category_id: a8jhf8hizv
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This content is for Harness [FirstGen](../../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://docs.harness.io/article/as4dtppasg).
+
+Harness integrates with [Jenkins](https://jenkins.io/), enabling you to run Jenkins jobs, to dynamically capture output variables from the jobs, and to pull artifacts from Jenkins.
+
+Harness' integration requires Jenkins version 2.130 or higher. 
+
+## Overview
+
+Among the steps you can include in a Harness Workflow is a **Jenkins** command.
+
+![](./static/using-the-jenkins-command-27.png)
+
+With the **Jenkins** command, you can execute Jenkins jobs in the Workflow's shell session.
+
+When executing a job, you can also *dynamically capture* the output from the job, publishing runtime variables based on the context. You can then use those variables in another step in the same Workflow or Phase, or in another Workflow or Phase in the same Pipeline.
+
+The **Shell Script** workflow command step can also set and capture shell session information and publish it as output variables. For more information, see [Using Shell Script Commands](capture-shell-script-step-output.md).
+
+### What Information is Available to Capture?
+
+Any Jenkins job information in the particular shell session of the Workflow can be captured and output using one or more Jenkins steps in that Workflow. In addition, you can capture information available using the built-in Harness variables. For more information, see [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables).
+
+Capturing and exporting output in the Jenkins step can be very powerful. For example, a Jenkins step could capture Jenkins build information in a Workflow, and a Harness service could echo the build information and use it in a complex function, and then export the output down the Pipeline for further evaluation.
+
+## Jenkins Plugin Requirements
+
+For Harness to capture Jenkins environment variables, your Jenkins configuration requires the [EnvInject Plugin](https://wiki.jenkins.io/display/JENKINS/EnvInject+Plugin). The plugin does not provide full compatibility with the Pipeline Plugin. See [Known limitations](https://plugins.jenkins.io/envinject) from Jenkins. 
+
+## Before You Begin
+
+* [Add a Workflow](workflow-configuration.md)
+* [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables)
+* [Using Shell Script Commands](capture-shell-script-step-output.md)
+
+## Use the Jenkins Command Step
+
+Before you can use this command, you need to add your Jenkins server as a Harness Artifact Server. See [Add Artifact Servers](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server).
+
+The following procedure adds and configures a Jenkins command step in a Workflow, and captures and publishes Jenkins output in a variable. Later, we will show you how to use the published variable in a Harness service.
+
+To use the Jenkins command step, do the following:
+
+1. In a Harness Application, open a Workflow. For this example, since we are using Jenkins, we will use a **Build** workflow. ![](./static/using-the-jenkins-command-28.png)
+2. In the Workflow **Phase** section, in **Prepare Steps**, click **Add Step**. The **Add Step** dialog opens.
+3. In **Add Step**, select **Jenkins**. The **Jenkins** settings appear. ![](./static/using-the-jenkins-command-29.png)
+4. Configure the **Jenkins** command step and click **SUBMIT**.
+
+The Jenkins command step has the following settings.
+
+### Jenkins Server
+
+Select the Jenkins server you added as a Harness Artifact Server. For more information, see the [Jenkins Artifact Server](https://docs.harness.io/article/qa7lewndxq-add-jenkins-artifact-servers) setup.
+
+You can turn this setting into a deployment runtime parameter by clicking the template button (**[T]**). Clicking the button turns the setting into a variable expression, such as `${Jenkins_Server}`. You can change the name of the expression.
+
+![](./static/_jenkins-expr.png)
+
+The Workflow now has a [Workflow variable](add-workflow-variables-new-template.md) for that setting.
+
+If you template the setting, Harness cannot pull the list of Jobs from a server. 
You must manually enter the Job name in **Job Name**.
+
+When you deploy the Workflow, you must select a Jenkins server for the Jenkins Server setting Workflow variable. In **Value**, you can select one of the [Jenkins Artifact Servers](https://docs.harness.io/article/qa7lewndxq-add-jenkins-artifact-servers) you have set up in Harness.
+
+![](./static/_jenkins-srvr2.png)
+
+You can also pass in variable expressions to use in this setting, or use a [Service Config Variable](../setup-services/add-service-level-config-variables.md). See [Pass Variables between Workflows](../expressions/how-to-pass-variables-between-workflows.md) and [Passing Variables into Workflows and Pipelines from Triggers](../expressions/passing-variable-into-workflows.md).
+
+### Job Name
+
+Select the Jenkins job (also called a project) to execute. The list is automatically populated using the **Jenkins Server** you selected.
+
+### Job Parameters
+
+If you are using a [parameterized build](https://wiki.jenkins.io/display/JENKINS/Parameterized+Build), click **Add** to add your name/value parameters. These are the parameters you will reference using the published output variables.
+
+### Treat unstable Jenkins status as success
+
+A build is **stable** if it was built successfully and no publisher reports it as unstable. A build is **unstable** if it was built successfully and one or more publishers report it unstable. For example, if the JUnit publisher is configured and a test fails, the build will be marked unstable.
+
+### Capture environment variables from Jenkins build
+
+Select this option to capture the environment variables from the Jenkins Job. These are the environment variables you can see in your Jenkins Job Environment Variables section (or via the API).
+
+![](./static/_cevfjb.png)
+
+Next, you can enable the **Jenkins Output in the Context** setting to output the environment variables. 
See [Using Published Jenkins Variables](using-the-jenkins-command.md#using-published-jenkins-variables) below.
+
+### Timeout
+
+Enter the timeout period, in milliseconds. For example, 600000 milliseconds is 10 minutes. The timeout period determines how long to wait for the step to complete. When the timeout expires, it is considered a workflow failure and the workflow [Failure Strategy](workflow-configuration.md#failure-strategy) is initiated.
+
+### Wait interval before execution
+
+Set how long the deployment process should wait before executing the step.
+
+### Execute with previous steps
+
+Check this checkbox to run this step in parallel with the previous steps.
+
+### Jenkins Output in the Context
+
+To export the Jenkins job output as a variable, enable **Jenkins output in the context**.
+
+### Variable Name
+
+Enter a unique name for the output variable. You will use this name to reference the variable elsewhere. For example, if the Variable Name is **Jenkins\_Test**, you would reference the Jenkins `Url` parameter set in **Job Parameters** with `${Jenkins_Test.Url}`.
+
+### Scope
+
+Select **Pipeline**, **Workflow**, or **Phase**. The output variables are available within the scope you set here.
+
+The scope you select is useful for preventing variable name conflicts. You might use a workflow with published variables in multiple pipelines, so scoping the variable to **Workflow** will prevent conflicts with other workflows in the pipeline.
+
+A completed Jenkins step looks something like this:
+
+![](./static/using-the-jenkins-command-30.png)
+
+:::note
+To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button.
+:::
+
+### Referencing Parameters in the Same Step
+
+You can reference the Job parameters you set in the Jenkins step's **Job Parameters** section in the same Workflow step (for example, **Collect Artifact**) using the name of the Jenkins step. 
+
+Here are two steps in the same section, **Collect Artifact**: a **Jenkins** step where the parameters are set in the **Job Parameters** section, and a **Shell Script** step that uses the parameters via variables.
+
+![](./static/using-the-jenkins-command-31.png)
+
+Here is the Job parameter set in the **Jenkins** step's **Job Parameters** section:
+
+![](./static/using-the-jenkins-command-32.png)
+
+Here is the parameter referenced in the **Shell Script** step using the **Jenkins** step name, **Jenkins**:
+
+`${Jenkins.jobParameters.Url}`
+
+To reference the parameters outside the step (in other steps in the Workflow, or in another Workflow in the Pipeline), use output variables, described in [Using Published Jenkins Variables](#using-published-jenkins-variables).
+
+### Using Published Jenkins Variables
+
+For Harness to capture Jenkins environment variables, your Jenkins configuration requires the [EnvInject Plugin](https://wiki.jenkins.io/display/JENKINS/EnvInject+Plugin). The plugin does not provide full compatibility with the Pipeline Plugin. See [Known Incompatibilities](https://plugins.jenkins.io/envinject/#plugin-content-jenkins-pipeline-compatibility) from Jenkins.
+
+You can use the Jenkins variable you published in the Jenkins step (via the **Jenkins Output in the Context** settings), and reference parameters and environment variables in other steps of the Workflow, such as a Shell Script step. 
+ +You can reference job parameters from the Jenkins step **Job Parameters** section with the `jobParameters` component in the following syntax: + +``` +${var_name.jobParameters.param_name} +``` +Or, if you selected the **Capture environment variables from Jenkins build** setting, you can reference environment variables with the `envVars` component in the following syntax: + + +``` +${var_name.envVars.envVar_name} +``` +#### Job Parameters + +For example, here is a Job parameter set in Jenkins step **Job Parameters** section: + +![](./static/using-the-jenkins-command-33.png) + +Here is the output variable in the same Jenkins step, scoped to Pipeline: + +![](./static/using-the-jenkins-command-34.png) + +Here is a Shell Script step referencing the Job parameter using the output variable: + +![](./static/using-the-jenkins-command-35.png) + +When the Workflow is deployed, the Harness Deployments page will list the output of the variable (in this case, the Url parameter we set): + +![](./static/using-the-jenkins-command-36.png) + +#### Environment Variables + +For Harness to capture Jenkins environment variables, your Jenkins configuration requires the [EnvInject Plugin](https://wiki.jenkins.io/display/JENKINS/EnvInject+Plugin). The plugin does not provide full compatibility with the Pipeline Plugin. 
See [Known Incompatibilities](https://wiki.jenkins.io/display/JENKINS/EnvInject+Plugin#EnvInjectPlugin-Knownincompatibilities) from Jenkins.
+
+Let's look at an example for environment variables where you selected the **Capture environment variables from Jenkins build** setting, and then used **Jenkins Output in the Context** settings to set an output variable named `myVar`:
+
+![](./static/using-the-jenkins-command-37.png)
+
+Here is an example of the environment variables from a Jenkins Job (via the API):
+
+```
+{
+  "_class" : "org.jenkinsci.plugins.envinject.EnvInjectVarList",
+  "envMap" : {
+    "BUILD_CAUSE" : "MANUALTRIGGER",
+    "BUILD_CAUSE_MANUALTRIGGER" : "true",
+    "BUILD_DISPLAY_NAME" : "#65",
+    "BUILD_ID" : "65",
+    "BUILD_NUMBER" : "65",
+    "BUILD_TAG" : "jenkins-build-descriptor-setter-with-params-65",
+    "BUILD_URL" : "https://jenkinsint.harness.io/job/build-descriptor-setter-with-params/65/",
+    ...
+  }
+}
+```
+
+In a Shell Script step in the Workflow, you can get the BUILD\_URL environment variable using a variable containing `envVars`:
+
+```
+${myVar.envVars.BUILD_URL}
+```
+
+You can get all environment variables by simply omitting any specific environment variable name:
+
+```
+${myVar.envVars}
+```
+
+### Harness Built-in Parameter Variables
+
+Harness includes built-in Jenkins parameters you can use in your Shell Script steps, but only within the same Workflow step section as the Jenkins step. 
For example, here are two steps in the same section, a Jenkins step and a Shell Script step that uses the built-in variables:
+
+![](./static/using-the-jenkins-command-38.png)
+
+Here is the list of parameters with their variables:
+
+* **buildNumber** - `${Jenkins.buildNumber}`
+* **buildUrl** - `${Jenkins.buildUrl}`
+* **buildDisplayName** - `${Jenkins.buildDisplayName}`
+* **buildFullDisplayName** - `${Jenkins.buildFullDisplayName}`
+* **jobStatus** - `${Jenkins.jobStatus}`
+* **description** - `${Jenkins.description}` (requires [Descriptor Setter](https://wiki.jenkins.io/display/JENKINS/Description+Setter+Plugin) plugin in Jenkins)
+
+When the Workflow is deployed, the Shell Script is run and the echo output using the variables is displayed.
+
+![](./static/using-the-jenkins-command-39.png)
+
+### Multibranch Pipeline Support
+
+For Harness to capture Jenkins environment variables, your Jenkins configuration requires the [EnvInject Plugin](https://wiki.jenkins.io/display/JENKINS/EnvInject+Plugin). The plugin does not provide full compatibility with the Pipeline Plugin. See [Known Incompatibilities](https://wiki.jenkins.io/display/JENKINS/EnvInject+Plugin#EnvInjectPlugin-Knownincompatibilities) from Jenkins.
+
+The Jenkins Multibranch Pipeline (Workflow Multibranch) feature enables you to automatically create a Jenkins pipeline for each branch on your source control repo.
+
+Each branch has its own [Jenkinsfile](https://jenkins.io/doc/book/pipeline/jenkinsfile/), which can be changed independently. This feature enables you to handle branches better by automatically grouping builds from feature/experimental branches. 
+
+In the Harness Workflow Jenkins command, multibranch pipelines are displayed alongside other Jobs, with the child branches as subordinate options:
+
+![](./static/using-the-jenkins-command-40.png)
+
+When you deploy the Harness Workflow, the branch you selected is built:
+
+![](./static/using-the-jenkins-command-41.png)
+
+For more information, see [Pipeline-as-code with Multibranch Workflows in Jenkins](https://jenkins.io/blog/2015/12/03/pipeline-as-code-with-multibranch-workflows-in-jenkins/) from Jenkins.
+
+### Reserved Words for Export Variable Names
+
+The following words cannot be used for names in Jenkins Output in the Context **Variable Name**:
+
+* arm
+* ami
+* aws
+* host
+* setupSweepingOutputAppService
+* terragrunt
+* terraform
+* deploymentInstanceData
+* setupSweepingOutputEcs
+* deploySweepingOutputEcs
+* runTaskDeploySweepingOutputEcs
+* setupSweepingOutputAmi
+* setupSweepingOutputAmiAlb
+* ecsAllPhaseRollbackDone
+* Azure VMSS all phase rollback
+* k8s
+* pcfDeploySweepingOutput
+* CloudFormationCompletionFlag
+* terraformPlan
+* terraformApply
+* terraformDestroy
+* Elastigroup all phase rollback
+* setupSweepingOutputSpotinst
+* setupSweepingOutputSpotinstAlb
+
diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/verify-workflow-new-template.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/verify-workflow-new-template.md
new file mode 100644
index 00000000000..b999512918b
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/verify-workflow-new-template.md
@@ -0,0 +1,36 @@
+---
+title: Verify Workflow
+description: Provide details on how to set up a verification process. A verification step ties your Workflow and your Verification Provider to Harness Continuous Verification features. 
sidebar_position: 40
+helpdocs_topic_id: mec513o1oc
+helpdocs_category_id: a8jhf8hizv
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+One of the most important steps in a Workflow is verification. A verification step ties your Workflow and your Verification Provider to **Harness Continuous Verification** features.
+
+In order to obtain the names of the host(s) or container(s) where your Service is deployed, the **Verify Steps** should be defined in your Workflow **after** you have run at least one successful deployment.
+
+In this topic:
+
+* [Before You Begin](#before_you_begin)
+* [Step: Add Verification Steps to Workflows](#add_verify_step_workflows)
+
+### Before You Begin
+
+* [Add a Workflow](tags-how-tos.md)
+* [Workflows](workflow-configuration.md)
+
+### Step: Add Verification Steps to Workflows
+
+To set up a verification step, do the following:
+
+1. In a Workflow, click **Verify Service** and then **Add Step**. The **Add Step** settings appear.
+2. In **Verifications**, click the Verification Provider connected to the Service you are deploying in this Workflow. The configuration dialog for the Service appears.
+3. Fill out the Verification Provider's configuration dialog and click **Submit**. The **Verify Service** step displays the Verification Provider.
+
+   ![](./static/verify-workflow-new-template-220.png)
+
+In a multi-phase deployment, the verification steps are in the sub-steps within each phase.
+
+You can read more about the different verification integrations in [What Is Continuous Verification (CV)?](https://harness.helpdocs.io/article/ina58fap5y-what-is-cv) and [Verification Providers](https://docs.harness.io/article/r6ut6tldy0-verification-providers). 
+ diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/workflow-configuration.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/workflow-configuration.md new file mode 100644 index 00000000000..49e3d43cc61 --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/workflow-configuration.md @@ -0,0 +1,268 @@ +--- +title: Workflows +description: Define deployment orchestration steps, including how a Service is deployed, verified, rolled back, and more. The common Workflow types are Canary, Blue/Green, and Rolling. +sidebar_position: 10 +helpdocs_topic_id: m220i1tnia +helpdocs_category_id: a8jhf8hizv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Workflows define the deployment orchestration steps, including how a Service is deployed, verified, rolled back, and more. Some of the common Workflow types are Canary, Blue/Green, and Rolling. An Application might have different deployment orchestration steps for different Environments, each managed in a Workflow. 
+ +If you're looking for Workflow How-tos, see the following: + +* [Add a Workflow](tags-how-tos.md) +* [Deploy Individual Workflow](deploy-a-workflow.md) +* [Verify Workflow](verify-workflow-new-template.md) +* [Add a Workflow Notification Strategy](add-notification-strategy-new-template.md) +* [Define Workflow Failure Strategy](define-workflow-failure-strategy-new-template.md) +* [Set Workflow Variables](add-workflow-variables-new-template.md) +* [Use Steps for Different Workflow Tasks](add-steps-for-different-tasks-in-a-wor-kflow.md) +* [Add Phases to a Workflow](add-workflow-phase-new-template.md) +* [Synchronize Workflows in your Pipeline Using Barrier](synchronize-workflows-in-your-pipeline-using-barrier.md) +* [Templatize a Workflow](templatize-a-workflow-new-template.md) +* [Clone a Workflow](clone-a-workflow.md) +* [Configure Workflow Using YAML](configure-workflow-using-yaml.md) + +### Before You Begin + +Before learning about workflows, you should have an understanding of the following: + +* [Application Components](../applications/application-configuration.md) +* [Add a Service](../setup-services/service-configuration.md) +* [Add an Environment](../environments/environment-configuration.md) + +### Workflow Types + +If you are new to deployment strategies, read [Deployment Concepts and Strategies](https://docs.harness.io/article/325x7awntc-deployment-concepts-and-strategies) to learn about common deployment strategies. This will help you understand the deployment strategies Harness Workflows implement. + +The following Workflow types are available when creating a Workflow. 
+
+* **Basic** — A Basic deployment selects nodes and installs a service.
+
+  See:
+  * [AMI Basic Deployment](https://docs.harness.io/article/rd6ghl00va-ami-deployment)
+  * [Lambda Workflows and Deployments](https://docs.harness.io/article/491a6etr7a-4-lambda-workflows-and-deployments)
+  * [Helm Workflows and Deployments](https://docs.harness.io/article/m8ra49bqd5-4-helm-workflows)
+  * [IIS Workflows and Pipelines](https://docs.harness.io/article/z6ls3tgkqc-4-iis-workflows)
+  * [PCF Workflows and Deployments](https://docs.harness.io/article/c92izkztka-create-a-basic-pcf-deployment)
+  * Shell Script-based deployments, such as [Build and Deploy Pipelines](https://docs.harness.io/article/181zspq0b6-build-and-deploy-pipelines-overview) and [Traditional Deployments](https://docs.harness.io/article/6pwni5f9el-traditional-deployments-overview).
+
+* **Multi-Service** — A Multi-Service deployment uses one or more phases, each composed of separate steps.
+
+* **Canary** — A Canary deployment rolls out a new app version to small sets of users in separate phases, tests and verifies it at each phase, and gradually rolls it out to your entire infrastructure.
+
+  See:
+  * [AMI Canary Deployment](https://docs.harness.io/article/agv5t7d156-ami-canary)
+  * [Create a Kubernetes Canary Deployment](https://harness.helpdocs.io/article/2xp0oyubjj-create-a-kubernetes-canary-deployment)
+  * [ECS Workflows](https://docs.harness.io/article/oinivtywnl-ecs-workflows)
+  * [PCF Workflows and Deployments](https://docs.harness.io/article/99bxiqfi1u-create-a-canary-pcf-deployment)
+
+  :::note
+  **Kubernetes Canary Workflows:** While you can add multiple phases to a Kubernetes Canary Workflow, you should simply use the Canary and Primary Phases generated by Harness when you add the first two phases. Kubernetes deployments have built-in controls for rolling out in a controlled way. The Canary Phase is a way to test the new build, run your verification, then roll out in the Primary Phase.
+  :::
+
+* **Build** — A Build deployment simply builds and collects artifacts. You can use it as part of a Pipeline that builds the latest artifact and deploys it, or as the first step in a Pipeline that is executed in response to a source update such as a Git push event.
+
+  :::note
+  If you use a Build Workflow in a Pipeline, you cannot select an artifact when you deploy the Pipeline. A Build Workflow tells Harness you will be building the artifact for deployment as part of the Pipeline. Harness will use that artifact for the Pipeline deployment.
+
+  See [Build and Deploy Pipelines Overview](https://docs.harness.io/article/181zspq0b6-build-and-deploy-pipelines-overview) and [Using Build Workflows in a Pipeline](https://docs.harness.io/article/slkhuejdkw-6-artifact-build-and-deploy-pipelines#using_build_workflows_in_a_pipeline).
+  :::
+
+* **Rolling** — A Rolling deployment lets you gradually roll out your deployment, enabling and disabling services as necessary.
+
+  See:
+
+  * [Create a Kubernetes Rolling Deployment](https://harness.helpdocs.io/article/dl0l34ge8l-create-a-kubernetes-rolling-deployment)
+  * [Azure Workflows and Deployments](https://docs.harness.io/article/x87732ti68-4-azure-workflows-and-deployments)
+
+
+* **Blue/Green** — In a Blue/Green deployment, network traffic to your service/artifact is routed between two identical environments called blue (staging) and green (production). Both environments run simultaneously, containing different versions of the service/artifact.
+
+  See:
+  * [AMI Blue/Green Deployment](https://docs.harness.io/article/vw71c7rxhp-ami-blue-green)
+  * [ECS Blue/Green Workflows](https://docs.harness.io/article/7qtpb12dv1-ecs-blue-green-workflows)
+  * [Create a Kubernetes Blue/Green Deployment](https://harness.helpdocs.io/article/ukftzrngr1-create-a-kubernetes-blue-green-deployment)
+  * [PCF Workflows and Deployments](https://docs.harness.io/article/52muxcsr1v-create-a-blue-green-pcf-deployment)
+
+
+When you submit, the Workflow displays the steps to perform, based on the Workflow type you selected.
+
+### Workflow Variables
+
+You can set variables in the **Workflow Variables** section of your Workflow, and use them in the Workflow step commands and settings.
+
+See [Set Workflow Variables](add-workflow-variables-new-template.md).
+
+For information on variables and expressions, see [Variables and Expressions in Harness](https://docs.harness.io/article/9dvxcegm90-variables) and [Passing Variables into Workflows and Pipelines from Triggers](../expressions/passing-variable-into-workflows.md).
+
+### Template a Workflow
+
+See [Templatize a Workflow](templatize-a-workflow-new-template.md).
+
+You can turn a Workflow into a Workflow template ("templatize it") by using variables for important settings such as Environment, Service, and Infrastructure Definition. When the Workflow is deployed, the user must provide values for the settings you have defined as variables.
+
+When you turn a Workflow into a template, the Workflow still retains references to the entities it was originally created with (such as a Service). Any attempt to delete these entities will result in an error, because the template is still using them. Remove the references from the template before you delete the entities.
+
+#### Workflow Templates and Service Types
+
+The Workflow template only works with Services using the same Deployment and Artifact Type as the Service used to create the Workflow.
This applies to Services of Deployment Type **Secure Shell (SSH)**. + +For example, let's say you created a Workflow using a Service with the Deployment Type **Secure Shell (SSH)** and Artifact Type **JAR**. If you have another Service with the same Deployment Type, but the Artifact Type is **WAR**, the Workflow template will not show it as an option during deployment. + +#### Templatize Phases in Canary Workflows + +For Canary Workflows, you edit the Phase settings and click the **[T]** next to **Service**. The **[T]** was automatically selected for **Infrastructure Definition** when you clicked the **[T]** for **Environment**. + +![](./static/workflow-configuration-221.png) + +#### Templatized Workflows in Pipelines + +Once you have templatized a Workflow, you can use it in multiple stages of a pipeline. + +For example, you can templatize the **Environment** and **Infrastructure Definition** of a Workflow, and then use the same Workflow for both the QA and Production stages of a Pipeline. When you add the Workflow to each stage, you simply provide QA and Production-specific values for **Environment** and **Infrastructure Definition** variables. + + + +| | | +| --- | --- | +| **Workflow Variables in QA Stage of Pipeline** | **Workflow Variables in Production Stage of Pipeline** | +| ![](./static/_wf-vars-qa-left.png) | ![](./static/_wf-vars-prod-right.png) | + +### Workflow Phases + +In multi-phase deployments, such as a Canary Deployment, Workflow steps are grouped into phases. Here is a Canary Workflow before the phases and sub-steps are added: + +![](./static/workflow-configuration-222.png) + +:::note +You cannot run a Workflow's phases in parallel. Consider using multiple Workflows. +::: + +### Barriers + +When deploying interdependent services, such as microservices or a large and complicated application, there might be a need to coordinate the timing of the different components' deployments. 
A common example is the need to verify a group of services only after *all the services* are deployed successfully.
+
+Harness Workflows address this scenario using barriers. Barriers allow you to synchronize different Workflows in your Pipeline, and control the flow of your deployment systematically.
+
+Barriers have an effect only when two or more Workflows use the same barrier name, and are executed in parallel in a Pipeline. When executed in parallel, both Workflows will cross the barrier at the same time.
+
+If a Workflow fails before reaching its barrier point, the Workflow signals the other Workflows that have the same barrier, and the other Workflows will react as if they failed as well. At that point, each Workflow will act according to its [Failure Strategy](#failure_strategies).
+
+![](./static/workflow-configuration-223.png)
+
+For more information on how to synchronize your Workflows using Barriers, see [Synchronize Workflows in your Pipeline Using Barriers](synchronize-workflows-in-your-pipeline-using-barrier.md).
+
+### Rollback Steps
+
+You define the steps of a Workflow rollback in **Rollback Steps**. Typically, you want to roll back failed containers and container orchestration setup. You can also verify that the rollback has restored the last working version of your Service.
+
+Harness performs rollback differently depending on the target platform (deployment type), the deployment strategy, and the steps where you can configure how Harness handles old versions. See [Kubernetes Rollback](https://docs.harness.io/article/v41e8oo00e-kubernetes-rollback) and [ECS Rollbacks](https://docs.harness.io/article/d7rnemtfuz-ecs-rollback) as examples.
+
+In general, during a successful deployment, Harness deletes all old versions of the deployed service except for the last successfully deployed version. This version is kept for rollback. If rollback occurs, Harness restores the last successful version but not the older versions it deleted.
+
+For Docker, Kubernetes, AWS CodeDeploy, and Lambda deployments, Harness rolls back the deployment to the state that existed before it received the new code.
+
+For JAR, WAR, RPM, TAR, ZIP, and other deployments, Harness provides default rollback steps (Disable, Stop, Deploy, Enable, Wrap-up). You can add custom commands in cases where you need to customize the rollback procedure.
+
+:::note
+* If you deploy a Workflow and choose the **Abort** option during the running deployment, the Rollback Steps for the Workflow are not executed. Abort stops the deployment execution without rollback or cleanup. To execute the Rollback Steps, click the **Rollback** button.
+
+* For post-production rollback, see [Rollback Production Deployments](post-deployment-rollback.md).
+:::
+
+To set up rollback steps, do the following:
+
+1. In a Workflow, click **Rollback Steps** to see the default steps. Here is an example of the default rollback steps in a Workflow that deploys a Docker image to a Kubernetes cluster:
+
+   ![](./static/workflow-configuration-224.png)
+
+### Notification Strategy
+
+By default, when a Workflow fails, the Account Administrator is notified. You can specify a notification strategy for a Workflow (or for a Workflow phase in a Canary or Multi-Service Workflow) that sends notifications using different criteria.
+
+See [Add a Workflow Notification Strategy](add-notification-strategy-new-template.md).
+
+### Failure Strategies
+
+A Failure Strategy defines how your Workflow handles different failure conditions. For example, if you are deploying a Service to a cluster of 100 nodes, what percentage of connectivity errors would you allow before failing the deployment?
+
+There are two ways to define a failure strategy:
+
+* The **Failure Strategy** settings for the entire Workflow.
+
+  ![](./static/workflow-configuration-225.png)
+
+* Step-level failure strategy for a Workflow step section.
+
+  ![](./static/workflow-configuration-226.png)
+
+See [Define Workflow Failure Strategy](define-workflow-failure-strategy-new-template.md).
+
+### Concurrency Strategy
+
+You can edit a Workflow's **Concurrency Strategy** section to override Harness' default [Workflow queuing](workflow-queuing.md) behavior. For details, see [Using Concurrency Strategy to Control Queuing](workflow-queuing.md#concurrency-strategy).
+
+![](./static/workflow-configuration-227.png)
+
+### Rollback a Running Workflow
+
+You can roll back a running Workflow from the **Deployments** page.
+
+![](./static/workflow-configuration-228.png)
+
+To roll back a Production deployment from the Services dashboard, see [Rollback Production Deployments](post-deployment-rollback.md).
+
+This **Rollback** option requires the following User Group Account and Application permissions:
+
+* **Account:** `Manage Applications`
+* **Application:** `Rollback Workflow`
+
+![](./static/workflow-configuration-229.png)
+
+You can also add the **Rollback Workflow** Application permission via the GraphQL API:
+
+
+```
+mutation {
+  updateUserGroupPermissions(input: {
+    clientMutationId: "123"
+    userGroupId: "Gh9IDnVrQOSjckFbk_NJWg"
+    permissions: {
+      appPermissions: {
+        actions:[ROLLBACK_WORKFLOW]
+        permissionType: ALL
+        applications: {
+          filterType: ALL
+        }
+        deployments: {
+          filterTypes: NON_PRODUCTION_ENVIRONMENTS
+        }
+      }
+    }
+  }) {
+    clientMutationId
+  }
+}
+```
+#### Rollback Workflow Added if Execute Workflow Used Previously
+
+All User Groups that had the **Execute Workflow** permission enabled will now have **Rollback Workflow** enabled as well. You can disable it if needed.
+
+#### Platform and Workflow Support
+
+Rollback for running Workflows is currently supported for the following platforms and strategies:
+
+* **Kubernetes** deployments: Basic, Blue/Green, Canary, and Rolling Workflows.
+* **SSH** deployments: Blue/Green, Canary, and Basic Workflows.
+* **PCF (Pivotal Cloud Foundry)** deployments: Blue/Green, Canary, and Basic Workflows. +* **WinRM (IIS and .NET)** deployments: Blue/Green, Canary, and Basic Workflows. +* **ECS** deployments: all Workflow types, and both EC2 and Fargate clusters. +* **AMI/ASG** deployments: Blue/Green, Canary, and Basic Workflows. + +Harness anticipates expanding this feature to other deployment platforms. + +### Next Steps + +Read the following topics to build on what you've learned: + +* [Add a Workflow](tags-how-tos.md) +* [Deploy Individual Workflow](deploy-a-workflow.md) +* [Add a Workflow Notification Strategy](add-notification-strategy-new-template.md) +* [Define Workflow Failure Strategy](define-workflow-failure-strategy-new-template.md) +* [Synchronize Workflows in your Pipeline Using Barriers](synchronize-workflows-in-your-pipeline-using-barrier.md) +* [Workflow Queuing](workflow-queuing.md) + diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/workflow-queuing.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/workflow-queuing.md new file mode 100644 index 00000000000..48c7083711f --- /dev/null +++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/workflow-queuing.md @@ -0,0 +1,143 @@ +--- +title: Workflow Queuing +description: How Harness queues Workflows to prevent conflicts on shared target infrastructure. Includes instructions for overriding default queuing behavior. +sidebar_position: 190 +helpdocs_topic_id: hui1k7seo1 +helpdocs_category_id: a8jhf8hizv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic outlines how Harness queues Workflows, to prevent conflicts when two or more Workflows simultaneously deploy the same Harness Service to the same Harness Infrastructure Definition. 
+
+### Before You Begin
+
+Ensure that you understand the following:
+
+* [Services](../setup-services/service-configuration.md)
+* [Workflows](workflow-configuration.md)
+* [Infrastructure Definitions](../environments/infrastructure-definitions.md)
+
+### Overview
+
+When multiple Harness Workflows simultaneously deploy to the same infrastructure, this can generate conflicts. To prevent such conflicts, Harness normally places a *resource lock* on the infrastructure, and queues the Workflows in FIFO (First In, First Out) order.
+
+You can override this behavior, as covered [below](#concurrency_strategy). Queuing is particularly valuable for Pipelines that execute multiple Workflows in parallel.
+
+### Limitations
+
+Currently, the following queue limitation is behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Feature Flags can only be removed for Harness Professional and Essentials editions. Once the feature is released to a general audience, it is available for Trial and Community Editions.
+
+Harness allows a maximum of **20 Workflow executions** in the queue that locks an infrastructure. Subsequent Workflows using that infrastructure will fail if the queue is full.
+
+This queue limit prevents a misconfigured Trigger or other execution mechanism from overloading your queue and blocking important deployments.
+
+### Using Concurrency Strategy to Control Queuing
+
+By default, Harness Workflows have their **Concurrency Strategy** set to **Acquire lock on the targeted service infrastructure**. This is the setting that enables queuing for shared infrastructure.
+
+![](./static/workflow-queuing-103.png)
+
+To exempt a Workflow from queuing behavior, click the pencil icon to open the **Concurrency Strategy** dialog shown here. Set the **Concurrency Control** drop-down to **Synchronization not required**, and click **Submit**.
+
+![](./static/workflow-queuing-104.png)
+
+### Synchronization Not Required Best Practices
+
+The golden rule with **Synchronization Not Required** is: Only use a concurrency strategy of **Synchronization Not Required** when it does not matter if multiple Workflows are running concurrently.
+
+For example, use it for a Workflow that simply hits an HTTP endpoint to post a message where message order does not matter, or for some other operation that is already an encapsulated transaction.
+
+When to use **Acquire lock on the targeted service infrastructure**:
+
+* If concurrently running Workflows are acting on the same infrastructure. Running Workflows like this concurrently can cause interference in many ways.
+* For any Workflow that modifies state over time to reach a new state, it must have a concurrency strategy that causes the Workflows to queue rather than overlap.
+
+### How Harness Locks Infrastructure
+
+Here is an example of how infrastructure locking works. In this Harness [Infrastructure Definition](../environments/infrastructure-definitions.md), the Kubernetes cluster's **Namespace** field is populated by the variable `${workflow.variables.namespace}`:
+
+![](./static/workflow-queuing-105.png)
+
+In this Kubernetes Workflow, we define the corresponding `namespace` variable:
+
+![](./static/workflow-queuing-106.png)
+
+In the **Workflow Variables** dialog, we've assigned the variable no default value:
+
+![](./static/workflow-queuing-107.png)
+
+As we begin deployment of this Workflow, we assign the variable the value `target`:
+
+![](./static/workflow-queuing-108.png)
+
+Once the Workflow deploys, the **Details** panel confirms that a lock has been placed on the resulting `namespace: target` and Harness Service combination:
+
+![](./static/workflow-queuing-109.png)
+
+Harness locks on the unique **combination** of namespace and Harness Service. We do not lock the Service from being deployed to another namespace.
Harness simply makes the Workflow wait if there is another Workflow running that uses the same Service and the same namespace together.
+
+If we open the details page for the Infrastructure Definition that we started with, it displays a newly created Infrastructure Mapping for the `target` namespace we specified:
+
+![](./static/workflow-queuing-110.png)
+
+### Acquiring Resource Locks
+
+When pending Workflows are contending for shared infrastructure, Harness uses the above mechanism to place the Workflows in a *resource lock queue*. The first-launched Workflow gets a *resource lock* on this infrastructure, which it holds until its deployment is resolved. This temporarily blocks other Workflows from using the infrastructure; only one Workflow at a time can have a lock on a given infrastructure.
+
+When contending for shared infrastructure, most Workflows will therefore display an **Acquire Resource Lock** step in Harness' Deployments page:
+
+![](./static/workflow-queuing-111.png)
+
+This step appears even if no queue is present, because it's specified by Harness' **Acquire lock on the targeted service infrastructure** [default setting](#concurrency_strategy). There are two exceptions:
+
+* No **Acquire Resource Lock** step will appear in Workflows of *Build* deployment type.
+* No **Acquire Resource Lock** step will appear in Workflows that have been [configured to ignore queuing](#concurrency_strategy).
+
+Harness seeks to acquire a Resource Lock only once per Workflow. The lock's scope is the current Workflow. The **Acquire Resource Lock** step occurs in the first deployment or setup phase that *follows* any **Pre-Deployment** phase or step.
+
+This example, using a Pivotal Cloud Foundry Blue/Green Workflow, shows the **Acquire Resource Lock** step's typical position in a deployment:
+
+![](./static/workflow-queuing-112.png)
+
+### Queuing in Action
+
+Let's look at a full example of how queuing works.
Assume that we have two similar Harness Kubernetes Workflows, `gke demo` and `gke demo-template-clone`. Both deploy to the same infrastructure, because they share the same [Infrastructure Definition](../environments/infrastructure-definitions.md).
+
+When both Workflows are deployed simultaneously, Harness might initiate the deployment of `gke demo` slightly before `gke demo-template-clone`. This scenario applies to a Pipeline whose stages run these Workflows in parallel, but in that order:
+
+![](./static/workflow-queuing-113.png)
+
+If we examine the `gke demo-template-clone` Workflow's Deployments page, we might initially see something like this:
+
+![](./static/workflow-queuing-114.png)
+
+This deployment is paused at its **Acquire Resource Lock** step. The **Details** panel shows why:
+
+![](./static/workflow-queuing-115.png)
+
+The `gke demo-template-clone` deployment is second in the **Resource Lock Queue**, so it currently has **BLOCKED** status.
+
+Meanwhile, the `gke demo` deployment is first in the queue. If we immediately switch to its Deployments page, we might see something like this:
+
+![](./static/workflow-queuing-116.png)
+
+This Workflow's **Acquire Resource Lock** step has completed; it has acquired the lock. The **Details** panel confirms that this first-in-queue deployment has **ACTIVE** status:
+
+![](./static/workflow-queuing-117.png)
+
+So the `gke demo` Workflow can now proceed through completion:
+
+![](./static/workflow-queuing-118.png)
+
+Once the `gke demo` Workflow has completed deployment, this clears the queue. The `gke demo-template-clone` Workflow can now acquire the lock and proceed to deploy:
+
+![](./static/workflow-queuing-119.png)
+
+### Queuing with Infrastructure Provisioners
+
+Workflows incorporating [Infrastructure Provisioners](../infrastructure-provisioner/add-an-infra-provisioner.md) are queued the same way as Workflows based on predefined infrastructure.
Infrastructure Provisioner commands are always added in the Workflow's **Pre-Deployment** phase. This sets up the new infrastructure, enabling Harness to properly queue and lock deployments to that infrastructure in the following phase.
+
+![](./static/workflow-queuing-120.png)
+
+### Next Steps
+
+* To more precisely synchronize multiple Workflows within a Pipeline, use [Barriers](workflow-configuration.md#barriers).
+* To queue deployments account-wide, add a Resource Constraint. See [Resource Restrictions](resource-restrictions.md).
+
diff --git a/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/workflow-steps-ui-changes.md b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/workflow-steps-ui-changes.md
new file mode 100644
index 00000000000..d8c4717a873
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/model-cd-pipeline/workflows/workflow-steps-ui-changes.md
@@ -0,0 +1,39 @@
+---
+title: Workflow Steps UI Changes
+description: Harness is changing how you add steps in Workflows to offer new features for managing commonly-used steps. The settings inside each step will remain the same, but the user experience will change slig…
+sidebar_position: 240
+helpdocs_topic_id: tbl31wsyfm
+helpdocs_category_id: a8jhf8hizv
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness is changing how you add steps in Workflows to offer new features for managing commonly-used steps.
+
+The settings inside each step will remain the same, but the user experience will change slightly.
+
+The new UI for Workflow steps will be implemented soon. This document provides an overview of the changes to help with the transition.
+
+Let's take a look at some of the changes.
+ +### Adding Commands + +The following table shows the difference between the current Add Command dialog and the new Add Step UI: + +| | | +| --- | --- | +| **Adding Commands Currently** | **Add Steps in the New UI** | +| ![](./static/_add-cmd-cur-left.png) | ![](./static/_add-cmd-new-right.png) | + +The new Add Step UI has the following features: + +![](./static/workflow-steps-ui-changes-230.png) + +Here is an animated GIF showing how steps are added in the new UI: + +![](./static/workflow-steps-ui-changes-231.gif) + +### Do I Have to Do Anything? + +No. Your current Workflows and their step settings will not change. The only change is in the user experience. + diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/_category_.json b/docs/first-gen/continuous-delivery/pcf-deployments/_category_.json new file mode 100644 index 00000000000..b2b927be707 --- /dev/null +++ b/docs/first-gen/continuous-delivery/pcf-deployments/_category_.json @@ -0,0 +1 @@ +{"label": "Tanzu Application Service (formerly Pivotal)", "position": 90, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Tanzu Application Service (formerly Pivotal)"}, "customProps": { "helpdocs_category_id": "emle05cclq"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/add-container-images-for-pcf-deployments.md b/docs/first-gen/continuous-delivery/pcf-deployments/add-container-images-for-pcf-deployments.md new file mode 100644 index 00000000000..4380e88c0f7 --- /dev/null +++ b/docs/first-gen/continuous-delivery/pcf-deployments/add-container-images-for-pcf-deployments.md @@ -0,0 +1,135 @@ +--- +title: Add Container Images for Tanzu Deployments +description: Once you set up an Artifact Server, Harness can pull artifacts and add them to the Harness Service you will deploy to PCF. 
+sidebar_position: 30
+helpdocs_topic_id: jxsna1a0mi
+helpdocs_category_id: emle05cclq
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness integrates with many different types of repositories and artifact providers. We call these Artifact Servers, and they help you pull your artifacts into your Harness Applications.
+
+Once you set up an Artifact Server, Harness can pull artifacts and add them to the Harness Service you will deploy to Tanzu Application Service (TAS, formerly PCF).
+
+### Before You Begin
+
+* See [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts).
+* See [Connect to Your Target Tanzu Account](connect-to-your-target-pcf-account.md).
+* [Tanzu Application Service (TAS) Quickstart](https://docs.harness.io/article/hy819vmsux-pivotal-cloud-foundry-quickstart)
+
+### Step 1: Add an Artifact Server
+
+For steps on setting up an Artifact Server, see [Add Artifact Servers](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server).
+
+1. In Harness, click **Setup**, and then click **Connectors**.
+2. Click **Artifact Servers**, and then click **Add Artifact Server**. Enter the following settings.
+
+### Step 2: Type
+
+Select your Artifact Server type from the drop-down list.
+
+For this example, select **Artifactory**.
+
+### Step 3: Display Name
+
+Enter a name to identify the Artifact Server.
+
+For example, **Artifactory Public**.
+
+### Step 4: Artifactory URL
+
+Enter the URL for the artifact server. For example, **https://harness.jfrog.io/harness**.
+
+Enter the Username/Password if the repo is not anonymous.
+
+### Step 5: Test and Submit
+
+Click **Test**, and then click **Submit**.
+
+If the test fails, it means the [Delegate](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) can't connect to the Artifact Server URL.
+
+Make sure that the host running the Delegate can make outbound connections to the Artifact Server URL.
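If you manage your Harness setup through configuration-as-code, the Artifact Server created above can also be reviewed as YAML. The sketch below is illustrative only: the field names and the secret reference are assumptions for this example, not the exact FirstGen schema, so treat the YAML your own account generates as the source of truth.

```yaml
# Hypothetical sketch of the Artifactory Artifact Server configured above.
# Field names and the secret reference are assumptions, for illustration only.
type: ARTIFACTORY
name: Artifactory Public                 # the Display Name from Step 3
url: https://harness.jfrog.io/harness    # the Artifactory URL from Step 4
username: deploy-user                    # omit for anonymous repos
password: secret:artifactory_password    # reference to an encrypted Harness secret
```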
+
+### Step 6: Create the Harness Service
+
+In your Harness Application, in Services, create a new Service.
+
+Enter a name for the Service. For details on how Harness manages Tanzu app names, see [Tanzu App Naming](tanzu-app-naming-with-harness.md).
+
+For the Service **Deployment Type**, select **Tanzu Application Services**.
+
+![](./static/add-container-images-for-pcf-deployments-00.png)
+
+Click **Submit**.
+
+The new Service is created. Now you can add your container images as an artifact source.
+
+### Review: Enable CF CLI 7
+
+Currently, this feature is behind the Feature Flag `CF_CLI7`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+Enable this option if you want to use CF CLI 7. By default, Harness uses CF CLI 6.
+
+Certain CLI commands have changed between the CLI versions. See [Upgrading to CF CLI 7](https://docs.cloudfoundry.org/cf-cli/v7.html#table) from Cloud Foundry.
+
+If you enable **Enable CF CLI 7**, the Harness Delegate will use that CLI version to execute the correct set of commands.
+
+### Step 7: Add the Artifact Source to the Harness Service
+
+The artifact source for your Harness Service is taken from one of the Artifact Servers that are compatible with TAS. For example, an AWS S3 artifact source.
+
+Harness supports the following TAS artifact servers/types.
+
+Metadata-only Sources:
+
+* Jenkins
+* AWS S3
+* Artifactory (includes Docker)
+* Nexus
+* Bamboo
+
+File-based Sources:
+
+* Docker Registry
+* Artifactory (Tgz files)
+* Nexus (Tgz files)
+* Google Container Service (GCS)
+* AWS Elastic Container Registry (ECR)
+* SMB
+* SFTP
+* Custom Repository
+
+Harness supports any single file (non-folder) deployed using `cf push`. TAR, WAR, JAR, ZIP, and Docker are supported.
+
+Is your artifact in an unsupported format? See [Preprocess Tanzu Artifacts to Match Supported Types](preprocess-artifacts-to-match-supported-types.md).
+
+To add an artifact to your Harness TAS Service, do the following:
+
+1.
In your Service, click **Add Artifact Source**, and select the artifact source.
+2. Configure the settings for the Artifact Source.
+
+   Harness uses artifact metadata only. During deployment runtime, Harness passes the metadata to the target host(s) where it is used to obtain the artifact.
+
+   Ensure that the target host has network connectivity to the Artifact Server. For more information, see [Service Types and Artifact Sources](https://docs.harness.io/article/qluiky79j8-service-types-and-artifact-sources).
+
+3. Click **Submit**. The artifact is added to the Service.
+
+Next, we will describe our application and TAS routes using the Service **Manifests** section.
+
+### Review: Docker Support in Artifact Sources
+
+The following Harness Artifact Sources support Docker:
+
+* Artifactory
+* Google Container Registry (GCR)
+* Amazon Elastic Container Registry (Amazon ECR)
+* Docker Registry
+
+For Artifactory, ensure you select the **Use Docker Format** option:
+
+![](./static/add-container-images-for-pcf-deployments-01.png)
+
+TAS treats Artifactory as a private registry. Harness supports both anonymous access (no authentication) and Basic authentication; you can use either in your Artifactory repos.
+
+For more information on how TAS supports Docker, see [Push a Docker Image from a Registry](https://docs.cloudfoundry.org/devguide/deploy-apps/push-docker.html#registry) from TAS.
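To make the Docker flow concrete, here is a minimal TAS manifest sketch for pulling a container image from a private registry. The image path and variable names are illustrative; in a Harness deployment the image location comes from the Artifact Source metadata, and `cf push` reads the registry password from the `CF_DOCKER_PASSWORD` environment variable.

```yaml
applications:
- name: ((PCF_APP_NAME))
  instances: ((INSTANCES))
  docker:
    # Illustrative image path; Harness resolves the actual location
    # from the Artifact Source metadata at deployment time.
    image: example.jfrog.io/docker-local/hello-app:1.0.2
    # Needed only for registries requiring Basic authentication;
    # cf push reads the password from CF_DOCKER_PASSWORD.
    username: ((DOCKER_USERNAME))
```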
+
+### Next Steps
+
+* [Adding and Editing Inline Tanzu Manifest Files](adding-and-editing-inline-pcf-manifest-files.md)
+* [Upload Local and Remote Tanzu Resource Files](upload-local-and-remote-pcf-resource-files.md)
+
diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/adding-and-editing-inline-pcf-manifest-files.md b/docs/first-gen/continuous-delivery/pcf-deployments/adding-and-editing-inline-pcf-manifest-files.md
new file mode 100644
index 00000000000..27c9810558f
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/pcf-deployments/adding-and-editing-inline-pcf-manifest-files.md
@@ -0,0 +1,60 @@
+---
+title: Adding and Editing Inline Tanzu Manifest Files
+description: When you create the PCF Service, the Manifests section is created and the default manifest.yml and vars.yml files are added.
+sidebar_position: 40
+helpdocs_topic_id: 3ekpbmpr4e
+helpdocs_category_id: emle05cclq
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Manifests provide consistency and reproducibility, and help automate app deployment. For more information about manifest files, see [Deploying with Application Manifest](https://docs.pivotal.io/pivotalcf/2-4/devguide/deploy-apps/manifest.html) from Tanzu.
+
+When you create the Tanzu Application Service (TAS, formerly PCF) Service in Harness, the **Manifests** section is created and the default manifest.yml and vars.yml files are added.
+
+The default manifest.yml and vars.yml files use PCF naming because TAS was formerly PCF.
+
+
+### Before You Begin
+
+* See [Connect to Your Target Tanzu Account](connect-to-your-target-pcf-account.md).
+* See [Add Container Images for Tanzu Deployments](add-container-images-for-pcf-deployments.md).
+* For details on how Harness manages Tanzu app names, see [Tanzu App Naming](tanzu-app-naming-with-harness.md). 
+
+### Visual Summary
+
+Here is an example showing how the variables in **manifest.yml** are given values in **vars.yml**:
+
+![](./static/adding-and-editing-inline-pcf-manifest-files-71.png)
+
+You can also use variables for partial values. For example, you can specify `host` in your vars.yml file and `- route: ((host)).env.com` in your manifest.yml file.
+
+TAS Manifest deployments are a common TAS strategy. You can learn more about it in [Deploying with App Manifests](https://docs.pivotal.io/platform/application-service/2-7/devguide/deploy-apps/manifest.html) from TAS.
+
+Harness supports all of the typical features of TAS manifests, as described in [Deploying with App Manifests](https://docs.pivotal.io/platform/application-service/2-7/devguide/deploy-apps/manifest.html) from TAS, but to deploy multiple apps, you will need to use multiple Harness Services.
+
+### Step 1: Edit vars.yml file
+
+This file contains the following default variables and values:
+
+* `PCF_APP_NAME: ${app.name}__${service.name}__${env.name}`
+* `PCF_APP_MEMORY: 750M`
+* `INSTANCES: 1`
+
+These are referenced in the manifest.yml file.
+
+#### Change the TAS App Name
+
+You can change the TAS app name here if you do not want Harness to generate one using a concatenation of the Harness Application, Service, and Environment names (`${app.name}__${service.name}__${env.name}`).
+
+You can add more variables in vars.yml and override them as described in [Using Harness Config Variables in Tanzu Manifests](using-harness-config-variables-in-pcf-manifests.md).
+
+For details on how Harness manages Tanzu app names, see [Tanzu App Naming](tanzu-app-naming-with-harness.md).
+
+### Step 2: Edit manifest.yml file
+
+Define the default name, memory limit, and number of instances.
+
+You can override variable values such as `((PCF_APP_NAME))`, `((PCF_APP_MEMORY))`, and `((INSTANCES))` in the **vars.yml** file. 
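Putting the two steps together, here is a minimal sketch of the file pair. The app name and the `host` partial-value variable are hypothetical; the other variables are the defaults listed above:

```
# vars.yml -- values substituted into the manifest at push time
PCF_APP_NAME: my-sample-app      # hypothetical; overrides the generated name
PCF_APP_MEMORY: 750M
INSTANCES: 1
host: my-sample-app              # hypothetical partial-value variable

# manifest.yml -- references the variables with ((...)) syntax
applications:
- name: ((PCF_APP_NAME))
  memory: ((PCF_APP_MEMORY))
  instances: ((INSTANCES))
  routes:
  - route: ((host)).env.com
```

At the CLI level, this substitution is roughly equivalent to running `cf push --vars-file vars.yml`.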
+ +### Next Steps + +* [Upload Local and Remote Tanzu Resource Files](upload-local-and-remote-pcf-resource-files.md) +* [Using Harness Config Variables in Tanzu Manifests](using-harness-config-variables-in-pcf-manifests.md) + diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/connect-to-your-target-pcf-account.md b/docs/first-gen/continuous-delivery/pcf-deployments/connect-to-your-target-pcf-account.md new file mode 100644 index 00000000000..4ef800846d9 --- /dev/null +++ b/docs/first-gen/continuous-delivery/pcf-deployments/connect-to-your-target-pcf-account.md @@ -0,0 +1,80 @@ +--- +title: Connect to Your Target Tanzu Account +description: Set up the Harness Delegate in your PCF environment and add the Cloud Provider used to connect to your PCF cloud for deployment. +sidebar_position: 20 +helpdocs_topic_id: nh4afrhvkl +helpdocs_category_id: emle05cclq +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic sets up the Harness Delegate in your Tanzu Application Service (TAS, formerly PCF) environment and adds the Cloud Provider used to connect to your Tanzu cloud for deployment. + + +### Before You Begin + +* See [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts). + +### Step 1: Set Up the Harness Delegate + +The Harness Delegate is a service you run in your local network or VPC to connect your artifact servers, TAS infrastructure, and any other providers with the Harness Manager. + +If you are running your TAS Cloud in AWS, you can use a Shell Script Delegate run on an EC2 instance in the same VPC and subnet as your TAS Cloud, or an ECS Delegate run in an ECS cluster in the same VPC. + +For information on setting up Harness Delegates, see [Harness Delegate Overview](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation). 
+
+If you want to install the CF CLI on the Delegate, use a Harness Delegate Profile and the script shown in [Cloud Foundry CLI](https://docs.harness.io/article/nxhlbmbgkj-common-delegate-profile-scripts#cloud_foundry_cli).
+
+### Step 2: Add the Cloud Foundry CLI
+
+The host running the Harness Delegate must run the CF CLI in order to execute the required commands.
+
+See [Install Cloud Foundry CLI Versions on the Harness Delegate](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md).
+
+#### Using CF CLI 7
+
+By default, Harness uses CF CLI 6. Certain CLI commands have been changed between the CLI versions. See [Upgrading to CF CLI 7](https://docs.cloudfoundry.org/cf-cli/v7.html#table) from Cloud Foundry.
+
+If you enable the **Enable CF CLI 7** option on the Harness Service you are deploying, the Harness Delegate will use that CLI version to execute the correct set of commands.
+
+See [Install Cloud Foundry CLI Versions on the Harness Delegate](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md).
+
+### Step 3: Add the Harness TAS Cloud Provider
+
+A Harness TAS Cloud Provider connects Harness to your TAS account and allows the Harness Delegate to make API calls.
+
+The **TAS Cloud Provider** has the following settings.
+
+#### Display Name
+
+Enter a name for the Cloud Provider. You will use this name when selecting this Cloud Provider in Harness Infrastructure Definitions.
+
+#### Endpoint URL
+
+Enter the API endpoint URL, without the URL scheme. For example, **api.run.pivotal.io**. Omit **http://**. For more information, see [Identifying the API Endpoint for your PAS Instance](https://docs.pivotal.io/pivotalcf/2-3/opsguide/api-endpoint.html) from Pivotal.
+
+#### Username / Password
+
+Username and password for the TAS account to use for this connection. 
+ +#### Usage Scope + +If you want to restrict the use of a provider to specific applications and environments, do the following: + +In **Usage Scope**, click the drop-down under **Applications**, and click the name of the application. + +In **Environments**, click the name of the environment. + +### Review: TAS Permissions + +Make sure the TAS user account is assigned Admin, Org Manager, or Space Manager role. The user account must be able to update spaces, orgs, and applications. + +For more information, see [Orgs, Spaces, Roles, and Permissions](https://docs.pivotal.io/pivotalcf/2-3/concepts/roles.html) from Tanzu. + +For steps on setting up all Cloud Providers, see [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers). + +### Next Steps + +* [Add Container Images for Tanzu Deployments](add-container-images-for-pcf-deployments.md). +* [Adding and Editing Inline Tanzu Manifest Files](adding-and-editing-inline-pcf-manifest-files.md). + diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/create-a-basic-pcf-deployment.md b/docs/first-gen/continuous-delivery/pcf-deployments/create-a-basic-pcf-deployment.md new file mode 100644 index 00000000000..e67149a5196 --- /dev/null +++ b/docs/first-gen/continuous-delivery/pcf-deployments/create-a-basic-pcf-deployment.md @@ -0,0 +1,208 @@ +--- +title: Create a Basic Tanzu Deployment +description: A PCF Workflow performing a Basic deployment simply takes your Harness PCF Service and deploys it to your PCF Infrastructure Definition. +sidebar_position: 110 +helpdocs_topic_id: c92izkztka +helpdocs_category_id: emle05cclq +helpdocs_is_private: false +helpdocs_is_published: true +--- + +A Tanzu Application Service (formerly PCF) Workflow performing a Basic deployment simply takes your Harness TAS Service and deploys it to your PCF Infrastructure Definition. 
+ +Once the TAS app is set up in the Workflow using the **App Setup** command, you can resize the number of instances specified in the Service manifest.yml or App Setup command using the **App Resize** command. + +Here is an example of a successful TAS Basic deployment: + +![](./static/create-a-basic-pcf-deployment-24.png) + + +### Before You Begin + +* See [Connect to Your Target Tanzu Account](connect-to-your-target-pcf-account.md). +* See [Define Your Tanzu Target Infrastructure](define-your-pcf-target-infrastructure.md). + +### Step 1: Set Up a TAS Basic Deployment + +To set up a TAS Basic deployment, do the following: + +1. In your Harness Application, connect to your TAS account, as described in [Connect to Your Target Tanzu Account](connect-to-your-target-pcf-account.md). +2. Create your Harness TAS Service and add the artifact to deploy, as described in [Add Container Images for Tanzu Deployments](add-container-images-for-pcf-deployments.md). +3. Create your TAS Infrastructure Definition, as described in [Define Your Tanzu Target Infrastructure](define-your-pcf-target-infrastructure.md). +4. In your Harness Application, click **Workflows**. +5. Click **Add Workflow**. The **Workflow** dialog appears. +6. Name your Workflow, and then, in **Workflow Type**, select **Basic Deployment**. +7. In **Environment**, select the Environment containing your target Infrastructure Definition. +8. In **Service**, select the TAS Service you want to deploy. +9. In **Infrastructure Definition**, select your target Infrastructure Definition. + +When you are done, the dialog will look something like this: + +![](./static/create-a-basic-pcf-deployment-25.png) + +Click **Submit**. The TAS Basic Workflow is created. + +### Step 2: App Setup + +The App Setup command uses the manifest.yml in your Harness TAS Service to set up your app. 
+ +![](./static/create-a-basic-pcf-deployment-26.png) + +The **Match running instances** setting can be used after your first deployment to override the `instances` setting in the manifest.yml. + +To add routes in addition to the routes defined in the Service manifest, select routes in **Additional Routes**. + +For information on using the **Use App Autoscaler Plugin** settings, see [Use the App Autoscaler Service](use-the-app-autoscaler-service.md). + +In **Timeout**, set how long you want the Harness Delegate to wait for the TAS cloud to respond to API requests before timing out. + +In **Delegate Selectors**, select the Selector for the Delegate(s) you want to use. You add Selectors to Delegates to make sure that they're used to execute the command. For more information, see [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors). + +Harness will use Delegates matching the Selectors you add. + +If you use one Selector, Harness will use any Delegate that has that Selector. + +If you select two Selectors, a Delegate must have both Selectors to be selected. That Delegate might also have other Selectors, but it must have the two you selected. + +You can use expressions for Harness built-in variables or Account Default variables in **Delegate Selectors**. When the variable expression is resolved at deployment runtime, it must match an existing Delegate Selector. + +For example, if you have a Delegate Selector **prod** and the Workflow is using an Environment also named **prod**, the Delegate Selector can be `${env.name}`. This is very useful when you match Delegate Selectors to Application component names such as Environments, Services, etc. It's also a way to template the Delegate Selector setting. + +#### Version Management + +Currently, this feature is behind the Feature Flag `CF_APP_NON_VERSIONING_INACTIVE_ROLLBACK`. 
Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+For details on how Harness manages Tanzu app names and how this feature impacts naming, see [Tanzu App Naming](tanzu-app-naming-with-harness.md).
+
+### Step 3: App Resize
+
+When you first create your TAS Workflow, the App Resize command is displayed as incomplete. Harness simply needs you to confirm or change the default number of desired instances, **100 Percent**.
+
+![](./static/create-a-basic-pcf-deployment-27.png)
+
+You can use a percentage of the number specified in your manifest.yml or, if you used the App Setup **Match desired count with current running instances** setting, of the current number of running instances. You can also use a count to explicitly set the number of desired instances.
+
+Click **Advanced** to see **Desired Instances - Old Version**. Here you can set the number of instances for the previous version of the app. By default, the app will downsize to the same number as the number of new app instances.
+
+You can only have one App Resize step in a Basic TAS Workflow.
+
+#### Downsize or Retain Instances
+
+Currently, this feature is behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Feature Flags can only be removed for Harness Professional and Essentials editions. Once the feature is released to a general audience, it's available for Trial and Community Editions.
+
+See [New features added to Harness](https://changelog.harness.io/?categories=fix,improvement,new) and [Features behind Feature Flags](https://changelog.harness.io/?categories=early-access) (Early Access) for Feature Flag information.
+
+You can choose one of the following resize options:
+
+* **Retain instances:** the number entered in **Advanced Settings** for **Desired Instances - Old App** determines how many instances of the old app will remain running. 
+* **Downsize instances:** the number entered in **Advanced Settings** for **Desired Instances - Old App** determines how many instances of the old app are downsized.
+
+### Step 4: App Rollback
+
+In **Rollback Steps**, you can see the **App Rollback** command.
+
+![](./static/create-a-basic-pcf-deployment-28.png)
+
+There is nothing to set in this command. It is simply the command to roll back to the old version of the app in case of a deployment failure.
+
+### Step 5: Deploy a TAS Basic Workflow
+
+To deploy your TAS Basic Workflow, click **Deploy**.
+
+![](./static/create-a-basic-pcf-deployment-29.png)
+
+Select the artifact for your new app and click **Submit**. The Workflow is deployed.
+
+The **App Setup** command output shows your app was created successfully:
+
+
+```
+---------- Starting PCF App Setup Command
+
+# Fetching all existing applications
+# No Existing applications found
+
+# Creating new Application
+# Manifest File Content:
+---
+applications:
+- name: ExampleForDoc__PCF__basic__Staging__0
+  memory: ((PCF_APP_MEMORY))
+  instances: 0
+  path: /home/ubuntu/harness-delegate/./repository/pcfartifacts/BY3yUoB7Q3ibJicbwmgn8Q/1573586245429SampleWebApp.war
+  random-route: true
+
+# CF_HOME value: /home/ubuntu/harness-delegate/./repository/pcfartifacts/BY3yUoB7Q3ibJicbwmgn8Q
+# Performing "login"
+API endpoint: api.run.pivotal.io
+Authenticating...
+OK
+
+Targeted org Harness
+Targeted space AD00001863
+
+API endpoint: https://api.run.pivotal.io (API version: 2.142.0)
+User: john.doe@harness.io
+Org: Harness
+Space: AD00001863
+# Login Successful
+# Performing "cf push"
+Pushing from manifest to org Harness / space AD00001863 as john.doe@harness.io...
+Using manifest file /home/ubuntu/harness-delegate/./repository/pcfartifacts/BY3yUoB7Q3ibJicbwmgn8Q/ExampleForDoc__PCF__basic__Staging__0_1.yml
+Getting app info...
+Creating app with these attributes... 
++ name: ExampleForDoc__PCF__basic__Staging__0
+  path: /home/ubuntu/harness-delegate/repository/pcfartifacts/BY3yUoB7Q3ibJicbwmgn8Q/1573586245429SampleWebApp.war
++ instances: 0
++ memory: 350M
+  routes:
++ examplefordocpcfbasicstaging0-zany-waterbuck.cfapps.io
+
+Creating app ExampleForDoc__PCF__basic__Staging__0...
+Mapping routes...
+Comparing local files to remote cache...
+Packaging files to upload...
+Uploading files...
+ 0 B / 4.70 KiB 0.00% 4.70 KiB / 4.70 KiB 100.00% 4.70 KiB / 4.70 KiB 100.00% 4.70 KiB / 4.70 KiB 100.00% 4.70 KiB / 4.70 KiB 100.00% 4.70 KiB / 4.70 KiB 100.00% 4.70 KiB / 4.70 KiB 100.00% 1s
+Waiting for API to complete processing files...
+...
+There are no running instances of this process.
+
+# Application created successfully
+# App Details:
+NAME: ExampleForDoc__PCF__basic__Staging__0
+INSTANCE-COUNT: 0
+ROUTES: [examplefordocpcfbasicstaging0-zany-waterbuck.cfapps.io]
+
+ ---------- PCF Setup process completed successfully
+# Deleting any temporary files created
+```
+Next, the **App Resize** command shows the app instances upsized to the new instance count, in our example, 1.
+
+
+```
+---------- Starting PCF Resize Command
+# Downsizing previous application version/s
+# No Application is available for downsize
+# Upsizing new application:
+APPLICATION-NAME: ExampleForDoc__PCF__basic__Staging__0
+CURRENT-INSTANCE-COUNT: 0
+DESIRED-INSTANCE-COUNT: 1
+# Application upsized successfully
+
+# Application state details after upsize:
+NAME: ExampleForDoc__PCF__basic__Staging__0
+INSTANCE-COUNT: 1
+ROUTES: [examplefordocpcfbasicstaging0-zany-waterbuck.cfapps.io]
+
+Instance Details:
+Index: 0
+State: STARTING
+Disk Usage: 0
+CPU: 0.0
+Memory Usage: 0
+--------- PCF Resize completed successfully
+```
+Your TAS Basic Workflow is complete. 
+
+### Next Steps
+
+* [Create a Canary Tanzu Deployment](create-a-canary-pcf-deployment.md)
+* [Create a Blue/Green Tanzu Deployment](create-a-blue-green-pcf-deployment.md)
+
diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/create-a-blue-green-pcf-deployment.md b/docs/first-gen/continuous-delivery/pcf-deployments/create-a-blue-green-pcf-deployment.md
new file mode 100644
index 00000000000..1898a40b222
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/pcf-deployments/create-a-blue-green-pcf-deployment.md
@@ -0,0 +1,368 @@
+---
+title: Create a Blue/Green Tanzu Deployment
+description: Harness PCF Blue/Green deployments use the routes in the PCF manifest.yml and a temporary route you specify in the Harness Workflow.
+sidebar_position: 130
+helpdocs_topic_id: 52muxcsr1v
+helpdocs_category_id: emle05cclq
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness Tanzu Application Service (formerly PCF) Blue/Green deployments use the route(s) in the TAS manifest.yml and a temporary route you specify in the Harness Workflow.
+
+The Workflow first deploys the app to the temporary route using the **App Setup** command. Next, in the **App Resize** command, Harness maintains the number of instances at 100% of the `instances` specified in the manifest.yml.
+
+For Blue/Green deployments, the **App Resize** step is always 100% because it does not change the number of instances as it did in the Canary deployment. In Blue/Green, you are simply deploying the new app to the number of instances set in the **App Setup** step and keeping the old app at the same number of instances (100% count).
+
+Once that deployment is successful, the Workflow **Swap Routes** command switches the network routing, directing production traffic (Green) to the new app and stage traffic (Blue) to the old app.
+
+
+### Before You Begin
+
+* See [Connect to Your Target Tanzu Account](connect-to-your-target-pcf-account.md). 
+* See [Define Your Tanzu Target Infrastructure](define-your-pcf-target-infrastructure.md).
+
+### Visual Summary
+
+Here's the output after a successful Blue/Green TAS Deployment.
+
+![](./static/create-a-blue-green-pcf-deployment-52.png)
+
+### Step 1: Specify the TAS Service Routes
+
+In the manifest.yml in your Harness TAS Service, you can specify the route(s) to use for the Blue/Green deployment. For example:
+
+
+```
+ ...
+ routes:
+ - route: example.com
+```
+Each route for the app is created if it does not already exist.
+
+As you will see when you set up the Blue/Green Workflow, you specify a temporary route in the App Setup command's **Temporary Routes** setting:
+
+![](./static/create-a-blue-green-pcf-deployment-53.png)
+
+### Step 2: Set Up a TAS Blue/Green Workflow
+
+To explain TAS Blue/Green Workflow commands and settings, we will create a Blue/Green Workflow that uses a temporary route and, if it is successful, deploys 100% of instances using the primary route.
+
+To implement this Blue/Green Workflow, do the following:
+
+1. In your Harness Application, click **Workflows**.
+2. In **Workflows**, click **Add Workflow**. The **Workflow** dialog appears.
+3. In **Name**, enter a name for the Workflow.
+4. In **Workflow Type**, select **Blue/Green Deployment**.
+5. In **Environment**, select the Environment you created for TAS.
+6. In **Service**, select a Harness TAS Service you have created. The Service manifest.yml must contain a `route`. The route can be an existing route or not. If the route doesn't exist, Harness will create it for you. This is the route that will be used at the final stage of the Blue/Green deployment. The temporary route for the prior stage will be selected in the **App Setup** step of the Workflow.
+7. In **Infrastructure Definition**, select the Infrastructure Definition that describes the target TAS space.
+8. Click **Submit**. 
+
+The new Blue/Green Workflow is displayed, along with the preconfigured steps:
+
+![](./static/create-a-blue-green-pcf-deployment-54.png)
+
+### Step 3: App Setup
+
+Click **App Setup** to see the default, preconfigured command:
+
+![](./static/create-a-blue-green-pcf-deployment-55.png)
+
+None of the settings in this dialog are mandatory, but for Blue/Green Workflows, the **Match running instances**, **Additional Routes**, and **Temporary Routes** settings are important.
+
+If you select **Match running instances**, Harness will ignore the number of instances set in the Service's manifest.yml `instances` property and use the number of currently running instances as the desired instance count during deployment.
+
+The first time you deploy this Workflow, there is no reason to select **Match running instances** as there are no currently running instances.
+
+**Additional Routes** is automatically populated with the routes Harness can find using the Infrastructure Definition you selected for this Workflow.
+
+**Additional Routes** has two uses in Blue/Green deployments:
+
+* Select the routes that you want mapped to the app in addition to the routes already mapped in the app in the manifest in your Harness Service.
+* You can also omit routes in the manifest in your Harness Service, and simply select them in **Additional Routes**. The routes selected in **Additional Routes** will be used as the final (green) routes for the app.
+
+**Temporary Routes** is automatically populated with the routes Harness can find using the Infrastructure Definition you selected for this Workflow.
+
+In **Temporary Routes**, you can select one or more temporary routes for Harness to use when it creates the TAS app. Later, in the **Swap Routes** step, Harness will replace these routes with the routes in the manifest.yml `routes` property in your Service.
+
+If you do not select a route in **Temporary Routes**, Harness will create one automatically. 
+
+For information on using the **Use App Autoscaler Plugin** settings, see [Use the App Autoscaler Service](use-the-app-autoscaler-service.md).
+
+In **Timeout**, set how long you want the Harness Delegate to wait for the TAS cloud to respond to API requests before timing out.
+
+In **Delegate Selectors**, select the Selector for the Delegate(s) you want to use. You add Selectors to Delegates to make sure that they're used to execute the command. For more information, see [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors).
+
+Harness will use Delegates matching the Selectors you add.
+
+If you use one Selector, Harness will use any Delegate that has that Selector.
+
+If you select two Selectors, a Delegate must have both Selectors to be selected. That Delegate might also have other Selectors, but it must have the two you selected.
+
+You can use expressions for Harness built-in variables or Account Default variables in **Delegate Selectors**. When the variable expression is resolved at deployment runtime, it must match an existing Delegate Selector.
+
+For example, if you have a Delegate Selector **prod** and the Workflow is using an Environment also named **prod**, the Delegate Selector can be `${env.name}`. This is very useful when you match Delegate Selectors to Application component names such as Environments, Services, etc. It's also a way to template the Delegate Selector setting.
+
+#### Version Management
+
+Currently, this feature is behind the Feature Flag `CF_APP_NON_VERSIONING_INACTIVE_ROLLBACK`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+For details on how Harness manages Tanzu app names and how this feature impacts naming, see [Tanzu App Naming](tanzu-app-naming-with-harness.md). 
+
+**When are the apps renamed?** The new app version is named during the **App Setup** step and according to the option you selected in **Version Management**, but the previous app version is not renamed until the **Swap Route** step in order to avoid errors with any monitoring tools you have pointed at the app version receiving production traffic.
+
+### Step 4: App Resize
+
+Click **App Resize** to see the default settings.
+
+![](./static/create-a-blue-green-pcf-deployment-56.png)
+
+For Blue/Green deployments, the **App Resize** step is always 100% because it does not change the number of instances as it did in the Canary deployment. In Blue/Green, you are simply deploying the new app to the number of instances set in the **App Setup** step and keeping the old app at the same number of instances.
+
+#### App Resize V2 or Default App Resize
+
+Currently, this feature is behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Feature Flags can only be removed for Harness Professional and Essentials editions. Once the feature is released to a general audience, it's available for Trial and Community Editions.
+
+See [New features added to Harness](https://changelog.harness.io/?categories=fix,improvement,new) and [Features behind Feature Flags](https://changelog.harness.io/?categories=early-access) (Early Access) for Feature Flag information.
+
+You can choose one of the following resize options:
+
+* **App Resize V2:** the number entered in **Advanced Settings** for **Desired Instances - Old App** determines how many instances of the old app will remain running.
+* **Default App Resize:** the number entered in **Advanced Settings** for **Desired Instances - Old App** determines how many instances of the old app are downsized. 
+
+### Step 5: Verify Staging
+
+There are no commands in **Verify Staging** because you have not set up verification steps in this tutorial, and you would not add them in the initial deployment because there are no other deployments for the steps to use in comparison.
+
+Later, when you are developing Blue/Green Workflows, add verification steps to verify the deployment of your app using the temporary route(s). This way, Harness will only proceed to the **Swap Routes** step if verification does not detect failures. For more information about verifying deployments, see [Continuous Verification](https://docs.harness.io/category/gurgsl2gqt-continuous-verification).
+
+### Step 6: Swap Routes
+
+The final step in the TAS Blue/Green Workflow is the **Swap Routes** command.
+
+![](./static/create-a-blue-green-pcf-deployment-57.png)
+
+This command will swap the route(s) used by the deployed app from the temporary route(s) to the route(s) specified in the manifest.yml in your Service.
+
+**Previous app renamed.** During the **Swap Route** step, the previous app version is renamed according to the option you selected in **Version Management** in the **App Setup** step. The previous app version is not renamed until the **Swap Route** step in order to avoid errors with any monitoring tools you have pointed at the app version receiving production traffic.
+
+For details on **Swap Routes** in a Workflow's **Rollback Steps**, see [Blue/Green Rollback](#blue_green_rollback).
+
+### Step 7: Deploy a TAS Blue/Green Workflow
+
+Now that the Blue/Green Workflow is configured, from the breadcrumb menu, click back to the main Workflow page.
+
+You can now deploy the Blue/Green Workflow.
+
+1. Click **Deploy**.
+2. In **Start New Deployment**, select an artifact to deploy, and click **Submit**.
+
+The TAS Blue/Green Workflow deploys. 
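For reference, the route swap that the Swap Routes command performs corresponds to the standard CF CLI route-mapping operations. The sketch below uses hypothetical app names, domain, and hostnames; Harness runs the equivalent of these for you, so you would not execute them by hand during a Workflow:

```
# Point the production hostname at the new (green) app and drop its temporary route...
cf map-route MyApp__3 cfapps.io --hostname docs
cf unmap-route MyApp__3 cfapps.io --hostname myapp-temp

# ...then move the temporary hostname to the old (blue) app and unmap its production route.
cf map-route MyApp__2 cfapps.io --hostname myapp-temp
cf unmap-route MyApp__2 cfapps.io --hostname docs
```

You can see this same add/unmap sequence in the Swap Routes deployment log later in this topic.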
+
+Here you can see the **App Setup** command deployed:
+
+![](./static/create-a-blue-green-pcf-deployment-58.png)
+
+When this step is deployed, the output will look something like this:
+
+
+```
+---------- Starting PCF App Setup Command
+
+# Fetching all existing applications
+# Existing applications:
+ExampleForDoc__PCF__Blue__Green__Staging__2
+
+# Processing Apps with Non-Zero Instances
+# No Change For Most Recent Application: ExampleForDoc__PCF__Blue__Green__Staging__2
+
+# No applications were eligible for deletion
+
+# Creating new Application
+# Manifest File Content:
+---
+applications:
+- name: ExampleForDoc__PCF__Blue__Green__Staging__3
+  memory: ((PCF_APP_MEMORY))
+  instances: 0
+  path: /home/ubuntu/harness-delegate/./repository/pcfartifacts/11uZOBMRQoSbDqr_WfRuaQ/1572474750133SampleWebApp.war
+  random-route: true
+
+
+# CF_HOME value: /home/ubuntu/harness-delegate/./repository/pcfartifacts/11uZOBMRQoSbDqr_WfRuaQ
+# Performing "login"
+API endpoint: api.run.pivotal.io
+Authenticating...
+OK
+
+Targeted org Harness
+Targeted space AD00001863
+...
+Getting app info...
+Creating app with these attributes...
++ name: ExampleForDoc__PCF__Blue__Green__Staging__3
+  path: /home/ubuntu/harness-delegate/repository/pcfartifacts/11uZOBMRQoSbDqr_WfRuaQ/1572474750133SampleWebApp.war
++ instances: 0
++ memory: 350M
+  routes:
++ examplefordocpcfbluegreenstaging3-insightful-cat.cfapps.io
+...
+# Application created successfully
+# App Details:
+NAME: ExampleForDoc__PCF__Blue__Green__Staging__3
+INSTANCE-COUNT: 0
+ROUTES: [examplefordocpcfbluegreenstaging3-insightful-cat.cfapps.io]
+
+ ---------- PCF Setup process completed successfully
+# Deleting any temporary files created
+```
+The route specified in the Service manifest.yml is `docs.cfapps.io`:
+
+![](./static/create-a-blue-green-pcf-deployment-59.png)
+
+Note that in the **App Setup** step, a temporary route has been used: `examplefordocpcfbluegreenstaging3-insightful-cat.cfapps.io`. 
+ +Here you can see the **App Resize** command deployed: + +![](./static/create-a-blue-green-pcf-deployment-60.png) + +When this step is deployed, the output will look something like this: + + +``` +---------- Starting PCF Resize Command + +# Upsizing new application: +APPLICATION-NAME: ExampleForDoc__PCF__Blue__Green__Staging__3 +CURRENT-INSTANCE-COUNT: 0 +DESIRED-INSTANCE-COUNT: 6 +# Application upsized successfully + +# Application state details after upsize: +NAME: ExampleForDoc__PCF__Blue__Green__Staging__3 +INSTANCE-COUNT: 6 +ROUTES: [examplefordocpcfbluegreenstaging3-insightful-cat.cfapps.io] + +Instance Details: +Index: 0 +State: STARTING +Disk Usage: 0 +CPU: 0.0 +Memory Usage: 0 + +Index: 1 +State: STARTING +Disk Usage: 0 +CPU: 0.0 +Memory Usage: 0 + +Index: 2 +State: STARTING +Disk Usage: 0 +CPU: 0.0 +Memory Usage: 0 + +Index: 3 +State: STARTING +Disk Usage: 0 +CPU: 0.0 +Memory Usage: 0 + +Index: 4 +State: STARTING +Disk Usage: 0 +CPU: 0.0 +Memory Usage: 0 + +Index: 5 +State: STARTING +Disk Usage: 0 +CPU: 0.0 +Memory Usage: 0 + +# BG Deployment. Old Application will not be downsized. +--------- PCF Resize completed successfully +``` +Note that 100% of the instances specified in the Service manifest.yml have been deployed (6). 
+
+Here you can see the **Swap Routes** command deployed:
+
+![](./static/create-a-blue-green-pcf-deployment-61.png)
+
+When this step is deployed, the output will look something like this:
+
+
+```
+--------- Starting PCF Route Update
+
+# Adding Routes
+APPLICATION: ExampleForDoc__PCF__Blue__Green__Staging__3
+ROUTE:
+[
+docs.cfapps.io
+]
+
+# Unmapping Routes
+APPLICATION: ExampleForDoc__PCF__Blue__Green__Staging__3
+ROUTES:
+[
+examplefordocpcfbluegreenstaging3-insightful-cat.cfapps.io
+]
+# Unmapping Routes was successfully completed
+
+# Adding Routes
+APPLICATION: ExampleForDoc__PCF__Blue__Green__Staging__2
+ROUTE:
+[
+examplefordocpcfbluegreenstaging3-insightful-cat.cfapps.io
+]
+
+# Unmapping Routes
+APPLICATION: ExampleForDoc__PCF__Blue__Green__Staging__2
+ROUTES:
+[
+docs.cfapps.io
+]
+# Unmapping Routes was successfully completed
+
+--------- PCF Route Update completed successfully
+```
+Note how the new app receives the `docs.cfapps.io` route from the Service manifest.yml and the old app receives the temporary route `examplefordocpcfbluegreenstaging3-insightful-cat.cfapps.io`.
+
+The Blue/Green deployment is complete.
+
+### Blue/Green Rollback
+
+The **Rollback Steps** section in the Workflow contains two default steps, Swap Routes and App Rollback.
+
+When **Swap Routes** is used in a Workflow's **Rollback Steps**, the app that was active before deployment is restored to its original state with the same instances and routes it had before deployment.
+
+The failed app is deleted.
+
+#### Upsize inactive Service Option
+
+Currently, this feature is behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Feature Flags can only be removed for Harness Professional and Essentials editions. Once the feature is released to a general audience, it's available for Trial and Community Editions. 
+
+See [New features added to Harness](https://changelog.harness.io/?categories=fix,improvement,new) and [Features behind Feature Flags](https://changelog.harness.io/?categories=early-access) (Early Access) for Feature Flag information.
+
+When the **Upsize inactive Service** option in Swap Routes is enabled, then on Blue/Green rollback Harness upsizes the previous stage service from 0 to its original count and maps it back to the stage route.
+
+Harness returns the services to their pre-deployment state.
+
+### App Versioning without Numbering
+
+Currently, this feature is behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Feature Flags can only be removed for Harness Professional and Essentials editions. Once the feature is released to a general audience, it's available for Trial and Community Editions.
+
+See [New features added to Harness](https://changelog.harness.io/?categories=fix,improvement,new) and [Features behind Feature Flags](https://changelog.harness.io/?categories=early-access) (Early Access) for Feature Flag information.
+
+When you deploy an app, it maintains its name without adding any suffix to the name to indicate its release version. A suffix is added to the previous version of the app.
+
+The first time you deploy the app, Harness creates the app with the name you entered in **App Setup**.
+
+When deploying new versions of that app, Harness uses the same name for the app and renames the previous version of the app with the suffix `_INACTIVE`.
+
+For example, if the app is named **OrderService**, the first deployment will use the name **OrderService**. When the next version of the app is deployed, the new version is named **OrderService** and the previous version is now named **OrderService\_INACTIVE**.
+
+During rollback, the new app version is deleted and the previous app is renamed without the suffix. 
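The renaming scheme above can be sketched as a small state transition — illustrative pseudologic only, not Harness code:

```python
def deploy(apps, name):
    """Version management without numbering: the live app keeps the bare
    name and the previous version is renamed with an _INACTIVE suffix."""
    new_apps = dict(apps)
    if name in new_apps:
        new_apps[name + "_INACTIVE"] = new_apps.pop(name)  # demote old version
    new_apps[name] = "new-release"
    return new_apps

def rollback(apps, name):
    """Rollback: delete the failed version and restore the previous name."""
    new_apps = dict(apps)
    new_apps.pop(name)                       # the failed release is deleted
    if name + "_INACTIVE" in new_apps:
        new_apps[name] = new_apps.pop(name + "_INACTIVE")
    return new_apps

apps = deploy({}, "OrderService")            # first deploy: just OrderService
apps = deploy(apps, "OrderService")          # second deploy: old version demoted
print(sorted(apps))                          # ['OrderService', 'OrderService_INACTIVE']
print(sorted(rollback(apps, "OrderService")))  # ['OrderService']
```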
+ +#### Blue/Green Rollback + +A TAS Blue/Green deployment keeps the new app version (receiving prod traffic) and the previous, inactive version (receiving stage traffic) running. + +During rollback of a new version, prod traffic is swapped back to the previous app version and it is renamed by removing the suffix. The failed version is given the stage traffic. + +### App Name Variables and Blue Green Deployments + +Some Tanzu variables display different information as the [Blue Green deployment](create-a-blue-green-pcf-deployment.md) progresses through its steps. + +See [App Name Variables and Blue Green Deployments](pcf-built-in-variables.md#app-name-variables-and-blue-green-deployments). + +### Next Steps + +* [Run CF CLI Commands and Scripts in a Workflow](run-cf-cli-commands-and-scripts-in-a-workflow.md) + diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/create-a-canary-pcf-deployment.md b/docs/first-gen/continuous-delivery/pcf-deployments/create-a-canary-pcf-deployment.md new file mode 100644 index 00000000000..b0ac11c162a --- /dev/null +++ b/docs/first-gen/continuous-delivery/pcf-deployments/create-a-canary-pcf-deployment.md @@ -0,0 +1,369 @@ +--- +title: Create a Canary Tanzu Deployment +description: PCF Canary deployments contain two or more phases that deploy app instances gradually. +sidebar_position: 120 +helpdocs_topic_id: 99bxiqfi1u +helpdocs_category_id: emle05cclq +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Tanzu Application Service (formerly PCF) Canary deployments contain two or more phases that deploy app instances gradually, ensuring the stability of a small percentage of instances before rolling out to your desired instance count. + + +### Before You Begin + +* See [Connect to Your Target Tanzu Account](connect-to-your-target-pcf-account.md). +* See [Define Your Tanzu Target Infrastructure](define-your-pcf-target-infrastructure.md). 
+
+### Visual Summary
+
+Here is an example of a successful TAS Canary deployment containing two phases:
+
+![](./static/create-a-canary-pcf-deployment-83.png)
+
+### Review: App Resizing in Canary Deployments
+
+To understand how app resizing works in a Canary deployment, let's look at an example of a three-phase deployment.
+
+#### First Deployment
+
+Let's look at the very first deployment. There are no running instances before deployment, and so there is nothing to downsize.
+
+1. Phase 1 is set to 25% for new instances (**Desired Instances (cumulative)** in App Resize step).
+2. Phase 2 is set to 50% for new instances.
+3. Phase 3 is set to 100% for new instances.
+
+Now, let's imagine the TAS manifest specified in the Harness Service requests **4 instances** and there is no autoscaler plugin configured.
+
+Here's what will happen each time you deploy:
+
+First deployment:
+
+1. Phase 1 deploys 1 new instance.
+2. Phase 2 deploys 2 new instances.
+3. Phase 3 deploys all 4 desired instances.
+
+#### Second Deployment
+
+Now, let's look at what happens with the second deployment:
+
+1. **There are 4 running instances now.** These were deployed by the first deployment. All downsize percentages refer to this number.
+2. Phase 1 deploys the new app version to 1 instance and downsizes the old app version to 3 instances (a 25% downsize: 25% of 4 is 1 instance).
+Current state: 1 new versioned instance and 3 old versioned instances running.
+3. Phase 2 deploys the new version to 2 instances and downsizes the old version to 2 instances. The downsize is 50% of the original old-version instance count (4), so 2 instances are downsized.
+Current state: 2 new versioned instances and 2 old versioned instances running.
+4. Phase 3 deploys the new version to 4 instances and downsizes to 0 old instances.
+Final state: 4 new instances and 0 old instances.
+
+If you do not enter a value for the number of instances for the old version, Harness uses the number of new instances as its guide. 
For example, if you deployed 4 new instances and then select **50 Percent** in **Desired Instances - Old Version**, Harness downsizes the old app version to 50% of 4 instances.
+
+#### What about Autoscaler?
+
+If you were using an App Autoscaler plugin, it would be applied at the end of the deployment. For example, if Autoscaler is set to min 8 and max 10, Harness will set the desired number of instances to the minimum value. So the total number of new instances is 8.
+
+#### Downsize or Retain Instances
+
+Currently, this feature is behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Feature Flags can only be removed for Harness Professional and Essentials editions. Once the feature is released to a general audience, it's available for Trial and Community Editions.
+
+See [New features added to Harness](https://changelog.harness.io/?categories=fix,improvement,new) and [Features behind Feature Flags](https://changelog.harness.io/?categories=early-access) (Early Access) for Feature Flag information.
+
+You can choose one of the following resize options in the App Resize step settings:
+
+* **Retain instances:** the number entered in **Advanced Settings** for **Desired Instances - Old App** determines how many instances of the old app will remain running.
+* **Downsize instances:** the number entered in **Advanced Settings** for **Desired Instances - Old App** determines how many instances of the old app are downsized.
+
+### Review: Canary Workflow Phases
+
+To explain TAS Canary Workflow steps and settings, we will create a Canary Workflow that deploys a new app to 50% of instances and, if it is successful, deploys it to 100% of the instances.
+
+The Canary Workflow will contain two phases:
+
+1. Phase 1:
+   1. **App Setup** command: 6 instances set up. This is the number of instances defined in the manifest.yml via the vars.yml variable value for instances:![](./static/create-a-canary-pcf-deployment-84.png)
+   2. 
**App Resize** command: 50% desired instances. This ensures that the app deploys on a small number of instances before moving on to Phase 2. +2. Phase 2: + 1. **App Resize** command: 100% of instances. + +### Step 1: Add the Canary Workflow + +To implement this Canary workflow, do the following: + +1. In your Harness application, click **Workflows**. The **Workflows** page appears. +2. Click **Add Workflow**. The **Workflow** dialog appears. +3. In **Name**, enter a name that describes what the Workflow does. For example, for a simple Canary deployment, **Canary TAS**. +4. In **Workflow Type**, select **Canary Deployment**. +5. In **Environment**, select the Environment containing the Infrastructure Definition for the target space where you will deploy your TAS app. +6. When the **Workflow** dialog is finished, click **SUBMIT**. The TAS Canary Workflow appears. + +First, we will set up **Phase 1**, where we will deploy 50% of the 6 instances defined in our Service's **Manifests** section. + +### Step 2: Configure Canary Phase 1 + +To configure the Canary phases, do the following: + +1. In the Workflow, in **Deployment Phases**, click **Add Phase**. The **Workflow Phase** dialog appears. +2. In **Service**, ensure you select the Harness Service that contains the manifest for the TAS app you want to deploy. +3. In **Infrastructure Definition**, select the [Infrastructure Definition](https://docs.harness.io/article/n39w05njjv-environment-configuration#step_2_add_infrastructure_definition) that defines the target space where you want to deploy your app. +4. Click **Submit**. Phase 1 appears. + +![](./static/create-a-canary-pcf-deployment-85.png) + +Now we configure this phase to deploy 50% of the TAS app instances we have set in the Service manifest.yml. + +### Step 3: Configure App Setup for Phase 1 + +1. Click **App Setup**. The **App Setup** dialog appears. 
+
+   ![](./static/create-a-canary-pcf-deployment-86.png)
+
+   You don't need to change any settings in App Setup, but let's review the default settings:
+
+   | | |
+   | --- | --- |
+   | **Setting** | **Description** |
+   | **Match running instances** | The first time you deploy this Workflow, this setting isn't used because we have no running instances. In future deployments, you might wish to select this setting. For Canary Workflows, it isn't relevant because you will be setting the desired instance count for the phase. |
+   | **Resize Strategy** | Specify the order in which you want Harness to downsize and add old and new instances. The first time you deploy this Workflow, this setting isn't used because we have no running instances. |
+   | **Active Versions To Keep** | Enter the number of previous app versions to downsize and keep. You can upsize these versions later if needed. The most recent app version will not be downsized. |
+   | **Additional Routes** | Select any routes that you want to map to your app, in addition to the routes specified in the manifest in your Harness Service. |
+   | **Delegate Selectors** | Select the Selector for the Delegate(s) you want to use. You add Selectors to Delegates to make sure that they're used to execute the command. For more information, see [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors). Harness will use Delegates matching the Selectors you add. If you use one Selector, Harness will use any Delegate that has that Selector. If you select two Selectors, a Delegate must have both Selectors to be selected. That Delegate might also have other Selectors, but it must have the two you selected. You can use expressions for Harness built-in variables or Account Default variables in **Delegate Selectors**. 
When the variable expression is resolved at deployment runtime, it must match an existing Delegate Selector. For example, if you have a Delegate Selector **prod** and the Workflow is using an Environment also named **prod**, the Delegate Selector can be `${env.name}`. This is very useful when you match Delegate Selectors to Application component names such as Environments, Services, etc. It's also a way to template the Delegate Selector setting. |
+   | **Use App Autoscaler Plugin** | Enable this setting if you have the [App Autoscaler](https://docs.pivotal.io/application-service/2-7/appsman-services/autoscaler/using-autoscaler-cli.html) service running in your target Pivotal space and bound to the app you are deploying. For more information on using the plugin with Harness, see [App Autoscaler CLI Plugin](use-the-app-autoscaler-service.md). |
+   | **Timeout** | Set how long you want the Harness Delegate to wait for the TAS cloud to respond to API requests before timing out. |
+   | **Version Management** | For details on how Harness manages Tanzu app names and how this feature impacts naming, see [Tanzu App Naming](tanzu-app-naming-with-harness.md). |
+
+2. Click **Submit** or close the dialog.
+
+Next, we will resize the number of instances to 50% of the instance count set in your Service manifest.yml.
+
+### Step 4: Configure App Resize for Phase 1
+
+1. Click **App Resize**. The **App Resize** dialog appears.
+2. In **Desired Instances**, enter **50** and choose **Percent**. The app will deploy on 3 instances successfully before the Workflow moves on to Phase 2.
+3. Click **SUBMIT**.
+
+#### Downsize or Retain Instances
+
+Currently, this feature is behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Feature Flags can only be removed for Harness Professional and Essentials editions. Once the feature is released to a general audience, it's available for Trial and Community Editions. 
+
+See [New features added to Harness](https://changelog.harness.io/?categories=fix,improvement,new) and [Features behind Feature Flags](https://changelog.harness.io/?categories=early-access) (Early Access) for Feature Flag information.
+
+You can choose one of the following resize options:
+
+* **Retain instances:** the number entered in **Advanced Settings** for **Desired Instances - Old App** determines how many instances of the old app will remain running.
+* **Downsize instances:** the number entered in **Advanced Settings** for **Desired Instances - Old App** determines how many instances of the old app are downsized.
+
+### Step 5: Configure Canary Phase 2
+
+Next, we'll add Phase 2 where we will deploy to 100% of the instances specified in the Service manifest.yml.
+
+1. From the breadcrumb menu, click back to the main Canary Workflow page.
+2. In **Deployment Phases**, under **Phase 1**, click **Add Phase**. The **Workflow Phase** dialog appears.
+3. Select the same Service and [Infrastructure Definition](https://docs.harness.io/article/n39w05njjv-environment-configuration#step_2_add_infrastructure_definition) that you selected in Phase 1, and then click **Submit**.
+
+The **Phase 2** steps appear.
+
+![](./static/create-a-canary-pcf-deployment-87.png)
+
+These steps will be executed when the Phase 1 steps are successful.
+
+### Step 6: Configure App Resize for Phase 2
+
+1. Click **App Resize**. The **App Resize** settings appear.
+   ![](./static/create-a-canary-pcf-deployment-88.png)
+   In Phase 1, the app successfully deployed to 50% of the instances set in the Service manifest.yml. In Phase 2, you can deploy it to 100% of the instances.
+2. In **Desired Instances**, enter **100** and choose **Percent**, and then click **Submit**. This will deploy the new app to 100% of the instances you set up in Phase 1. The Phase 2 steps are now complete.
+
+If your manifest does not specify the number of instances, Harness defaults to 2 instances. 
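The difference between the two options under **Downsize or Retain Instances** is what the **Desired Instances - Old App** number counts: instances kept versus instances removed. A rough sketch of those semantics (a hypothetical helper using absolute counts, not Harness code):

```python
def old_app_instances(option, old_count, number):
    """Resolve the old app's final instance count for the App Resize step.
    'retain': number = how many old-app instances remain running.
    'downsize': number = how many old-app instances are removed."""
    if option == "retain":
        return min(number, old_count)
    if option == "downsize":
        return max(old_count - number, 0)
    raise ValueError(f"unknown option: {option}")

print(old_app_instances("retain", 6, 2))    # 2 old instances keep running
print(old_app_instances("downsize", 6, 2))  # 6 - 2 = 4 old instances remain
```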
+
+In **Advanced Settings**, you can specify **Desired Instances - Old Version**. This allows you to manage how many instances of the old app version to keep running.
+
+Harness downsizes the number of instances hosting the previous version to achieve the number of instances you request. For example, if you enter 40% for the old app version, then the old app version will have 40% of its instances up at the end of the step. Harness downsizes the old app instances by 60% to get the 40% you requested.
+
+If you do not enter a value, Harness uses the number of new instances as its guide. For example, if you deployed 4 new instances and then select **50 Percent** in **Desired Instances - Old Version**, Harness downsizes the old app version to 50% of 4 instances.
+
+If you are using the App Autoscaler plugin, then autoscaling is applied after the final phase of deployment. After all phases are completed and the number of old version instances has reached the desired number, the final number of instances will be as defined by the Autoscaler configuration.
+
+### Step 7: Deploy a Canary TAS Workflow
+
+Now that the Canary Workflow is configured, from the breadcrumb menu, click back to the main Canary Workflow page. You can see both Phases are complete.
+
+![](./static/create-a-canary-pcf-deployment-89.png)
+
+You can now deploy the Canary Workflow.
+
+1. Click **Deploy**.
+2. In **Start New Deployment**, select an artifact to deploy.
+
+   ![](./static/create-a-canary-pcf-deployment-90.png)
+
+3. Click **Submit** to deploy your app. 
+ +Here you can see the **App Setup** step in Phase 1: + +![](./static/create-a-canary-pcf-deployment-91.png) + +When this step is deployed, the output will look something like this: + + +``` +---------- Starting PCF App Setup Command + +# Fetching all existing applications +# No Existing applications found + +# Creating new Application +# Manifest File Content: +--- +applications: +- name: ExampleForDoc__PCF__Latest__Staging__0 + memory: ((PCF_APP_MEMORY)) + instances: 0 + path: /home/ubuntu/harness-delegate/./repository/pcfartifacts/VsOd7PkCTgWyl0SEEZ5E0w/1572389711331todolist.war + random-route: true +... + +name: ExampleForDoc__PCF__Latest__Staging__0 +requested state: started +routes: examplefordocpcflateststaging0-anxious-wolverine.cfapps.io +last uploaded: Tue 29 Oct 22:55:40 UTC 2019 +stack: cflinuxfs3 +buildpacks: client-certificate-mapper=1.11.0_RELEASE container-security-provider=1.16.0_RELEASE java-buildpack=v4.24-offline-https://github.com/cloudfoundry/java-buildpack.git#a2dd394 java-opts java-security jvmkill-agent=1.16.0_RELEASE open-jdk-like-jre=... + +type: web +instances: 0/0 +memory usage: 350M +... +There are no running instances of this process. +# Application created successfully +# App Details: +NAME: ExampleForDoc__PCF__Latest__Staging__0 +INSTANCE-COUNT: 0 +ROUTES: [examplefordocpcflateststaging0-anxious-wolverine.cfapps.io] + + + ---------- PCF Setup process completed successfully +# Deleting any temporary files created +``` +Note that since this is the first time this Workflow is deployed, there are no running instances. 
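The `((PCF_APP_MEMORY))` entry in the manifest above is a CF variable placeholder, resolved at deploy time from a vars file. A minimal sketch of that interpolation (the cf CLI performs this itself when you pass `--vars-file`; the variable names are taken from the example above):

```python
import re

def resolve_vars(manifest, variables):
    """Replace ((NAME)) placeholders with values from a vars mapping,
    mimicking cf's --vars-file interpolation for simple scalar values."""
    return re.sub(r"\(\(([\w-]+)\)\)",
                  lambda m: str(variables[m.group(1)]),
                  manifest)

manifest = "memory: ((PCF_APP_MEMORY))\ninstances: ((INSTANCES))"
print(resolve_vars(manifest, {"PCF_APP_MEMORY": "350M", "INSTANCES": 6}))
# memory: 350M
# instances: 6
```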
+ +Next is the **App Resize** step in Phase 1: + +![](./static/create-a-canary-pcf-deployment-92.png) + +When this step is deployed, the output will look something like this: + + +``` +---------- Starting PCF Resize Command + +# Downsizing previous application version/s +# No Application is available for downsize +# Upsizing new application: +APPLICATION-NAME: ExampleForDoc__PCF__Latest__Staging__0 +CURRENT-INSTANCE-COUNT: 0 +DESIRED-INSTANCE-COUNT: 3 +# Application upsized successfully + +# Application state details after upsize: +NAME: ExampleForDoc__PCF__Latest__Staging__0 +INSTANCE-COUNT: 3 +ROUTES: [examplefordocpcflateststaging0-anxious-wolverine.cfapps.io] + +Instance Details: +Index: 0 +State: STARTING +Disk Usage: 0 +CPU: 0.0 +Memory Usage: 0 + +Index: 1 +State: STARTING +Disk Usage: 0 +CPU: 0.0 +Memory Usage: 0 + +Index: 2 +State: STARTING +Disk Usage: 0 +CPU: 0.0 +Memory Usage: 0 + +--------- PCF Resize completed successfully +``` +Note the `DESIRED-INSTANCE-COUNT: 3` and `INSTANCE-COUNT: 3` information. This is the result of setting 50% in **Desired Instances**. 
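The percentage in **Desired Instances** is resolved against the instance count from the Service manifest (6 here), which is how 50% becomes `DESIRED-INSTANCE-COUNT: 3`. A sketch of that arithmetic — the rounding rule is an assumption, since the examples in this topic all divide evenly:

```python
import math

def desired_instances(percent, manifest_count):
    """Resolve an App Resize percentage against the manifest's instance
    count. Assumes fractional results round up, with a minimum of 1."""
    return max(1, math.ceil(percent / 100 * manifest_count))

print(desired_instances(50, 6))   # Phase 1 of this Workflow: 3 instances
print(desired_instances(100, 6))  # Phase 2: all 6 instances
```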
+ +In Phase 2 we see the final step, **App Resize**: + +![](./static/create-a-canary-pcf-deployment-93.png) + +When this App Resize step is deployed, the output will look something like the following: + + +``` +---------- Starting PCF Resize Command + +# Downsizing previous application version/s +# No Application is available for downsize +# Upsizing new application: +APPLICATION-NAME: ExampleForDoc__PCF__Latest__Staging__1 +CURRENT-INSTANCE-COUNT: 3 +DESIRED-INSTANCE-COUNT: 6 +# Application upsized successfully + +# Application state details after upsize: +NAME: ExampleForDoc__PCF__Latest__Staging__1 +INSTANCE-COUNT: 6 +ROUTES: [examplefordocpcflateststaging1-fearless-eland.cfapps.io] + +Instance Details: +Index: 0 +State: STARTING +Disk Usage: 138027008 +CPU: 0.0 +Memory Usage: 7250563 + +Index: 1 +State: STARTING +Disk Usage: 0 +CPU: 0.0 +Memory Usage: 0 + +Index: 2 +State: STARTING +Disk Usage: 0 +CPU: 0.0 +Memory Usage: 0 + +Index: 3 +State: STARTING +Disk Usage: 0 +CPU: 0.0 +Memory Usage: 0 + +Index: 4 +State: STARTING +Disk Usage: 0 +CPU: 0.0 +Memory Usage: 0 + +Index: 5 +State: STARTING +Disk Usage: 0 +CPU: 0.0 +Memory Usage: 0 + +--------- PCF Resize completed successfully +``` +You can see that the app was deployed to 100% of the instances set in the manifest.yml (6). + +### App Versioning without Numbering + +Currently, this feature is behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Feature Flags can only be removed for Harness Professional and Essentials editions. Once the feature is released to a general audience, it's available for Trial and Community Editions. 
+
+See [New features added to Harness](https://changelog.harness.io/?categories=fix,improvement,new) and [Features behind Feature Flags](https://changelog.harness.io/?categories=early-access) (Early Access) for Feature Flag information.
+
+When you deploy an app, it maintains its name without adding any suffix to the name to indicate its release version. A suffix is added to the previous version of the app.
+
+The first time you deploy the app, Harness creates the app with the name you entered in **App Setup**.
+
+When deploying new versions of that app, Harness uses the same name for the app and renames the previous version of the app with the suffix `_INACTIVE`.
+
+For example, if the app is named **OrderService**, the first deployment will use the name **OrderService**. When the next version of the app is deployed, the new version is named **OrderService** and the previous version is now named **OrderService\_INACTIVE**.
+
+During rollback, the new app version is deleted and the previous app is renamed without the suffix.
+
+### Next Steps
+
+* [Create a Blue/Green Tanzu Deployment](create-a-blue-green-pcf-deployment.md)
+* [Run CF CLI Commands and Scripts in a Workflow](run-cf-cli-commands-and-scripts-in-a-workflow.md)
+
diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/define-your-pcf-target-infrastructure.md b/docs/first-gen/continuous-delivery/pcf-deployments/define-your-pcf-target-infrastructure.md
new file mode 100644
index 00000000000..6e8c0e8f5b8
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/pcf-deployments/define-your-pcf-target-infrastructure.md
@@ -0,0 +1,80 @@
+---
+title: Define Your Tanzu Target Infrastructure
+description: Create Infrastructure Definitions that describe your target deployment environments in the Environment. 
+sidebar_position: 100
+helpdocs_topic_id: r1crlrpjk4
+helpdocs_category_id: emle05cclq
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+In the Environment, you create [Infrastructure Definitions](https://docs.harness.io/article/v3l3wqovbe-infrastructure-definitions) that describe your target deployment environments. Tanzu Application Service (TAS, formerly PCF) Infrastructure Definitions specify the following:
+
+* The TAS deployment type.
+* The TAS Cloud Provider that you added, as described in [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-infrastructure-providers#pivotal_cloud_foundry_pcf).
+* The TAS organization to use.
+* The target TAS space that the app you are deploying is scoped to.
+* Any specific Harness Services that you want to scope the Infrastructure Definition to. If you choose not to scope to specific Services, the Infrastructure Definition may be used with any TAS Service.
+
+### Before You Begin
+
+* See [Connect to Your Target Tanzu Account](connect-to-your-target-pcf-account.md).
+
+### Step 1: Add Infrastructure Definition
+
+To add an Infrastructure Definition, do the following:
+
+1. In your Environment, click **Add Infrastructure Definition**. The **Infrastructure Definition** dialog appears.
+
+The **Infrastructure Definition** dialog has the following fields.
+
+### Step 2: Name
+
+Enter a name for your Infrastructure Definition. You will use this name to select this Infrastructure Definition when you create Harness Workflows.
+
+### Step 3: Cloud Provider Type
+
+Select **Pivotal Cloud Foundry**.
+
+### Step 4: Deployment Type
+
+Select **Pivotal Cloud Foundry**.
+
+### Step 5: Cloud Provider
+
+Select the Cloud Provider to use to connect to the foundation. 
This will be one of the TAS Cloud Providers you set up in [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-infrastructure-providers#pivotal_cloud_foundry_pcf). + +The roles associated with the TAS user account used in the Cloud Provider determine what orgs will appear in the **Organization** setting. + +### Step 6: Organization + +The active TAS orgs available to the TAS user account used in the Cloud Provider are listed. Select the TAS org for the development account. + +### Step 7: Space + +Select the space where the application you are deploying is scoped. + +### Step 8: Scope to Specific Services + +Select this option to scope this Infrastructure Definition to the Harness TAS Service you want to deploy. + +If you do not select this setting, you can select this Infrastructure Definition when you create a Workflow for **any** TAS Service. + +### Option: Use Variables in the Infrastructure Definition + +You can use Service variables in the TAS Infrastructure Definition **Organization** and **Space** settings: + +![](./static/define-your-pcf-target-infrastructure-23.png) + +This allows you to set the orgs and spaces in a Service, and have the Infrastructure Definition act as a template that multiple Services can use. + +The orgs specified must be available to the TAS user account used to set up the TAS Cloud Provider used in the Infrastructure Definition. 
+ +### Next Steps + +* [Override Tanzu Manifests and Config Variables and Files](override-pcf-manifests-and-config-variables-and-files.md) +* [Create a Basic Tanzu Deployment](create-a-basic-pcf-deployment.md) +* [Create a Canary Tanzu Deployment](create-a-canary-pcf-deployment.md) +* [Create a Blue/Green Tanzu Deployment](create-a-blue-green-pcf-deployment.md) + diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/install-cloud-foundry-cli-6-and-7-on-harness-delegates.md b/docs/first-gen/continuous-delivery/pcf-deployments/install-cloud-foundry-cli-6-and-7-on-harness-delegates.md new file mode 100644 index 00000000000..465ca240b75 --- /dev/null +++ b/docs/first-gen/continuous-delivery/pcf-deployments/install-cloud-foundry-cli-6-and-7-on-harness-delegates.md @@ -0,0 +1,364 @@ +--- +title: Install Cloud Foundry CLI Versions on the Harness Delegate +description: The host running the Harness Delegate must run the Cloud Foundry CLI in order to execute the CF commands used by Harness during a Tanzu Application Service (TAS) deployment. You can follow the steps… +sidebar_position: 190 +helpdocs_topic_id: 8tsb75aldu +helpdocs_category_id: emle05cclq +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The host running the Harness Delegate must run the Cloud Foundry CLI in order to execute the CF commands used by Harness during a Tanzu Application Service (TAS) deployment. + +You can follow the steps in [Installing the cf CLI](https://docs.pivotal.io/pivotalcf/2-3/cf-cli/install-go-cli.html) from Tanzu to install the CLI, or you can also use a Delegate Profile to install the CLI, as described in this topic. + +The version of the CF CLI you install on the Delegate should always match the TAS features you are using in your Harness TAS deployment. For example, if you are using `buildpacks` in your manifest.yml in your Harness Service, the CLI you install on the Delegate should be version 3.6 or later. 
+ +This topic provides examples of Delegate Profile scripts that install CF CLI 6 and 7: + +* [Review: Using CF CLI Versions](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md#review-using-cf-cli-versions) +* [Review: Delegate Capability Check for CF CLI](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md#review-delegate-capability-check-for-cf-cli) + + [Limitations](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md#limitations) +* [Select the CF CLI Version in a Harness Service](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md#select-the-cf-cli-version-in-a-harness-service) +* [Install the CF CLI on Harness Delegates using a Package Manager](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md#install-the-cf-cli-on-harness-delegates-using-a-package-manager) + + [CF CLI 6](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md#cf-cli-6) + + [CF CLI 7](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md#cf-cli-7) +* [Install the CF CLI on Harness Delegates using a Compressed Binary](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md#install-the-cf-cli-on-harness-delegates-using-a-compressed-binary) + + [CF CLI 6](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md#cf-cli-6-2) + + [CF CLI 7](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md#cf-cli-7-2) +* [Install Two Different CF CLI Versions using a Compressed Binary](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md#install-two-different-cf-cli-versions-using-a-compressed-binary) + + [CF CLI 6](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md#cf-cli-6-3) + + [CF CLI 7](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md#cf-cli-7-3) + + [Package Managers take Precedence over Compressed Binary](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md#package-managers-take-precedence-over-compressed-binary) + +### Review: Using CF CLI Versions + +Setting up Harness to use CF CLI versions involves the following steps: + +1. 
Install the CF CLI version on a Delegate manually or using a [Delegate Profile](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#delegate_profiles).
+2. Set the CF CLI version on the Harness Service(s) you are using for TAS deployments.
+
+Details and options for these steps are described below.
+
+### Review: Delegate Capability Check for CF CLI
+
+Once you have installed the CF CLI on a Delegate using a [Delegate Profile](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#delegate_profiles), and set the CF CLI version on the Harness Services (both steps are described below), the Harness Delegate performs the following capability check at deployment runtime:
+
+1. When the Workflow starts, the Delegate capability check determines whether a specific version is installed on available Delegates.
+The required version is determined by whether or not the **Enable CF CLI 7** option is selected in the Harness Service being deployed. (Currently, the **Enable CF CLI 7** feature is behind the Feature Flag `CF_CLI7`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.)
+2. The Delegate capability check executes the `cf --version` command on the Delegates to verify the required version.
+3. If the result of the version command is empty, the appropriate Delegate env variable is examined: `CF_CLI6_PATH` for CF 6 and `CF_CLI7_PATH` for CF 7.
+4. If the required version is not installed on any Delegate, the `No eligible Delegate ….` error message appears in Harness.
+5. If the capability check finds a Delegate with the required version installed, the Delegate task is sent to that Delegate and the capability check results are recorded and remain valid for the next **6 hours**.
+
+#### Limitations
+
+In some cases, you might have uninstalled a CF CLI version and then installed a different version within the 6 hour capability check window.
+
+In these cases, the Delegate might be looking for the old version. 
If so, you will see errors like: `Unable to find CF CLI version on delegate, requested version: v6` or `Unable to find CF CLI version on delegate, requested version: v7`.
+
+### Select the CF CLI Version in a Harness Service
+
+After you have installed the CF CLI on a Delegate using a [Delegate Profile](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#delegate_profiles), you must select the CF CLI version in the Harness Service you are using for your TAS deployment.
+
+![](./static/install-cloud-foundry-cli-6-and-7-on-harness-delegates-22.png)
+
+* **CF CLI 6:** By default, Harness uses CF CLI 6. If you are using CF CLI 6, then ensure that the **Enable CF CLI 7** setting is not selected.
+* **CF CLI 7:** To use CF CLI 7, select **Enable CF CLI 7**.
+
+Currently, the **Enable CF CLI 7** feature is behind the Feature Flag `CF_CLI7`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+### Install the CF CLI on Harness Delegates using a Package Manager
+
+Two different CF versions cannot be installed on the same Delegate using a package manager, but they can be installed using compressed binaries.
+
+Create a [Delegate Profile](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#delegate_profiles). 
+ +Add the following script to the Delegate Profile: + +#### CF CLI 6 + +This script installs CF CLI 6 and `autoscaler` and `Create-Service-Push` plugins: + + +``` +apt-get install wget +wget -q -O - https://packages.cloudfoundry.org/debian/cli.cloudfoundry.org.key | apt-key add - +echo "deb https://packages.cloudfoundry.org/debian stable main" | tee /etc/apt/sources.list.d/cloudfoundry-cli.list +apt-get update +apt-get install cf-cli + +# autoscaler plugin +# download and install pivnet +wget -O pivnet github.com/pivotal-cf/pivnet-cli/releases/download/v0.0.55/pivnet-linux-amd64-0.0.55 && chmod +x pivnet && mv pivnet /usr/local/bin; +pivnet login --api-token= + +# download and install autoscaler plugin by pivnet +pivnet download-product-files --product-slug='pcf-app-autoscaler' --release-version='2.0.295' --product-file-id=912441 +cf install-plugin -f autoscaler-for-pcf-cliplugin-linux64-binary-2.0.295 + +# install Create-Service-Push plugin from community +cf install-plugin -r CF-Community "Create-Service-Push" + +# verify cf version +cf --version + +# verify plugins +cf plugins +``` +Apply the Delegate Profile to the Delegate(s) that will be used for your TAS deployment. 
+ +The output of `cf --version` should be: + + +``` +cf version 6.53.0+8e2b70a4a.2020-10-01 +``` +The output of `cf plugins` should be: + + +``` +App Autoscaler 2.0.295 autoscaling-apps Displays apps bound to the autoscaler +App Autoscaler 2.0.295 autoscaling-events Displays previous autoscaling events for the app +App Autoscaler 2.0.295 autoscaling-rules Displays rules for an autoscaled app +App Autoscaler 2.0.295 autoscaling-slcs Displays scheduled limit changes for the app +App Autoscaler 2.0.295 configure-autoscaling Configures autoscaling using a manifest file +App Autoscaler 2.0.295 create-autoscaling-rule Create rule for an autoscaled app +App Autoscaler 2.0.295 create-autoscaling-slc Create scheduled instance limit change for an autoscaled app +App Autoscaler 2.0.295 delete-autoscaling-rule Delete rule for an autoscaled app +App Autoscaler 2.0.295 delete-autoscaling-rules Delete all rules for an autoscaled app +App Autoscaler 2.0.295 delete-autoscaling-slc Delete scheduled limit change for an autoscaled app +App Autoscaler 2.0.295 disable-autoscaling Disables autoscaling for the app +App Autoscaler 2.0.295 enable-autoscaling Enables autoscaling for the app +App Autoscaler 2.0.295 update-autoscaling-limits Updates autoscaling instance limits for the app +Create-Service-Push 1.3.2 create-service-push, cspush Works in the same manner as cf push, except that it will create services defined in a services-manifest.yml file first before performing a cf push. 
+```
+#### CF CLI 7
+
+This script installs CF CLI 7 and `autoscaler` and `Create-Service-Push` plugins:
+
+
+```
+apt-get install wget
+wget -q -O - https://packages.cloudfoundry.org/debian/cli.cloudfoundry.org.key | apt-key add -
+echo "deb https://packages.cloudfoundry.org/debian stable main" | tee /etc/apt/sources.list.d/cloudfoundry-cli.list
+sudo apt-get update
+sudo apt-get install cf7-cli
+
+# autoscaler plugin
+# download and install pivnet
+wget -O pivnet github.com/pivotal-cf/pivnet-cli/releases/download/v0.0.55/pivnet-linux-amd64-0.0.55 && chmod +x pivnet && mv pivnet /usr/local/bin;
+pivnet login --api-token=
+
+# download and install autoscaler plugin by pivnet
+pivnet download-product-files --product-slug='pcf-app-autoscaler' --release-version='2.0.295' --product-file-id=912441
+cf install-plugin -f autoscaler-for-pcf-cliplugin-linux64-binary-2.0.295
+
+# install Create-Service-Push plugin
+# unable to use Create-Service-Push from community repo due to following error
+# https://github.com/dawu415/CF-CLI-Create-Service-Push-Plugin/issues/13
+wget https://github.com/dawu415/CF-CLI-Create-Service-Push-Plugin/releases/download/1.3.2/CreateServicePushPlugin.linux64
+cf install-plugin CreateServicePushPlugin.linux64
+
+# verify cf version
+cf --version
+
+# verify plugins
+cf plugins
+```
+Apply the Delegate Profile to the Delegate(s) that will be used for your TAS deployment.
+
+The output of `cf --version` should be:
+
+
+```
+cf version 7.2.0+be4a5ce2b.2020-12-10
+```
+The output of `cf plugins` should be the same as the output for CF CLI 6.
+
+### Install the CF CLI on Harness Delegates using a Compressed Binary
+
+Two different CF versions cannot be installed on the same Delegate using a package manager, but they can be installed using compressed binaries.
+
+Create a [Delegate Profile](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#delegate_profiles). 
+ +Add the following script to the Delegate Profile: + +#### CF CLI 6 + +This script installs the CF CLI 6 compressed binary and `autoscaler` and `Create-Service-Push` plugins: + + +``` +# download compressed binary +curl -L "https://packages.cloudfoundry.org/stable?release=linux64-binary&source=github&version=v6" | tar -zx + +# ...move it to /usr/local/bin or a location you know is in your $PATH +mv cf /usr/local/bin + +# autoscaler plugin +# download and install pivnet +apt-get install wget +wget -O pivnet github.com/pivotal-cf/pivnet-cli/releases/download/v0.0.55/pivnet-linux-amd64-0.0.55 && chmod +x pivnet && mv pivnet /usr/local/bin; +pivnet login --api-token= + +# download and install autoscaler plugin by pivnet +pivnet download-product-files --product-slug='pcf-app-autoscaler' --release-version='2.0.295' --product-file-id=912441 +cf install-plugin -f autoscaler-for-pcf-cliplugin-linux64-binary-2.0.295 + +# install Create-Service-Push plugin from community +cf install-plugin -r CF-Community "Create-Service-Push" + +# verify cf version +cf --version + +# verify plugins +cf plugins +``` +If there is a requirement to install a specific CLI version, update the `version` path param in the above download URL with a specific version. + +Let’s say you want to install `version=6.52.0` . The download URL should be `https://packages.cloudfoundry.org/stable?release=linux64-binary&source=github&version=6.52.0`. + +Apply the Delegate Profile to the Delegate(s) that will be used for your Tanzu deployment. 
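The pinned-version download described above can be kept in a small helper so the version is bumped in one place. This is an illustrative sketch (the `CF_VERSION` variable name is an assumption, not anything Harness requires); the actual download and install is left commented out:

```shell
# Sketch: build the pinned download URL described above.
# CF_VERSION is illustrative; any released 6.x version can be used.
CF_VERSION="6.52.0"
CF_URL="https://packages.cloudfoundry.org/stable?release=linux64-binary&source=github&version=${CF_VERSION}"

# Uncomment to actually download and install on the Delegate host:
# curl -L "$CF_URL" | tar -zx && mv cf /usr/local/bin

echo "$CF_URL"
```

Keeping the version in one variable makes it easier to bump the pinned CLI consistently across Delegate Profiles.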
+
+#### CF CLI 7
+
+This script installs the CF CLI 7 compressed binary, `autoscaler` and `Create-Service-Push` plugins:
+
+
+```
+# download compressed binary
+curl -L "https://packages.cloudfoundry.org/stable?release=linux64-binary&version=v7&source=github" | tar -zx
+
+# ...move it to /usr/local/bin or a location you know is in your $PATH, ("cf" is symlink)
+mv cf7 /usr/local/bin
+mv cf /usr/local/bin
+
+# autoscaler plugin
+# download and install pivnet to /tmp
+apt-get install wget
+wget -O pivnet github.com/pivotal-cf/pivnet-cli/releases/download/v0.0.55/pivnet-linux-amd64-0.0.55 && chmod +x pivnet && mv pivnet /usr/local/bin;
+pivnet login --api-token=
+
+# download and install autoscaler plugin by pivnet
+pivnet download-product-files --product-slug='pcf-app-autoscaler' --release-version='2.0.295' --product-file-id=912441
+cf install-plugin -f autoscaler-for-pcf-cliplugin-linux64-binary-2.0.295
+
+# install Create-Service-Push plugin
+# unable to use Create-Service-Push from community repo due to following error
+# https://github.com/dawu415/CF-CLI-Create-Service-Push-Plugin/issues/13
+wget https://github.com/dawu415/CF-CLI-Create-Service-Push-Plugin/releases/download/1.3.2/CreateServicePushPlugin.linux64
+cf install-plugin CreateServicePushPlugin.linux64
+
+# verify cf version
+cf --version
+
+# verify plugins
+cf plugins
+```
+When you download and extract the package, you will get the CF CLI 7 executable (**cf7**) and its symlink (**cf**). The symlink must also be moved to the same location as the executable because it is used by the **CF CLI Command** steps that use `cf`.
+
+If there is a requirement to install a specific CLI version, update the `version` parameter in the above download URL with a specific version.
+
+Let’s say you want to install `version=7.2.0`. 
The download URL should be `https://packages.cloudfoundry.org/stable?release=linux64-binary&source=github&version=7.2.0`.
+
+Apply the Delegate Profile to the Delegate(s) that will be used for your Tanzu deployment.
+
+### Install Two Different CF CLI Versions using a Compressed Binary
+
+Let's look at a few use cases for using two different CF CLI compressed binaries:
+
+* Install one CLI version by package manager and another version using a compressed binary on the same Delegate.
+* Install both versions using compressed binaries.
+* Install a compressed binary that has been through a security scan and includes a security fix.
+
+In order to satisfy the above cases, we can use the following approach.
+
+Create a [Delegate Profile](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#delegate_profiles).
+
+Add the following script to the Delegate Profile:
+
+#### CF CLI 6
+
+This script installs the CF CLI 6 compressed binary and `autoscaler` and `Create-Service-Push` plugins:
+
+
+```
+# download compressed binary, provide url to compressed binary
+curl -L "" | tar -zx
+
+# ...move it to path on your file system
+mv cf /
+
+# autoscaler plugin
+# download and install pivnet
+apt-get install wget
+wget -O pivnet github.com/pivotal-cf/pivnet-cli/releases/download/v0.0.55/pivnet-linux-amd64-0.0.55 && chmod +x pivnet && mv pivnet /usr/local/bin;
+pivnet login --api-token=
+
+# download and install autoscaler plugin by pivnet
+pivnet download-product-files --product-slug='pcf-app-autoscaler' --release-version='2.0.295' --product-file-id=912441
+ install-plugin -f autoscaler-for-pcf-cliplugin-linux64-binary-2.0.295
+
+# install Create-Service-Push plugin from community
+ install-plugin -r CF-Community "Create-Service-Push"
+
+# verify cf version
+ --version
+
+# verify plugins
+ plugins
+```
+`` should include the full path to CF. 
For example, if you install `cf` to the location `/home/cflibs/v6/`, then `` should be replaced by `/home/cflibs/v6/cf`.
+
+Apply the Delegate Profile to the Delegate(s) that will be used for your TAS deployment.
+
+Update the `CF_CLI6_PATH` env variable in the Delegate config file and start/restart the Delegate.
+
+The value of the `CF_CLI6_PATH` env variable should be `` , for example `CF_CLI6_PATH=/home/cflibs/v6/cf`.
+
+#### CF CLI 7
+
+This script installs the CF CLI 7 compressed binary and `autoscaler` and `Create-Service-Push` plugins:
+
+
+```
+# download compressed binary, provide url to compressed binary
+curl -L "" | tar -zx
+
+# ...move it to path on your file system
+mv cf7 /
+mv cf /
+
+# autoscaler plugin
+# download and install pivnet
+apt-get install wget
+wget -O pivnet github.com/pivotal-cf/pivnet-cli/releases/download/v0.0.55/pivnet-linux-amd64-0.0.55 && chmod +x pivnet && mv pivnet /usr/local/bin;
+pivnet login --api-token=
+
+# download and install autoscaler plugin by pivnet
+pivnet download-product-files --product-slug='pcf-app-autoscaler' --release-version='2.0.295' --product-file-id=912441
+ install-plugin -f autoscaler-for-pcf-cliplugin-linux64-binary-2.0.295
+
+# install Create-Service-Push plugin
+# unable to use Create-Service-Push from community repo due to following error
+# https://github.com/dawu415/CF-CLI-Create-Service-Push-Plugin/issues/13
+wget https://github.com/dawu415/CF-CLI-Create-Service-Push-Plugin/releases/download/1.3.2/CreateServicePushPlugin.linux64
+ install-plugin CreateServicePushPlugin.linux64
+
+# verify cf version
+ --version
+
+# verify plugins
+ plugins
+```
+When you download and extract the package, you will get the CF CLI 7 executable (**cf7**) and its symlink (**cf**). The symlink must also be moved to the same location as the executable because it is used by the **CF CLI Command** steps that use `cf`.
+
+`` should include the full path to CF. 
For example, if you install `cf` to the location `/home/cflibs/v7/`, then `` should be replaced by `/home/cflibs/v7/cf`.
+
+Apply the Delegate Profile to the Delegate(s) that will be used for your TAS deployment.
+
+Update the `CF_CLI7_PATH` env variable in the Delegate config file and start/restart the Delegate.
+
+The value of the `CF_CLI7_PATH` env variable should be `` , for example `CF_CLI7_PATH=/home/cflibs/v7/cf`.
+
+#### Package Managers take Precedence over Compressed Binary
+
+Two different CF versions cannot be installed on the same Delegate by using a package manager, but they can be installed using compressed binaries.
+
+If you install the same CF CLI version on the same Delegate both by a package manager and using a compressed binary, the CF CLI installed by the package manager takes precedence over the compressed binary during the Delegate capability check.
+
+#### Notes
+
+* The two different binary versions should be installed in different folders and locations, and the commands in the **CF CLI Command** step should be updated accordingly.
+For example, if you install one version in `home/cfcli/cf` and a different version in `home/cfcli/cf7`, when Harness tries to execute CF CLI commands in the **CF CLI Command** step that simply use `cf`, it will always use the `home/cfcli/cf` version.
+* If you install plugins for one version, you do not need to install them for the second version. Plugins should be compatible with both versions. 
+ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/override-pcf-manifests-and-config-variables-and-files.md b/docs/first-gen/continuous-delivery/pcf-deployments/override-pcf-manifests-and-config-variables-and-files.md new file mode 100644 index 00000000000..8bb338a44ad --- /dev/null +++ b/docs/first-gen/continuous-delivery/pcf-deployments/override-pcf-manifests-and-config-variables-and-files.md @@ -0,0 +1,144 @@ +--- +title: Override Tanzu Manifests and Config Variables and Files +description: Configure your Environment to override settings of the Harness PCF Services that use the Environment, thereby making the Environment dictate PCF manifest values. +sidebar_position: 80 +helpdocs_topic_id: r0vp331jnq +helpdocs_category_id: emle05cclq +helpdocs_is_private: false +helpdocs_is_published: true +--- + +A Tanzu Application Service (formerly PCF) Service and Environment are used together when you set up a Harness Workflow to deploy your TAS app. You can configure your Environment to override settings of the Harness TAS Services that use the Environment, thereby making the Environment dictate TAS manifest values. + +For example, a TAS Service uses a manifest.yml file that specifies specific routes, but an Environment might need to change the routes because it is deploying the app in the manifest to a QA space. + + +### Before You Begin + +* See [Connect to Your Target Tanzu Account](connect-to-your-target-pcf-account.md). +* See [Define Your Tanzu Target Infrastructure](define-your-pcf-target-infrastructure.md). + +### Option 1: Variable Override + +You can overwrite Service variables when one or more Services are paired with this Environment in a Workflow. + +To overwrite a Service variable, do the following: + +1. In the Harness Service, note the name of the Service variable in **Config Variables**.![](./static/override-pcf-manifests-and-config-variables-and-files-72.png) +2. In the Harness Environment, click **Service Configuration Override**. 
The **Service Configuration Override** dialog appears.![](./static/override-pcf-manifests-and-config-variables-and-files-73.png)
+3. In **Service**, select the Harness Service that contains the variable you want to overwrite. If you select **All Services**, you will have to manually enter the name of the variable you want to overwrite. The following steps use a single Service.
+
+ When you have selected a Service, the **Override Type** options appear.
+ ![](./static/override-pcf-manifests-and-config-variables-and-files-74.png)
+
+4. Click **Variable Override**.
+5. In **Configuration Variable**, select the variable you want to overwrite.
+6. In **Override Scope**, the only option is **Entire Environment**, currently.
+7. In **Type**, select **Text** or **Encrypted Text**.
+8. In **Override Value**, if you selected **Text** in **Type**, enter a new value. If you selected **Encrypted Text**, select an existing Encrypted Text secret. Encrypted Text secrets are set up in Harness [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management).
+
+ When you are done, the dialog will look something like this:
+
+ ![](./static/override-pcf-manifests-and-config-variables-and-files-75.png)
+
+9. Click **Submit**. The override is added to the Environment:
+
+ ![](./static/override-pcf-manifests-and-config-variables-and-files-76.png)
+
+### Option 2: TAS Manifests Override
+
+The most commonly-used override is for TAS manifests. You can override the entire manifest.yml of a Service or any of its values.
+
+To overwrite any property in an inline or remote manifest, the manifest.yml must use vars.yml for the property value.
+
+For example, if you hardcode `route: example.com` in your inline or remote manifest.yml, you cannot overwrite it in **Service Configuration Overrides**. You must use a variable like `route: ((ROUTE1))` in manifest.yml and then provide a value for the variable like `ROUTE1: example.com` in vars.yml. 
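To make the pattern concrete, here is a minimal sketch of such a manifest.yml/vars.yml pair (the app name, memory, and route values are illustrative). The commented `cf push --vars-file` line shows how the cf CLI itself performs the same substitution; Harness does this for you during deployment:

```shell
# Sketch: a manifest.yml that takes its route from a ((ROUTE1)) variable,
# plus the vars.yml that supplies the default value.
cat > manifest.yml <<'EOF'
applications:
- name: my-app
  memory: 750M
  instances: 1
  routes:
  - route: ((ROUTE1))
EOF

cat > vars.yml <<'EOF'
ROUTE1: example.com
EOF

# Because the route is a variable, an Environment override (or a push with a
# different vars file) can replace it:
# cf push -f manifest.yml --vars-file vars.yml
grep -c '((ROUTE1))' manifest.yml
```

If the route were hardcoded instead, there would be no variable left for **Service Configuration Overrides** to replace.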
+
+For example, here are inline manifest.yml and vars.yml files using variables for routes. These variables are then overwritten in **Service Configuration Overrides**:
+
+![](./static/override-pcf-manifests-and-config-variables-and-files-77.png)
+
+You can only perform one overwrite of a single Service. If you attempt to add a second overwrite of the same Service, you will receive this error: `Can’t add, this override already exists. Please use the edit button to update.`
+
+To overwrite a TAS manifest, do the following:
+
+1. In the Harness Service, note the name(s) of the Service vars.yml property in **Manifests** that you want to overwrite. If you are using remote manifest files, go to your remote repo and note the name(s).
+2. In the Harness Environment, click **Service Configuration Override**. The **Service Configuration Override** dialog appears.
+3. In **Service**, select the Harness Service that contains the variable you want to overwrite. If you select **All Services**, you will have to manually enter the name of the variable you want to overwrite. The following steps use a single Service.
+The **Override Type** options appear.
+4. Click **TAS Manifests**.
+5. Click **Local** or **Remote**. The steps for each option are below.
+
+#### Overwrite using Local Values
+
+You can overwrite values in the vars.yml configured in your Service. It does not matter if the vars.yml in your Service is inline or remote.
+
+To overwrite the variable values configured in your Harness Service, you can simply enter the vars.yml variables you want to overwrite, and enter new values. Here is an example overwriting routes in an inline vars.yml:
+
+![](./static/override-pcf-manifests-and-config-variables-and-files-78.png)
+
+#### Overwrite using Remote Values
+
+Typically, a single manifest.yml is used in a Service and then remote vars.yml files are used to supply different variable values. 
+
+To overwrite manifest property values using remote files, you simply point to a remote Git folder that contains the manifest.yml or vars.yml files containing the new values.
+
+For example, here is a Service with an inline manifest.yml and vars.yml, and it uses the **pcf-dev/vars.yml** file in the remote Git repo to overwrite the vars.yml values:
+
+![](./static/override-pcf-manifests-and-config-variables-and-files-79.png)
+
+The remote vars.yml file does not need to supply all of the variables for the vars.yml file it is overwriting. You can simply overwrite the variables you want.
+
+As you can see in the above example, the remote vars.yml file only overwrites the **routes** in the inline vars.yml file.
+
+##### File Path
+
+If the manifest you select is incorrect due to missing attributes or special characters, deployment will fail. (Without the `SINGLE_MANIFEST_SUPPORT` feature flag described below, deployment instead continues and Harness uses the next manifest available at the Service level.)
+
+### Review: Multiple Manifests at the Highest Level will Fail Deployment
+
+Currently, this feature is behind the feature flag `SINGLE_MANIFEST_SUPPORT`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+Manifests can be defined at the following levels, from highest to lowest priority:
+
+1. **Environment level for a specific Service:** defined at Environment level with its scope set to a specific Service.![](./static/override-pcf-manifests-and-config-variables-and-files-80.png)
+2. **Environment level for All Services:** defined at Environment level with its scope set to All Services.![](./static/override-pcf-manifests-and-config-variables-and-files-81.png)
+3. 
**Service level manifest:** the app manifest set up in the Harness Service.![](./static/override-pcf-manifests-and-config-variables-and-files-82.png)
+See:
+ * [Adding and Editing Inline Tanzu Manifest Files](adding-and-editing-inline-pcf-manifest-files.md)
+ * [Upload Local and Remote Tanzu Resource Files](upload-local-and-remote-pcf-resource-files.md)
+
+Whenever manifests are present at multiple levels, the manifest present at the level having **highest priority** is set as the final manifest.
+
+For example, if you have a manifest override defined at Environment level with its scope set to a specific Service, it is the final manifest. No other manifests in the Environment or Service will be used.
+
+If there are multiple manifests at the level used by Harness (the highest level with a manifest set), Harness will fail deployment.
+
+Harness performs this check for **Application Manifest** and **Autoscalar Manifest**. It does not apply to **Variable Manifests**.
+
+### Review: Variable Precedence
+
+If multiple variables of the same name are defined in different places, such as a Service's **Manifests** or **Config Variables** sections or an Environment's **Service Configuration Overrides** section, the variables get overwritten according to the following precedence, from highest to lowest:
+
+1. **Service Configuration Overrides** variables for a specific Service are of the greatest precedence, and override all others.
+2. **Service Configuration Overrides** variables for all Services.
+3. Variables in a Service **Config Variables** section.
+4. Variables defined in inline or remote files in Service **Manifests** section.
+
+For more information, see [Override a Service Configuration](https://docs.harness.io/article/n39w05njjv-environment-configuration#override_a_service_configuration).
+
+Variable precedence is different from app or autoscalar manifest precedence. 
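The precedence list above amounts to a last-writer-wins merge: values are applied from the lowest-priority source to the highest, so a higher-priority source that defines the same variable replaces the earlier value, while sources that do not define it leave it alone. A sketch (all names and values here are illustrative):

```shell
# Sketch of variable precedence as a last-writer-wins merge.
set_var() {                 # set_var NAME VALUE defined|undefined
  # Only apply the value when the source actually defines the variable.
  [ "$3" = "defined" ] && eval "$1=\"\$2\""
}

set_var ROUTE1 from-manifest          defined   # 4. Service Manifests files
set_var ROUTE1 from-config-vars       defined   # 3. Service Config Variables
set_var ROUTE1 from-override-all      defined   # 2. Overrides for All Services
set_var ROUTE1 from-override-specific defined   # 1. Overrides for a specific Service

set_var MEMORY 750M defined                     # only defined in the manifest
set_var MEMORY ""   undefined                   # higher levels leave it alone

echo "ROUTE1=$ROUTE1"   # the specific-Service override wins
echo "MEMORY=$MEMORY"   # the manifest value survives
```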
+ +#### App and Autoscalar Manifests Precedence + +App and autoscalar manifests are defined at the following levels, from highest to lowest priority: + +1. **Environment level for a specific Service:** defined at Environment level with its scope set to a specific Service. +2. **Environment level for All Services:** defined at Environment level with its scope set to All Services. +3. **Service level manifest:** the app manifest set up in the Harness Service. +See: + * [Adding and Editing Inline Tanzu Manifest Files](adding-and-editing-inline-pcf-manifest-files.md) + * [Upload Local and Remote Tanzu Resource Files](upload-local-and-remote-pcf-resource-files.md) + +### Next Steps + +* [Create a Basic Tanzu Deployment](create-a-basic-pcf-deployment.md) +* [Create a Canary Tanzu Deployment](create-a-canary-pcf-deployment.md) +* [Create a Blue/Green Tanzu Deployment](create-a-blue-green-pcf-deployment.md) + diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/pcf-built-in-variables.md b/docs/first-gen/continuous-delivery/pcf-deployments/pcf-built-in-variables.md new file mode 100644 index 00000000000..c9c1044c3e5 --- /dev/null +++ b/docs/first-gen/continuous-delivery/pcf-deployments/pcf-built-in-variables.md @@ -0,0 +1,500 @@ +--- +title: Tanzu Built-in Variables +description: Harness includes some variables to help you output PCF deployment information in your Workflows. +sidebar_position: 150 +helpdocs_topic_id: ojd73hseby +helpdocs_category_id: emle05cclq +helpdocs_is_private: false +helpdocs_is_published: true +--- + +TAS was formerly Pivotal Cloud Foundry (PCF). The variables in this topic use `pcf` as a result. The variables work the same as before. 
There is no need to change any existing implementations.
+
+Harness includes the following variables to help you output TAS deployment information in your Workflows, such as in the [CF Command](run-cf-cli-commands-and-scripts-in-a-workflow.md#step-run-the-cf-cli-command) or the [Shell Script command](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output).
+
+This topic covers the following:
+
+
+- [Variables List](#variables-list)
+ + [`${service.manifest}` ](#servicemanifest)
+ + [`${service.manifest.repoRoot}` ](#servicemanifestreporoot)
+ + [`${pcf.newAppRoutes}` ](#pcfnewapproutes)
+ + [`${pcf.newAppName}` ](#pcfnewappname)
+ + [`${pcf.newAppGuid}` ](#pcfnewappguid)
+ + [`${pcf.oldAppName}` ](#pcfoldappname)
+ + [`${pcf.activeAppName}` and `${pcf.inActiveAppName}` ](#pcfactiveappname-and-pcfinactiveappname)
+ + [`${pcf.oldAppGuid}` ](#pcfoldappguid)
+ + [`${pcf.oldAppRoutes}` ](#pcfoldapproutes)
+ + [`${pcf.finalRoutes}` ](#pcffinalroutes)
+ + [`${pcf.tempRoutes}` ](#pcftemproutes)
+ + [`${infra.pcf.cloudProvider.name}` ](#infrapcfcloudprovidername)
+ + [`${infra.pcf.organization}` ](#infrapcforganization)
+ + [`${infra.pcf.space}` ](#infrapcfspace)
+ + [`${host.pcfElement.applicationId}` ](#hostpcfelementapplicationid)
+ + [`${host.pcfElement.displayName}` ](#hostpcfelementdisplayname)
+ + [`${host.pcfElement.instanceIndex}` Shows the [CF\_INSTANCE\_INDEX](https://docs.cloudfoundry.org/devguide/deploy-apps/environment-variable.html#CF-INSTANCE-INDEX). 
](#hostpcfelementinstanceindex-shows-the-cf_instance_indexhttpsdocscloudfoundryorgdevguidedeploy-appsenvironment-variablehtmlcf-instance-index) +- [Harness TAS Environment Variables](#harness-tas-environment-variables) +- [App Name Variables and Blue Green Deployments](#app-name-variables-and-blue-green-deployments) + * [Version to Non-Version](#version-to-non-version) + + [Variable Resolution during Successful Deployments](#variable-resolution-during-successful-deployments) + + [Failure during App Setup or App Resize](#failure-during-app-setup-or-app-resize) + + [Variable Resolution during App Resize Step Failure](#variable-resolution-during-app-resize-step-failure) + + [Failure during Swap Route Step](#failure-during-swap-route-step) + + [Variables Resolution during Swap Route Step Failure](#variables-resolution-during-swap-route-step-failure) + * [Non-Version to Non-Version](#non-version-to-non-version) + + [Variables Resolution during Successful Deployments](#variables-resolution-during-successful-deployments) + + [Failure during the App Setup or App Resize Steps](#failure-during-the-app-setup-or-app-resize-steps) + + [Variables Resolution during App Setup or App Resize Failure](#variables-resolution-during-app-setup-or-app-resize-failure) + + [Failure during the Swap Route Step](#failure-during-the-swap-route-step) + + [Variables Resolution during Swap Route Failure](#variables-resolution-during-swap-route-failure) + * [Non-Version to Version](#non-version-to-version) + + [Variables Resolution during a Successful Deployment](#variables-resolution-during-a-successful-deployment) + + [Failures during the App Setup or App Resize Steps](#failures-during-the-app-setup-or-app-resize-steps) + + [Variables Resolution during App Setup or App Resize Failure](#variables-resolution-during-app-setup-or-app-resize-failure-1) + + [Failure during the Swap Route Step](#failure-during-the-swap-route-step-1) + + [Variables Resolution during the Swap Route Step 
Failure](#variables-resolution-during-the-swap-route-step-failure)
+ * [Notes](#notes)
+
+
+## Variables List
+
+:::note
+In Blue/Green deployments, the outputs for the `${pcf.finalRoutes}`, `${pcf.oldAppRoutes}` and `${pcf.tempRoutes}` variables do not change, but the outputs for `${pcf.newAppRoutes}` change as the routes are swapped in the **Swap Routes** or **Rollback** step. Simply put, `${pcf.newAppRoutes}` reflects the routes at a point in time (after the **Swap Routes** or **Rollback** step).
+:::
+
+#### `${service.manifest}`
+
+Refers to the folder containing your manifest files. See [Scripts and Variables](run-cf-cli-commands-and-scripts-in-a-workflow.md#option-scripts-and-variables).
+
+#### `${service.manifest.repoRoot}`
+
+Refers to the remote Git repo root folder containing your manifest files. See [Scripts and Variables](run-cf-cli-commands-and-scripts-in-a-workflow.md#option-scripts-and-variables).
+
+#### `${pcf.newAppRoutes}`
+
+An array of all the routes defined in your manifest.yml in your Harness Service.
+
+You can reference any route in the array using its index, such as `${pcf.newAppRoutes[0]}`.
+
+You can use the route to create a URL in a script, such as `http://${pcf.newAppRoutes[0]}`.
+
+In a Blue/Green deployment, `${pcf.newAppRoutes}` are the same as `${pcf.tempRoutes}` until the Swap Routes step is run, after which `${pcf.newAppRoutes}` is the same as `${pcf.finalRoutes}`.
+
+:::note
+This variable expression should be used after the **App Setup** step.
+:::
+
+#### `${pcf.newAppName}`
+
+New app name.
+
+:::note
+This variable expression should be used after the **App Setup** step.
+:::
+
+#### `${pcf.newAppGuid}`
+
+New app GUID.
+
+:::note
+This variable expression should be used after the **App Setup** step.
+:::
+
+#### `${pcf.oldAppName}`
+
+Old app name. This is the app that is replaced by your newly deployed app.
+
+:::note
+This variable expression should be used after the **App Setup** step. 
+::: + +#### `${pcf.activeAppName}` and `${pcf.inActiveAppName}` + +See [App Name Variables and Blue Green Deployments](pcf-built-in-variables.md#app-name-variables-and-blue-green-deployments) below. + +#### `${pcf.oldAppGuid}` + +Old app GUID. + +:::note +This variable expression should be used after the **App Setup** step. +::: + +#### `${pcf.oldAppRoutes}` + +An array of the routes that were used for the old app. + +You can reference any route in the array using its index, such as `${pcf.oldAppRoutes[0]}`. + +You can use the route to create a URL in a script, such as `http://${pcf.oldAppRoutes[0]}`. + +:::note +This variable expression should be used after the **App Setup** step. +::: + +#### `${pcf.finalRoutes}` + +An array of the active routes once deployment is successful. + +:::note +This variable expression should be used after the **App Setup** step. +::: + +#### `${pcf.tempRoutes}` + +An array of the temporary routes used for Blue/Green deployments. + +:::note +This variable expression should be used after the **App Setup** step. +::: + +#### `${infra.pcf.cloudProvider.name}` + +The Cloud Provider name used in the Infrastructure Definition set up in the Workflow. + +#### `${infra.pcf.organization}` + +The Organization name used in the Infrastructure Definition set up in the Workflow. + +#### `${infra.pcf.space}` + +The Space name used in the Infrastructure Definition set up in the Workflow. + +#### `${host.pcfElement.applicationId}` + +Shows the TAS app ID [environment variable](https://docs.cloudfoundry.org/devguide/deploy-apps/environment-variable.html#view-env), as you would see in the output of a `cf env` command: + +``` +... +{ + "VCAP_APPLICATION": { + "application_id": "fa05c1a9-0fc1-4fbd-bae1-139850dec7a3", + "application_name": "my-app", + ... +``` + +#### `${host.pcfElement.displayName}` + +Shows the TAS app name (`"application_name": "my-app"` in the example above).
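The fields Harness surfaces through these `${host.pcfElement...}` expressions come from the platform-set `VCAP_APPLICATION` environment variable, so you can also read them from inside the app container. Here is a minimal sketch; the JSON is the canned example above, standing in for the real environment variable (in a running container you would read `$VCAP_APPLICATION` instead):

```
# Canned VCAP_APPLICATION JSON standing in for the real environment variable.
vcap='{"application_id": "fa05c1a9-0fc1-4fbd-bae1-139850dec7a3", "application_name": "my-app"}'

# Extract the two fields with sed (a JSON tool such as jq is more robust).
app_id=$(printf '%s' "$vcap" | sed -n 's/.*"application_id": "\([^"]*\)".*/\1/p')
app_name=$(printf '%s' "$vcap" | sed -n 's/.*"application_name": "\([^"]*\)".*/\1/p')
echo "$app_id $app_name"
```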
+ +#### `${host.pcfElement.instanceIndex}` + +Shows the [CF\_INSTANCE\_INDEX](https://docs.cloudfoundry.org/devguide/deploy-apps/environment-variable.html#CF-INSTANCE-INDEX). + +Here is a script that outputs some of the variables: + + +``` +echo ${pcf.newAppRoutes[0]} + +echo New App Name: ${pcf.newAppName} +echo New App GUID: ${pcf.newAppGuid} +echo New App Routes: ${pcf.newAppRoutes} + +echo "\n\n" +echo Old App Name: ${pcf.oldAppName} +echo Old App GUID: ${pcf.oldAppGuid} +echo Old App Routes: ${pcf.oldAppRoutes} + +echo activeRoute: ${pcf.finalRoutes} +echo inActiveRoute: ${pcf.tempRoutes} + +echo ${infra.pcf.cloudProvider.name} +echo ${infra.pcf.organization} +echo ${infra.pcf.space} +``` +Here is an example of the output: + + +``` +Executing command ... + +Basic__demo__pcf__service__DEMO__0-meditating-foo-ii.bar.net + +New App Name: Basic__demo__pcf__service__DEMO__0 +New App GUID: ae7269df-9521-409e-b79e-167ee03c46d0 +New App Routes: [Basic__demo__pcf__service__DEMO__0-meditating-foo-ii.bar.net] + +\n\n +Old App Name: null +Old App GUID: null +Old App Routes: null + +activeRoute: [Basic__demo__pcf__service__DEMO__0-meditating-foo-ii.bar.net] +inActiveRoute: null + +ibmcloud cf +john.doe@example.com +dev + +Command completed with ExitCode (0) +``` +## Harness TAS Environment Variables + +For Blue/Green Workflow deployments only, Harness uses the TAS user-provided [environment variable](https://docs.pivotal.io/application-service/2-9/devguide/deploy-apps/environment-variable.html) `HARNESS_STATUS_IDENTIFIER` to identify new and old (active) apps. + +When a new app is deployed, the variable is `HARNESS_STATUS_IDENTIFIER : STAGE`. + +When Harness swaps routes, the variable is updated to `HARNESS_STATUS_IDENTIFIER : ACTIVE` for the new app, and the old app variable becomes `HARNESS_STATUS_IDENTIFIER : STAGE`. + +If rollback occurs, Harness restores routes and environment variables for old and new apps.
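From a script, you can tell which app is currently active by checking this variable with the CF CLI, for example `cf env <app-name> | grep HARNESS_STATUS_IDENTIFIER` (assuming the CLI is installed and logged in; the app name is a placeholder). A runnable sketch, using canned `cf env` output in place of the real call:

```
# Stand-in for `cf env OrderApp` output (assumption: the user-provided
# variables block prints like this); grep picks out the Harness identifier.
cf_env_output='User-Provided:
HARNESS_STATUS_IDENTIFIER: ACTIVE'
status=$(printf '%s\n' "$cf_env_output" | grep '^HARNESS_STATUS_IDENTIFIER' | awk '{print $2}')
echo "$status"
```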
+ +You can view this `HARNESS_STATUS_IDENTIFIER` variable in the TAS app in the TAS console. + +## App Name Variables and Blue Green Deployments + +Some Tanzu variables display different information as the [Blue Green deployment](create-a-blue-green-pcf-deployment.md) progresses through its steps: + +* `${pcf.oldAppName}` +* `${pcf.newAppName}` +* `${pcf.activeAppName}` +* `${pcf.inActiveAppName}` + +Some of these variables can only be resolved after certain steps in the Blue Green deployment, and their resolved values will change based on whether a step was successful. + +The variables resolution and changes discussed here apply when the **App Name with Version History** setting is enabled in the **App Setup** step, in **Version Management**. + +See [App Name with Version History](tanzu-app-naming-with-harness.md#app-name-with-version-history) for details on the setting before proceeding. + +There are 3 possible app versioning paths in a deployment with **App Name with Version History** enabled: + +* Version to Non-Version +* Non-Version to Non-Version +* Non-Version to Version + +In each of these paths, there are 4 possible outcomes: + +* Deployment was successful. +* Deployment failed for one of the following reasons: + + Failure happened during App Setup + + Failure happened during App Resize + + Failure happened during or after Swap Route + +Let's look at the 3 app versioning paths and how each deployment outcome changes the resolution of the Tanzu variables for each Blue Green Workflow step. + +### Version to Non-Version + +Let's look at a successful deployment of this use case: + +![](./static/pcf-built-in-variables-13.png) + +As you can see, the new successful app version has no version suffix in its name and the previous version app has the suffix `_INACTIVE` added. + +Let's look at variable resolution during successful and failed deployments.
+ +#### Variable Resolution during Successful Deployments + +| | | | | +| --- | --- | --- | --- | +| **Variables** | **After App Setup step** | **After App Resize step** | **After Swap Route step** | +| `${pcf.oldAppName}` | OrderApp\_2(UUID = 2) | OrderApp\_2(UUID = 2) | OrderApp\_INACTIVE(UUID = 2) | +| `${pcf.newAppName}` | OrderApp\_INACTIVE(UUID = 3) | OrderApp\_INACTIVE(UUID = 3) | OrderApp(UUID = 3) | +| `${pcf.activeAppName}` | OrderApp\_2(UUID = 2) | OrderApp\_2(UUID = 2) | OrderApp(UUID = 3) | +| `${pcf.inActiveAppName}` | OrderApp\_INACTIVE(UUID = 3) | OrderApp\_INACTIVE(UUID = 3) | OrderApp\_INACTIVE(UUID = 2) | + +#### Failure during App Setup or App Resize + +If deployment fails during the App Setup or App Resize step, the following actions are taken: + +![](./static/pcf-built-in-variables-14.png) + +The new app version is deleted and the previous app version is restored. + +For the previous inactive application (**OrderApp\_1**), the temp routes and environment variable (**STAGE**) are not restored. To restore these, enable the [Upsize inactive Service Option](create-a-blue-green-pcf-deployment.md#upsize-inactive-service-option) in the Rollback step. 
+ +#### Variable Resolution during App Resize Step Failure + + + +| | | | | +| --- | --- | --- | --- | +| **Variables** | **After App Resize step** | **After Swap Rollback step** | **After App Rollback step** | +| `${pcf.oldAppName}` | OrderApp\_2(UUID = 2) | OrderApp\_2(UUID = 2) | OrderApp\_2(UUID = 2) | +| `${pcf.newAppName}` | OrderApp\_INACTIVE(UUID = 3) | OrderApp\_interim(UUID = 3) | NULL | +| `${pcf.activeAppName}` | OrderApp\_2(UUID = 2) | OrderApp\_2(UUID = 2) | OrderApp\_2(UUID = 2) | +| `${pcf.inActiveAppName}` | OrderApp\_INACTIVE(UUID = 3) | OrderApp\_1(UUID = 1) | OrderApp\_1(UUID = 1) | + +#### Failure during Swap Route Step + +![](./static/pcf-built-in-variables-15.png) + +If deployment fails during the Swap Route step, the following actions are taken: + +* The new application is deleted. +* The previous active app (UUID = 2) is renamed back to its original name in version mode (OrderApp\_2). +* The prod routes are attached to the active app and temp routes are removed. The environment variable **ACTIVE** is set on the active app. + +For the previous inactive application (**OrderApp\_1**), the temp routes and environment variable (**STAGE**) are not restored. To restore these, enable the [Upsize inactive Service Option](create-a-blue-green-pcf-deployment.md#upsize-inactive-service-option) in the Rollback step. 
+ +#### Variables Resolution during Swap Route Step Failure + + + +| | | | | +| --- | --- | --- | --- | +| **Variables** | **After Swap Route step** | **After Swap Rollback step** | **After App Rollback step** | +| `${pcf.oldAppName}` | OrderApp\_INACTIVE(UUID = 2) | OrderApp\_2(UUID = 2) | OrderApp\_2(UUID = 2) | +| `${pcf.newAppName}` | OrderApp(UUID = 3) | OrderApp\_interim(UUID = 3) | NULL | +| `${pcf.activeAppName}` | OrderApp(UUID = 3) | OrderApp\_2(UUID = 2) | OrderApp\_2(UUID = 2) | +| `${pcf.inActiveAppName}` | OrderApp\_INACTIVE(UUID = 2) | OrderApp\_1(UUID = 1) | OrderApp\_1(UUID = 1) | + +### Non-Version to Non-Version + +First, let's look at a successful deployment: + +![](./static/pcf-built-in-variables-16.png) + +You can see the previous version OrderApp (v3) is given the suffix **INACTIVE** and the new version OrderApp (v4) is not given any suffix. + +During the App Setup step: + +* Temp routes and the environment variable are removed from the inactive application (**OrderApp\_INACTIVE**). +* The new application is created with the suffix **\_\_INACTIVE** in the App Setup step and temp routes are attached to it. The environment variable is set to STAGE. + +During the Swap Route step, the following actions are taken: + +* Prod routes are attached and temp routes are removed from the new application (**OrderApp\_INACTIVE**). The environment variable is set to **ACTIVE**. +* Temp routes are attached and prod routes are removed from the current active application +(**OrderApp**, **UUID = 3**). The environment variable is set to STAGE. +* The new active application is renamed from **OrderApp\_INACTIVE** to **OrderApp** +(this is in non-versioning mode). +* The current active application is renamed from **OrderApp** to **OrderApp\_INACTIVE**.
+ +#### Variables Resolution during Successful Deployments + + + +| | | | | +| --- | --- | --- | --- | +| **Variables** | **After App Setup step** | **After App Resize step** | **After Swap Route step** | +| `${pcf.oldAppName}` | OrderApp(UUID = 3) | OrderApp(UUID = 3) | OrderApp\_INACTIVE(UUID = 3) | +| `${pcf.newAppName}` | OrderApp\_INACTIVE(UUID = 4) | OrderApp\_INACTIVE(UUID = 4) | OrderApp(UUID = 4) | +| `${pcf.activeAppName}` | OrderApp(UUID = 3) | OrderApp(UUID = 3) | OrderApp(UUID = 4) | +| `${pcf.inActiveAppName}` | OrderApp\_INACTIVE(UUID = 4) | OrderApp\_INACTIVE(UUID = 4) | OrderApp\_INACTIVE(UUID = 3) | + +#### Failure during the App Setup or App Resize Steps + +If deployment fails during the App Setup or App Resize steps, the following actions are taken: + +![](./static/pcf-built-in-variables-17.png) + +The state of the previous active application (OrderApp) was not changed and there is nothing to restore. The new application is deleted. The previous inactive application name is restored. + +For the previous inactive application (**OrderApp\_INACTIVE**, **UUID = 2**), the temp routes and environment variable (**STAGE**) are not restored. To restore these, enable the [Upsize inactive Service Option](create-a-blue-green-pcf-deployment.md#upsize-inactive-service-option) in the Rollback step.
+ +#### Variables Resolution during App Setup or App Resize Failure + + + +| | | | | +| --- | --- | --- | --- | +| **Variables** | **After App Resize step** | **After Swap Rollback step** | **After App Rollback step** | +| `${pcf.oldAppName}` | OrderApp(UUID = 3) | OrderApp(UUID = 3) | OrderApp(UUID = 3) | +| `${pcf.newAppName}` | OrderApp\_INACTIVE(UUID = 4) | OrderApp\_interim(UUID = 4) | NULL | +| `${pcf.activeAppName}` | OrderApp(UUID = 3) | OrderApp(UUID = 3) | OrderApp(UUID = 3) | +| `${pcf.inActiveAppName}` | OrderApp\_INACTIVE(UUID = 4) | OrderApp\_INACTIVE(UUID = 2) | OrderApp\_INACTIVE(UUID = 2) | + +#### Failure during the Swap Route Step + +![](./static/pcf-built-in-variables-18.png) + +If deployment fails during the Swap Route step, the following actions are taken: + +* The new app is deleted. +* The previous active app (UUID = 3) is renamed back to its original name, **OrderApp**. +* The prod routes are attached and temp routes are removed from the previous app. The environment variable **ACTIVE** is set on the previous app. +* The inactive application (UUID = 2) is renamed to **OrderApp\_INACTIVE**. + +For the previous inactive application (**OrderApp\_INACTIVE**, **UUID = 2**), the temp routes and environment variable (**STAGE**) are not restored. To restore these, enable the [Upsize inactive Service Option](create-a-blue-green-pcf-deployment.md#upsize-inactive-service-option) in the Rollback step.
+ +#### Variables Resolution during Swap Route Failure + + + +| | | | | +| --- | --- | --- | --- | +| **Variables** | **After Swap Route step** | **After Swap Rollback step** | **After App Rollback step** | +| `${pcf.oldAppName}` | OrderApp\_INACTIVE(UUID = 3) | OrderApp(UUID = 3) | OrderApp(UUID = 3) | +| `${pcf.newAppName}` | OrderApp(UUID = 4) | OrderApp\_interim(UUID = 4) | NULL | +| `${pcf.activeAppName}` | OrderApp(UUID = 4) | OrderApp(UUID = 3) | OrderApp(UUID = 3) | +| `${pcf.inActiveAppName}` | OrderApp\_INACTIVE(UUID = 3) | OrderApp\_INACTIVE(UUID = 2) | OrderApp\_INACTIVE(UUID = 2) | + +### Non-Version to Version + +First, let's look at a successful deployment: + +![](./static/pcf-built-in-variables-19.png) + +During the App Setup step, the following actions are taken: + +* Temp routes and the environment variable are removed from the inactive application (**OrderApp\_INACTIVE**, **UUID = 2**). +* The new app is created with the suffix **\_\_INACTIVE** in the App Setup step and temp routes are attached to it. The environment variable is set to **STAGE**. + +During the Swap Route step, the following actions are taken: + +* Prod routes are attached and temp routes are removed from the new app (**OrderApp\_INACTIVE**). The environment variable is set to **ACTIVE**. +* Temp routes are attached and prod routes are removed from the current active application (**OrderApp**). The environment variable is set to **STAGE**. +* The current active application (**UUID = 3**) is renamed from **OrderApp** to **OrderApp\_3** +(this moves it to version mode). +* The new active app is renamed from **OrderApp\_INACTIVE** to **OrderApp\_4**. + +If you change from non-version to version, in the next deployment the new app is created with the suffix **\_\_INACTIVE**. All the apps will be renamed to version mode during the Swap Route step. Harness follows this method to avoid renaming the active app while it is still in use.
From the subsequent deployment onward, the new app will be created in version mode. For example, in the above example, the new app will be created with the name **OrderApp\_5**. + +#### Variables Resolution during a Successful Deployment + + + +| | | | | +| --- | --- | --- | --- | +| **Variables** | **After App Setup step** | **After App Resize step** | **After Swap Route step** | +| `${pcf.oldAppName}` | OrderApp(UUID = 3) | OrderApp(UUID = 3) | OrderApp\_3(UUID = 3) | +| `${pcf.newAppName}` | OrderApp\_INACTIVE(UUID = 4) | OrderApp\_INACTIVE(UUID = 4) | OrderApp\_4(UUID = 4) | +| `${pcf.activeAppName}` | OrderApp(UUID = 3) | OrderApp(UUID = 3) | OrderApp\_4(UUID = 4) | +| `${pcf.inActiveAppName}` | OrderApp\_INACTIVE(UUID = 4) | OrderApp\_INACTIVE(UUID = 4) | OrderApp\_3(UUID = 3) | + +#### Failures during the App Setup or App Resize Steps + +If deployment fails during the App Setup or App Resize steps, the following actions are taken: + +![](./static/pcf-built-in-variables-20.png) + +The state of the previous active app (**OrderApp**) was not changed so there is nothing to restore. The new app is deleted. The previous inactive app name is restored. + +For the previous inactive application (**OrderApp\_INACTIVE**, **UUID = 2**), the temp routes and environment variable (**STAGE**) are not restored. To restore these, enable the [Upsize inactive Service Option](create-a-blue-green-pcf-deployment.md#upsize-inactive-service-option) in the Rollback step.
+ +#### Variables Resolution during App Setup or App Resize Failure + + + +| | | | | +| --- | --- | --- | --- | +| **Variables** | **After App Resize step** | **After Swap Rollback step** | **After App Rollback step** | +| `${pcf.oldAppName}` | OrderApp(UUID = 3) | OrderApp(UUID = 3) | OrderApp(UUID = 3) | +| `${pcf.newAppName}` | OrderApp\_INACTIVE(UUID = 4) | OrderApp\_interim(UUID = 4) | NULL | +| `${pcf.activeAppName}` | OrderApp(UUID = 3) | OrderApp(UUID = 3) | OrderApp(UUID = 3) | +| `${pcf.inActiveAppName}` | | | | + +#### Failure during the Swap Route Step + +![](./static/pcf-built-in-variables-21.png) + +If there are failures during the Swap Route step, the following actions are taken: + +* The new app is deleted. +* The previous active app (**UUID = 3**) is renamed back to its original name, **OrderApp**. +* The prod routes are attached and temp routes are removed from the previous app. The environment variable **ACTIVE** is set on it. +* The inactive application (**UUID = 2**) is renamed to **OrderApp\_INACTIVE**. + +For the previous inactive application (**OrderApp\_INACTIVE**, **UUID = 2**), the temp routes and environment variable (**STAGE**) are not restored. To restore these, enable the [Upsize inactive Service Option](create-a-blue-green-pcf-deployment.md#upsize-inactive-service-option) in the Rollback step.
+ +#### Variables Resolution during the Swap Route Step Failure + + + +| | | | | +| --- | --- | --- | --- | +| **Variables** | **After Swap Route step** | **After Swap Rollback step** | **After App Rollback step** | +| `${pcf.oldAppName}` | OrderApp\_3(UUID = 3) | OrderApp(UUID = 3) | OrderApp(UUID = 3) | +| `${pcf.newAppName}` | OrderApp\_4(UUID = 4) | OrderApp\_interim(UUID = 4) | NULL | +| `${pcf.activeAppName}` | OrderApp\_4(UUID = 4) | OrderApp(UUID = 3) | OrderApp(UUID = 3) | +| `${pcf.inActiveAppName}` | OrderApp\_3(UUID = 3) | OrderApp\_INACTIVE(UUID = 2) | OrderApp\_INACTIVE(UUID = 2) | + +### Notes + +* During rollback, the new app is always deleted in every scenario. +* The variables `${pcf.activeAppName}` and `${pcf.inActiveAppName}` are available only in Blue Green deployments after the App Setup step. +* If deployment fails during the Swap Routes step, the variables `${pcf.activeAppName}` and `${pcf.inActiveAppName}` might be in an inconsistent state. + + These variables hold the right values only after the Swap Routes rollback and hence must be used after the Swap Routes rollback step. +* If the [Upsize inactive Service Option](create-a-blue-green-pcf-deployment.md#upsize-inactive-service-option) is not enabled, then the environment variable **HARNESS\_STATUS\_IDENTIFIER = STAGE** will not be set on the inactive app during rollback. + + In the next deployment, the `${pcf.inActiveAppName}` variable may not resolve to the correct name, as **STAGE** is used to identify the inactive app. +* During rollback, the new app name is given the `__interim` suffix and the app is later deleted. Consequently, `${pcf.newAppName}` will be updated and resolve accordingly.
+ + diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/pcf-tutorial-overview.md b/docs/first-gen/continuous-delivery/pcf-deployments/pcf-tutorial-overview.md new file mode 100644 index 00000000000..ea56c6a096d --- /dev/null +++ b/docs/first-gen/continuous-delivery/pcf-deployments/pcf-tutorial-overview.md @@ -0,0 +1,27 @@ +--- +title: Tanzu Application Service Deployments +description: Harness provides support for the Pivotal Cloud Foundry (PCF) app development and deployment platform for public and private clouds. +sidebar_position: 10 +helpdocs_topic_id: 6m7w43yw4u +helpdocs_category_id: emle05cclq +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for the latest version of Harness Tanzu Application Service (TAS, formerly PCF) support. This version is behind a feature flag. To migrate to this version, contact [Harness Support](mailto:support@harness.io). + +Harness provides support for the TAS app development and deployment platform for public and private clouds. These topics describe how to deploy your applications to TAS using Harness, including route mapping.
+ +* [Connect to Your Target Tanzu Account](connect-to-your-target-pcf-account.md) +* [Add Container Images for Tanzu Deployments](add-container-images-for-pcf-deployments.md) +* [Adding and Editing Inline Tanzu Manifest Files](adding-and-editing-inline-pcf-manifest-files.md) +* [Upload Local and Remote Tanzu Resource Files](upload-local-and-remote-pcf-resource-files.md) +* [Using Harness Config Variables in Tanzu Manifests](using-harness-config-variables-in-pcf-manifests.md) +* [Define Your Tanzu Target Infrastructure](define-your-pcf-target-infrastructure.md) +* [Override Tanzu Manifests and Config Variables and Files](override-pcf-manifests-and-config-variables-and-files.md) +* [Create a Basic Tanzu Deployment](create-a-basic-pcf-deployment.md) +* [Create a Canary Tanzu Deployment](create-a-canary-pcf-deployment.md) +* [Create a Blue/Green Tanzu Deployment](create-a-blue-green-pcf-deployment.md) +* [Run CF CLI Commands and Scripts in a Workflow](run-cf-cli-commands-and-scripts-in-a-workflow.md) +* [Tanzu Built-in Variables](pcf-built-in-variables.md) +* [Use CLI Plugins in Harness Tanzu Deployments](use-cli-plugins-in-harness-pcf-deployments.md) +* [Use the App Autoscaler Service](use-the-app-autoscaler-service.md) + diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/preprocess-artifacts-to-match-supported-types.md b/docs/first-gen/continuous-delivery/pcf-deployments/preprocess-artifacts-to-match-supported-types.md new file mode 100644 index 00000000000..c73f30282bc --- /dev/null +++ b/docs/first-gen/continuous-delivery/pcf-deployments/preprocess-artifacts-to-match-supported-types.md @@ -0,0 +1,130 @@ +--- +title: Preprocess Tanzu Artifacts to Match Supported Types +description: Currently, this feature is behind the Feature Flag CF_CUSTOM_EXTRACTION. Contact Harness Support to enable the feature.. 
Harness supports the most common Tanzu Application Services (formerly PCF) art… +sidebar_position: 180 +helpdocs_topic_id: xpeb2raihj +helpdocs_category_id: emle05cclq +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the Feature Flag `CF_CUSTOM_EXTRACTION`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Harness supports the most common Tanzu Application Service (formerly PCF) artifact package types. + +If your artifact doesn't match the supported types, you can run a script to preprocess the artifact (unzip, untar, etc.). Preprocessing occurs when setting up the app during deployment. + +### Before You Begin + +* [Add Container Images for Tanzu Deployments](add-container-images-for-pcf-deployments.md) +* [Create a Basic Tanzu Deployment](create-a-basic-pcf-deployment.md) +* [Pivotal Cloud Foundry Quickstart](https://docs.harness.io/article/hy819vmsux-pivotal-cloud-foundry-quickstart) + +### Limitations + +* Preprocessing is for non-Docker artifacts. + +### Review: Supported Artifact Types + +Harness supports the following TAS artifact servers/types. + +Metadata-only Sources: + +* Jenkins +* AWS (Amazon Web Services) S3 +* Artifactory (includes Docker) +* Nexus +* Bamboo + +File-based Sources: + +* Docker Registry +* Artifactory (Tgz files) +* Nexus (Tgz files) +* Google Cloud Storage (GCS) +* AWS Elastic Container Registry (ECR) +* SMB +* SFTP +* Custom Repository + +Harness supports any single file (non-folder) deployed using `cf push`. TAR, WAR, JAR, ZIP, and Docker are supported. + +### Step 1: Select Preprocessing in App Setup Step + +This step assumes you've created a TAS Workflow before. If not, see: + +* [Create a Basic Tanzu Deployment](create-a-basic-pcf-deployment.md) +* [Create a Canary Tanzu Deployment](create-a-canary-pcf-deployment.md) +* [Create a Blue/Green Tanzu Deployment](create-a-blue-green-pcf-deployment.md) + +Open the **App Setup** step in the Workflow.
+ +Select **Pre-Process Package for TAS Deployment**. The script option appears. + +### Step 2: Add Preprocessing Script + +Enter a script to perform preprocessing on the downloaded artifact before deployment. + +Reference the downloaded artifact using the expression `${downloadedArtifact}`. + +For example: + + +``` +tar -xvf ${downloadedArtifact} +``` +Copy the processed artifact to a directory using `${processedArtifactDir}`. + +For example: + + +``` +cp myfolder/helloworld.war ${processedArtifactDir} +``` +The entire preprocessing script might look like this: + + +``` +tar -xvf ${downloadedArtifact} + +cp myfolder/helloworld.war ${processedArtifactDir} +``` +Let's look at another example: + +Let's say you have a zip archive that contains a folder named **myArtifact**. Inside the myArtifact folder is an artifact named **myArtifact.war**. + +You unzip the archive: + + +``` +unzip ${downloadedArtifact} +``` +Once you unzip the archive, the result is **myArtifact/myArtifact.war**. + +Next, you need to copy myArtifact.war to a directory. The directory is identified using the expression `${processedArtifactDir}`. + +For example: + + +``` +cp myArtifact/myArtifact.war ${processedArtifactDir} +``` +### Step 3: View the Preprocessing in the Deployment Logs + +When you deploy the Workflow, the preprocessing is shown in the logs for the **App Setup** step.
+ +Here's an example: + + +``` +# Executing artifact processing script: +._package2 +package2/ +package2/._index.js +package2/index.js +package2/._package.json +package2/package.json +SUCCESS +``` +### See Also + +* [Add Container Images for Tanzu Deployments](add-container-images-for-pcf-deployments.md) + diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/run-cf-cli-commands-and-scripts-in-a-workflow.md b/docs/first-gen/continuous-delivery/pcf-deployments/run-cf-cli-commands-and-scripts-in-a-workflow.md new file mode 100644 index 00000000000..7f4c3b4bf06 --- /dev/null +++ b/docs/first-gen/continuous-delivery/pcf-deployments/run-cf-cli-commands-and-scripts-in-a-workflow.md @@ -0,0 +1,124 @@ +--- +title: Run CF CLI Commands and Scripts in a Workflow +description: You can use the CF Command to run any CF CLI command or script at any point in your Harness PCF Workflows. +sidebar_position: 140 +helpdocs_topic_id: xai5fs8gko +helpdocs_category_id: emle05cclq +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness supports Cloud Foundry CLI versions 6 and 7. Support for version 7 is behind the Feature Flag `CF_CLI7`. You can read about it in [Add Container Images for Tanzu Deployments](add-container-images-for-pcf-deployments.md). + +You can use the CF Command to run any [CF CLI command](https://docs.cloudfoundry.org/cf-cli/cf-help.html) or script at any point in your Harness Tanzu (formerly PCF) Workflows. + +### Before You Begin + +* See [Connect to Your Target Tanzu Account](connect-to-your-target-pcf-account.md). +* See [Define Your Tanzu Target Infrastructure](define-your-pcf-target-infrastructure.md). + +### Step: Run the CF CLI Command + +Ensure that the Harness Delegate(s) used for your deployment have the correct version of the CF CLI installed. See [Install Cloud Foundry CLI Versions on the Harness Delegate](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md). + +Here's an overview of the Configure CF Command settings.
+ +![](./static/run-cf-cli-commands-and-scripts-in-a-workflow-34.png) + +You can also use the CF Command to create the service for the [App Autoscaler plugin](https://docs.pivotal.io/application-service/2-7/appsman-services/autoscaler/using-autoscaler-cli.html), as described in [Use CLI Plugins in Harness Tanzu Deployments](use-cli-plugins-in-harness-pcf-deployments.md). The CF Command script does not require `cf login`. Harness performs logins using the credentials in the TAS Cloud Provider set up in the Infrastructure Definition for the Workflow executing the CF Command. + +The CF Command has the settings described below. + +### Option: Scripts and Variables + +You can enter any CF CLI commands and scripts, but be sure to add the CF Command at a point in your Workflow where the targets of the script are available. If you add the CF Command before the App Setup step, the new app is not available. + +There are two built-in Harness TAS variables you can use to reference the manifest and vars files used by the script: + +* If you are using inline Manifest files, the variable `${service.manifest}` refers to the folder containing your manifest files. + +[![](./static/run-cf-cli-commands-and-scripts-in-a-workflow-35.png)](./static/run-cf-cli-commands-and-scripts-in-a-workflow-35.png) + +* If you are using remote Manifest files via a Git repo, `${service.manifest}` refers to the folder containing your manifest files and `${service.manifest.repoRoot}` refers to the root folder of the repo. + +[![](./static/run-cf-cli-commands-and-scripts-in-a-workflow-37.png)](./static/run-cf-cli-commands-and-scripts-in-a-workflow-37.png) + +You can use the variables together to point to different locations.
For example, here the manifest.yml file is in one folder and the vars.yml is located using a path from the repo root folder: + + +``` +cf create-service-push --service-manifest ${service.manifest}/manifest.yml --no-push --vars-file ${service.manifest.repoRoot}/QA/vars.yml +cf plugins | grep autoscaling-apps +``` +These variables appear when you type `${service` in **Script**: + +![](./static/run-cf-cli-commands-and-scripts-in-a-workflow-39.png) + +Environment Service Overrides, such as [Tanzu Manifest Overrides](override-pcf-manifests-and-config-variables-and-files.md), do not apply to or override the `${service.manifest}` variable. The `${service.manifest}` variable only looks in the Harness Service. + +You can also use variables in your script to templatize paths to manifest files. For example, if your Workflow Environment were templatized (see [Template a Workflow](https://docs.harness.io/article/m220i1tnia-workflow-configuration#template_a_workflow)), you can use the Environment variable `${env.name}` in your path, like this: + +`${service.manifest.repoRoot}/${env.name}/vars.yml` + +When the Workflow is deployed, the user will have to provide a name for the Environment to use. The same name will be substituted for `${env.name}` in the path in your script. + +This substitution can be useful if you have folder names in your remote Git repo that match Harness Environment names, such as QA and PROD. The same Workflow and CF Command can be used for both Environments and use manifest files in separate repo folders. + +Harness checks out manifest files from your repo at deployment runtime. If any files in the repository contain non-UTF-8 characters (binary, zip, etc.), the checkout fails. For example, sometimes operating system files such as .DS\_Store get added to a repo. + +### Option: Delegate Selectors + +Ensure that the Harness Delegate(s) used for your deployment have the correct version of the CF CLI installed.
See [Install Cloud Foundry CLI Versions on the Harness Delegate](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md). + +In order for the commands in your script to execute, the Harness Delegate(s) running the script must have the CF CLI and any related CF plugins installed. + +If not all of your Harness Delegates have the CF CLI and CF plugins installed, you can target the specific Delegates using [Delegate Selectors](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#delegate_selectors). + +In **Run only on delegates having the following selectors**, add the Delegate Selector(s) for the Delegates with the CF CLI and CF plugins installed. + +If you do not add any Delegate Selectors to the CF Command, when the CF Command runs, Harness will only use Delegates that have the CF CLI installed. + +However, if you are running plugins in the CF Command, Harness cannot know which Delegates have the plugins installed. + +This is why the **Run only on delegates having the following selectors** setting ensures that the CF Command only executes on Delegates that can run the plugins mentioned in the CF Command script. + +### Option: Timeout + +Set the timeout period for your CF Command. If the command execution hangs beyond the timeout, Harness will fail the step. + +### Option: Create and Add CF Command Templates + +You can also create CF Command templates in your Application or Account templates. Other users can then use these templates to quickly add the CF Commands to their Workflows. + +Here are the steps for creating and adding a CF Command template: + +1. Decide on whether you want to use Application or Account templates. + + Application templates can be used by any Workflow in the Application, and Account templates can be used by any Workflow in any Application. For an overview of templates, see [Use Templates](../concepts-cd/deployment-types/use-templates.md). + For this example, we will create an Application template. + +2.
In your Application, in **Application Resources**, click **Template Library**. +3. Click **Add Template**, and then click **CF Command**. The CF Command settings appear:![](./static/run-cf-cli-commands-and-scripts-in-a-workflow-40.png) +4. Configure the template the same way you would configure the CF Command in a Workflow. +5. In **Variables**, enter the variable names and default values you want to use in the template. When a user adds or links this template to a Workflow, the user provides the values for the variables. + + You can also type the variables in the **Script** field and Harness will prompt you to create them: + + ![](./static/run-cf-cli-commands-and-scripts-in-a-workflow-41.png) + + Here's an example showing variables used in the command script: + + ![](./static/run-cf-cli-commands-and-scripts-in-a-workflow-42.png) + +6. When you are done, click **Submit**. +7. Navigate to a Workflow for a TAS Service. +8. In the **Setup** section on the Workflow steps, click **Add Command**. +9. Select **Application Templates**, and then select the command from the Application Template Library. +10. Locate your command and click **Link** or **Copy**. A copied template does not provide version control like a linked template. In this example, we'll click **Link**. +The CF Command template settings appear.![](./static/run-cf-cli-commands-and-scripts-in-a-workflow-43.png) +11. Recommended: Add a Delegate Selector in **Run only on delegates having the following selectors**, as described in [Delegate Selectors](#delegate_selectors). +12. Provide values for the variables, if any. +13. Click **Submit**. + +The CF Command template is added to your Workflow. + +![](./static/run-cf-cli-commands-and-scripts-in-a-workflow-44.png) + +If you open the template, you can edit the **Run only on delegates having the following selectors** and **Variables** settings. 
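+ +As a quick recap, a CF Command script can combine the variables covered in this topic. The following is a sketch only: the `autoscaling-apps` plugin is just the example plugin used earlier, and `${env.name}` assumes a templatized Workflow Environment whose name matches a repo folder such as QA or PROD: + +``` +# ${env.name} resolves to the Environment name supplied at deployment runtime +cf create-service-push --service-manifest ${service.manifest}/manifest.yml --no-push --vars-file ${service.manifest.repoRoot}/${env.name}/vars.yml +cf plugins | grep autoscaling-apps +``` + +Remember that this script only runs successfully on Delegates that have the CF CLI and the referenced plugin installed, so pair it with a Delegate Selector as described above. 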
+ +### Next Steps + +* [Use CLI Plugins in Harness Tanzu Deployments](use-cli-plugins-in-harness-pcf-deployments.md) + diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/add-container-images-for-pcf-deployments-00.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/add-container-images-for-pcf-deployments-00.png new file mode 100644 index 00000000000..8be197a6384 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/add-container-images-for-pcf-deployments-00.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/add-container-images-for-pcf-deployments-01.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/add-container-images-for-pcf-deployments-01.png new file mode 100644 index 00000000000..b82afb7f67b Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/add-container-images-for-pcf-deployments-01.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/adding-and-editing-inline-pcf-manifest-files-71.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/adding-and-editing-inline-pcf-manifest-files-71.png new file mode 100644 index 00000000000..90eff240382 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/adding-and-editing-inline-pcf-manifest-files-71.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-basic-pcf-deployment-24.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-basic-pcf-deployment-24.png new file mode 100644 index 00000000000..9a7c198f0af Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-basic-pcf-deployment-24.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-basic-pcf-deployment-25.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-basic-pcf-deployment-25.png new file 
mode 100644 index 00000000000..962ec644ff0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-basic-pcf-deployment-25.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-basic-pcf-deployment-26.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-basic-pcf-deployment-26.png new file mode 100644 index 00000000000..851a4772b61 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-basic-pcf-deployment-26.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-basic-pcf-deployment-27.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-basic-pcf-deployment-27.png new file mode 100644 index 00000000000..c493bebeb22 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-basic-pcf-deployment-27.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-basic-pcf-deployment-28.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-basic-pcf-deployment-28.png new file mode 100644 index 00000000000..0a04fd457ae Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-basic-pcf-deployment-28.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-basic-pcf-deployment-29.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-basic-pcf-deployment-29.png new file mode 100644 index 00000000000..a0c6d6c3fdf Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-basic-pcf-deployment-29.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-52.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-52.png new file mode 100644 index 00000000000..1d8259e128b Binary 
files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-52.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-53.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-53.png new file mode 100644 index 00000000000..91d1047b8c8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-53.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-54.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-54.png new file mode 100644 index 00000000000..3a19881a754 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-54.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-55.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-55.png new file mode 100644 index 00000000000..14006e65df2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-55.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-56.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-56.png new file mode 100644 index 00000000000..5a518abec7c Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-56.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-57.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-57.png new file mode 100644 index 
00000000000..e7ee924adc3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-57.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-58.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-58.png new file mode 100644 index 00000000000..2bc2da2421f Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-58.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-59.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-59.png new file mode 100644 index 00000000000..397adebb625 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-59.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-60.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-60.png new file mode 100644 index 00000000000..d6e381550c2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-60.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-61.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-61.png new file mode 100644 index 00000000000..b98b8bcfcd1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-blue-green-pcf-deployment-61.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-83.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-83.png new file mode 
100644 index 00000000000..28c441736bc Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-83.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-84.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-84.png new file mode 100644 index 00000000000..a2d79f57c4b Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-84.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-85.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-85.png new file mode 100644 index 00000000000..62d884e620c Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-85.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-86.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-86.png new file mode 100644 index 00000000000..f7bb14fc4ed Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-86.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-87.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-87.png new file mode 100644 index 00000000000..de869a64f9e Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-87.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-88.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-88.png new file mode 100644 index 00000000000..ce4db5d7c4c Binary 
files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-88.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-89.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-89.png new file mode 100644 index 00000000000..f45d70f206b Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-89.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-90.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-90.png new file mode 100644 index 00000000000..425978afa15 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-90.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-91.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-91.png new file mode 100644 index 00000000000..1c2d41ec341 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-91.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-92.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-92.png new file mode 100644 index 00000000000..58cd43840f8 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-92.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-93.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-93.png new file mode 100644 index 00000000000..28c441736bc Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/pcf-deployments/static/create-a-canary-pcf-deployment-93.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/define-your-pcf-target-infrastructure-23.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/define-your-pcf-target-infrastructure-23.png new file mode 100644 index 00000000000..904ba13a6a4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/define-your-pcf-target-infrastructure-23.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/install-cloud-foundry-cli-6-and-7-on-harness-delegates-22.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/install-cloud-foundry-cli-6-and-7-on-harness-delegates-22.png new file mode 100644 index 00000000000..cfcd05bc1bb Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/install-cloud-foundry-cli-6-and-7-on-harness-delegates-22.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-72.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-72.png new file mode 100644 index 00000000000..b48cf9ecc96 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-72.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-73.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-73.png new file mode 100644 index 00000000000..b74b49e313d Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-73.png differ diff --git 
a/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-74.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-74.png new file mode 100644 index 00000000000..8ab23b0c152 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-74.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-75.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-75.png new file mode 100644 index 00000000000..21a7317a7f6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-75.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-76.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-76.png new file mode 100644 index 00000000000..08d3a200581 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-76.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-77.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-77.png new file mode 100644 index 00000000000..f24c7971759 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-77.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-78.png 
b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-78.png new file mode 100644 index 00000000000..b52ef465668 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-78.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-79.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-79.png new file mode 100644 index 00000000000..7b76835b8b1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-79.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-80.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-80.png new file mode 100644 index 00000000000..73e5217d18c Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-80.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-81.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-81.png new file mode 100644 index 00000000000..5d75d5f43af Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-81.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-82.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-82.png new file mode 100644 index 
00000000000..3559c41923d Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/override-pcf-manifests-and-config-variables-and-files-82.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-13.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-13.png new file mode 100644 index 00000000000..1e655aa4d79 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-13.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-14.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-14.png new file mode 100644 index 00000000000..27fb29d7f18 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-14.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-15.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-15.png new file mode 100644 index 00000000000..d2853e09afe Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-15.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-16.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-16.png new file mode 100644 index 00000000000..4bba3d15ea7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-16.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-17.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-17.png new file mode 100644 index 00000000000..4935833ade5 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-17.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-18.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-18.png new file mode 100644 index 00000000000..13cb225d0bc Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-18.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-19.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-19.png new file mode 100644 index 00000000000..0de2aef148e Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-19.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-20.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-20.png new file mode 100644 index 00000000000..2a92bca8e70 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-20.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-21.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-21.png new file mode 100644 index 00000000000..7599de6bb87 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/pcf-built-in-variables-21.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-34.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-34.png new file mode 100644 index 00000000000..a25a478e4ac Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-34.png 
differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-35.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-35.png new file mode 100644 index 00000000000..c503960d679 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-35.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-36.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-36.png new file mode 100644 index 00000000000..c503960d679 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-36.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-37.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-37.png new file mode 100644 index 00000000000..d90bce50582 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-37.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-38.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-38.png new file mode 100644 index 00000000000..d90bce50582 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-38.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-39.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-39.png new file 
mode 100644 index 00000000000..5e249ed0897 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-39.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-40.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-40.png new file mode 100644 index 00000000000..825bb66350c Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-40.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-41.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-41.png new file mode 100644 index 00000000000..7639897a280 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-41.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-42.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-42.png new file mode 100644 index 00000000000..25e0acf32ec Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-42.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-43.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-43.png new file mode 100644 index 00000000000..ce2b7cc8607 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-43.png differ diff --git 
a/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-44.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-44.png new file mode 100644 index 00000000000..ba34641b1d2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/run-cf-cli-commands-and-scripts-in-a-workflow-44.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/tanzu-app-naming-with-harness-32.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/tanzu-app-naming-with-harness-32.png new file mode 100644 index 00000000000..1104a9bf957 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/tanzu-app-naming-with-harness-32.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/tanzu-app-naming-with-harness-33.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/tanzu-app-naming-with-harness-33.png new file mode 100644 index 00000000000..d05f8ca0041 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/tanzu-app-naming-with-harness-33.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/test-tanzu-article-30.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/test-tanzu-article-30.png new file mode 100644 index 00000000000..03b7cac3f89 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/test-tanzu-article-30.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/test-tanzu-article-31.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/test-tanzu-article-31.png new file mode 100644 index 00000000000..5f7415d8018 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/test-tanzu-article-31.png differ diff --git 
a/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-62.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-62.png new file mode 100644 index 00000000000..3ed782f7a6c Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-62.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-63.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-63.png new file mode 100644 index 00000000000..40e2d105769 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-63.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-64.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-64.png new file mode 100644 index 00000000000..28fc63cd4d6 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-64.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-65.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-65.png new file mode 100644 index 00000000000..e63e49479ee Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-65.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-66.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-66.png new file mode 100644 index 00000000000..e075a488040 Binary files 
/dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-66.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-67.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-67.png new file mode 100644 index 00000000000..edf5b93bc2b Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-67.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-68.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-68.png new file mode 100644 index 00000000000..a071c02265f Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-68.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-69.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-69.png new file mode 100644 index 00000000000..d6e53943f15 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-69.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-70.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-70.png new file mode 100644 index 00000000000..d6e53943f15 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/upload-local-and-remote-pcf-resource-files-70.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-02.png 
b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-02.png new file mode 100644 index 00000000000..96c4082300a Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-02.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-03.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-03.png new file mode 100644 index 00000000000..8cb0b25e926 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-03.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-04.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-04.png new file mode 100644 index 00000000000..3795f7b97e7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-04.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-05.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-05.png new file mode 100644 index 00000000000..6c0d95528d3 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-05.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-06.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-06.png new file mode 100644 index 00000000000..b3d2604bcb5 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-06.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-07.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-07.png new file mode 100644 index 00000000000..e1751487510 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-07.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-08.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-08.png new file mode 100644 index 00000000000..8d8a4491364 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-08.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-09.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-09.png new file mode 100644 index 00000000000..c503960d679 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-09.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-10.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-10.png new file mode 100644 index 00000000000..d90bce50582 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-10.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-11.png 
b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-11.png new file mode 100644 index 00000000000..5e249ed0897 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-11.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-12.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-12.png new file mode 100644 index 00000000000..0ac5b49863a Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-cli-plugins-in-harness-pcf-deployments-12.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/use-the-app-autoscaler-service-47.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-the-app-autoscaler-service-47.png new file mode 100644 index 00000000000..8c2bc48c473 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-the-app-autoscaler-service-47.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/use-the-app-autoscaler-service-48.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-the-app-autoscaler-service-48.png new file mode 100644 index 00000000000..cd5b5ca3b3d Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-the-app-autoscaler-service-48.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/use-the-app-autoscaler-service-49.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-the-app-autoscaler-service-49.png new file mode 100644 index 00000000000..e84eda21699 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-the-app-autoscaler-service-49.png differ diff --git 
a/docs/first-gen/continuous-delivery/pcf-deployments/static/use-the-app-autoscaler-service-50.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-the-app-autoscaler-service-50.png new file mode 100644 index 00000000000..55123954cec Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-the-app-autoscaler-service-50.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/use-the-app-autoscaler-service-51.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-the-app-autoscaler-service-51.png new file mode 100644 index 00000000000..5daa9327925 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/use-the-app-autoscaler-service-51.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/using-harness-config-variables-in-pcf-manifests-45.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/using-harness-config-variables-in-pcf-manifests-45.png new file mode 100644 index 00000000000..01ff3858a98 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/using-harness-config-variables-in-pcf-manifests-45.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/static/using-harness-config-variables-in-pcf-manifests-46.png b/docs/first-gen/continuous-delivery/pcf-deployments/static/using-harness-config-variables-in-pcf-manifests-46.png new file mode 100644 index 00000000000..9cfcb7eb595 Binary files /dev/null and b/docs/first-gen/continuous-delivery/pcf-deployments/static/using-harness-config-variables-in-pcf-manifests-46.png differ diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/tanzu-app-naming-with-harness.md b/docs/first-gen/continuous-delivery/pcf-deployments/tanzu-app-naming-with-harness.md new file mode 100644 index 00000000000..9545d4ed139 --- /dev/null +++ b/docs/first-gen/continuous-delivery/pcf-deployments/tanzu-app-naming-with-harness.md @@ 
-0,0 +1,173 @@ +--- +title: Tanzu App Naming +description: Learn about TAS app naming and versioning with Harness. +sidebar_position: 70 +helpdocs_topic_id: hzyz7oc5k9 +helpdocs_category_id: emle05cclq +helpdocs_is_private: false +helpdocs_is_published: true +--- + +TAS app naming and versioning offer a number of options. This topic covers these options and provides examples. + + +### Default App Naming + + +By default, the TAS apps deployed by Harness are named using a concatenation of the names of the Harness Application, Service, and Environment, separated by two underscores. + + +For example, if you have the following names, your TAS app is named `MyApp__MyServ__MyEnv`: + + +* Application: `MyApp` +* Service: `MyServ` +* Environment: `MyEnv` + + +When you create a TAS Service in Harness, a default vars.yml file is created with the app name following the default naming convention, and the default manifest.yml file uses the name. + + + +![](./static/tanzu-app-naming-with-harness-32.png) +You can change the `APP_NAME` in vars.yml and specify a new name for your app, and Harness will use that for all deployments. For example: + + +``` +APP_NAME: Order__App +``` + +#### Default Versioning + + +By default, Harness appends a numeric suffix to the end of every app release, and increments the number with each subsequent release. + + +For example, let's say your app name is Order\_\_App. Here's how the first two deployments are named: + + +1. First deployment: **Order\_\_App\_\_1**. +2. Second deployment: **Order\_\_App\_\_2**. + + +The previous app version keeps its name: **Order\_\_App\_\_1**. + + +### App Naming with Version Management + + +Currently, this feature is behind the Feature Flag `CF_APP_NON_VERSIONING_INACTIVE_ROLLBACK`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. +In a Harness Workflow using a Harness TAS Service, there is an **App Setup** step that uses the manifest.yml in your Harness TAS Service to set up your app.
+ + +In App Setup, there is a **Version Management** section that allows you to select how you want your deployed apps versioned. There are two options: + + +##### Incremental Versioning + + +The app name is given an incremental suffix with each deployment. + + +For example, the first time you deploy the app **Order\_\_App**, it is **Order\_\_App\_\_0**. The next time this app is deployed, the suffix increases to **\_\_1**. + + +##### App Name with Version History + + +When you deploy an app, it maintains its name without adding any suffix to the name to indicate its release version. A suffix is added to the previous version of the app. + + +The first time you deploy the app, Harness creates the app with the name you entered in **App Setup**. + + +When deploying new versions of that app, Harness uses the same name for the app and renames the previous version of the app with the suffix `__INACTIVE`. + + +For example, if the app is named **Order\_\_App**, the first deployment will use the name **Order\_\_App**. When the next version of the app is deployed, the new version is named **Order\_\_App** and the previous version is now named **Order\_\_App\_\_INACTIVE**. + + +During rollback, the new app version is deleted and the previous app is renamed without the suffix. + + +##### Blue Green Deployments and Naming + + +App naming in Blue Green deployments follows the same patterns as the other deployment strategies, but it is useful to review the Blue Green scenario as it involves multiple stages. + + +Blue Green deployments involve staging a new app version in a stage environment and then swapping the stage and production routes between the stage and production environments. + + +When a new app is deployed, the following occurs: + + +1. During staging by the App Setup step, the new app is given the suffix `__INACTIVE`. + 1. If a failure occurs, the new app version is simply deleted and the current production app is untouched. +2.
During the Swap Route step, as the routes are swapped and the new app becomes the production app, the new app name's suffix is changed in one of the following ways: + 1. Default versioning, as explained above in [Default App Naming](#default_app_naming). + 2. Or, if **Version Management** is enabled, the suffix is changed according to whether you are using **Incremental Versioning** or **App Name with Version History**. +3. The previous app version is then renamed with the suffix `__INACTIVE`. + 1. If there is a failure, the new app version is deleted and the current production app has the `__INACTIVE` suffix removed and its previous name is restored. + + +### App Naming with Special Characters Support + + +Currently, this feature is behind the Feature Flag `CF_ALLOW_SPECIAL_CHARACTERS`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. +By default, a TAS app name must consist of alphanumeric characters. Harness supports special characters in TAS app naming. + + +By default, if your app is named **Order-App**, Harness converts the dash to underscores (**Order\_\_App**), but with special characters support enabled, the dash is preserved (**Order-App**). + + +App name versioning follows the default Harness TAS app naming. + + +For example, if the app using special characters is named **Order-App**, the first deployment of the app is named **Order-App\_\_1**. + + +#### Version Management with Special Characters Support + + +If both the Version Management (described above) and Special Characters features are enabled, the first app version is **Order-App**. The second app version is also **Order-App**, and the first app version is renamed **Order-App\_\_INACTIVE**. + + +#### App Naming Changes After Enabling Special Characters Support + + +After enabling the Special Characters feature, the next deployment will be considered the first deployment and the app name will start with a new family name.
+ + +For example, if your app name was **test-gsup-1.0.45**, and you had deployed it 3 times, then all the apps belong to this family name: + + +1. test\_\_gsup\_\_1\_\_0\_\_45 +2. test\_\_gsup\_\_1\_\_0\_\_45\_\_1 +3. test\_\_gsup\_\_1\_\_0\_\_45\_\_2 +4. test\_\_gsup\_\_1\_\_0\_\_45\_\_3 + + +After you enable the Special Characters feature and use a special character like a dash, the apps created in the next deployment will belong to a new family name: **test-gsup-1.0.45**. + + +Consequently, there will be no link between previous deployments and this new deployment. The new deployment is now considered the first deployment. + + +This illustration provides examples of before and after enabling the Special Characters feature: + + +![](./static/tanzu-app-naming-with-harness-33.png) + +If something goes wrong after enabling the Special Characters feature, rollback will not work because the app family names have changed. Even Blue Green will not work, as the previous app belongs to a different family. + + +Similarly, if you attempt a [Post-Prod Rollback](https://docs.harness.io/article/2f36rsbrve-post-deployment-rollback) after a successful deployment, it will not work. + + +Just remember that after enabling the Special Characters feature, the next deployment is considered a new first deployment. + + + +If you later disable the `CF_ALLOW_SPECIAL_CHARACTERS` Feature Flag, Harness moves back to the previous family: **test\_\_gsup\_\_1\_\_0\_\_45**. + diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/test-tanzu-article.md b/docs/first-gen/continuous-delivery/pcf-deployments/test-tanzu-article.md new file mode 100644 index 00000000000..553ad411d6b --- /dev/null +++ b/docs/first-gen/continuous-delivery/pcf-deployments/test-tanzu-article.md @@ -0,0 +1,142 @@ +--- +title: Add Packaged Tanzu Manifests +description: Currently, this feature is behind the Feature Flag CUSTOM_MANIFEST. Contact Harness Support to enable the feature.
You can use manifests in a packaged archive with the Custom Remote Manifests settin… +sidebar_position: 90 +helpdocs_topic_id: a5eyx0cecd +helpdocs_category_id: emle05cclq +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the Feature Flag `CUSTOM_MANIFEST`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. You can use manifests in a packaged archive with the **Custom Remote Manifests** setting in a Harness Tanzu Service. You add a script to the Service that pulls the package and extracts its contents. Next, you supply the path to the manifest, template, etc. + +A Harness TAS Service and Environment are used together when you set up a Harness Workflow to deploy your TAS app. You can configure your Environment to override the manifest path of the Harness TAS Services that deploy to the Environment. + +This topic describes how to pull packaged archives, reference their manifest, and override the references at the Environment level. + + +### Before You Begin + +* [Tanzu Application Service (TAS) Quickstart](https://docs.harness.io/article/hy819vmsux-pivotal-cloud-foundry-quickstart) +* [Create a Basic Tanzu Deployment](create-a-basic-pcf-deployment.md) +* [Connect to Your Target Tanzu Account](connect-to-your-target-pcf-account.md). +* [Override Tanzu Manifests and Config Variables and Files](override-pcf-manifests-and-config-variables-and-files.md). + +### Supported Platforms and Technologies + +See [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). + +### Option: Application Manifest API Developed in Custom Manifest + +The Application Manifest API developed for Custom Manifests uses the store type `CUSTOM`. Define a fetching script for the packaged manifest, and the path to the manifest directory or template, along with Delegate Selectors.
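The flow described above can be sketched end to end as a fetch script plus a manifest path. The archive name, directory layout, and `APP_NAME` variable below are hypothetical stand-ins for your real package:

```shell
# Sketch of the Custom Remote Manifests flow (all names hypothetical).
# Build a stand-in for the remote package your fetch script would download.
mkdir -p repo/tas
cat <<EOT > repo/tas/manifest.yml
applications:
- name: ((APP_NAME))
EOT
tar -czf manifest.tar.gz -C repo tas

# The fetch script's extract step, run in the Delegate's working directory.
tar -xzf manifest.tar.gz

# The path you then give Harness in the "Path to Manifests" setting.
MANIFEST_PATH="tas/manifest.yml"
cat "$MANIFEST_PATH"
```

In a real Service, the `tar -czf` step is replaced by your `curl`/`wget` download, and only the extract step and the path survive into the script.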
+ +### Step 1: Create a Harness Tanzu Service + +In your Harness Application, create your TAS Service, as described in [Add Container Images for Tanzu Deployments](add-container-images-for-pcf-deployments.md). + +### Step 2: Use Custom Remote Manifests + +In your Harness Tanzu Service, in **Manifests**, click more options (︙) and select **Custom Remote Manifests**. + +In **Manifest Format**, select **TAS Manifests**. + +Now you can add your script to pull the package containing your manifest. + +### Step 3: Add Script for Remote Package + +In **Script**, enter the script that pulls the package containing your manifest and extracts the manifest from the package. For example: + + +``` +curl -sSf -u "${secrets.getValue("username")}:${secrets.getValue("password")}" -O 'https://mycompany.jfrog.io/module/example/manifest.zip' + +unzip manifest.zip +``` +You can use Harness Service, Workflow, secrets, and built-in variables in the script. + +The script is run on the Harness Delegate selected for deployment. + +Harness creates a temporary working directory on the Delegate host for the downloaded package. You can reference the working directory in your script with `WORKING_DIRECTORY=$(pwd)` or `cd $(pwd)/some/other/directory`. + +Once you have deployed the Workflow, you can check which Delegate was selected in the **Delegates Evaluated** setting for the Workflow step that used the manifest. + +### Step 4: Add Path to Manifests + +Once you have a script that extracts your package, you provide Harness with the path to the manifest in the expanded folders and files. + +You can use Harness Service, Workflow, and built-in variables in the path. + +### Step 5: Add Delegate Selector + +In **Delegate Selector**, select the Selector for the Delegate(s) you want to use. You add Selectors to Delegates to make sure that they're used to execute the command. 
For more information, see [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors). + +Harness will use Delegates matching the Selectors you add. + +If you use one Selector, Harness will use any Delegate that has that Selector. + +If you select two Selectors, a Delegate must have both Selectors to be selected. That Delegate might also have other Selectors, but it must have the two you selected. + +You can use expressions for Harness built-in variables or Account Default variables in **Delegate Selectors**. When the variable expression is resolved at deployment runtime, it must match an existing Delegate Selector. For example, if you have a Delegate Selector **prod** and the Workflow is using an Environment also named **prod**, the Delegate Selector can be `${env.name}`. This is very useful when you match Delegate Selectors to Application component names such as Environments, Services, etc. It's also a way to template the Delegate Selector setting. + +![](./static/test-tanzu-article-30.png) + +### Option: Override Manifests in Environment + +You can override Harness Service settings at the Harness Environment level using Service Configuration Overrides. See [PCF Manifest Override](override-pcf-manifests-and-config-variables-and-files.md#option-2-pcf-manifests-override). + +Here's an example overriding Service file locations with new file locations: + +From the **Environment**, go to the **Service Configuration Overrides** section, and click **Add Configuration Overrides**. The **Service Configuration Override** settings appear. + +In **Service**, select the Tanzu Application Service. + +In **Override Type** options, select **TAS Manifests**. + +In **Store Type** options, select **Custom**. + +Custom Manifest Override Configuration gives two options: **Inherited script from Service** or **Define a new script**.
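For illustration, a "Define a new script" override can follow the same fetch-and-extract pattern as the Service-level script, pointed at Environment-specific files. The file names and variable values below are hypothetical:

```shell
# Hypothetical Environment-level override script: produce a QA-specific vars
# file, whose path you then supply as the override's manifest path.
mkdir -p env-override
cat <<EOT > env-override/vars.yml
APP_NAME: Order__App__QA
INSTANCES: 2
EOT
cat env-override/vars.yml
```

The Service's own script and paths stay untouched; only deployments into this Environment pick up the override.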
+ +Once you have a script that extracts your package, you provide Harness with the path to the manifest in the expanded folders and files. Here's an example: + +![](./static/test-tanzu-article-31.png) + +Click **Submit**. + +### Option: Use a Harness Artifact Source + +Although the **Custom Remote Manifests** option is designed for when the manifest and deployment artifact are in the same package, you can use them separately with **Custom Remote Manifests**. + +You must reference the Harness Artifact Source using the [Harness built-in variables](pcf-built-in-variables.md). + +### Option: Use Local Script + +You can also use a local script to create your manifest in **Custom Remote Manifests**. + +You can use Harness Service, Workflow, secret, and [built-in](pcf-built-in-variables.md) variables in the script. + +Here's an example script: + +``` +cat <<EOT >> vars.yml +APP_NAME: my_app +APP_MEMORY: 750M +INSTANCES: 1 +EOT + + +cat <<EOT >> manifest.yml +applications: +- name: ((APP_NAME)) + memory: ((APP_MEMORY)) + instances: ((INSTANCES)) + random-route: true +EOT +``` + +### Notes + +* If the artifact you are deploying with your manifest is public (DockerHub) and does not require credentials, you can use the standard public image reference, such as `image: harness/todolist-sample:11`.
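As a sketch of that note, a minimal manifest referencing the public image could be written with a heredoc; the app name `todolist` is hypothetical, and the image tag is the one from the note above:

```shell
# Minimal TAS manifest referencing a public DockerHub image (no credentials
# needed); only the image reference is taken from the note above.
cat <<EOT > manifest.yml
applications:
- name: todolist
  docker:
    image: harness/todolist-sample:11
EOT
grep 'image:' manifest.yml
```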
+ +### See Also + +* [Preprocess Tanzu Artifacts to Match Supported Types](preprocess-artifacts-to-match-supported-types.md) +* [Tanzu Built-in Variables](pcf-built-in-variables.md) + diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/upload-local-and-remote-pcf-resource-files.md b/docs/first-gen/continuous-delivery/pcf-deployments/upload-local-and-remote-pcf-resource-files.md new file mode 100644 index 00000000000..07908080f97 --- /dev/null +++ b/docs/first-gen/continuous-delivery/pcf-deployments/upload-local-and-remote-pcf-resource-files.md @@ -0,0 +1,88 @@ +--- +title: Upload Local and Remote Tanzu Resource Files +description: You can upload local and remote Manifest and Variable files. +sidebar_position: 50 +helpdocs_topic_id: i5jxqsbkt7 +helpdocs_category_id: emle05cclq +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can upload local and remote Tanzu Application Service (TAS, formerly PCF) Manifest and Variable files to your Harness TAS Service. + +### Before You Begin + +* See [Connect to Your Target Tanzu Account](connect-to-your-target-pcf-account.md). +* See [Add Container Images for Tanzu Deployments](add-container-images-for-pcf-deployments.md). +* See [Adding and Editing Inline Tanzu Manifest Files](adding-and-editing-inline-pcf-manifest-files.md). + +### Step 1: Upload Local Manifest and Variable Files + +You can upload `manifest.yml` and `vars.yml` files from your local drive into your Harness Service. + +Harness allows one manifest file and one or more variable files. At runtime, Harness will evaluate the files to identify which is the manifest file and which are the variable files. + +From the options menu, click **Upload Local Manifest Files**. + +![](./static/upload-local-and-remote-pcf-resource-files-62.png) + +The **Upload Local Manifest Files** dialog appears.
+ +![](./static/upload-local-and-remote-pcf-resource-files-63.png) + +Choose the local folder or files using your file explorer or drag and drop the files into the dialog. The selected files are listed. + +![](./static/upload-local-and-remote-pcf-resource-files-64.png) + +Click **Submit** to add the files. + +If you are uploading a manifest.yml or vars.yml file into the same folder with the default manifest.yml and vars.yml files, you will see the following warning. + +![](./static/upload-local-and-remote-pcf-resource-files-65.png) + +Simply click **Overwrite All** and then **Submit** to replace the default files. + +### Step 2: Upload Remote Manifest and Variable Files + +Harness checks out manifest files from your repo at deployment runtime. If any files in the repository contain non-UTF-8 characters (binary, zip, etc.), the checkout fails. For example, sometimes operating system files such as .DS\_Store get added to a repo. Once you have set up a Harness [Source Repo Provider](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers) using your remote Git repo, you can use TAS files from the remote repo in your TAS Service **Manifests** section. + +To use remote files, do the following: + +1. Create a [Harness Source Repo Provider](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers) that connects to the branch where your remote files are located. + When you set up the Source Repo Provider, you specify the repo URL and branch name: + ![](./static/upload-local-and-remote-pcf-resource-files-66.png) +2. In the TAS Service, in **Manifests**, click the options button, and then click **Link Remote Manifests**. + ![](./static/upload-local-and-remote-pcf-resource-files-67.png) + The **Remote Manifests** dialog appears. + ![](./static/upload-local-and-remote-pcf-resource-files-68.png) +3. In **Source Repository**, select the Source Repo Provider you set up that points to the remote Git repo containing your manifest files. +4.
In **Commit ID**, select **Latest from Branch** or **Specific Commit ID**. + +**Which one should I pick?** Make your selection based on what you want to happen at runtime. If you want to always use the latest files from a repo branch at runtime, select **Latest from Branch**. If you want to use the files as they were at a specific commit, select **Specific Commit ID**. Any changes from additional commits will not be used at runtime. To use changes from additional commits, you will have to update the commit ID. + +#### File/Folder Path Current Functionality + +The file path you enter in **File/Folder Path** will be used as the manifest for this Service. + +**Avoid listing a folder only:** If you enter a folder path only, Harness does not know which file in the folder to use, so Harness will list the files using Git and then select the last file listed. If the last file is invalid, Harness will select the second to last file, and so on. The order in which files are returned from Git is not constant, and so selecting the correct file is not always possible. Instead, provide the full path to the file. + +1. If you selected **Latest from Branch**, specify the **Branch** and **File/Folder** path to the remote manifest file (typically, a vars.yml file). ![](./static/upload-local-and-remote-pcf-resource-files-69.png) +2. If you selected **Specific Commit ID**, specify the **Commit ID** and **File/Folder** path to the remote manifest files. + +Click **Submit**. Your remote Git repo is added as the source for **Manifests**. + +#### File/Folder Path New Functionality + +Currently, this feature is behind the Feature Flag `SINGLE_MANIFEST_SUPPORT`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. The file path you enter in **File/Folder Path** will be used as the manifest for this Service. + +Harness requires the path to the manifest or vars file you are using.
+ +![](./static/upload-local-and-remote-pcf-resource-files-70.png) + +If you enter a folder path and **no file**, deployment will fail. + +### Next Steps + +* [Using Harness Config Variables in Tanzu Manifests](using-harness-config-variables-in-pcf-manifests.md) +* [Define Your Tanzu Target Infrastructure](define-your-pcf-target-infrastructure.md) + diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/use-cli-plugins-in-harness-pcf-deployments.md b/docs/first-gen/continuous-delivery/pcf-deployments/use-cli-plugins-in-harness-pcf-deployments.md new file mode 100644 index 00000000000..86751077680 --- /dev/null +++ b/docs/first-gen/continuous-delivery/pcf-deployments/use-cli-plugins-in-harness-pcf-deployments.md @@ -0,0 +1,185 @@ +--- +title: Use CLI Plugins in Harness Tanzu Deployments +description: Run Cloud Foundry plugins as a step in a Harness PCF Workflow. +sidebar_position: 160 +helpdocs_topic_id: ttu8ty2glb +helpdocs_category_id: emle05cclq +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness supports Cloud Foundry CLI versions 6 and 7. Support for version 7 is behind the Feature Flag `CF_CLI7`. You can read about it in [Add Container Images for Tanzu Deployments](add-container-images-for-pcf-deployments.md). Harness supports all Cloud Foundry plugins from the [CF plugin marketplace](https://plugins.cloudfoundry.org/), [Tanzu Network](https://network.pivotal.io/), and in-house plugins, and enables you to run and use them in Harness TAS Workflow steps. + +Harness also includes first-class support for the [App Autoscaler plugin](https://docs.pivotal.io/application-service/2-7/appsman-services/autoscaler/using-autoscaler-cli.html), enabling you to create it as part of your Harness Workflow, bind it to your app, and enable or disable it as needed. Here is the App Autoscaler option as part of the **App Setup** command.
+ +![](./static/use-cli-plugins-in-harness-pcf-deployments-02.png) + + +### Before You Begin + +* See [Connect to Your Target Tanzu Account](connect-to-your-target-pcf-account.md). +* See [Define Your Tanzu Target Infrastructure](define-your-pcf-target-infrastructure.md). + +### Visual Summary + +Harness runs CF plugins using the Workflow command **CF Command**. CF Command automatically sets the `CF_PLUGIN_HOME` directory, logs in (using the Harness TAS Cloud Provider), and runs the plugin using the script in CF Command. + +![](./static/use-cli-plugins-in-harness-pcf-deployments-03.png) + +### Review: Requirements for Running Plugins + +To run plugins using CF Command, you must have the following: + +* CF CLI Installed on Harness Delegates +* Plugins Installed on Harness Delegates +* Create-Service-Push Installed on Delegate + +#### CF CLI Installed on Harness Delegates + +Ensure that the Harness Delegate(s) used for your deployment have the correct version of the CF CLI installed. See [Install Cloud Foundry CLI Versions on the Harness Delegate](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md). The CF CLI must be installed on the Harness Delegates used in deployment. This is a requirement for any TAS deployment with Harness. + +The CF CLI can be installed on the Delegate(s) using a Delegate Profile script. + +For more information, see [Install Cloud Foundry CLI Versions on the Harness Delegate](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md). + +In Harness, click **View Logs** to see the successful installation: + +![](./static/use-cli-plugins-in-harness-pcf-deployments-04.png) + +A single Delegate Profile can be used on all Delegates to ensure that any Delegates used have the CF CLI installed. + +#### Plugins Installed on Harness Delegates + +The plugin you want to run must be installed on the Harness Delegates that CF Command will use.
You can tag a Harness Delegate and then select the Tag in the CF Command, ensuring that the CF Command runs your plugin on a Harness Delegate with the plugin installed.
+
+![](./static/use-cli-plugins-in-harness-pcf-deployments-05.png)
+
+You can install the plugin on the Harness Delegate using the same Delegate Profile you use to install the CF CLI on the Delegate(s).
+
+Here is an example installing the CF CLI and the [Create-Service-Push](https://plugins.cloudfoundry.org/#Create-Service-Push) plugin:
+
+
+```
+sudo wget -O /etc/yum.repos.d/cloudfoundry-cli.repo https://packages.cloudfoundry.org/fedora/cloudfoundry-cli.repo
+sudo yum -y install cf-cli
+
+echo y | cf install-plugin -r CF-Community "Create-Service-Push"
+```
+If you are using the Kubernetes, ECS, or Helm Delegates, you can select the Profile when you download a new Delegate script. Typically, you will be using a Shell Script Delegate for TAS deployments. In that case, simply apply the Profile to each new Delegate:
+
+![](./static/use-cli-plugins-in-harness-pcf-deployments-06.png)
+
+#### Create-Service-Push Installed on Delegate
+
+The Create-Service-Push plugin must be installed on the Delegate(s) to use the App Autoscaler plugin. See [Install Cloud Foundry CLI Versions on the Harness Delegate](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md).The [Create-Service-Push](https://plugins.cloudfoundry.org/#Create-Service-Push) plugin reads a services manifest.yml file, creates the services listed in it, and pushes an application. Create-Service-Push extends `cf push`.
+
+If you want to create TAS services from the inline or remote manifest files set up in your Harness Service **Manifests** section, you need to have Create-Service-Push installed on the Delegate. 
+
+For example, you can see the `cf create-service-push` command used to run a plugin defined in the manifest here:
+
+![](./static/use-cli-plugins-in-harness-pcf-deployments-07.png)
+
+You can install the Create-Service-Push plugin in a Delegate Profile by itself:
+
+
+```
+echo y | cf install-plugin -r CF-Community "Create-Service-Push"
+```
+or in the Delegate Profile that installs the CF CLI:
+
+
+```
+sudo wget -O /etc/yum.repos.d/cloudfoundry-cli.repo https://packages.cloudfoundry.org/fedora/cloudfoundry-cli.repo
+sudo yum -y install cf-cli
+
+echo y | cf install-plugin -r CF-Community "Create-Service-Push"
+```
+Click **View Logs** to see the successful installation:
+
+
+```
+Searching CF-Community for plugin Create-Service-Push...
+Plugin Create-Service-Push 1.3.1 found in: CF-Community
+Attention: Plugins are binaries written by potentially untrusted authors.
+Install and use plugins at your own risk.
+Do you want to install the plugin Create-Service-Push? [yN]: y
+Starting download of plugin binary from repository CF-Community...
+
+ 9.82 MiB / 9.82 MiB [==============================================] 100.00% 0s
+Installing plugin Create-Service-Push...
+OK
+
+Plugin Create-Service-Push 1.3.1 successfully installed.
+```
+### Step 1: Running CF Plugins Using the CF Command
+
+Once the CF plugin has been installed on a Harness Delegate, you can simply add the CF Command to your Workflow to run the plugin.
+
+1. In your TAS Workflow, decide where you want to execute a CF CLI command. If you want to run a plugin, you will likely want to add the CF Command to the **Setup** section.
+2. Click **Add Command**. **Add Command** appears.
+3. Click **CF Command**. **CF Command** appears.
+
+![](./static/use-cli-plugins-in-harness-pcf-deployments-08.png)
+
+As the commented-out text states, the CF Command will perform the login steps using the CF CLI. 
So you do not need to include login credentials in CF Command. CF Command will use the credentials set up in your Harness TAS Cloud Provider. + +### Step 2: Script + +Enter your CF CLI commands. + +There are two built-in Harness TAS variables you can use to reference the manifest and vars files used by the plugin you want to run: + +* If you are using inline Manifest files, the variable `${service.manifest}` refers to the folder containing your manifest files. + +![](./static/use-cli-plugins-in-harness-pcf-deployments-09.png) + +* If you are using remote Manifest files via a Git repo, `${service.manifest}` refers to the folder containing your manifest files and `${service.manifest.repoRoot}` refers to the root folder of the repo. + +![](./static/use-cli-plugins-in-harness-pcf-deployments-10.png) + +You can use the variables together to point to different locations. For example, here the manifest.yml file is one folder and the vars.yml is located using a path from the repo root folder: + + +``` +cf create-service-push --service-manifest ${service.manifest}/manifest.yml --no-push --vars-file ${service.manifest.repoRoot}/QA/vars.yml +cf plugins | grep autoscaling-apps +``` +These variables appear when you type `${service` in **Script**: + +![](./static/use-cli-plugins-in-harness-pcf-deployments-11.png) + +Environment Service Overrides, such as [Tanzu Manifest Overrides](override-pcf-manifests-and-config-variables-and-files.md), do not apply to or override the `${service.manifest}` variable. The `${service.manifest}` variable only looks in the Harness Service.You can also use variables in your script to templatize paths to manifest files. 
For example, if your Workflow Environment is templatized (see [Template a Workflow](https://docs.harness.io/article/m220i1tnia-workflow-configuration#template_a_workflow)), you can use the Environment variable `${env.name}` in your path, like this:
+
+`${service.manifest.repoRoot}/${env.name}/vars.yml`
+
+When the Workflow is deployed, the user will have to provide a name for the Environment to use. The same name will be substituted for `${env.name}` in the path in your script.
+
+This substitution can be useful if you have folder names in your remote Git repo that match Harness Environment names, such as QA and PROD. The same Workflow and CF Command can be used for both Environments and use manifest files in separate repo folders.
+
+### Step 3: Delegate Selectors
+
+Ensure that the Harness Delegate(s) used for your deployment have the correct version of the CF CLI installed. See [Install Cloud Foundry CLI Versions on the Harness Delegate](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md).In order for the plugin in your script to execute, the Harness Delegate(s) running the script must have the plugin installed.
+
+Unless all of your Harness Delegates have the plugin installed, use [Delegate Selectors](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#delegate_selectors) to refer to the specific Delegates that have it. Add the Delegate Selectors for the Delegates with the plugins installed.
+
+![](./static/use-cli-plugins-in-harness-pcf-deployments-12.png)
+
+If you do not add any Delegate Selectors to the CF Command, when the CF Command runs, Harness will only use Delegates that have the CF CLI installed.
+
+However, if you are running plugins in CF Command, Harness cannot know which Delegates have the plugins installed.
+
+This is why the Delegate Selectors setting ensures that CF Command only executes on Delegates that can run the plugins mentioned in the CF Command script. 
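+As an extra safeguard, the CF Command script itself can fail fast when the plugin is missing from whichever Delegate picked up the step. A minimal sketch (the `has_plugin` helper and the sample listing are illustrative; on a Delegate you would capture the real output of `cf plugins`):
+
+```
+# Succeeds if the named plugin appears in a `cf plugins` listing.
+has_plugin() {
+  printf '%s\n' "$1" | grep -q "$2"
+}
+
+# On a Delegate this would be: plugins="$(cf plugins)"
+plugins="Create-Service-Push   1.3.1   CreateServicePush"
+
+if has_plugin "$plugins" "Create-Service-Push"; then
+  echo "plugin available"
+else
+  echo "plugin missing on this Delegate" >&2
+fi
+```
+
+Exiting non-zero instead of echoing a warning would fail the CF Command step, surfacing the misconfiguration before the plugin is invoked.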
+ +### Review: Plugin Directory + +Ensure that the Harness Delegate(s) used for your deployment have the correct version of the CF CLI installed. See [Install Cloud Foundry CLI Versions on the Harness Delegate](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md).By default, the CF CLI stores plugins in `$CF_HOME/.cf/plugins`, which defaults to `$HOME/.cf/plugins`. For most cases, this location does not need to change. + +To change the root directory of this path from `$CF_HOME`, set the `CF_PLUGIN_HOME` environment variable. + +For example: + +`export CF_PLUGIN_HOME=''` + +You can set the `CF_PLUGIN_HOME` environment variable before you install the Delegate. This will ensure that the Delegate Profile that you use to install the CF CLI uses the new `CF_PLUGIN_HOME`. + +For more information, see [Changing the Plugin Directory](https://docs.cloudfoundry.org/cf-cli/use-cli-plugins.html#plugin-directory) from Pivotal. + diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/use-the-app-autoscaler-service.md b/docs/first-gen/continuous-delivery/pcf-deployments/use-the-app-autoscaler-service.md new file mode 100644 index 00000000000..e34499e82f7 --- /dev/null +++ b/docs/first-gen/continuous-delivery/pcf-deployments/use-the-app-autoscaler-service.md @@ -0,0 +1,200 @@ +--- +title: Use the App Autoscaler Service +description: The App Autoscaler plugin has first-class support in Harness, enabling you to ensure app performance and control the cost of running apps. +sidebar_position: 170 +helpdocs_topic_id: 4xh8u7l86h +helpdocs_category_id: emle05cclq +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness supports Cloud Foundry CLI version 6 and 7. Support for version 7 is behind a Feature Flag. 
You can read about it in [Add Container Images for Tanzu Deployments](add-container-images-for-pcf-deployments.md).Harness supports [App Autoscaler Plugin release 2.0.233](https://network.pivotal.io/products/pcf-app-autoscaler#/releases/491414).The [App Autoscaler plugin](https://docs.pivotal.io/application-service/2-7/appsman-services/autoscaler/using-autoscaler-cli.html) has first-class support in Harness, enabling you to ensure app performance and control the cost of running apps.
+
+
+### Before You Begin
+
+* See [Connect to Your Target Tanzu Account](connect-to-your-target-pcf-account.md).
+* See [Define Your Tanzu Target Infrastructure](define-your-pcf-target-infrastructure.md).
+
+### Visual Summary
+
+The following diagram illustrates how you can define, bind, create, and use App Autoscaler with the TAS apps deployed by Harness.
+
+![](./static/use-the-app-autoscaler-service-47.png)
+
+### Review: Requirements for App Autoscaler
+
+Ensure that the Harness Delegate(s) used for your deployment have the correct version of the CF CLI installed. See [Install Cloud Foundry CLI Versions on the Harness Delegate](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md).If you are using the App Autoscaler plugin, then autoscaling is applied after the final phase of deployment.
+
+Once all phases are completed and the number of old version instances has reached the desired number, the final number of instances is determined by the Autoscaler's configuration.
+
+For example, if a deployment results in 4 new instances, but Autoscaler is set to min 8 and max 10, Harness will set the desired number of instances to the minimum value. So the total number of new instances is 8.
+
+To use App Autoscaler, you must meet the following requirements:
+
+The App Autoscaler plugin must be installed on the Delegate(s) that will execute TAS deployments. The steps in this section assume that the App Autoscaler plugin is installed on your Delegates. 
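+The instance-count behavior described above amounts to clamping the deployed count into the Autoscaler's limits. A quick illustration (not Harness code; the numbers mirror the min 8 / max 10 example):
+
+```
+# Clamp a deployed instance count into the Autoscaler's [min, max] range.
+clamp() {
+  local count=$1 min=$2 max=$3
+  if [ "$count" -lt "$min" ]; then echo "$min"
+  elif [ "$count" -gt "$max" ]; then echo "$max"
+  else echo "$count"; fi
+}
+
+clamp 4 8 10   # deployment produced 4 instances; Autoscaler min 8, max 10 -> prints 8
+```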
+
+Because of limitations in the CF CLI, the best way to install the App Autoscaler plugin on the Delegate is the following:
+
+1. Download the release from the [Pivotal App Autoscaler CLI Plugin](https://network.pivotal.io/products/pcf-app-autoscaler) page.
+2. Store the release in a repo in your network that can be accessed by the Harness Delegate. This will allow you to use cURL to copy the release to the Delegate host(s).
+3. Install the App Autoscaler plugin on your Delegates using a Delegate Profile.
+This profile will run each time the Delegate is restarted. The CF CLI cannot simply reinstall the plugin over an existing installation, so your Delegate Profile must uninstall the plugin and then reinstall it:
+
+
+```
+cf uninstall-plugin "App Autoscaler"
+curl /path/to/release-in-repo
+cf install-plugin local-path/binary
+```
+Click **View Logs** on the Delegate Profile to see the successful installation.
+
+You can also choose to install the plugin manually on each Delegate using the steps provided by Pivotal in [Using the App Autoscaler CLI](https://docs.pivotal.io/application-service/2-7/appsman-services/autoscaler/using-autoscaler-cli.html).
+
+### Step 1: Define the App Autoscaler Service in Your Manifest File
+
+1. In your Harness TAS Service, select your manifest.yml file and click **Edit**.![](./static/use-the-app-autoscaler-service-48.png)
+2. Add a `create-services` block that describes the App Autoscaler service you want to create:
+
+	 create-services:
+	 - name: "myautoscaler"
+	   broker: "app-autoscaler"
+	   plan: "standard"
+	 ...
+
+
+Now that the App Autoscaler service is defined in your manifest.yml, you can bind it to the app.
+
+### Step 2: Bind the App Autoscaler Service to Your App
+
+1. In `applications`, add a `services` block with the name of the App Autoscaler service: 
+ services: + - myautoscaler + + create-services: + - name: "myautoscaler" + broker: "app-autoscaler" + plan: "standard" + ... + ``` + For more information on services, see the [Pivotal documentation](https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html#services-block). + + Now that the App Autoscaler service is defined and bound to your app, you can add an App Autoscaler manifest file that configures the settings for the service. + +2. Click **Save** to save your manifest.yml file. + +### Step 3: Add Your App Autoscaler Manifest File + +App Autoscaler manifest files are described in [Configure with a Manifest](https://docs.pivotal.io/application-service/2-7/appsman-services/autoscaler/using-autoscaler-cli.html#configure-autoscaling) from Pivotal. + +1. In your Harness Service, click the options button on **Files**, and then click **Add File**.![](./static/use-the-app-autoscaler-service-49.png) +2. In **Add File**, enter the name of the App Autoscaler manifest file, such as **autoscaler.yml**. The file is added to the **Manifests** section. +You can use any name for the App Autoscaler manifest file. Harness will determine which file to use for the service. +3. Select the App Autoscaler manifest file and click **Edit**. +4. Configure your rules, add instance limits, and set scheduled limit changes for the service. Here is an example: + + ``` + --- + instance_limits: + min: 1 + max: 2 + rules: + - rule_type: "http_latency" + rule_sub_type: "avg_99th" + threshold: + min: 10 + max: 20 + scheduled_limit_changes: + - recurrence: 10 + executes_at: "2032-01-01T00:00:00Z" + instance_limits: + min: 10 + max: 20 + ``` +5. Click **Save**. The App Autoscaler manifest file is complete.![](./static/use-the-app-autoscaler-service-50.png) + +### Step 4: Create the App Autoscaler Service Using CF Command + +Ensure that the Harness Delegate(s) used for your deployment have the correct version of the CF CLI installed. 
See [Install Cloud Foundry CLI Versions on the Harness Delegate](install-cloud-foundry-cli-6-and-7-on-harness-delegates.md).To create the App Autoscaler Service, you add a CF Command to your Workflow that uses the [Create-Service-Push](https://plugins.cloudfoundry.org/#Create-Service-Push) plugin. + +If the App Autoscaler service is already created and running in your target space, you can skip this step. When Harness deploys the app that is already bound to the App Autoscaler service, it will use the existing App Autoscaler service. + +If the App Autoscaler service is already running, you do not need to remove the CF Command from your Workflow. Harness will check to see if the App Autoscaler service exists before creating it. + +To create the App Autoscaler service using CF Command, do the following: + +1. Open the Harness Workflow that will deploy the Harness Service containing manifests for the app and its bound App Autoscaler service. +2. In your Workflow, click **Add Command** anywhere before the **App Setup** command. Typically, this will be in **Setup**. +3. In **Add Command**, click **CF Command**. **CF Command** appears. +4. In **Script**, enter the following command: + + +``` +cf create-service-push --service-manifest ${service.manifest}/manifest.yml --vars-file ${service.manifest}/vars.yml --no-push +``` +In this example, the app manifest file is named manifest.yml. You can replace manifest.yml with the name of your app manifest. You do not need to specify the name of the App Autoscaler manifest. + +For inline manifest files, you can use the `${service.manifest}` variable. For remote manifest files stored in Git, you can use both the `${service.manifest}` and `${service.manifest.repoRoot}` variables. For more information, see [Scripts](run-cf-cli-commands-and-scripts-in-a-workflow.md#option-scripts-and-variables). + +The `--no-push` parameter creates the services but does not push the app. The app will be pushed by the **App Setup** command. 
If you omit `--no-push` then App Setup will create a new revision of the app. For this reason, it is a best practice to always include `--no-push`.
+5. Ensure that you enter the Delegate Selectors for the Delegates that have the CF CLI and Create-Service-Push plugin installed. For more information, see [Delegate Selectors](use-cli-plugins-in-harness-pcf-deployments.md#step-3-delegate-selectors).
+6. Click **Submit**. The CF Command is added.
+
+### Step 5: Enable App Autoscaler in the App Setup Step
+
+The **App Setup** command in a Workflow includes a **Use App Autoscaler Plugin** setting so you can enable and disable autoscaling as needed.
+
+![](./static/use-the-app-autoscaler-service-51.png)
+
+Select **Use App Autoscaler Plugin** to enable the App Autoscaler service bound to your app.
+
+When you deploy your Workflow, the App Autoscaler service is created using the command `create-service app-autoscaler standard myautoscaler`:
+
+
+```
+# ------------------------------------------
+
+# CF_HOME value: /Users/johndoe/pcf/harness-delegate/repository/pcfartifacts/RR4DmcgKSzylo4enFUK5gw
+# CF_PLUGIN_HOME value: /Users/johndoe
+# Performing "login"
+API endpoint: api.run.pivotal.io
+
+Authenticating...
+OK
+
+Targeted org Harness
+
+Targeted space AD00001863
+
+API endpoint: https://api.run.pivotal.io (API version: 3.77.0)
+User: john.doe@harness.io
+Org: Harness
+Space: AD00001863
+# Login Successful
+# Executing pcf plugin script :
+Found Service Manifest File: /Users/johndoe/pcf/harness-delegate/repository/pcfartifacts/RR4DmcgKSzylo4enFUK5gw/manifests/deploy.yml
+myautoscaler - will now be created as a brokered service.
+Now Running CLI Command: create-service app-autoscaler standard myautoscaler
+Creating service instance myautoscaler in org Harness / space AD00001863 as adwait.bhandare@harness.io...
+OK
+|
+--no-push applied: Your application will not be pushed to CF ... 
+# Exit value =0
+
+ ---------- PCF Run Plugin Command completed successfully
+```
+### Option: Use an Existing App Autoscaler Service
+
+You might already have the App Autoscaler service running in your target space, and so some of the steps described earlier can be skipped.
+
+If you already have the App Autoscaler service running in your target Pivotal space, then you can simply bind the service in the app manifest file using the `services` parameter and enable **Use App Autoscaler Plugin** in **App Setup**. You do not need to set up the following:
+
+* You do not need `create-services` in the manifest.yml file for your app.
+* You do not need a manifest file for the App Autoscaler service.
+* You do not need to use the CF Command and `cf create-service-push` to create the App Autoscaler service.
+
+Including any of these unnecessary components will not cause a problem. Harness automatically checks for an existing App Autoscaler service before creating a new service.
+
diff --git a/docs/first-gen/continuous-delivery/pcf-deployments/using-harness-config-variables-in-pcf-manifests.md b/docs/first-gen/continuous-delivery/pcf-deployments/using-harness-config-variables-in-pcf-manifests.md
new file mode 100644
index 00000000000..f8a32b337a8
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/pcf-deployments/using-harness-config-variables-in-pcf-manifests.md
@@ -0,0 +1,46 @@
+---
+title: Using Harness Config Variables in Tanzu Manifests
+description: Configuration variables and files enable you to specify information in the Service that can be referenced in other parts of the Harness Application.
+sidebar_position: 60
+helpdocs_topic_id: mutc1hz25o
+helpdocs_category_id: emle05cclq
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Service Configuration variables and files enable you to specify information in the Harness Service that can be referenced in other parts of the Harness Application. 
In this topic we'll cover using these Service Configuration variables and files for Tanzu Application Service (TAS, formerly PCF). + + +### Before You Begin + +* See [Connect to Your Target Tanzu Account](connect-to-your-target-pcf-account.md). +* See [Add Container Images for Tanzu Deployments](add-container-images-for-pcf-deployments.md). +* See [Adding and Editing Inline Tanzu Manifest Files](adding-and-editing-inline-pcf-manifest-files.md). +* See [Define Your Tanzu Target Infrastructure](define-your-pcf-target-infrastructure.md). + +### Review: Configuration Variables and Files + +For example, you can specify a variable in the Service once, and then use it in multiple Workflows without having to manage multiple values. + +* **Config Variables** - You can create Service variables to use in your Manifests files, and in Environments and Workflows. Any Service variables are added as environment variables when the app is created in the Pivotal environment (**cf push**). Later, when you want to reference a Service variable, you use the syntax `${serviceVariable.var_name}`. +* **Config Files** - You can upload config files with variables to be used when deploying the Service. Later, when you want to reference a Service config file, you use the syntax `${configFile.getAsString("fileName")}` for unencrypted text files and `${configFile.getAsBase64("fileName")}` for encrypted text files. + +For details on configuration variables and files, see [Add Service Config Variables](https://docs.harness.io/article/q78p7rpx9u-add-service-level-config-variables) and [Add Service Config Files](https://docs.harness.io/article/iwtoq9lrky-add-service-level-configuration-files). + +### Step: Using Config Variables in Manifests + +You can use **Config Variables** in your Service in place of values in manifest.yml and vars.yml. 
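+For example, a vars.yml could take its values from Service Config Variables instead of hardcoding them. A minimal sketch (the variable names here are hypothetical):
+
+```
+# Hypothetical vars.yml: each value resolves from a Harness Service
+# Config Variable at deployment time.
+APP_MEMORY: ${serviceVariable.app_memory}
+INSTANCES: ${serviceVariable.instance_count}
+```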
+ +![](./static/using-harness-config-variables-in-pcf-manifests-45.png) + +You can then overwrite this variable in a Harness Environment's **Service Configuration Overrides**, and the new value is used when the Service and Environment are used for deployment. + +![](./static/using-harness-config-variables-in-pcf-manifests-46.png) + +Overwriting Service variables is described in more detail in TAS Environments. + +### Next Steps + +* [Define Your Tanzu Target Infrastructure](define-your-pcf-target-infrastructure.md) +* [Override Tanzu Manifests and Config Variables and Files](override-pcf-manifests-and-config-variables-and-files.md) + diff --git a/docs/first-gen/continuous-delivery/terraform-category/_category_.json b/docs/first-gen/continuous-delivery/terraform-category/_category_.json new file mode 100644 index 00000000000..e7be075e2a7 --- /dev/null +++ b/docs/first-gen/continuous-delivery/terraform-category/_category_.json @@ -0,0 +1 @@ +{"label": "Terraform", "position": 100, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Terraform"}, "customProps": { "helpdocs_category_id": "gkm7rtubpk"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/terraform-category/add-terraform-scripts.md b/docs/first-gen/continuous-delivery/terraform-category/add-terraform-scripts.md new file mode 100644 index 00000000000..f15672999b6 --- /dev/null +++ b/docs/first-gen/continuous-delivery/terraform-category/add-terraform-scripts.md @@ -0,0 +1,143 @@ +--- +title: Add Terraform Scripts +description: Set up a Harness Infrastructure Provisioner for Terraform. +sidebar_position: 30 +helpdocs_topic_id: ux2enus2ku +helpdocs_category_id: gkm7rtubpk +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes how to set up a Harness Infrastructure Provisioner for Terraform. 
+
+Once the Harness Infrastructure Provisioner is set up, you can use it to do the following:
+
+* Define a deployment target in a Harness Infrastructure Definition, provision the target infrastructure in a Workflow as part of its pre-deployment step, and then deploy to the target infrastructure.
+* Provision any non-target infrastructure.
+
+Harness supports first-class Terraform provisioning for AWS-based infrastructures (SSH, ASG, ECS, Lambda), Google Kubernetes Engine (GKE), Azure Web Apps, and physical data centers via shell scripts.
+
+Harness Terraform Infrastructure Provisioners are only supported in Canary and Multi-Service Workflows. For AMI/ASG and ECS deployments, Terraform Infrastructure Provisioners are also supported in Blue/Green Workflows.
+
+
+### Before You Begin
+
+* Get an overview of how Harness supports Terraform: [Terraform Provisioning with Harness](../concepts-cd/deployment-types/terraform-provisioning-with-harness.md).
+* Ensure you have your Harness account settings prepared for Terraform: [Set Up Your Harness Account for Terraform](terraform-delegates.md).
+
+### Review: Terraform Syntax Support
+
+Harness supports Terraform scripts written in Terraform syntax versions 11 and 12.
+
+#### Terraform Syntax Support for Versions 13-1.0
+
+Currently, this feature is behind the Feature Flag `TERRAFORM_CONFIG_INSPECT_VERSION_SELECTOR`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.When this feature flag is enabled, Harness supports Terraform scripts written in Terraform syntax versions 11, 12, 13, 14, 15, and 1.0.
+
+### Visual Summary
+
+This topic describes step 1 in the Harness Terraform Provisioning implementation process displayed below. 
+
+The graphic shows how the scripts you add in this topic are used to provision the target infrastructure for a deployment:
+
+![](./static/add-terraform-scripts-04.png)
+
+Once you have completed this topic:
+
+* If you are going to provision the deployment target infrastructure, move on to the next step: [Map Dynamically Provisioned Infrastructure using Terraform](mapgcp-kube-terraform-infra.md).
+* If you want to simply provision any non-target infrastructure, see [Terraform Provisioning with Harness](../concepts-cd/deployment-types/terraform-provisioning-with-harness.md).
+
+### Step 1: Add a Terraform Provisioner
+
+To set up a Terraform Infrastructure Provisioner, do the following:
+
+In your Harness Application, click **Infrastructure Provisioners**.
+
+Click **Add Infrastructure Provisioner**, and then click **Terraform**. The **Add Terraform Provisioner** dialog appears.
+
+In **Name**, enter the name for this provisioner. You will use this name to select this provisioner in Harness Infrastructure Definitions and the Workflow steps Terraform Provision, Terraform Apply, and Terraform Destroy.
+
+Click **Next**. The **Script Repository** section appears. This is where you provide the location of your Terraform script in your Git repo.
+
+### Step 2: Select Your Terraform Script Repo
+
+In **Script Repository**, in **Git Repository**, select the [Source Repo Provider](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers) you added for the Git repo where your script is located.
+
+In **Commit**, select **Latest from Branch** or **Specific Commit ID**:
+
+**Specific Commit ID** also supports [Git tags](https://git-scm.com/book/en/v2/Git-Basics-Tagging).
+
+* If you selected **Latest from Branch**, in **Git Repository Branch**, enter the repo branch to use. For example, **master**. For master, you can also use a dot (`.`).
+* If you selected **Specific Commit ID**, in **Commit ID**, enter the Git commit ID or Git tag to use. 
+ +In **Terraform Configuration Root Directory**, enter the folder where the script is located. Here is an example showing the Git repo on GitHub and the **Script Repository** settings: + +![](./static/add-terraform-scripts-05.png) + +When you click **Next**, the **Plan Configuration** section is displayed. + +Before you move onto **Plan Configuration**, let's review the option of using expressions in **Script Repository**. + +### Option: Use Expressions for Script Repository + +You can also use expressions in the **Git Repository Branch** and **Terraform Configuration Root Directory** and have them replaced by Workflow variable values when the Terraform Provisioner is used by the Workflow. For example, a Workflow can have variables for **branch** and **path**: + +![](./static/add-terraform-scripts-06.png) + +In **Script Repository**, you can enter variables as `${workflow.variables.branch}` and `${workflow.variables.path}`: + +![](./static/add-terraform-scripts-07.png) + +When the Workflow is deployed, you are prompted to provide values for the Workflow variables, which are then applied to the Script Repository settings: + +![](./static/add-terraform-scripts-08.png) + +This allows the same Terraform Provisioner to be used by multiple Workflows, where each Workflow can use a different branch and path for the **Script Repository**. + +### Step 3: Select Secret Manager for Terraform Plan + +In **Plan Configuration**, in **Terraform Plan Storage Configuration**, select a Secrets Manager to use for encrypting/decrypting and saving the Terraform plan file. + +See [Add a Secrets Manager](https://docs.harness.io/article/uuer539u3l-add-a-secrets-manager). + +A Terraform plan is a sensitive file that could be misused to alter cloud provider resources if someone has access to it. Harness avoids this issue by never passing the Terraform plan file as plain text. 
+
+Harness only passes the Terraform plan between the Harness Manager and Delegate as an encrypted file using a Harness Secrets Manager.
+
+When the `terraform plan` command is run on the Harness Delegate, the Delegate encrypts the plan and saves it to the Secrets Manager you selected. The encrypted data is passed to the Harness Manager.
+
+When the plan is going to be applied, the Harness Manager passes the encrypted data to the Delegate.
+
+The Delegate decrypts the encrypted plan and applies it using the `terraform apply` command.
+
+### Option 2: Skip Terraform Refresh When Inheriting Terraform Plan
+
+To understand this setting, let's review some of the options available later when you will use this Terraform Infrastructure Provisioner with a [Terraform Provision](terraform-provisioner-step.md) or [Terraform Apply](using-the-terraform-apply-command.md) step in your Workflow.
+
+When you add either of those steps, you can run them as a Terraform plan using their **Set as Terraform Plan** setting.
+
+Next, you have the option of exporting the Terraform plan from one Terraform step (using the **Export Terraform Plan to Apply Step** setting) and inheriting the Terraform plan in the next Terraform step (using the **Inherit following configurations from Terraform Plan** setting).
+
+Essentially, these settings allow you to use your Terraform Provision step as a [Terraform plan dry run](https://www.terraform.io/docs/commands/plan.html) (`terraform plan -out=tfplan`).
+
+During this inheritance, Harness runs a Terraform refresh, then a plan, and finally executes the new plan.
+
+If you do not want Harness to perform a refresh, enable the **Skip Terraform Refresh when inheriting Terraform plan** option in your Terraform Infrastructure Provisioner.
+
+When this setting is enabled, Harness will directly apply the plan without reconciling any state changes that might have occurred outside of Harness between `plan` and `apply`. 
+
+This setting is available because a Terraform refresh is not always an idempotent command. It can have side effects on the state even when no infrastructure was changed. In such cases, `terraform apply tfplan` commands might fail.
+
+### Step 4: Complete the Terraform Provisioner
+
+When you are done, the **Terraform Provisioner** will look something like this:
+
+![](./static/add-terraform-scripts-09.png)
+
+Now you can use this provisioner in both Infrastructure Definitions and Workflows.
+
+### Next Steps
+
+* **Infrastructure Definitions:** use the Terraform Infrastructure Provisioner to define a Harness Infrastructure Definition. You do this by mapping your script outputs to the required Harness Infrastructure Definition settings. Harness supports provisioning for many different platforms. See the following:
+	+ [Map Dynamically Provisioned Infrastructure using Terraform](mapgcp-kube-terraform-infra.md)
+* **Workflows:**
+	+ Once you have created the Infrastructure Definition and added it to a Workflow, you add a Terraform Provisioner step to the Workflow to run your script and provision the infrastructure: [Provision using the Terraform Provisioner Step](terraform-provisioner-step.md).
+	+ You can also use the Terraform Infrastructure Provisioner with the Terraform Apply Workflow step to provision any non-target infrastructure. See [Using the Terraform Apply Command](using-the-terraform-apply-command.md).
+
diff --git a/docs/first-gen/continuous-delivery/terraform-category/mapgcp-kube-terraform-infra.md b/docs/first-gen/continuous-delivery/terraform-category/mapgcp-kube-terraform-infra.md
new file mode 100644
index 00000000000..b308f13cfc0
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/terraform-category/mapgcp-kube-terraform-infra.md
@@ -0,0 +1,253 @@
+---
+title: Map Dynamically Provisioned Infrastructure using Terraform
+description: Use the Terraform Infrastructure Provisioner to create a Harness Infrastructure Definition.
+sidebar_position: 40
+helpdocs_topic_id: a2f2bh35el
+helpdocs_category_id: gkm7rtubpk
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This topic describes how to use a Harness Terraform Infrastructure Provisioner to create a Harness Infrastructure Definition. When you select the **Map Dynamically Provisioned Infrastructure** option in an Infrastructure Definition, you select an Infrastructure Provisioner and then map its outputs to required settings.
+
+![](./static/mapgcp-kube-terraform-infra-38.png)
+
+Once you are done, you add the Infrastructure Definition to a Workflow as its deployment target. Finally, you add a Terraform Provisioner step to that Workflow to provision the infrastructure.
+
+When the Workflow runs, it provisions the infrastructure using the Terraform Provisioner step and then deploys to the provisioned infrastructure using the Infrastructure Definition.
+
+This topic describes how to map Terraform script outputs for all of the supported platforms.
+
+### Before You Begin
+
+* Get an overview of how Harness supports Terraform — [Terraform Provisioning with Harness](../concepts-cd/deployment-types/terraform-provisioning-with-harness.md).
+* Ensure you have your Harness account settings prepared for Terraform — [Set Up Your Harness Account for Terraform](terraform-delegates.md).
+* Create a Harness Terraform Infrastructure Provisioner — [Add Terraform Scripts](add-terraform-scripts.md).
+
+### Visual Summary
+
+This topic describes step 2 in the Harness Terraform Provisioning implementation process:
+
+![](./static/mapgcp-kube-terraform-infra-39.png)
+
+Once you have completed this topic, you can move on to steps 3 through 6 in [Provision using the Terraform Provisioner Step](terraform-provisioner-step.md).
+
+### Limitations
+
+Harness Terraform Infrastructure Provisioners are only supported in Canary and Multi-Service Workflows.
+For AMI/ASG and ECS deployments, Terraform Infrastructure Provisioners are also supported in Blue/Green Workflows.
+
+### Step: Add the Infrastructure Definition
+
+As noted above, ensure you have completed [Set Up Your Harness Account for Terraform](terraform-delegates.md) and [Add Terraform Scripts](add-terraform-scripts.md) before using the Terraform Infrastructure Provisioner to create the Infrastructure Definition.
+
+To use a Terraform Infrastructure Provisioner to create an Infrastructure Definition, do the following:
+
+1. In the same Harness Application where you created the Terraform Infrastructure Provisioner, in an existing Environment, click **Infrastructure Definition**. The **Infrastructure Definition** dialog appears.
+2. In **Name**, enter the name for the Infrastructure Definition. You will use this name to select the Infrastructure Definition when you set up Workflows and Workflow Phases.
+3. In **Cloud Provider Type**, select the type of Cloud Provider to use to connect to the target platform, such as Amazon Web Services, Kubernetes Cluster, etc.
+4. In **Deployment Type**, select the same type of deployment as the Services you plan to deploy to this infrastructure.
+It is the Deployment Type that determines which Services can be scoped in **Scope to specific Services** and in Workflow and Phase setup.
+5. Click **Map Dynamically Provisioned Infrastructure**.
+6. In **Provisioner**, select your Terraform Infrastructure Provisioner.
+7. In the remaining settings, map the required fields to your Terraform script outputs. The required fields are described in the option sections below.
+
+You map the Terraform script outputs using this syntax, where `exact_name` is the name of the output:
+
+```
+${terraform.exact_name}
+```
+
+When you map a Terraform script output to a Harness Infrastructure Definition setting, the variable for the output, `${terraform.exact_name}`, can be used anywhere in the Workflow that uses that Terraform Provisioner.
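+
+For example, a script that provisions a Kubernetes namespace might expose outputs like these. The output names and resource references here are illustrative only — Harness reads whatever output names your script actually defines:
+
+```
+# Illustrative outputs; any names work, as long as the
+# Infrastructure Definition maps them.
+output "namespace" {
+  value = kubernetes_namespace.app.metadata[0].name
+}
+
+output "release_name" {
+  value = "release-${var.env}"
+}
+```
+
+In the Infrastructure Definition, you would then map **Namespace** to `${terraform.namespace}` and **Release Name** to `${terraform.release_name}`.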
+
+### Option 1: Map an Agnostic Kubernetes Cluster
+
+Provisioning Kubernetes is supported with the Kubernetes Cluster Cloud Provider and Google Cloud Platform Cloud Provider, but not the Azure Cloud Provider.
+
+Harness supports platform-agnostic Kubernetes cluster connections using its [Kubernetes Cluster Cloud Provider](https://docs.harness.io/article/l68rujg6mp-add-kubernetes-cluster-cloud-provider).
+
+When you set up an Infrastructure Definition using a Kubernetes Cluster Cloud Provider, you can map your Terraform script outputs to the required Infrastructure Definition settings.
+
+The agnostic Kubernetes deployment type requires mapping for the **Namespace** and **Release Name** settings.
+
+The following example shows the Terraform script outputs used for the mandatory platform-agnostic Kubernetes deployment type fields:
+
+![](./static/mapgcp-kube-terraform-infra-40.png)
+
+For information on Kubernetes deployments, see [Kubernetes How-tos](../kubernetes-deployments/kubernetes-deployments-overview.md).
+
+### Option 2: Map a GCP Kubernetes Infrastructure
+
+The GCP Kubernetes deployment type requires the **Cluster Name** and **Namespace** settings.
+
+Provisioning Kubernetes is supported with the Kubernetes Cluster Cloud Provider and Google Cloud Platform Cloud Provider, but not the Azure Cloud Provider.
+
+The following example shows the Terraform script outputs used for the mandatory Kubernetes deployment type fields:
+
+![](./static/mapgcp-kube-terraform-infra-41.png)
+
+For information on Kubernetes deployments, see [Kubernetes How-tos](../kubernetes-deployments/kubernetes-deployments-overview.md).
+
+#### Cluster Name Format
+
+If the cluster is multi-zonal, ensure the resolved value of the Terraform output mapped to **Cluster Name** uses the format `region/name`.
+
+If the cluster is single-zone, ensure the resolved value of the Terraform output mapped to **Cluster Name** uses the format `zone/name`.
+If you use a `region/name` format for a single-zone cluster, it will result in a 404 error.
+
+See [Types of clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters) from Google Cloud.
+
+### Option 3: Map an AWS AMI Infrastructure
+
+AMI deployments are the only type that supports Terraform and CloudFormation Infrastructure Provisioners in Blue/Green deployments.
+
+The AWS AutoScaling Group deployment type requires the **Region** and **Base Auto Scaling Group** fields. The following example shows the Terraform script outputs used for all of the fields:
+
+![](./static/mapgcp-kube-terraform-infra-42.png)
+
+For detailed information on AMI deployments, see [AMI Basic Deployment](../aws-deployments/ami-deployments/ami-deployment.md). Here is what each of the output values means:
+
+* **Region** - The target AWS region for the AMI deployment.
+* **Base Auto Scaling Group** - An existing Auto Scaling Group that Harness will copy to create a new Auto Scaling Group for deployment by an AMI Workflow. The new Auto Scaling Group deployed by the AMI Workflow will have unique max and min instances and desired count.
+* **Target Groups** - The target group for the load balancer that will support your Auto Scaling Group. The target group is used to route requests to the Auto Scaling Groups you deploy. If you do not select a target group, your deployment will not fail, but there will be no way to reach the Auto Scaling Group.
+* **Classic Load Balancers** - A classic load balancer for the Auto Scaling Group you will deploy.
+* For Blue/Green deployments only:
+	+ **Stage Classic Load Balancers** - A classic load balancer for the stage Auto Scaling Group you will deploy.
+	+ **Stage Target Groups** - The staging target group to use for Blue/Green deployments. The staging target group is used for the initial deployment of the Auto Scaling Group and, once successful, the Auto Scaling Group is registered with the production target group (the **Target Groups** selected above).
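+
+As a sketch, the outputs mapped above might be defined in your script like this. All output names and resource references are illustrative, not required by Harness:
+
+```
+# Illustrative outputs for an AMI/ASG mapping
+output "region" {
+  value = var.aws_region
+}
+
+output "baseAsgName" {
+  value = aws_autoscaling_group.base.name
+}
+
+output "prodTargetGroupArn" {
+  value = aws_lb_target_group.prod.arn
+}
+```
+
+Each output is then referenced in the Infrastructure Definition as `${terraform.region}`, `${terraform.baseAsgName}`, and so on.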
+
+Harness recommends you use Launch Templates instead of Launch Configurations. With Launch Templates, the AMI root volume size parameter is overwritten as specified in the Launch Template. This prevents conflicts between devices on a base Launch Configuration and the AMI Harness creates.
+
+### Option 4: Map an AWS ECS Infrastructure
+
+The ECS deployment type requires the **Region** and **Cluster** fields. The following example shows the Terraform script outputs used for the mandatory ECS deployment type fields:
+
+![](./static/mapgcp-kube-terraform-infra-43.png)
+
+For information on ECS deployments, see [AWS ECS Deployments Overview](../concepts-cd/deployment-types/aws-ecs-deployments-overview.md).
+
+### Option 5: Map an AWS Lambda Infrastructure
+
+The Lambda deployment type requires the **IAM Role** and **Region** fields. The following example shows the Terraform script outputs used for the mandatory and optional Lambda deployment type fields:
+
+![](./static/mapgcp-kube-terraform-infra-44.png)
+
+### Option 6: Map a Secure Shell (SSH) Infrastructure
+
+The Secure Shell (SSH) deployment type requires the **Region** and **Tags** fields. The following example shows the Terraform script outputs used for the mandatory SSH deployment type fields:
+
+![](./static/mapgcp-kube-terraform-infra-45.png)
+
+### Option 7: Map an Azure Web App
+
+Currently, this feature is behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Feature Flags can only be removed for Harness Professional and Essentials editions. Once the feature is released to a general audience, it is available for Trial and Community Editions.
+
+The Azure Web App deployment requires the **Subscription** and **Resource Group** in the Infrastructure Definition.
+
+The Web App name and Deployment Slots are mapped in the Deployment Slot Workflow step.
+
+In the following example, `${terraform.webApp}` is used for both the Web App name and Target Slot.
+
+![](./static/mapgcp-kube-terraform-infra-46.png)
+
+See [Azure Web App Deployments Overview](../azure-deployments/azure-webapp-category/azure-web-app-deployments-overview.md).
+
+Here's an example Terraform script for this type of deployment:
+
+```
+variable "subscription_id" {
+}
+variable "client_id" {
+}
+variable "client_secret" {
+}
+variable "tenant_id" {
+}
+
+# Configure the Azure Provider
+provider "azurerm" {
+  # Whilst version is optional, we /strongly recommend/ using it to pin the version of the Provider being used
+  version         = "=2.4.0"
+  subscription_id = var.subscription_id
+  client_id       = var.client_id
+  client_secret   = var.client_secret
+  tenant_id       = var.tenant_id
+  features {}
+}
+
+resource "azurerm_resource_group" "main" {
+  name     = "my-terraform-resourceGroup-test"
+  location = "West Europe"
+}
+
+resource "azurerm_app_service_plan" "main" {
+  name                = "AppServicePlan-Terraform-test"
+  location            = azurerm_resource_group.main.location
+  resource_group_name = azurerm_resource_group.main.name
+  kind                = "Linux"
+  reserved            = true
+
+  sku {
+    tier = "Standard"
+    size = "S1"
+  }
+}
+
+resource "azurerm_app_service" "main" {
+  name                = "WebApp-Terraform-test"
+  location            = azurerm_resource_group.main.location
+  resource_group_name = azurerm_resource_group.main.name
+  app_service_plan_id = azurerm_app_service_plan.main.id
+
+  site_config {
+    linux_fx_version = "DOCKER|mcr.microsoft.com/appsvc/staticsite:latest"
+    always_on        = "true"
+  }
+
+  app_settings = {
+    "production_key"                    = "production_value"
+    WEBSITES_ENABLE_APP_SERVICE_STORAGE = false
+  }
+
+  connection_string {
+    name  = "Database"
+    type  = "SQLServer"
+    value = "Server=some-server.mydomain.com;Integrated Security=SSPI"
+  }
+}
+
+resource "azurerm_app_service_slot" "example" {
+  name                = "terraformStage"
+  app_service_name    = azurerm_app_service.main.name
+  location            = azurerm_resource_group.main.location
+  resource_group_name = azurerm_resource_group.main.name
+  app_service_plan_id = azurerm_app_service_plan.main.id
+
+  site_config {
+    linux_fx_version = "DOCKER|mcr.microsoft.com/appsvc/staticsite:latest"
+    always_on        = "true"
+  }
+
+  app_settings = {
+    "stage_key"                         = "stage_value"
+    WEBSITES_ENABLE_APP_SERVICE_STORAGE = false
+  }
+
+  connection_string {
+    name  = "Database"
+    type  = "SQLServer"
+    value = "Server=some-server.mydomain.com;Integrated Security=SSPI-stage"
+  }
+}
+
+output "subId" {
+  value = "${var.subscription_id}"
+}
+
+output "resourceGroup" {
+  value = "${azurerm_resource_group.main.name}"
+}
+
+output "webApp" {
+  value = "${azurerm_app_service.main.name}"
+}
+
+output "deploymentSlot" {
+  value = "${azurerm_app_service_slot.example.name}"
+}
+```
+
+### Next Steps
+
+Now that the Infrastructure Definition is mapped to the Terraform outputs in your script, the provisioned infrastructure can be used as a deployment target by a Harness Workflow. But the Terraform script must still be run to provision this infrastructure.
+
+To run the Terraform script in your Harness Infrastructure Provisioner and create the infrastructure you defined in the Infrastructure Definition, you add a Terraform Provisioner step to your Workflow.
+
+For steps on adding the Terraform Provisioner step, see [Provision using the Terraform Provisioner Step](terraform-provisioner-step.md).
+
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/add-terraform-scripts-04.png b/docs/first-gen/continuous-delivery/terraform-category/static/add-terraform-scripts-04.png
new file mode 100644
index 00000000000..d4528f57109
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/add-terraform-scripts-04.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/add-terraform-scripts-05.png b/docs/first-gen/continuous-delivery/terraform-category/static/add-terraform-scripts-05.png
new file mode 100644
index 00000000000..44321a04586
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/add-terraform-scripts-05.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/add-terraform-scripts-06.png b/docs/first-gen/continuous-delivery/terraform-category/static/add-terraform-scripts-06.png
new file mode 100644
index 00000000000..a10befab622
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/add-terraform-scripts-06.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/add-terraform-scripts-07.png b/docs/first-gen/continuous-delivery/terraform-category/static/add-terraform-scripts-07.png
new file mode 100644
index 00000000000..ff13bedb8a5
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/add-terraform-scripts-07.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/add-terraform-scripts-08.png b/docs/first-gen/continuous-delivery/terraform-category/static/add-terraform-scripts-08.png
new file mode 100644
index 00000000000..fc794ab45e9
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/add-terraform-scripts-08.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/add-terraform-scripts-09.png b/docs/first-gen/continuous-delivery/terraform-category/static/add-terraform-scripts-09.png
new file mode 100644
index 00000000000..6de3d5fb021
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/add-terraform-scripts-09.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-38.png b/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-38.png
new file mode 100644
index 00000000000..00c204ad752
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-38.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-39.png b/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-39.png
new file mode 100644
index 00000000000..d4528f57109
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-39.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-40.png b/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-40.png
new file mode 100644
index 00000000000..a6acf62dda4
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-40.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-41.png b/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-41.png
new file mode 100644
index 00000000000..df75dd67a26
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-41.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-42.png b/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-42.png
new file mode 100644
index 00000000000..8f32823dd66
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-42.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-43.png b/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-43.png
new file mode 100644
index 00000000000..e8565bbbe91
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-43.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-44.png b/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-44.png
new file mode 100644
index 00000000000..76238649335
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-44.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-45.png b/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-45.png
new file mode 100644
index 00000000000..6627420926f
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-45.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-46.png b/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-46.png
new file mode 100644
index 00000000000..b2ae51c6b8d
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/mapgcp-kube-terraform-infra-46.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-delegates-00.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-delegates-00.png
new file mode 100644
index 00000000000..5d5c0a437c5
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-delegates-00.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-delegates-01.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-delegates-01.png
new file mode 100644
index 00000000000..2d766fcd935
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-delegates-01.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-destroy-02.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-destroy-02.png
new file mode 100644
index 00000000000..b9d990b86a3
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-destroy-02.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-destroy-03.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-destroy-03.png
new file mode 100644
index 00000000000..55da5466027
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-destroy-03.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-dry-run-30.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-dry-run-30.png
new file mode 100644
index 00000000000..7f6171f20df
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-dry-run-30.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-dry-run-31.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-dry-run-31.png
new file mode 100644
index 00000000000..ee60100f59e
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-dry-run-31.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-dry-run-32.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-dry-run-32.png
new file mode 100644
index 00000000000..28faceac805
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-dry-run-32.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-dry-run-33.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-dry-run-33.png
new file mode 100644
index 00000000000..414ac4ea8b2
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-dry-run-33.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-dry-run-34.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-dry-run-34.png
new file mode 100644
index 00000000000..ee60100f59e
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-dry-run-34.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-10.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-10.png
new file mode 100644
index 00000000000..d4528f57109
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-10.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-11.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-11.png
new file mode 100644
index 00000000000..e4bd3b607d0
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-11.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-12.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-12.png
new file mode 100644
index 00000000000..4c6fb100336
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-12.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-13.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-13.png
new file mode 100644
index 00000000000..152a2b82dd2
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-13.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-14.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-14.png
new file mode 100644
index 00000000000..12ad49ca15b
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-14.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-15.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-15.png
new file mode 100644
index 00000000000..4454dc99d0c
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-15.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-16.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-16.png
new file mode 100644
index 00000000000..396eed1b350
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-16.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-17.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-17.png
new file mode 100644
index 00000000000..b9d990b86a3
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-17.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-18.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-18.png
new file mode 100644
index 00000000000..b3c581dc28b
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-18.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-19.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-19.png
new file mode 100644
index 00000000000..9289ee86029
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-19.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-20.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-20.png
new file mode 100644
index 00000000000..c3d3645b570
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-20.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-21.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-21.png
new file mode 100644
index 00000000000..a638fe7887b
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-21.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-22.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-22.png
new file mode 100644
index 00000000000..eac4113bf3e
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-22.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-23.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-23.png
new file mode 100644
index 00000000000..4ee9f5b5e9e
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-23.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-24.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-24.png
new file mode 100644
index 00000000000..e68ca39379f
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-24.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-25.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-25.png
new file mode 100644
index 00000000000..9655499f3d2
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-25.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-26.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-26.png
new file mode 100644
index 00000000000..c15329f5b16
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-26.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-27.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-27.png
new file mode 100644
index 00000000000..9fac0181d11
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-27.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-28.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-28.png
new file mode 100644
index 00000000000..a2fa2adb7ab
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-28.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-29.png b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-29.png
new file mode 100644
index 00000000000..247ccb95f41
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/terraform-provisioner-step-29.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/use-terraform-outputs-in-workflow-steps-35.png b/docs/first-gen/continuous-delivery/terraform-category/static/use-terraform-outputs-in-workflow-steps-35.png
new file mode 100644
index 00000000000..fde1bb5a461
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/use-terraform-outputs-in-workflow-steps-35.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/use-terraform-outputs-in-workflow-steps-36.png b/docs/first-gen/continuous-delivery/terraform-category/static/use-terraform-outputs-in-workflow-steps-36.png
new file mode 100644
index 00000000000..2cfce10d65d
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/use-terraform-outputs-in-workflow-steps-36.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/use-terraform-outputs-in-workflow-steps-37.png b/docs/first-gen/continuous-delivery/terraform-category/static/use-terraform-outputs-in-workflow-steps-37.png
new file mode 100644
index 00000000000..524f56dad26
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/use-terraform-outputs-in-workflow-steps-37.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/using-the-terraform-apply-command-47.png b/docs/first-gen/continuous-delivery/terraform-category/static/using-the-terraform-apply-command-47.png
new file mode 100644
index 00000000000..980540ead07
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/using-the-terraform-apply-command-47.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/using-the-terraform-apply-command-48.png b/docs/first-gen/continuous-delivery/terraform-category/static/using-the-terraform-apply-command-48.png
new file mode 100644
index 00000000000..f350a47e2e6
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/using-the-terraform-apply-command-48.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/using-the-terraform-apply-command-49.png b/docs/first-gen/continuous-delivery/terraform-category/static/using-the-terraform-apply-command-49.png
new file mode 100644
index 00000000000..c44f6b2318b
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/using-the-terraform-apply-command-49.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/static/using-the-terraform-apply-command-50.png b/docs/first-gen/continuous-delivery/terraform-category/static/using-the-terraform-apply-command-50.png
new file mode 100644
index 00000000000..c5ae61e5273
Binary files /dev/null and b/docs/first-gen/continuous-delivery/terraform-category/static/using-the-terraform-apply-command-50.png differ
diff --git a/docs/first-gen/continuous-delivery/terraform-category/terraform-delegates.md b/docs/first-gen/continuous-delivery/terraform-category/terraform-delegates.md
new file mode 100644
index 00000000000..87a89e4c19a
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/terraform-category/terraform-delegates.md
@@ -0,0 +1,104 @@
+---
+title: Set Up Your Harness Account for Terraform
+description: Set up the Harness Delegates, Cloud Providers, and Source Repo Providers for Terraform integration.
+sidebar_position: 20
+helpdocs_topic_id: llp7a6lr1c
+helpdocs_category_id: gkm7rtubpk
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+The first step in integrating your Terraform scripts and processes is setting up the necessary Harness account components: Delegates, Cloud Providers, and Source Repo Providers.
+
+This topic describes how to set up these components for Terraform.
+
+Once your account is set up, you can begin integrating your Terraform scripts. See [Add Terraform Scripts](add-terraform-scripts.md).
+
+### Before You Begin
+
+* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts)
+* Get an overview of how Harness integrates Terraform: [Terraform Provisioning with Harness](../concepts-cd/deployment-types/terraform-provisioning-with-harness.md)
+* [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation)
+* [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers)
+* [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers)
+
+### Step 1: Set Up Harness Delegates
+
+A Harness Delegate performs the Terraform provisioning in your Terraform scripts.
When installing the Delegate for your Terraform provisioning, consider the following: + +* The Delegate should be installed where it can connect to the target infrastructure. Ideally, this is the same subnet. +* If you are provisioning the subnet dynamically, then you can put the Delegate in the same VPC and ensure that it can connect to the provisioned subnet using security groups. +* The Delegate must also be able to connect to your script repo. The Delegate will pull the scripts at deployment runtime. +* While all Harness Delegates can use Terraform, you might want to select a Delegate type (Shell Script, Kubernetes, ECS, etc.) similar to the type of infrastructure you are provisioning. +* If you are provisioning AWS AMIs and ASGs, you'll likely use Shell Script Delegates on EC2 instances or ECS Delegates. +* If you are provisioning Kubernetes clusters, you will likely use Kubernetes Delegates. +1. To install a Delegate, follow the steps in [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation). Once the Delegate is installed, it will be listed on the Harness Delegates page. + +#### Delegate Selectors + +If needed, add a Delegate Selector to your Delegates. When you add a **Terraform Provisioner** step to your Harness Workflows, you can use the Delegate Selector to ensure specific Delegates perform the operations. + +If you do not specify a Selector in the **Terraform Provisioner** step, Harness will try all Delegates and then assign the Terraform tasks to the Delegates with Terraform installed. + +To add Selectors, see [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation). + +#### Permissions + +The Harness Delegate requires permissions according to the deployment platform and the operations of the Terraform scripts. + +In many cases, all credentials are provided by the account used to set up the Harness Cloud Provider.
+ +In some cases, access keys, secrets, and SSH keys are needed. You can add these in Harness [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management). You can then select them in the **Terraform Provisioner** step in your Harness Workflows. + +For ECS Delegates, you can add an IAM role to the ECS Delegate task definition. For more information, see [Trust Relationships and Roles](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#trust_relationships_and_roles). + +### Step 2: Install Terraform on Delegates + +Terraform must be installed on the Delegate to use a Harness Terraform Provisioner. You can install Terraform manually or use the `INIT_SCRIPT` environment variable in the Delegate YAML. + +See [Run Initialization Scripts on Delegates](https://docs.harness.io/article/ul6qktixip-run-initialization-scripts-on-delegates). + + +``` +# Install TF on the Delegate (a Linux host, so use the linux_amd64 build) +microdnf install unzip +curl -O -L https://releases.hashicorp.com/terraform/1.1.9/terraform_1.1.9_linux_amd64.zip +unzip terraform_1.1.9_linux_amd64.zip +mv ./terraform /usr/bin/ +# Check TF install +terraform --version +``` +Terraform is now installed on the Delegate. + +If you will be using a Cloud Provider that uses Delegate Selectors to identify Delegates (AWS Cloud Provider), add a Selector to this Delegate. For more information, see [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation). + +The Delegate needs to be able to obtain the Terraform provider you specify in the modules in your Terraform script. For example, `provider "acme"`. On the Delegate, Terraform will download and initialize any providers that are not already initialized. + +### Step 3: Set Up the Cloud Provider + +Add a Harness Cloud Provider to connect Harness to your target platform (AWS, Kubernetes cluster, etc.).
+ +Later, when you use Terraform to define a Harness Infrastructure Definition, you will also select the Cloud Provider to use when provisioning. + +When you create the Cloud Provider, you can enter the platform account information for the Cloud Provider to use as credentials, or you can use the Delegate(s) running in the infrastructure to provide the credentials for the Cloud Provider. + +If you are provisioning infrastructure on a platform that requires specific permissions, such as AWS AMIs, the account used by the Cloud Provider needs the required policies. For example, to create AWS EC2 AMIs, the account needs the **AmazonEC2FullAccess** policy. See the list of policies in [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers). For steps on adding an AWS Cloud Provider, see [Amazon Web Services (AWS) Cloud](https://docs.harness.io/article/whwnovprrb-cloud-providers#amazon_web_services_aws_cloud). + +When the Cloud Provider uses the installed Delegate for credentials (via its Delegate Selector), it assumes the permissions/roles used by the Delegate. + +### Step 4: Connect Harness to Your Script Repo + +To use your Terraform script in Harness, you host the script in a Git repo and add a Harness Source Repo Provider that connects Harness to the repo. For steps on adding the Source Repo Provider, see [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). + +Here is an example of a Source Repo Provider and the GitHub repo it is using: + +![](./static/terraform-delegates-00.png) + +In the image above, there is no branch added in the Source Repo Provider **Branch Name** field as this is the master branch, and the **ec2** folder in the repo is not entered in the Source Repo Provider. 
Later, when you use the Source Repo Provider in your Terraform Provisioner, you can specify the branch and root directory: + +![](./static/terraform-delegates-01.png) + +If you are using a private Git repo, an SSH key for the private repo is required on the Harness Delegate running Terraform to download the root module. You can copy the SSH key over to the Delegate. For more information, see [Using SSH Keys for Cloning Modules](https://www.terraform.io/docs/enterprise/workspaces/ssh-keys.html) (from HashiCorp) and [Adding a new SSH key to your GitHub account](https://help.github.com/en/articles/adding-a-new-ssh-key-to-your-github-account) (from GitHub). + +### Next Steps + +Once your account is set up, you can begin integrating your Terraform scripts. See [Add Terraform Scripts](add-terraform-scripts.md). + diff --git a/docs/first-gen/continuous-delivery/terraform-category/terraform-destroy.md b/docs/first-gen/continuous-delivery/terraform-category/terraform-destroy.md new file mode 100644 index 00000000000..5e31414f4c0 --- /dev/null +++ b/docs/first-gen/continuous-delivery/terraform-category/terraform-destroy.md @@ -0,0 +1,226 @@ +--- +title: Remove Provisioned Infra with Terraform Destroy +description: Remove any provisioned infrastructure. +sidebar_position: 80 +helpdocs_topic_id: 4egyxnse9r +helpdocs_category_id: gkm7rtubpk +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can add a **Terraform Destroy** Workflow step to remove any provisioned infrastructure, just like running the `terraform destroy` command. See [destroy](https://www.terraform.io/docs/commands/destroy.html) from Terraform. + +The **Terraform Destroy** step is independent of any other Terraform provisioning step in a Workflow. It is not restricted to removing the infrastructure deployed in its Workflow. It can remove any infrastructure you have provisioned using a Terraform Infrastructure Provisioner.
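Under the hood, the step behaves like running the `terraform destroy` command against the same root module and state that were used for provisioning. The following sketch shows the rough CLI equivalent; the directory, backend key, workspace, and tfvars file name are hypothetical placeholders, not values Harness defines:

```
# Rough CLI equivalent of the Terraform Destroy step (all paths/values are illustrative)
cd /opt/scripts/ec2                                       # root module from the Infrastructure Provisioner
terraform init -backend-config="key=env/demo.tfstate"     # same remote state as the provisioning step
terraform workspace select default                        # same Workspace (or default)
terraform destroy -auto-approve -var-file=testing.tfvars  # same input values
```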
+ +### Before You Begin + +This topic assumes you have read the following: + +* [Terraform Provisioning with Harness](../concepts-cd/deployment-types/terraform-provisioning-with-harness.md) +* [Set Up Your Harness Account for Terraform](terraform-delegates.md) +* [Add Terraform Scripts](add-terraform-scripts.md) +* [Map Dynamically Provisioned Infrastructure using Terraform](mapgcp-kube-terraform-infra.md) +* [Provision using the Terraform Provision Step](terraform-provisioner-step.md) +* [Using the Terraform Apply Command](using-the-terraform-apply-command.md) +* [Perform a Terraform Dry Run](terraform-dry-run.md) + +### Limitations + +* You cannot add a Terraform Destroy step in the Rollback Phase of a Workflow. +* The Terraform Destroy step is only supported using Terraform versions less than 1.0.0. HashiCorp has deprecated the `terraform apply -destroy` command in 1.0.0. + +### Review: What Gets Destroyed? + +When you create a Harness Terraform Infrastructure Provisioner you specify the Terraform script that Harness will use for provisioning. + +When you destroy the provisioned infrastructure, you specify the Terraform Infrastructure Provisioner for Harness to use to locate this script. + +There are two ways to use the Terraform Destroy step: + +* Destroy the infrastructure provisioned by the last successful use of a specific Terraform Infrastructure Provisioner, via a **Terraform Provision** or **Terraform Apply** step. Harness will use the same input values and backend configuration (Remote state) set up in the **Terraform Provision** or **Terraform Apply** steps. +* Destroy the infrastructure by entering new input values and backend configuration (Remote state) for specific resources. + +Which method you use is determined by the **Inherit from last successful Terraform Apply** option in the Terraform Destroy step.
+ +When the Terraform Provision or Terraform Apply steps were executed, Harness saved the **Inline Values** and **Backend Configuration** values using a combination of the following: + +* **Terraform Infrastructure Provisioner** used. +* **Environment** used for the Workflow. +* **Workspace** used (or `default` if no workspace was specified). + +You can decide to use these by selecting the **Inherit from last successful Terraform Apply** option or provide your own **Inline Values** and **Backend Configuration** values by not selecting this option. + +#### Use Last Successful Terraform Provision or Apply Steps + +When you set up the Terraform Destroy step, you specify the Provisioner and Workspace to use, and Harness gets the **Inline Values** and **Backend Configuration** values from the last **successful** execution of that Provisioner. + +When Terraform Destroy is run, it uses the same combination to identify which **Inline Values** and **Backend Configuration** values to use. You simply need to provide the Provisioner and Workspace. + +#### Specify Backend Configuration (Remote State) + +You can specify a Backend Configuration (Remote State) to use to identify the infrastructure to destroy. + +You simply need to specify a Terraform Infrastructure Provisioner so that Harness knows where to look for the script. + +In Terraform Destroy, you *disable* the **Inherit from last successful Terraform Apply** option, and then provide the input value and remote state settings to use. + +### Step 1: Add Terraform Destroy Step + +In the **Post-deployment Steps** of the Workflow, click **Add Step**, and then select **Terraform Destroy**. + +The Terraform Destroy settings appear. + +### Step 2: Select Provisioner and Workspace + +Select the Terraform Infrastructure Provisioner and Workspace that were used to provision the infrastructure you want to destroy. + +Typically, this is the Terraform Provisioner and Workspace used in the **Pre-deployment Steps**.
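The Workspace setting corresponds to a native Terraform workspace on the Delegate. As a quick sanity check outside Harness, you can inspect what a given workspace's state tracks before destroying it; the workspace name `staging` below is an illustrative assumption:

```
# Illustrative: review the state a workspace tracks before destroying it
terraform workspace list
terraform workspace select staging
terraform state list    # resources recorded in this workspace's state file
```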
+ +### Option: AWS Cloud Provider, Region, Role ARN + +Currently, this feature is behind the Feature Flag `TERRAFORM_AWS_CP_AUTHENTICATION`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. + +If you want to use a specific AWS role for this step's provisioning, you can select the AWS Cloud Provider, Region, and Role ARN. You can select any of these options, or all of them. + +These options allow you to use different roles for different Terraform steps, such as one role for the Terraform Plan step and a different role for the Terraform Provision or Apply steps. + +* **AWS Cloud Provider:** the AWS Cloud Provider selected here is used for authentication. +At a minimum, select the **AWS Cloud Provider** and **Role ARN**. When used in combination with the AWS Cloud Provider option, the Role ARN is assumed by the Cloud Provider you select. +The **AWS Cloud Provider** setting can be templated. You need to select an AWS Cloud Provider even if the Terraform Infrastructure Provisioner you selected uses a manually-entered template body. Harness needs access to the AWS API via the credentials in the AWS Cloud Provider. +* **Region:** the AWS region where you will be provisioning your resources. If no region is specified, Harness uses `us-east-1`. +* **Role ARN:** enter the Amazon Resource Name (ARN) of an AWS IAM role that Terraform assumes when provisioning. This allows you to tune the step for provisioning a specific AWS resource. For example, if you will only provision AWS S3, then you can use a role that is limited to S3. +You can also use [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables) in **Role ARN**.
For example, you can create a Service or Workflow variable and then enter its expression in **Role ARN**, such as `${serviceVariables.roleARN}` or `${workflow.variables.roleArn}`. + +#### Environment Variables + +If you use the **AWS Cloud Provider** and/or **Role ARN** options, do not add the following environment variables in the step's **Environment Variables** settings: + +* `AWS_ACCESS_KEY_ID` +* `AWS_SECRET_ACCESS_KEY` +* `AWS_SESSION_TOKEN` + +Harness generates these keys using the **AWS Cloud Provider** and/or **Role ARN** options. If you also add these in **Environment Variables**, the step will fail. + +### Option: Select Delegate + +In **Delegate Selector**, enter the [Delegate Selector](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) for the Delegate that you want to execute this step. Typically, this is the same Selector used to select a Delegate in the **Terraform Provision** or **Terraform Apply** step. + +### Option: Terraform Environment Variables + +You can remove any Terraform environment variables you created using the Terraform Provision or Terraform Apply steps. + +You cannot add new environment variables in the Terraform Destroy step. + +If you select the **Inherit from last successful Terraform Apply** option, then the environment variables are also inherited from the environment variables set in any previous Terraform provisioning step in the Workflow. + +### Option: Inherit from last successful Terraform Apply + +As described in [Review: What Gets Destroyed?](#review_what_gets_destroyed), select this option to destroy the infrastructure provisioned by the last successful **Terraform Provision** or **Terraform Apply** step in the Workflow. + +If you select this option, then the **Input Values** and **Backend Configuration** settings are disabled. + +### Option: Set as Terraform Destroy Plan and Export + +Select this option to make this Terraform Destroy step a Terraform plan.
This is useful when you want to use an Approval step to approve Terraform Destroy steps. + +This is the same as running `terraform plan -destroy` in Terraform. + +If you select this option, Harness generates a plan to destroy all the known resources. + +Later, when you want to actually destroy the resources, you add another Terraform Destroy step and select the option **Inherit following configurations from Terraform Destroy Plan**. + +The **Inherit following configurations from Terraform Destroy Plan** option only appears if the **Set as Terraform Destroy Plan and Export** option was set in the preceding Terraform Destroy step. + +The Terraform Plan is stored in a Secrets Manager as encrypted text. + +#### Terraform Plan Size Limit + +The Terraform Plan is stored in the default Harness Secrets Manager as encrypted text. This is because plans often contain variables that store secrets. + +The Terraform plan size must not exceed the secret size limit for secrets in your default Secret Manager. AWS Secrets Manager has a limitation of 64KB. Other supported Secrets Managers support larger file sizes. + +See [Add a Secrets Manager](https://docs.harness.io/article/uuer539u3l-add-a-secrets-manager). + +#### Terraform Destroy Plan Output Variable + +If you select the **Set as Terraform Destroy Plan and Export** option, you can display the output of the plan using the variable expression `${terraformDestroy.tfplan}`. For example, you can display the plan output in a [Shell Script](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) step. + +#### Terraform Destroy Plan File Output Variable + +Currently, this feature is behind the Feature Flag `OPTIMIZED_TF_PLAN`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. + +If you select the **Set as Terraform Destroy Plan and Export** option, you can display the output of the plan using the variable expression `${terraformPlan.destroy.jsonFilePath()}`.
+ +The `${terraformPlan.destroy.jsonFilePath()}` expression outputs the path to the Terraform plan file on the Harness Delegate that executed the step. + +For example, you can display the plan output in a [Shell Script](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) step: + + +``` +# Terraform Destroy +#### Using OPA +opa exec --decision terraform/analysis/authz --bundle policy/ ${terraformPlan.destroy.jsonFilePath()} + +#### Using OPA daemon +curl localhost:8181/v0/data/terraform/analysis/authz -d @${terraformPlan.destroy.jsonFilePath()} +``` +If you use the Terraform Plan step, you can use the expression `${terraformPlan.jsonFilePath()}` to output the plan used by that step. + +#### Terraform Plan Human Readable + +Harness provides expressions to view the plan in a more human-readable format: + +* `${terraformApply.tfplanHumanReadable}` +* `${terraformDestroy.tfplanHumanReadable}` + +### Option: Inherit following configurations from Terraform Destroy Plan + +Select this option to apply the previous Terraform Destroy step if that step has the **Set as Terraform Destroy Plan and Export** option enabled. + +As noted above in Option: Set as Terraform Destroy Plan and Export, the **Inherit following configurations from Terraform Destroy Plan** option only appears if the **Set as Terraform Destroy Plan and Export** option was set in the preceding Terraform Destroy step. + +### Option: Input Values + +Enter the input values to use when destroying the infrastructure. + +The Terraform Infrastructure Provisioner you are using (the Terraform Infrastructure Provisioner you selected in the **Provisioner** setting earlier) identifies the Terraform script where the inputs are located. + +See [Enter Input Variables](terraform-provisioner-step.md#step-3-enter-input-values). + +#### Use tfvar Files + +The **Input Values** section also includes the **Use tfvar files** option for using a variable definitions file.
+ +You can use inline or remote tfvar files. + +##### Inline tfvar Files + +The path to the variable definitions file is relative to the root of the Git repo specified in the Terraform Provisioner setting. For example, in the following image, the **testing.tfvars** file is located in the repo at `terraform/ec2/testing/testing.tfvars`: + +![](./static/terraform-destroy-02.png) + +If **Use tfvar files** is selected and there are also **Inline Values**, when Harness loads the variables from the **tfvars** file, the **Inline Values** variables override the variables from the tfvars file. + +If you only want to use the tfvars file, make sure to delete the Inline Values. + +You can also use [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) in **File Path**. This allows you to make the setting a deployment runtime parameter and to output their values using a [Shell Script](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) step. + +##### Remote tfvar Files + +In **Source Repository**, select the Harness [Source Repo Provider](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers) that connects to the repo where your tfvar file is. + +Select **Commit ID** or **Branch**. + +**Commit ID** also supports [Git tags](https://git-scm.com/book/en/v2/Git-Basics-Tagging). + +* For **Commit ID**, enter the git commit ID or Git tag containing the tfvar version you want to use. +* For **Branch**, enter the name of the branch where the tfvar file is located. + +In **File Folder Path**, enter the full path from the root of the repo to the tfvar file. + +### Step 4: Backend Configuration + +Use this option to access the Backend state file directly. Enter values for each backend config (remote state variable).
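Each backend config entry is a key/value pair for the remote state, equivalent to what Terraform receives via `-backend-config` flags during `terraform init`. A sketch assuming an S3 backend; the bucket, key, and region values are placeholders:

```
# Illustrative S3 backend values; in the step you enter these as name/value pairs
terraform init \
  -backend-config="bucket=my-tf-state-bucket" \
  -backend-config="key=env/prod/terraform.tfstate" \
  -backend-config="region=us-east-1"
```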
+ +The Terraform Infrastructure Provisioner you are using (the Terraform Infrastructure Provisioner you selected in the **Provisioner** setting earlier) identifies the Terraform script where the remote state settings are located. + +See [Backend Configuration (Remote state)](terraform-provisioner-step.md#option-1-backend-configuration-remote-state). + +Click **Submit**. The Terraform Destroy step is added to the Workflow. + +![](./static/terraform-destroy-03.png) \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/terraform-category/terraform-dry-run.md b/docs/first-gen/continuous-delivery/terraform-category/terraform-dry-run.md new file mode 100644 index 00000000000..ad436544b91 --- /dev/null +++ b/docs/first-gen/continuous-delivery/terraform-category/terraform-dry-run.md @@ -0,0 +1,146 @@ +--- +title: Perform a Terraform Dry Run +description: Execute Terraform Provision and Terraform Apply steps as a dry run. +sidebar_position: 70 +helpdocs_topic_id: xthfj92dys +helpdocs_category_id: gkm7rtubpk +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The Terraform Provision and Terraform Apply steps in a Workflow can be executed as a dry run, just like running the [terraform plan](https://www.terraform.io/docs/commands/plan.html) command. + +The dry run will refresh the state file and generate a plan, but not apply the plan. You can then set up an Approval step to follow the dry run, followed by the Terraform Provision or Terraform Apply step to apply the plan. + +This topic covers using the Terraform Provision and Terraform Apply steps for dry runs only. For steps on applying plans without a dry run, see [Provision using the Terraform Provision Step](terraform-provisioner-step.md) and [Using the Terraform Apply Command](using-the-terraform-apply-command.md).
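The dry-run pattern mirrors the standard Terraform CLI workflow of saving a plan and applying it only after review (the plan file name `tfplan` is arbitrary):

```
terraform plan -out=tfplan   # dry run: refresh state and save the proposed changes
terraform show tfplan        # review the proposed changes (the approval)
terraform apply tfplan       # apply exactly the plan that was reviewed
```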
+ +### Before You Begin + +This topic assumes you have read the following: + +* [Terraform Provisioning with Harness](../concepts-cd/deployment-types/terraform-provisioning-with-harness.md) +* [Set Up Your Harness Account for Terraform](terraform-delegates.md) +* [Add Terraform Scripts](add-terraform-scripts.md) +* [Map Dynamically Provisioned Infrastructure using Terraform](mapgcp-kube-terraform-infra.md) +* [Provision using the Terraform Provision Step](terraform-provisioner-step.md) + +**What is the difference between the Terraform Provision and Terraform Apply steps?** The Terraform Provision step is used to provision infrastructure and is added in the **Pre-deployment Steps** of a Workflow. The Terraform Apply step can be placed anywhere in the Workflow. + +### Visual Summary + +The following graphic shows a common use of a Terraform dry run in deployments. + +![](./static/terraform-dry-run-30.png) + +1. The dry run is used to verify the provisioning. +2. An Approval step to ensure that the Terraform plan is working correctly. +3. The plan is run and the infrastructure is provisioned. +4. The app is deployed to the provisioned infrastructure. + +In a Harness Workflow it looks something like this: + +![](./static/terraform-dry-run-31.png) + +### Limitations + +The Terraform Plan is stored in the default Harness Secrets Manager as encrypted text. This is because plans often contain variables that store secrets. + +The Terraform plan size must not exceed the secret size limit for secrets in your default Secret Manager. AWS Secrets Manager has a limitation of 64KB. Other supported Secrets Managers support larger file sizes. + +See [Add a Secrets Manager](https://docs.harness.io/article/uuer539u3l-add-a-secrets-manager). + +### Step 1: Set Terraform Step as Plan + +This step assumes you are familiar with adding the [Terraform Provision](terraform-provisioner-step.md) and [Terraform Apply](using-the-terraform-apply-command.md) steps.
+ +To perform a dry run of your Terraform Provision and Terraform Apply steps, you simply select the **Set as Terraform Plan** option. + +![](./static/terraform-dry-run-32.png) + +That's it. Now this Terraform Provision or Terraform Apply step will run like a `terraform plan` command. + +The dry run will refresh the state file and generate a plan, but the plan is not applied. You can then set up an Approval step to follow the dry run, followed by a Terraform Provision or Terraform Apply step to apply the plan. + +In the subsequent Terraform Provision or Terraform Apply steps, you will select the **Inherit following configurations from Terraform Plan** option to apply the plan. + +This is just like running the `terraform plan` command before a `terraform apply` command. + +You can use expressions after a Terraform plan step (Terraform Apply step with **Set as Terraform Plan** enabled) to see the number of resources added, changed, or destroyed. For details, go to [Terraform Plan and Terraform Destroy Changes](https://docs.harness.io/article/aza65y4af6-built-in-variables-list#terraform_plan_and_terraform_destroy_changes). + +### Option: Export Terraform Plan to Apply Step + +This option supports [Terraform version 12](https://www.terraform.io/upgrade-guides/0-12.html) only. + +When you use **Set as Terraform Plan** in the Terraform Provision or Terraform Apply steps and then use **Inherit following configurations from Terraform Plan** in a subsequent Terraform Provision or Terraform Apply step, Harness does the following: + +Harness runs the Terraform provision again, pointing to the plan; it runs a Terraform refresh, then a plan, and finally executes the new plan. + +Technically, this is a different plan. If you want to use the actual plan because of security or audit requirements, use **Export Terraform Plan to Apply Step** in the previous Terraform Provision step along with **Set as Terraform Plan**.
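The difference is analogous to the following CLI behavior (a sketch; `approved.tfplan` is an arbitrary file name):

```
# Without export: the later step re-plans, so what is applied can differ from what was approved
terraform plan
terraform apply

# With export: exactly the approved plan file is applied
terraform plan -out=approved.tfplan
terraform apply approved.tfplan
```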
+ +##### Notes + +* If the **Export Terraform Plan to Apply Step** option is enabled in two consecutive Terraform Provision steps, the second Terraform Provision step overwrites the plan from the first Terraform Provision step. +* Harness uses the [Harness Secret Manager](https://docs.harness.io/article/uuer539u3l-add-a-secrets-manager) you have selected as your default in the export process. As a result, the size of the plan you can export is limited to the size of secret that Secret Manager allows. + +If Harness detects that a Terraform plan produces no changes, then the actual generated Terraform plan file is not uploaded to the Secret Manager, regardless of whether the Terraform Apply step has **Export Terraform Plan to Apply Step** enabled. + +### Step 2: Add Approval Step + +Harness Workflow Approval steps can be done using Jira, ServiceNow, or the Harness UI. You can even use custom shell scripts. See [Approvals](https://docs.harness.io/article/0ajz35u2hy-approvals). + +Add the Approval step after the Terraform Provision or Terraform Apply step where you selected the **Set as Terraform Plan** option. + +1. To add the Approval step, click **Add Step**, and select **Approval**. +2. In the **Approval** step, select whatever approval options you want, and then click **Submit**. + +Next, we'll add a Terraform Provision or Terraform Apply step after the Approval step to actually run the Terraform Infrastructure Provisioner script. + +If the Approval step takes a long time to be approved, there is the possibility that a new commit occurs in the Git repo containing your Terraform script. To avoid this problem, when the Workflow performs the dry run, it saves the commit ID of the script file.
Later, after the approval, the Terraform Provision step will use the commit ID to ensure that it executes the script that was dry run. + +### Step 3: Add Terraform Step to Apply Plan + +For the Terraform Provision or Terraform Apply step that actually runs the Terraform Infrastructure Provisioner script (`terraform apply`), all you need to do is select the **Inherit following configurations from Terraform Plan** option. + +When you select this option, the Terraform Provision or Terraform Apply step inherits the settings of the Terraform Provision or Terraform Apply step that preceded it. + +1. After the Approval step, click **Add Step**. +2. Select a **Terraform Provision** or **Terraform Apply** step. +3. In **Name**, enter a name for the step to indicate that it will perform the provisioning. For example, **Apply Provisioning**. +4. In **Provisioner**, select the Harness Terraform Infrastructure Provisioner you want to run. This is the same Terraform Infrastructure Provisioner you selected in the previous Terraform Provision or Terraform Apply step. +5. Select the **Inherit following configurations from Terraform Plan** option. + ![](./static/terraform-dry-run-33.png) +6. Click **Submit**. + +You do not need to enter any more settings. The Terraform Provision or Terraform Apply step inherits the settings of the Terraform Provision or Terraform Apply step that preceded it. + +Your Workflow now looks something like this: + +![](./static/terraform-dry-run-34.png) + +### Step 4: Deploy + +Deploy your Workflow and see the `terraform plan` executed in the first Terraform Provision or Terraform Apply step. Next, approve the Approval step. Finally, see the `terraform apply` executed as part of the final Terraform Provision or Terraform Apply step. + +### Review: Terraform Plan Output Variable + +If you select the **Set as Terraform Plan** option, you can display the output of the plan using the variable expression `${terraformApply.tfplan}`. 
For example, you can display the plan output in a [Shell Script](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) step. + +For help in parsing the plan output, see [Parsing Terraform Plan Output](https://community.harness.io/t/parsing-terraform-plan-output/545) on Harness Community. + +The `${terraformApply.tfplan}` expression does not support plan files larger than 15MB. + +### Review: Terraform Plan File Output Variable + +Currently, this feature is behind the Feature Flag `OPTIMIZED_TF_PLAN`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. + +If you select the **Set as Terraform Plan** option, you can display the output of the plan using the variable expression `${terraformPlan.jsonFilePath()}`. + +The `${terraformPlan.jsonFilePath()}` expression outputs the path to the Terraform plan file on the Harness Delegate that executed the step. + +For example, you can display the plan output in a [Shell Script](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) step: + + +``` +# Terraform Apply +#### Using OPA +opa exec --decision terraform/analysis/authz --bundle policy/ ${terraformPlan.jsonFilePath()} + +#### Using OPA daemon +curl localhost:8181/v0/data/terraform/analysis/authz -d @${terraformPlan.jsonFilePath()} +``` +If you use the Terraform Destroy step, you can use the expression `${terraformPlan.destroy.jsonFilePath()}` to output the plan used by that step. + +### Next Steps + +Removing provisioned infrastructure is a common Terraform-related task. You can add this task to your Harness Workflow and automate it. See [Remove Provisioned Infra with Terraform Destroy](terraform-destroy.md). 
+ diff --git a/docs/first-gen/continuous-delivery/terraform-category/terraform-provisioner-step.md b/docs/first-gen/continuous-delivery/terraform-category/terraform-provisioner-step.md new file mode 100644 index 00000000000..21aa1b12734 --- /dev/null +++ b/docs/first-gen/continuous-delivery/terraform-category/terraform-provisioner-step.md @@ -0,0 +1,435 @@ +--- +title: Provision using the Terraform Provision Step +description: Provision infra in Harness Workflows. +sidebar_position: 50 +helpdocs_topic_id: uxwih21ps1 +helpdocs_category_id: gkm7rtubpk +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes how to provision infrastructure using the Workflow **Terraform Provisioner** step. + +You use the Terraform Provisioner step in a Workflow to run the Terraform script added in a Harness Terraform Infrastructure Provisioner. This is the same Terraform Infrastructure Provisioner selected in the Workflow's Infrastructure Definition to define the deployment target infrastructure. + +During deployment, the Terraform Provisioner step provisions the target deployment infrastructure and then the Workflow deploys to the provisioned infrastructure. + +To provision non-target deployment infrastructure, use the Terraform Apply Workflow step. See [Using the Terraform Apply Command](using-the-terraform-apply-command.md). + +The Harness Terraform Infrastructure Provisioner is supported in Canary and Multi-Service Workflows only. For AMI/ASG and ECS deployments, the Terraform Infrastructure Provisioner is also supported in Blue/Green Workflows. 
+ + +### Before You Begin + +Ensure you have read the following topics before you add the Terraform Provisioner step to a Workflow: + +* [Terraform Provisioning with Harness](../concepts-cd/deployment-types/terraform-provisioning-with-harness.md) +* [Set Up Your Harness Account for Terraform](terraform-delegates.md) +* [Add Terraform Scripts](add-terraform-scripts.md) +* [Map Dynamically Provisioned Infrastructure using Terraform](mapgcp-kube-terraform-infra.md) + +You can also run a Harness Terraform Infrastructure Provisioner using the Terraform Apply Workflow step. This step is used to provision non-target infrastructure. See [Using the Terraform Apply Command](using-the-terraform-apply-command.md). + +**What is the difference between the Terraform Provisioner and Terraform Apply steps?** The Terraform Provisioner step is used to provision infrastructure and is added in the **Pre-deployment Steps** of a Workflow. The Terraform Apply step can run any Harness Terraform Infrastructure Provisioner and can be placed anywhere in the Workflow. + +In addition, the following related features are documented in other topics: + +* **Terraform Dry Run** - The Terraform Provisioner step in the Workflow can be executed as a dry run, just like running the `terraform plan` command. The dry run will refresh the state file and generate a plan. See [Perform a Terraform Dry Run](terraform-dry-run.md). +* **Terraform Destroy** — This is covered in [Remove Provisioned Infra with Terraform Destroy](terraform-destroy.md). + +### Visual Summary + +This topic describes steps 3 through 6 in the Harness Terraform Provisioning implementation process: + +![](./static/terraform-provisioner-step-10\.png) + +For step 1, see [Add Terraform Scripts](add-terraform-scripts.md). For step 2, see [Map Dynamically Provisioned Infrastructure using Terraform](mapgcp-kube-terraform-infra.md). + +Here is an illustration using a deployment: + +![](./static/terraform-provisioner-step-11\.png) + +1. 
The **Terraform Provision** step executes pre-deployment to build the infrastructure. +2. The **Infrastructure Definition** is used to select the provisioned nodes. +3. The app is **installed** on the provisioned node. + +### Step 1: Add Environment to Workflow + +To provision target deployment infrastructure in a Workflow, the Workflow Phase(s) need to be set up with an Infrastructure Definition that uses the Terraform Infrastructure Provisioner. + +Setting up this Infrastructure Definition is covered in [Map Dynamically Provisioned Infrastructure using Terraform](mapgcp-kube-terraform-infra.md). + +Next, when you create or edit your Canary or Multi-Service Workflow, you add the Environment containing the mapped Infrastructure Definition to your Workflow settings. + +Harness Infrastructure Provisioners are only supported in Canary and Multi-Service deployment types. For AMI deployments, Infrastructure Provisioners are also supported in Blue/Green deployments. If you are creating a Blue/Green Workflow for AMI, you can select the Environment and Infrastructure Definition in the Workflow setup settings. + +To create the Workflow and add the Environment, do the following: + +In your Harness Application, click **Workflows**. + +Click **Add Workflow**. The Workflow settings appear. + +Enter a name and description for the Workflow. + +In **Workflow Type**, select **Canary**. + +In **Environment**, select the Environment that has the Terraform Provisioner set up in one of its Infrastructure Definitions. + +Your Workflow settings will look something like this: + +![](./static/terraform-provisioner-step-12\.png) + +Click **SUBMIT**. The new Workflow is created. + +By default, the Workflow includes a **Pre-deployment Steps** section. This is where you will add a step that uses your Terraform Provisioner. + +![](./static/terraform-provisioner-step-13\.png) + +Infrastructure Definitions are added in Canary Workflow *Phases*, in the **Deployment Phases** section. 
You will add the Infrastructure Definition that uses your Terraform Infrastructure Provisioner when you add the Canary Phases, later in this topic. + +### Step 2: Add Terraform Step to Pre-deployment Steps + +To provision the infrastructure in your Terraform Infrastructure Provisioner, add the **Terraform Provisioner** Step in **Pre-deployment Steps**: + +In your Workflow, in **Pre-deployment Steps**, click **Add Step**. + +Select **Terraform Provision**. The **Terraform Provision** settings appear. + +![](./static/terraform-provisioner-step-14\.png) + +In **Name**, enter a name for the step. Use a name that describes the infrastructure the step will provision. + +In **Provisioner**, select the Harness Terraform Infrastructure Provisioner you set up for provisioning your target infrastructure. Terraform Infrastructure Provisioner setup is covered in [Add Terraform Scripts](add-terraform-scripts.md). + +In **Timeout**, enter how long Harness should wait to complete the Terraform Provisioner step before failing the Workflow. + +### Option: AWS Cloud Provider, Region, Role ARN + +Currently, this feature is behind the Feature Flag `TERRAFORM_AWS_CP_AUTHENTICATION`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. + +If you want to use a specific AWS role for this step's provisioning, you can select the AWS Cloud Provider, Region, and Role ARN. You can select any of these options, or all of them. + +These options allow you to use different roles for different Terraform steps, such as one role for the Terraform Plan step and a different role for the Terraform Provision or Apply steps. + +* **AWS Cloud Provider:** the AWS Cloud Provider selected here is used for authentication. +At a minimum, select the **AWS Cloud Provider** and **Role ARN**. When used in combination with the AWS Cloud Provider option, the Role ARN is assumed by the Cloud Provider you select. 
+The **AWS Cloud Provider** setting can be templated. You need to select an AWS Cloud Provider even if the Terraform Infrastructure Provisioner you selected uses a manually-entered template body. Harness needs access to the AWS API via the credentials in the AWS Cloud Provider. +* **Region:** the AWS region where you will be provisioning your resources. If no region is specified, Harness uses `us-east-1`. +* **Role ARN:** enter the Amazon Resource Name (ARN) of an AWS IAM role that Terraform assumes when provisioning. This allows you to tune the step for provisioning a specific AWS resource. For example, if you will only provision AWS S3, then you can use a role that is limited to S3. +You can also use [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables) in **Role ARN**. For example, you can create a Service or Workflow variable and then enter its expression in **Role ARN**, such as `${serviceVariables.roleARN}` or `${workflow.variables.roleArn}`. + +#### Environment Variables + +If you use the **AWS Cloud Provider** and/or **Role ARN** options, do not add the following environment variables in the step's **Environment Variables** settings: + +* `AWS_ACCESS_KEY_ID` +* `AWS_SECRET_ACCESS_KEY` +* `AWS_SESSION_TOKEN` + +Harness generates these keys using the **AWS Cloud Provider** and/or **Role ARN** options. If you also add these in **Environment Variables**, the step will fail. + +### Option: Terraform Plan Settings + +The **Inherit following configurations from Terraform Plan**, **Set as Terraform Plan**, and **Export Terraform Plan to Apply Step** settings are described in [Perform a Terraform Dry Run](terraform-dry-run.md) and [Using the Terraform Apply Command](using-the-terraform-apply-command.md). Here is a summary. 
+ +Essentially, these settings allow you to use your Terraform Provision step as a [Terraform plan dry run](https://www.terraform.io/docs/commands/plan.html) (`terraform plan -out=tfplan`). Each setting provides a different option: + +#### Inherit following configurations from Terraform Plan + +This setting is used to inherit the settings of a previous Terraform Provision step in the Workflow. + +In the previous Terraform Provision step, you select the **Set as Terraform Plan** setting (and, optionally, the **Export Terraform Plan to Apply Step** setting) to run the step as a plan (a dry run). + +Next, in a subsequent Terraform Provision or Terraform Apply step, you select **Inherit following configurations from Terraform Plan** to use the plan. + +#### Set as Terraform Plan + +You select this setting to run this Terraform Provision step as a Terraform plan. In a subsequent Terraform Provision or Terraform Apply step, you select the **Inherit following configurations from Terraform Plan** setting to use the plan and apply it. + +Harness runs the Terraform provision again and points to the plan, runs a Terraform refresh, then a plan, and finally executes the new plan. Technically, this is a different plan. If you want to use the actual plan because of security or audit requirements, use **Export Terraform Plan to Apply Step**. + +If you want to avoid the Terraform refresh, in your Terraform Infrastructure Provisioner, enable the **Skip Terraform Refresh when inheriting Terraform plan** setting. See [Skip Terraform Refresh When Inheriting Terraform Plan](add-terraform-scripts.md#option-2-skip-terraform-refresh-when-inheriting-terraform-plan). + +#### Export Terraform Plan to Apply Step + +This option supports [Terraform version 12](https://www.terraform.io/upgrade-guides/0-12.html) only. This option is only available if you've selected **Set as Terraform Plan**. 
Select this option to save this Terraform Provision step as a `tfplan` to be applied in a later **Terraform Apply** step. + +Select the [Harness Secret Manager](https://docs.harness.io/article/uuer539u3l-add-a-secrets-manager) to use for the plan. + +By default, Harness uses the Harness Secret Manager you have selected as your **default** for the export process. The size of the plan you can export is limited to the size of secret that the Secret Manager you selected allows. + +### Option: Enter Input Values + +Provide values for any input variables in the Terraform script set up in the Terraform Infrastructure Provisioner used by this Terraform Provision step. + +Click **Populate Variables** and Harness will pull all of the input variables from the Terraform script you added to the Terraform Infrastructure Provisioner you selected. + +![](./static/terraform-provisioner-step-15\.png) + +It can take a moment to populate the variables. + +Enter a value for each variable in **Input Values**. For encrypted text values, select an Encrypted Text secret from Harness Secrets Management. + +![](./static/terraform-provisioner-step-16\.png) + +For more information, see [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management). + +#### Use tfvar Files + +The **Input Values** section also includes the **Use tfvar files** option for using a variable definitions file instead of using the Terraform script variables. + +You can use inline or remote tfvar files. + +##### Inline tfvar Files + +The path to the variable definitions file is relative to the root of the Git repo specified in the Terraform Provisioner setting. 
For example, in the following image, the **testing.tfvars** file is located in the repo at `terraform/ec2/testing/testing.tfvars`: + +![](./static/terraform-provisioner-step-17\.png) + +If **Use tfvar files** is selected and there are also **Inline Values**, when Harness loads the variables from the **tfvars** file, the **Inline Values** variables override the variables from the tfvars file. + +If you only want to use the tfvars file, make sure to delete the Inline Values. + +In **File Path**, you can enter multiple files separated by commas. + +You can also use [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) in **File Path**. This allows you to make the setting a deployment runtime parameter and to output their values using a [Shell Script](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) step. + +##### Remote tfvar Files + +In **Source Repository**, select the Harness [Source Repo Provider](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers) that connects to the repo where your tfvar file is. + +Select **Commit ID** or **Branch**. + +**Commit ID** also supports [Git tags](https://git-scm.com/book/en/v2/Git-Basics-Tagging). + +* For **Commit ID**, enter the Git commit ID or Git tag containing the tfvar version you want to use. +* For **Branch**, enter the name of the branch where the tfvar file is located. + +In **File Folder Path**, enter the full path from the root of the repo to the tfvar file. You can enter multiple file paths separated by commas. + +#### Map and List Variable Type Support + +Terraform uses [map variables](https://www.terraform.io/docs/configuration-0-11/variables.html#maps) as a lookup table from string keys to string values, and [list variables](https://www.terraform.io/docs/configuration-0-11/variables.html#lists) for an ordered sequence of strings indexed by integers. + +Harness provides support for both Terraform maps and lists as input values. 
+ +For example, here are map and list variables from a Terraform script: + + +``` +variable "map_test" { + type = "map" + default = { + "foo" = "bar" + "baz" = "qux" + } +} + +variable "list_test" { + type = "list" + default = ["ami-abc123", "ami-bcd234"] +} +``` +In **Inline Values**, you would enter these as text values `map_test` and `list_test` with their defaults in **Value**: + +![](./static/terraform-provisioner-step-18\.png) + +When the Workflow is deployed, the `map_test` and `list_test` variables and values are added using the `terraform plan -var` option to set a variable in the Terraform configuration (see [Usage](https://www.terraform.io/docs/commands/plan.html#usage) from Terraform): + + +``` +... +terraform plan -out=tfplan -input=false +... +-var='map_test={foo = "bar", baz = "qux"}' +-var='list_test=["ami-abc123", "ami-bcd234"]' +... +``` +And displayed as outputs: + + +``` +... +Outputs: + +list_test = [ + ami-abc123, + ami-bcd234 +] +map_test = { + baz = qux + foo = bar +} +... +``` +If the map or list you want to add is very large, such as over 128K, you might want to input them using the **Use tfvar files** setting and a values.tfvars file. + +You can also create an expression in an earlier Workflow step that creates a map or list and enter the expression in **Input Values**. So long as the expression results in the properly formatted map or list value, it will be entered using `terraform plan -var`. See [Set Workflow Variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). + +Click **Next**. The **Backend Configuration (Remote state)** section appears. + +### Option: Backend Configuration (Remote state) + +The **Backend Configuration (Remote state)** section contains the remote state values. + +Enter values for each backend config (remote state variable), and click **Next**. + +The **Additional Settings** section appears. 
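The backend configs you enter correspond to settings in the `backend` block of your Terraform script. As a hypothetical sketch (the `s3` backend and the value names below are illustrative, not taken from this topic), a script using a partial backend configuration might look like this:

```
terraform {
  backend "s3" {
    # Intentionally left partial: values such as bucket, key, and
    # region are supplied as backend configs (remote state variables)
    # in the Harness step rather than hard-coded in the script.
  }
}
```

With a script like this, you would enter the remaining values, such as `bucket` and `key`, in the **Backend Configuration (Remote state)** section.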
+ +### Option: Resource Targeting + +In **Additional Settings**, you can use the **Target** setting to target one or more specific modules in your Terraform script, just like using the `terraform plan -target` command. See [Resource Targeting](https://www.terraform.io/docs/commands/plan.html#resource-targeting) from Terraform. + +For example, in the following image, you can see that the Terraform script has one resource and two modules, and the **Targets** setting displays them as potential targets. + +![](./static/terraform-provisioner-step-19\.png) + +If you have multiple modules in your script and you do not select one in **Targets**, all modules are used. + +You can also use Workflow variables as your targets. For example, you can create a Workflow variable named **module** and then enter the variable `${workflow.variables.module}` in the **Targets** field. When you deploy the Workflow, you are prompted to provide a value for the variable: + +![](./static/terraform-provisioner-step-20\.png) + +See [Set Workflow Variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). + +### Option: Workspaces + +Harness supports Terraform [workspaces](https://www.terraform.io/docs/state/workspaces.html). A Terraform workspace is a logical representation of one of your infrastructures, such as Dev, QA, Stage, Production. + +Workspaces are useful when testing changes before moving to a production infrastructure. To test the changes, you create separate workspaces for Dev and Production. + +A workspace is really a different state file. Each workspace isolates its state from other workspaces. For more information, see [When to use Multiple Workspaces](https://www.terraform.io/docs/state/workspaces.html#when-to-use-multiple-workspaces) from Hashicorp. 
+ +Here is an example script where a local value names two workspaces, **default** and **production**, and associates different instance counts with each: + + +``` +locals { + counts = { + "default"=1 + "production"=3 + } +} + +resource "aws_instance" "my_service" { + ami="ami-7b4d7900" + instance_type="t2.micro" + count="${lookup(local.counts, terraform.workspace, 2)}" + tags { + Name = "${terraform.workspace}" + } +} +``` +In the interpolation sequences, you can see that the count is assigned by looking up the workspace name (`terraform.workspace`) in the `counts` map, and that the tag is set using the same variable. + +Harness will pass the workspace name you provide to the `terraform.workspace` variable, thus determining the count. If you provide the name **production**, the count will be **3**. + +In the **Workspace** setting, you can simply select the name of the workspace to use. + +![](./static/terraform-provisioner-step-21\.png) + +You can also use a Workflow variable to enter the name in **Workspace**. + +Later, when the Workflow is deployed, you can specify the name for the Workflow variable: + +![](./static/terraform-provisioner-step-22\.png) + +This allows you to specify a different workspace name each time the Workflow is run. + +You can even set a Harness Trigger where you can set the workspace name used by the Workflow. + +This Trigger can then be run in response to different events, such as a Git push. For more information, see [Passing Variables into Workflows and Pipelines from Triggers](https://docs.harness.io/article/revc37vl0f-passing-variable-into-workflows). + +When rollbacks occur, Harness will roll back the Terraform state to the previous version of the same workspace. + +### Option: Select Delegate + +In **Delegate Selector**, you can select a specific Harness Delegate to execute the Terraform Provisioning step by selecting the Delegate's Selector. 
+ +For more information on Delegate Selectors, see [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors). + +You can even add a Workflow variable for the Delegate Selector and then use an expression in the **Delegate Selectors** field. When you deploy the Workflow, you will provide the name of the Delegate Selector. + +![](./static/terraform-provisioner-step-23\.png) + +For more information, see [Add Workflow Variables](https://docs.harness.io/article/m220i1tnia-workflow-configuration#add_workflow_variables) and [Passing Variables into Workflows and Pipelines from Triggers](https://docs.harness.io/article/revc37vl0f-passing-variable-into-workflows). + +### Option: Add Environment Variables + +If the Terraform script used in the Terraform Infrastructure Provisioner you selected uses environment variables, you can provide values for those variables here. + +For any environment variable, provide a name, type, and value. + +Click **Add**. + +Enter a name, type, and value for the environment variable. For example: **TF\_LOG**, **Text**, and `TRACE`. + +If you select Encrypted Text, you must select an existing Harness [Encrypted Text secret](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets). + +You can use Harness [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) and [expression variables](https://docs.harness.io/article/9dvxcegm90-variables) for the name and value. + +Environment variables can also be deleted using the Terraform Destroy step. See [Remove Provisioned Infra with Terraform Destroy](terraform-destroy.md). + +### Step 3: Add Infrastructure Definition to Phases + +Now that the Workflow **Pre-deployment** section has your Terraform Provisioner step added, you need to add the target Infrastructure Definition where the Workflow will deploy. 
+ +This is the same Infrastructure Definition where you mapped your Terraform Infrastructure Provisioner outputs, as described in [Map Dynamically Provisioned Infrastructure using Terraform](mapgcp-kube-terraform-infra.md). + +For Canary Workflows, Infrastructure Definitions are added in Phases, in the **Deployment Phases** section. + +For AMI deployments, Terraform Infrastructure Provisioners are also supported in Blue/Green Workflows. If you are creating a Blue/Green Workflow for AMI, you can select the Environment and Infrastructure Definition in the Workflow setup settings. + +1. In the **Deployment Phases** section, click **Add Phase**. The Workflow Phase settings appear. +2. In **Service**, select the Harness Service to deploy. +3. In **Infrastructure Definition**, select the target Infrastructure Definition where the Workflow will deploy. + This is the same Infrastructure Definition where you mapped your Terraform Infrastructure Provisioner outputs, as described in [Map Dynamically Provisioned Infrastructure using Terraform](mapgcp-kube-terraform-infra.md). + Here is an example: + ![](./static/terraform-provisioner-step-24.png) +4. Click **Submit**. Use the same Infrastructure Definition for the remaining phases in your Canary Workflow. + +Once you are done, your Workflow is ready to deploy. Let's look at an example below. + +### Example: Terraform Deployment + +This section describes the deployment steps for a Workflow using the Terraform Provisioner step and deploying to a provisioned AMI. + +![](./static/terraform-provisioner-step-25\.png) + +This is a Canary deployment Workflow, but we are only interested in **Phase 1** where the Terraform provisioning occurs, and the artifact is installed in the provisioned AMI. Phase 2 of the Canary deployment is omitted. + +In the **Pre-Deployment** section, the **Terraform Provision** step is executed. When you click the step, you can see the Terraform command executed in **Details**. 
+ +![](./static/terraform-provisioner-step-26\.png) + +Note the DNS name of the AMI in the `dns` output: + +![](./static/terraform-provisioner-step-27\.png) + +You will see this name used next. + +In **Phase 1** of the Canary deployment, click **Select Nodes** to see that Harness has selected the provisioned AMI as the deployment target host. See that it used the same DNS name as the output in the **Terraform Provision** step: + +![](./static/terraform-provisioner-step-28\.png) + +Lastly, expand the **Deploy Service** step, and then click **Install**. You will see that the DNS name is shown on the arrow leading to install, and that the **Details** section displays the internal Delegate and provisioned target host addresses. + +![](./static/terraform-provisioner-step-29\.png) + +As you can see, the artifact was copied to the provisioned host. Deployment was a success. + +### Notes + +The following notes discuss rollback of deployments that use Terraform Infrastructure Provisioners. + +#### Deployment Rollback + +If you have successfully deployed Terraform modules and on the next deployment there is an error that initiates a rollback, Harness will roll back the provisioned infrastructure to the previous, successful version of the Terraform state. + +Harness will not increment the serial in the state, but perform a hard rollback to the exact version of the state provided. + +Harness determines what to roll back using a combination of the following Harness entities: + +`Terraform Infrastructure Provisioner + Environment + Branch + Path + Workspace` + +The branch and path refer to the branch and path selected in the **Script Repository** settings in the Terraform Infrastructure Provisioner used by this step. + +If you have templated these settings (using Workflow variables), Harness uses the values it obtains at runtime when it evaluates the template variables. 
+ +#### Rollback Limitations + +If you deployed two modules successfully already, module1 and module2, and then attempted to deploy module3, but failed, Harness will roll back to the successful state of module1 and module2. + +However, let's look at the situation where module3 succeeds and now you have module1, module2, and module3 deployed. If the next deployment fails, the rollback will only roll back to the Terraform state with module3 deployed. Module1 and module2 were not in the previous Terraform state, so the rollback excludes them. + +### Next Steps + +Now that you're familiar with provisioning using the Terraform Provisioner step, the following topics cover features to help you extend your Harness Terraform deployments: + +* [Using the Terraform Apply Command](using-the-terraform-apply-command.md) — The Terraform Apply command allows you to use a Harness Terraform Infrastructure Provisioner at any point in a Workflow. +* [Perform a Terraform Dry Run](terraform-dry-run.md) — The Terraform Provisioner step in the Workflow can be executed as a dry run, just like running the `terraform plan` command. The dry run will refresh the state file and generate a plan. +* [Remove Provisioned Infra with Terraform Destroy](terraform-destroy.md) — As a post-deployment step, you can add a Terraform Destroy step to remove the provisioned infrastructure, just like running the `terraform destroy` command. + diff --git a/docs/first-gen/continuous-delivery/terraform-category/terrform-provisioner.md b/docs/first-gen/continuous-delivery/terraform-category/terrform-provisioner.md new file mode 100644 index 00000000000..456f1a088f4 --- /dev/null +++ b/docs/first-gen/continuous-delivery/terraform-category/terrform-provisioner.md @@ -0,0 +1,26 @@ +--- +title: Terraform How-tos (FirstGen) +description: Harness has first-class support for HashiCorp Terraform as an infrastructure provisioner. 
+sidebar_position: 10 +helpdocs_topic_id: 9pvvgcdbjh +helpdocs_category_id: gkm7rtubpk +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This content is for Harness [FirstGen](../../../getting-started/harness-first-gen-vs-harness-next-gen.md). Switch to [NextGen](https://harness.helpdocs.io/article/w6i5f7cpc9-terraform-how-tos). + +Harness has first-class support for HashiCorp [Terraform](https://www.terraform.io/) as an infrastructure provisioner. + +See the following Terraform How-tos: + +* [Set Up Your Harness Account for Terraform](terraform-delegates.md) +* [Add Terraform Scripts](add-terraform-scripts.md) +* [Map Dynamically Provisioned Infrastructure using Terraform](mapgcp-kube-terraform-infra.md) +* [Provision using the Terraform Provisioner Step](terraform-provisioner-step.md) +* [Using the Terraform Apply Command](using-the-terraform-apply-command.md) +* [Perform a Terraform Dry Run](terraform-dry-run.md) +* [Remove Provisioned Infra with Terraform Destroy](terraform-destroy.md) +* [Use Terraform Outputs in Workflow Steps](use-terraform-outputs-in-workflow-steps.md) + +For a conceptual overview of Harness Terraform integration, see [Terraform Provisioning with Harness](../concepts-cd/deployment-types/terraform-provisioning-with-harness.md). + +Harness Terraform Infrastructure Provisioners are only supported in Canary and Multi-Service Workflows. For AMI deployments, Terraform Infrastructure Provisioners are also supported in Blue/Green Workflows. 
\ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/terraform-category/use-terraform-outputs-in-workflow-steps.md b/docs/first-gen/continuous-delivery/terraform-category/use-terraform-outputs-in-workflow-steps.md new file mode 100644 index 00000000000..ad69ec74c7e --- /dev/null +++ b/docs/first-gen/continuous-delivery/terraform-category/use-terraform-outputs-in-workflow-steps.md @@ -0,0 +1,75 @@ +--- +title: Use Terraform Outputs in Workflow Steps +description: Use variable expressions to reference Terraform outputs. +sidebar_position: 90 +helpdocs_topic_id: 8p2ze4u25w +helpdocs_category_id: gkm7rtubpk +helpdocs_is_private: false +helpdocs_is_published: true +--- + +When you use a Terraform Provision or Terraform Apply step in a Workflow, any of the outputs in the Terraform script can be used in Workflow settings that follow the step. + +You reference a Terraform output with a Harness variable expression in the format `${terraform.output_name}`. + +You can reference the output regardless of whether the Terraform Infrastructure Provisioner is used in the Infrastructure Definition in the Workflow settings. This topic demonstrates how to use these expressions in other Workflow steps. 
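For example, an output is declared in the Terraform script with an `output` block. This is a sketch only; the resource reference is hypothetical, but the output name `clusterName` matches the example used later in this topic:

```
output "clusterName" {
  value = "${google_container_cluster.primary.zone}/${google_container_cluster.primary.name}"
}
```

Once the plan has been applied, a later Workflow step can read this value with the expression `${terraform.clusterName}`.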
+ +In this topic: + +* [Before You Begin](#before_you_begin) +* [Limitations](#limitations) +* [Step 1: Add a Workflow Step](#step_1_add_a_workflow_step) +* [Step 2: Enter the Output Variable Expression](#step_2_enter_the_output_variable_expression) +* [Notes](#notes) + +### Before You Begin + +This topic assumes you have read the following: + +* [Terraform Provisioning with Harness](../concepts-cd/deployment-types/terraform-provisioning-with-harness.md) +* [Set Up Your Harness Account for Terraform](terraform-delegates.md) +* [Add Terraform Scripts](add-terraform-scripts.md) +* [Map Dynamically Provisioned Infrastructure using Terraform](mapgcp-kube-terraform-infra.md) +* [Provision using the Terraform Provision Step](terraform-provisioner-step.md) +* [Using the Terraform Apply Command](using-the-terraform-apply-command.md) + +### Limitations + +* Terraform outputs are limited to the Workflow where the Terraform plan is applied. You cannot run a Terraform plan in one Workflow in a Pipeline and reference its outputs in another Workflow in a Pipeline. +You can, however, publish the values of the output variables from a Shell Script step, and then scope that published variable to the Pipeline. Now the output value can be passed to another Workflow in the Pipeline. See [Pass Variables between Workflows](https://docs.harness.io/article/gkmgrz9shh-how-to-pass-variables-between-workflows). +* You can only reference a Terraform output once the Terraform plan has been applied in the same Workflow. If a Terraform Provision or Terraform Apply step is set to run as a plan, you cannot reference its outputs. +Once the plan has been applied by another Terraform Provision or Terraform Apply step, you can reference the Terraform script outputs. See [Perform a Terraform Dry Run](terraform-dry-run.md). + +### Step 1: Add a Workflow Step + +This topic assumes you have a Workflow that uses a Terraform Provision or Terraform Apply step. 
+ +Add a Workflow step after the Terraform Provision or Terraform Apply step where you want to use the Terraform script outputs. + +Typically, you add a [Shell Script](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) step. + +### Step 2: Enter the Output Variable Expression + +In the Shell Script (or other) step, you can reference any Terraform output using the variable expression in the format `${terraform.output_name}`. + +For example, let's say you have an output for a Kubernetes cluster name. You can add a Shell Script step in your Workflow and use `echo ${terraform.clusterName}` to print the value. + +You can see the Terraform log display the output `clusterName = us-central1-a/harness-test` in the following Terraform Provision step: + +![](./static/use-terraform-outputs-in-workflow-steps-35\.png) + +Next, you could add a Shell Script step that uses the Terraform output variable `${terraform.clusterName}`: + +![](./static/use-terraform-outputs-in-workflow-steps-36\.png) + +In the Shell Script step in the deployment, you can see the value `us-central1-a/harness-test` printed: + +![](./static/use-terraform-outputs-in-workflow-steps-37.png) + +### Notes + +Terraform output expressions cannot be evaluated or published under the following conditions: + +* The Shell Script step script uses `exit 0`. Bash exit prevents outputs from being published. +* No Terraform apply is performed by the Terraform Provision or Terraform Apply steps. In some cases, a Terraform plan might be run using the [Set Terraform as Plan](terraform-dry-run.md) option, but no further step performs the Terraform apply. If there is no Terraform apply, there are no output values. 
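To illustrate the first note above, here is a sketch of a Shell Script step body that would prevent output publishing (`clusterName` is the example output from this topic):

```
echo "Cluster: ${terraform.clusterName}"
exit 0    # remove this line: an explicit Bash exit prevents Harness from publishing the step's output variables
```

Let the script end normally instead of calling `exit`, and the outputs are published as expected.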
+ diff --git a/docs/first-gen/continuous-delivery/terraform-category/using-the-terraform-apply-command.md b/docs/first-gen/continuous-delivery/terraform-category/using-the-terraform-apply-command.md new file mode 100644 index 00000000000..afad66ce86b --- /dev/null +++ b/docs/first-gen/continuous-delivery/terraform-category/using-the-terraform-apply-command.md @@ -0,0 +1,346 @@ +--- +title: Using the Terraform Apply Command +description: Use the Terraform Apply step to perform Terraform operations at any point in your Workflow. +sidebar_position: 60 +helpdocs_topic_id: jaxppd8w9j +helpdocs_category_id: gkm7rtubpk +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes how to use the Terraform Apply step to perform Terraform operations at any point in your Workflow. + +### Before You Begin + +Before reading about the Terraform Apply step, we recommend you read about the Harness Terraform Infrastructure Provisioner and related Terraform Provision step in [Provision using the Terraform Provision Step](terraform-provisioner-step.md). This will help you see the scope of Terraform support in Harness. + +Also, setting up the Terraform Apply step is nearly identical to setting up the Terraform Provision step, and this document will not repeat the steps.  + +### Review: Terraform Apply Overview + +The Terraform Apply step performs a [terraform apply](https://www.terraform.io/docs/commands/apply.html) command using the Terraform template (config.tf) you set up in a Harness Terraform Infrastructure Provisioner. + +Terraform Apply can be applied as an independent step to any Workflow. Terraform Apply steps can also be used together to perform multiple operations on the same infrastructure and Terraform workspace. 
+ +The Terraform Apply step is separate from the Terraform Provision step used in Pre-deployment steps in a Workflow, although you can have both in the same Workflow and have both run operations on the same infrastructure and Terraform workspace. + +#### Terraform Apply and Workflow Rollback + +Terraform Apply is intended for ad hoc provisioning anywhere in a Workflow. Consequently, the Terraform Apply step does not participate in a Workflow rollback when the Workflow fails. + +Any provisioning performed by Terraform Apply is not rolled back. Only provisioning performed by the [Terraform Provision Step](terraform-provisioner-step.md) is rolled back. + +To delete the ad hoc provisioned infrastructure in the case of a Workflow failure, add the Terraform Destroy step to the Workflow **Rollback Steps** section. See [Remove Provisioned Infra with Terraform Destroy](terraform-destroy.md). + +#### What Can I Do with Terraform Apply? + +Terraform Apply can be used to perform the many tasks offered by [Terraform Providers](https://www.terraform.io/docs/providers/). For example: + +* Provision infrastructure. +* Execute scripts on local and remote hosts. +* Copy files from one server to another. +* Query AWS for resource information. +* Push files to Git. +* Read data from APIs as JSON. +* Read image metadata from a Docker registry. +* Create certificates for a development environment. +* Create DNS records. +* Inject containers with sensitive information such as passwords. + +#### Terraform Apply and Terraform Provision Steps + +Harness also includes a Terraform Provision step that uses a Harness Infrastructure Provisioner to provision target deployment infrastructure, but the Terraform Provision step has the following limitations: + +* Supported in Canary and Multi-Service Workflows only. +* Also supported in Blue/Green Workflows, but only for AMI deployments. +* May be applied in a Workflow's Pre-deployment Steps only and is intended to provision deployment target infrastructure. 
+ +The Terraform Apply step can be applied anywhere in your Workflow and can be used to perform most Terraform operations. + +#### Using Terraform Apply with Terraform Provision + +While it is not necessary to use the Terraform Apply step with the standard Terraform Provision step in your Workflow, using them together can be an efficient method for provisioning and managing resources.  + +This scenario might involve the following steps: + +1. The target environment is dynamically provisioned using the Terraform Provision step in the Pre-Deployment steps of your Workflow. +2. The Workflow deploys your application to the dynamically provisioned target infrastructure. +3. The Terraform Apply step performs operations on the deployed hosts, services, etc. + +For information on the Terraform Provision step, see [Provision using the Terraform Provision Step](terraform-provisioner-step.md). + +#### Target Platform Roles and Policies + +If you use the Terraform Apply step to provision infrastructure on a target platform, such as AWS, the provisioning is performed by the Harness Delegate(s) in one of two ways: + +* Provisioning is performed by the Delegate(s) associated with the Harness Cloud Provider specified in the Workflow Infrastructure Definition. +* Provisioning is performed by the Delegate(s) selected in the Terraform Apply step's **Delegate Selector** settings, described below. + +The platform roles and policies needed to provision infrastructure must be present on the platform account used with the Delegate via Cloud Provider or the roles assigned to the Delegate host. + +For example, if you want to provision AWS ECS resources, you would add the IAM roles and policies needed for creating AWS ECS resources to the account used by the Harness AWS Cloud Provider. + +For a list of these roles and policies, see the related Terraform module for your target platform. 
For example, the [Terraform AWS ECS Cluster Required Permissions](https://registry.terraform.io/modules/infrablocks/ecs-cluster/aws/latest#required-permissions). + +### Step 1: Create your Terraform Configuration File + +In this example, we create an AWS EC2 instance as a completely separate function of a Basic Workflow whose primary function is to deploy an application package in EC2. + +This example shows the independence of the Terraform Apply step, and how it can be used to augment any Workflow and perform independent functions. + +This procedure assumes you have read about the Harness Terraform Infrastructure Provisioner and related Terraform Provision step in [Terraform Provisioning with Harness](../concepts-cd/deployment-types/terraform-provisioning-with-harness.md). This will help you see the scope of Terraform support in Harness. Create your Terraform configuration file (config.tf) in your Git repo. The file used in this example creates an AWS EC2 instance using an AMI: + + +``` +variable "region" {} +variable "access_key" {} +variable "secret_key" {} +variable "tag" {} + +provider "aws" { +  region     = "${var.region}" +  access_key = "${var.access_key}" +  secret_key = "${var.secret_key}" +} + +resource "aws_instance" "tf_instance" { +  subnet_id                   = "subnet-05788710b1b06b6b1" +  security_groups             = ["sg-05e7b8bxxxxxxxxx"] +  key_name                    = "doc-delegate1" +  ami                         = "ami-0080e4c5bc078760e" +  instance_type               = "t2.micro" +  associate_public_ip_address = "true" +  tags { +    Name = "${var.tag}" +  } +} +``` +The access and secret keys will be provided when you add the Terraform Apply step to your Workflow. In Harness, you create secrets in Harness [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management) and then you can use them in other Harness components. 
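If later Workflow steps need values from this provisioning, you could also declare outputs in the script. The output names below are illustrative additions, not part of the documented example:

```
output "instance_id" {
  value = "${aws_instance.tf_instance.id}"
}

output "instance_public_ip" {
  value = "${aws_instance.tf_instance.public_ip}"
}
```

After the apply, these values would be available to subsequent steps as `${terraform.instance_id}` and `${terraform.instance_public_ip}`. See [Use Terraform Outputs in Workflow Steps](use-terraform-outputs-in-workflow-steps.md).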
+ +You can also use a Terraform Apply step with the [HashiCorp Vault](https://www.terraform.io/docs/providers/vault/index.html) provider to access Vault credentials for a Terraform configuration. + +### Step 2: Add a Terraform Infrastructure Provisioner + +In Harness, add a Terraform Infrastructure Provisioner that uses the Terraform configuration file. This process is described in detail in the [Add Terraform Scripts](add-terraform-scripts.md) topic. + +Later, when you are adding the Terraform Apply step that uses this Terraform Infrastructure Provisioner, the inputs for your config.tf file are added. You can add tfvar files also. See [Provision using the Terraform Provision Step](terraform-provisioner-step.md). + +Once your Terraform Infrastructure Provisioner is added, you can use it in the Terraform Apply step in your Workflow. + +### Step 3: Add Terraform Apply to the Workflow + +In your Workflow, in any section, click **Add Command**. The **Add Command** dialog appears. + +In **Add Command**, select **Terraform Apply**. The **Terraform Apply** settings appear. + +The steps for filling out the dialog are the same as those for the Terraform Provision step described in [Provision using the Terraform Provision Step](terraform-provisioner-step.md). + +You simply select the Terraform Infrastructure Provisioner you set up earlier using the Terraform script, fill in the input values, and select other settings like Workspace. + +Refer to [Provision using the Terraform Provision Step](terraform-provisioner-step.md) for details on each setting. + +Let's look at an example where the Terraform Apply step is added as an independent step to a Workflow that deploys an application package (TAR file) to an EC2 instance. + +The Workflow will deploy the application package as intended, and it will also execute the Terraform Apply step to create a separate EC2 instance. 
+ +![](./static/using-the-terraform-apply-command-47\.png) + +The steps for installing the application package are described in the [Traditional Deployments](../traditional-deployments/traditional-deployments-overview.md) guide. Let's look at the Terraform Apply Step. + +The Terraform script listed earlier will create an EC2 instance. This script is used by the Terraform Infrastructure Provisioner that the Terraform Apply step will use. + +In the Terraform Apply step you supply the values for the input variables in the Terraform script. + +Click **Populate Variables** and Harness will pull all of the input variables from the Terraform script you added to the Terraform Infrastructure Provisioner you selected. + +![](./static/using-the-terraform-apply-command-48\.png) + +It can take a moment to populate the variables. + +You can see that the AWS access and secret keys are inputs in this configuration. You can use [Harness Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management) to provide encrypted text secrets in the Terraform Apply step. + +When the Workflow is deployed, you can see the traditional deployment of the application package succeed and the Terraform Apply step executed successfully. + +![](./static/using-the-terraform-apply-command-49\.png) + +Here is the Terraform Apply step output (with some lines omitted): + + +``` +Branch: master +Normalized Path: create +terraform init +Initializing provider plugins... +... +* provider.aws: version = "~> 2.23" + +Terraform has been successfully initialized! +... +terraform apply -input=false tfplan + +aws_instance.tf_instance: Creating... + ami: "" => "ami-0080e4c5bc078760e" + arn: "" => "" + ... + instance_type: "" => "t2.micro" + ... + key_name: "" => "doc-delegate1" + ... 
+ security_groups.#: "" => "1" + security_groups.2062602533: "" => "sg-05e7b8bxxxxxxxxx" + source_dest_check: "" => "true" + subnet_id: "" => "subnet-05788710b1b06b6b1" + tags.%: "" => "1" + tags.Name: "" => "doctfapply" + tenancy: "" => "" + volume_tags.%: "" => "" + vpc_security_group_ids.#: "" => "" + +aws_instance.tf_instance: Still creating... (10s elapsed) + +aws_instance.tf_instance: Still creating... (20s elapsed) + +aws_instance.tf_instance: Still creating... (30s elapsed) + +aws_instance.tf_instance: Creation complete after 31s (ID: i-0b20b148aec6c9239) + +Apply complete! Resources: 1 added, 0 changed, 1 destroyed. + +terraform output --json > /home/ec2-user/harness-delegate/./repository/terraform/lnFZRF6jQO6tQnB9znMALw/Z3B5PqktSViUQgNCjhR5vQ/terraform/create/terraform-eoOIwEfCTw-dTJ8RLC_tcg-99TFSMfVRfOIbqJreffg6g.tfvars + +Waiting: [15] seconds for resources to be ready + +Script execution finished with status: SUCCESS +``` +The output looks like a standard `terraform apply` output. + +As you can see, the Terraform config.tf file was run on the Delegate using Terraform and created the AWS EC2 instance. This was all performed as an auxiliary step to the Workflow, showing the independence of the Terraform Apply step. + +### Option: AWS Cloud Provider, Region, Role ARN + +Currently, this feature is behind the Feature Flag `TERRAFORM_AWS_CP_AUTHENTICATION`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.If you want to use a specific AWS role for this step's provisioning, you can select the AWS Cloud Provider, Region, and Role ARN. You can select any of these options, or all of them. + +These options allow you to use different roles for different Terraform steps, such as one role for the Terraform Plan step and a different role for the Terraform Provision or Apply steps. + +* **AWS Cloud Provider:** the AWS Cloud Provider selected here is used for authentication. +At a minimum, select the **AWS Cloud Provider** and **Role ARN**. 
When used in combination with the AWS Cloud Provider option, the Role ARN is assumed by the Cloud Provider you select. +The **AWS Cloud Provider** setting can be templated. You need to select an AWS Cloud Provider even if the Terraform Infrastructure Provisioner you selected uses a manually-entered template body. Harness needs access to the AWS API via the credentials in the AWS Cloud Provider. +* **Region:** the AWS region where you will be provisioning your resources. If no region is specified, Harness uses `us-east-1`. +* **Role ARN:** enter the Amazon Resource Name (ARN) of an AWS IAM role that Terraform assumes when provisioning. This allows you to tune the step for provisioning a specific AWS resource. For example, if you will only provision AWS S3, then you can use a role that is limited to S3. +At a minimum, select the **AWS Cloud Provider** and **Role ARN**. When used in combination with the AWS Cloud Provider option, the Role ARN is assumed by the Cloud Provider you select. +You can also use [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables) in **Role ARN**. For example, you can create a Service or Workflow variable and then enter its expression in **Role ARN**, such as `${serviceVariables.roleARN}` or `${workflow.variables.roleArn}`. + +#### Environment Variables + +If you use the **AWS Cloud Provider** and/or **Role ARN** options, do not add the following environment variables in the step's **Environment Variables** settings: + +* `AWS_ACCESS_KEY_ID` +* `AWS_SECRET_ACCESS_KEY` +* `AWS_SESSION_TOKEN` + +Harness generates these keys using the **AWS Cloud Provider** and/or **Role ARN** options. If you also add these in **Environment Variables**, the step will fail. 
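When you rely on these options, the `provider` block in your Terraform script should not hardcode credentials. A minimal sketch; the AWS provider picks up the generated keys from the environment automatically:

```
provider "aws" {
  # No access_key/secret_key arguments here. The AWS provider falls back to
  # AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN,
  # which Harness injects into the step's environment.
  region = "${var.region}"
}
```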
+ +### Option: Inherit Configurations from Terraform Plan + +You can use the **Inherit following configurations from Terraform Plan** setting to inherit the Terraform plan from a prior Terraform Provision step that has the **Set as Terraform Plan** option selected. + +Harness runs the Terraform provision again and points to the plan, runs a Terraform refresh, then a plan, and finally executes the new plan. Technically, this is a different plan. + +If you want to use the actual plan because of security or audit requirements, in the prior Terraform Provision step select both the **Set as Terraform Plan** and **Export Terraform Plan to Apply Step** options. + +If you want to avoid the Terraform refresh, in your Terraform Infrastructure Provisioner, enable the **Skip Terraform Refresh when inheriting Terraform plan** setting. See [Skip Terraform Refresh When Inheriting Terraform Plan](add-terraform-scripts.md#option-2-skip-terraform-refresh-when-inheriting-terraform-plan). + +### Option: Remote and Local State with Terraform Apply Step + +In **Backend Configuration (Remote State)**, Harness enables you to use Terraform remote and local state files, and you can use multiple Terraform Apply steps with the same state files. + +The general guidelines for using the same remote or local state files are: + +* **Remote state files**: to use the same remote state file, Terraform Apply steps must use the **Backend Configuration (Remote state)** setting. +* **Local state files:** to use the same local state file, Terraform Apply steps must use the same Environment, Terraform Infrastructure Provisioner, and workspace. The Environment is specified when you create the Workflow that will contain the Terraform Apply step(s). + +#### Using Remote State with Terraform Apply + +To use the same remote state file, Terraform Apply steps must use the **Backend Configuration (Remote state)** setting and the same Terraform Infrastructure Provisioner. 
+ +You add the backend configs (remote state variables) for remote state in the **Backend Configuration (Remote state)** settings. + +![](./static/using-the-terraform-apply-command-50\.png) + +If you have two Terraform Apply steps that use the same Terraform Infrastructure Provisioner and the same workspace, then they are both using the same remote state file.  + +A workspace is really a different state file. If you have two Terraform Apply steps that use the same Terraform Infrastructure Provisioner but different workspaces, then they are using separate state files. + +For example, if you have a Pipeline with a Build Workflow containing a Terraform Apply step followed by a Canary Workflow containing a Terraform Apply step, in order for both steps to use the same remote state file, the following criteria must be met: + +* Both Terraform Apply steps must use the same Infrastructure Provisioner. +* Both Terraform Apply steps must use the same workspace. + +#### Using Local State with Terraform Apply + +If you do not use the **Backend Configuration (Remote state)** setting, by default, Terraform uses the local backend to manage [state](https://www.terraform.io/docs/state/) in a local [Terraform language](https://www.terraform.io/docs/configuration/syntax.html) file named terraform.tfstate on the disk where you are running Terraform (typically, the Harness Delegate(s)).  + +An individual local state file is identified by Harness using a combination of the Harness Environment, Terraform Infrastructure Provisioner, and workspace settings. + +In fact, the local state is stored by Harness using an entity ID in the format EnvironmentID+ProvisionerID+WorkspaceName. This combination uses the Environment selected for the Workflow (or empty), and the Provisioner and workspace (or empty) selected in the **Terraform Apply** step. + +For two Terraform Apply steps to use the *same* local state file, they must use the same criteria: + +* Environment (via their Workflow settings). 
+* Terraform Infrastructure Provisioner. +* Workspace. + +For example, let's imagine one Terraform Apply step in a Build Workflow with no Environment set up and another Terraform Apply step in a Canary Workflow that uses an Environment. + +The local backend state file that was created in the first Terraform Apply step is not used by the second Terraform Apply step because they do not use the same Environment. In fact, the Build Workflow uses no Environment. + +Consequently, a [Terraform Destroy](terrform-provisioner.md#terraform-destroy) step in the Canary Workflow would not affect the Terraform local state created by the Build Workflow. In this case, it is best to use a separate Terraform Infrastructure Provisioner for each Terraform Apply step. + +In order for a Terraform Destroy step to work, the **Provisioner** and **Workspace** settings in it must match those of the Terraform Apply step whose operation(s) you want destroyed. Consequently, it must also be part of a Workflow using the same Environment. + +### Option: Additional Settings + +There are additional settings for Targets, Workspace, Delegate Selectors, and Terraform Environment Variables. + +These are the same settings you will find in the Terraform Provision step. + +For details on these features, see [Provision using the Terraform Provision Step](terraform-provisioner-step.md). + +### Option: Local State and Delegates + +If the local state file criteria described above are met, Harness ensures that the same local state file is used regardless of which Harness Delegate performs the deployment. + +You might have multiple Delegates running and a Workflow containing a Terraform Apply step that uses a local state file. Harness manages the local state file to ensure that if different Delegates are used for different deployments of the Workflow, the same local state file is used.  
+ +This allows you to deploy a Workflow, or even multiple Workflows, using the same local state file and not worry about which Delegate is used by Harness. + +If you like, you can ensure that Terraform Apply steps use the same Delegate using the [Delegate Selector](terrform-provisioner.md) setting in the Terraform Apply steps. + +### Option: Terraform Environment Variables + +If the Terraform script used in the Terraform Infrastructure Provisioner you selected uses environment variables, you can provide values for those variables here. + +For any environment variable, provide a name, type, and value. + +Click **Add**. + +Enter a name, type, and value for the environment variable. For example: **TF\_LOG**, **Text**, and `TRACE`. + +If you select Encrypted Text, you must select an existing Harness [Encrypted Text secret](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets). + +You can use Harness [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) and [expression variables](https://docs.harness.io/article/9dvxcegm90-variables) for the name and value. 
+ +### Terraform Plan Human Readable + +Harness provides expressions to view the plan in a more human readable format: + +* `${terraformApply.tfplanHumanReadable}` +* `${terraformDestroy.tfplanHumanReadable}` + +### Next Steps + +* [Infrastructure Provisioners Overview](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner) +* [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management) + +  + diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/_category_.json b/docs/first-gen/continuous-delivery/terragrunt-category/_category_.json new file mode 100644 index 00000000000..d105f9e1f6e --- /dev/null +++ b/docs/first-gen/continuous-delivery/terragrunt-category/_category_.json @@ -0,0 +1 @@ +{"label": "Terragrunt", "position": 100, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Terragrunt"}, "customProps": { "helpdocs_category_id": "noj782z9is"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/add-terragrunt-configuration-files.md b/docs/first-gen/continuous-delivery/terragrunt-category/add-terragrunt-configuration-files.md new file mode 100644 index 00000000000..d0fdfd56600 --- /dev/null +++ b/docs/first-gen/continuous-delivery/terragrunt-category/add-terragrunt-configuration-files.md @@ -0,0 +1,133 @@ +--- +title: Add Terragrunt Configuration Files +description: This topic describes how to set up a Harness Infrastructure Provisioner for Terragrunt. +sidebar_position: 30 +helpdocs_topic_id: mkjxbkglih +helpdocs_category_id: noj782z9is +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes how to set up a Harness Infrastructure Provisioner for Terragrunt. You simply link Harness to the repo where your Terragrunt config files are located. 
+ +Once the Harness Infrastructure Provisioner is set up, you can use it in two ways: + +* Define a deployment target in a Harness Infrastructure Definition. You add that Infrastructure Definition to a Workflow as the deployment target. Next, you add a Terragrunt Provision step to the same Workflow to build the target infrastructure. The Workflow provisions the infrastructure and then deploys to it. +* Provision non-target infrastructure. You can also simply add the Terragrunt Provision step to a Workflow to provision non-target resources. +In this topic, we will cover provisioning the target infrastructure for a deployment, but the steps to provision other resources are similar. + + +### Before You Begin + +* Get an overview of how Harness supports Terragrunt: [Terragrunt Provisioning with Harness](../concepts-cd/deployment-types/terragrunt-provisioning-with-harness.md). +* Ensure you have your Harness account settings prepared for Terragrunt: [Set Up Your Harness Account for Terragrunt](set-up-your-harness-account-for-terragrunt.md). + +### Visual Summary + +Here is a visual summary of how you use your Terragrunt and Terraform files with Harness to provision the target infrastructure and then deploy to it: + +![](./static/add-terragrunt-configuration-files-27.png) + +Here's a 6-minute video walkthrough of Harness-Terragrunt integration: + + + + + +You can use Terragrunt in Harness to provision any infrastructure, not just the target infrastructure for the deployment. + +In this use case, you simply add the Terragrunt Provision step to your Workflow and it runs some Terragrunt commands to provision some non-target resources in your infrastructure. + +![](./static/add-terragrunt-configuration-files-28.png) + +### Step 1: Add a Terragrunt Provisioner + +To set up a Terragrunt Infrastructure Provisioner, do the following: + +In your Harness Application, click **Infrastructure Provisioners**. + +Click **Add Infrastructure Provisioner**, and then click **Terragrunt**. 
The **Terragrunt Provisioner** settings appear. + +In **Name**, enter the name for this provisioner. You will use this name to select this provisioner in Harness Infrastructure Definitions and Workflows. + +Click **Next**. The **Script Repository** section appears. This is where you provide the location of the root module in your Git repo. + +### Step 2: Select Your Terragrunt Script Repo + +In **Script Repository**, in **Git Repository**, select the [Source Repo Provider](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers) you added for the Git repo where your script is located. See [Set Up Your Harness Account for Terragrunt](set-up-your-harness-account-for-terragrunt.md). + +In **Commit**, select **Latest from Branch** or **Specific Commit ID**: + +* If you selected **Latest from Branch**, in **Git Repository Branch**, enter the repo branch to use. For example, **master**. For master, you can also use a dot (`.`). +* If you selected **Specific Commit ID**, in **Commit ID**, enter the Git commit ID to use. + +In **Path to Terragrunt Root Module**, enter the folder where the root module is located. Enter `.` for root. + +### Option: Use Expressions for Script Repository + +You can also use expressions in the **Git Repository Branch** and **Path to Terragrunt Root Module** and have them replaced by Workflow variable values when the Terragrunt Provisioner is used by the Workflow. + +For example, a Workflow can have variables for **branch** and **path**: + +![](./static/add-terragrunt-configuration-files-29\.png) + +In **Script Repository**, you can enter variables as `${workflow.variables.branch}` and `${workflow.variables.path}`: + +![](./static/add-terragrunt-configuration-files-30\.png) + +When the Workflow is deployed, you are prompted to provide values for the Workflow variables, which are then applied to the **Script Repository** settings. 
+ +This allows the same Terragrunt Provisioner to be used by multiple Workflows, where each Workflow can use a different branch and path for the **Script Repository**. + +See [Set Workflow Variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). + +### Step 3: Select Secret Manager for Terragrunt Plan + +In **Terraform Plan Storage Configuration**, select a Secrets Manager to use for encrypting/decrypting and saving the Terraform plan file. + +See [Add a Secrets Manager](https://docs.harness.io/article/uuer539u3l-add-a-secrets-manager). + +A Terraform plan is a sensitive file that could be misused to alter cloud provider resources if someone has access to it. Harness avoids this issue by never passing the Terraform plan file as plain text. + +Harness only passes the Terraform plan between the Harness Manager and Delegate as an encrypted file using a Harness Secrets Manager. + +When the `terraform plan` command is run on the Harness Delegate, the Delegate encrypts the plan and saves it to the Secrets Manager you selected. The encrypted data is passed to the Harness Manager. + +When the plan is going to be applied, the Harness Manager passes the encrypted data to the Delegate. + +The Delegate decrypts the encrypted plan and applies it using the `terraform apply` command. + +### Option: Skip Terragrunt Refresh When Inheriting Terraform Plan + +To understand this setting, let's review some of the options available later when you will use this Terragrunt Infrastructure Provisioner with a [Terragrunt Provision](provision-using-the-terragrunt-provision-step.md) step in your Workflow. + +When you add that step, you can run it as a Terragrunt plan using the **Set as Terragrunt Plan** setting. 
+
+Next, you have the option of exporting the Terragrunt plan from one Terragrunt step (using the **Export Terragrunt Plan to Apply Step** setting) and inheriting the Terraform plan in the next Terraform step (using the **Inherit following configurations from Terraform Plan** setting).
+
+Essentially, these settings allow you to use your Terragrunt Provision step as a `terragrunt plan` ([Terraform plan dry run](https://www.terraform.io/docs/commands/plan.html)).
+
+During this inheritance, Harness runs a Terraform refresh, then a plan, and finally executes the new plan.
+
+If you do not want Harness to perform a refresh, enable the **Skip Terragrunt Refresh when inheriting Terraform plan** option in your Terragrunt Infrastructure Provisioner.
+
+When this setting is enabled, Harness will directly apply the plan without reconciling any state changes that might have occurred outside of Harness between `plan` and `apply`.
+
+This setting is available because a Terraform refresh is not always an idempotent command. It can have some side effects on the state even when no infrastructure was changed. In such cases, `terraform apply tfplan` commands might fail.
+
+### Step 4: Complete the Terragrunt Provisioner
+
+When you are done, the **Terragrunt Provisioner** will look something like this:
+
+![](./static/add-terragrunt-configuration-files-31\.png)
+
+Now you can use this provisioner in both Infrastructure Definitions and Workflows.
+
+### Next Steps
+
+* **Infrastructure Definitions** — Use the Terragrunt Infrastructure Provisioner to define a Harness Infrastructure Definition. You do this by mapping the script outputs from the Terraform module used by the Terragrunt configuration file to the required Harness Infrastructure Definition settings. Harness supports provisioning for many different platforms.
+See: [Map Dynamically Provisioned Infrastructure using Terragrunt](map-terragrunt-infrastructure.md). 
+* **Workflows** — Once you have created the Infrastructure Definition and added it to a Workflow, you add a Terragrunt Provisioner Step to the Workflow to run your Terragrunt and Terraform files and provision the infra.
+See: [Provision using the Terragrunt Provision Step](provision-using-the-terragrunt-provision-step.md).
+
diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/map-terragrunt-infrastructure.md b/docs/first-gen/continuous-delivery/terragrunt-category/map-terragrunt-infrastructure.md
new file mode 100644
index 00000000000..3aab9bbc1d0
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/terragrunt-category/map-terragrunt-infrastructure.md
@@ -0,0 +1,167 @@
+---
+title: Map Dynamically Provisioned Infrastructure using Terragrunt
+description: This topic describes how to use a Harness Terragrunt Infrastructure Provisioner to create a Harness Infrastructure Definition. When you select the Map Dynamically Provisioned Infrastructure option in…
+sidebar_position: 40
+helpdocs_topic_id: tphb27opry
+helpdocs_category_id: noj782z9is
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This topic describes how to use a Harness Terragrunt Infrastructure Provisioner to create a Harness Infrastructure Definition. When you select the **Map Dynamically Provisioned Infrastructure** option in an Infrastructure Definition, you select an Infrastructure Provisioner and then map its outputs to required settings.
+
+![](./static/map-terragrunt-infrastructure-17\.png)
+
+Once you map the outputs, you add the Infrastructure Definition to a Workflow as its deployment target.
+
+Finally, you add a Terragrunt Provision step to that Workflow's Pre-deployment section to provision that target infrastructure.
+
+When the Workflow runs, it provisions the infrastructure and then deploys to it.
+
+This topic describes how to map Terraform script outputs for all of the supported platforms. 
+
+If you just want to provision non-target infrastructure, you don't need to map outputs in the Infrastructure Definition. See [Terragrunt Provisioning with Harness](../concepts-cd/deployment-types/terragrunt-provisioning-with-harness.md) and [Provision using the Terragrunt Provision Step](provision-using-the-terragrunt-provision-step.md).
+
+
+### Before You Begin
+
+* Get an overview of how Harness supports Terragrunt: [Terragrunt Provisioning with Harness](../concepts-cd/deployment-types/terragrunt-provisioning-with-harness.md).
+* Ensure you have your Harness account settings prepared for Terragrunt: [Set Up Your Harness Account for Terragrunt](set-up-your-harness-account-for-terragrunt.md).
+* Create a Harness Terragrunt Infrastructure Provisioner: [Add Terragrunt Configuration Files](add-terragrunt-configuration-files.md).
+
+### Visual Summary
+
+Here is a visual summary of how you use your Terragrunt and Terraform files with Harness to provision target infra and then deploy to it:
+
+![](./static/map-terragrunt-infrastructure-18.png)
+
+Here's a 6-minute video walkthrough of Harness-Terragrunt integration:
+
+
+
+
+
+### Limitations
+
+Harness Terragrunt Infrastructure Provisioners are only supported in Canary and Multi-Service Workflows. For AMI/ASG and ECS deployments, Terragrunt Infrastructure Provisioners are also supported in Blue/Green Workflows.
+
+Harness has the same support for Terraform Infrastructure Provisioners.
+
+### Step 1: Add Terragrunt Configuration Files
+
+Follow the steps in these topics to set up your Harness account for Terragrunt and then connect Harness with your Terragrunt config files using a Terragrunt Infrastructure Provisioner:
+
+1. [Set Up Your Harness Account for Terragrunt](set-up-your-harness-account-for-terragrunt.md).
+2. [Add Terragrunt Configuration Files](add-terragrunt-configuration-files.md). 
+
+### Step 2: Add the Infrastructure Definition
+
+As noted above, ensure you have set up your Harness account for Terragrunt and added a Terragrunt Infrastructure Provisioner before creating the Infrastructure Definition.
+
+To use a Terragrunt Infrastructure Provisioner to create an Infrastructure Definition, do the following:
+
+1. In the same Harness Application where you created the Terragrunt Infrastructure Provisioner, in a new or existing Environment, click **Infrastructure Definition**. The **Infrastructure Definition** settings appear.
+2. In **Name**, enter the name for the Infrastructure Definition. You will use this name to select the Infrastructure Definition when you set up Workflows and Workflow Phases.
+3. In **Cloud Provider Type**, select the type of Cloud Provider to use to connect to the target platform, such as Amazon Web Services, Kubernetes Cluster, etc.
+4. In **Deployment Type**, select the same type of deployment as the Services you plan to deploy to this infrastructure.
+5. Click **Map Dynamically Provisioned Infrastructure**.
+6. In **Provisioner**, select your Terragrunt Infrastructure Provisioner.
+7. In the remaining settings, map the required fields to the script outputs of the Terraform module used by the Terragrunt configuration file of the Terragrunt Infrastructure Provisioner. The required fields are described in the option sections below.
+
+You map the Terraform script outputs using this syntax, where `exact_name` is the name of the output:
+
+
+```
+${terragrunt.exact_name}
+```
+When you map a Terraform script output to a Harness Infrastructure Definition setting, the variable for the output, `${terragrunt.exact_name}`, can be used anywhere in the Workflow that uses that Terragrunt Infrastructure Provisioner.
+
+### Option 1: Map a Platform Agnostic Kubernetes Cluster
+
+Provisioning Kubernetes is supported with the Kubernetes Cluster Cloud Provider and Google Cloud Platform Cloud Provider only. 
For Azure and AWS, use the Kubernetes Cluster Cloud Provider.
+
+Harness supports platform-agnostic Kubernetes cluster connections using its [Kubernetes Cluster Cloud Provider](https://docs.harness.io/article/l68rujg6mp-add-kubernetes-cluster-cloud-provider).
+
+When you set up an Infrastructure Definition using a Kubernetes Cluster Cloud Provider, you can map your Terraform script outputs to the required Infrastructure Definition settings.
+
+The agnostic Kubernetes deployment type requires mapping for the **Namespace** and **Release Name** settings.
+
+The following example shows the Terraform script outputs used for the mandatory platform-agnostic Kubernetes deployment type fields:
+
+![](./static/map-terragrunt-infrastructure-19.png)
+
+### Option 2: Map a GCP Kubernetes Infrastructure
+
+The GCP Kubernetes deployment type requires that you map the **Cluster Name** setting.
+
+Provisioning Kubernetes is supported with the Kubernetes Cluster Cloud Provider and Google Cloud Platform Cloud Provider only. For Azure and AWS, use the Kubernetes Cluster Cloud Provider.
+
+The following example shows the Terraform script outputs used for the mandatory GCP Kubernetes (GKE) deployment type field:
+
+![](./static/map-terragrunt-infrastructure-20.png)
+
+#### Cluster Name Format
+
+If the cluster is multi-zonal, ensure the resolved value of the Terraform output mapped to **Cluster Name** uses the format `region/name`.
+
+If the cluster is single-zone, ensure the resolved value of the Terraform output mapped to **Cluster Name** uses the format `zone/name`. If you use a `region/name` format, it will result in a 404 error.
+
+See [Types of clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters) from Google Cloud. 
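+
+As a hedged sketch only, the output in the Terraform module used by your terragrunt.hcl could build this value from the cluster's location attribute (the resource and output names here are illustrative, not required by Harness):
+
+
+```
+# Hypothetical GKE module output. location resolves to the cluster's zone or
+# region, so the mapped value matches the zone/name or region/name format above.
+output "cluster_name" {
+  value = "${google_container_cluster.primary.location}/${google_container_cluster.primary.name}"
+}
+```
+You would then map `${terragrunt.cluster_name}` to **Cluster Name** in the Infrastructure Definition.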
+
+### Option 3: Map an AWS AMI Infrastructure
+
+AMI deployments are the only type that supports Terraform and CloudFormation Infrastructure Provisioners in Blue/Green deployments.
+
+The AWS AutoScaling Group deployment type requires the Region and Base Auto Scaling Group fields. The following example shows the Terraform script outputs used for all of the fields:
+
+![](./static/map-terragrunt-infrastructure-21\.png)
+
+For detailed information on AMI deployments, see [AMI Basic Deployment](../aws-deployments/ami-deployments/ami-deployment.md). Here is what each of the output values represents:
+
+* **Region** - The target AWS region for the AMI deployment.
+* **Base Auto Scaling Group** - An existing Auto Scaling Group that Harness will copy to create a new Auto Scaling Group for deployment by an AMI Workflow. The new Auto Scaling Group deployed by the AMI Workflow will have unique max and min instances and desired count.
+* **Target Groups** - The target group for the load balancer that will support your Auto Scaling Group. The target group is used to route requests to the Auto Scaling Groups you deploy. If you do not select a target group, your deployment will not fail, but there will be no way to reach the Auto Scaling Group.
+* **Classic Load Balancers** - A classic load balancer for the Auto Scaling Group you will deploy.
+* For Blue/Green Deployments only:
+	+ **Stage Classic Load Balancers** - A classic load balancer for the stage Auto Scaling Group you will deploy.
+	+ **Stage Target Groups** - The staging target group to use for Blue Green deployments. The staging target group is used for initial deployment of the Auto Scaling Group and, once successful, the Auto Scaling Group is registered with the production target group (**Target Groups** selected above).
+
+Harness recommends you use Launch Templates instead of Launch Configurations. With Launch Templates, the AMI root volume size parameter is overwritten as specified in the Launch Template. 
This prevents conflicts between devices on a base Launch Configuration and the AMI Harness creates.
+
+### Option 4: Map an AWS ECS Infrastructure
+
+The ECS deployment type requires the **Region** and **Cluster** fields. The following example shows the Terraform script outputs used for the mandatory ECS deployment type fields:
+
+![](./static/map-terragrunt-infrastructure-22\.png)
+
+For information on ECS deployments, see [AWS ECS Deployments Overview](../concepts-cd/deployment-types/aws-ecs-deployments-overview.md).
+
+### Option 5: Map an AWS Lambda Infrastructure
+
+The Lambda deployment type requires the IAM Role and Region fields. The following example shows the Terraform script outputs used for the mandatory and optional Lambda deployment type fields:
+
+![](./static/map-terragrunt-infrastructure-23\.png)
+
+See [AWS Lambda Quickstart](https://docs.harness.io/article/wy1rjh19ej-aws-lambda-deployments).
+
+### Option 6: Map a Secure Shell (SSH) Infrastructure
+
+The Secure Shell (SSH) deployment type requires the **Region** and **Tags** fields. The following example shows the Terraform script outputs used for the mandatory SSH deployment type fields:
+
+![](./static/map-terragrunt-infrastructure-24\.png)
+
+See [Traditional (SSH) Quickstart](https://docs.harness.io/article/keodlyvsg5-traditional-ssh-quickstart).
+
+### Option 7: Map an Azure Web App
+
+Currently, this feature is behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. Feature Flags can only be removed for Harness Professional and Essentials editions. Once the feature is released to a general audience, it is available for Trial and Community Editions.
+
+The Azure Web App deployment requires the Subscription and Resource Group in the Infrastructure Definition.
+
+The Web App name and Deployment Slots are mapped in the Deployment Slot Workflow step.
+
+In the following example, `${terragrunt.webApp}` is used for both the Web App name and Target Slot. 
+
+![](./static/map-terragrunt-infrastructure-25\.png)
+
+See [Azure Web App Deployments Overview](../azure-deployments/azure-webapp-category/azure-web-app-deployments-overview.md).
+
+### Next Steps
+
+Now that the Infrastructure Definition is mapped to the Terraform outputs, the provisioned infrastructure can be used as a deployment target by a Harness Workflow. But the Terragrunt file and Terraform script must still be run to provision this infrastructure.
+
+To run the Terragrunt file in your Harness Infrastructure Provisioner and create the infra you defined in the Infrastructure Definition, you add a Terragrunt Provisioner step to the pre-deployment section of your Workflow.
+
+For steps on adding the Terragrunt Provisioner step, see [Provision using the Terragrunt Provision Step](provision-using-the-terragrunt-provision-step.md).
+
diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/perform-a-terragrunt-dry-run.md b/docs/first-gen/continuous-delivery/terragrunt-category/perform-a-terragrunt-dry-run.md
new file mode 100644
index 00000000000..62661474c7f
--- /dev/null
+++ b/docs/first-gen/continuous-delivery/terragrunt-category/perform-a-terragrunt-dry-run.md
@@ -0,0 +1,176 @@
+---
+title: Perform a Terragrunt Dry Run
+description: The Terragrunt Provision step in a Workflow can be executed as a dry run, just like running the terragrunt plan command.
+sidebar_position: 60
+helpdocs_topic_id: rbw96hdr1c
+helpdocs_category_id: noj782z9is
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+The Terragrunt Provision step in a Workflow can be executed as a dry run, just like running the `terragrunt plan` command.
+
+The dry run will refresh the state file and generate a plan, but not apply the plan. You can then set up an Approval step to follow the dry run, followed by the Terragrunt Provision step to inherit and apply the plan.
+
+This topic covers using the Terragrunt Provision step for dry runs only. 
For steps on applying plans without a dry run, see [Provision using the Terragrunt Provision Step](provision-using-the-terragrunt-provision-step.md).
+
+### Before You Begin
+
+This topic assumes you have read the following:
+
+* [Terragrunt Provisioning with Harness](../concepts-cd/deployment-types/terragrunt-provisioning-with-harness.md)
+* [Set Up Your Harness Account for Terragrunt](set-up-your-harness-account-for-terragrunt.md)
+* [Add Terragrunt Configuration Files](add-terragrunt-configuration-files.md)
+* [Map Dynamically Provisioned Infrastructure using Terragrunt](map-terragrunt-infrastructure.md)
+* [Provision using the Terragrunt Provision Step](provision-using-the-terragrunt-provision-step.md)
+
+### Visual Summary
+
+The following graphic shows a common use of a Terragrunt dry run in deployments.
+
+![](./static/perform-a-terragrunt-dry-run-02\.png)
+
+1. The dry run is used to verify the provisioning.
+2. An Approval step ensures that the Terragrunt plan is working correctly.
+3. The plan is run and the infrastructure is provisioned.
+4. The app is deployed to the provisioned infrastructure.
+
+In a Harness Workflow it looks something like this:
+
+![](./static/perform-a-terragrunt-dry-run-03.png)
+
+### Limitations
+
+The Terragrunt and Terraform plans are stored in the default Harness Secrets Manager as encrypted text. This is because plans often contain variables that store secrets.
+
+The plan size must not exceed the secret size limit for secrets in your default Secret Manager. AWS Secrets Manager has a limitation of 64KB. Other supported Secrets Managers support larger file sizes.
+
+See [Add a Secrets Manager](https://docs.harness.io/article/uuer539u3l-add-a-secrets-manager).
+
+### Step 1: Set Terragrunt Step as Plan
+
+This step assumes you are familiar with adding the Terragrunt Provision step. See [Provision using the Terragrunt Provision Step](provision-using-the-terragrunt-provision-step.md). 
+
+To perform a dry run of your Terragrunt Provision step, you simply select the **Set as Terragrunt Plan** option.
+
+![](./static/perform-a-terragrunt-dry-run-04\.png)
+
+That's it. Now this Terragrunt Provision step will run like a `terragrunt plan` command.
+
+The dry run will refresh the state file and generate a plan, but not apply it. You can then set up an Approval step to follow the dry run, followed by a Terragrunt Provision step to apply the plan.
+
+In the subsequent Terragrunt Provision step, you will select the **Inherit configurations from previous Terragrunt Provision step** option to apply the plan.
+
+This is just like running the `terragrunt plan` command before a `terragrunt apply` command.
+
+### Option: Export Terragrunt Plan to next Terragrunt Provision step
+
+This option supports modules with [Terraform version 12](https://www.terraform.io/upgrade-guides/0-12.html) only.
+
+When you use **Set as Terragrunt Plan** in the Terragrunt Provision step and then use **Inherit configurations from previous Terragrunt Provision step** in a subsequent Terragrunt Provision step, Harness does the following:
+
+* Harness runs the Terragrunt provision again and points to the plan, runs a Terragrunt refresh, then a plan, and finally executes the new plan.
+
+Technically, this is a different plan. If you want to use the actual plan because of security or audit requirements, use **Export Terragrunt Plan to next Terragrunt Provision step** in the previous Terragrunt Provision step along with **Set as Terragrunt Plan**.
+
+![](./static/perform-a-terragrunt-dry-run-05.png)
+
+##### Notes
+
+* If the **Export Terragrunt Plan to next Terragrunt Provision step** option is enabled in two consecutive Terragrunt Provision steps, the second Terragrunt Provision step overwrites the plan from the first Terragrunt Provision step. 
+* Harness uses the [Harness Secret Manager](https://docs.harness.io/article/uuer539u3l-add-a-secrets-manager) you have selected as your default in the export process. As a result, the size of the plan you can export is limited to the size of secret that Secret Manager allows.
+
+### Step 2: Add Approval Step
+
+Harness Workflow Approval steps can be done using Jira, ServiceNow, or the Harness UI. You can even use custom shell scripts. See [Approvals](https://docs.harness.io/article/0ajz35u2hy).
+
+Add the Approval step after the Terragrunt Provision step where you selected the **Set as Terragrunt Plan** option.
+
+1. To add the Approval step, click **Add Step**, and select **Approval**.
+2. In the **Approval** step, select whatever approval options you want, and then click **Submit**.
+
+Next, we'll add a Terragrunt Provision step after the Approval step to actually run the Terragrunt Infrastructure Provisioner files.
+
+If the Approval step takes a long time to be approved, there is the possibility that a new commit occurs in the Git repo containing your Terragrunt or Terraform files. To avoid this problem, when the Workflow performs the dry run, it saves the commit ID of the script file. Later, after the approval, the Terragrunt Provision step will use the commit ID to ensure that it executes the script that was dry run.
+
+### Step 3: Add Terragrunt Step to Apply Plan
+
+For the Terragrunt Provision step that actually runs the Terragrunt Infrastructure Provisioner script (`terragrunt apply`), all you need to do is select the **Inherit configurations from previous Terragrunt Provision step** option.
+
+When you select this option, the Terragrunt Provision step inherits the settings of the Terragrunt Provision step that preceded it.
+
+After the Approval step, click **Add Step**.
+
+Select or add a **Terragrunt Provision** step.
+
+In **Name**, enter a name for the step to indicate that it will perform the provisioning. For example, **Apply Terragrunt Provisioning**. 
+ +In **Provisioner**, select the Harness Terragrunt Infrastructure Provisioner you want to run. This is the same Terragrunt Infrastructure Provisioner you selected in the previous Terragrunt Provision step. + +Select the **Inherit configurations from previous Terragrunt Provision step** option. + +![](./static/perform-a-terragrunt-dry-run-06\.png) + +Click **Submit**. + +You do not need to enter any more settings. The Terragrunt Provision step inherits the settings of the Terragrunt Provision step that preceded it. + +Your Workflow now looks something like this: + +![](./static/perform-a-terragrunt-dry-run-07.png) + +### Step 4: Deploy + +Deploy your Workflow and see the `terragrunt plan` executed in the first Terragrunt Provision step. + + +``` +Generating ************** plan + +terragrunt plan -out=tfplan -input=false -var-file="/opt/harness-delegate/./terragrunt-working-dir/kmpySmUISimoRrJL6NL73w/235638175/terragrunt-script-repository/variables/local_variables/**************.tfvars" +Refreshing Terraform state in-memory prior to plan... +The refreshed state will be used to calculate this plan, but will not be +persisted to local or remote state storage. + +... + +This plan was saved to: tfplan + +To perform exactly these actions, run the following command to apply: + ************** apply "tfplan" + + +Generating json representation of tfplan + +terragrunt show -json tfplan + +Json representation of tfplan is exported as a variable ${**************Apply.tfplan} + +Finished terragrunt plan task +``` +Next, approve the Approval step. + +Finally, see the `terragrunt apply` executed as part of the final Terragrunt Provision step. + + +``` +terragrunt apply -input=false tfplan +null_resource.delaymodule3: Creating... +null_resource.delaymodule3: Provisioning with 'local-exec'... +null_resource.delaymodule3 (local-exec): Executing: ["/bin/sleep" "5"] +null_resource.delaymodule3: Creation complete after 5s [id=932665668643318315] + +Apply complete! 
Resources: 1 added, 0 changed, 0 destroyed. + +... + +State path: **************.tfstate + +Outputs: + +clusterName = us-central1-a/harness-test +sleepoutputModule3 = 10 +versionModule3 = 5 +terragrunt output --json > /opt/harness-delegate/./terragrunt-working-dir/kmpySmUISimoRrJL6NL73w/235638175/terragrunt-script-repository/prod-no-var-required/**************-235638175.tfvars +Finished terragrunt apply task +``` +### Next Steps + +Removing provisioned infrastructure is a common Terragrunt and Terraform-related task. You can add this task to your Harness Workflow and automate it. See [Remove Provisioned Infra with Terragrunt Destroy](remove-provisioned-infra-with-terragrunt-destroy.md) and [Remove Provisioned Infra with Terraform Destroy](../terraform-category/terraform-destroy.md). + diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/provision-using-the-terragrunt-provision-step.md b/docs/first-gen/continuous-delivery/terragrunt-category/provision-using-the-terragrunt-provision-step.md new file mode 100644 index 00000000000..0cd3569e636 --- /dev/null +++ b/docs/first-gen/continuous-delivery/terragrunt-category/provision-using-the-terragrunt-provision-step.md @@ -0,0 +1,432 @@ +--- +title: Provision using the Terragrunt Provision Step +description: This topic describes how to provision the target infrastructure for a deployment using the Workflow Terragrunt Provisioner step. +sidebar_position: 50 +helpdocs_topic_id: jbzxpljhlo +helpdocs_category_id: noj782z9is +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can use Terragrunt in Harness to provision the target infrastructure for a deployment or any other resources. Typically, Harness users provision the target infrastructure for a deployment and then deploy to it in the same Workflow. + +This topic describes how to provision the target infrastructure for a deployment using the Workflow **Terragrunt** **Provisioner** step. 
The same information applies to provisioning non-target resources.
+
+You use the Terragrunt Provisioner step in a Workflow to run the Terragrunt configuration file (and related Terraform scripts) you added in a [Harness Terragrunt Infrastructure Provisioner](add-terragrunt-configuration-files.md).
+
+During deployment, the Terragrunt Provisioner step provisions the target infrastructure and then the Workflow deploys to it.
+
+The Harness Terragrunt Infrastructure Provisioner is supported in Canary and Multi-Service Workflows only. For AMI/ASG and ECS deployments, the Terragrunt Infrastructure Provisioner is also supported in Blue/Green Workflows.
+
+### Before You Begin
+
+* Get an overview of how Harness supports Terragrunt: [Terragrunt Provisioning with Harness](../concepts-cd/deployment-types/terragrunt-provisioning-with-harness.md).
+* Ensure you have your Harness account settings prepared for Terragrunt: [Set Up Your Harness Account for Terragrunt](set-up-your-harness-account-for-terragrunt.md).
+* Create a Harness Terragrunt Infrastructure Provisioner: [Add Terragrunt Configuration Files](add-terragrunt-configuration-files.md).
+* If you are provisioning the target infrastructure for a deployment, you need to map Terraform outputs to the Infrastructure Definition used by the Workflow. See [Map Dynamically Provisioned Infrastructure using Terragrunt](map-terragrunt-infrastructure.md).
+
+In addition, the following related features are documented in other topics:
+
+* **Terragrunt Dry Run**: the Terragrunt Provisioner step in the Workflow can be executed as a dry run, just like running the `terragrunt plan` command. The dry run will refresh the state file and generate a plan. See [Perform a Terragrunt Dry Run](perform-a-terragrunt-dry-run.md).
+* **Terragrunt Destroy**: see [Remove Provisioned Infra with Terragrunt Destroy](remove-provisioned-infra-with-terragrunt-destroy.md). 
+
+### Visual Summary
+
+Here is a visual summary of how you use your Terragrunt and Terraform files with Harness to provision target infra and then deploy to it:
+
+![](./static/provision-using-the-terragrunt-provision-step-08\.png)
+
+For step 1, see [Add Terragrunt Configuration Files](add-terragrunt-configuration-files.md). For step 2, see [Map Dynamically Provisioned Infrastructure using Terragrunt](map-terragrunt-infrastructure.md).
+
+Here's a 6-minute video walkthrough of Harness-Terragrunt integration:
+
+
+
+
+
+### Step 1: Add Environment to Workflow
+
+Before creating or changing the Workflow settings to use a Terragrunt Infrastructure Provisioner, you need an Infrastructure Definition that uses the Terragrunt Infrastructure Provisioner. Setting up this Infrastructure Definition is covered in [Map Dynamically Provisioned Infrastructure using Terragrunt](map-terragrunt-infrastructure.md).
+
+Next, when you create your Workflow, you add the Environment containing the mapped Infrastructure Definition to your Workflow settings.
+
+Harness Infrastructure Provisioners are only supported in Canary and Multi-Service deployment types. For AMI deployments, Infrastructure Provisioners are also supported in Blue/Green deployments. If you are creating a Blue/Green Workflow for AMI, you can select the Environment and Infrastructure Definition in the Workflow setup settings.
+
+In your Harness Application, click **Workflows**.
+
+Click **Add Workflow**. The Workflow settings appear.
+
+Enter a name and description for the Workflow.
+
+In **Workflow Type**, select **Canary**.
+
+In **Environment**, select the Environment that has the Terragrunt Provisioner set up in one of its Infrastructure Definitions.
+
+Click **SUBMIT**. The new Workflow is created.
+
+By default, the Workflow includes a **Pre-deployment Steps** section. This is where you will add a step that uses your Terragrunt Provisioner. 
+
+Infrastructure Definitions are added in Canary Workflow *Phases*, in the **Deployment Phases** section. You will add the Infrastructure Definition that uses your Terragrunt Infrastructure Provisioner when you add the Canary Phases, later in this topic.
+
+### Step 2: Add Terragrunt Step to Pre-deployment Steps
+
+To provision the infrastructure in your Terragrunt Infrastructure Provisioner, you add the **Terragrunt Provision** step in **Pre-deployment Steps**.
+
+In your Workflow, in **Pre-deployment Steps**, click **Add Step**.
+
+Select **Terragrunt Provision**. The **Terragrunt Provision** settings appear.
+
+In **Name**, enter a name for the step. Use a name that describes the infrastructure the step will provision.
+
+In **Provisioner**, select the Harness Terragrunt Infrastructure Provisioner you set up for provisioning your target infrastructure. This is covered in [Add Terragrunt Configuration Files](add-terragrunt-configuration-files.md).
+
+In **Timeout**, enter how long Harness should wait to complete the Terragrunt Provisioner step before failing the Workflow. Provisioning can be time-consuming, so use at least `5m`.
+
+Click **Next**. The **Module Configuration** settings appear.
+
+#### Terragrunt Module Settings
+
+Specify the Terraform modules you want Terragrunt to use.
+
+You are telling Harness where to locate your terragrunt.hcl file. 
The terragrunt.hcl file itself points to a Terraform module using the `source` parameter, like this: + + +``` +locals { +} + +terraform { +// source = "git::git@github.com:Tathagat-289/terraformResources.git//module3" + source = "github.com/Tathagat-289/terraformResources//module3" +} + +# Include all settings from the root terragrunt.hcl file +include { + path = find_in_parent_folders() +} + +inputs = { + tfmodule3 = "tfmodule4" + slmodule3 = "sleepmodule4" + tfv = "tfversion1" + sl = "sl1" +} +``` +You have two options: + +* **Apply All Modules:** Harness will use all of the terragrunt.hcl files starting from the folder you specify in **Path to Module**. +When you select **Apply All Modules**, the [Export Terragrunt Plan to next Terragrunt Provision step](#export_terragrunt_plan_to_next_terragrunt_provision_step) option is disabled. +When you select **Apply All Modules**, you might want to use [Backend Configuration (Remote state)](#option_backend_configuration_remote_state) to store your state file. Harness will not sync with the current state when Apply All Modules is selected. Instead, Harness simply applies the terragrunt.hcl files. +* **Specify Specific Module:** Harness will use the terragrunt.hcl file in the folder you specify in **Path to Module**. + +You can use [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) in **Path to Module**. + +#### Terragrunt Plan Configuration + +Essentially, these settings allow you to use your Terragrunt Provision steps as a Terragrunt plan dry run. + +Typically, you add a Harness [Approval step](https://docs.harness.io/category/4edbfn50l8) between the Terragrunt Provision step that runs the plan and the Terragrunt Provision step that applies it. 
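+
+The dry-run pattern described above — one Terragrunt Provision step that only plans, an Approval step, then a second Terragrunt Provision step that applies the exported plan — corresponds roughly to the following Terragrunt commands (a sketch; the flags match those shown in the deployment example later in this topic):
+
+```
+# Step with "Set as Terragrunt Plan" and "Export Terragrunt Plan..." selected:
+terragrunt plan -out=tfplan -input=false
+
+# (A Harness Approval step gates the Workflow here)
+
+# Step with "Inherit configurations from previous Terragrunt Provision step" selected:
+terragrunt apply -input=false tfplan
+```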
+ +##### Inherit configurations from previous Terragrunt Provision step + +Select this option if there is a previous Terragrunt Provision step in the Workflow with **Export Terragrunt Plan to next Terragrunt Provision** selected. + +The **Inherit configurations from previous Terragrunt Provision step** option only works if a preceding Terragrunt Provision step uses **Export Terragrunt Plan to next Terragrunt Provision**. + +##### Set as Terragrunt Plan + +Run the step as a Terragrunt plan. + +When this option is selected, the **Export Terragrunt Plan to next Terragrunt Provision step** option becomes available. + +##### Export Terragrunt Plan to next Terragrunt Provision step + +This option supports [Terraform version 12](https://www.terraform.io/upgrade-guides/0-12.html) only. + +Select this option to run this Terragrunt Provision step as a Terragrunt plan and then export it to the next Terragrunt Provision step in the Workflow to be applied. + +The next Terragrunt Provision step must have the **Inherit configurations from previous Terragrunt Provision step** option selected to apply the plan. + +The exported plan is stored in the [Harness Secret Manager](https://docs.harness.io/article/uuer539u3l-add-a-secrets-manager) you have selected as your **default**. + +### Step 3: Input Values + +Input values are where you provide values for the Terraform input variables in the Terraform module (config.tf) that your Terragrunt config file uses. + +For example, here's a Terraform config.tf file with variables for access and secret key: + + +``` +variable "access_key" {} + +variable "secret_key" {} + +provider "aws" { + access_key = var.access_key + secret_key = var.secret_key + region = "us-east-1" +} +... +``` +You provide values for these input variables in the **Use tfvar Files** and/or **Inline Values** sections. You can use either one, or a mix of both. + +#### Use tfvar Files + +Use the **Use tfvar files** option for a variable definitions file. You can use inline or remote tfvar files. 
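+
+For example, a variable definitions file supplying the config.tf inputs shown above might look like this (the values are placeholders, not real credentials):
+
+```
+# terraform.tfvars (illustrative): values for the input variables in config.tf
+access_key = "<YOUR_AWS_ACCESS_KEY>"
+secret_key = "<YOUR_AWS_SECRET_KEY>"
+```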
+ +##### Inline tfvar Files + +In **File Path**, enter the path to the terraform.tfvars file from the root of the repo you specified in the Terragrunt Infrastructure Provisioner you selected. Enter the full path from the root of the repo to the tfvar file. + +For example, if the file is located from the root at `variables/local_variables/terraform.tfvars` you would enter `variables/local_variables/terraform.tfvars`. You can enter multiple file paths separated by commas. + +You can use [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) in **File Path**. + +##### Remote tfvar Files + +In **Source Repository**, select the Harness [Source Repo Provider](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers) that connects to the repo where your tfvar file is. + +Select **Commit ID** or **Branch.** + +* For **Commit ID**, enter the git commit ID containing the tfvar version you want to use. +* For **Branch**, enter the name of the branch where the tfvar file is located. + +In **File Folder Path**, enter the full path from the root of the repo to the tfvar file. You can enter multiple file paths separated by commas. + +#### Inline Values + +You can enter inline values for the inputs in the Terraform config.tf file. + +For example, here's a Terraform config.tf file with variables for access and secret key: + + +``` +variable "access_key" {} + +variable "secret_key" {} + +provider "aws" { + access_key = var.access_key + secret_key = var.secret_key + region = "us-east-1" +} +... +``` +In **Inline Values**, you can enter values for those inputs or select Harness secrets for the values: + +![](./static/provision-using-the-terragrunt-provision-step-09\.png) + +See [Use Encrypted Text Secrets](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets). 
+ +### Option: Backend Configuration (Remote state) + +In **Backend Configuration (Remote state)**, enter values for each backend config (remote state variable) in the Terragrunt config (.hcl) or Terraform script (config.tf) file. + +For example, here's a config.tf file with a backend configuration and the corresponding values in Harness: + +![](./static/provision-using-the-terragrunt-provision-step-10\.png) + +Depending on the platform where you store your remote state data, Terragrunt and Terraform allow you to pass many different credentials and configuration settings, such as access and secret keys. For example, see the settings available for [AWS S3](https://www.terraform.io/docs/backends/types/s3.html#configuration) from Terraform and review [Keep your remote state configuration DRY](https://terragrunt.gruntwork.io/docs/features/keep-your-remote-state-configuration-dry/) from Terragrunt. + +### Option: Resource Targeting + +In **Additional Settings**, you can use the **Target** setting to target one or more specific modules in your Terraform script, just like using the `terraform plan -target` command. See [Resource Targeting](https://www.terraform.io/docs/commands/plan.html#resource-targeting) from Terraform. + +For example, in the following image you can see the Terraform script has one resource and two modules, and the **Targets** setting displays them as potential targets. + +![](./static/provision-using-the-terragrunt-provision-step-11\.png) + +If you have multiple modules in your script and you do not select one in **Targets**, all modules are used. + +You can also use Workflow variables as your targets. For example, you can create a Workflow variable named **module** and then enter the variable `${workflow.variables.module}` in the **Targets** field. 
When you deploy the Workflow, you are prompted to provide a value for the variable: + +![](./static/provision-using-the-terragrunt-provision-step-12\.png) + +See [Set Workflow Variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template). + +### Option: Workspaces + +Harness supports Terraform [workspaces](https://www.terraform.io/docs/state/workspaces.html). A Terraform workspace is a logical representation of one of your infrastructures, such as Dev, QA, Stage, or Production. + +Workspaces are useful when testing changes before moving to a production infrastructure. To test the changes, you create separate workspaces for Dev and Production. + +A workspace is really a different state file. Each workspace isolates its state from other workspaces. For more information, see [When to use Multiple Workspaces](https://www.terraform.io/docs/state/workspaces.html#when-to-use-multiple-workspaces) from Hashicorp. + +Here is an example script where a local value names two workspaces, **default** and **production**, and associates different instance counts with each: + + +``` +locals { + counts = { + "default"=1 + "production"=3 + } +} + +resource "aws_instance" "my_service" { + ami="ami-7b4d7900" + instance_type="t2.micro" + count="${lookup(local.counts, terraform.workspace, 2)}" + tags = { + Name = "${terraform.workspace}" + } +} +``` +In this script, the count is looked up using the workspace name (`terraform.workspace`), and the same variable is used to set the **Name** tag. + +Harness will pass the workspace name you provide to the `terraform.workspace` variable, thus determining the count. If you provide the name **production**, the count will be **3**. + +In the **Workspace** setting, you can simply select the name of the workspace to use. + +![](./static/provision-using-the-terragrunt-provision-step-13\.png) + +You can also use a Workflow variable to enter the name in **Workspace**. 
+ +Later, when the Workflow is deployed, you can specify the name for the Workflow variable: + +![](./static/provision-using-the-terragrunt-provision-step-14\.png) + +This allows you to specify a different workspace name each time the Workflow is run. + +You can even set up a Harness Trigger that sets the workspace name used by the Workflow. + +This Trigger can then be run in response to different events, such as a Git push. For more information, see [Passing Variables into Workflows and Pipelines from Triggers](https://docs.harness.io/article/revc37vl0f-passing-variable-into-workflows). + +When rollbacks occur, Harness will roll back the Terraform state to the previous version of the same workspace. + +### Option: Select Delegate + +In **Delegate Selector**, you can select the specific Harness Delegate(s) to execute the Terragrunt Provision step. + +For more information on Delegate Selectors, see [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors). + +You can even add a [Workflow variable](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) for the Delegate Selector and then use an expression in the **Delegate Selectors** field. When you deploy the Workflow, you will provide the name of the Delegate Selector. + +For more information, see [Add Workflow Variables](https://docs.harness.io/article/m220i1tnia-workflow-configuration#add_workflow_variables) and [Passing Variables into Workflows and Pipelines from Triggers](https://docs.harness.io/article/revc37vl0f-passing-variable-into-workflows). + +### Option: Skip Terragrunt Rollback + +When you add a **Terragrunt Provision** step to the Pre-deployment section of a Workflow, Harness automatically adds a **Terragrunt Rollback** step to the **Rollback Steps** of the Workflow Phase. 
+ +Enable **Skip rollback of provisioned infrastructure on failure** to prevent Harness from automatically adding a **Terragrunt Rollback** step to the **Rollback Steps** of the Workflow Phase. + +### Option: Add Environment Variables + +In **Terragrunt** **Environment Variables**, you can reference additional environment variables in the Terraform script ultimately used by the Terragrunt Infrastructure Provisioner. These are in addition to any variables already in the script. + +Click **Add** and enter a name, type, and value for the environment variable. For example: **TF\_LOG**, **Text**, and `TRACE`. + +If you select Encrypted Text, you must select an existing Harness [Encrypted Text secret](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets). + +You can use Harness [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) and [expression variables](https://docs.harness.io/article/9dvxcegm90-variables) for the name and value. + +Environment variables can also be deleted using the Terragrunt Destroy step. See [Remove Provisioned Infra with Terragrunt Destroy](remove-provisioned-infra-with-terragrunt-destroy.md). + +### Step 4: Add Infrastructure Definition to Phases + +Now that the Workflow **Pre-deployment** section has your Terragrunt Provisioner step added, you need to add the target Infrastructure Definition where the Workflow will deploy. + +This is the same Infrastructure Definition where you mapped your Terragrunt Infrastructure Provisioner outputs, as described in [Map Dynamically Provisioned Infrastructure using Terragrunt](map-terragrunt-infrastructure.md). + +For Canary Workflows, Infrastructure Definitions are added in Phases, in the **Deployment Phases** section. + +For AMI deployments, Terragrunt Infrastructure Provisioners are also supported in Blue/Green Workflows. 
If you are creating a Blue/Green Workflow for AMI, you can select the Environment and Infrastructure Definition in the Workflow setup settings. + +In the **Deployment Phases** section, click **Add Phase**. The Workflow Phase settings appear. + +In **Service**, select the Harness Service to deploy. + +In **Infrastructure Definition**, select the target Infrastructure Definition where the Workflow will deploy. This is the same Infrastructure Definition where you mapped your Terragrunt Infrastructure Provisioner outputs, as described in [Map Dynamically Provisioned Infrastructure using Terragrunt](map-terragrunt-infrastructure.md). + +Here is an example: + +![](./static/provision-using-the-terragrunt-provision-step-15\.png) + +Click **Submit**. Use the same Infrastructure Definition for the remaining phases in your Canary Workflow. + +Once you are done, your Workflow is ready to deploy. Let's look at an example below. + +### Example: Terragrunt Deployment + +This section shows the deployment steps for a Workflow using the Terragrunt Provision step and deploying to a Kubernetes cluster. + +![](./static/provision-using-the-terragrunt-provision-step-16\.png) + +In the **Pre-Deployment** section, two **Terragrunt** **Provision** steps are executed. When you click each step you can see the Terragrunt commands executed in **Details**. + +The first Terragrunt Provision step creates a plan using the Terragrunt config files and the source Terraform module. The plan is encrypted and stored in a Secrets Manager. + + +``` +Generating ************** plan + +terragrunt plan -out=tfplan -input=false -var-file="/opt/harness-delegate/./terragrunt-working-dir/kmpySmUISimoRrJL6NL73w/235638175/terragrunt-script-repository/variables/local_variables/**************.tfvars" +Refreshing Terraform state in-memory prior to plan... +The refreshed state will be used to calculate this plan, but will not be +persisted to local or remote state storage. 
+``` +The second Terragrunt Provision step inherits the plan, decrypts it, and applies the plan: + + +``` +Decrypting ************** plan before applying + + +Using approved ************** plan + +Finished terragrunt plan task +... +terragrunt apply -input=false tfplan +null_resource.delaymodule3: Creating... +null_resource.delaymodule3: Provisioning with 'local-exec'... +null_resource.delaymodule3 (local-exec): Executing: ["/bin/sleep" "5"] +null_resource.delaymodule3: Creation complete after 5s [id=932665668643318315] + +Apply complete! Resources: 1 added, 0 changed, 0 destroyed. + +The state of your infrastructure has been saved to the path +below. This state is required to modify and destroy your +infrastructure, so keep it safe. To inspect the complete state +use the `************** show` command. + +State path: **************.tfstate + +Outputs: + +clusterName = us-central1-a/harness-test +sleepoutputModule3 = 10 +versionModule3 = 5 +``` +Finally, in the **Canary Deployment** step, the workload steady state is reached and the deployment is considered a success: + + +``` +kubectl --kubeconfig=config get events --namespace=default --output=custom-columns=KIND:involvedObject.kind,NAME:.involvedObject.name,MESSAGE:.message,REASON:.reason --watch-only + +kubectl --kubeconfig=config rollout status Deployment/harness-example-deployment-canary --namespace=default --watch=true + + +Status : Waiting for deployment "harness-example-deployment-canary" rollout to finish: 0 of 1 updated replicas are available... 
+Event : Pod harness-example-deployment-canary-5b4cb547b-dmv5k Pulling image "registry.hub.docker.com/library/nginx:stable-perl" Pulling +Event : Pod harness-example-deployment-canary-5b4cb547b-dmv5k Successfully pulled image "registry.hub.docker.com/library/nginx:stable-perl" Pulled +Event : Pod harness-example-deployment-canary-5b4cb547b-dmv5k Created container harness-example Created +Event : Pod harness-example-deployment-canary-5b4cb547b-dmv5k Started container harness-example Started + +Status : deployment "harness-example-deployment-canary" successfully rolled out + +Done. +``` +### Notes + +The following notes discuss rollback of deployments that use Terragrunt Infrastructure Provisioners. + +#### Deployment Rollback + +If you have successfully deployed Terraform modules and on the next deployment there is an error that initiates a rollback, Harness will roll back the provisioned infrastructure to the previous, successful version of the Terraform state. + +Harness will not increment the serial in the state; instead, it performs a hard rollback to the exact version of the state provided. + +#### Rollback Limitations + +If you have already deployed two modules, module1 and module2, and then an attempt to deploy module3 fails, Harness will roll back to the successful state of module1 and module2. + +However, let's look at the situation where module3 succeeds and now you have module1, module2, and module3 deployed. If the next deployment fails, the rollback will only roll back to the Terraform state with module3 deployed. Module1 and module2 were not in the previous Terraform state, so the rollback excludes them. 
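+
+For context, the serial mentioned above is the increasing counter near the top of the Terraform state file; a hard rollback restores a state snapshot with its original value. An illustrative fragment (all values hypothetical):
+
+```
+{
+  "version": 4,
+  "terraform_version": "0.13.3",
+  "serial": 7,
+  "lineage": "<lineage-uuid>",
+  "outputs": {},
+  "resources": []
+}
+```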
+ +### Next Steps + +Now that you're familiar with provisioning using the Terragrunt Provision step, the following topics cover features to help you extend your Harness Terragrunt deployments: + +* [Perform a Terragrunt Dry Run](perform-a-terragrunt-dry-run.md) +* [Remove Provisioned Infra with Terragrunt Destroy](remove-provisioned-infra-with-terragrunt-destroy.md) + diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/remove-provisioned-infra-with-terragrunt-destroy.md b/docs/first-gen/continuous-delivery/terragrunt-category/remove-provisioned-infra-with-terragrunt-destroy.md new file mode 100644 index 00000000000..f38c61df45d --- /dev/null +++ b/docs/first-gen/continuous-delivery/terragrunt-category/remove-provisioned-infra-with-terragrunt-destroy.md @@ -0,0 +1,149 @@ +--- +title: Remove Provisioned Infra with Terragrunt Destroy +description: You can add a Terragrunt Destroy Workflow step to remove any provisioned infrastructure, just like running the terragrunt run-all destroy command. +sidebar_position: 70 +helpdocs_topic_id: 1zmz2vtdo2 +helpdocs_category_id: noj782z9is +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can add a **Terragrunt** **Destroy** Workflow step to remove any provisioned infrastructure, just like running the `terragrunt run-all destroy` command. See [destroy](https://terragrunt.gruntwork.io/docs/features/execute-terraform-commands-on-multiple-modules-at-once/#the-run-all-command) from Terragrunt. + +The Terragrunt **Destroy** step is independent of any other Terragrunt provisioning step in a Workflow. It is not restricted to removing the infrastructure deployed in its Workflow. It can remove any infrastructure you have provisioned using a Terragrunt Infrastructure Provisioner. 
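+
+As a mental model, the step behaves like running the destroy across every module under a parent terragrunt.hcl (sketch; run from the folder containing that file):
+
+```
+# Destroys the resources of each module found in the tree below this folder
+terragrunt run-all destroy
+```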
+ + +### Before You Begin + +This topic assumes you have read the following: + +* [Terragrunt Provisioning with Harness](../concepts-cd/deployment-types/terragrunt-provisioning-with-harness.md) +* [Set Up Your Harness Account for Terragrunt](set-up-your-harness-account-for-terragrunt.md) +* [Add Terragrunt Configuration Files](add-terragrunt-configuration-files.md) +* [Map Dynamically Provisioned Infrastructure using Terragrunt](map-terragrunt-infrastructure.md) +* [Provision using the Terragrunt Provision Step](provision-using-the-terragrunt-provision-step.md) +* [Perform a Terragrunt Dry Run](perform-a-terragrunt-dry-run.md) + +### Review: What Gets Destroyed? + +The `terragrunt run-all destroy` command removes the infrastructure provisioned by the Terraform modules you specify. + +When you create a Harness Terragrunt Infrastructure Provisioner, you specify the Terragrunt config file (.hcl). That file references Terraform modules that Harness will use for provisioning. + +When you destroy the provisioned infrastructure, you specify the Terragrunt Infrastructure Provisioner for Harness to use to locate the Terragrunt file and Terraform module. + +There are two ways to use the Terragrunt Destroy step: + +* Destroy the infrastructure provisioned by the last successful use of a specific Terragrunt Infrastructure Provisioner, via a **Terragrunt** **Provision** step. Harness will use the same input values and backend configuration (Remote state) set up in the Terragrunt Infrastructure Provisioner. +* Destroy the infrastructure by entering new input values and backend configuration (Remote state) for a specific Terragrunt Infrastructure Provisioner. + +Which method you use is determined by the **Inherit configurations from previous Terragrunt Provision step** option in the Terragrunt Destroy step. 
+ +![](./static/remove-provisioned-infra-with-terragrunt-destroy-01\.png) + +When the Terragrunt Provision step is executed, Harness saves the **Inline Values** and **Backend Configuration** values using a combination of the following: + +* **Infrastructure Provisioner** used. +* **Environment** used for the Workflow. +* **Workspace** used (or `default` if no workspace was specified). + +You can decide to use these by selecting the **Inherit configurations from previous Terragrunt Provision step** option, or provide your own **Inline Values** and **Backend Configuration** values by not selecting this option. + +#### Inherit configurations from previous Terragrunt Provision step + +When you use the Terragrunt Destroy step, you specify the Provisioner and Workspace to use, and Harness gets the **Inline Values** and **Backend Configuration** values from the last **successful** execution of that Provisioner. + +When Terragrunt Destroy is run, it uses the same combination to identify which **Inline Values** and **Backend Configuration** values to use. You simply need to provide the Provisioner and Workspace. + +#### Specify Backend Configuration (Remote State) + +You can specify a Backend Configuration (Remote State) to use to identify the infrastructure to destroy. + +You simply need to specify a Terragrunt Infrastructure Provisioner so that Harness knows where to look for the files. + +In Terragrunt Destroy, you *disable* the **Inherit configurations from previous Terragrunt Provision step** option and then provide the input values and remote state settings to use. + +### Step 1: Add Terragrunt Destroy Step + +In the **Post-deployment Steps** of the Workflow, click **Add Step**, and then select **Terragrunt** **Destroy**. + +The Terragrunt Destroy settings appear. + +### Step 2: Select Provisioner and Workspace + +Select the Terragrunt Infrastructure Provisioner and Workspace that were used to provision the infrastructure you want to destroy. 
+ +Typically, this is the Terragrunt Provisioner and Workspace used in the **Pre-deployment Steps**. + +### Option: Select Delegate + +In **Delegate Selector**, enter the Delegate Selector for the Delegate you want to use to execute this step. Typically, this is the same Selector used to select a Delegate in the **Terragrunt** **Provision** step. + +See [Select Delegates with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors). + +### Option: Inherit configurations from previous Terragrunt Provision Step + +As described in [Review: What Gets Destroyed?](#review_what_gets_destroyed), select this option to destroy the infrastructure provisioned by the last successful **Terragrunt** **Provision** step in the Workflow. + +If you select this option, then the **Input Values** and **Backend Configuration** settings are disabled. + +### Option: Set as Terragrunt Destroy Plan and Export + +Select this option to make this Terragrunt Destroy step a Terragrunt plan. This is useful when you want to use an Approval step to approve Terragrunt Destroy steps. + +This is the same as running `terragrunt run-all destroy` in Terragrunt. + +If you select this option, Harness generates a plan to destroy all the known resources. + +Later, when you want to actually destroy the resources, you add another Terragrunt Destroy step and select the option **Inherit following configurations from Terragrunt Destroy Plan**. + +The **Inherit following configurations from Terragrunt Destroy Plan** option only appears if the **Set as Terragrunt Destroy Plan and Export** option was set in the preceding Terragrunt Destroy step. + +The Terragrunt Plan is stored in a Secrets Manager as encrypted text. + +See [Add a Secrets Manager](https://docs.harness.io/article/uuer539u3l-add-a-secrets-manager). + +#### Terragrunt Plan Size Limit + +The Terragrunt Plan is stored in the default Harness Secrets Manager as encrypted text. 
This is because plans often contain variables that store secrets. + +The Terragrunt plan size must not exceed the secret size limit for secrets in your default Secret Manager. AWS Secrets Manager has a limit of 64KB. Other supported Secrets Managers support larger sizes. + +See [Add a Secrets Manager](https://docs.harness.io/article/uuer539u3l-add-a-secrets-manager). + +#### Terragrunt Destroy Plan Output Variable + +If you select the **Set as Terragrunt Destroy Plan and Export** option, you can display the output of the plan using the variable expression `${terraformDestroy.tfplan}`. + +For example, you can display the plan output in a [Shell Script](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) step. + +The plan is associated with the Terraform script source of the Terragrunt config file in the Terragrunt Infrastructure Provisioner. + +### Option: Inherit following configurations from Terragrunt Destroy Plan + +Select this option to apply the previous Terragrunt Destroy step if that step has the **Set as Terragrunt Destroy Plan and Export** option enabled. + +As noted above in [Option: Set as Terragrunt Destroy Plan and Export](#option_set_as_terragrunt_destroy_plan_and_export), the **Inherit following configurations from Terragrunt Destroy Plan** option only appears if the **Set as Terragrunt Destroy Plan and Export** option was set in the preceding Terragrunt Destroy step. + +### Step 3: Input Values + +Enter the input values to use when destroying the infrastructure. + +The Terragrunt Infrastructure Provisioner you are using (the one you selected in the **Provisioner** setting earlier) identifies the Terraform script where the inputs are located. + +See **Input Values** in [Provision using the Terragrunt Provision Step](provision-using-the-terragrunt-provision-step.md). 
+ +### Step 4: Backend Configuration + +In **Backend Configuration (Remote state)**, enter values for each backend config (remote state variable) in the Terragrunt config file (.hcl) or Terraform script (config.tf). + +See **Backend Configuration (Remote state)** in [Provision using the Terragrunt Provision Step](provision-using-the-terragrunt-provision-step.md). + +### Option: Terragrunt Environment Variables + +You can remove any Terragrunt environment variables you created using the Terragrunt Provision steps. + +You cannot add new environment variables in the Terragrunt Destroy step. + +If you select the **Inherit configurations from previous Terragrunt Provision Step** option, then the environment variables are also inherited from the environment variables set in any previous Terragrunt provisioning step in the Workflow. + diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/set-up-your-harness-account-for-terragrunt.md b/docs/first-gen/continuous-delivery/terragrunt-category/set-up-your-harness-account-for-terragrunt.md new file mode 100644 index 00000000000..daacf7ae00f --- /dev/null +++ b/docs/first-gen/continuous-delivery/terragrunt-category/set-up-your-harness-account-for-terragrunt.md @@ -0,0 +1,117 @@ +--- +title: Set Up Your Harness Account for Terragrunt +description: This topic describes how to set up the necessary Harness account components for Terragrunt. +sidebar_position: 20 +helpdocs_topic_id: ulhl7sjxva +helpdocs_category_id: noj782z9is +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The first step in integrating your Terragrunt files and processes is setting up the necessary Harness account components: Delegates, Cloud Providers, and Source Repo Providers. + +This topic describes how to set up these components for Terragrunt. + +Once your account is set up, you can begin integrating your Terragrunt files. See [Add Terragrunt Configuration Files](add-terragrunt-configuration-files.md). 
+ +### Before You Begin + +* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) +* Get an overview of how Harness integrates Terragrunt: [Terragrunt Provisioning with Harness](../concepts-cd/deployment-types/terragrunt-provisioning-with-harness.md). +* [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) +* [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers) +* [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers) + +### Visual Summary + +Here's a 6 minute video walkthrough of Harness-Terragrunt integration that shows how each component is used: + + + + +### Step 1: Set Up Harness Delegates + +A Harness Delegate performs the Terragrunt provisioning in your Terragrunt files. When installing the Delegate for Terragrunt provisioning, consider the following: + +* The Delegate should be installed where it can connect to the target infrastructure. Ideally, this is the same subnet. +* The Delegate should have Terragrunt and Terraform installed on its host. For details on supported versions, see [Terragrunt Provisioning with Harness](../concepts-cd/deployment-types/terragrunt-provisioning-with-harness.md). +* If you are provisioning the subnet dynamically, then you can put the Delegate in the same VPC and ensure that it can connect to the provisioned subnet using security groups. +* The Delegate must also be able to connect to your file repo. The Delegate will pull the files and related scripts at deployment runtime. +* While all Harness Delegates can use Terragrunt, you might want to select a Delegate type (Shell Script, Kubernetes, ECS, etc) similar to the type of infrastructure you are provisioning. + + If you are provisioning AWS AMIs and ASGs, you'll likely use Shell Script Delegates on EC2 instances or ECS Delegates. + + If you are provisioning Kubernetes clusters, you will likely use Kubernetes Delegates. 
+1. To install a Delegate, follow the steps in [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation). Once the Delegate is installed, it will be listed on the Harness Delegates page. + +#### Delegate Selectors + +If needed, add a Delegate Selector to your Delegates. When you add a **Terragrunt** **Provisioner** step to your Harness Workflows, you can use the Delegate Selector to ensure specific Delegates perform the operations. + +If you do not specify a Selector in the **Terragrunt** **Provisioner** step, Harness will try all Delegates and then assign the Terragrunt tasks to the Delegates with Terragrunt installed. + +To add Selectors, see [Delegate Installation and Management](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation). + +#### Permissions + +The Harness Delegate requires permissions according to the deployment platform and the operations of the Terragrunt and Terraform scripts. + +In many cases, all credentials are provided by the account used to set up the Harness Cloud Provider. + +In some cases, access keys, secrets, and SSH keys are needed. You can add these in Harness [Secrets Management](https://docs.harness.io/article/au38zpufhr-secret-management). You can then select them in the **Terragrunt** **Provisioner** step in your Harness Workflows. + +For ECS Delegates, you can add an IAM role to the ECS Delegate task definition. For more information, see  [Trust Relationships and Roles](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation#trust_relationships_and_roles). + +### Step 2: Install Terragrunt and Terraform on Delegates using Delegate Profiles + +The Delegate should have Terragrunt and Terraform installed on its host. For details on supported versions, see [Terragrunt Provisioning with Harness](../concepts-cd/deployment-types/terragrunt-provisioning-with-harness.md). + +You can install Terragrunt and Terraform on the Delegate using Delegate Profiles. 
+ +For example, here is a Delegate Profile script to install Terragrunt and Terraform: + + +``` +##terraform update +set +x +apt-get update +apt-get -y install wget +apt-get -y install git +apt-get -y install unzip +wget https://releases.hashicorp.com/terraform/0.13.3/terraform_0.13.3_linux_amd64.zip +unzip terraform_0.13.3_linux_amd64.zip +cp terraform /usr/bin/ +terraform --version + +wget https://github.com/gruntwork-io/terragrunt/releases/download/v0.28.0/terragrunt_linux_amd64 +mv terragrunt_linux_amd64 terragrunt +chmod u+x terragrunt +mv terragrunt /usr/local/bin/terragrunt +terragrunt --version +``` +See [Run Scripts on Delegates using Profiles](https://docs.harness.io/article/yd4bs0pltf-run-scripts-on-the-delegate-using-profiles). + +The Delegate needs to be able to obtain any providers you specify in modules. For example, `provider "acme"`. On the Delegate, Terraform will download and initialize any providers that are not already initialized. + +### Step 3: Set Up the Cloud Provider + +Add a Harness Cloud Provider to connect Harness to your target platform (AWS, Kubernetes cluster, etc.). + +Later, when you use Terragrunt to define a Harness Infrastructure Definition, you will also select the Cloud Provider to use when provisioning. + +When you create the Cloud Provider, you can enter the platform account information for the Cloud Provider to use as credentials, or you can use the Delegate(s) running in your infrastructure to provide the credentials for the Cloud Provider. + +If you are provisioning infrastructure on a platform that requires specific permissions, the account used by the Cloud Provider needs the required policies. For example, to create AWS EC2 AMIs, the account needs the **AmazonEC2FullAccess** policy. See the list of policies in [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers).
+ +When the Cloud Provider uses the installed Delegate for credentials (via its Delegate Selector), it assumes the permissions/roles used by the Delegate (service accounts, etc). + +### Step 4: Connect Harness to Your Script Repo + +To use your Terragrunt and Terraform files in Harness, you host the files in a Git repo and add a Harness Source Repo Provider that connects Harness to the repo. For steps on adding the Source Repo Provider, see [Add Source Repo Providers](https://docs.harness.io/article/ay9hlwbgwa-add-source-repo-providers). + +Here is an example of a Source Repo Provider and the GitHub repo for Terragrunt. The Terragrunt configuration file in the repo points to a Terraform module in another repo. + +![](./static/set-up-your-harness-account-for-terragrunt-26.png) + +### Next Steps + +* [Add Terragrunt Configuration Files](add-terragrunt-configuration-files.md) + diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/add-terragrunt-configuration-files-27.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/add-terragrunt-configuration-files-27.png new file mode 100644 index 00000000000..e729dad08b7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/add-terragrunt-configuration-files-27.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/add-terragrunt-configuration-files-28.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/add-terragrunt-configuration-files-28.png new file mode 100644 index 00000000000..2977a1e7863 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/add-terragrunt-configuration-files-28.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/add-terragrunt-configuration-files-29.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/add-terragrunt-configuration-files-29.png new file mode 100644 index 00000000000..0eb64f5a07b Binary 
files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/add-terragrunt-configuration-files-29.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/add-terragrunt-configuration-files-30.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/add-terragrunt-configuration-files-30.png new file mode 100644 index 00000000000..acdcea59dea Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/add-terragrunt-configuration-files-30.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/add-terragrunt-configuration-files-31.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/add-terragrunt-configuration-files-31.png new file mode 100644 index 00000000000..bc64654a8fa Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/add-terragrunt-configuration-files-31.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-17.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-17.png new file mode 100644 index 00000000000..00c204ad752 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-17.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-18.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-18.png new file mode 100644 index 00000000000..e729dad08b7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-18.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-19.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-19.png new file mode 100644 
index 00000000000..acf91167016 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-19.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-20.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-20.png new file mode 100644 index 00000000000..c76b40d5379 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-20.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-21.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-21.png new file mode 100644 index 00000000000..86cf5e9c995 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-21.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-22.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-22.png new file mode 100644 index 00000000000..55f342ed828 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-22.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-23.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-23.png new file mode 100644 index 00000000000..7feaaa7f84b Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-23.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-24.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-24.png new file mode 100644 
index 00000000000..a197a737107 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-24.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-25.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-25.png new file mode 100644 index 00000000000..c5eb14842b2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/map-terragrunt-infrastructure-25.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/perform-a-terragrunt-dry-run-02.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/perform-a-terragrunt-dry-run-02.png new file mode 100644 index 00000000000..47ea7d26eac Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/perform-a-terragrunt-dry-run-02.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/perform-a-terragrunt-dry-run-03.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/perform-a-terragrunt-dry-run-03.png new file mode 100644 index 00000000000..fb58cd010b1 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/perform-a-terragrunt-dry-run-03.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/perform-a-terragrunt-dry-run-04.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/perform-a-terragrunt-dry-run-04.png new file mode 100644 index 00000000000..8b004d846fe Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/perform-a-terragrunt-dry-run-04.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/perform-a-terragrunt-dry-run-05.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/perform-a-terragrunt-dry-run-05.png new file mode 100644 index 
00000000000..6d509e0f279 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/perform-a-terragrunt-dry-run-05.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/perform-a-terragrunt-dry-run-06.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/perform-a-terragrunt-dry-run-06.png new file mode 100644 index 00000000000..e7d973567ec Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/perform-a-terragrunt-dry-run-06.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/perform-a-terragrunt-dry-run-07.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/perform-a-terragrunt-dry-run-07.png new file mode 100644 index 00000000000..bfacc0113a2 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/perform-a-terragrunt-dry-run-07.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-08.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-08.png new file mode 100644 index 00000000000..e729dad08b7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-08.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-09.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-09.png new file mode 100644 index 00000000000..1c49c01c95f Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-09.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-10.png 
b/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-10.png new file mode 100644 index 00000000000..68b9e1d3485 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-10.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-11.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-11.png new file mode 100644 index 00000000000..9289ee86029 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-11.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-12.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-12.png new file mode 100644 index 00000000000..c3d3645b570 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-12.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-13.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-13.png new file mode 100644 index 00000000000..a638fe7887b Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-13.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-14.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-14.png new file mode 100644 index 00000000000..eac4113bf3e Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-14.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-15.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-15.png new file mode 100644 index 00000000000..74356fc9838 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-15.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-16.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-16.png new file mode 100644 index 00000000000..a07121816c4 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/provision-using-the-terragrunt-provision-step-16.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/remove-provisioned-infra-with-terragrunt-destroy-01.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/remove-provisioned-infra-with-terragrunt-destroy-01.png new file mode 100644 index 00000000000..b16c55a1043 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/remove-provisioned-infra-with-terragrunt-destroy-01.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/static/set-up-your-harness-account-for-terragrunt-26.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/set-up-your-harness-account-for-terragrunt-26.png new file mode 100644 index 00000000000..59a62db8605 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/set-up-your-harness-account-for-terragrunt-26.png differ diff --git 
a/docs/first-gen/continuous-delivery/terragrunt-category/static/use-terragrunt-outputs-in-workflow-steps-00.png b/docs/first-gen/continuous-delivery/terragrunt-category/static/use-terragrunt-outputs-in-workflow-steps-00.png new file mode 100644 index 00000000000..6fb4298dad0 Binary files /dev/null and b/docs/first-gen/continuous-delivery/terragrunt-category/static/use-terragrunt-outputs-in-workflow-steps-00.png differ diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/terragrunt-how-tos.md b/docs/first-gen/continuous-delivery/terragrunt-category/terragrunt-how-tos.md new file mode 100644 index 00000000000..be605e1bc7f --- /dev/null +++ b/docs/first-gen/continuous-delivery/terragrunt-category/terragrunt-how-tos.md @@ -0,0 +1,33 @@ +--- +title: Terragrunt How-tos +description: Harness has first-class support for Terragrunt as an infrastructure provisioner. See the following Terragrunt How-tos -- Set Up Your Harness Account for Terragrunt. Add Terragrunt Configuration Files.… +sidebar_position: 10 +helpdocs_topic_id: a9e63yqb2j +helpdocs_category_id: noj782z9is +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness has first-class support for [Terragrunt](https://terragrunt.gruntwork.io/docs/) as an infrastructure provisioner. 
+ +See the following Terragrunt How-tos: + +* [Set Up Your Harness Account for Terragrunt](set-up-your-harness-account-for-terragrunt.md) +* [Add Terragrunt Configuration Files](add-terragrunt-configuration-files.md) +* [Map Dynamically Provisioned Infrastructure using Terragrunt](map-terragrunt-infrastructure.md) +* [Provision using the Terragrunt Provision Step](provision-using-the-terragrunt-provision-step.md) +* [Perform a Terragrunt Dry Run](perform-a-terragrunt-dry-run.md) +* [Remove Provisioned Infra with Terragrunt Destroy](remove-provisioned-infra-with-terragrunt-destroy.md) +* [Use Terragrunt Outputs in Workflow Steps](use-terragrunt-outputs-in-workflow-steps.md) + +For a conceptual overview of Harness Terragrunt integration and details of limitations and permissions, see [Terragrunt Provisioning with Harness](../concepts-cd/deployment-types/terragrunt-provisioning-with-harness.md). + +### Video Summary + +Here's a 6 minute video walkthrough of Harness-Terragrunt integration: + + + + + diff --git a/docs/first-gen/continuous-delivery/terragrunt-category/use-terragrunt-outputs-in-workflow-steps.md b/docs/first-gen/continuous-delivery/terragrunt-category/use-terragrunt-outputs-in-workflow-steps.md new file mode 100644 index 00000000000..59aa9b8d7c4 --- /dev/null +++ b/docs/first-gen/continuous-delivery/terragrunt-category/use-terragrunt-outputs-in-workflow-steps.md @@ -0,0 +1,79 @@ +--- +title: Use Terragrunt Outputs in Workflow Steps +description: Terragrunt outputs can be used in Workflow steps by using the expression ${terragrunt.output_name}. +sidebar_position: 80 +helpdocs_topic_id: sd6hbtqcbv +helpdocs_category_id: noj782z9is +helpdocs_is_private: false +helpdocs_is_published: true +--- + +When you use a Terragrunt Provision step in a Workflow, any of the Terragrunt config source's Terraform script outputs can be used in Workflow settings that follow the step. 
+ +You reference an output with a Harness variable expression in the format `${terragrunt.output_name}`. + +You can reference the output regardless of whether the Terragrunt Infrastructure Provisioner is used in the Infrastructure Definition in the Workflow settings. This topic demonstrates how to use these expressions in other Workflow steps. + + +### Before You Begin + +This topic assumes you have read the following: + +* [Terragrunt Provisioning with Harness](../concepts-cd/deployment-types/terragrunt-provisioning-with-harness.md) +* [Set Up Your Harness Account for Terragrunt](set-up-your-harness-account-for-terragrunt.md) +* [Add Terragrunt Configuration Files](add-terragrunt-configuration-files.md) +* [Map Dynamically Provisioned Infrastructure using Terragrunt](map-terragrunt-infrastructure.md) +* [Provision using the Terragrunt Provision Step](provision-using-the-terragrunt-provision-step.md) + +Other useful topics: + +* [Perform a Terragrunt Dry Run](perform-a-terragrunt-dry-run.md) +* [Remove Provisioned Infra with Terragrunt Destroy](remove-provisioned-infra-with-terragrunt-destroy.md) + +### Limitations + +You can only reference a Terraform output once the Terraform plan has been applied by the Terragrunt Provision step. + +If a Terragrunt Provision step is set to run as a plan, you cannot reference its Terraform outputs. + +Once the plan has been applied by another Terragrunt Provision step, you can reference the Terraform script outputs. + +See [Perform a Terragrunt Dry Run](perform-a-terragrunt-dry-run.md). + +### Step 1: Add a Workflow Step + +This topic assumes you have a Workflow that uses a Terragrunt Provision step. + +Add a Workflow step after the Terragrunt Provision step where you want to use the Terraform script outputs. + +Typically, you add a [Shell Script](https://docs.harness.io/article/1fjrjbau7x-capture-shell-script-step-output) step.
+ +### Step 2: Enter the Output Variable Expression + +You can reference any Terraform output using the variable expression in the format `${terragrunt.output_name}`. + +For example, let's say the Terraform script source of the Terragrunt config file in the Terragrunt Infrastructure Provisioner has a Kubernetes cluster name output. + +You can add a Shell Script step in your Workflow and use `echo ${terragrunt.clusterName}` to print the value. + +In the following diagram, you can see two outputs in the Terraform script referenced and echoed in a Shell Script step and then resolved in the logs: + +![](./static/use-terragrunt-outputs-in-workflow-steps-00.png) + +The Shell Script step simply contains: + + +``` +echo "Terragrunt outputs: " + +echo "clusterName: " ${terragrunt.clusterName} + +echo "sleepoutputModule3: " ${terragrunt.sleepoutputModule3} +``` +### Notes + +Terragrunt output expressions cannot be evaluated or published under the following conditions: + +* The Shell Script step script uses `exit 0`. Bash exit prevents outputs from being published. +* No Terragrunt apply is performed by the Terragrunt Provision step. In some cases, a Terragrunt plan might be run using the [Set Terragrunt as Plan](perform-a-terragrunt-dry-run.md) option, but no further step performs the Terragrunt apply. If there is no Terragrunt apply, there are no output values.
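A defensive variant of the Shell Script step above fails fast when an expected output did not resolve. This is a sketch: `demo-cluster` is a placeholder standing in for the value Harness would substitute for `${terragrunt.clusterName}` before the script runs.

```shell
# Sketch: fail fast if a Terragrunt output did not resolve.
# Harness substitutes ${terragrunt.clusterName} before execution; the
# placeholder below stands in for that resolved value.
clusterName="demo-cluster"

if [ -z "$clusterName" ]; then
  echo "clusterName was not published; check the Terragrunt Provision step" >&2
  exit 1
fi

echo "clusterName: $clusterName"
```

Failing the step here, rather than letting a later step consume an empty value, makes a missing apply (see the Notes above) much easier to diagnose.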
+ diff --git a/docs/first-gen/continuous-delivery/traditional-deployments/_category_.json b/docs/first-gen/continuous-delivery/traditional-deployments/_category_.json new file mode 100644 index 00000000000..988e75997a6 --- /dev/null +++ b/docs/first-gen/continuous-delivery/traditional-deployments/_category_.json @@ -0,0 +1 @@ +{"label": "Traditional Deployments (SSH)", "position": 110, "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Traditional Deployments (SSH)"}, "customProps": { "helpdocs_category_id": "td451rmlr3"}} \ No newline at end of file diff --git a/docs/first-gen/continuous-delivery/traditional-deployments/add-artifacts-for-ssh-deployments.md b/docs/first-gen/continuous-delivery/traditional-deployments/add-artifacts-for-ssh-deployments.md new file mode 100644 index 00000000000..d83a782d590 --- /dev/null +++ b/docs/first-gen/continuous-delivery/traditional-deployments/add-artifacts-for-ssh-deployments.md @@ -0,0 +1,121 @@ +--- +title: Add Artifacts and App Stacks for Traditional (SSH) Deployments +description: The Harness Secure Shell (SSH) Service contains the application package artifact (file or metadata) and the related commands to execute on the target host. It also includes scripts for installing and… +sidebar_position: 30 +helpdocs_topic_id: umpe4zfnac +helpdocs_category_id: td451rmlr3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The Harness Secure Shell (SSH) Service contains the application package artifact (file or metadata) and the related commands to execute on the target host. It also includes scripts for installing and running an application stack, if needed. + +In this topic, we will show you how to create the Service for your application package artifact, add additional scripts, and add an application stack.
+ +### Before You Begin + +* [Connect to Your Repos and Target SSH Platforms](connect-to-your-target-ssh-platform.md) +* [Traditional Deployments Overview](traditional-deployments-overview.md) +* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) + +### Step 1: Create a Harness SSH Service + +To create a Service for an application package, do the following: + +1. In your Application, click **Services**, and then click **Add Service**. The **Add Service** settings appear. +2. In **Name**, enter a name for the Service. You will use this name when selecting this Service in Harness Environments, Workflows, and other components. For more information, see [Services](https://docs.harness.io/article/eb3kfl8uls-service-configuration). +3. In **Deployment Type**, select **Secure Shell (SSH)**. All file-based Services are Secure Shell (SSH) deployments. The **Artifact Type** and **Application Stack** settings appear. + +### Step 2: Select an Artifact Type + +In **Artifact Type**, select the file type of your artifact. For example, **Java Archive (JAR)**. + +#### Supported Packaging Formats + +Harness supports the following traditional deployment packaging formats: WAR, JAR, TAR, RPM, ZIP, Docker, and custom files. + +All of these formats are also supported by other Harness deployment types, such as Kubernetes, Helm, PCF, ECS, etc. This topic is concerned with traditional deployments outside of the container orchestration platforms. + +### Option: Select an Application Stack + +In **Application Stack**, select the app stack you want to use as a runtime environment for your application, such as Tomcat. + +When the Service is created, it contains the scripts needed to install the application stack. + +![](./static/add-artifacts-for-ssh-deployments-00.png) + +If you are deploying to an existing instance that already has an app stack installed, you can leave **Application Stack** empty.
For more information, see  [Add Application Stacks](https://docs.harness.io/article/g26sp2ay68-catalog). + +### Review: Secure Shell Service Sections + +The Service page has the following important sections: + +* **Artifact Source** - The package files you want deployed are added here. In some cases an actual file is obtained, but in most cases metadata is sufficient. +* **Artifact History** - You can manually pull metadata on your artifacts to see their builds and versions. +* **Script** - The scripts to set up your files. These will typically include application stack setup unless your target hosts already have the application stack set up. +* **Add Commands** - You can add new commands from an Application or Shared Template Library, or simply add a blank command and add Harness scripts to it. +* **Configuration** - You can add variables and files to use in your Service scripts. These can be encrypted by Harness, allowing you to use secrets. The variables and files can be overwritten in Environments and Workflows. + +![](./static/add-artifacts-for-ssh-deployments-01.png) + +### Review: Software Required by Commands + +The commands in the Service will be executed on your target hosts, and so any of the software used in the commands must be installed on the target hosts. + +For example, the **Port Listening** command uses netcat (nc): + + +``` +... +nc -v -z -w 5 localhost $port +... +``` +You can install nc on your target hosts simply by running: `yum install -y nc` + +You can install the required software by adding an **Exec** command to the Service that installs the software. + +### Step 3: Add Your Artifact Source + +The Artifact Source for the Service lists the file(s) that you want copied to the target host(s). The Artifact History will manually pull artifact build and version metadata from the Artifact Source. + +Before you can add an artifact source, you need to add a Harness Artifact Server or Cloud Provider. 
+ +If your artifact files are located on cloud platform storage like AWS S3, GCP Storage, or Azure Storage, you can add a Cloud Provider. + +If the files are located in a repo such as Artifactory or an automation server such as Jenkins, you can create an Artifact Server. + +For more information, see  [Add Artifact Servers](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server) and  [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers). + +To add an artifact source, do the following: + +1. Click **Add Artifact Source**, and select the repo or cloud platform where the artifact is located. The dialog for the artifact source appears. This guide will use AWS S3 as an example. +2. In **Cloud Provider**, select the Harness Cloud Provider to use to locate your artifact file. +3. In **Bucket**, select the name of the bucket containing your artifact. +4. In **Artifact Path**, click the artifact Harness located in the bucket you selected in Bucket. If the artifact is at the root of the bucket, then just the filename is provided. If the artifact is in a folder, the file path is provided also. +Harness uses **Metadata Only** to download the file on the target host. +Metadata is sufficient as it contains enough information for the target host(s) to obtain or build the artifact. Harness stores the metadata. +During runtime, Harness passes the metadata to the target host(s) where it is used to obtain the artifact(s). Ensure that the target host has network connectivity to the Artifact Server. For more information, see  [Service Types and Artifact Sources](https://docs.harness.io/article/qluiky79j8-service-types-and-artifact-sources). +5. Click SUBMIT. The artifact source is listed. + +### Option: View Artifact History + +When you add an Artifact Source to the Service, Harness will pull all the build and version history metadata for its artifacts. 
You can see the results of the pull in **Artifact History** and, if you do not see a build/version you expected, you can manually pull them. + +To view the artifact history,  do the following: + +1. Click **Artifact History**. This assistant lists the artifact builds and versions Harness has pulled. +2. In the **Artifact History** assistant, click **Manually pull artifact**. The **Manually Select An Artifact** dialog appears. +3. In **Artifact Stream**, click the Artifact Source you added to the Service. +4. In **Artifact**, select an artifact build/version, and then click **SUBMIT**. +5. Click **Artifact History** to view the history. + +Now all available artifact builds and version history metadata is displayed. + +### See Also + +* [Add Scripts for Traditional (SSH) Deployments](add-deployment-specs-for-traditional-ssh-deployments.md) + +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. 
+ diff --git a/docs/first-gen/continuous-delivery/traditional-deployments/add-deployment-specs-for-traditional-ssh-deployments.md b/docs/first-gen/continuous-delivery/traditional-deployments/add-deployment-specs-for-traditional-ssh-deployments.md new file mode 100644 index 00000000000..8e86e35aae4 --- /dev/null +++ b/docs/first-gen/continuous-delivery/traditional-deployments/add-deployment-specs-for-traditional-ssh-deployments.md @@ -0,0 +1,70 @@ +--- +title: Add Scripts for Traditional (SSH) Deployments +description: When you create the Harness Secure Shell (SSH) Service, Harness automatically generates the commands and scripts needed to install the app and stack on the target host, copy the file(s) to the correc… +sidebar_position: 40 +helpdocs_topic_id: ih779z9kb6 +helpdocs_category_id: td451rmlr3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +When you create the Harness Secure Shell (SSH) Service, Harness automatically generates the commands and scripts needed to install the app and stack on the target host, copy the file(s) to the correct folder, and start the app. + + +### Before You Begin + +* [Connect to Your Repos and Target SSH Platforms](connect-to-your-target-ssh-platform.md) +* [Traditional Deployments Overview](traditional-deployments-overview.md) +* [Add Artifacts and App Stacks for Traditional (SSH) Deployments](add-artifacts-for-ssh-deployments.md) + +### Visual Summary + +Here is an example of the default scripts and commands Harness generates when you first create your Secure Shell (SSH) Service: + +![](./static/add-deployment-specs-for-traditional-ssh-deployments-02.png) + +### Review: Script Execution Order + +When you look at the default commands in a file-based Service, their order of execution might be confusing. 
For example, it looks like they are executed like this:  + +![](./static/add-deployment-specs-for-traditional-ssh-deployments-03.png) + +But they are actually executed like this:  + +![](./static/add-deployment-specs-for-traditional-ssh-deployments-04.png) + +The order is clearer when you see the deployment in the **Deployments** page: + +![](./static/add-deployment-specs-for-traditional-ssh-deployments-05.png) + +### Step 1: Add Commands and Scripts + +The default scripts Harness generates will deploy the artifact and app package you add to the Service. No further changes are required. + +If you like, you can add commands and scripts using the **Add Command** settings, and by clicking the plus icon in the commands. + +![](./static/add-deployment-specs-for-traditional-ssh-deployments-06.png) + +All of the scripts include tooltips to explain how to use them: + +![](./static/add-deployment-specs-for-traditional-ssh-deployments-07.png) + +### Review: Download Artifact and Exec Scripts + +The Download Artifact script is supported for Amazon S3, Artifactory, SMB (PowerShell-only), SFTP (PowerShell-only), Azure DevOps artifacts, Nexus, Jenkins, and Bamboo. For other artifact sources, add a new command and use the Exec script to download the artifact. For more information, see [Exec Script](https://docs.harness.io/article/qluiky79j8-service-types-and-artifact-sources#exec_script). + +### Review: Harness and Custom Variables + +You can use Harness built-in variables in your Service scripts, or add your own variables and reference them in your scripts. + +For information on Harness built-in variables, see [What is a Harness Variable Expression?](https://docs.harness.io/article/9dvxcegm90-variables). 
For information on using variables in your scripts, see  [Add Service Config Variables](https://docs.harness.io/article/q78p7rpx9u-add-service-level-config-variables) and [Add Service Config Files](https://docs.harness.io/article/iwtoq9lrky-add-service-level-configuration-files). + +### See Also + +* [Set Default Application Directories as Variables](https://docs.harness.io/article/lgg12f0yry-set-default-application-directories-as-variables) +* [Override Variables at the Infrastructure Definition Level](../kubernetes-deployments/override-variables-per-infrastructure-definition.md) + +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. + diff --git a/docs/first-gen/continuous-delivery/traditional-deployments/connect-to-your-target-ssh-platform.md b/docs/first-gen/continuous-delivery/traditional-deployments/connect-to-your-target-ssh-platform.md new file mode 100644 index 00000000000..1096f6ce6c1 --- /dev/null +++ b/docs/first-gen/continuous-delivery/traditional-deployments/connect-to-your-target-ssh-platform.md @@ -0,0 +1,44 @@ +--- +title: Connect to Your Repos and Target SSH Platforms +description: Traditional (SSH) deployments typically pull application packages from artifact servers and then deploy to target virtual machines on a cloud platform. They can also target physical servers. Harness… +sidebar_position: 20 +helpdocs_topic_id: mk5pjqyugc +helpdocs_category_id: td451rmlr3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Traditional (SSH) deployments typically pull application packages from artifact servers and then deploy to target virtual machines on a cloud platform. They can also target physical servers. Harness supports connecting and deploying to all target types. + +This topic covers the steps needed to connect Harness to your artifact servers and target environments. 
+ +### Before You Begin + +* [Traditional Deployments (SSH) Overview](../concepts-cd/deployment-types/traditional-deployments-ssh-overview.md) +* [Harness Delegate Overview](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation) +* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) + +### Step 1: Set Up a Harness Delegate + +The Delegate needs to be able to connect to the artifact server or repository containing the file, and the target host where the file will be deployed. Typically, the Delegate is installed on a host in the same subnet as the target host. + +For steps on installing the Delegate, see [Harness Delegate Overview](https://docs.harness.io/article/h9tkwmkrm7-delegate-installation). + +For AWS, you can install the Delegate on an EC2 instance and then have the Harness Cloud Provider assume the IAM role used by the Delegate host. For more information, see Delegate Selectors in [Select Delegates for Specific Tasks with Selectors](https://docs.harness.io/article/c3fvixpgsl-select-delegates-for-specific-tasks-with-selectors). + +### Step 2: Connect to Your Artifact Server + +Harness retrieves the package file from an artifact source using a Harness Artifact Server and deploys it to the target host using a Cloud Provider. + +See [Add Artifact Servers](https://docs.harness.io/article/7dghbx1dbl-configuring-artifact-server). + +### Step 3: Connect to Your Cloud Provider or Physical Server + +You connect Harness to the target environment for your deployment. This can be a VM in the cloud or a physical server. + +See [Add Cloud Providers](https://docs.harness.io/article/whwnovprrb-cloud-providers) and [Add Physical Data Center as Cloud Provider](https://docs.harness.io/article/stkxmb643f-add-physical-data-center-cloud-provider). + +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. 
+ diff --git a/docs/first-gen/continuous-delivery/traditional-deployments/create-a-basic-workflow-for-traditional-ssh-deployments.md b/docs/first-gen/continuous-delivery/traditional-deployments/create-a-basic-workflow-for-traditional-ssh-deployments.md new file mode 100644 index 00000000000..62adf5d8955 --- /dev/null +++ b/docs/first-gen/continuous-delivery/traditional-deployments/create-a-basic-workflow-for-traditional-ssh-deployments.md @@ -0,0 +1,150 @@ +--- +title: Create a Basic Workflow for Traditional (SSH) Deployments +description: Traditional (SSH) deployments involve obtaining an application package from an artifact source, such as a WAR file in an AWS S3 bucket, and deploying it to a target host, such as a virtual machine. T… +sidebar_position: 60 +helpdocs_topic_id: 8zff5k2frj +helpdocs_category_id: td451rmlr3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Traditional (SSH) deployments involve obtaining an application package from an artifact source, such as a WAR file in an AWS S3 bucket, and deploying it to a target host, such as a virtual machine. + + +Typically, the Harness Basic Workflow is used for Traditional deployments, but Harness provides Canary and Rolling Workflows for Traditional deployments also. + + +In this topic, we will use the Basic Workflow to demonstrate a simple Traditional deployment. + + +For a Build and Deploy Pipeline using a Traditional deployment, see + [Artifact Build and Deploy Pipelines Overview](../concepts-cd/deployment-types/artifact-build-and-deploy-pipelines-overview.md). 
+ +### Before You Begin + + +* [Add Artifacts and App Stacks for Traditional (SSH) Deployments](add-artifacts-for-ssh-deployments.md) +* [Connect to Your Repos and Target SSH Platforms](connect-to-your-target-ssh-platform.md) +* [Traditional Deployments Overview](traditional-deployments-overview.md) +* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) + + +### Supported Platforms and Technologies + + +See **SSH** in + [Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms). + + + +### Review: Basic Workflows + + +The Basic deployment Workflow is the most common Workflow type for traditional, package file-based deployments. Basic Workflows simply select nodes in the deployment infrastructure and install and start the application attached to the Harness Service. + + + + +![](./static/create-a-basic-workflow-for-traditional-ssh-deployments-10.png) + + +### Step 1: Create the Workflow + + +To create a Basic Workflow for a Traditional deployment, do the following: + + +1. In your Harness Application, click **Workflows**. +2. In **Workflows**, click **Add Workflow**. The **Workflow** dialog appears. +3. In **Name**, enter a name for the Workflow. +4. In **Workflow Type**, select **Basic Deployment**. +5. In **Environment**, select the Environment where the Infrastructure Definition you defined for your deployment is located. +6. In **Service**, select the SSH Service to be deployed. +7. In **Infrastructure Definition**, select your target infrastructure. +8. Click **SUBMIT**. The Workflow is created. + + +Let's look at the two default steps in the Workflow, **Select Nodes** and **Install**. + + +### Step 2: Select Target Nodes + + +The **Select Nodes** step selects the target hosts from the Infrastructure Definition you defined. You can choose to select a specific host or simply specify the number of instances to select with the Infrastructure Definition criteria. 
+ + +The following image shows an **Infrastructure Definition** specifying an AWS Region, VPC, and Tags (**Name:doc-target**), the EC2 instance that meets that criteria, and the host name in the Node Select dialog. + + + + +![](./static/create-a-basic-workflow-for-traditional-ssh-deployments-11.png) + +For details, see + [Select Nodes Workflow Step](https://docs.harness.io/article/9h1cqaxyp9-select-nodes-workflow-step). + + +### Step 3: Install and Run the Application and Stacks + + +The Install step runs the command scripts in your Service on the target host. + + +For details, see + [Install Workflow Step](https://docs.harness.io/article/2q8vjxdjcq-install-workflow-step). + + +### Review: Rollbacks + + +There are not many causes for a rollback of a Basic Workflow using application packages. Artifact issues are uncommon because the artifact must be available to the Harness Delegate before you deploy. If the Delegate cannot reach the target host, the deployment + will fail without changing the target host, and so no rollback is needed. + + +### Example: Basic Workflow Deployment + + +The Basic Workflow is the most common deployment of Services deploying application packages. Once you've successfully deployed the Workflow, you can click the **Install** step to see the Service commands and scripts in the **Deployments** page. + + + + +![](./static/create-a-basic-workflow-for-traditional-ssh-deployments-12.png) + +You can expand logs for each script in the **Install** step to see the log of its execution by the Harness Delegate. For example, here is the **Copy Artifact** script copying the application package **todolist.war** to + the runtime location set up in Application Defaults (`$HOME/${app.name}/${service.name}/${env.name}/runtime`): + + +``` +Begin execution of command: Copy Artifact + +Connecting to ip-10-0-0-54.ec2.internal .... 
+ +Connection to ip-10-0-0-54.ec2.internal established + +Begin file transfer todolist.war to ip-10-0-0-54.ec2.internal:/home/ec2-user/ExampleAppFileBased/WAR/Development/runtime/tomcat/webapps + +File successfully transferred + +Command execution finished with status SUCCESS +``` + +You can SSH into the target host and see the application package: + + + +![](./static/create-a-basic-workflow-for-traditional-ssh-deployments-13.png) + + +### See Also + + +* [Artifact Build and Deploy Pipelines Overview](../concepts-cd/deployment-types/artifact-build-and-deploy-pipelines-overview.md) +* [Trigger Workflows and Pipelines](https://docs.harness.io/article/xerirloz9a-add-a-trigger-2) + + +### Configure As Code + + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. + diff --git a/docs/first-gen/continuous-delivery/traditional-deployments/define-your-traditional-ssh-target-infrastructure.md b/docs/first-gen/continuous-delivery/traditional-deployments/define-your-traditional-ssh-target-infrastructure.md new file mode 100644 index 00000000000..4f424ae212f --- /dev/null +++ b/docs/first-gen/continuous-delivery/traditional-deployments/define-your-traditional-ssh-target-infrastructure.md @@ -0,0 +1,111 @@ +--- +title: Define Your Traditional (SSH) Target Infrastructure +description: Harness Infrastructure Definitions specify the target deployment infrastructure for your Harness Services, and the specific infrastructure details for the deployment, like VPC settings. In this topic… +sidebar_position: 50 +helpdocs_topic_id: 5qh02lv090 +helpdocs_category_id: td451rmlr3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness [Infrastructure Definitions](https://docs.harness.io/article/v3l3wqovbe-infrastructure-definitions) specify the target deployment infrastructure for your Harness Services, and the specific infrastructure details for the deployment, like VPC settings. 
+ +In this topic, we describe how to add an Infrastructure Definition for your Traditional (SSH) deployment. + + +### Before You Begin + +* [Connect to Your Repos and Target SSH Platforms](connect-to-your-target-ssh-platform.md) +* [Traditional Deployments Overview](traditional-deployments-overview.md) +* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) + +### Review: Target Host Requirements + +The MaxSessions setting on the target host(s) must be set to a minimum of 2. This is a requirement for multiplexing. + +If MaxSessions is set to 1, the error `JSchException: channel is not opened` can occur during deployment. + +The [default for MaxSessions](https://linux.die.net/man/5/sshd_config) is set in **/etc/ssh/sshd\_config** and is 10. + +To set MaxSessions, do the following: + +1. Edit /etc/ssh/sshd\_config on the target host(s). +2. Change the **MaxSessions** line to **MaxSessions 2** or greater. The default is **MaxSessions 10**. +3. Restart the sshd service: `sudo service sshd restart` + +### Visual Summary + +For example, here is an Infrastructure Definition that uses an AWS Cloud Provider and specifies the AWS infrastructure settings for the target AWS VPC and host. + +![](./static/define-your-traditional-ssh-target-infrastructure-08.png) + +Later, when you create a Workflow, you will select the Service and this Infrastructure Definition. + +### Review: Software Required by Commands + +The commands in the Service are executed on the target hosts you identify in the Infrastructure Definition, so any software used in the commands must be installed on those hosts. + +For example, the **Port Listening** command uses netcat (nc): + + +``` +... +nc -v -z -w 5 localhost $port +... +``` +You can install nc on your target hosts simply by running: `yum install -y nc` + +You can install the required software by adding an **Exec** command to the Service that installs the software. 
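As a sketch, such an Exec command might combine the install and the port check. This is not the exact Harness-generated script; the port number, the `yum` package manager, and the fallback to bash's built-in `/dev/tcp` are illustrative assumptions:

```shell
#!/bin/bash
# Illustrative Exec command body: install nc if it is missing, then check
# whether a port is listening. The port (8080) and yum are assumptions;
# adjust both for your application and distro.
port=8080
if ! command -v nc >/dev/null 2>&1; then
  yum install -y nc || echo "could not install nc; falling back to /dev/tcp"
fi
if command -v nc >/dev/null 2>&1; then
  nc -z -w 5 localhost "$port" && status=listening || status=closed
else
  # bash-only fallback that needs no extra packages
  (echo > "/dev/tcp/localhost/$port") 2>/dev/null && status=listening || status=closed
fi
echo "port $port is $status"
```

Because the script only reports status and exits successfully either way, it is safe to run as a pre-deployment check without failing the Workflow.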
+ +### Step 1: Create an Environment + +Environments represent one or more of the deployment infrastructures where you want to deploy your application package files. Within an Environment, you add an Infrastructure Definition for each specific deployment infrastructure, using a Cloud Provider and the specific infrastructure details for the deployment, like VPC settings. + +For details on creating an Environment, see  [Environments](https://docs.harness.io/article/n39w05njjv-environment-configuration). + +### Step 2: Define Target Infrastructure + +As an example, we will create an Infrastructure Definition for an AWS EC2 target infrastructure. + +To add an Infrastructure Definition, do the following: + +1. In your Harness Application Environment, click **Add Infrastructure Definition**. The **Infrastructure Definition** dialog appears. +2. In **Name**, enter the name you will use when you select this Infrastructure Definition in Workflows. +3. In **Cloud Provider Type**, select the type of Cloud Provider that this Infrastructure Definition will use for connections. For example, select **Amazon Web Services** for AWS EC2 infrastructures. +4. In **Deployment Type**, select the deployment type for the Services that will use this Infrastructure Definition. For example, if you are deploying SSH type Services like JAR, WAR, etc, you would select **Secure Shell (SSH)**. +5. Click **Use Already Provisioned Infrastructure**. If you were using a Harness  [Infrastructure Provisioner](https://docs.harness.io/article/o22jx8amxb-add-an-infra-provisioner), you would select **Map Dynamically Provisioned Infrastructure**. +6. In **Cloud Provider**, select the Cloud Provider you set up to connect Harness to your deployment infrastructure. +7. Fill out the remaining settings. These settings will look different depending on the Cloud Provider you selected. For example, for an AWS Cloud Provider, you will see AWS-specific settings, such as **Region** and **Auto Scaling Group**. 
+When you select a region, more settings appear, such as **VPC** and **Tags**. +8. Provide the settings for your infrastructure. For example, here are the settings for an AWS infrastructure that identify the target host using AWS EC2 Tags. + +![](./static/define-your-traditional-ssh-target-infrastructure-09.png) + +##### Using Variable Expressions in Tags + +**Tags** supports [Harness variable expressions](https://docs.harness.io/article/9dvxcegm90-variables) from Harness Services, Environment Overrides, Workflows, and secrets. + +**Tags** does not support file-based variable expressions. + +For example, in **Tags**, `automation:${serviceVariable.automationValue}` and `automation:${workflow.variables.automationValue}` will work, but `automation:${configFile.getAsString("automationFile")}` will **not** work. + +9. When you are finished, click **SUBMIT**. The Infrastructure Definition is added. + +For AWS Infrastructure Definitions, you can use [Workflow variables](https://docs.harness.io/article/766iheu1bk-add-workflow-variables-new-template) in the **Tags** setting. This allows you to parameterize the **Tags** setting, and enter or select the AWS tags to use when you deploy any Workflow that uses this Infrastructure Definition. + +### Option: Scope to Specific Service + +In **Scope to specific Services**, you can select the Service(s) that can use this Infrastructure Definition. + +### Review: SSH Key for Connection Attributes + +When you set up the Infrastructure Definition in Harness to identify the target host(s) where your file will be deployed, you also add **Connection Attributes** that use a Harness SSH Key secret. This key is used by the Harness Delegate to SSH into the target host. + +For more information, see [Add SSH Keys](https://docs.harness.io/article/gsp4s7abgc-add-ssh-keys). 
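If you do not yet have a key pair to register as the Harness SSH Key secret, you can generate one on any workstation. A minimal sketch, where the output file name and comment are placeholders:

```shell
# Generate an RSA key pair for the Delegate to use when SSHing into targets.
# File name and comment are illustrative; register the private key
# (harness_deploy_key) as the Harness SSH Key secret, and append the
# contents of harness_deploy_key.pub to ~/.ssh/authorized_keys on each
# target host.
ssh-keygen -t rsa -b 4096 -m PEM -C "harness-delegate" -N "" -f ./harness_deploy_key
ls -l ./harness_deploy_key ./harness_deploy_key.pub
```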
+ +### See Also + +* [Create a Basic Workflow for Traditional (SSH) Deployments](create-a-basic-workflow-for-traditional-ssh-deployments.md) + +### Configure As Code + +To see how to configure the settings in this topic using YAML, configure the settings in the UI first, and then click the **YAML** editor button. + diff --git a/docs/first-gen/continuous-delivery/traditional-deployments/static/add-artifacts-for-ssh-deployments-00.png b/docs/first-gen/continuous-delivery/traditional-deployments/static/add-artifacts-for-ssh-deployments-00.png new file mode 100644 index 00000000000..8f62987309c Binary files /dev/null and b/docs/first-gen/continuous-delivery/traditional-deployments/static/add-artifacts-for-ssh-deployments-00.png differ diff --git a/docs/first-gen/continuous-delivery/traditional-deployments/static/add-artifacts-for-ssh-deployments-01.png b/docs/first-gen/continuous-delivery/traditional-deployments/static/add-artifacts-for-ssh-deployments-01.png new file mode 100644 index 00000000000..f2939d4c009 Binary files /dev/null and b/docs/first-gen/continuous-delivery/traditional-deployments/static/add-artifacts-for-ssh-deployments-01.png differ diff --git a/docs/first-gen/continuous-delivery/traditional-deployments/static/add-deployment-specs-for-traditional-ssh-deployments-02.png b/docs/first-gen/continuous-delivery/traditional-deployments/static/add-deployment-specs-for-traditional-ssh-deployments-02.png new file mode 100644 index 00000000000..f947667f0cc Binary files /dev/null and b/docs/first-gen/continuous-delivery/traditional-deployments/static/add-deployment-specs-for-traditional-ssh-deployments-02.png differ diff --git a/docs/first-gen/continuous-delivery/traditional-deployments/static/add-deployment-specs-for-traditional-ssh-deployments-03.png b/docs/first-gen/continuous-delivery/traditional-deployments/static/add-deployment-specs-for-traditional-ssh-deployments-03.png new file mode 100644 index 00000000000..5c960ab42d6 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/traditional-deployments/static/add-deployment-specs-for-traditional-ssh-deployments-03.png differ diff --git a/docs/first-gen/continuous-delivery/traditional-deployments/static/add-deployment-specs-for-traditional-ssh-deployments-04.png b/docs/first-gen/continuous-delivery/traditional-deployments/static/add-deployment-specs-for-traditional-ssh-deployments-04.png new file mode 100644 index 00000000000..0e8ea0dd3ec Binary files /dev/null and b/docs/first-gen/continuous-delivery/traditional-deployments/static/add-deployment-specs-for-traditional-ssh-deployments-04.png differ diff --git a/docs/first-gen/continuous-delivery/traditional-deployments/static/add-deployment-specs-for-traditional-ssh-deployments-05.png b/docs/first-gen/continuous-delivery/traditional-deployments/static/add-deployment-specs-for-traditional-ssh-deployments-05.png new file mode 100644 index 00000000000..7759c2ac723 Binary files /dev/null and b/docs/first-gen/continuous-delivery/traditional-deployments/static/add-deployment-specs-for-traditional-ssh-deployments-05.png differ diff --git a/docs/first-gen/continuous-delivery/traditional-deployments/static/add-deployment-specs-for-traditional-ssh-deployments-06.png b/docs/first-gen/continuous-delivery/traditional-deployments/static/add-deployment-specs-for-traditional-ssh-deployments-06.png new file mode 100644 index 00000000000..2fd6d84aefb Binary files /dev/null and b/docs/first-gen/continuous-delivery/traditional-deployments/static/add-deployment-specs-for-traditional-ssh-deployments-06.png differ diff --git a/docs/first-gen/continuous-delivery/traditional-deployments/static/add-deployment-specs-for-traditional-ssh-deployments-07.png b/docs/first-gen/continuous-delivery/traditional-deployments/static/add-deployment-specs-for-traditional-ssh-deployments-07.png new file mode 100644 index 00000000000..f70e8685fa9 Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/traditional-deployments/static/add-deployment-specs-for-traditional-ssh-deployments-07.png differ diff --git a/docs/first-gen/continuous-delivery/traditional-deployments/static/create-a-basic-workflow-for-traditional-ssh-deployments-10.png b/docs/first-gen/continuous-delivery/traditional-deployments/static/create-a-basic-workflow-for-traditional-ssh-deployments-10.png new file mode 100644 index 00000000000..ce24c62f631 Binary files /dev/null and b/docs/first-gen/continuous-delivery/traditional-deployments/static/create-a-basic-workflow-for-traditional-ssh-deployments-10.png differ diff --git a/docs/first-gen/continuous-delivery/traditional-deployments/static/create-a-basic-workflow-for-traditional-ssh-deployments-11.png b/docs/first-gen/continuous-delivery/traditional-deployments/static/create-a-basic-workflow-for-traditional-ssh-deployments-11.png new file mode 100644 index 00000000000..17ffcd57b0f Binary files /dev/null and b/docs/first-gen/continuous-delivery/traditional-deployments/static/create-a-basic-workflow-for-traditional-ssh-deployments-11.png differ diff --git a/docs/first-gen/continuous-delivery/traditional-deployments/static/create-a-basic-workflow-for-traditional-ssh-deployments-12.png b/docs/first-gen/continuous-delivery/traditional-deployments/static/create-a-basic-workflow-for-traditional-ssh-deployments-12.png new file mode 100644 index 00000000000..596b1ce39e7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/traditional-deployments/static/create-a-basic-workflow-for-traditional-ssh-deployments-12.png differ diff --git a/docs/first-gen/continuous-delivery/traditional-deployments/static/create-a-basic-workflow-for-traditional-ssh-deployments-13.png b/docs/first-gen/continuous-delivery/traditional-deployments/static/create-a-basic-workflow-for-traditional-ssh-deployments-13.png new file mode 100644 index 00000000000..6d21d20ec1d Binary files /dev/null and 
b/docs/first-gen/continuous-delivery/traditional-deployments/static/create-a-basic-workflow-for-traditional-ssh-deployments-13.png differ diff --git a/docs/first-gen/continuous-delivery/traditional-deployments/static/define-your-traditional-ssh-target-infrastructure-08.png b/docs/first-gen/continuous-delivery/traditional-deployments/static/define-your-traditional-ssh-target-infrastructure-08.png new file mode 100644 index 00000000000..34f936e6df7 Binary files /dev/null and b/docs/first-gen/continuous-delivery/traditional-deployments/static/define-your-traditional-ssh-target-infrastructure-08.png differ diff --git a/docs/first-gen/continuous-delivery/traditional-deployments/static/define-your-traditional-ssh-target-infrastructure-09.png b/docs/first-gen/continuous-delivery/traditional-deployments/static/define-your-traditional-ssh-target-infrastructure-09.png new file mode 100644 index 00000000000..628beacd756 Binary files /dev/null and b/docs/first-gen/continuous-delivery/traditional-deployments/static/define-your-traditional-ssh-target-infrastructure-09.png differ diff --git a/docs/first-gen/continuous-delivery/traditional-deployments/traditional-deployments-overview.md b/docs/first-gen/continuous-delivery/traditional-deployments/traditional-deployments-overview.md new file mode 100644 index 00000000000..3adf53588d6 --- /dev/null +++ b/docs/first-gen/continuous-delivery/traditional-deployments/traditional-deployments-overview.md @@ -0,0 +1,27 @@ +--- +title: Traditional (SSH) Deployments How-tos +description: An overview of traditional deployments using Harness. +sidebar_position: 10 +helpdocs_topic_id: 6pwni5f9el +helpdocs_category_id: td451rmlr3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The following topics discuss how to perform Traditional deployments using application package files and a runtime environment (Tomcat, JBoss) in Harness. 
+ +These deployments are different from Harness deployments using container orchestration platforms like [Kubernetes](https://docs.harness.io/article/7in9z2boh6-kubernetes-quickstart), [Helm](https://docs.harness.io/article/2aaevhygep-helm-quickstart), [Pivotal](https://docs.harness.io/article/hy819vmsux-pivotal-cloud-foundry-quickstart), [AWS ECS](https://docs.harness.io/article/j39azkrevm-aws-ecs-deployments), and [Azure](../azure-deployments/aks-howtos/azure-deployments-overview.md). + +Traditional deployments involve obtaining an application package from an artifact source, such as a WAR file in an AWS S3 bucket, and deploying it to a target host, such as an AWS AMI. + +For an overview, see [Traditional Deployments (SSH) Overview](../concepts-cd/deployment-types/traditional-deployments-ssh-overview.md). + +Traditional Deployments How-tos: + +* [Connect to Your Repos and Target SSH Platforms](connect-to-your-target-ssh-platform.md) +* [Add Artifacts and App Stacks for Traditional (SSH) Deployments](add-artifacts-for-ssh-deployments.md) +* [Add Scripts for Traditional (SSH) Deployments](add-deployment-specs-for-traditional-ssh-deployments.md) +* [Define Your Traditional (SSH) Target Infrastructure](define-your-traditional-ssh-target-infrastructure.md) +* [Create Default Application Directories and Variables](https://docs.harness.io/article/lgg12f0yry-set-default-application-directories-as-variables) +* [Create a Basic Workflow for Traditional (SSH) Deployments](create-a-basic-workflow-for-traditional-ssh-deployments.md) + diff --git a/docs/first-gen/sample.md b/docs/first-gen/sample.md index 48f5dcee87c..221f332da27 100644 --- a/docs/first-gen/sample.md +++ b/docs/first-gen/sample.md @@ -1,5 +1,8 @@ -# Harness FirstGen +--- +title: FirstGen Docs (Under Construction) +sidebar_position: 1 +--- -FirstGen docs will be available here soon. +Harness FirstGen docs are currently under construction. 
In the meantime, you can find existing docs at: [https://docs.harness.io/category/yj3d4lvxn0-harness-firstgen](https://docs.harness.io/category/yj3d4lvxn0-harness-firstgen). \ No newline at end of file diff --git a/docs/getting-started/_category_.json b/docs/getting-started/_category_.json new file mode 100644 index 00000000000..72b81aa83fb --- /dev/null +++ b/docs/getting-started/_category_.json @@ -0,0 +1 @@ +{"label": "Get started", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Get started"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "kx4hs8bn38"}} \ No newline at end of file diff --git a/docs/getting-started/harness-first-gen-vs-harness-next-gen.md b/docs/getting-started/harness-first-gen-vs-harness-next-gen.md new file mode 100644 index 00000000000..a8f4809e66f --- /dev/null +++ b/docs/getting-started/harness-first-gen-vs-harness-next-gen.md @@ -0,0 +1,43 @@ +--- +title: Harness FirstGen vs Harness NextGen +description: Compare the two versions of the Harness product suite. +# sidebar_position: 2 +helpdocs_topic_id: 1fjmm4by22 +helpdocs_category_id: kx4hs8bn38 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness has two versions of its product suite. + +**Harness NextGen** is Harness' new version with a redesigned experience and new Continuous Integration, Feature Flags, Security Testing Orchestration, Service Reliability Management, Cloud Cost Management, and Chaos Engineering modules. + + If possible, sign up with Harness NextGen. Eventually, all Harness FirstGen accounts will migrate to Harness NextGen. + +![Harness NextGen](./static/harness-first-gen-vs-harness-next-gen-18.png) + + **Harness FirstGen** is the Harness version that's been around for years, covering all of the common platforms. 
+ +![](./static/harness-first-gen-vs-harness-next-gen-19.png) + +Documentation for FirstGen features is located under the [FirstGen Docs](https://docs.harness.io/category/yj3d4lvxn0-harness-firstgen) section of docs.harness.io. The documentation in all other sections applies only to NextGen features. + +Review the following supported platforms and technologies topics to see which version to use today: + +* [FirstGen Supported Platforms and Technologies](https://docs.harness.io/article/220d0ojx5y-supported-platforms) +* [NextGen Supported Platforms and Technologies](supported-platforms-and-technologies.md#continuous-delivery-cd) + +### Mapping FirstGen to NextGen entities + +Here's a diagram that shows how FirstGen entities like Services, Environments, Workflows, and Pipelines are represented in NextGen: + +![](./static/harness-first-gen-vs-harness-next-gen-20.png) + +The following table maps the entities based on how you use them for deployment: + + + +| | **FirstGen** | **NextGen** | +| --- | --- | --- | +| **What I'm deploying.** | Service | Service | +| **Where I'm deploying it.** | Environment | Environment | +| **How I'm deploying it.** | Workflow | Execution | +| **My release process.** | Pipelines | Pipelines | + diff --git a/docs/getting-started/harness-platform-architecture.md b/docs/getting-started/harness-platform-architecture.md new file mode 100644 index 00000000000..71671b651d8 --- /dev/null +++ b/docs/getting-started/harness-platform-architecture.md @@ -0,0 +1,45 @@ +--- +title: Harness Platform architecture +description: Harness Platform overview. The Harness Platform is a self-service CI/CD platform that enables end-to-end software delivery. 
The Platform includes modules to help you build, test, deploy, and verify s… +# sidebar_position: 2 +helpdocs_topic_id: len9gulvh1 +helpdocs_category_id: kx4hs8bn38 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The Harness Platform is a self-service CI/CD platform that enables end-to-end software delivery. The Platform includes the following modules to help you build, test, deploy, and verify software: + +* Continuous Delivery +* Continuous Integration +* Feature Flags +* Cloud Cost Management +* Service Reliability Management +* Security Testing Orchestration +* Chaos Engineering + +Watch the following video to learn about some of the Harness modules: + +#### Harness Platform components + +The Harness Platform has two components: + +* **Harness Manager:** Harness Manager is where your CI/CD and other configurations are stored and your pipelines are managed. Your pipelines can be managed purely through Git as well. +Pipelines are triggered manually in the Harness Manager or automatically in response to Git events, schedules, new artifacts, and so on. +Harness Manager is available either as SaaS (running in the Harness cloud) or as self-managed (running in your infrastructure). +* **Harness Delegate:** The Harness Delegate is a software service you install in your environment. It connects to the Harness Manager and performs tasks using your container orchestration platforms, artifact repositories, monitoring systems, etc. The Delegate is key to enabling Harness to perform CI/CD tasks, but you don't need to install it right away. You can install the Delegate as part of the flow when setting up your Pipelines or Connectors. For more information, go to [Delegates Overview](https://docs.harness.io/article/2k7lnc7lvl-delegates-overview). 
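
As a rough illustration of what "a software service you install in your environment" means in practice, a Delegate running in Kubernetes is simply a workload you apply to your cluster. The manifest below is a simplified, hypothetical sketch; the name, namespace, and image tag are placeholders, not the actual manifest Harness generates for your account:

```yaml
# Hypothetical, trimmed-down sketch of a Delegate workload in Kubernetes.
# The real manifest is generated by Harness Manager for your account and
# also carries the account identifier, a delegate token, and sizing details.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-delegate            # placeholder name
  namespace: harness-delegate-ng
spec:
  replicas: 1
  selector:
    matchLabels:
      harness.io/name: example-delegate
  template:
    metadata:
      labels:
        harness.io/name: example-delegate
    spec:
      containers:
        - name: delegate
          image: harness/delegate:latest   # placeholder image tag
          env:
            - name: MANAGER_HOST_AND_PORT  # where the Delegate connects back to
              value: https://app.harness.io
```

Once applied with `kubectl apply -f`, a Delegate like this registers itself with the Harness Manager and starts picking up tasks.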
+
+![Harness Delegate overview](./static/harness-platform-architecture-00.png)
+
+### Harness editions
+
+Harness is available in the following editions to meet different users' needs:
+
+* **Enterprise:** This is our enterprise version, licensed by annual subscription based on your usage needs. It supports flexible scaling, custom integrations, and extended data analysis. It includes 24/7 support.
+* **Team:** Designed for growing teams, this version provides most Harness Enterprise features at lower per-usage pricing. It limits or excludes some integrations and enterprise security features, and limits real-time support to standard business hours.
+* **Free:** This is a free-forever edition with almost all Harness Enterprise features (excluding unlimited Services and license-based service Instances scaling).
+* **Community:** This version is a free-forever, open, on-premises edition. It does not have RBAC, audit trails, governance, or advanced security. See [Harness CD Community Edition Overview](https://docs.harness.io/article/yhyyq0v0y4-harness-community-edition-overview).
+
+If you move from the full-featured Enterprise trial to the free Community Edition, you might need to remove or adjust any premium features you've configured. For these migrations, please [contact Harness](https://harness.io/company/contact-sales). Support for Harness Community is available through the [Harness Community Forum](https://community.harness.io/).
+
+For a detailed comparison of the Harness editions, see the [Harness Pricing](https://harness.io/pricing/?module=cd) page.
+
diff --git a/docs/getting-started/learn-harness-key-concepts.md b/docs/getting-started/learn-harness-key-concepts.md
new file mode 100644
index 00000000000..aefb85aaba4
--- /dev/null
+++ b/docs/getting-started/learn-harness-key-concepts.md
@@ -0,0 +1,173 @@
+---
+title: Key concepts
+description: Before you begin using Harness modules, you should be familiar with the key concepts. Account.
A Harness account is the top-level entity under which everything is organized. Within an account you…
+# sidebar_position: 2
+helpdocs_topic_id: hv2758ro4e
+helpdocs_category_id: kx4hs8bn38
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Before you begin using Harness modules, you should be familiar with the key concepts.
+
+### Account
+
+A Harness account is the top-level entity under which everything is organized.
+
+Within an account you have organizations, and within organizations you have projects. You can add resources at the account level, and also at the organization and project levels.
+
+All organizations and projects in the account can use the account's resources.
+
+All projects in the organization can use the org's resources.
+
+**Why is this great?** Each team can manage its resources within its project and not have to bother account admins every time they want to add a Connector or a secret. Projects make teams independent. This is part of Harness' democratization goals for developers.
+
+See a visual example:
+
+![](./static/learn-harness-key-concepts-04.png)
+
+### Organizations and Projects
+
+Harness Organizations (Orgs) allow you to group projects that share the same goal. For example, all projects for a business unit or division.
+
+Within each Org you can add several Harness Projects.
+
+See a visual example:
+
+![](./static/learn-harness-key-concepts-05.png)
+
+A Harness Project contains Harness Pipelines, users, and resources that share the same goal. For example, a Project could represent a business unit, division, or simply a development project for an app.
+
+See a visual example:
+
+![](./static/learn-harness-key-concepts-06.png)
+
+Think of Projects as a common space for managing teams working on similar technologies. A space where the team can work independently and not need to bother account admins or even org admins when new entities like Connectors, Delegates, or secrets are needed.
+
+Much like account-level roles, project members can be assigned Project Admin, Member, and Viewer roles.
+
+See a visual example:
+
+![](./static/learn-harness-key-concepts-07.png)
+
+Project users have at least view access to all configuration and runtime data of a Project and share the same assets (Environments, Services, Infrastructure, etc.).
+
+See [Projects and Organizations](https://docs.harness.io/article/7fibxie636-projects-and-organizations).
+
+### Product Modules
+
+Your project can add Harness products as modules, such as Continuous Integration or Continuous Delivery.
+
+See a visual example:
+
+![](./static/learn-harness-key-concepts-08.png)
+
+### Pipelines
+
+Typically, a Pipeline is an end-to-end process that delivers a new version of your software. But a Pipeline can be much more: a Pipeline can be a cyclical process that includes integration, delivery, operations, testing, deployment, real-time changes, and monitoring.
+
+See a visual example:
+
+![](./static/learn-harness-key-concepts-09.png)
+
+For example, a Pipeline can use the CI module to build, test, and push code, and then a CD module to deploy the artifact to your production infrastructure.
+
+### Pipeline Studio
+
+You build Pipelines in Pipeline Studio.
+
+You can create Pipelines visually or using code, and switch back and forth as needed.
+
+| **Visual** | **YAML** |
+| --- | --- |
+
+See [Harness YAML Quickstart](https://docs.harness.io/article/1eishcolt3-harness-yaml-quickstart).
+
+Pipeline Studio guides you in setting up and running your Pipelines with ready-to-use steps.
+
+See a visual example:
+
+![](./static/learn-harness-key-concepts-10.png)
+
+### Stages
+
+A Stage is a subset of a Pipeline that contains the logic to perform one major segment of the Pipeline process. Stages are based on the different milestones of your Pipeline, such as building, approving, and delivering.
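
To make the Pipeline, Stage, and step hierarchy concrete, here is a minimal, hypothetical sketch in Harness-style YAML. The identifiers and step types are illustrative only; see the Harness YAML Quickstart for the authoritative schema:

```yaml
# Illustrative sketch: a Pipeline with a build stage and a deploy stage.
pipeline:
  name: Example Build and Deploy     # placeholder names throughout
  identifier: example_build_deploy
  stages:
    - stage:
        name: Build
        identifier: build
        type: CI                     # a Continuous Integration stage
        spec:
          execution:
            steps:
              - step:
                  name: Run Tests
                  identifier: run_tests
                  type: Run          # an individual operation in the stage
    - stage:
        name: Deploy
        identifier: deploy
        type: Deployment             # a Continuous Delivery stage
        spec:
          execution:
            steps:
              - step:
                  name: Rollout
                  identifier: rollout
                  type: K8sRollingDeploy
```

Each stage owns the steps it runs; Pipeline Studio renders this same structure visually, and you can switch between the visual and YAML views at any time.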
+
+See a visual example:
+
+![](./static/learn-harness-key-concepts-11.png)
+
+Some stages, like a Deploy stage, use strategies that automatically add the necessary steps.
+
+See a visual example:
+
+![](./static/learn-harness-key-concepts-12.png)
+
+See [Add a Stage](https://docs.harness.io/article/2chyf1acil-add-a-stage).
+
+### Steps and Step Groups
+
+A step is an individual operation in a stage.
+
+Steps can be run in sequential or parallel order.
+
+A Step Group is a collection of steps that share the same logic, such as the same rollback strategy.
+
+See a visual example:
+
+![](./static/learn-harness-key-concepts-13.png)
+
+See [Run Steps in a Step Group](https://docs.harness.io/article/ihnuhrtxe3-run-steps-in-parallel-using-a-step-group).
+
+### Services
+
+A Service represents your microservices and other workloads logically.
+
+A Service is a logical entity to be deployed, monitored, or changed independently.
+
+See a visual example:
+
+![](./static/learn-harness-key-concepts-14.png)
+
+#### Service Instance
+
+Service Instances represent the dynamic instantiation of a service you deploy via Harness.
+
+For example, for a service representing a Docker image, Service Instances are the number of pods running with the Docker image.
+
+See a visual example:
+
+![](./static/learn-harness-key-concepts-15.png)
+
+#### Service Definitions
+
+When a Service is added to a stage in a Pipeline, you define its Service Definition. Service Definitions represent the real artifacts, manifests, and variables of a Service. They are the actual files and variable values.
+
+You can also propagate and override a Service in subsequent stages by selecting its name in that stage's Service settings.
+
+See a visual example:
+
+![](./static/learn-harness-key-concepts-16.png)
+
+See [Monitor Deployments and Services in CD Dashboards](https://docs.harness.io/article/phiv0zaoex-monitor-cd-deployments).
+
+### Environments
+
+Environments represent your deployment targets logically (QA, Prod, etc.).
You can add the same Environment to as many Stages as you need.
+
+#### Infrastructure Definition
+
+Infrastructure Definitions represent an Environment's infrastructure physically. They are the actual clusters, hosts, etc.
+
+### Connectors
+
+Connectors contain the information necessary to integrate and work with 3rd party tools.
+
+Harness uses Connectors at Pipeline runtime to authenticate and perform operations with a 3rd party tool.
+
+For example, a GitHub Connector authenticates with a GitHub account and repo and fetches files as part of a build or deploy Stage in a Pipeline.
+
+See [Connectors How-tos](https://docs.harness.io/category/o1zhrfo8n5).
+
+### Secrets Management
+
+Harness includes built-in Secrets Management to store your encrypted secrets, such as access keys, and use them in your Harness account. Harness integrates with all popular Secrets Managers.
+
+See a visual example:
+
+![](./static/learn-harness-key-concepts-17.png)
+
+See [Harness Secrets Management Overview](https://docs.harness.io/article/hngrlb7rd6-harness-secret-manager-overview).
+
+### YAML and Git
+
+You can sync your Harness account, orgs, and projects with your Git repo to manage Harness entirely from Git.
+
+Harness can respond to Git events to trigger Pipelines and pass in event data.
+
+See [Harness Git Experience Overview](https://docs.harness.io/article/utikdyxgfz-harness-git-experience-overview).
+
+### Recap
+
+What you've seen is how Harness integrates with your resources and tools, and how you can build Pipelines.
+
+Harness helps you to model any kind of software development and delivery process in minutes.
+
+It allows for flexibility while making best practices easy to follow and poor practices difficult to implement.
+
+Most importantly, it takes away the pain points of software development, delivery, verification, etc., and gives you confidence in their management and success.
+
+**What's next?** [Sign up for Harness](https://app.harness.io/auth/#/signup/) and then try a [quickstart](quickstarts.md).
+
diff --git a/docs/getting-started/quickstarts.md b/docs/getting-started/quickstarts.md
new file mode 100644
index 00000000000..1c9f88b0fb2
--- /dev/null
+++ b/docs/getting-started/quickstarts.md
@@ -0,0 +1,44 @@
+---
+title: Tutorials and quickstart guides
+description: New to Harness? The following quickstarts and tutorials will take you from novice to advanced.
+# sidebar_position: 2
+helpdocs_topic_id: u8lgzsi7b3
+helpdocs_category_id: kx4hs8bn38
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+**New to Harness?** The following tutorials and quickstart guides will take you from novice to advanced.
+
+| **Module or feature** | **Tutorials and quickstarts** |
+| --- | --- |
+| Continuous Integration | [CI Pipeline Quickstart](../continuous-integration/ci-quickstarts/ci-pipeline-quickstart.md) helps you create a CI Pipeline that builds and tests code, pushes an artifact to a registry, and performs integration tests. |
+| Continuous Deployment | Select the tutorial for the platform you want to deploy to:<br>• [Kubernetes Deployment Tutorial](https://docs.harness.io/article/knunou9j30-kubernetes-cd-quickstart) shows you how to create a CD Pipeline that deploys a publicly available Docker image and manifest to your target cluster.<br>• [Helm Chart Deployment Tutorial](https://docs.harness.io/article/cifa2yb19a-helm-cd-quickstart) shows you how to create a CD Pipeline that uses a Helm chart to deploy a publicly available Docker image to your target cluster.<br>• [Kustomize Deployment Tutorial](https://docs.harness.io/article/uiqe6jz9o1-kustomize-quickstart) shows you how to create a CD Pipeline that uses a kustomization to deploy multiple variants of a simple public Hello World server.<br>• [Azure ACR to AKS Deployment Tutorial](https://docs.harness.io/article/m7nkbph0ac-azure-cd-quickstart) shows you how to create a CD Pipeline that deploys your ACR image to your target AKS cluster.<br>• [Azure Web Apps Tutorial](https://docs.harness.io/article/muegjde97q-azure-web-apps-tutorial) shows you how to deploy a Docker image or non-containerized artifact for your Azure Web App, deploy to source and target deployment slots, and perform traffic shifting.<br>• [Serverless Lambda Deployment Tutorial](https://docs.harness.io/article/5fnx4hgwsa-serverless-lambda-cd-quickstart) shows you how to deploy a Serverless Lambda application to AWS Lambda using Harness.<br>• [ECS deployment tutorial](https://docs.harness.io/article/vytf6s0kwc-ecs-deployment-tutorial) shows you how to deploy a publicly available Docker image to your Amazon Elastic Container Service (ECS) cluster using a Rolling deployment strategy.<br>• [Custom deployments using Deployment Templates tutorial](https://docs.harness.io/article/6k9t49p6mn-custom-deployment-tutorial) shows you how to use Deployment Templates for non-native deployments (integrations other than those Harness supports out of the box). Deployment Templates use shell scripts to connect to target platforms, obtain target host information, and execute deployment steps. |
+| GitOps | [Harness CD GitOps Quickstart](https://docs.harness.io/article/pptv7t53i9-harness-cd-git-ops-quickstart) shows you how to use Harness native GitOps to deploy services by syncing the Kubernetes manifests in your source repos with your target clusters. |
+| Feature Flags | • [Getting Started with Feature Flags](../feature-flags/1-ff-onboarding/2-ff-getting-started/2-getting-started-with-feature-flags.md) provides a high-level summary of Feature Flags (FF), with video and Quick Guide walkthroughs.<br>• [Java Quickstart](../feature-flags/1-ff-onboarding/2-ff-getting-started/3-java-quickstart.md) helps you create a feature flag and use the feature flag SDK in your Java application. |
+| Cloud Cost Management (CCM) | [Kubernetes Autostopping Quick Start Guide](https://docs.harness.io/article/9l4gblg2we-kubernetes-autostopping-quick-start-guide) shows you how to create and test an AutoStopping rule for your Kubernetes cluster. |
+| Harness CD Community Edition | [Harness Community Edition deployment tutorial](https://docs.harness.io/article/ltvkgcwpum-harness-community-edition-quickstart) shows you how to set up Harness CD Community Edition locally and create a CD Pipeline that deploys a public NGINX image to a local cluster. |
+| Harness YAML | [Harness YAML Quickstart](https://docs.harness.io/article/1eishcolt3-harness-yaml-quickstart) shows you how to build Pipelines using the Harness YAML builder. |
+| Service Reliability Management | • [Change Monitoring Quickstart](https://docs.harness.io/article/fs64l16dbp-change-intelligence-quick-start-change-monitoring) helps you create a Monitored Service and add a Change Source to track change events.<br>• [Health Monitoring Quickstart](https://docs.harness.io/article/m4pbiqb4m9-verify-deployments-in-change-intelligence-quickstart) shows you how to create a Monitored Service and add a Health Source to monitor the health of your service in its associated environment using logs and metrics collected from an APM or logging tool.<br>• [SLOs Quickstart](https://docs.harness.io/article/jnl7w1rryp-slo-quickstart) helps you create an SLO to measure your service's reliability. |
+| Security Testing Orchestration | • [STO Tutorial 1: Stand-Alone Pipelines](../security-testing-orchestration/onboard-sto/30-tutorial-1-standalone-workflows.md) shows you how to set up a Pipeline with a scanner, run scans, analyze the results, and learn the key features of STO.<br>• [Tutorial 2: Integrated STO Pipelines](../security-testing-orchestration/onboard-sto/40-sto-tutorial-2-integrated-sto-ci-cd-workflows.md) shows you how to integrate STO functionality into CI and CD Pipelines. |
+
diff --git a/docs/getting-started/start-a-free-trial.md b/docs/getting-started/start-a-free-trial.md
new file mode 100644
index 00000000000..f896fe959df
--- /dev/null
+++ b/docs/getting-started/start-a-free-trial.md
@@ -0,0 +1,20 @@
+---
+title: Start a free trial
+description: Harness offers a free trial that allows you to try the Harness Software Delivery Platform. Click here to sign up for a free trial. After you have signed up for an account, go to the following topics…
+# sidebar_position: 2
+helpdocs_topic_id: 6z93pdhs28
+helpdocs_category_id: kx4hs8bn38
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness offers a free trial that allows you to try the Harness Software Delivery Platform.
+
+[Click here](https://app.harness.io/auth/#/signup?utm_source=Website&utm_medium=harness-docs&utm_campaign=harness-docs-free-account-cta-main-navigation&utm_content=free-account) to sign up for a free trial.
+ +After you have signed up for an account, go to the following topics to learn more about how to get started: + +* [Harness Platform architecture](harness-platform-architecture.md) +* [Key concepts](learn-harness-key-concepts.md) +* [Tutorials and quickstart guides](quickstarts.md) + diff --git a/docs/getting-started/static/harness-first-gen-vs-harness-next-gen-01.png b/docs/getting-started/static/harness-first-gen-vs-harness-next-gen-01.png new file mode 100644 index 00000000000..5b8102fd7d2 Binary files /dev/null and b/docs/getting-started/static/harness-first-gen-vs-harness-next-gen-01.png differ diff --git a/docs/getting-started/static/harness-first-gen-vs-harness-next-gen-02.png b/docs/getting-started/static/harness-first-gen-vs-harness-next-gen-02.png new file mode 100644 index 00000000000..8eb999f0468 Binary files /dev/null and b/docs/getting-started/static/harness-first-gen-vs-harness-next-gen-02.png differ diff --git a/docs/getting-started/static/harness-first-gen-vs-harness-next-gen-03.png b/docs/getting-started/static/harness-first-gen-vs-harness-next-gen-03.png new file mode 100644 index 00000000000..e0645448ea4 Binary files /dev/null and b/docs/getting-started/static/harness-first-gen-vs-harness-next-gen-03.png differ diff --git a/docs/getting-started/static/harness-first-gen-vs-harness-next-gen-18.png b/docs/getting-started/static/harness-first-gen-vs-harness-next-gen-18.png new file mode 100644 index 00000000000..5b8102fd7d2 Binary files /dev/null and b/docs/getting-started/static/harness-first-gen-vs-harness-next-gen-18.png differ diff --git a/docs/getting-started/static/harness-first-gen-vs-harness-next-gen-19.png b/docs/getting-started/static/harness-first-gen-vs-harness-next-gen-19.png new file mode 100644 index 00000000000..8eb999f0468 Binary files /dev/null and b/docs/getting-started/static/harness-first-gen-vs-harness-next-gen-19.png differ diff --git a/docs/getting-started/static/harness-first-gen-vs-harness-next-gen-20.png 
b/docs/getting-started/static/harness-first-gen-vs-harness-next-gen-20.png new file mode 100644 index 00000000000..e0645448ea4 Binary files /dev/null and b/docs/getting-started/static/harness-first-gen-vs-harness-next-gen-20.png differ diff --git a/docs/getting-started/static/harness-platform-architecture-00.png b/docs/getting-started/static/harness-platform-architecture-00.png new file mode 100644 index 00000000000..05a3e5db71e Binary files /dev/null and b/docs/getting-started/static/harness-platform-architecture-00.png differ diff --git a/docs/getting-started/static/learn-harness-key-concepts-04.png b/docs/getting-started/static/learn-harness-key-concepts-04.png new file mode 100644 index 00000000000..a5af7e2d424 Binary files /dev/null and b/docs/getting-started/static/learn-harness-key-concepts-04.png differ diff --git a/docs/getting-started/static/learn-harness-key-concepts-05.png b/docs/getting-started/static/learn-harness-key-concepts-05.png new file mode 100644 index 00000000000..59cd569b719 Binary files /dev/null and b/docs/getting-started/static/learn-harness-key-concepts-05.png differ diff --git a/docs/getting-started/static/learn-harness-key-concepts-06.png b/docs/getting-started/static/learn-harness-key-concepts-06.png new file mode 100644 index 00000000000..0ac23dc2afc Binary files /dev/null and b/docs/getting-started/static/learn-harness-key-concepts-06.png differ diff --git a/docs/getting-started/static/learn-harness-key-concepts-07.png b/docs/getting-started/static/learn-harness-key-concepts-07.png new file mode 100644 index 00000000000..28b5604aa09 Binary files /dev/null and b/docs/getting-started/static/learn-harness-key-concepts-07.png differ diff --git a/docs/getting-started/static/learn-harness-key-concepts-08.png b/docs/getting-started/static/learn-harness-key-concepts-08.png new file mode 100644 index 00000000000..27c6b8a4a32 Binary files /dev/null and b/docs/getting-started/static/learn-harness-key-concepts-08.png differ diff --git 
a/docs/getting-started/static/learn-harness-key-concepts-09.png b/docs/getting-started/static/learn-harness-key-concepts-09.png new file mode 100644 index 00000000000..8bba36ebc6c Binary files /dev/null and b/docs/getting-started/static/learn-harness-key-concepts-09.png differ diff --git a/docs/getting-started/static/learn-harness-key-concepts-10.png b/docs/getting-started/static/learn-harness-key-concepts-10.png new file mode 100644 index 00000000000..1c0f5269528 Binary files /dev/null and b/docs/getting-started/static/learn-harness-key-concepts-10.png differ diff --git a/docs/getting-started/static/learn-harness-key-concepts-11.png b/docs/getting-started/static/learn-harness-key-concepts-11.png new file mode 100644 index 00000000000..e83534177c6 Binary files /dev/null and b/docs/getting-started/static/learn-harness-key-concepts-11.png differ diff --git a/docs/getting-started/static/learn-harness-key-concepts-12.png b/docs/getting-started/static/learn-harness-key-concepts-12.png new file mode 100644 index 00000000000..2dded4d2af3 Binary files /dev/null and b/docs/getting-started/static/learn-harness-key-concepts-12.png differ diff --git a/docs/getting-started/static/learn-harness-key-concepts-13.png b/docs/getting-started/static/learn-harness-key-concepts-13.png new file mode 100644 index 00000000000..ba8f28b7496 Binary files /dev/null and b/docs/getting-started/static/learn-harness-key-concepts-13.png differ diff --git a/docs/getting-started/static/learn-harness-key-concepts-14.png b/docs/getting-started/static/learn-harness-key-concepts-14.png new file mode 100644 index 00000000000..65bda8e7ee1 Binary files /dev/null and b/docs/getting-started/static/learn-harness-key-concepts-14.png differ diff --git a/docs/getting-started/static/learn-harness-key-concepts-15.png b/docs/getting-started/static/learn-harness-key-concepts-15.png new file mode 100644 index 00000000000..9bc7b891a50 Binary files /dev/null and 
b/docs/getting-started/static/learn-harness-key-concepts-15.png differ diff --git a/docs/getting-started/static/learn-harness-key-concepts-16.png b/docs/getting-started/static/learn-harness-key-concepts-16.png new file mode 100644 index 00000000000..3ae16a96fb9 Binary files /dev/null and b/docs/getting-started/static/learn-harness-key-concepts-16.png differ diff --git a/docs/getting-started/static/learn-harness-key-concepts-17.png b/docs/getting-started/static/learn-harness-key-concepts-17.png new file mode 100644 index 00000000000..1f48dcfe55e Binary files /dev/null and b/docs/getting-started/static/learn-harness-key-concepts-17.png differ diff --git a/docs/getting-started/supported-platforms-and-technologies.md b/docs/getting-started/supported-platforms-and-technologies.md new file mode 100644 index 00000000000..f0cf98d1780 --- /dev/null +++ b/docs/getting-started/supported-platforms-and-technologies.md @@ -0,0 +1,868 @@ +--- +title: Supported platforms and technologies +description: This topic lists Harness support for platforms, methodologies, and related technologies. +# sidebar_position: 2 +helpdocs_topic_id: 1e536z41av +helpdocs_category_id: kx4hs8bn38 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic lists Harness support for platforms, methodologies, and related technologies for NextGen modules. + +### Continuous Delivery (CD) + +The following table lists Harness support for deployment platforms, artifacts, strategies, and related technologies. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+| Deployment Type/Platform | Artifact Servers and Repos | Infrastructure | Strategies | Verification |
+| --- | --- | --- | --- | --- |
+| **Kubernetes** | Docker Hub, ECR, GCR, ACR, Nexus 3 (Docker Repo), Artifactory (Docker Repo), Custom Repository, Google Artifact Registry<br><br>**Manifest Resources:** Kustomize, Helm (see Helm support below), OpenShift Template | **Static Infrastructure:** GKE, AKS, other Kubernetes-compliant clusters, EKS, OpenShift versions 3.11 and 4.x, Minikube, Kubernetes Operations (kops)<br><br>**Dynamic Infrastructure:** GKE using Terraform, AWS EKS using CloudFormation | Rolling, Canary, Blue/Green<br><br>See the note on Kubernetes below for more details. | **Rolling:** Previous Analysis - Synthetic Load<br>**Canary:** Canary Analysis - Realtime Load<br>**Blue/Green:** Previous Analysis - Synthetic Load |
+| **Helm v3.0** | **Docker Image Repo:** Docker Hub, ECR, GCR, ACR, Nexus 3 (Docker Repo), Artifactory (Docker Repo), Custom Repository<br><br>**Helm Chart Package Repo:** Artifactory (as a Helm HTTP server), Nexus (as a Helm HTTP server), OCI (as a Helm HTTP server), AWS S3, GCS, HTTP server<br><br>**Helm Source Repo:** GitHub, GitLab, Bitbucket, Code Commit (not certified), Google Cloud Source Repository (not certified) | **Static Infrastructure:** GKE, AKS, other Kubernetes-compliant clusters, EKS, OpenShift v4.x, Minikube, Kubernetes Operations (kops)<br><br>**Dynamic Infrastructure:** GKE using Terraform, AWS EKS using CloudFormation | **Using Harness Kubernetes:** Rolling, Canary, Blue/Green<br><br>**Using the native Helm command:** Basic along with steady state check | Previous Analysis - Synthetic Load |
+| **Serverless Lambda** | Artifactory (ZIP files), AWS ECR (Docker images) | AWS Lambda | Rolling | Previous Analysis - Synthetic Load |
+| **Azure Web App** | Container and non-container: Docker Registry, Artifactory, Azure Container Registry, Nexus 3, GCR, AWS ECR | **Static Infrastructure:** Azure App Services | Canary, Blue/Green, Basic | **Basic:** Previous Analysis - Synthetic Load<br>**Canary:** Canary Analysis - Realtime Load<br>**Blue/Green:** Previous Analysis - Synthetic Load |
+| **Secure Shell (SSH)** | Non-container: Artifactory, Nexus 3, Custom, Jenkins, AWS S3 | **Static Infrastructure:** AWS, Physical Data Center (agnostic support for VMs on any platform), Azure, GCP (supported under Physical Data Center) | Canary, Rolling, Basic | **Basic:** Previous Analysis - Synthetic Load<br>**Canary:** Canary Analysis - Realtime Load<br>**Rolling:** Previous Analysis - Synthetic Load |
+| **Windows Remote Management (WinRM)** | Non-container: Artifactory, Nexus 3, Custom, Jenkins, AWS S3 | **Static Infrastructure:** AWS, Physical Data Center (agnostic support for VMs on any platform), Azure, GCP (supported under Physical Data Center) | Canary, Rolling, Basic | Previous Analysis - Synthetic Load |
+| **AWS ECS** | Docker Registry, Artifactory, Nexus 3, Custom, GCR, ECR, ACR | **Static Infrastructure:** AWS ECS | Canary, Rolling, Blue/Green, Blank | **Deployment Type - EC2:**<br>Canary: Canary Analysis - Realtime Load<br>Blue/Green: Previous Analysis - Synthetic Load<br>Rolling: Previous Analysis - Synthetic Load<br><br>**Deployment Type - Fargate:** same strategy support as EC2. For Fargate, the `complete-docker-id` must be present in the monitoring provider. |
+
+#### Deployment notes
+
+The following notes clarify support of some platform features.
+
+##### Kubernetes
+
+See [What Can I Deploy in Kubernetes?](https://docs.harness.io/article/efnlvytc6l-what-can-i-deploy-in-kubernetes).
+
+##### Kubernetes version support
+
+The following versions are tested and supported for Kubernetes Canary, Rolling, and Blue/Green deployments:
+
+* 1.13.0
+* 1.14.0
+* 1.15.0
+* 1.16.0
+* 1.17.0
+* 1.18.0
+* 1.19.4
+* 1.20.0
+* 1.21.0
+* 1.22.0
+* 1.23.0
+* 1.24.3
+
+For details on other tools and versions included in Harness, see [SDKs installed with the Delegate](#sd_ks_installed_with_the_delegate).
+
+Guidelines:
+
+* Harness officially supports the 3 previous versions from the last stable release. For example, the current most recent stable release is 1.24.3, so Harness supports 1.23, 1.22, and 1.21.
+* Harness supports any other versions of Kubernetes you are using on a best-effort basis.
+* Harness commits to supporting new minor versions within 3 months of the first stable release. For example, if the stable release of 1.24.3 occurs on August 15th, Harness will support it for compatibility by November 15th.
+
+##### Helm
+
+Helm chart dependencies are not supported in Git source repositories (Harness [Code Repo Connectors](https://docs.harness.io/category/xyexvcc206)). Helm chart dependencies are supported in Helm Chart Repositories.
+
+##### Artifact servers, repos, and artifacts
+
+Harness uses **Metadata only** when downloading artifact sources.
+
+For pulling Docker images from Docker repos, Harness has a limit of 10,000 images for private Docker repos, and 250 for public (no username or password required) Docker repos.
+ +The following table lists Harness integrations and their artifact source support: + + + +| | | | | | | | | | | | | +| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | +| | **Docker Hub** | **ECR** | **GCR** | **ACR** | **Artifactory** | **Nexus 3** | **Custom** | **Google Artifact Registry** | **Github Artifact Registry** | **Jenkins** | **AWS S3** | +| **Kubernetes** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | +| **Helm** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | | | +| **AWS ECS** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | | | +| **Azure Web Apps** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | | | | +| **SSH** | | | | | ✅ | ✅ | ✅ | | | ✅ | ✅ | +| **WinRM** | | | | | ✅ | ✅ | ✅ | | | ✅ | ✅ | +| **Serverless** | | ✅ | | | ✅ | | | | | | ✅ | + +##### Manifest and Config file Store Support + +The following table lists where you can store your manifests or config files for each integration. + + + +| | | | | | | | | | | | | +| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | +| | **Github** | **Gitlab** | **Bitbucket** | **Harness Filestore** | **Any Git** | **OCI Helm** | **HTTP Helm** | **AWS S3** | **Custom** | **Google Cloud Storage** | **Inherit from manifest** | +| **Kubernetes** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | +| **Values YAML** | ✅ | ✅ | ✅ | ✅ | ✅ | | | | ✅ | | ✅ | +| **Kustomize** | ✅ | ✅ | ✅ | ✅ | ✅ | | | | | | | +| **Kustomize****Patches** | ✅ | ✅ | ✅ | ✅ | ✅ | | | | | | ✅ | +| **Openshift****Template** | ✅ | ✅ | ✅ | ✅ | ✅ | | | | ✅ | | | +| **Openshift****Params** | ✅ | ✅ | ✅ | ✅ | ✅ | | | | ✅ | | | +| **AWS ECS** | ✅ | ✅ | ✅ | ✅ | ✅ | | | | | | ✅ | +| **Helm Chart** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | +| **Serverless.com** | ✅ | ✅ | ✅ | | ✅ | | | | | | | +| **SSH** | | | | ✅ | | | | | | | | +| **WinRM** | | | | ✅ | | | | | | | | +| **Azure Web Apps** | | | | ✅ | | | | | | | | + +##### Terraform version support + +Harness does not include Terraform on the Harness Delegate. 
You must install Terraform on the Delegate when using Terraform in Harness. For more information, go to [Terraform How-tos](https://docs.harness.io/article/w6i5f7cpc9-terraform-how-tos). + +Harness supports the following Terraform versions: + +* v1.1.9 +* v1.0.0 +* v0.15.5 +* v0.15.0 +* v0.14.0 + +Some Harness features might require specific Terraform versions. + +##### Azure AKS clusters + +To use an AKS cluster for deployment, the AKS cluster must have local accounts enabled (AKS property `disableLocalAccounts=false`). + +##### AWS and Azure GovCloud + +Harness is now certified in Azure GovCloud and AWS GovCloud. + +### GitOps + +Harness GitOps lets you perform GitOps deployments in Harness. You define the desired state of the service you want to deploy in your Git manifest, and then use Harness GitOps to sync state with your live Kubernetes cluster. + +GitOps supports the following: + +* Source Repositories: + + All Git providers. + + HTTP Helm repos. +* Target clusters: + + Kubernetes clusters hosted on any platform: + - GKE. + - AKS. + - EKS. + - Other Kubernetes-compliant clusters. + - OpenShift version 3.11, 4.x. + - Minikube. + - Kubernetes Operations (kops). +* Repository Certificates: + + TLS Certificate (PEM format). + + SSH Known Host Entry. +* GnuPG Keys: + + GnuPG Public Key Data (ASCII-armored). + +See [Harness GitOps Basics](https://newdocs.helpdocs.io/article/w1vg9l1j7q-harness-git-ops-basics) and [Harness CD GitOps Quickstart](https://newdocs.helpdocs.io/article/pptv7t53i9-harness-cd-git-ops-quickstart) + +### Continuous Integration (CI) + +The following table lists Harness support for CI platforms, repos, registries, and related technologies. + + + + + + + + + + + + + + + + + + + + +
+| Source Code Management (SCM) | Artifact Repos | Container Registries | Build Farm Platforms | Testing Frameworks Supported |
+| --- | --- | --- | --- | --- |
+| GitLab, Bitbucket, GitHub | Artifactory, AWS S3, GCP GCS | Amazon Elastic Container Registry (ECR), Google Container Registry (GCR), Docker registries (e.g. Docker Hub), Other | Kubernetes cluster (platform agnostic), Amazon Elastic Kubernetes Service (Amazon EKS), Google Kubernetes Engine (GKE), AWS Linux and Windows VMs, Red Hat OpenShift 4 | Currently: Bazel, Maven, Gradle. More frameworks will be supported soon. |
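To make the testing-framework support above concrete, here is a minimal, hypothetical sketch of a CI test step using one of the supported build tools (Maven). The field names follow Harness CI step YAML, but all identifiers and paths are placeholders to verify against the CI documentation:

```yaml
# Illustrative sketch only: a test step for one of the supported frameworks.
# Identifiers, report paths, and values are placeholders, not defaults.
- step:
    type: RunTests
    name: Run Unit Tests
    identifier: run_unit_tests
    spec:
      language: Java        # assumption: a Java project built with Maven
      buildTool: Maven      # one of the currently supported frameworks
      args: test
      reports:
        type: JUnit
        spec:
          paths:
            - "target/surefire-reports/*.xml"
```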
+ + More frameworks will be supported soon. + +### Continuous Verification + +Harness supports the following metrics and logging platforms. + +#### Metrics providers + +The following table lists Harness support for metrics platforms (APMs). + + + +| Metrics Provider Name | Metric Pack | Deployment Verification | +| --- | --- | --- | +| [AppDynamics](https://docs.harness.io/article/916vrl4l76-verify-deployments-with-app-dynamics) | Business Transactions | Yes | +| [AppDynamics](https://ngdocs.harness.io/article/916vrl4l76) | JVM and Infra Metrics | Supported via Custom Metrics | +| [New Relic](https://docs.harness.io/article/p8lqq2il39-verify-deployments-with-new-relic) | Business Transactions | Yes | +| New Relic | Insights | Supported via Custom Metrics | +| [Google Cloud Operations (GCP)](https://docs.harness.io/article/owqpo59gp5-verify-deployments-with-google-cloud-operations) | Infrastructure Metrics | Yes | +| Google Cloud Operations (GCP) | Custom metrics from explorer | No | +| [Prometheus](https://ngdocs.harness.io/article/e9z7944qhw) | Custom metrics from Prometheus | Yes | +| [Datadog](https://ngdocs.harness.io/article/z3kpdn6vcb) | Docker Infra Metrics | Yes | +| [Dynatrace](https://ngdocs.harness.io/article/eamwqs2x5a) | Performance | Yes | + +#### Log providers + +Most logging platforms are also supported. + + + +| | | +| --- | --- | +| **Log Provider Name** | **Deployment Verification** | +| [Splunk](https://docs.harness.io/article/mvjds2f3hb-verify-deployments-with-splunk) | Yes | +| [Google Cloud Operations (GCP)](https://docs.harness.io/article/owqpo59gp5-verify-deployments-with-google-cloud-operations) | Yes | + +#### Custom health sources + +Harness offers support for all major APM vendors and log providers, but there are cases where a customized APM or log provider is needed. The Custom Health Source lets you customize APMs and log providers of your choice. 
+ +See [Verify Deployments with Custom Health Source](https://docs.harness.io/article/n67y68fopr-verify-deployments-with-custom-health-metrics). + +### Cloud Cost Management + +#### Supported Kubernetes Management Platform + +The following section lists the support for the Kubernetes management platform for CCM: + + + +| | | | +| --- | --- | --- | +| **Technology** | **Supported Platform** | **Pricing** | +| OpenShift 3.11 | GCP | GCP | +| OpenShift 4.3 | AWSOn-Prem | AWSCustom-rate\* | +| Rancher | AWS | Custom-rate\*\* | +| Kops (Kubernetes Operations) | AWS | AWS | +| Tanzu Kubernetes Grid Integrated Edition (TKGI) | On-Prem | Custom-rate\*\*\* | + +\*Cost data is supported for On-Prem OpenShift 4.3. This uses a custom rate. + +\*\*Cost data is supported for K8s workloads on AWS managed by Rancher, but the cost falls back to the custom rate. + +\*\*\*Cost is computed using a custom rate. This can be modified by Harness on request. + +#### Supported ingress controllers for Kubernetes AutoStopping + +The following table lists the ingress controllers supported for Kubernetes AutoStopping: + + + +| | | +| --- | --- | +| **Ingress Controller** | **Extent of Support** | +| Nginx ingress controller | Fully supported | +| HAProxy ingress controller | Fully supported | +| Traefik as ingress gateway | Supported using ingress routes and manually configured middlewares | +| Istio as API gateway | Fully supported | +| Ambassador as API gateway | Supported by manually editing the mapping | + +#### Feature Support Matrix + +This section lists the feature support matrix for the supported cloud platforms: + +##### AWS Service + + + +| | | | | +| --- | --- | --- | --- | +| | **Inventory Dashboard** | **Recommendations** | **AutoStopping** | +| **EC2** | Yes | Coming soon | Yes (With Spot Orchestration) | +| **ECS** | Yes | Coming soon | Yes | +| **EKS** | Yes | Yes | Yes | +| **RDS** | Yes | No | Yes | +| **EBS** | Yes | No | No | +| **Snapshots** | Yes | No | NA | +| **Elastic** 
**IPs** | Yes | No | NA | +| **ASGs** | No | No | Yes (With Spot Orchestration) | + +##### GCP Product + + + +| | | | | +| --- | --- | --- | --- | +| | **Inventory Dashboard** | **Recommendations** | **AutoStopping** | +| **GCE VMs** | Yes | Coming soon | Coming soon | +| **GKE** | Yes | Yes | Yes | + +##### Azure Product + + + +| | | | | +| --- | --- | --- | --- | +| | **Inventory Dashboard** | **Recommendations** | **AutoStopping** | +| **Virtual Machine** | Coming soon | Coming soon | Yes (With Spot Orchestration) | +| **AKS** | Yes | Yes | Yes | + +### Service Reliability Management + +Harness supports the following Health Sources and Change Sources. + +#### Health sources + + A Health Source monitors changes in health trends of the Service using metrics and logs collected from an APM and log provider respectively. + +Harness offers support for all major APM vendors, but there are cases where a customized APM is needed. The [Custom Health Source](https://docs.harness.io/article/n67y68fopr-verify-deployments-with-custom-health-metrics) lets you customize APMs of your choice. + +##### Metrics providers and logging tools + +Currently, Harness supports the following APMs and logging tools: + +* AppDynamics +* Prometheus +* Dynatrace +* Splunk +* Custom Health Source +* Google Cloud Operations (formerly Stackdriver) +* New Relic +* Datadog + +More tools will be added soon. + +#### Change sources + +A Change Source monitors change events related to deployments, infrastructure changes, and incidents. Following Change Sources are supported: + +* Harness CD NextGen +* Harness CD +* PagerDuty + +### Security Testing Orchestration + +See [Security Step Settings Reference](../security-testing-orchestration/sto-techref-category/security-step-settings-reference.md). + +### Feature Flags + +Harness Feature Flags support [client-side and server-side SDKs](../feature-flags/4-ff-sdks/1-sdk-overview/1-client-side-and-server-side-sdks.md) for a number of programming languages. 
+ +#### Client-side SDKs + +The following table lists the Client-side Feature Flag SDKs Harness supports. + + + +| SDK | Documentation | +| --- | --- | +| [Android](https://github.com/harness/ff-android-client-sdk) | [Android SDK Reference](../feature-flags/4-ff-sdks/2-client-sdks/1-android-sdk-reference.md) | +| [iOS](https://github.com/harness/ff-ios-client-sdk) | [iOS SDK Reference](../feature-flags/4-ff-sdks/2-client-sdks/3-ios-sdk-reference.md) | +| [Flutter](https://github.com/harness/ff-flutter-client-sdk) | [Flutter SDK Reference](../feature-flags/4-ff-sdks/2-client-sdks/2-flutter-sdk-reference.md) | +| [Javascript](https://github.com/harness/ff-javascript-client-sdk) | [Javascript SDK Reference](../feature-flags/4-ff-sdks/2-client-sdks/4-java-script-sdk-references.md) | +| [React Native](https://github.com/harness/ff-react-native-client-sdk) | [React Native SDK Reference](../feature-flags/4-ff-sdks/2-client-sdks/5-react-native-sdk-reference.md) | +| [Xamarin](https://github.com/harness/ff-xamarin-client-sdk) | [Xamarin SDK Reference](../feature-flags/4-ff-sdks/2-client-sdks/6-xamarin-sdk-reference.md) | + +#### Server-side SDKs + +The following table lists the Server-side Feature Flag SDKs Harness supports. 
+ + + +| SDK | Documentation | +| --- | --- | +| [.NET](https://github.com/harness/ff-dotnet-server-sdk) | [.NET SDK Reference](../feature-flags/4-ff-sdks/3-server-sdks/4-net-sdk-reference.md) | +| [Go](https://github.com/harness/ff-golang-server-sdk) | [Go SDK Reference](../feature-flags/4-ff-sdks/3-server-sdks/2-feature-flag-sdks-go-application.md) | +| [Java](https://github.com/harness/ff-java-server-sdk) | [Java SDK Reference](../feature-flags/4-ff-sdks/3-server-sdks/3-integrate-feature-flag-with-java-sdk.md) | +| [Node.js](https://github.com/harness/ff-nodejs-server-sdk) | [Node.js SDK Reference](../feature-flags/4-ff-sdks/3-server-sdks/5-node-js-sdk-reference.md) | +| [Python](https://github.com/harness/ff-python-server-sdk) | [Python SDK Reference](../feature-flags/4-ff-sdks/3-server-sdks/7-python-sdk-reference.md) | +| [Ruby](https://github.com/harness/ff-ruby-server-sdk) | [Ruby SDK Reference](../feature-flags/4-ff-sdks/3-server-sdks/8-ruby-sdk-reference.md) | +| [PHP](https://github.com/harness/ff-php-server-sdk) | [PHP SDK Reference](../feature-flags/4-ff-sdks/3-server-sdks/6-php-sdk-reference.md) | + +### Harness Chaos Engineering + +Perform chaos experiments on applications in your infrastructure, such as a Kubernetes cluster. Use predefined or custom, Workflow templates. + +See [Harness Chaos Engineering Basics (Public Preview)](https://docs.harness.io/article/v64rj2maiz-harness-chaos-engineering-basics), [Harness Chaos Engineering Quickstart (Public Preview)](https://docs.harness.io/article/da85u0cbhx-harness-chaos-engineering-quickstart-public-preview). + +### Collaboration + +The following table lists Harness support for collaboration tools. 
+ +Most providers are used in both Pipeline Notification Strategies and User Group notifications: + +* [Add a Pipeline Notification Strategy](https://docs.harness.io/article/4bor7kyimj-notify-users-of-pipeline-events) +* [Send Notifications Using Slack](https://docs.harness.io/article/h5n2oj8y5y-send-notifications-using-slack) +* [Send Notifications to Microsoft Teams](https://docs.harness.io/article/xcb28vgn82-send-notifications-to-microsoft-teams) + + + +| Provider Name | Notification | Approval/Change Management | +| --- | --- | --- | +| [Microsoft Teams](https://docs.harness.io/article/xcb28vgn82) | Yes | N/A | +| [Email](https://docs.harness.io/article/4bor7kyimj) | Yes | N/A | +| [Slack](https://docs.harness.io/article/h5n2oj8y5y) | Yes | N/A | +| [Jira](https://docs.harness.io/article/2lhfk506r8) | Yes | Yes | +| [ServiceNow](https://docs.harness.io/article/h1so82u9ub) | N/A | Yes | +| [PagerDuty](https://docs.harness.io/article/4bor7kyimj) | Yes | N/A | + +### Access control + +The following table lists Harness support for SSO protocols and tools. + +See [Add and Manage Access Control](../feature-flags/1-ff-onboarding/3-ff-security-compliance/1-manage-access-control.md). 
+ + + +| SSO Type | SSO Providers | Authentication Supported | Authorization (Group Linking) Supported | SCIM Provisioning | +| --- | --- | --- | --- | --- | +| [SAML 2.0](https://docs.harness.io/article/mlpksc7s6c) | Okta | Yes | Yes | Yes | +| | Azure Active Directory | Yes | Yes | Yes | +| | Others | Yes | Yes | No | +| | OneLogin | Yes | Yes | Yes | +| [OAuth 2.0](https://docs.harness.io/article/rb33l4x893) | Github | Yes | No | N/A | +| | GitLab | Yes | No | N/A | +| | Bitbucket | Yes | No | N/A | +| | Google | Yes | No | N/A | +| | Azure | Yes | No | N/A | +| | LinkedIn | Yes | No | N/A | +| LDAP (Delegate connectivity needed) | Active Directory | Coming soon | Coming soon | N/A | +| | Open LDAP | Coming soon | Coming soon | N/A | +| | Oracle LDAP | Coming soon | Coming soon | N/A | + +### Secret management + +The following table lists Harness support for cloud platform secrets management services. + +See [Harness Secrets Management Overview](https://docs.harness.io/article/hngrlb7rd6-harness-secret-manager-overview). + +| Provider Name | Key Encryption Support | Encrypted Data Storaged with Harness | Support for Referencing Existing Secrets | +| --- | --- | --- | --- | +| [AWS KMS](https://docs.harness.io/article/pt52h8sb6z) | Yes | Yes | No | +| [AWS Secret Manager](https://docs.harness.io/article/a73o2cg3pe) | Yes | No | Yes | +| [Hashicorp Vault](https://docs.harness.io/article/s65mzbyags) | Yes | No | Yes | +| [Azure Key Vault](https://docs.harness.io/article/53jrd1cv4i) | Yes | No | Yes | +| [Google KMS](https://docs.harness.io/article/cyyym9tbqt) | Yes | Yes | No | + +### Harness Self-Managed Enterprise Edition + +The following table lists the major support features for Harness Self-Managed Enterprise Edition offerings. 
+ + + +| Solution | Supported Platform | Connected\* | HA Supported\*\* | Monitoring | Disaster Recovery | Auto Restart | Features Under Controlled Release | +| --- | --- | --- | --- | --- | --- | --- | --- | +| [Kubernetes Cluster](https://docs.harness.io/category/v313myup55) | Kubernetes - GKE - AKS - EKS | Yes | Yes | Prometheus, Grafana | Supported | Supported | | +| [Virtual Machine (VM)](https://docs.harness.io/category/ubhcaw8n0l) | Linux VM (3 VM minimum) | Yes | Yes | Prometheus, Grafana | Supported | Supported | | + +### SDKs installed with the Delegate + +Harness Delegate includes binaries for the SDKs that are required for deployments with Harness-supported integrations. These include binaries for Helm, ChartMuseum, `kubectl`, Kustomize, and so on. + +##### Kubernetes Deployments + +For Kubernetes deployments, the following SDKs/tools are included in the Delegate. + +* kubectl: v1.13, v1.19 +* Helm: v2.13.1, v3.1.2, v3.8.0 +* Kustomize: v3.5.4, v4.0.0 +* OpenShift: v4.2.16 + +The versions can be found in this public GitHub repo: + +For details on updating the default tool versions, go to [Install Software on the Delegate with Initialization Scripts](https://docs.harness.io/article/yte6x6cyhn-run-scripts-on-delegates). + +For Kubernetes deployments, the following SDKs/tools are certified. + + + +| | | | +| --- | --- | --- | +| **Manifest Type** | **Required Tool/SDK** | **Certified Version** | +| Kubernetes | kubectl | v1.24.3 | +| | go-template | v0.4 | +| Helm | kubectl | v1.24.3 | +| | helm | v3.9.2 | +| Helm (chart is stored in GCS or S3) | kubectl | v1.24.3 | +| | helm | v3.9.2 | +| | chartmuseum | v0.8.2 and v0.12.0 | +| Kustomize | kubectl | v1.24.3 | +| | kustomize | v4.5.4 | +| OpenShift | kubectl | v1.24.3 | +| | oc | v4 | + +##### Native Helm deployments + +For [Native Helm deployments](https://docs.harness.io/article/lbhf2h71at-native-helm-quickstart), the following SDKs/tools are certified. 
+ + + +| | | | +| --- | --- | --- | +| **Manifest Type** | **Required Tool/SDK** | **Certified Version** | +| Helm Chart | helm | v3.9.2 | +| | kubectlRequired if Kubernetes version is 1.16+. | v1.24.3 | + +##### Install a Delegate with custom SDK and 3rd-party tool binaries + +To support customization, Harness provides a Harness Delegate image that does not include any third-party SDK binaries. We call this image the No Tools Image. + +Using the No Tools Image and Delegate YAML, you can install the specific SDK versions you want. You install software on the Delegate using the `INIT_SCRIPT` environment variable in the Delegate YAML. + +For steps on using the No Tools Delegate image and installing specific SDK versions, see [Install a Delegate with 3rd Party Tool Custom Binaries](https://docs.harness.io/article/ql86a0iqta-install-a-delegate-with-3-rd-party-tool-custom-binaries). + +### Supported browsers + +The following browsers are supported: + +* **Chrome**: latest version +* **Firefox**: latest version +* **Safari**: latest version +* All Chromium-based browsers. + +Mobile browsers are not supported. + +#### Supported screen resolution + +Minimum supported screen resolution is 1440x900. 
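The `INIT_SCRIPT` mechanism described in the Delegate section above can be sketched as follows. This is an illustrative fragment of Kubernetes Delegate YAML; the kubectl version and download URL are example values for a No Tools image, not Harness defaults:

```yaml
# Illustrative sketch: using the INIT_SCRIPT environment variable in the
# Delegate YAML to install a specific SDK version (here, kubectl v1.24.3)
# on a No Tools Delegate image. URL and version are assumptions.
env:
  - name: INIT_SCRIPT
    value: |-
      curl -sLO https://dl.k8s.io/release/v1.24.3/bin/linux/amd64/kubectl
      chmod +x kubectl
      mv kubectl /usr/local/bin/kubectl
```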
+ diff --git a/docs/legal/terms-of-use.md b/docs/legal/terms-of-use.md index f90966cf5ba..df9abb79aa5 100644 --- a/docs/legal/terms-of-use.md +++ b/docs/legal/terms-of-use.md @@ -4,7 +4,7 @@ hide_title: true title: Terms of Use editCurrentVersion: false custom_edit_url: null -slug: /legal/terms-of-use/ +# slug: /legal/terms-of-use/ --- # Terms of Use diff --git a/docs/platform/10_Git-Experience/_category_.json b/docs/platform/10_Git-Experience/_category_.json new file mode 100644 index 00000000000..6dc9ce101ad --- /dev/null +++ b/docs/platform/10_Git-Experience/_category_.json @@ -0,0 +1 @@ +{"label": "Git Experience", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Git Experience"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "rv2jo2hoiy"}} \ No newline at end of file diff --git a/docs/platform/10_Git-Experience/configure-git-experience-for-harness-entities.md b/docs/platform/10_Git-Experience/configure-git-experience-for-harness-entities.md new file mode 100644 index 00000000000..3090b57f915 --- /dev/null +++ b/docs/platform/10_Git-Experience/configure-git-experience-for-harness-entities.md @@ -0,0 +1,165 @@ +--- +title: Harness Git Experience Quickstart +description: This topic explains steps to configure Git Experience for Harness Entities. +# sidebar_position: 2 +helpdocs_topic_id: grfeel98am +helpdocs_category_id: rv2jo2hoiy +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This quickstart shows you how to enable and use Git Experience for your Harness resources, such as Pipelines. + +Harness Git Experience lets you store your resources and configurations in Git and pick Git repos as the source of truth. + +### Objectives + +You'll learn how to: + +1. Enable Git Experience for a Pipeline. +2. Create and sync a Pipeline with your Git repo. +3. 
Execute a Pipeline + +### Before you begin + +Make sure you have the following set up Before you begin this quickstart: + +* Make sure you have a Git repo with at least one branch. +* Make sure you have a Git connector with a Personal Access Token (PAT) for your Git account.​​ +* A Personal Access Token (PAT) for your Git account. + + Harness needs the PAT to use the Git platform APIs. + + You add the PAT to Harness as a Text Secret and it is encrypted using a Harness Secret Manager. + + Your Git Personal Access Token is stored in your Harness secret and is a private key to which only you have access. This secret cannot be accessed or referenced by any other user. + + The PAT must have the following scope: + - GitHub:![](./static/configure-git-experience-for-harness-entities-35.png) + - Bitbucket:![](./static/configure-git-experience-for-harness-entities-36.png) + + To enable Git Experience for your resources, make sure that you have Create/Edit permissions for them.​​ + +Make sure your repo has at least one branch, such as main or master. For most Git providers, you simply add a README file to the repo, and the branch is created. + +### Supported Git providers + +The following section lists the support for Git providers for Harness Git Sync:​ + +* GitHub +* Bitbucket Cloud +* Bitbucket Server + +Make sure `feature.file.editor` is not set to `false` in the `bitbucket.properties` file if you are using Bitbucket on-prem. + +### Review: Git experience requirements + +You can store your resources and configurations in Git by selecting the **Remote** option while creating the resources. + +For this, you must specify a Harness Git Connector, a repo, branch details, and a file path. + +This topic explains how to create a remote Pipeline and execute it using Harness Git Experience. + +You can also store your configurations in Harness, by selecting the **Inline** option while creating resources. 
For more information on creating an inline Pipeline, see [Pipelines and Stages](https://docs.harness.io/category/pipelines). + +![](./static/configure-git-experience-for-harness-entities-37.png) +You can store configurations of the following resources in Git: + +* Pipelines +* Input Sets + +Harness tracks where your configuration is kept and manages the whole lifespan of resources by maintaining metadata for each resource. + +### Step 1: Add a remote pipeline + +This quickstart explains how to add a Pipeline and sync it with your Git repo. This is called the Remote option. To add an inline Pipeline, see **Remote** option. To add an inline Pipeline, see [Create a Pipeline](../8_Pipelines/add-a-stage.md#step-1-create-a-pipeline). + +In your Project, click **Pipelines** and then click **Create a Pipeline**. The **Create New Pipeline** settings appear. + +![](./static/configure-git-experience-for-harness-entities-38.png) +Enter a **Name** for your Pipeline. + +Click **Remote**. The additional settings appear to configure Git Experience. + +![](./static/configure-git-experience-for-harness-entities-39.png) +In **Git Connector**, select or create a Git Connector to the repo for your Project. For steps, see [Code Repo Connectors](https://docs.harness.io/category/code-repo-connectors). + +![](./static/configure-git-experience-for-harness-entities-40.png) +Important: Connector must use the Enable API access option and Token**Important**: The Connector must use the Enable API access option and Username and Token authentication. Harness requires the token for API access. Generate the token in your account on the Git provider and add it to Harness as a Secret. 
Next, use the token in the credentials for the Git Connector.​ +![](./static/configure-git-experience-for-harness-entities-41.png) +For GitHub, the token must have the following scopes: +![](./static/configure-git-experience-for-harness-entities-42.png)Here's an example of a GitHub Connector that has the correct settings:​ + +![](./static/configure-git-experience-for-harness-entities-43.png)In **Repository**, select your repository. If your repository isn't listed, enter its name since only a select few repositories are filled here. + +![](./static/configure-git-experience-for-harness-entities-44.png) +Create the repository in Git before entering it in **Select Repository**. Harness does not create the repository for you.In **Git Branch**, select your branch. If your branch isn't listed, enter its name since only a select few branches are filled here. + +![](./static/configure-git-experience-for-harness-entities-45.png) +Create the branch in your repository before entering it in **Git Branch**. Harness does not create the branch for you.Harness auto-populates the **YAML Path**. You can change this path and the file name. All your configurations are stored in Git in the [Harness Folder](harness-git-experience-overview.md#harness-folder). + +Make sure that your YAML path starts with `.harness/` and is unique.Click **Start**. + +The Pipeline Studio is displayed with your repo and branch name. + +![](./static/configure-git-experience-for-harness-entities-46.png) +### Step 2: Add a stage + +Click **Add Stage**. The stage options appear. + +Select a stage type and follow its steps. + +The steps you see depend on the type of stage you selected.​ + +For more information, see [Add Stage](../8_Pipelines/add-a-stage.md). + +Add a step and click **Save**. + +The **Save Pipelines to Git** settings appear. + +![](./static/configure-git-experience-for-harness-entities-47.png) +In **Select Branch to Commit**, commit to an existing or new branch. 
+ +* **Commit to an existing branch**: you can start a pull request if you like. +* **Commit to a new branch**: enter the new branch name. You can start a pull request if you like. + +Click **Save**. Your Pipeline is saved to the repo branch. + +![](./static/configure-git-experience-for-harness-entities-48.png) +Click the YAML file to see the YAML for the Pipeline. + +Edit the Pipeline YAML. For example, change the name of a step. + +Commit your changes to Git. + +Return to Harness and refresh the page.​ + +A Pipeline Updated message appears. + +Click **Update**. + +The changes you made in Git are now applied to Harness.​ + +### Step 3: Execute pipeline + +In your Project, click **Pipelines**. + +Click on your Pipeline. + +Select the branch from which you want to execute your Pipeline. + +![](./static/configure-git-experience-for-harness-entities-49.png) +Click **Run**. + +Your Pipeline is ready to run from the branch you just selected. + +![](./static/configure-git-experience-for-harness-entities-50.png) +Click **Run Pipeline**. + +During Pipeline execution, the configurations of the required resources and any referenced entities like Input Sets, are fetched from Git. 
+ +If the referenced entities exist in the same repo, they are fetched from the same branch that you have selected for Pipeline execution.​ + +If the referenced entities exist in a different repo, they are fetched from the default branch of the repo where the entities are stored.​ + +Harness resolves all the dependencies and then proceeds with Pipeline execution.​ + +### Next steps + +* [Manage Input Sets and Triggers in Simplified Git Experience​](manage-input-sets-in-simplified-git-experience.md) + diff --git a/docs/platform/10_Git-Experience/git-experience-overview.md b/docs/platform/10_Git-Experience/git-experience-overview.md new file mode 100644 index 00000000000..a8f5ef8635c --- /dev/null +++ b/docs/platform/10_Git-Experience/git-experience-overview.md @@ -0,0 +1,111 @@ +--- +title: Harness Git Experience Overview +description: Harness Git Experience lets you store configurations for your resources like Pipelines, Input Sets in Git. You can choose Git as the source of truth and use your Git credentials to access and modify… +# sidebar_position: 2 +helpdocs_topic_id: xl028jo9jk +helpdocs_category_id: rv2jo2hoiy +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness Git Experience lets you store configurations for your resources like Pipelines, Input Sets in Git. You can choose Git as the source of truth and use your Git credentials to access and modify your configurations. + +With Harness Git Experience, you can select the repository and branch from where you want to execute your Pipelines, hence simplifying your Pipeline execution by seamless access to your Harness resources and their configurations stored in Git. 
+ +### Before you begin + +* Make sure you have a Git repo with at least one branch.​ +* Make sure you have a Git connector with a Personal Access Token (PAT) for your Git account.​ + +### Supported Git providers + +The following section lists the support for Git providers for Harness Git Sync:​ + +* GitHub +* Bitbucket Cloud +* Bitbucket Server + +Make sure `feature.file.editor` is not set to `false` in the `bitbucket.properties` file if you are using Bitbucket on-prem. + +### Supported Harness entities + +You can save the following Harness resources in Git using Harness Git Experience: + +* Pipelines +* Input Sets + +### What is Harness Git experience? + +Harness Git Experience lets you choose a Git-First approach for managing Harness configurations as code, using Git as the source of truth. + +You can do this by creating a resource with the **Remote** option and specifying the Git repo and branch where you want to save your configurations. + +For example, you can create a Pipeline by choosing the **Remote** option and save it in Git by providing the repo name and branch name along with the file path. + +![](./static/git-experience-overview-02.png) +Harness Git Experience lets you modify the configurations stored in Git through the Harness UI and save it back to Git. + +You can save the modifications in the existing branch or a new branch through a PR. + +### Harness Git experience workflow + +* When you create a Remote resource in Harness, the configurations are stored in Git. +* You can select the branch from which you want to run the Pipeline.![](./static/git-experience-overview-03.png) +* During Pipeline execution, the configurations of the required resources and any referenced entities like Input Sets, are fetched from Git. +If the referenced entities exist in the same repo, they are fetched from the same branch that you have selected for Pipeline execution. 
+If the referenced entities exist in a different repo, they are fetched from the default branch of the repo where the entities are stored. +* Harness resolves all the dependencies and then proceeds with the Pipeline execution. + +### Key features + +Following are the key features of Harness Git Experience: + +#### Multiple repo support + +Your Harness resources and their configurations can exist in multiple repos. You can choose the repository where you wish to make the modifications before pushing each configuration. At Pipeline execution, Harness pulls them all together to execute your Pipeline as you designed it. This gives you the flexibility to manage your Git repositories in the way you want. + +You can store your configurations in the following ways: + +* Store configuration files along with the code repository. +* Store configuration files in a repository separate from the code. +* Store the prod configurations in one repo, and the non-prod ones in another repo, so that only the selected developers can access prod configs. +* Store the configuration files of different environments in different branches. +* Store the Pipelines in one repository, and other configuration files in another. + +#### Multiple branch support + +Multiple users can make commits to multiple branches for the resources that are synced with the Git Provider. This provides the flexibility for various branching workflows. + +### What can I do with Harness Git experience? + +Harness Git Experience helps you do the following: + +* Store and retrieve your Harness configurations to/from Git. +* Change the Harness configuration just by changing the YAML files in Git. +* Add a remote Pipeline in Harness and it gets added to your specified Git repo and branch. +* Maintain your key Harness resources like Pipelines, Input sets like you maintain code. +* Submit config changes using the Harness Pipeline Studio (GUI and/or YAML) or entirely in Git. 
+* Make Harness Pipeline or resource config changes in a branch, test it in isolation (sandbox), and submit changes to master using Harness Manager or your Git platform. + +### What do I need to enable Harness Git Experience? + +#### Git connector + +A Harness Git Connector is used to sync your Harness Project with your Git repo. You can set up a [Git Connector](https://docs.harness.io/category/code-repo-connectors) first and simply select it when setting up Git Experience, or you can create a Git Connector as part of the Git Experience setup. + +You will need a Harness Git Connector to connect with the Git provider and perform operations like generating a webhook. Your Git Connector credentials are used to commit to Git when operations are performed using the API. + +**Important:** The Connector must use the **Enable API access** option and Username and **Token** authentication. Harness requires the token to access the Git API. Generate the token in your account on the Git provider and add it to Harness as a Secret. Next, use the token in the credentials for the Git Connector. + +For detailed steps to add a Git Connector, see [Code Repo Connectors](https://docs.harness.io/category/code-repo-connectors). + +#### Repository + +Harness configurations are stored in repositories. These configuration files can be kept in the same repository as the code, or they can be kept separate. You can map your resources and configurations to multiple repositories. + +You must have valid Git credentials and a repo within this Git account before you enable Harness Git Experience.
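To illustrate the Connector requirements above, here is a hedged sketch of a GitHub Connector in Harness YAML. All names, the repo URL, and the secret identifier are illustrative assumptions, not values from this document, and the exact schema may vary by Harness version:

```yaml
connector:
  name: docs-github-connector            # illustrative name
  identifier: docs_github_connector
  type: Github
  spec:
    url: https://github.com/my-org/my-repo   # assumed repo URL
    type: Repo
    authentication:
      type: Http
      spec:
        type: UsernameToken
        spec:
          username: my-git-user
          tokenRef: github_pat           # Harness Secret holding the PAT
    # API access is what Git Experience needs for operations like webhooks
    apiAccess:
      type: Token
      spec:
        tokenRef: github_pat
```

Note that the same token secret is reused for both authentication and API access, which matches the Username and Token requirement described above.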
+ +### Next steps + +* [Harness Git Experience Quickstart](configure-git-experience-for-harness-entities.md) +* [Manage Input Sets and Triggers in Git Experience](manage-input-sets-in-simplified-git-experience.md) +* [Manage a Harness Pipeline Repo using Git Experience](manage-a-harness-pipeline-repo-using-git-experience.md) + diff --git a/docs/platform/10_Git-Experience/harness-git-experience-overview.md b/docs/platform/10_Git-Experience/harness-git-experience-overview.md new file mode 100644 index 00000000000..c9763ded1d3 --- /dev/null +++ b/docs/platform/10_Git-Experience/harness-git-experience-overview.md @@ -0,0 +1,209 @@ +--- +title: Harness Git Experience Overview (Deprecated) +description: A summary of Harness Git Experience. +# sidebar_position: 2 +helpdocs_topic_id: utikdyxgfz +helpdocs_category_id: sy6sod35zi +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This version of Harness Git Experience has been deprecated. To try out the latest version of Git Experience, see [Harness Git Experience Quickstart](configure-git-experience-for-harness-entities.md). + +Harness Git Sync provides seamless integration between your Harness Projects, Pipelines, and resources and your Git repos. You can work entirely from Git or use a hybrid method of Git and the Harness Manager. With Harness Git Sync, you can synchronize your configurations with Git and keep them up to date. + +Git Sync enables you to: + +* Store and retrieve Harness configurations to/from Git. +* Change Harness configuration just by changing the YAML files in Git. + +This topic introduces you to Harness Git Sync. + +### Before you begin + +* In the Git Connectors, all Projects that connect to a given Git repo must use the same Connection Type: SSH or HTTP. For more information, see [Connection Type](../7_Connectors/ref-source-repo-provider/git-connector-settings-reference.md#connection-type). +Let us take an example where Project1 and Project2 connect to repo Repo1.
In this case, the Git connectors for both projects must use the same Connection Type. +* Do not change any Types or Identifiers (Account Id, Org Id, Project Id, Connectors, etc.) for any Harness entities. IDs are immutable, and if they are out of sync, Harness Git Experience will not work. +* Do not delete the branch that you used to enable Git sync. Deleting this branch will cause inconsistencies in the Harness Git Experience. + +### Limitations + +* Harness Git Experience is not enabled for Pipeline Triggers at this time. You can still use Pipeline Triggers manually or via Webhooks. +* You cannot clone Pipelines in your Projects through Harness Git Experience. + +#### Supported Git Providers + +The following section lists the support for Git providers for Harness Git Sync: + +| **Git Provider** | **Status** | **Feature Flag** | +| --- | --- | --- | +| GitHub | GA | None | +| Bitbucket Cloud | Beta | None | +| Bitbucket Self-Hosted | Not supported | | + +#### Supported Harness Entities + +The following section lists the Harness Entities that Harness Git Sync supports: + +| **Harness Entities** | **Scope** | +| --- | --- | +| Pipelines | Project | +| Connectors | Project | +| Input Sets | Project | +| Templates | Project | +| Feature Flags | Project | + +### What is Harness Git Experience? + +Harness Git Experience is a Git-first approach for managing Harness configurations as code, using Git as the source of truth. This means whatever you see in Git is the true state, and you do not need to validate it in Harness. Any change you make to a Git Synced Project is pushed to Git first and then applied in Harness. It provides seamless integration between Harness and your Git repos. + +This approach provides the following major benefits: + +* Automation and Consistency +* Version Control +* Scalability +* Traceability + +Harness Git Experience allows you to configure Projects, Pipelines, and resources in Harness using YAML.
You can perform nearly everything you can do in the Harness user interface with YAML. + +### How Does Harness Git Experience Work? + +Harness Git Experience ensures a bi-directional, full sync between your Harness entities and your Git repos. This means any changes you make in Harness will be pushed to Git, and any changes you make to your synced entities in Git will be reflected in Harness. + +Whenever you make any changes in the configuration, Harness Git Experience takes this change and pushes it to Git. + +![](./static/harness-git-experience-overview-04.png) + +Whenever you commit a change to Git, Harness receives the webhook request. This request contains details about the changes that must be applied to the Harness entities. + +![](./static/harness-git-experience-overview-05.png) + +### Key Features + +##### Bi-directional Sync + +Any changes you make in Harness will be pushed to Git, and any changes you make to your synced entities in Git will be reflected in Harness. This means you can seamlessly enable Git Sync for your existing Projects that already have Connectors and Pipelines added to them. + +##### Multiple Repo Support + +Your Harness Pipelines and their resources (Connectors, etc.) can exist in multiple repos and folders. You can choose the repository where you wish to make the modifications before pushing each configuration. At Pipeline execution, Harness pulls them all together to execute your Pipeline as you designed it. This gives you the flexibility to manage your Git repositories in the way you want. + +You can store your configurations in the following ways: + +* Store configuration files along with the code repository. +* Store configuration files in a repository separate from the code. +* Store the prod configurations in one repo, and the non-prod ones in another repo, so that only the selected developers can access prod configs. +* Store the configuration files of different environments in different branches.
+* Store the Pipelines in one repository, and other configuration files in another. + +##### Multiple Branch Support + + Multiple users can make commits to multiple branches in your Harness Projects that are synced with the Git Provider. + +##### Pipeline Changes Pushed using your SCM Credentials + +Harness Git Experience uses your SCM credentials instead of a shared Harness account. This improves auditing and helps you see who made commits from within Harness. Using your SCM credentials also allows you to leverage Harness RBAC. + +##### Git-like User Interface from within Harness Manager + +You can test Pipeline changes on a separate branch and then merge them to the main branch. Harness Git Experience provides a full dev experience with repo, branch, and PR support. + +##### Webhooks for your Synced Repos are Added Automatically + +Webhooks are registered automatically for your Project when you enable Git Experience. These will be used to trigger a Git sync in your repos whenever you make any commit in Git. This keeps your repos and Harness in sync in real-time. + +### What Can I Do with Harness Git Experience? + +Harness Git Experience helps you do the following: + +* Store and retrieve your Harness configurations to/from Git. +* Change the Harness configuration just by changing the YAML files in Git. +* Add a Pipeline in Git first and it gets added to your synced Project automatically. +* Maintain configurations like you maintain code. +* Maintain Harness CI/CD/etc Pipelines alongside the code in your repos. +* Submit config changes using the Harness Pipeline Studio (GUI and/or YAML) or entirely in Git. +* Make Harness Pipeline or resource config changes in a branch, test it in isolation (sandbox), and submit changes to master using Harness Manager or your Git platform. +* Make code and Harness CI/CD/etc Pipeline changes within a single PR. +* Control who can make changes using Prod and Non-Prod configurations in separate repos. 
+* Audit which Git user accounts made config changes. + +### What Do I Need to Enable Harness Git Experience? + +#### SCM Profile + +A **Harness SCM** is required to sync entities from Harness to Git. If you try to enable Harness Git Experience without first setting up an SCM, Harness will warn you and require you to set one up. + +The Harness SCM stores your user profile information, like username and PAT, to connect to the Git provider. When you sync a Project with a repo, these credentials are used for the sync. + +For detailed steps to add an SCM, see [Add Source Code Managers](https://docs.harness.io/article/p92awqts2x-add-source-code-managers). + +Any commit activities you perform when you make changes to the Project in Harness require your SCM credentials. Additional tasks, such as registering a webhook, require your Git Connector credentials. + +#### Git Connector + +A Harness Git Connector is used to sync your Harness Project with your Git repo. You can set up a [Git Connector](https://docs.harness.io/category/code-repo-connectors) first and simply select it when setting up Git Experience, or you can create a Git Connector as part of the Git Experience setup. + +You will need a Harness Git Connector to connect with the Git provider and perform operations like generating a webhook. Your Git Connector credentials are used to commit to Git when operations are performed using the API. + +**Important:** The Connector must use the **Enable API access** option and Username and **Token** authentication. Harness requires the token to access the Git API. Generate the token in your account on the Git provider and add it to Harness as a Secret. Next, use the token in the credentials for the Git Connector. + +For detailed steps to add a Git Connector, see [Code Repo Connectors](https://docs.harness.io/category/code-repo-connectors). + +#### Repository + +Harness configurations are stored in repositories.
These configuration files can be kept in the same repository as the code, or they can be kept separate. You can map your Project to multiple repositories. + +You must have valid Git credentials and a repo within this Git account before you enable Harness Git Experience. + +#### Default Branch + +Harness Git Experience enables you to have multiple branches to support various branching workflows. When you create/modify entities from Harness, you can commit in one of the following ways: + +* Commit to an existing branch without a PR +* Commit to an existing branch with a PR +* Commit to a new branch without a PR +* Commit to a new branch with a PR + +There is a [default branch](https://git-scm.com/book/en/v2/Git-Branching-Branches-in-a-Nutshell#_git_branches_overview) for every repo. If an entity exists in multiple branches of a given repo, then the entity from the default branch is used. + +#### Harness Folder + +Your Harness Projects and resources are stored in Git in the **Harness Folder**. + +When you set up Git Experience and sync Harness with a Git repo, you specify the repos and branches to use for your Harness Pipelines and resources. Harness then adds a .harness folder to these locations. This is the Harness Folder. + +Harness scans the **.harness** folder recursively to find all the config files in it. All of the Harness entities are synced to the .harness folder in your repository. + +You can store your YAML files in any subfolder within the Harness Folder. Harness does not enforce a specific format or structure within the .harness folder. You can have multiple Harness Folders to store configs. + +For example, if you have two Connectors in a single Project, you can have one Harness folder for each Connector or a common Harness Folder for the entire Project. + +Harness Folders can reside anywhere in Git repos and their subdirectories. + +Create the folder in your repo before setting up Git Experience. You will enter the name of the folder in Harness.
Harness does not create the folder for you. + +### How is Harness NextGen Git Support Different from FirstGen? + +Not sure if you are using FirstGen or NextGen? See [Harness FirstGen vs Harness NextGen](https://docs.harness.io/article/1fjmm4by22). + +Harness NextGen Git Experience functionality is different from Harness FirstGen Git Sync. If you used FirstGen Git Sync, you will find the Harness NextGen Git Experience setup and usage different and improved. + +If you're a Harness FirstGen user, you're likely familiar with Harness Git Sync. The following table shows the differences between Harness FirstGen Git Sync and NextGen Harness Git Experience: + +| **Feature** | **FirstGen** | **NextGen** | +| --- | --- | --- | +| Bi-directional Git sync | ✓ | ✓ | +| Git branching | X | ✓ | +| Primary data source | Mongo | Git | +| Optional Git sync | ✓ | ✓ | +| Multi-repo support | X | ✓ | +| Flexible folders | X | ✓ | +| User credentials support | X | ✓ | +| Multi-branch support | X | ✓ | +| Automatic Webhook integration | X | ✓ | + +### Blog Post + +The following blog post walks you through Harness Git Experience: + +[The Git Sync Experience In Harness](https://harness.io/blog/continuous-delivery/git-sync-experience/) + +### Next steps + +* [Harness Git Experience Quickstart](harness-git-experience-quickstart.md) +* [Git Experience How-tos](https://docs.harness.io/article/soavr3jh0i-git-experience-how-tos) +* [Diagnose and Fix Git Sync Errors](https://ngdocs.harness.io/article/24ehx5oa94-git-sync-errors) + diff --git a/docs/platform/10_Git-Experience/harness-git-experience-quickstart.md b/docs/platform/10_Git-Experience/harness-git-experience-quickstart.md new file mode 100644 index 00000000000..7133537009a --- /dev/null +++ b/docs/platform/10_Git-Experience/harness-git-experience-quickstart.md @@ -0,0 +1,305 @@ +--- +title: Harness Git Experience Quickstart (Deprecated) +description: This quickstart shows you how to enable and use Harness Git Experience.
+# sidebar_position: 2 +helpdocs_topic_id: dm69dkv34g +helpdocs_category_id: w6r9f17pk3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This version of Harness Git Experience has been deprecated. To try out the latest version of Git Experience, see [Harness Git Experience Quickstart](configure-git-experience-for-harness-entities.md). + +This quickstart shows you how to enable and use Harness Git Experience. + +Harness Git Experience integrates your Harness Projects, Pipelines, and resources with your Git repos. You can manage and run your Harness Pipelines and resources entirely from Git or use a hybrid method. With Harness Git Experience, your Git repos are always the single source of truth. + +See also: [Git Experience How-tos](https://docs.harness.io/article/soavr3jh0i-git-experience-how-tos), [Harness Git Experience Overview](harness-git-experience-overview.md). + +### Objectives + +You'll learn how to: + +1. Connect your SCM to Harness. +2. Enable Harness Git Experience in a new Project. +3. Create and sync a new Pipeline with your Git repo. + +### Before you begin + +You'll need a Git repo with at least one branch and a Personal Access Token (PAT) for your account. Harness needs the PAT to use the Git platform APIs. The PAT is encrypted using a [Harness Secret Manager](../6_Security/1-harness-secret-manager-overview.md). Your Git Personal Access Token is stored in your Harness secret and is a private key to which only you have access. This secret cannot be accessed or referenced by any other user. + +Make sure your repo has at least one branch, such as main or master. For most Git providers, you simply add a README file to the repo and the branch is created. + +### Step 1: Add a Source Code Manager + +A Harness Source Code Manager (SCM) contains your personal account for a Git provider such as GitHub or AWS CodeCommit. You can add one SCM to your account for each provider. + +In Harness, click your account profile at the bottom of the navigation.
+ +![](./static/harness-git-experience-quickstart-51.png) +In **My Source Code Managers**, click **Add Source Code Manager**. + +In **Add a Source Code Manager**, enter a name for the SCM. + +Select the SCM type, such as GitHub. + +Enter the authentication credentials. + +We'll use GitHub in this example, but you can find the settings for all of the SCMs in [Source Code Manager Settings](../7_Connectors/ref-source-repo-provider/source-code-manager-settings.md). + +Here's a GitHub example: + +![](./static/harness-git-experience-quickstart-52.png) +Click **Add**. The new SCM is listed under **My Source Code Managers**. + +### Step 2: Enable Git Experience in a Project + +In the Git provider you want to use for syncing your Project, create one or more repos for the Project. + +In the repo, add a folder named **projects**. + +You can use multiple repos in the Harness Git Experience for a Project. For example, you could add Pipelines to one repo and Connectors to another. + +For this example, we'll use one repo and the folder named **projects**. + +Here's a new GitHub repo named **GitExpDocExample**. + +![](./static/harness-git-experience-quickstart-53.png) +You do not need all of the resources used by your Pipelines to be synced to your repo. For example, you could use account-level resources such as Delegate or Docker Registry Connectors. These will work fine. + +In Harness, create a new Project. See [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md). + +Here's a new Project named **GitExp Doc Example**. + +![](./static/harness-git-experience-quickstart-54.png) +When you're done, you'll have a new Project containing the modules according to your license. + +![](./static/harness-git-experience-quickstart-55.png) +In your **Project**, select a module such as CI or CD. + +Click **Project Setup**, and then click **Git Management**. + +**Enable Git Experience** appears.
+ +![](./static/harness-git-experience-quickstart-56.png) +Click **Enable Git Experience**. + +The **Configure Harness Folder** settings appear. + +![](./static/harness-git-experience-quickstart-57.png) +In **Repository name**, enter a name for the repo. It doesn't have to be the same as the Git repo name. The name you enter here will appear in Harness only. It'll identify the Project repo. + +For example, here's the **Repository name** `GitExpDocExample` after Harness Git Experience is enabled: + +![](./static/harness-git-experience-quickstart-58.png) +In **Select Connector**, select or create a Git Connector to the repo for your Project. For steps, see [Code Repo Connectors](https://docs.harness.io/category/code-repo-connectors). + +**Important:** The Connector must use the **Enable API access** option and Username and **Token** authentication. Harness needs the PAT to access the Git platform APIs. Generate the token in your account on the Git provider and add it to Harness as a Secret. Next, use the token in the credentials for the Git Connector. For details on source code manager settings, see [Source Code Manager Settings](../7_Connectors/ref-source-repo-provider/source-code-manager-settings.md). + +![](./static/harness-git-experience-quickstart-59.png) +For GitHub, the token must have the following scopes: + +![](./static/harness-git-experience-quickstart-60.png) +For other Git providers, see [Code Repo Connectors](https://docs.harness.io/category/code-repo-connectors). + +Here's an example of a GitHub Connector that has the correct settings: + +![](./static/harness-git-experience-quickstart-61.png) +Once you add a Connector, in **Repository URL**, you should see the repo URL. + +Click **Test Connection**. Once Harness verifies the connection, you will see **Connection Successful**.
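As a rough illustration of the token setup described above, the PAT can be stored as a Harness text Secret. The identifiers below are hypothetical, and the exact YAML schema may differ by Harness version:

```yaml
secret:
  name: github-pat                # illustrative name
  identifier: github_pat          # referenced from the Git Connector as tokenRef
  type: SecretText
  spec:
    secretManagerIdentifier: harnessSecretManager   # assumed built-in secret manager
    valueType: Inline                               # token value is entered in the Harness UI
```

The Git Connector's Username and Token credentials then reference this secret's identifier.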
+ +In **Select Harness Folder**, enter the name of a folder at the repo root or the name of a subfolder in the repo. + +The folder must already exist in the repo. You will also need to add a readme file to that folder, as Git providers typically don't let you create empty folders. + +When you complete Harness Git Experience setup, Harness will create a special folder inside the folder you entered. The special folder is called the Harness Folder, and all files are stored there. + +The Harness Folder is named **.harness**. This allows you to identify Harness Project YAML files in your repos. + +Later, when you add Pipelines and resources to this Project, you can specify their default folders. + +When a Pipeline in one repo needs to access a Connector/Secret/etc in another repo at runtime, the Pipeline will always use the Connector/Secret/etc in their default branch and folder. + +**Root or Subfolder?** You can specify the repo root or a subfolder in **Select Harness Folder**. + +**Using a subfolder?** If you want to use a subfolder, create the subfolder in your repo before entering it in **Select Harness Folder**. Harness does not create the folder for you. + +You will also need to add a readme file to that subfolder, as Git providers typically don't let you create empty folders. + +Once you have the subfolder, you will enter the name of the subfolder in **Select Harness Folder**. + +For example, here is the subfolder **subfolder\_example** in a repo and how it is added to the **Select Harness Folder** setting: + +![](./static/harness-git-experience-quickstart-62.png) +In **Select Default Branch**, select the branch that you want to use, such as **main**. + +Here's an example of the Configure Harness Folder settings for a repo and root folder: + +![](./static/harness-git-experience-quickstart-63.png) +When you're ready, click **Save**.
+ +In **Select Connectivity Mode**, you have two options: + +* **Connect Through Manager:** Harness SaaS will connect to your Git repo whenever you make a change and Git and Harness sync. +* **Connect Through Delegate:** Harness will make all connections using the Harness Delegate. This option is frequently used for [Harness On-Prem](https://docs.harness.io/article/tb4e039h8x-harness-on-premise-overview), but it is also used for Harness SaaS. + +**Secrets:** If you select **Connect Through Manager**, the Harness Manager decrypts the secrets you have set up in the Harness Secrets Manager. This is different from **Connect Through Delegate**, where only the Harness Delegate, which sits in your private network, has access to your key management system. See Harness Secrets Manager Overview. + +For this quickstart, select **Connect Through Manager**, and then click **Save and Continue**. + +Harness Git Experience is enabled and the new repo and folder are listed: + +![](./static/harness-git-experience-quickstart-64.png) + +### Step 3: Review the Harness Git Experience in your Project + +Harness does not automatically add a folder to your repo until you create a Pipeline or resource, like a Connector, in your Project. + +You can see the repo setting in your Project before creating Pipelines and resources. + +In your Project, click one of your modules. In this example, we'll use **Builds**. + +Click **Pipelines**. + +At the top of the page, you can see **All Repositories**. + +![](./static/harness-git-experience-quickstart-65.png) +Click **All Repositories** and select the name of the repo you entered in **Repository name** earlier. + +![](./static/harness-git-experience-quickstart-66.png) +You can now select any branch from the repo. + +![](./static/harness-git-experience-quickstart-67.png) +Harness Git Experience is enabled! + +### Step 4: Add a Pipeline + +Now you can create Pipelines and resources and store their YAML files in your Git repo's branches and folders.
+ +Git is the single source of truth. The Pipelines and resources are stored in the repo first and then synced with Harness. + +In your Harness Project, click **Builds**. If you don't have the **Builds** module, use another module. + +In **Builds**, click **Pipelines**. + +At the top of the page is **All Repositories**. + +![](./static/harness-git-experience-quickstart-68.png) +You select the repo and branch here to display the Pipelines stored in them. It does not affect the repo and branch where you create a new Pipeline. You will select that repo and branch in the **Create New Pipeline** settings next. + +Click **+Pipelines** to create a new Pipeline. The **Create New Pipeline** settings appear. + +Give the Pipeline a name such as **Example**. + +In **Git Repository Details**, select the repo and branch where you want to store the Pipeline YAML file. You will select a folder in that repo and branch later. + +Click **Start**. + +We're simply demonstrating Harness Git Experience, so we'll create a very simple Pipeline. + +Click **Add Stage** and then click **Build**. + +In **About Your Stage**, enter the name **helloworld**. + +Enable **Clone Codebase**. + +In **Connector**, select or create a Git Connector to the repo for your Project. For steps, see [Code Repo Connectors](https://docs.harness.io/category/code-repo-connectors). + +Click **Set Up Stage**. + +Next, you can just paste the following YAML into the Pipeline to create a very simple Pipeline. + +Click **YAML** and then paste in the following YAML. 
+ + +``` +pipeline: + name: Example + identifier: Example + allowStageExecutions: false + projectIdentifier: GitExp_Doc_Example + orgIdentifier: default + tags: {} + properties: + ci: + codebase: + connectorRef: DocRepo + build: <+input> + stages: + - stage: + name: helloworld + identifier: helloworld + description: "" + type: CI + spec: + cloneCodebase: true + infrastructure: + type: KubernetesDirect + spec: + connectorRef: examplek8 + namespace: example-delegate-new + execution: + steps: + - step: + type: Run + name: example + identifier: example + spec: + connectorRef: exampledocker + image: sample + command: echo test +``` +Replace `projectIdentifier: GitExp_Doc_Example` with the identifier of your Project. + +You can see the Project ID right after `projects` in the URL of the page: + +`https://app.harness.io/.../projects/GitExp_Doc_Example/...` + +Click **Save**. The **Save Pipeline to Git** settings appear. + +![](./static/harness-git-experience-quickstart-69.png) + +The Pipeline is ready. Now we can save it to Git. + +### Step 5: Save the Pipeline to Git + +In the **Save Pipeline to Git** settings, in **Harness Folder**, select one of the folders set up in the Project's Git Experience settings. + +The YAML file for the Pipeline will be saved to this folder, but you can add subfolders in **File Path**. + +In **File Path**, enter a name for the YAML file, such as `Example.yaml`. Harness will generate one automatically from the Pipeline name, but you can add your own. + +To enter a subfolder of the Harness Folder you selected, enter the folder name in front of the file name, like `mybuilds/Example.yaml`. + +In this example, we use `mybuilds/Example.yaml`. + +In **Commit Details**, enter a message. + +In **Select Branch to Commit**, commit to an existing or new branch. + +* **Existing branch:** You can start a pull request if you like. +* **New branch:** Enter the new branch name. You can start a pull request if you like.
+ +Here's a simple example: + +![](./static/harness-git-experience-quickstart-70.png) +Click **Save**. + +![](./static/harness-git-experience-quickstart-71.png) +The Pipeline is saved to the repo branch and folder. + +### Step 6: View the Pipeline in Git and Harness Git Experience + +In your Git repo, locate the branch, folder, and file. + +Harness created a **.harness** folder under the folder you selected in **Harness Folder.** + +If you added a folder to **File Path**, open that folder. + +Click the YAML file for your Pipeline. The YAML is displayed. + +![](./static/harness-git-experience-quickstart-72.png) +In your Harness Project, click **Project Setup**, and then click **Git Management**. + +In **Git Management**, click **Entities**. + +In **Entities by repositories**, expand the Project name. + +The Pipeline is listed along with its file path in the repo. + +![](./static/harness-git-experience-quickstart-73.png) +Now you have a Pipeline stored in Git. + +### Next steps + +Congratulations! You now have Harness Git Experience set up, synced with your Git repo, and storing a new Pipeline. + +Next, explore other Harness features: + +* [CI Pipeline Quickstart](../../continuous-integration/ci-quickstarts/ci-pipeline-quickstart.md) +* [Kubernetes CD Quickstart](https://docs.harness.io/article/knunou9j30-kubernetes-cd-quickstart) +* [Git Experience How-tos](https://docs.harness.io/article/soavr3jh0i-git-experience-how-tos) + diff --git a/docs/platform/10_Git-Experience/import-a-pipeline.md b/docs/platform/10_Git-Experience/import-a-pipeline.md new file mode 100644 index 00000000000..123f0ff4ddf --- /dev/null +++ b/docs/platform/10_Git-Experience/import-a-pipeline.md @@ -0,0 +1,70 @@ +--- +title: Import a Pipeline From Git +description: Topic describing how to import a Pipeline from Git to Harness. 
+# sidebar_position: 2 +helpdocs_topic_id: q1nnyk7h4v +helpdocs_category_id: rv2jo2hoiy +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness lets you create a Pipeline in the following ways: + +* Create an inline Pipeline and save its configuration in Harness. +* Create a remote Pipeline and save its configuration in Git. +* Import a Pipeline from Git and save its configuration in Git. + +This topic explains how to import a Pipeline from your Git repo to Harness. + +### Before you begin + +* [Harness Git Experience Overview](git-experience-overview.md) +* [Harness Git Experience Quickstart](configure-git-experience-for-harness-entities.md) +* [Manage a Harness Pipeline Repo Using Git Experience](manage-a-harness-pipeline-repo-using-git-experience.md) + +### Permissions + +* Make sure you have **Create/Edit** permissions for Pipelines. + +### Step: Import pipeline + +You can import a Pipeline from the CI or CD module in Harness. + +This topic shows you how to import a Pipeline to the CD module. + +1. In Harness, click **Deployments**. +2. Select your Project and click on **Pipelines**. +3. Select **Import From Git**. + + ![](./static/import-a-pipeline-29.png) + + The **Import Pipeline From Git** settings appear. + + ![](./static/import-a-pipeline-30.png) + +4. Enter a **Name** for your Pipeline. +5. In **Git Connector**, select or create a Git Connector to connect to your Git repo. For steps, see [Code Repo Connectors](https://docs.harness.io/category/code-repo-connectors). **Important**: The Connector must use the **Enable API access** option and Username and **Token** authentication. Harness requires the token for API access. Generate the token in your account on the Git provider and add it to Harness as a Secret.
Next, use the token in the credentials for the Git Connector.
+
+![](./static/import-a-pipeline-31.png)
+
+For GitHub, the token must have the following scopes:
+
+![](./static/import-a-pipeline-32.png)
+
+Here's an example of a GitHub Connector that has the correct settings:
+
+![](./static/import-a-pipeline-33.png)
+
+6. In **Repository**, select the repository from where you want to import the Pipeline. If your repository isn't listed, enter its name; Harness pre-populates the list with only a few repositories. Create the repository in Git before entering it in **Select Repository**. Harness does not create the repository for you.
+7. In **Git Branch**, select the branch from where you want to import the Pipeline. If your branch isn't listed, enter its name; Harness pre-populates the list with only a few branches. Create the branch in your repository before entering it in **Git Branch**. Harness does not create the branch for you.
+8. Enter the **YAML Path** from where you want to import the Pipeline. All your configurations are stored in Git in the [Harness Folder](harness-git-experience-overview.md#harness-folder). Make sure that your YAML path starts with `.harness/` and that the YAML file already exists in the specified Git repo and branch.
+9. Click **Import**.
+Click on your Pipeline to proceed.
+By default, Harness fetches your Pipeline details from the default branch. If you imported the Pipeline from a different branch, you will see the following error.![](./static/import-a-pipeline-34.png)
+Select the branch from where you imported the Pipeline and continue. 
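+
+For reference, here is a sketch of what a Pipeline YAML file stored at a path like `.harness/my_pipeline.yaml` might look like. The names and identifiers below are illustrative only, and the exact schema depends on your stage types:
+
+```yaml
+pipeline:
+  name: My Pipeline
+  identifier: My_Pipeline
+  projectIdentifier: my_project
+  orgIdentifier: default
+  stages:
+    - stage:
+        name: Build
+        identifier: Build
+        type: Custom
+        spec:
+          execution:
+            steps: []
+```
+
+The YAML file must exist at the path you enter before you click **Import**; Harness reads it from the selected repo and branch rather than creating it.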
+ +### See also + +* [Manage Input Sets and Triggers in Simplified Git Experience​](manage-input-sets-in-simplified-git-experience.md) +* [Manage a Harness Pipeline Repo Using Git Experience](manage-a-harness-pipeline-repo-using-git-experience.md) + diff --git a/docs/platform/10_Git-Experience/import-a-template-from-git.md b/docs/platform/10_Git-Experience/import-a-template-from-git.md new file mode 100644 index 00000000000..cb556038d09 --- /dev/null +++ b/docs/platform/10_Git-Experience/import-a-template-from-git.md @@ -0,0 +1,58 @@ +--- +title: Import a Template From Git +description: This topic describes how to import various Templates from Git. +# sidebar_position: 2 +helpdocs_topic_id: etz5whjn5x +helpdocs_category_id: rv2jo2hoiy +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness enables you to add Templates to create reusable logic and Harness entities (like Steps, Stages, and Pipelines) in your Pipelines. You can link these Templates in your Pipelines or share them with your teams for improved efficiency. + +Templates enhance developer productivity, reduce onboarding time, and enforce standardization across the teams that use Harness. + +You can create Templates in Harness in the following ways: + +* Create an inline Template and save its configuration in Harness. +* Create a remote Template and save its configuration in Git. +* Import a Template from Git and save its configuration in Git. + +This topic explains how to import a Template from your Git repo to Harness. + +### Before you begin + +* See [Harness Git Experience Overview](git-experience-overview.md) +* See [Harness Git Experience Quickstart​](configure-git-experience-for-harness-entities.md) +* See [Templates Overview](../13_Templates/template.md) + +### Permissions + +* To import a Template, make sure you have the **Create/Edit** permissions for Templates. + +### Step: Import a template + +You can import a Template in the Account, Org, or Project scope. 
+
+This topic explains how to import a Template in the Project scope.
+
+1. In your Harness Account, go to your Project.
+You can import a Template from the CI or CD module in Harness.
+This topic shows you how to import a Template to the CD module.
+2. Click **Deployments**.
+3. In **PROJECT SETUP**, click **Templates**.
+4. Click **New Template** and then click **Import From Git**.![](./static/import-a-template-from-git-23.png)
+The **Import Template From Git** settings appear.![](./static/import-a-template-from-git-24.png)
+5. Enter a **Name** for your Template.
+6. In **Version Label**, enter a version for the Template.
+7. In **Git Connector**, select or create a Git Connector to connect to your Git repo. For steps, see [Code Repo Connectors](https://harness.helpdocs.io/category/xyexvcc206-ref-source-repo-provider). **Important**: The Connector must use the Enable API access option and Username and Token authentication. Harness requires a token for API access. Generate the token in your account on the Git provider and add it to Harness as a Secret. Next, use the token in the credentials for the Git Connector.
+![](./static/import-a-template-from-git-25.png)
+For GitHub, the token must have the following scopes:
+![](./static/import-a-template-from-git-26.png)
+8. In **Repository**, select the repository from where you want to import the Template. If you don't see your repository in the list, enter its name; Harness pre-populates the list with only a few repositories. Create the repository in Git before entering it in **Select Repository**. Harness does not create the repository for you.
+9. In **Git Branch**, select the branch from where you want to import the Template. If you don't see your branch in the list, enter its name; Harness pre-populates the list with only a few branches. Create the branch in your repository before entering it in **Git Branch**. Harness does not create the branch for you.
+10. 
Enter the **YAML Path** from where you want to import the Template. All your configurations are stored in Git in the [Harness Folder](harness-git-experience-overview.md#harness-folder).![](./static/import-a-template-from-git-27.png)
+11. Click **Import**.
+Click on your Template to proceed.
+By default, Harness fetches your Template details from the default branch. If you imported the Template from a different branch, select the branch from where you imported the Template and continue.![](./static/import-a-template-from-git-28.png)
+
 diff --git a/docs/platform/10_Git-Experience/import-input-sets.md b/docs/platform/10_Git-Experience/import-input-sets.md
new file mode 100644
index 00000000000..686c03909b3
--- /dev/null
+++ b/docs/platform/10_Git-Experience/import-input-sets.md
@@ -0,0 +1,53 @@
+---
+title: Import an Input Set From Git
+description: This topic explains the steps to import an Input Set from Git.
+# sidebar_position: 2
+helpdocs_topic_id: j7kdfi3640
+helpdocs_category_id: rv2jo2hoiy
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness Input Sets are collections of runtime inputs for a Pipeline provided before execution.
+
+All Pipeline settings can be set as runtime inputs in the Pipeline Studio **Visual** and **YAML** editors.
+
+Before running a Pipeline, you can select one or more Input Sets and apply them to the Pipeline.
+
+You can either [create](../8_Pipelines/run-pipelines-using-input-sets-and-overlays.md#step-1-create-the-input-sets) a new Input Set or import one from your Git repo.
+
+This topic explains how to import an Input Set from your Git repo and apply it to your Pipeline. 
+
+### Before you begin
+
+* [Harness Git Experience Overview](git-experience-overview.md)
+* [Harness Git Experience Quickstart](configure-git-experience-for-harness-entities.md)
+* [Input Sets and Overlays](https://docs.harness.io/article/3fqwa8et3d-input-sets)
+* [Manage a Harness Pipeline Repo Using Git Experience](manage-a-harness-pipeline-repo-using-git-experience.md)
+* [Manage Input Sets and Triggers in Git Experience](manage-input-sets-in-simplified-git-experience.md)
+
+### Step: Import an input set
+
+You can import an Input Set from the CI or CD module in Harness.
+
+This topic shows you how to import an Input Set to the CD module.
+
+1. In Harness, click **Deployments**.
+2. Select your Project, click **Pipelines**, and then click **Input Sets**.
+3. Click **New Input Set** and select **Import From Git**.
+
+   ![](./static/import-input-sets-00.png)
+
+   The **Import Input Set From Git** settings appear.
+
+   ![](./static/import-input-sets-01.png)
+
+4. Enter a **Name** for your Input Set.
+5. Harness fetches the following details and auto-fills them:
+	1. **Git Connector**
+	2. **Repository**
+	3. **Git Branch**
+6. Enter the **YAML Path** from where you want to import the Input Set. All your configurations are stored in Git in the [Harness Folder](harness-git-experience-overview.md#harness-folder). Make sure that your YAML path starts with `.harness/` and that the YAML file already exists in the specified Git repo and branch.
+7. Click **Import**.
+Click **Run Pipeline** to proceed. 
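+
+For reference, here is a sketch of what an imported Input Set YAML file at a path like `.harness/my_input_set.yaml` might look like. The names, identifiers, and variables below are illustrative assumptions, not a definitive schema; the `pipeline.identifier` element ties the Input Set to a specific Pipeline, and the runtime inputs it supplies depend on what your Pipeline marks as runtime input:
+
+```yaml
+inputSet:
+  name: My Input Set
+  identifier: My_Input_Set
+  orgIdentifier: default
+  projectIdentifier: my_project
+  pipeline:
+    identifier: My_Pipeline
+    variables:
+      - name: environment
+        type: String
+        value: staging
+```
+
+Because Harness auto-fills the Git Connector, Repository, and Git Branch, only the **YAML Path** needs to point at a file like this for the import to succeed.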
+ diff --git a/docs/platform/10_Git-Experience/manage-a-harness-pipeline-repo-using-git-experience.md b/docs/platform/10_Git-Experience/manage-a-harness-pipeline-repo-using-git-experience.md new file mode 100644 index 00000000000..fbb159b5ba9 --- /dev/null +++ b/docs/platform/10_Git-Experience/manage-a-harness-pipeline-repo-using-git-experience.md @@ -0,0 +1,131 @@ +--- +title: Manage a Harness Pipeline Repo Using Git Experience +description: Git Experience enables you to store and manage your Harness Pipelines and configs in your Git repos. +# sidebar_position: 2 +helpdocs_topic_id: 5nz7j3e1yc +helpdocs_category_id: rv2jo2hoiy +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Git Experience enables you to store and manage your Harness Pipelines and Input Sets as YAML definition files in your Git repos. You can store your Harness definitions in the same repo with your code. You can also store your Harness definitions in a separate repo from your codebase. + +This topic describes the second workflow. We start with two code repos in Git, for a front-end service and a back-end service. Then we create Pipelines, Input Sets, and Triggers for the two codebases in a separate Harness repo. + +![](./static/manage-a-harness-pipeline-repo-using-git-experience-14.png) + +### Before you begin + +This topic assumes that you are familiar with the following: + +* How to create a Pipeline using Git Experience. See [Harness Git Experience QuickStart](configure-git-experience-for-harness-entities.md). +* How to create Input Sets and Triggers using Git Experience. See [Manage Input Sets and Triggers in Git Experience](manage-input-sets-in-simplified-git-experience.md). 
+* A basic understanding of how Pipelines, Input Sets, and Triggers work together:
+	+ [Run Pipelines using Input Sets and Overlays](../8_Pipelines/run-pipelines-using-input-sets-and-overlays.md)
+	+ [Trigger Pipelines using Git Event Payloads](../11_Triggers/trigger-pipelines-using-custom-payload-conditions.md)
+
+This topic also assumes you have a Git repo with the codebase you want to build and at least one branch.
+
+### Step 1: Create the Harness repo
+
+Log in to your Git provider and create a new repo. In this workflow we call it **myHarnessConfigs**.
+
+### Step 2: Create a new pipeline
+
+In the Harness Pipeline Studio, go to your CI project and then click **Pipelines** > **+New Pipeline**. The **Create New Pipeline** window appears.
+
+Enter a name that corresponds to the code repo for the Pipeline. In this case we use the same name as the repo: **myFrontEndService**. Under **How do you want to set up your pipeline**, select **Remote**.
+
+Select the Git Connector and the Git repo where you want to save the Pipeline. In this case, we select **myHarnessConfigs**.
+
+For the YAML path, enter **.harness/{PIPELINE\_SUBFOLDER}/{PIPELINE\_NAME}.yml**. The root folder **.harness** is required. The **{PIPELINE\_SUBFOLDER}** is not required, but it is good practice if you want to store multiple Pipelines in the same repo. This makes it much easier to manage all of your Harness definitions.
+
+In this case, we save the Pipeline YAML as `.harness/myFrontEndService/myFrontEndService.yaml`.
+
+![](./static/manage-a-harness-pipeline-repo-using-git-experience-15.png)
+
+Click **Start**. You can now set up your Pipeline.
+
+### Step 3: Set up your build stage and codebase
+
+In the Pipeline Studio, click **Add Stage** and select **Build** for the stage type.
+
+In About your Stage, select the Git repo with the codebase that you want the Pipeline to build. Click **Set Up Stage**. 
+ +![](./static/manage-a-harness-pipeline-repo-using-git-experience-16.png) + +Set up your Build Stage in the Pipeline Studio: define the build infrastructure and add at least one Step. Click **Save**. The **Save Pipelines to Git** window appears. + +Select the branch where you want to save the Pipeline and click **Save**. You generally want to save it to the default branch on the first save. You can then create different branches in the Harness repo if you want to create different versions of your Pipeline. + +![](./static/manage-a-harness-pipeline-repo-using-git-experience-17.png) + +### Step 4: Create an input set + +With Git Experience enabled, any Input Sets you create get stored in the same repo and branch as the Pipeline definition. In this step, you will create a simple Input Set and save it with your Pipeline. + +Click **Run**. The **Run Pipeline** screen appears. + +Under **Build Type**, select **Git Branch**. + +For the **Branch Name**, select **Expression** and enter `<+trigger.targetBranch>` as a runtime expression. + +Click **Save as Input Set**. The Save InputSets to Git screen appears. + +Select **Commit to an existing branch**. + +Enter the name and YAML path for the Input Set. For the YAML path, use the same format as you did with the Pipeline: `.harness` root folder, Pipeline subfolder, filename. In this example, we enter `.harness/myFrontEndService/trigger-target-branch.yaml`. + +Click **Save** and save the Input Set into the default branch in your Harness repo. + +Every Input Set is associated with a specific Pipeline, which is specified by the `pipeline : identifier` element in the Input Set YAML. + +![](./static/manage-a-harness-pipeline-repo-using-git-experience-18.png) + +### Step 5: Create a trigger + +Now that you have a Pipeline and Input Set in your default branch, you create a Trigger that uses the Input Set you just created. 
+**Create Input Set before Creating Trigger:** If you want to use an Input Set as part of a Trigger, create and sync the Input Set before creating the Trigger.
+
+In the Pipeline Studio, click **Triggers** and create a new Trigger. Note the following:
+
+* In the **Configuration** tab > **Repository Name** field, make sure you specify the codebase repo and not the Harness repo.
+* In the **Pipeline Input Repo** > **Pipeline Input**, select the Input Set you just created.
+* In the **Pipeline Input Repo** > **Pipeline Reference Branch** field, specify the default branch in the Harness repo where you initially saved the Pipeline. When the Trigger receives a payload, it looks in the repo where you store your Harness definitions. Then it uses the Pipeline in the branch specified by this field. The default setting for the Pipeline Reference Branch field is `<+trigger.branch>`. This is a reasonable default when the Trigger is webhook-based AND your code and Harness configs are in the same repo. The second condition does not apply in this case. Therefore, you must set this field manually.
+* For information on other fields, see [Trigger Pipelines using Git Event Payload Conditions](../11_Triggers/trigger-pipelines-using-custom-payload-conditions.md).
+
+In the Pipeline Input tab, select the Input Set you just created and click **Create Trigger**.
+
+You now have a Pipeline, Input Set, and Trigger that you can use in new branches that you create from the default branch.
+
+Unlike Pipelines and Input Sets, Trigger definitions are saved in the Harness database and not in your Git repo. Each Trigger is associated with a specific Pipeline, which is specified by the `pipelineIdentifier` element in the Trigger YAML.![](./static/manage-a-harness-pipeline-repo-using-git-experience-19.png)
+
+### Next steps
+
+You now have a Pipeline, Input Set, and Trigger for your codebase. The Pipeline and Input Set are in one repo, `myHarnessConfigs`. The code is in another repo, `myFrontEndService`. 
Note that both repos have the same default branch, `main`. When your Trigger receives a matching payload, it starts a build using the Pipeline in `myHarnessConfigs`. The Trigger reads the `pipelineBranchName` element in its YAML definition and uses the Pipeline in that branch (`main`) to run the build.
+
+#### Set up more pipelines
+
+Follow the previous workflow for each additional codebase you want to build in Harness. When saving multiple Pipelines in the same repo, remember to save your Pipelines and Input Sets in separate subfolders. In this example, we've added a Pipeline and Input Set for our `myBackEndService` codebase:
+
+
+```
+% pwd
+~/myHarnessConfigs/.harness
+% ls -aR
+. .. myBackEndService myFrontEndService
+./myBackEndService:
+. .. myBackEndService.yaml trigger-target-branch.yaml
+./myFrontEndService:
+. .. myFrontEndService.yaml trigger-target-branch.yaml
+```
+#### Create branch-specific pipelines
+
+You might find that you can use the default Pipeline, Input Set, and Trigger you just created for most of your builds, regardless of which codebase branch gets updated. The codebase sends a payload; the payload includes the updated branch; the Trigger builds from this branch using the runtime expression `<+trigger.targetBranch>`.
+
+Git Experience enables you to create branches in your Harness repo so you can create different versions of the same Pipeline for different use cases. For example, suppose you want your Pipeline to push to different registries depending on the updated branch. Updates to `main` push to a public registry; updates to all other branches push to a private registry. To implement this, do the following:
+
+1. Customize your default Pipeline and click **Save**.
+2. Select **Commit to a new branch** and enter the branch name. In this case, we save the Pipeline in a new branch `push-to-private`:![](./static/manage-a-harness-pipeline-repo-using-git-experience-20.png)
+3. 
Customize the Input Sets and Triggers for the new Pipeline as needed. For this specific use case, you would add a condition to the Trigger so it uses the Pipeline in `push-to-private` when it receives a payload from any branch except `main`.![](./static/manage-a-harness-pipeline-repo-using-git-experience-21.png) +4. In the Trigger editor > Pipeline Input field, make sure that the Pipeline Reference Branch field references the new branch:![](./static/manage-a-harness-pipeline-repo-using-git-experience-22.png) + diff --git a/docs/platform/10_Git-Experience/manage-input-sets-in-simplified-git-experience.md b/docs/platform/10_Git-Experience/manage-input-sets-in-simplified-git-experience.md new file mode 100644 index 00000000000..c053301dd2e --- /dev/null +++ b/docs/platform/10_Git-Experience/manage-input-sets-in-simplified-git-experience.md @@ -0,0 +1,120 @@ +--- +title: Manage Input Sets and Triggers in Git Experience +description: Once you have saved your Pipeline in your repo, you can set up your Input Sets and Triggers. You can set up your Input Set definitions in your repo along with your Pipeline. You can then set up your… +# sidebar_position: 2 +helpdocs_topic_id: 8tdwp6ntwz +helpdocs_category_id: rv2jo2hoiy +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Once you have saved your Pipeline in your repo, you can set up your Input Sets and Triggers. You can set up your Input Set definitions in your repo along with your Pipeline. You can then set up your Triggers to use specific Input Sets in your repo. + +This topic covers a simple workflow for setting up your Input Sets and Triggers. It does not cover these topics in detail. 
For details on those, see: + +* [Run Pipelines using Input Sets and Overlays](../8_Pipelines/run-pipelines-using-input-sets-and-overlays.md) +* [Trigger Pipelines using Git Event Payloads](../11_Triggers/trigger-pipelines-using-custom-payload-conditions.md) + +### Before you begin + +You'll need the following: + +* A Git repo with at least one branch and a Personal Access Token (PAT) for your account. Harness needs the PAT to use the Git platform APIs. The PAT is encrypted using a Harness Secret Manager. Your Git Personal Access Token is stored in your Harness secret and is a private key to which only you have access. This secret cannot be accessed or referenced by any other user.Make sure your repo has at least one branch, such as main or master. For most Git providers, you simply add a README file to the repo, and the branch is created. +* A Harness Pipeline with Git Experience enabled. In this how-to, you will cross-check your updates in both your codebase and in the Harness UI. See [Harness Git Experience QuickStart](https://newdocs.helpdocs.io/article/grfeel98am/preview). + +### Initial setup + +#### Step 1: Select the branch + +When you edit your Pipeline in the Harness UI, you are editing a branched version of that Pipeline. Make sure that you are editing in the correct branch. You can switch branches using the branch picker in the top left. + +![](./static/manage-input-sets-in-simplified-git-experience-06.png) + +#### Step 2: Create an input set + +With Git Experience enabled, any Input Sets you create get stored in the same repo and branch as the Pipeline definition. In this step, you will create a simple Input Set and save it with your Pipeline. + +Click **Run**. The Run Pipeline screen appears. + +Under Build Type, select Git Branch. + +For the Branch Name, select **Expression** and enter `<+trigger.targetBranch>` as a runtime expression. + +![](./static/manage-input-sets-in-simplified-git-experience-07.png) + +Click **Save as Input Set**. 
In the popup that appears, enter the name of the Input Set. Note that the **Yaml Path** field auto-populates with the `.harness/` path and a filename based on the name you enter.
+
+![](./static/manage-input-sets-in-simplified-git-experience-08.png)
+
+Click **Save**. The Save Input Sets to Git screen appears.
+
+Select **Commit to an existing branch** and click **Save**. The Input Set is now saved with your Pipeline under `.harness` in your repo and branch.
+
+![](./static/manage-input-sets-in-simplified-git-experience-09.png)
+
+In the Run Pipeline screen, click **Cancel**.
+
+#### Step 3: Create a trigger for the pipeline
+
+Now that you have a Pipeline and Input Set in your default branch, create a Trigger that uses the Input Set you just created.
+
+In the Pipeline Studio, create a new Trigger as described in [Trigger Pipelines using Git Event Payload Conditions](../11_Triggers/trigger-pipelines-using-custom-payload-conditions.md).
+
+In the Pipeline Input tab, select the Input Set you just created and click **Create Trigger**.
+
+![](./static/manage-input-sets-in-simplified-git-experience-10.png)
+
+You now have a Pipeline, Input Set, and Trigger that you can use in new branches that you create from the default branch. When a webhook payload arrives, the Trigger selects the branch to use based on the Pipeline Reference Branch field (`<+trigger.branch>`) and the Git Branch field in the Input Set (`<+trigger.targetBranch>`).
+
+### Example workflow: Create a custom pipeline in a new branch
+
+Suppose you're a developer working on a new feature in your own branch. You want your Pipeline to run some additional tests on your code before it generates an artifact. In this example workflow, we customize the Pipeline and Input Set in a new branch. Then we create a Trigger specifically for that branch.
+
+#### Step 1: Check `.harness` is in the new branch
+
+This workflow assumes that your branch has a `.harness` subfolder with the same Pipeline and Input Set as `main`. 
+ +If you created the new branch from `main` *after* you did the initial setup described above, proceed to the next step. + +If you created the new branch from `main` *before* you did the initial setup, commit the `.harness` folder in `main` to the new branch. + +#### Step 2: Update the pipeline + +In the Pipeline Studio, check the branch pull-down to make sure you're in the correct branch. (You might need to refresh the page to see the new branch.) + +![](./static/manage-input-sets-in-simplified-git-experience-11.png) + +Update the Pipeline with the branch-specific behavior you want the Pipeline to perform. (In this example workflow, you would add some Run Test Steps to your Build Stage.) + +When you finish updating, click **Save** and save the Pipeline in your new branch. + +#### Step 3: Create a branch-specific trigger + +In this step, you will create a Trigger specifically for the new branch. Do the following: + +* In the Configuration tab, include the branch in the trigger name. For example, **build-on-push-to-my-new-feature-branch**. +* In the Conditions tab, set the Condition to trigger on the specified branch only. If you want to trigger on a Pull Request, for example, set the Target Branch field to `my-new-feature-branch`. +You might also want to set the Changed Files field to exclude the .harness folder. This will ensure that updates to your Harness configs don't trigger unwanted builds. + + ![](./static/manage-input-sets-in-simplified-git-experience-12.png) + +* In the Pipeline Input tab, specify the branch name in the **Pipeline Reference Branch** field. + + ![](./static/manage-input-sets-in-simplified-git-experience-13.png) + +### Notes + +Review the following notes in case you encounter issues using Git Experience with Input Sets. + +#### Pipeline reference branch field + +When Git Experience is enabled for your Pipeline, the Pipeline Input tab includes the **Pipeline Reference Branch** field. This field is set to `<+trigger.branch>` by default. 
This means that any Build started by this Trigger uses the Pipeline and Input Set definitions in the branch specified in the webhook payload. + +This default is applicable *only* if the Trigger is webhook-based. For all other Trigger types, you need to enter a specific branch name. + +#### Create input set before creating trigger + +If you want to use an Input Set as part of a Trigger, create and sync the Input Set before creating the Trigger. + +For more details, go to [Manage a Harness Pipeline Repo Using Git Experience](manage-a-harness-pipeline-repo-using-git-experience.md). + diff --git a/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-35.png b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-35.png new file mode 100644 index 00000000000..3cfdbf5c77b Binary files /dev/null and b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-35.png differ diff --git a/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-36.png b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-36.png new file mode 100644 index 00000000000..181470e78ca Binary files /dev/null and b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-36.png differ diff --git a/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-37.png b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-37.png new file mode 100644 index 00000000000..c0c154fef34 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-37.png differ diff --git a/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-38.png b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-38.png new file mode 100644 index 00000000000..4510708a02b Binary files /dev/null 
and b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-38.png differ diff --git a/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-39.png b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-39.png new file mode 100644 index 00000000000..0832b0a2b8f Binary files /dev/null and b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-39.png differ diff --git a/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-40.png b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-40.png new file mode 100644 index 00000000000..7763aa0e364 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-40.png differ diff --git a/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-41.png b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-41.png new file mode 100644 index 00000000000..fd1efb186d4 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-41.png differ diff --git a/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-42.png b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-42.png new file mode 100644 index 00000000000..3cfdbf5c77b Binary files /dev/null and b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-42.png differ diff --git a/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-43.png b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-43.png new file mode 100644 index 00000000000..fd1efb186d4 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-43.png differ 
diff --git a/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-44.png b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-44.png new file mode 100644 index 00000000000..a329cec99c3 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-44.png differ diff --git a/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-45.png b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-45.png new file mode 100644 index 00000000000..18759caf2fe Binary files /dev/null and b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-45.png differ diff --git a/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-46.png b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-46.png new file mode 100644 index 00000000000..9020388a08a Binary files /dev/null and b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-46.png differ diff --git a/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-47.png b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-47.png new file mode 100644 index 00000000000..072c295f9b6 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-47.png differ diff --git a/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-48.png b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-48.png new file mode 100644 index 00000000000..867f3690348 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-48.png differ diff --git a/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-49.png 
b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-49.png new file mode 100644 index 00000000000..b723507f049 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-49.png differ diff --git a/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-50.png b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-50.png new file mode 100644 index 00000000000..0851b13a31a Binary files /dev/null and b/docs/platform/10_Git-Experience/static/configure-git-experience-for-harness-entities-50.png differ diff --git a/docs/platform/10_Git-Experience/static/git-experience-overview-02.png b/docs/platform/10_Git-Experience/static/git-experience-overview-02.png new file mode 100644 index 00000000000..4510708a02b Binary files /dev/null and b/docs/platform/10_Git-Experience/static/git-experience-overview-02.png differ diff --git a/docs/platform/10_Git-Experience/static/git-experience-overview-03.png b/docs/platform/10_Git-Experience/static/git-experience-overview-03.png new file mode 100644 index 00000000000..b723507f049 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/git-experience-overview-03.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-overview-04.png b/docs/platform/10_Git-Experience/static/harness-git-experience-overview-04.png new file mode 100644 index 00000000000..d029c444d49 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-overview-04.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-overview-05.png b/docs/platform/10_Git-Experience/static/harness-git-experience-overview-05.png new file mode 100644 index 00000000000..f2ed590e505 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-overview-05.png differ diff --git 
a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-51.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-51.png new file mode 100644 index 00000000000..643653ba653 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-51.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-52.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-52.png new file mode 100644 index 00000000000..cfab056b7bc Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-52.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-53.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-53.png new file mode 100644 index 00000000000..da566304478 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-53.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-54.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-54.png new file mode 100644 index 00000000000..87cf0512acd Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-54.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-55.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-55.png new file mode 100644 index 00000000000..58f2e83da1b Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-55.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-56.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-56.png new file mode 100644 index 00000000000..7904548f79d Binary files /dev/null and 
b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-56.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-57.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-57.png new file mode 100644 index 00000000000..57b457880cb Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-57.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-58.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-58.png new file mode 100644 index 00000000000..fe111c21403 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-58.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-59.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-59.png new file mode 100644 index 00000000000..289749f1bf7 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-59.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-60.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-60.png new file mode 100644 index 00000000000..60a6a8e25ef Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-60.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-61.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-61.png new file mode 100644 index 00000000000..fd1efb186d4 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-61.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-62.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-62.png new file mode 100644 
index 00000000000..300f3b65f00 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-62.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-63.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-63.png new file mode 100644 index 00000000000..54795df8ea1 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-63.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-64.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-64.png new file mode 100644 index 00000000000..a36521a4f11 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-64.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-65.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-65.png new file mode 100644 index 00000000000..c362017f11b Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-65.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-66.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-66.png new file mode 100644 index 00000000000..fe111c21403 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-66.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-67.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-67.png new file mode 100644 index 00000000000..096efbaf1b3 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-67.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-68.png 
b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-68.png new file mode 100644 index 00000000000..1301e77df21 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-68.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-69.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-69.png new file mode 100644 index 00000000000..562a60e01b5 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-69.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-70.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-70.png new file mode 100644 index 00000000000..df0280c1a56 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-70.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-71.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-71.png new file mode 100644 index 00000000000..f4908e2a54a Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-71.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-72.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-72.png new file mode 100644 index 00000000000..4ddb6e5627d Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-72.png differ diff --git a/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-73.png b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-73.png new file mode 100644 index 00000000000..fe9bc51ca01 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/harness-git-experience-quickstart-73.png differ diff --git 
a/docs/platform/10_Git-Experience/static/import-a-pipeline-29.png b/docs/platform/10_Git-Experience/static/import-a-pipeline-29.png new file mode 100644 index 00000000000..cbb209016ca Binary files /dev/null and b/docs/platform/10_Git-Experience/static/import-a-pipeline-29.png differ diff --git a/docs/platform/10_Git-Experience/static/import-a-pipeline-30.png b/docs/platform/10_Git-Experience/static/import-a-pipeline-30.png new file mode 100644 index 00000000000..517787ec0e2 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/import-a-pipeline-30.png differ diff --git a/docs/platform/10_Git-Experience/static/import-a-pipeline-31.png b/docs/platform/10_Git-Experience/static/import-a-pipeline-31.png new file mode 100644 index 00000000000..fd1efb186d4 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/import-a-pipeline-31.png differ diff --git a/docs/platform/10_Git-Experience/static/import-a-pipeline-32.png b/docs/platform/10_Git-Experience/static/import-a-pipeline-32.png new file mode 100644 index 00000000000..3cfdbf5c77b Binary files /dev/null and b/docs/platform/10_Git-Experience/static/import-a-pipeline-32.png differ diff --git a/docs/platform/10_Git-Experience/static/import-a-pipeline-33.png b/docs/platform/10_Git-Experience/static/import-a-pipeline-33.png new file mode 100644 index 00000000000..fd1efb186d4 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/import-a-pipeline-33.png differ diff --git a/docs/platform/10_Git-Experience/static/import-a-pipeline-34.png b/docs/platform/10_Git-Experience/static/import-a-pipeline-34.png new file mode 100644 index 00000000000..ece6f7351d2 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/import-a-pipeline-34.png differ diff --git a/docs/platform/10_Git-Experience/static/import-a-template-from-git-23.png b/docs/platform/10_Git-Experience/static/import-a-template-from-git-23.png new file mode 100644 index 00000000000..388ecc47c78 Binary files /dev/null and 
b/docs/platform/10_Git-Experience/static/import-a-template-from-git-23.png differ diff --git a/docs/platform/10_Git-Experience/static/import-a-template-from-git-24.png b/docs/platform/10_Git-Experience/static/import-a-template-from-git-24.png new file mode 100644 index 00000000000..461d6eb91c2 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/import-a-template-from-git-24.png differ diff --git a/docs/platform/10_Git-Experience/static/import-a-template-from-git-25.png b/docs/platform/10_Git-Experience/static/import-a-template-from-git-25.png new file mode 100644 index 00000000000..fd1efb186d4 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/import-a-template-from-git-25.png differ diff --git a/docs/platform/10_Git-Experience/static/import-a-template-from-git-26.png b/docs/platform/10_Git-Experience/static/import-a-template-from-git-26.png new file mode 100644 index 00000000000..3cfdbf5c77b Binary files /dev/null and b/docs/platform/10_Git-Experience/static/import-a-template-from-git-26.png differ diff --git a/docs/platform/10_Git-Experience/static/import-a-template-from-git-27.png b/docs/platform/10_Git-Experience/static/import-a-template-from-git-27.png new file mode 100644 index 00000000000..2e61041fb02 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/import-a-template-from-git-27.png differ diff --git a/docs/platform/10_Git-Experience/static/import-a-template-from-git-28.png b/docs/platform/10_Git-Experience/static/import-a-template-from-git-28.png new file mode 100644 index 00000000000..56b4af404c0 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/import-a-template-from-git-28.png differ diff --git a/docs/platform/10_Git-Experience/static/import-input-sets-00.png b/docs/platform/10_Git-Experience/static/import-input-sets-00.png new file mode 100644 index 00000000000..6411ce1feed Binary files /dev/null and b/docs/platform/10_Git-Experience/static/import-input-sets-00.png differ diff --git 
a/docs/platform/10_Git-Experience/static/import-input-sets-01.png b/docs/platform/10_Git-Experience/static/import-input-sets-01.png new file mode 100644 index 00000000000..f8a23f5766f Binary files /dev/null and b/docs/platform/10_Git-Experience/static/import-input-sets-01.png differ diff --git a/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-14.png b/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-14.png new file mode 100644 index 00000000000..513d40a07ac Binary files /dev/null and b/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-14.png differ diff --git a/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-15.png b/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-15.png new file mode 100644 index 00000000000..65ec319620f Binary files /dev/null and b/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-15.png differ diff --git a/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-16.png b/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-16.png new file mode 100644 index 00000000000..21529a34877 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-16.png differ diff --git a/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-17.png b/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-17.png new file mode 100644 index 00000000000..db6679176e9 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-17.png differ diff --git a/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-18.png 
b/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-18.png new file mode 100644 index 00000000000..5e26acaaacd Binary files /dev/null and b/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-18.png differ diff --git a/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-19.png b/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-19.png new file mode 100644 index 00000000000..b65a6b636bb Binary files /dev/null and b/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-19.png differ diff --git a/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-20.png b/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-20.png new file mode 100644 index 00000000000..f1e3a4cdd67 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-20.png differ diff --git a/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-21.png b/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-21.png new file mode 100644 index 00000000000..bd00e40ea0c Binary files /dev/null and b/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-21.png differ diff --git a/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-22.png b/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-22.png new file mode 100644 index 00000000000..b28a01d25f5 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/manage-a-harness-pipeline-repo-using-git-experience-22.png differ diff --git a/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-06.png 
b/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-06.png new file mode 100644 index 00000000000..19b398cbb84 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-06.png differ diff --git a/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-07.png b/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-07.png new file mode 100644 index 00000000000..92d2e37e606 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-07.png differ diff --git a/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-08.png b/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-08.png new file mode 100644 index 00000000000..3909b92d16d Binary files /dev/null and b/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-08.png differ diff --git a/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-09.png b/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-09.png new file mode 100644 index 00000000000..f5f25988699 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-09.png differ diff --git a/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-10.png b/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-10.png new file mode 100644 index 00000000000..90f88b320b4 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-10.png differ diff --git a/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-11.png 
b/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-11.png new file mode 100644 index 00000000000..85f01e2ba18 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-11.png differ diff --git a/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-12.png b/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-12.png new file mode 100644 index 00000000000..3244d030fd8 Binary files /dev/null and b/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-12.png differ diff --git a/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-13.png b/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-13.png new file mode 100644 index 00000000000..5bee5526fcb Binary files /dev/null and b/docs/platform/10_Git-Experience/static/manage-input-sets-in-simplified-git-experience-13.png differ diff --git a/docs/platform/11_Triggers/_category_.json b/docs/platform/11_Triggers/_category_.json new file mode 100644 index 00000000000..a469e84764e --- /dev/null +++ b/docs/platform/11_Triggers/_category_.json @@ -0,0 +1 @@ +{"label": "Triggers", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Triggers"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "oya6qhmmaw"}} \ No newline at end of file diff --git a/docs/platform/11_Triggers/schedule-pipelines-using-cron-triggers.md b/docs/platform/11_Triggers/schedule-pipelines-using-cron-triggers.md new file mode 100644 index 00000000000..bf3239ce9b6 --- /dev/null +++ b/docs/platform/11_Triggers/schedule-pipelines-using-cron-triggers.md @@ -0,0 +1,89 @@ +--- +title: Schedule Pipelines using Triggers +description: Schedule Pipeline executions using Cron-based 
Triggers. +# sidebar_position: 2 +helpdocs_topic_id: 4z9mf24m1b +helpdocs_category_id: oya6qhmmaw +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can schedule Pipeline executions using Cron-based Triggers. + +For example, you can have a Pipeline run every Monday at 1 AM. Harness will generate the Cron expression (`0 1 * * MON`). + +For a general Triggers reference, see [Triggers Reference](../8_Pipelines/w_pipeline-steps-reference/triggers-reference.md). + +### Before you begin + +* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Kubernetes CD Quickstart](https://docs.harness.io/article/knunou9j30-kubernetes-cd-quickstart) +* [CI Pipeline Quickstart](../../continuous-integration/ci-quickstarts/ci-pipeline-quickstart.md) + +### Step 1: Add a trigger to a pipeline + +Open your Harness Pipeline in Pipeline Studio. + +Click **Triggers**. + +Click **New Trigger**. + +Click **Cron**. + +For Git-based Trigger types or CodeCommit, see [Trigger Pipelines using Git Events](triggering-pipelines.md). + +In **Trigger Overview**, enter a name, description, and Tags for the Trigger. + +### Step 2: Schedule the trigger + +In **Schedule**, use the settings to schedule the Trigger. + +When you edit a Cron Trigger later, you can type or paste in a Cron expression. + +The Cron expression is evaluated against UTC time. + +Here's a reminder of Cron expression formatting: + +``` +0 0 4 7 ? 2014 +| | | | | | +| | | | | \------- YEAR (2014) +| | | | \--------- DAY_OF_WEEK (NOT_SPECIFIED) +| | | \----------- MONTH (JULY) +| | \------------- DAY_OF_MONTH (4th) +| \--------------- HOUR (0 = MIDNIGHT, UTC) +\----------------- MINUTE (0) +``` +### Step 3: Set pipeline input + +Pipelines often have [Runtime Inputs](../20_References/runtime-inputs.md) like codebase branch names or artifact versions and tags. + +Provide values for the inputs. 
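As an aside on the Step 2 schedule, the UTC evaluation of a Cron expression can be sanity-checked locally. The sketch below is an illustration only, not Harness's actual scheduler: it handles numeric five-field expressions with `*` and comma lists, and does not support names like `MON` (use the numeric equivalent, `1` for Monday).

```python
from datetime import datetime, timezone

def field_matches(field: str, value: int) -> bool:
    """Return True if a single Cron field (e.g. '*', '5', '1,15') matches value."""
    if field == "*":
        return True
    return value in {int(part) for part in field.split(",")}

def cron_matches(expr: str, when: datetime) -> bool:
    """Check a numeric five-field Cron expression (minute hour dom month dow)
    against a UTC datetime. Named values like MON are not handled here."""
    minute, hour, dom, month, dow = expr.split()
    return (field_matches(minute, when.minute)
            and field_matches(hour, when.hour)
            and field_matches(dom, when.day)
            and field_matches(month, when.month)
            # Cron day-of-week: 0 = Sunday; Python's weekday(): 0 = Monday
            and field_matches(dow, (when.weekday() + 1) % 7))

# "0 1 * * 1" is the numeric form of "0 1 * * MON": 01:00 UTC on Mondays.
monday_1am = datetime(2023, 1, 2, 1, 0, tzinfo=timezone.utc)  # a Monday
print(cron_matches("0 1 * * 1", monday_1am))  # -> True
```

This kind of check is handy for confirming that an expression you paste into the Trigger fires when you expect, before waiting for the real schedule.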
You can also use [Input Sets](../8_Pipelines/input-sets.md). + +Click **Create Trigger**. + +The Trigger is now added to the Triggers page. + +### Step 4: Enable or disable trigger + +Use the Enable setting to turn the Trigger on and off. + +![](./static/schedule-pipelines-using-cron-triggers-20.png) + +That's it. Your Pipeline will run whenever the Cron expression matches the current time. + +### Option: Run once + +To specify a run-once schedule, specify a fully qualified date and time. + +Enter the minute, hour, day of month, and month; you can also restrict the day of the week. + +The example below runs **at 1:45 PM, on day 13 of the month and on Tuesday, only in September**: + +`45 13 13 09 Tue` + +![](./static/schedule-pipelines-using-cron-triggers-21.png) +### See also + +* [Triggers Reference](../8_Pipelines/w_pipeline-steps-reference/triggers-reference.md) + diff --git a/docs/platform/11_Triggers/static/schedule-pipelines-using-cron-triggers-20.png b/docs/platform/11_Triggers/static/schedule-pipelines-using-cron-triggers-20.png new file mode 100644 index 00000000000..50b69f33638 Binary files /dev/null and b/docs/platform/11_Triggers/static/schedule-pipelines-using-cron-triggers-20.png differ diff --git a/docs/platform/11_Triggers/static/schedule-pipelines-using-cron-triggers-21.png b/docs/platform/11_Triggers/static/schedule-pipelines-using-cron-triggers-21.png new file mode 100644 index 00000000000..24c745f4dcb Binary files /dev/null and b/docs/platform/11_Triggers/static/schedule-pipelines-using-cron-triggers-21.png differ diff --git a/docs/platform/11_Triggers/static/trigger-deployments-using-custom-triggers-00.png b/docs/platform/11_Triggers/static/trigger-deployments-using-custom-triggers-00.png new file mode 100644 index 00000000000..f3625da0f09 Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-deployments-using-custom-triggers-00.png differ diff --git a/docs/platform/11_Triggers/static/trigger-deployments-using-custom-triggers-01.png 
b/docs/platform/11_Triggers/static/trigger-deployments-using-custom-triggers-01.png new file mode 100644 index 00000000000..49e02ac160f Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-deployments-using-custom-triggers-01.png differ diff --git a/docs/platform/11_Triggers/static/trigger-deployments-using-custom-triggers-02.png b/docs/platform/11_Triggers/static/trigger-deployments-using-custom-triggers-02.png new file mode 100644 index 00000000000..f3625da0f09 Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-deployments-using-custom-triggers-02.png differ diff --git a/docs/platform/11_Triggers/static/trigger-deployments-using-custom-triggers-03.png b/docs/platform/11_Triggers/static/trigger-deployments-using-custom-triggers-03.png new file mode 100644 index 00000000000..b9e9645119f Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-deployments-using-custom-triggers-03.png differ diff --git a/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-22.png b/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-22.png new file mode 100644 index 00000000000..8f80be0d4cd Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-22.png differ diff --git a/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-23.png b/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-23.png new file mode 100644 index 00000000000..b31f658ac17 Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-23.png differ diff --git a/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-24.png b/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-24.png new file mode 100644 index 00000000000..994db8543f5 Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-24.png differ diff --git a/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-25.png 
b/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-25.png new file mode 100644 index 00000000000..b4be19b7110 Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-25.png differ diff --git a/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-26.png b/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-26.png new file mode 100644 index 00000000000..2a5686d4c29 Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-26.png differ diff --git a/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-27.png b/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-27.png new file mode 100644 index 00000000000..ef0e86efc3c Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-27.png differ diff --git a/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-28.png b/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-28.png new file mode 100644 index 00000000000..c5f71156755 Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-28.png differ diff --git a/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-29.png b/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-29.png new file mode 100644 index 00000000000..d05ddb39a6d Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-on-a-new-artifact-29.png differ diff --git a/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-04.png b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-04.png new file mode 100644 index 00000000000..92254061a03 Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-04.png differ diff --git a/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-05.png b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-05.png new file mode 100644 index 00000000000..4c1b7310b89 Binary files /dev/null 
and b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-05.png differ diff --git a/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-06.png b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-06.png new file mode 100644 index 00000000000..2fe3e535990 Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-06.png differ diff --git a/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-07.png b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-07.png new file mode 100644 index 00000000000..d57b73ca608 Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-07.png differ diff --git a/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-08.png b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-08.png new file mode 100644 index 00000000000..186cc34d397 Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-08.png differ diff --git a/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-09.png b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-09.png new file mode 100644 index 00000000000..c3e21d6920a Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-09.png differ diff --git a/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-10.png b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-10.png new file mode 100644 index 00000000000..d7617012918 Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-10.png differ diff --git a/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-11.png b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-11.png new file mode 100644 index 00000000000..4955e5d67e6 Binary files /dev/null and 
b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-11.png differ diff --git a/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-12.png b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-12.png new file mode 100644 index 00000000000..9c0d103259b Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-12.png differ diff --git a/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-13.png b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-13.png new file mode 100644 index 00000000000..c5f71156755 Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-13.png differ diff --git a/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-14.png b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-14.png new file mode 100644 index 00000000000..3e89642f63b Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-pipelines-on-new-helm-chart-14.png differ diff --git a/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-30.png b/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-30.png new file mode 100644 index 00000000000..9afda402df2 Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-30.png differ diff --git a/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-31.png b/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-31.png new file mode 100644 index 00000000000..4ae0232c07b Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-31.png differ diff --git a/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-32.png 
b/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-32.png new file mode 100644 index 00000000000..68044b86c67 Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-32.png differ diff --git a/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-33.png b/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-33.png new file mode 100644 index 00000000000..ac7919becc1 Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-33.png differ diff --git a/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-34.png b/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-34.png new file mode 100644 index 00000000000..72990dba687 Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-34.png differ diff --git a/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-35.png b/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-35.png new file mode 100644 index 00000000000..f50124ac60c Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-35.png differ diff --git a/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-36.png b/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-36.png new file mode 100644 index 00000000000..b6f88181f58 Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-36.png differ diff --git a/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-37.png b/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-37.png new file mode 100644 index 
00000000000..f384f9e9519 Binary files /dev/null and b/docs/platform/11_Triggers/static/trigger-pipelines-using-custom-payload-conditions-37.png differ diff --git a/docs/platform/11_Triggers/static/triggering-pipelines-15.png b/docs/platform/11_Triggers/static/triggering-pipelines-15.png new file mode 100644 index 00000000000..acd28d1e4bc Binary files /dev/null and b/docs/platform/11_Triggers/static/triggering-pipelines-15.png differ diff --git a/docs/platform/11_Triggers/static/triggering-pipelines-16.png b/docs/platform/11_Triggers/static/triggering-pipelines-16.png new file mode 100644 index 00000000000..65b0c6c4c75 Binary files /dev/null and b/docs/platform/11_Triggers/static/triggering-pipelines-16.png differ diff --git a/docs/platform/11_Triggers/static/triggering-pipelines-17.png b/docs/platform/11_Triggers/static/triggering-pipelines-17.png new file mode 100644 index 00000000000..4105dd13bf5 Binary files /dev/null and b/docs/platform/11_Triggers/static/triggering-pipelines-17.png differ diff --git a/docs/platform/11_Triggers/static/triggering-pipelines-18.png b/docs/platform/11_Triggers/static/triggering-pipelines-18.png new file mode 100644 index 00000000000..dc98fe7f3ad Binary files /dev/null and b/docs/platform/11_Triggers/static/triggering-pipelines-18.png differ diff --git a/docs/platform/11_Triggers/static/triggering-pipelines-19.png b/docs/platform/11_Triggers/static/triggering-pipelines-19.png new file mode 100644 index 00000000000..69fc7d46a92 Binary files /dev/null and b/docs/platform/11_Triggers/static/triggering-pipelines-19.png differ diff --git a/docs/platform/11_Triggers/trigger-deployments-using-custom-triggers.md b/docs/platform/11_Triggers/trigger-deployments-using-custom-triggers.md new file mode 100644 index 00000000000..07290f0600b --- /dev/null +++ b/docs/platform/11_Triggers/trigger-deployments-using-custom-triggers.md @@ -0,0 +1,342 @@ +--- +title: Trigger Deployments using Custom Triggers +description: Harness includes Triggers for 
Git providers, artifact providers, and manifests. Trigger a deployment using cURL and get deployment status using REST +# sidebar_position: 2 +helpdocs_topic_id: qghequ5vxu +helpdocs_category_id: oya6qhmmaw +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic shows you how to create and run Custom Triggers for your Harness pipelines using platform-agnostic Webhooks and cURL commands. + +### Overview + +In addition to Triggers that use Git providers, artifact providers, manifests, and cron scheduling, Harness includes Custom Triggers that you can use to run Pipelines via a platform-agnostic Webhook. + +Once you create a Custom Trigger, Harness provides the Webhook URL and cURL command to initiate the Trigger. + +![](./static/trigger-deployments-using-custom-triggers-00.png) +You can do the following with a Custom Trigger: + +* Start a deployment using a cURL command. +* Use a REST call to get deployment status. +* Start a deployment using a URL provided by Harness. + +### Create the custom trigger + +1. In your Harness Pipeline in Pipeline Studio, click **Triggers**. +2. Click **New Trigger**. +3. In **Webhook**, click **Custom**. +![](./static/trigger-deployments-using-custom-triggers-01.png) +4. Name the new Trigger and click **Continue**. + +The **Payload Type** is set as Custom. If this were a Git provider Trigger, you would specify the repo URL and events for the Trigger. + +For more details, see [Trigger Pipelines using Git Event Payload Conditions](trigger-pipelines-using-custom-payload-conditions.md) and [Trigger Pipelines using Git Events](triggering-pipelines.md). + +### Conditions + +Conditions specify criteria in addition to events and actions. + +Conditions help to form the overall set of criteria to trigger a Pipeline based on changes in a given source. + +For example: + +* Execute Pipeline if the source/target branch name matches a pattern. +* Execute Pipeline if the event is sent for file changes from specific directories in the Git repo.
This is very useful when working with a monorepo (mono repository). It ensures that only specific Pipelines are triggered in response to a change. + +Conditions support Harness built-in expressions for accessing Trigger settings, Git payload data, and headers. + +JEXL expressions are also supported. + +For details on these settings, see [Triggers Reference](../8_Pipelines/w_pipeline-steps-reference/triggers-reference.md). + +Conditions are ANDed together (boolean AND operation). All Conditions must match an event payload for it to execute the Trigger. + +### Pipeline Input + +Pipelines often have [Runtime Inputs](../20_References/runtime-inputs.md) like codebase branch names or artifact versions and tags. + +Provide values for the inputs. You can also use [Input Sets](../8_Pipelines/input-sets.md). + +Click **Create Trigger**. + +The Trigger is now added to the Triggers page. + +### Trigger a deployment using cURL + +1. On the Triggers page, in the **Webhook** column, click the link icon for your Trigger and then click **Copy as cURL Command**. + +![](./static/trigger-deployments-using-custom-triggers-02.png) +Here's an example of the cURL command: + + +``` +curl -X POST -H 'content-type: application/json' --url 'https://app.harness.io/gateway/pipeline/api/webhook/custom/v2?accountIdentifier=H5W8iol5TNWc4G9h5A2MXg&orgIdentifier=default&projectIdentifier=CD_Docs&pipelineIdentifier=Triggers&triggerIdentifier=Custom' -d '{"sample_key": "sample_value"}' +``` +Run this command in a Terminal to trigger a Pipeline execution.
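For repeated use, the same webhook call can be wrapped in a small script. Below is a minimal sketch: the identifiers in the query string are the sample values from the command above, and the `trigger_pipeline` function name is our own, not part of the Harness API.

```shell
#!/bin/sh
# Reuse the sample Custom Trigger webhook URL shown above.
WEBHOOK_URL='https://app.harness.io/gateway/pipeline/api/webhook/custom/v2'
QUERY='accountIdentifier=H5W8iol5TNWc4G9h5A2MXg&orgIdentifier=default&projectIdentifier=CD_Docs&pipelineIdentifier=Triggers&triggerIdentifier=Custom'

# Post an arbitrary JSON payload to the Custom Trigger.
trigger_pipeline() {
  curl -s -X POST -H 'content-type: application/json' \
    --url "${WEBHOOK_URL}?${QUERY}" -d "$1"
}

# Example invocation (requires network access and a valid account):
#   trigger_pipeline '{"sample_key": "sample_value"}'
```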
The response will look something like this: + + +``` +{ + "status":"SUCCESS", + "data":{ + "eventCorrelationId":"632394c7b018985c661747be", + "apiUrl":"https://app.harness.io/gateway/pipeline/api/webhook/triggerExecutionDetails/632394c7b018985c661747be?accountIdentifier=H5W8iol5TNWc4G9h5A2MXg", + "uiUrl":"https://app.harness.io/ng/#/account/H5W8iol5TNWc4G9h5A2MXg/cd/orgs/default/projects/CD_Docs/deployments?pipelineIdentifier=Triggers&page=0", + "uiSetupUrl":"https://app.harness.io/ng/#/account/H5W8iol5TNWc4G9h5A2MXg/cd/orgs/default/projects/CD_Docs/pipelines/Triggers/pipeline-studio/" + }, + "metaData":null, + "correlationId":"5f86c64b-b1a2-4385-88b0-2eaf1085c310" +} +``` +The Execution History page shows that the execution was triggered by a Custom Trigger: + +![](./static/trigger-deployments-using-custom-triggers-03.png) +### Links in the response + +The JSON response of the Custom Trigger cURL command contains several links. + + +``` +{ + "status":"SUCCESS", + "data":{ + "eventCorrelationId":"632394c7b018985c661747be", + "apiUrl":"https://app.harness.io/gateway/pipeline/api/webhook/triggerExecutionDetails/632394c7b018985c661747be?accountIdentifier=H5W8iol5TNWc4G9h5A2MXg", + "uiUrl":"https://app.harness.io/ng/#/account/H5W8iol5TNWc4G9h5A2MXg/cd/orgs/default/projects/CD_Docs/deployments?pipelineIdentifier=Triggers&page=0", + "uiSetupUrl":"https://app.harness.io/ng/#/account/H5W8iol5TNWc4G9h5A2MXg/cd/orgs/default/projects/CD_Docs/pipelines/Triggers/pipeline-studio/" + }, + "metaData":null, + "correlationId":"5f86c64b-b1a2-4385-88b0-2eaf1085c310" +} +``` +The following sections describe each link and what you can do with them. + +#### apiUrl + +**apiUrl** can be used to track deployment status programmatically, such as using a REST call. + +See [Get Deployment Status using REST](#get-deployment-status-using-rest) below. + + +#### uiUrl + +The **uiUrl** from the cURL command output can be used directly in a browser.
+ +To run a deployment from a browser, paste the URL from **uiUrl** into the browser location field and hit **ENTER**. + +The browser will open **app.harness.io** and display the running deployment. + +#### uiSetupUrl + +In the JSON response of a Pipeline executed by a Custom Trigger, the **uiSetupUrl** label displays the URL of the Pipeline that was run. + +### Get Deployment Status using REST + +The **apiUrl** property in the JSON response can be used to track deployment status programmatically, such as using a REST call. + +The `eventCorrelationId` contains the same Id as the URL in `apiUrl`. + +To get deployment status using a REST call (in this example, cURL), use the following cURL command, replacing **API\_URL** with the URL from **apiUrl**: + + +``` +curl -X GET --url "API_URL" +``` +For example: + + +``` +curl -X GET --url "https://app.harness.io/gateway/pipeline/api/webhook/triggerExecutionDetails/632394c7b018985c661747be?accountIdentifier=H5W8iol5TNWc4G9h5A2MXg" +``` +The response from the cURL command will contain the status of the deployment.
For example:  + + +``` +{ + "status":"SUCCESS", + "data":{ + "webhookProcessingDetails":{ + "eventFound":true, + "eventId":"632394c7b018985c661747be", + "accountIdentifier":"xxx", + "orgIdentifier":"default", + "projectIdentifier":"CD_Docs", + "triggerIdentifier":"Custom", + "pipelineIdentifier":"Triggers", + "pipelineExecutionId":"_iodHvEhT2y_Mn_DLaR32A", + "exceptionOccured":false, + "status":"TARGET_EXECUTION_REQUESTED", + "message":"Pipeline execution was requested successfully", + "payload":"{\"sample_key\": \"sample_value\"}", + "eventCreatedAt":1663276236705, + "runtimeInput":"pipeline: {}\n" + }, + "executionDetails":{ + "pipelineExecutionSummary":{ + "pipelineIdentifier":"Triggers", + "planExecutionId":"_iodHvEhT2y_Mn_DLaR32A", + "name":"Triggers", + "status":"Success", + "tags":[ + + ], + "executionTriggerInfo":{ + "triggerType":"WEBHOOK_CUSTOM", + "triggeredBy":{ + "uuid":"systemUser", + "identifier":"Custom", + "extraInfo":{ + "execution_trigger_tag_needed_for_abort":"H5W8iol5TNWc4G9h5A2MXg:default:CD_Docs:Triggers", + "triggerRef":"H5W8iol5TNWc4G9h5A2MXg/default/CD_Docs/Custom", + "eventCorrelationId":"632394c7b018985c661747be" + } + }, + "isRerun":false + }, + "governanceMetadata":{ + "id":"0", + "deny":false, + "details":[ + + ], + "message":"", + "timestamp":"1663276236674", + "status":"pass", + "accountId":"H5W8iol5TNWc4G9h5A2MXg", + "orgId":"default", + "projectId":"CD_Docs", + "entity":"accountIdentifier%3AH5W8iol5TNWc4G9h5A2MXg%2ForgIdentifier%3Adefault%2FprojectIdentifier%3ACD_Docs%2FpipelineIdentifier%3ATriggers", + "type":"pipeline", + "action":"onrun", + "created":"1663276236657" + }, + "moduleInfo":{ + "cd":{ + "__recast":"io.harness.cdng.pipeline.executions.beans.CDPipelineModuleInfo", + "envGroupIdentifiers":[ + + ], + "envIdentifiers":[ + "dev" + ], + "environmentTypes":[ + "PreProduction" + ], + "infrastructureIdentifiers":[ + null + ], + "infrastructureNames":[ + null + ], + "infrastructureTypes":[ + "KubernetesDirect" + ], + 
"serviceDefinitionTypes":[ + "Kubernetes" + ], + "serviceIdentifiers":[ + "dev" + ] + } + }, + "layoutNodeMap":{ + "XZoMGLJIRgm11QqGYbIElA":{ + "nodeType":"Deployment", + "nodeGroup":"STAGE", + "nodeIdentifier":"trigger", + "name":"trigger", + "nodeUuid":"XZoMGLJIRgm11QqGYbIElA", + "status":"Success", + "module":"cd", + "moduleInfo":{ + "cd":{ + "__recast":"io.harness.cdng.pipeline.executions.beans.CDStageModuleInfo", + "serviceInfo":{ + "__recast":"io.harness.cdng.pipeline.executions.beans.ServiceExecutionSummary", + "identifier":"dev", + "displayName":"dev", + "deploymentType":"Kubernetes", + "gitOpsEnabled":false, + "artifacts":{ + "__recast":"io.harness.cdng.pipeline.executions.beans.ServiceExecutionSummary$ArtifactsSummary", + "sidecars":[ + + ] + } + }, + "infraExecutionSummary":{ + "__recast":"io.harness.cdng.pipeline.executions.beans.InfraExecutionSummary", + "identifier":"dev", + "name":"dev", + "type":"PreProduction" + } + } + }, + "startTs":1663276236851, + "endTs":1663276251023, + "edgeLayoutList":{ + "currentNodeChildren":[ + + ], + "nextIds":[ + + ] + }, + "nodeRunInfo":{ + "whenCondition":"<+OnPipelineSuccess>", + "evaluatedCondition":true, + "expressions":[ + { + "expression":"OnPipelineSuccess", + "expressionValue":"true", + "count":1 + } + ] + }, + "failureInfo":{ + "message":"" + }, + "failureInfoDTO":{ + "message":"", + "failureTypeList":[ + + ], + "responseMessages":[ + + ] + }, + "nodeExecutionId":"YC_1XgBQSUu79da21J7aVA", + "executionInputConfigured":false + } + }, + "modules":[ + "cd" + ], + "startingNodeId":"XZoMGLJIRgm11QqGYbIElA", + "startTs":1663276236674, + "endTs":1663276251126, + "createdAt":1663276236698, + "canRetry":true, + "showRetryHistory":false, + "runSequence":11, + "successfulStagesCount":1, + "runningStagesCount":0, + "failedStagesCount":0, + "totalStagesCount":1, + "executionInputConfigured":false, + "allowStageExecutions":false, + "stagesExecution":false + } + } + }, + "metaData":null, + 
"correlationId":"4b76cec6-c4b3-408c-b66b-7e14540c6e14" +} +``` +### Custom Trigger authorization using API keys + +You can use Harness API keys in your cURL command + +#### Adding authorization to custom webhook Triggers + +#### Enforcing authorization for custom webhook Triggers + diff --git a/docs/platform/11_Triggers/trigger-on-a-new-artifact.md b/docs/platform/11_Triggers/trigger-on-a-new-artifact.md new file mode 100644 index 00000000000..da1b45ad479 --- /dev/null +++ b/docs/platform/11_Triggers/trigger-on-a-new-artifact.md @@ -0,0 +1,193 @@ +--- +title: Trigger Pipelines on a New Artifact +description: Trigger Harness Pipeline deployments in response to a new artifact version being added to a registry. +# sidebar_position: 2 +helpdocs_topic_id: c1eskrgngf +helpdocs_category_id: oya6qhmmaw +helpdocs_is_private: false +helpdocs_is_published: true +--- + + +:::note +Currently, this feature is behind the feature flags `NG_SVC_ENV_REDESIGN` and `CD_TRIGGERS_REFACTOR`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. + +::: + +You can trigger Harness Pipelines in response to a new artifact version being added to a registry. + +For example, every time a new Docker image is pushed to your Docker Hub account, it triggers a CD Pipeline that deploys it automatically. + +On New Artifact Triggers simply listen to the registry where one or more of the artifacts in your Pipeline are hosted. + +You can set conditions on the Triggers, such as matching a Docker tag or label or a traditional artifact build name or number. + +This Trigger is a simple way to automate deployments for new builds. + +### Before you begin + +* You should be familiar with Harness CD Pipelines, such as the one you create in the [Kubernetes CD Quickstart](https://docs.harness.io/article/knunou9j30-kubernetes-cd-quickstart). 
+ +### Important notes + +* If more than one artifact is collected during the polling interval (two minutes), only one deployment will be started and will use the last artifact collected. +* The Trigger is executed based on **file names** and not metadata changes. +* Do not trigger on the **latest** tag of an artifact, such as a Docker image. With latest, Harness only has metadata, such as the tag name, which has not changed, and so Harness does not know if anything has changed. The Trigger will not be executed. +* In Harness, you can select who is able to create and use Triggers within Harness, but you must use your repos' RBAC to control who can add the artifacts or initiate the events that start the Harness Trigger. + +### Visual summary + +This 5min video walks you through building an app from source code and pushing it to Docker Hub using Harness CIE, and then having an On New Artifact Trigger execute a CD Pipeline to deploy the new app release automatically. + +### Review: artifact polling + +Once you have created a Trigger to listen for new artifacts, Harness will poll for new artifacts continuously. + +Polling is immediate because Harness uses a perpetual task framework that constantly monitors for new builds/tags. + +### Using the <+trigger.artifact.build> expression + +When you add a Harness Service to the CD stage, you can set the artifact tag to use in **Artifacts Details**. + +![](./static/trigger-on-a-new-artifact-22.png) + +If you use a [Fixed Value](../20_References/runtime-inputs.md) for the artifact **Tag** (for example, **2**), when the Trigger executes the Pipeline, Harness will deploy the artifact with that tag (**2**). + +If you want the Pipeline to deploy the artifact version that initiated the Trigger, use the expression `<+trigger.artifact.build>`. 
+ +![](./static/trigger-on-a-new-artifact-23.png) + +You can also set Tag as a Runtime Input and then use `<+trigger.artifact.build>` in the Trigger's [Pipeline Input](#step-3-select-pipeline-inputs) settings. + +### Create an artifact trigger + +1. Select a Harness Pipeline that includes an artifact in the Stage's **Service Definition**. + + ![](./static/trigger-on-a-new-artifact-24.png) + + You reference an artifact in the Stage's Service Definition in your manifests using the expression `<+artifact.image>`. See [Add Container Images as Artifacts for Kubernetes Deployments](https://docs.harness.io/article/4ifq51cp0i-add-artifacts-for-kubernetes-deployments). + +2. Click **Triggers**. +3. Click **New Trigger**. +4. The On New Artifact Trigger options are listed under **Artifact**. Each of the **Artifact** options is described below. +5. Select the artifact registry where your artifact is hosted. If your artifact is hosted on Docker Hub and you select GCR, you won't be able to set up your Trigger. + +### Option: Docker Registry Artifacts + +1. In **Configuration**, in **Name**, enter a name for the Trigger. +2. In **Listen on New Artifact**, click **Define Artifact Source**. This is where you tell Harness what artifact repository to poll for changes. +3. Create or select the Connector to connect Harness to the repository, and then click **Continue**. For steps on Docker Registry Connectors, go to [Add Docker Registry Artifact Servers](https://docs.harness.io/article/tdj2ghkqb0-add-docker-registry-artifact-servers). +4. In **Artifact Details**, enter the artifact for this Trigger to listen for and click **Submit**. For example, in Docker Hub, you might enter `library/nginx`. The artifact is now listed in the Trigger. + + ![](./static/trigger-on-a-new-artifact-25.png) + +5. Click **Continue**. + +Jump to [Step 2: Set Conditions](#step-2-set-conditions).
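Before saving the Trigger, it can help to confirm that the image path you entered actually resolves in the registry. A minimal sketch, assuming Docker Hub's public `hub.docker.com/v2/repositories` API and the `library/nginx` path from step 4 (both are assumptions for illustration, not part of the Harness setup):

```shell
#!/bin/sh
# Compose the Docker Hub API endpoint that lists tags for an image path,
# using the "library/nginx" example from step 4 above.
image_path="library/nginx"
tags_url="https://hub.docker.com/v2/repositories/${image_path}/tags"
echo "$tags_url"

# Listing the newest tags (requires network access):
#   curl -s "${tags_url}/?page_size=5"
```

If the call returns an error or an empty result, the path is likely wrong for the registry type you selected.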
+ +In your Docker Registry Connector, to connect to a public Docker registry like Docker Hub, use `https://registry.hub.docker.com/v2/`. To connect to a private Docker registry, use `https://index.docker.io/v2/`. + +### Option: GCR Artifacts + +1. In **Configuration**, in **Name**, enter a name for the Trigger. +2. In **Listen on New Artifact**, click **Define Artifact Source**. +3. Create or select the GCP Connector to connect Harness to GCR, and then click **Continue**. For steps on GCP Connectors, go to [Add a Google Cloud Platform (GCP) Connector](../7_Connectors/connect-to-google-cloud-platform-gcp.md). +4. In **Artifact Details**, in GCR Registry URL, select the location of the registry, listed as **Hostname** in GCR. + + ![](./static/trigger-on-a-new-artifact-26.png) + +5. In **Image Path**, enter the artifact for this Trigger to listen for. You can click the copy button in GCR and then paste the path into Harness. + + ![](./static/trigger-on-a-new-artifact-27.png) + +6. Click **Submit**. +7. Click **Continue**. + +Jump to [Step 2: Set Conditions](#step-2-set-conditions). + +### Option: ECR Artifacts + +1. In **Configuration**, in **Name**, enter a name for the Trigger. +2. In **Listen on New Artifact**, click **Define Artifact Source**. +3. Create or select the AWS Connector to connect Harness to ECR, and then click **Continue**. For steps on AWS Connectors, go to [AWS Connector Settings Reference](../7_Connectors/ref-cloud-providers/aws-connector-settings-reference.md). +4. In **Artifact Details**, in **Region**, select the region for the ECR service you are using. +5. In **Image Path**, enter the path to the repo and image. You can copy the URI value from the repo in ECR. For example, `public.ecr.aws/l7w9l6a8/todolist` (public repo) or `085111111113.dkr.ecr.us-west-2.amazonaws.com/todolist` (private repo). +6. Click **Continue**. + +Jump to [Step 2: Set Conditions](#step-2-set-conditions). + +### Option: AWS S3 + +1.
In **Configuration**, in **Name**, enter a name for the Trigger. +2. In **Listen on New Artifact**, click **Define Artifact Source**. +3. Create or select the AWS Connector to connect Harness to S3, and then click **Continue**. For steps on AWS Connectors, go to [AWS Connector Settings Reference](../7_Connectors/ref-cloud-providers/aws-connector-settings-reference.md). +4. In **Artifact Details**, in **Region**, select the region for the S3 service you are using. While S3 is regionless, Harness needs a region for the S3 API. +5. In **Bucket Name**, enter the S3 bucket name. +6. In **File Path Regex**, enter a regex like `todolist*.zip`. The expression must either contain a `*` or end with `/`. +7. Click **Continue**. + +Jump to [Step 2: Set Conditions](#step-2-set-conditions). + +### Option: Artifactory + +1. In **Configuration**, in **Name**, enter a name for the Trigger. +2. In **Listen on New Artifact**, click **Define Artifact Source**. +3. Create or select the Artifactory Connector to connect Harness to Artifactory, and then click **Continue**. For steps on Artifactory Connectors, go to [Artifactory Connector Settings Reference](../7_Connectors/ref-cloud-providers/artifactory-connector-settings-reference.md). +4. In **Artifact Details**, in **Repository Format**, select **Generic** or **Docker**. + 1. Generic: + 1. **Repository:** enter the **Name** of the repo. + 2. **Artifact Directory:** enter the **Repository Path**. + 2. Docker: + 1. **Repository:** enter the **Name** of the repo. + 2. **Artifact/Image Path:** enter the **Repository Path**. + 3. **Repository URL (optional):** enter the **URL to file**. +5. Click **Continue**. + +Jump to [Step 2: Set Conditions](#step-2-set-conditions). + +### Option: ACR + +1. In **Configuration**, in **Name**, enter a name for the Trigger. +2. In **Listen on New Artifact**, click **Define Artifact Source**. +3. Create or select the Azure Connector to connect Harness to ACR, and then click **Continue**.
For steps on Azure Connectors, go to [Add a Microsoft Azure Cloud Connector](../7_Connectors/add-a-microsoft-azure-connector.md). +4. In **Artifact Details**, in **Subscription Id**, select the Subscription Id from the ACR registry. +5. In **Registry**, select the registry you want to use. +6. In **Repository**, select the repository to use. +7. Click **Continue**. + +Jump to [Step 2: Set Conditions](#step-2-set-conditions). + +### Step 2: Set Conditions + +In **Conditions**, enter any conditions that must be matched in order for the Trigger to execute. + +#### Regex and Wildcards + +You can use wildcards in the condition's value and you can select **Regex**. + +For example, if the build is `todolist-v2.0`: + +* With Regex not selected, both `todolist*` or `*olist*` will match. +* With Regex selected, the regex `todolist-v\d.\d` will match. + +If the regex expression does not result in a match, Harness ignores the value. + +Harness supports standard Java regex. For example, if Regex is enabled and the intent is to match any value, the wildcard should be `.*` instead of simply a wildcard `*`. If you wanted to match all of the files that end in `-DEV.tar`, you would enter `.*-DEV\.tar`. + +### Step 3: Select Pipeline Inputs + +If your Pipeline uses [Input Sets](../8_Pipelines/input-sets.md), you can select the Input Set to use when the Trigger executes the Pipeline. + +### Option: Enable or Disable Trigger + +You can enable or disable Triggers using the Enabled toggle: + +![](./static/trigger-on-a-new-artifact-28.png) + +### Option: Reuse Trigger YAML to Create New Triggers + +You can reuse Triggers by copying and pasting Trigger YAML. This can be helpful when you have advanced Conditions you don't want to set up each time.
+ +![](./static/trigger-on-a-new-artifact-29.png) + +### See also + +* [Schedule Pipelines using Triggers](schedule-pipelines-using-cron-triggers.md) +* [Trigger Pipelines using Git Events](triggering-pipelines.md) + diff --git a/docs/platform/11_Triggers/trigger-pipelines-on-new-helm-chart.md b/docs/platform/11_Triggers/trigger-pipelines-on-new-helm-chart.md new file mode 100644 index 00000000000..0aadb74c148 --- /dev/null +++ b/docs/platform/11_Triggers/trigger-pipelines-on-new-helm-chart.md @@ -0,0 +1,198 @@ +--- +title: Trigger Pipelines on new Helm Chart +description: Trigger Harness Pipelines in response to a new Helm chart version being added to an HTTP Helm repo. +# sidebar_position: 2 +helpdocs_topic_id: 54eqk0d1bd +helpdocs_category_id: oya6qhmmaw +helpdocs_is_private: false +helpdocs_is_published: true +--- + + +:::note +Currently, this feature is behind the feature flags `NG_SVC_ENV_REDESIGN` and `CD_TRIGGERS_REFACTOR`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. + +::: + +You can trigger Harness Pipelines in response to a new Helm chart version being added to an HTTP Helm repo. + +For example, every time a new Helm chart is pushed to an HTTP Helm repo, it triggers a CD Pipeline that deploys it automatically. + +Helm Chart Triggers simply listen to the repo where one or more of the Helm charts in your Pipeline are hosted. + +You can set conditions on the Triggers, such as matching one or more chart versions. + +This Trigger is a simple way to automate deployments for new Helm charts. + +### Before you begin + +* You should be familiar with Harness CD Pipelines for Helm charts, such as the one you create in the [Helm Chart deployment tutorial](https://docs.harness.io/article/cifa2yb19a-helm-cd-quickstart). + +### Summary and important notes + +The following requirements and notes apply to Harness Helm Chart Triggers. + +#### What can I trigger with a Helm Chart change? 
+ +When you add a Helm Chart Trigger to a Pipeline, you tell Harness what Helm Chart to listen on for changes. When a new version of the Helm Chart is added in its repo, Harness initiates the Trigger and the Pipeline is executed. + +Typically, you add a Helm Chart Trigger to a Pipeline that deploys the same Helm Chart. The Helm Chart is added to the CD stage in the Pipeline, as part of the Harness Service **Manifest**. And the same Helm Chart is added to the Trigger. + +However, the Helm Chart you specify in the Trigger does not have to be used in the Pipeline. + +You can have a change in a Helm Chart trigger any Pipeline, even one that isn't deploying a Helm Chart. + +You can have a change in a Helm Chart trigger a Pipeline that deploys a different Helm Chart. + +#### Chart polling + +Once you have created a Trigger to listen for new Helm chart versions, Harness will poll for new charts continuously. + +Polling is immediate because Harness uses a perpetual task framework that constantly monitors for new versions. + +Harness looks to see what has changed in the repo to determine if a new chart version has been added. If Harness detects a change, it will initiate the Trigger. + +#### Chart versions in artifacts + +When you add the Helm Chart to Harness as a Manifest, you have different options for the Chart Version. + +![](./static/trigger-pipelines-on-new-helm-chart-04.png) +* **Fixed Value:** if you use [Fixed Value](../20_References/runtime-inputs.md) for **Chart Version** (for example, `0.1.4`), Helm Chart Triggers will work, but Harness will not select the latest chart version. Instead, Harness will select the hardcoded chart version in **Chart Version** (`0.1.4`). +* **Runtime Input:** if you use [Runtime Input](../20_References/runtime-inputs.md) for **Chart Version**, you can enter the version to use in your Trigger as part of the Trigger Pipeline Inputs. 
See [Select Pipeline Inputs](trigger-pipelines-on-new-helm-chart.md#step-4-select-pipeline-inputs) below. +* **Expression:** if you use [Expression](../20_References/runtime-inputs.md) for **Chart Version**, you can: + + Use a [Harness variable expression](../12_Variables-and-Expressions/harness-variables.md), like a Service variable. + + Use the expression `<+trigger.manifest.version>` to have the new chart version that initiated the Trigger passed in as the version to deploy. + +![](./static/trigger-pipelines-on-new-helm-chart-05.png) +#### OCI Helm registries are not supported with Harness Triggers + +You cannot use [OCI Helm Registries](../7_Connectors/connect-to-an-artifact-repo.md) with Helm Chart Triggers. + +### Create a Helm Chart Trigger + +Typically, you add a Helm Chart Trigger to a Pipeline that deploys a Helm Chart. The Helm Chart is added to the CD stage in the Pipeline, as part of the Harness Service **Manifest**. + +1. Select a Harness Pipeline that includes a Helm Chart in the Stage's **Service Definition**. + + ![](./static/trigger-pipelines-on-new-helm-chart-06.png) + + See [Helm Chart deployment tutorial](https://docs.harness.io/article/cifa2yb19a-helm-cd-quickstart) for details on adding Helm Charts to a Stage's **Service Definition**. + +Next, let's add the Trigger. + +2. Click **Triggers**. +3. Click **New Trigger**. +4. Click the **Helm Chart** Trigger listed under **Manifest**. The **On New Manifest** Trigger settings appear. +5. In **Configuration**, in **Name**, enter a name for the Trigger. + +### Select the Helm Chart for the Trigger to listen on + +Define what Helm Chart you want Harness to listen on for the Trigger. + +1. In **Listen on New Artifact**, click **Define Manifest Source**. +2. In **Specify Helm Chart Store**, select the repo type. + 1. HTTP Helm: go to [HTTP Helm Repo Connector Settings Reference](../7_Connectors/ref-source-repo-provider/http-helm-repo-connector-settings-reference.md). + 2. 
Google Cloud Storage: go to [Google Cloud Platform (GCP) Connector Settings Reference](../7_Connectors/ref-cloud-providers/gcs-connector-settings-reference.md). + 3. AWS S3: go to [AWS Connector Settings Reference](../7_Connectors/ref-cloud-providers/aws-connector-settings-reference.md). +3. Once you have selected a Connector, click **Continue**. +4. In **Manifest Details**, enter the name of the Helm Chart to listen on in **Chart Name**. For example, `nginx` or `etcd`. +5. In **Helm Version**, select the version of Helm your repo uses. + +![](./static/trigger-pipelines-on-new-helm-chart-07.png) + + +:::note +The required settings are determined by the Helm Chart Store you selected. + +::: + +6. Click **Submit**. + +The Helm Chart is added to the Trigger. Now Harness will poll that Helm Chart for any changes. + +![](./static/trigger-pipelines-on-new-helm-chart-08.png) + +### Set Conditions + +In **Conditions**, enter any conditions that must be matched in order for the Trigger to execute. For example, the Helm chart version number. + +#### Regex and Wildcards + +You can use wildcards in the condition's value and you can select **Regex**. + +For example, if the build is `todolist-v2.0`: + +* With Regex not selected, both `todolist*` or `*olist*` will match. +* With Regex selected, the regex `todolist-v\d.\d` will match. + +If the regex expression does not result in a match, Harness ignores the value. + +Harness supports standard Java regex. For example, if Regex is enabled and the intent is to match any filename, the wildcard should be `.*` instead of simply a wildcard `*`. If you wanted to match all of the files that end in `-DEV.tgz`, you would enter `.*-DEV\.tgz`. + +### Select Pipeline Inputs + +If your Pipeline uses [Runtime Inputs](../20_References/runtime-inputs.md) or [Input Sets](../8_Pipelines/input-sets.md), you can select the inputs to use when the Trigger executes the Pipeline.
+ +For example, here's a Trigger where Runtime Inputs are selected: + +![](./static/trigger-pipelines-on-new-helm-chart-09.png) +### Test Trigger + +Once your Trigger is set up, click **Create Trigger**. The new Trigger is listed. + +Once the Pipeline is executed using the Trigger, in **Deployments**, you can see the Trigger and the user who initiated the deployment. + +![](./static/trigger-pipelines-on-new-helm-chart-10.png) +If you look at the Trigger in your Pipeline again, you can see its activation records: + +![](./static/trigger-pipelines-on-new-helm-chart-11.png) +And these records are also in the Trigger details: + +![](./static/trigger-pipelines-on-new-helm-chart-12.png) +You can test the Trigger by pushing a new chart version to your Helm Chart registry. + +You can build and push to your registry using Harness CIE. See [CI Pipeline Quickstart](../../continuous-integration/ci-quickstarts/ci-pipeline-quickstart.md). + +Here's a simple cURL example using a Nexus repo that works as a Helm chart HTTP server. In the commands below, the angle-bracket values (`<repo-name>`, `<chart-name>`, and so on) are placeholders for your own repo, chart, and credentials. + +Add the repo: + + +``` +helm repo add nexus_http https://nexus3.dev.example.io/repository/<repo-name>/ --username '<username>' --password '<password>' +``` +Fetch the chart: + + +``` +helm fetch nexus_http/<chart-name> +``` +Next, update the version in your chart. + +Package the chart: + + +``` +helm package <chart-directory> +``` +Push the new version to the Helm HTTP server: + + +``` +curl -u <username>:<password> https://nexus3.dev.example.io/repository/<repo-name>/ --upload-file <chart-name>-<version>.tgz -v +``` +Now your Helm chart HTTP server should have the new version of the Helm chart. + +### Option: enable or disable Trigger + +You can enable or disable Triggers using the Enabled toggle: + +![](./static/trigger-pipelines-on-new-helm-chart-13.png) +### Option: reuse Trigger YAML to create new Triggers + +You can reuse Triggers by copying and pasting Trigger YAML. This can be helpful when you have advanced Conditions you don't want to set up each time.
+ +![](./static/trigger-pipelines-on-new-helm-chart-14.png) +### See also + +* [Schedule Pipelines using Triggers](schedule-pipelines-using-cron-triggers.md) +* [Trigger Pipelines using Git Events](triggering-pipelines.md) + diff --git a/docs/platform/11_Triggers/trigger-pipelines-using-custom-payload-conditions.md b/docs/platform/11_Triggers/trigger-pipelines-using-custom-payload-conditions.md new file mode 100644 index 00000000000..7d3d1612ff6 --- /dev/null +++ b/docs/platform/11_Triggers/trigger-pipelines-using-custom-payload-conditions.md @@ -0,0 +1,179 @@ +--- +title: Trigger Pipelines using Git Event Payload Conditions +description: You can trigger Pipelines in response to Git events that match specific payload conditions you set up in the Harness Trigger. For example, when a pull request or push event occurs on a Git repo and y… +# sidebar_position: 2 +helpdocs_topic_id: 10y3mvkdvk +helpdocs_category_id: oya6qhmmaw +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can trigger Pipelines in response to Git events that match specific payload conditions you set up in the Harness Trigger. + +For example, when a pull request or push event occurs on a Git repo and your Trigger settings match the payload conditions, a CI or CD Pipeline can execute. + +In this example, we create a Custom Trigger for GitHub payload conditions. + +This topic covers payload conditions in detail. For a general overview of creating Triggers using Git Events, see [Trigger Pipelines using Git Events](triggering-pipelines.md). For general Triggers reference, see [Triggers Reference](../8_Pipelines/w_pipeline-steps-reference/triggers-reference.md). 
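To make the condition examples below concrete, here is a trimmed, hypothetical GitHub pull-request payload showing the kinds of keys (such as `repository.full_name` and `pull_request.diff_url`) that Trigger conditions match against. Check your repo's webhook delivery log for the exact payload your provider sends:

```
{
  "action": "opened",
  "pull_request": {
    "diff_url": "https://github.com/wings-software/triggerNgDemo/pull/7.diff"
  },
  "repository": {
    "full_name": "wings-software/triggerNgDemo",
    "owner": {
      "name": "wings-software"
    }
  }
}
```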
+ +### Before you begin + +* [Learn Harness' Key Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Kubernetes CD Quickstart](https://ngdocs.harness.io/article/knunou9j30-kubernetes-cd-quickstart) +* [CI Pipeline Quickstart](../../continuous-integration/ci-quickstarts/ci-pipeline-quickstart.md) +* [Trigger Pipelines using Git Events](triggering-pipelines.md) + +### Limitations + +* Currently, Harness supports Git-based Triggers for the most common Git providers. Harness includes a Custom Trigger for other repo providers. +* The **IN** and **NOT IN** operators do not support Regex. + +#### Important Notes + +* All Triggers in a Harness account have the same URL. This means that you must set up your Trigger Conditions carefully to ensure that a Pipeline triggers Builds for relevant events only. +* If a Build does not start in response to an incoming event, do the following: + + Check the execution history (click Execution History in the top right of the Pipeline Studio). + + Verify that the runtime inputs are correct. + + Check the payloads sent from the Git provider and compare the relevant fields with the Conditions in your Triggers. For example, in GitHub you can view the full payload of each event sent from a specific Webhook. + +### Step 1: Add a Trigger to a Pipeline + +1. Open your Harness Pipeline in **Pipeline Studio**. +2. Click **Triggers**. +3. Click **New Trigger**. +4. Choose your Git SaaS provider, such as **GitHub** or **Bitbucket**, or **Custom** if you are using a different provider. + +### Step 2: Configure the Trigger + +In the Configuration tab of the new Trigger, specify the following: + +* **Name** and **Description** +* **Payload Type:** This should match your Git SaaS provider. +* **Connector:** A [Connector](https://docs.harness.io/category/code-repo-connectors) to your Git SaaS provider. (This is required for all Git trigger types except **Custom**.) 
In the Credentials page of the Connector setup wizard, make sure that API access is selected with the correct permissions. +For Custom Triggers, you set up the external tool to send payloads to the Trigger URL. The specific steps to do this vary depending on the external tool. +* **Event:** Select the Git event type for the Webhook. +If the event type you select results in the **Actions** settings appearing, select the actions for the Webhook or select **Any Actions**. +* **Auto-abort Previous Execution:** Use this option if you want to override active Pipeline executions. When the branch of interest is updated, the Pipeline aborts any active Builds on the branch before it launches the new Build. + +### Step 3: Set Trigger Conditions + +Conditions specify criteria in addition to events and actions. + +Conditions help to form the overall set of criteria to trigger a Pipeline based on changes in a given source. + +Conditions support Harness built-in expressions for accessing Trigger settings, Git payload data, and headers. + +#### Option 1: Branches and Changed Files + +You can configure Triggers based on the source branches, target branches, and changed files in a Git merge. + +If you want to specify multiple paths, use the Regex operator. + +![](./static/trigger-pipelines-using-custom-payload-conditions-30.png) + +#### Option 2: Header Condition + +In the Header condition, enter the Git Webhook header key and the value to match. + +The header expression format is `<+trigger.header['key-name']>`. + +For example: `<+trigger.header['X-GitHub-Event']>`. + +Refer to [Built-in Git Trigger and Payload Expressions](../8_Pipelines/w_pipeline-steps-reference/triggers-reference.md#built-in-git-trigger-and-payload-expressions) for more trigger expressions in Harness. + +#### Option 3: Payload Condition + +You can set Conditions based on the values of the JSON payload. 
Harness treats the JSON payload as a data model: it parses the payload and listens for events on a JSON payload key. + +To reference payload values, you use `<+eventPayload.` followed by the path to the key name. + +For example: `<+eventPayload.repository.full_name>` + +For details on Payload Condition, see [Payload Condition](../8_Pipelines/w_pipeline-steps-reference/triggers-reference.md#payload-conditions). + +#### Option 4: JEXL Condition + +JEXL expressions are also supported. For example: `<+eventPayload.repository.owner.name> == "repositoryOwnerName"` + +Here are some more examples: + +* `<+trigger.payload.pull_request.diff_url>.contains("triggerNgDemo")` +* `<+trigger.payload.pull_request.diff_url>.contains("triggerNgDemo") || <+trigger.payload.repository.owner.name> == "wings-software"` +* `<+trigger.payload.pull_request.diff_url>.contains("triggerNgDemo") && (<+trigger.payload.repository.owner.name> == "wings-software" || <+trigger.payload.repository.owner.name> == "harness")` + +For details on Trigger settings, see [Triggers Reference](../8_Pipelines/w_pipeline-steps-reference/triggers-reference.md). + +If you set multiple conditions, the conditions are ANDed together (boolean AND operation): all Conditions must match an event payload for it to execute the Trigger. If you set only one condition, the Trigger executes based on that condition alone. + +![](./static/trigger-pipelines-using-custom-payload-conditions-31.png) + +Click **Continue**. + +### Step 4: Set Pipeline Input + +Pipelines often have [Runtime Inputs](../20_References/runtime-inputs.md) like codebase branch names or artifact versions and tags. + +Provide values for the inputs. You can also use [Input Sets](../8_Pipelines/input-sets.md). + +Click **Create Trigger**. + +The Trigger is now added to the Triggers page. 
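Pulled together, the settings above map to fields in the Trigger's YAML. The sketch below is illustrative only — the identifiers are hypothetical, and the exact field names (for example, `payloadConditions` and `jexlCondition`) should be verified against the YAML view of a Trigger in your own account:

```
trigger:
  name: GitHub PR Trigger
  identifier: github_pr_trigger
  orgIdentifier: default
  projectIdentifier: my_project        # hypothetical Project
  pipelineIdentifier: my_pipeline      # hypothetical Pipeline
  source:
    type: Webhook
    spec:
      type: Github
      spec:
        type: PullRequest
        spec:
          connectorRef: my_github_connector   # hypothetical Connector
          autoAbortPreviousExecutions: false
          payloadConditions:
            - key: targetBranch
              operator: Equals
              value: main
          jexlCondition: <+trigger.payload.pull_request.diff_url>.contains("triggerNgDemo")
          actions:
            - Open
```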
+ +### Step 5: Register Webhook in the Git Provider + +When you create or edit the custom webhook trigger, you need to copy the webhook URL and add it to your repo webhooks. However, make sure you have the following permissions for your GitHub Personal Access Token for webhook registration to work: + +* Scopes: select all the repo, user, and admin:repo\_hook options + +![](./static/trigger-pipelines-using-custom-payload-conditions-32.png) + +You should also be a repo admin. + +1. In the **Pipeline Studio**, click **Triggers**. +2. Select your Custom Webhook. +3. Click the Webhook URL icon. +4. Click the link button to copy the webhook URL. + + ![](./static/trigger-pipelines-using-custom-payload-conditions-33.png) + +5. Log in to your repo in the Git provider and navigate to its Webhook settings. +All Webhook URLs in an account have the same format: `https://app.harness.io/gateway/ng/api/webhook?accountIdentifier=ACCOUNT_ID` +6. Create a new webhook and paste the URL you copied from Harness in Step 4. +7. Make sure that the content type of the outbound requests is **application/json**. +8. Make sure that **Enable verification** is enabled. +9. Select the events that you would like to trigger this webhook. +In this example, we select **Just the push event**. This means that the webhook is triggered only when there is a push event. +10. Click **Update webhook**. + +![](./static/trigger-pipelines-using-custom-payload-conditions-34.png) + +### Step 6: Test Trigger + +Make a change in the repo, push it to GitHub, and see if it executes the Trigger. For example, change a file, commit it on the main branch, and make a push event. + +In your Git provider repo, you can see that the request and response were successful. + +![](./static/trigger-pipelines-using-custom-payload-conditions-35.png) + +Note that the webhook conditions specified in [Step 3](trigger-pipelines-using-custom-payload-conditions.md#step-3-set-trigger-conditions) match the Payload data. 
As a result, the Pipeline was triggered. + +In Harness, view the **Pipeline execution**. + +In Harness CI, click **Builds** (1). You can see the source branch (2), target branch (3), and the push request comment and number (4). + +![](./static/trigger-pipelines-using-custom-payload-conditions-36.png) + +Click the push request number and it opens the Git provider repo at the push request. + +If you open the Trigger in the Pipeline, you will see a status in **Last Activation Details**. + +![](./static/trigger-pipelines-using-custom-payload-conditions-37.png) + +Activation indicates that the Trigger was successful in requesting Pipeline execution. + +### See also + +* [Triggers Reference](../8_Pipelines/w_pipeline-steps-reference/triggers-reference.md) +* [Harness Git Sync Overview](../10_Git-Experience/git-experience-overview.md) +* [Trigger Pipelines Using Git Events](triggering-pipelines.md) + diff --git a/docs/platform/11_Triggers/triggering-pipelines.md b/docs/platform/11_Triggers/triggering-pipelines.md new file mode 100644 index 00000000000..11c6d1fc839 --- /dev/null +++ b/docs/platform/11_Triggers/triggering-pipelines.md @@ -0,0 +1,179 @@ +--- +title: Trigger Pipelines using Git Events +description: You can trigger Pipelines in response to Git events automatically. For example, when a pull request or push event occurs on a Git repo, a CI or CD Pipeline can execute. Triggers enable event driven C… +# sidebar_position: 2 +helpdocs_topic_id: hndnde8usz +helpdocs_category_id: oya6qhmmaw +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can trigger Pipelines in response to Git events automatically. + +For example, when a pull request or push event occurs on a Git repo, a CI or CD Pipeline can execute. + +Triggers enable event driven CI/CD and support the practice of every commit building and/or deploying to a target environment. 
+ + :::note +For general Triggers reference, see [Triggers Reference](../8_Pipelines/w_pipeline-steps-reference/triggers-reference.md). + +::: + +### Before you begin + +* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Kubernetes CD Quickstart](https://docs.harness.io/article/knunou9j30-kubernetes-cd-quickstart) +* [CI Pipeline Quickstart](../../continuous-integration/ci-quickstarts/ci-pipeline-quickstart.md) + +### Limitations + +* Currently, Harness supports Git-based Triggers for the most common Git providers. Harness includes a Custom Trigger for other repo providers. +* The **IN** and **NOT IN** operators do not support Regex. +* In Harness, you can select who is able to create and use Triggers within Harness, but you must use your repos' RBAC to control who can initiate the Git events that start the Harness Trigger. + +### Visual Summary + +Here's a two-minute video showing you how to create and run a Trigger in response to Git events. + +### Step 1: Set Up Your Codebase + +For a Trigger to process Git events, the Pipeline must have a Codebase object that points to the Git repo that sends the events. + +The first Build Step in your Pipeline specifies the codebase to build. If your Pipeline doesn't include a Build Step, click **Add Stage**, select **Build**, and specify the Codebase. Select **Clone Codebase** and specify the Connector and source repo for the events. + +To edit an existing Codebase, click **Codebase** on the right side of the Pipeline Studio. + +See [Create and Configure a Codebase](../../continuous-integration/use-ci/codebase-configuration/create-and-configure-a-codebase.md). + +### Step 2: Add a Trigger to a Pipeline + +Open your Harness Pipeline in Pipeline Studio. + +Click **Triggers**. + +Click **New Trigger**. + +Click one of the Git-based Trigger types. In this example, we'll use GitHub. + +### Step 3: Set up Webhook Listener + +Enter a name for the Trigger. 
+ +In **Payload Type**, select your Git provider. This setting is automatically populated with the provider you selected. + +Select or create a Connector to the Git account for the Trigger repo. See [Code Repo Connectors](https://docs.harness.io/category/code-repo-connectors). + +* **If you set up an account-level Code Repo Connector:** in **Repository Name**, enter the name of the repo in the account in the Connector. +* **If you set up a repo-level Code Repo Connector:** the repo name cannot be edited. + +In **Event**, select the Git event for the Webhook. + +If the event you select results in the **Actions** settings appearing, select the actions for the Webhook or select **Any Actions**. + +For details on these settings, see [Triggers Reference](../8_Pipelines/w_pipeline-steps-reference/triggers-reference.md). + + +:::note +For details on the payloads of the different repo Webhooks, see GitHub [Event Types & Payloads](https://docs.github.com/en/developers/webhooks-and-events/webhooks/webhook-events-and-payloads), Bitbucket [Event Payloads](https://confluence.atlassian.com/bitbucket/event-payloads-740262817.html), and GitLab [Events](https://docs.gitlab.com/ee/user/project/integrations/webhooks.html#events). + +::: + +### Option: Auto-abort Previous Execution + +Use this option if you want to override active Pipeline executions whenever the branch is updated. + +If you select this option, when the branch you specified in the **Connector** is updated, then any active Pipeline executions using the branch and this Trigger are cancelled. + +The updated branch will initiate a new Trigger execution. + +### Option: Polling Frequency + + +:::note +Currently, this feature is behind the feature flag `GIT_WEBHOOK_POLLING`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. By default, Harness Git-based triggers listen to Git events using webhooks. 
+ +::: + +Sometimes webhook events are missed because a firewall or network issue prevents them from reaching Harness. + +To catch missed events, enter a polling interval in **Polling Frequency**. + +Permitted values: + +* Minimum value: `2m`. +* Maximum value: `1h`. + +### Option: Set Trigger Conditions + +Conditions specify criteria in addition to events and actions. + +Conditions help to form the overall set of criteria to trigger a Pipeline based on changes in a given source. + +For example: + +* Execute Pipeline if the source/target branch name matches a pattern. +* Execute Pipeline if the event is sent for file changes from specific directories in the Git repo. This is very useful when working with a monorepo (mono repository). It ensures that only specific Pipelines are triggered in response to a change. + +Conditions support Harness built-in expressions for accessing Trigger settings, Git payload data, and headers. + +![](./static/triggering-pipelines-15.png) + +JEXL expressions are also supported. + +For details on these settings, see [Triggers Reference](../8_Pipelines/w_pipeline-steps-reference/triggers-reference.md). + + +:::note +Conditions are ANDed together (boolean AND operation). All Conditions must match an event payload for it to execute the Trigger. + +::: + +### Step 4: Set Pipeline Input + +Pipelines often have [Runtime Inputs](../20_References/runtime-inputs.md) like codebase branch names or artifact versions and tags. + +Provide values for the inputs. You can also use [Input Sets](../8_Pipelines/input-sets.md). + +Click **Create Trigger**. + +The Trigger is now added to the Triggers page. + +### Review: Automatic Webhook Registration + +When you create or edit the Trigger, Harness registers the webhook in your Git provider automatically. You don't need to copy it and add it to your repo webhooks. 
However, make sure you have the following permissions for your GitHub Personal Access Token for automatic webhook registration to work: + +* **Scopes:** select all the **repo**, **user**, and **admin:repo\_hook** options + +![](./static/triggering-pipelines-16.png) + +You should also be a repo admin. + +### Step 5: Test Trigger + +Make a change on the repo and see if it executes the Trigger. For example, change a file, commit it on a branch, and make a pull request. + +In your Git provider repo, you can see that the request and response were successful. + +![](./static/triggering-pipelines-17.png) + +In Harness, view the Pipeline execution. + +In Harness CI, click **Builds**. + +You can see the source and target branches. You can also see the pull request comment and number. + +![](./static/triggering-pipelines-18.png) + +Click the pull request number to open the Git provider repo at the pull request. + +If you open the Trigger in the Pipeline, you will see a status in **Last Activation Details**. + +![](./static/triggering-pipelines-19.png) + +Activation means the Trigger was able to request Pipeline execution. It does not guarantee that the Pipeline execution itself succeeded. 
+ +### See also + +* [Triggers Reference](../8_Pipelines/w_pipeline-steps-reference/triggers-reference.md) + diff --git a/docs/platform/12_Variables-and-Expressions/_category_.json b/docs/platform/12_Variables-and-Expressions/_category_.json new file mode 100644 index 00000000000..08bb7d13cf3 --- /dev/null +++ b/docs/platform/12_Variables-and-Expressions/_category_.json @@ -0,0 +1 @@ +{"label": "Variables and Expressions", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Variables and Expressions"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "bp8t5hf922"}} \ No newline at end of file diff --git a/docs/platform/12_Variables-and-Expressions/add-a-variable.md b/docs/platform/12_Variables-and-Expressions/add-a-variable.md new file mode 100644 index 00000000000..a09d1035555 --- /dev/null +++ b/docs/platform/12_Variables-and-Expressions/add-a-variable.md @@ -0,0 +1,178 @@ +--- +title: Add Account, Org, and Project-level Variables +description: Describes steps to add Variables as Resources. +# sidebar_position: 2 +helpdocs_topic_id: f3450ye0ul +helpdocs_category_id: bp8t5hf922 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +In your Pipelines, Variables can be added at the Pipeline-level, which makes them available to all Stages in the Pipeline. Within a Stage, you can add Variables at the Stage and Service-level. Here's [a video](https://youtu.be/lqbmO6EVGuU) covering those Variable types. + +But what about when you need to use the same Variable across multiple Pipelines, or even Pipelines in multiple Projects? + +With Account-level, Org-level, and Project-level Variables, Harness lets you store values that you can share and use across multiple Pipelines in multiple Projects. + +This topic explains how to add Variables as an Account-level and Org-level Resource in Harness. 
+ + +:::note +For details on Harness built-in variables, see [Built-in Harness Variables Reference](harness-variables.md). + +::: + +### Before you begin + +* [Learn Harness' Key Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts). +* Make sure you have [all permissions](../4_Role-Based-Access-Control/9-add-manage-roles.md) on Variables to add and manage Variables. + + ![](./static/add-a-variable-00.png) + +### Limitations + +* Harness supports only String type Account-level, Org-level, and Project-level Variables. This is only a temporary limitation. You can use Secrets in Pipeline, Stage, and Service variables. +* If you delete a Variable that is referenced using [expressions](harness-variables.md) in entities like Pipelines, the reference expressions are not deleted. At runtime, when the expressions are resolved, the expression will resolve as null. + +### Visual Summary + +Here is a quick overview of how Variables can be shared across Pipelines. + +![](./static/add-a-variable-01.png) + +### Step 1: Add Account, Org, and Project Variables + +You can add a Variable to the Account, Organization, or Project [scope](../4_Role-Based-Access-Control/1-rbac-in-harness.md#rbac-scope). + +#### Account + +In Harness, click **Account Settings**. + +Click **Account Resources** and then click **Variables**. + +![](./static/add-a-variable-02.png) + +Click **New Variable**. The **Add Variable** settings appear. + +![](./static/add-a-variable-03.png) + +Enter a **Name** for your Variable. + +In **Fixed Value**, enter a value for your Variable. + +Click **Save**. + +![](./static/add-a-variable-04.png) + +#### Org + +Click **Account Settings**. + +Click **Organizations**. + +Select an Org. + +In **Organization Resources**, click **Variables**. + +![](./static/add-a-variable-05.png) + +Click **New Variable**. + +Enter a name, select the variable type (for example, **String**), and enter a value. 
+ +For example, here's a variable named **organiz\_var**. + +![](./static/add-a-variable-06.png) + +Note the Id. That Id is used to reference the variable. + +Click **Save**. + +#### Project + +In a Harness Project, click **Project Setup**, and then click **Variables**. + +Click **New Variable**. + +Enter a name, select the variable type (for example, **String**), and enter a value. + +For example, here's a variable named **proj\_var**. + +![](./static/add-a-variable-07.png) + +Note the Id. That Id is used to reference the variable. + +Click **Save**. + +### Step 2: Reference Variables in a Pipeline + +To reference an Account-level, Org-level, or Project-level Variable, you must use the following expression in your Pipeline: + +`<+variable.[scope].[variable_id]>` + +* Account-level reference: `<+variable.account.[var Id]>` +* Org-level reference: `<+variable.org.[var Id]>` +* Project-level reference: `<+variable.[var Id]>` + + +:::note +The expression to reference **Project** scope Variables is `<+variable.Example>`. You do not need to specify `scope` to reference Project Variables. + +::: + +For example, to reference the Variable you just created, the expression will be: + +`<+variable.account.Example>` + +Let's add the Variable to a Pipeline now. + +In Harness, go to a Pipeline in the same Org as the variable you created. + +In **Execution**, add a [Shell Script](https://docs.harness.io/article/k5lu0u6i1i-using-shell-scripts) step and reference the variables: + + +``` +echo "account var: "<+variable.account.acct_var> +echo "org var: "<+variable.org.organiz_var> +echo "project var: " <+variable.proj_var> +``` +When you run the Pipeline, the variable references are resolved and output: + +![](./static/add-a-variable-08.png) + +### Review: Using an Account, Org, or Project Variable in a Service Variable + +In **Service**, in **Advanced**, click **Add Variable**. + +![](./static/add-a-variable-09.png) + +The **Add Variable** settings appear. 
+ +In **Variable Name**, enter a name for your Variable. + +Select **String** as **Type** and click **Save**. + +Your Variable is now listed under **Variables**. + +In **VALUE**, select **Expression** and enter `<+variable.account.acct_var>`. + +![](./static/add-a-variable-10.png) + +Now, when you run your Pipeline, the referenced value is evaluated at runtime. + +Copy the Service variable from **Variables**: + +![](./static/add-a-variable-11.png) + +In your Shell Script step, reference the Service variable with `<+stage.spec.serviceConfig.serviceDefinition.spec.variables.serv_var>`. + +Run the Pipeline and see that the value for the Account Variable is passed into the Service Variable: + +![](./static/add-a-variable-12.png) + +You can refer to a Variable in most settings. For example, if you have an Account Variable storing a Service named **Example**, you can refer to it inline using the same expression. + +![](./static/add-a-variable-13.png) + +Now, when you run your Pipeline, the referenced value is evaluated at runtime. 
+ +This can be helpful with [built-in variable expressions](harness-variables.md) such as `<+artifact.tag>`, `<+artifact.image>`, or variables that contain version numbers or other important strings. + +To return characters, you can use the `charAt()` method with your variable expression. + +### Step 1: Use the charAt() Method + +Let's look at an example where you have a variable named `<+version>` that evaluates to `1234`. + +To get the first character of the string, you would use `<+<+version>.charAt(0)>`. This would return `1`. + +The `<+version>` references the variable, and the rest of the expression evaluates the string using the `charAt()` method. + +### See also + +* Java [String methods](https://docs.oracle.com/javase/8/docs/api/java/lang/String.html#method.summary) +* [JEXL reference](https://commons.apache.org/proper/commons-jexl/reference/syntax.html) from Apache. +* [Built-in Harness Variables Reference](harness-variables.md) + diff --git a/docs/platform/12_Variables-and-Expressions/harness-variables.md b/docs/platform/12_Variables-and-Expressions/harness-variables.md new file mode 100644 index 00000000000..184ec3b3d13 --- --- /dev/null +++ b/docs/platform/12_Variables-and-Expressions/harness-variables.md @@ -0,0 +1,1121 @@ +--- +title: Built-in and Custom Harness Variables Reference +description: List of default (built-in) Harness expressions. +# sidebar_position: 2 +helpdocs_topic_id: lml71vhsim +helpdocs_category_id: dr1dwvwa54 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes the default (built-in) and custom Harness expressions, as well as the prefixes used to identify user-created variables. This list will be periodically updated when new expressions are added to Harness. + +Looking for how-tos? See [Variable Expressions How-tos](https://docs.harness.io/category/variables-and-expressions). 
+ +### Variable Expression Basics + +Let's quickly review what Harness built-in and custom variable expressions are and how they work. + +#### What is a Harness Variable Expression? + +Harness variables are a way to refer to something in Harness, such as an entity name or a configuration setting. At Pipeline runtime, Harness evaluates all variables and replaces them with the resulting value. + +Harness variables are powerful because they let you template configuration information, Pipeline settings, and values in your scripts, and they enable your Pipelines to pass information between Stages and settings. + +When you use a variable, you add it as an expression. + +Harness expressions are identified using the `<+...>` syntax. For example, `<+pipeline.name>` references the name of the Pipeline where the expression is evaluated. + +The content between the `<+...>` delimiters is passed on to the [Java Expression Language (JEXL)](http://commons.apache.org/proper/commons-jexl/) where it is evaluated. Using JEXL, you can build complex variable expressions that use JEXL methods. For example, here's an expression that uses Webhook Trigger payload information: + + +``` +<+trigger.payload.pull_request.diff_url>.contains("triggerNgDemo") || <+trigger.payload.repository.owner.name> == "wings-software" +``` +Harness pre-populates many variables, as documented below, and you can set your own variables in the form of context output from [shell scripts](https://docs.harness.io/article/k5lu0u6i1i-using-shell-scripts) and other steps. + +#### You can use all Java String methods + +You can use all Java String methods on Harness variable expressions. + +The above example used `contains()`: + +`<+trigger.payload.pull_request.diff_url>.contains("triggerNgDemo")` + +Let's look at another example. Imagine you have a variable called `abc` with the value `def:ghi`. You can use `split()` like this: + + +``` +echo <+pipeline.variables.abc.split(':')[1]> +``` +The result would be `ghi`. 
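Because the content inside `<+...>` is handed to JEXL, which exposes Java's String API, you can sanity-check an expression's string logic in plain Java before putting it in a Pipeline. This is a local approximation only (the variable values here are hypothetical), not a Harness evaluation:

```java
public class ExpressionStringCheck {
    public static void main(String[] args) {
        // Mirrors <+pipeline.variables.abc.split(':')[1]> when abc is "def:ghi".
        String abc = "def:ghi";
        System.out.println(abc.split(":")[1]); // prints: ghi

        // Mirrors <+trigger.payload.pull_request.diff_url>.contains("triggerNgDemo").
        String diffUrl = "https://github.com/org/triggerNgDemo/pull/1.diff"; // hypothetical URL
        System.out.println(diffUrl.contains("triggerNgDemo")); // prints: true
    }
}
```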
+ +#### FQNs and Expressions + +Everything in Harness can be referenced by a Fully Qualified Name (FQN) expression. + +The FQN is the path to a setting in the YAML of your Pipeline: + +![](./static/harness-variables-14.png) + +You can select the expression for a setting or value in the Pipeline editor or execution. + +You don't need to build the expression yourself. Harness provides multiple places where you can copy the variable expression. + +For example, you can click the copy button in a Pipeline execution to get the expressions of settings and values. + +![](./static/harness-variables-15.png) + +When building a Pipeline in Pipeline Studio, you can copy the FQN of a setting using **Variables**. + +![](./static/harness-variables-16.png) + +#### Stage-level and Pipeline-level Expressions + +Every section and step in a stage contains input information you can reference as expressions. + +Click **Variables** in the Pipeline to view all the inputs and copy their expressions. + +![](./static/harness-variables-17.png) + +There are two expressions for each input: + +* **Stage-level:** use this option to reference the input anywhere in its Stage. +* **Pipeline-level:** begins with `pipeline.stages`. Use this option to reference the input anywhere in the Pipeline. 
+ +#### Expression Example + +Here is a simple example of a Shell Script step echoing some common variable expressions: + + +``` +echo "Harness account name: "<+account.name> + +echo "Harness comp name: "<+account.companyName> + +echo "pipeline executionId: "<+pipeline.executionId> + +echo "pipeline sequenceId: "<+pipeline.sequenceId> + +echo "stage name: "<+stage.name> + +echo "service name: "<+service.name> + +echo "service variables: "<+serviceVariables.example_var> + +echo "artifact image: "<+artifact.image> + +echo "artifact.imagePath: "<+artifact.imagePath> + +echo "environment name: "<+env.name> + +echo "infrastructure connectorRef: "<+infra.connectorRef> + +echo "infrastructure namespace: "<+infra.namespace> + +echo "infrastructure releaseName: "<+infra.releaseName> +``` +Here is an example of the output: + + +``` +Harness account name: Harness.io + +Harness comp name: Harness.io + +pipeline executionId: T4a7uBs7T-qWhNTr-LnFDw + +pipeline sequenceId: 16 + +stage name: dev + +service name: nginx + +service variables: foo + +artifact image: index.docker.io/library/nginx:stable + +artifact.imagePath: library/nginx + +environment name: quickstart + +infrastructure connectorRef: account.harnesstestpmdemocluster + +infrastructure namespace: default + +infrastructure releaseName: docs + +Command completed with ExitCode (0) +``` +#### Input and Output Variable Expressions + +You can reference the inputs and outputs of any part of your Pipeline. + +* **Input variable expressions** reference the values and setting selections you made in your Pipeline. +* **Output variable expressions** reference the results of a Pipeline execution. + +You can reference inputs in Pipeline **Variables**: + +![](./static/harness-variables-18.png) + +##### Input and Output Variable Expressions in Executions + +Inputs and outputs are displayed for every part of the Pipeline execution. 
+
+Here are the inputs and outputs for a Kubernetes Rollout Deployment step:
+
+| **Inputs** | **Outputs** |
+| --- | --- |
+| ![](./static/rolloutdeployment1.png) | ![](./static/rolloutdeployment2.png) |
+
+You can copy the expressions for the names or values of any input or output.
+
+| **Name** | **Value** |
+| --- | --- |
+| ![](./static/name.png) | ![](./static/value.png) |
+
+Here are the **Name** and **Value** expressions for the `podIP` setting:
+
+* Name:
+```
+<+pipeline.stages.Deploy_Service.spec.execution.steps.rolloutDeployment.deploymentInfoOutcome.serverInstanceInfoList[0].podIP>
+```
+* Value: `10.100.0.6`
+
+#### Using Expressions in Settings
+
+You can use Harness variable expressions in most settings.
+
+When you select **Expression** in a setting, type `<+` and the list of available variables appears:
+
+![](./static/harness-variables-19.png)
+
+Click a variable expression name to use it as the value for the setting.
+
+At runtime, Harness replaces the variable with the runtime value.
+
+You can also paste in expressions that don't appear in the list, such as expressions that reference settings in previous Stages.
+
+See [Fixed Values, Runtime Inputs, and Expressions](../20_References/runtime-inputs.md).
+
+#### Only Use Expressions After They'll Be Resolved
+
+When Harness encounters an expression during Pipeline execution, it tries to resolve it with the information it has at that point in the execution. Consequently, you can only use an expression after Harness has the required information. If you try to use an expression before Harness has that information, the expression will fail to resolve.
+
+In this illustration, you can see how the information in each section of the Stage is referenced:
+
+![](./static/harness-variables-20.png)
+
+Here's how you reference the information in each of these sections:
+
+* **Service expressions** can only be used after Harness has progressed through the **Service** section of the Pipeline.
+
+  **Service** expressions can then be used in **Infrastructure** and **Execution**.
+* **Infrastructure expressions** can only be used after Harness has progressed through the **Infrastructure** section of the Pipeline.
+
+  In **Infrastructure**, you can reference **Service** settings.
+
+  Since **Execution** follows **Infrastructure**, you can reference **Infrastructure** expressions in **Execution**.
+* **Execution expressions** apply to steps in **Execution**.
+
+  Each step's **Execution** expressions can only be used after Harness has progressed through that step in the **Execution** section: ![](./static/harness-variables-21.png)
+
+##### Variable Expressions in Conditional Execution Settings
+
+Stages and Steps support variable expressions in the JEXL conditions of their **Conditional Execution** settings.
+
+You can only use variable expressions in the JEXL conditions that can be resolved before the stage runs.
+
+Since **Conditional Execution** settings are used to determine whether the stage should be run, you cannot use variable expressions that can't be resolved until the stage is run.
+
+For more information on Conditional Execution, see [Stage and Step Conditional Execution Settings](../8_Pipelines/w_pipeline-steps-reference/step-skip-condition-settings.md).
+
+### Variable Expression Limitations and Restrictions
+
+Review the following variable expression limitations and restrictions to avoid errors when using variable expressions.
+
+#### Scope
+
+Harness permits variables only within their scope. You will not see a variable available in a field where it cannot be used.
+
+You cannot refer to a Pipeline step's expressions within the same step.
+
+For example, if you have an HTTP step with the Id `foo`, you cannot use the expression `<+execution.steps.foo.spec.url>` to reference the HTTP URL within that same step. Put another way, you can only reference a step's settings from a different step.
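+
+For example, a Shell Script step that runs after the HTTP step `foo` can reference that step's URL setting (a minimal sketch reusing the step Id above):
+
+```
+echo "URL used by the HTTP step: "<+execution.steps.foo.spec.url>
+```
+The same expression placed inside the `foo` step itself would fail to resolve.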
+
+#### Variable Value Size
+
+A variable value (the evaluated expression) is limited to 256 KB.
+
+#### Scripts Within Expressions
+
+You cannot write scripts within an expression `<+...>`. For example, the following script will not work:
+
+```
+if ((x * 2) == 5) { <+pipeline.name = abc>; } else { <+pipeline.name = def>; }
+```
+#### Variable Names Across the Pipeline
+
+Variable names must be unique within the same Stage. You can use the same variable names in different Stages in the same Pipeline or other Pipelines.
+
+#### Hyphens in Variable Names
+
+Do not use hyphens (dashes) in variable names, as some Linux distributions and deployment-related software do not allow them. Hyphens can also cause issues with HTTP headers.
+
+For example, `<+execution.steps.httpstep.spec.headers.x-auth>` will not work.
+
+As a workaround, you can put the variable name in `["..."]`, like this:
+
+`<+execution.steps.httpstep.spec.headers["x-auth"]>`
+
+This also works for nested expressions:
+
+`<+execution.steps.httpstep.spec.newHeaders["x-auth"]["nested-hyphen-key"]>`
+
+`<+execution.steps.httpstep.spec.newHeaders["x-auth"].nonhyphenkey>`
+
+#### Variable Expression Name Restrictions
+
+A variable name is the name in the variable expression, such as `foo` in `<+stage.variables.foo>`.
+
+Variable names may only contain `a-z, A-Z, 0-9, _`. They cannot contain hyphens or dots.
+
+Certain platforms and orchestration tools, like Kubernetes, have their own naming restrictions. For example, Kubernetes doesn't allow underscores. Make sure that whatever expressions you use resolve to the allowed values of your target platforms.
+
+#### Reserved Words
+
+The following keywords are reserved and cannot be used as a variable name or property:
+
+`or and eq ne lt gt le ge div mod not null true false new var return shellScriptProvisioner`
+
+See [JEXL grammar details](https://people.apache.org/~henrib/jexl-3.0/reference/syntax.html).
+
+#### Number variables
+
+Number type variables are always treated as a Double (double-precision floating-point):
+
+* -1.79769313486231E308 to -4.94065645841247E-324 for negative values
+* 4.94065645841247E-324 to 1.79769313486232E308 for positive values
+
+For example, here's a pipeline variable of Number type:
+
+```
+  variables:
+    - name: double_example
+      type: Number
+      description: ""
+      value: 10.1
+```
+The expression to reference that pipeline variable, `<+pipeline.variables.double_example>`, is treated as a Double when it is resolved to `10.1`.
+
+##### Numbers as doubles and strings
+
+Whether the number in a variable is treated as a double or a string depends on the field that you use it in.
+
+If you enter 123 in a string field, such as a name, it is treated as a string. If you enter 123 in a count field, such as instance count, it is treated as a double.
+
+### Built-in CIE Codebase Variables Reference
+
+In Harness, you set up your [Codebase](https://docs.harness.io/article/6vks5ym7sq-edit-a-ci-pipeline-codebase-configuration) by connecting to a Git repo using a Harness [Connector](../7_Connectors/ref-source-repo-provider/git-connector-settings-reference.md) and cloning the code you wish to build and test in your Pipeline.
+
+Harness also retrieves your Git details and presents them in your Build stage once a Pipeline is run.
+
+Using Harness built-in expressions, you can refer to the various attributes of your Codebase in Harness stages.
+ +Here is a simple example of a Shell Script step echoing some common Codebase variable expressions: + + +``` +echo <+codebase.commitSha> +echo <+codebase.targetBranch> +echo <+codebase.sourceBranch> +echo <+codebase.prNumber> +echo <+codebase.prTitle> +echo <+codebase.commitRef> +echo <+codebase.repoUrl> +echo <+codebase.gitUserId> +echo <+codebase.gitUserEmail> +echo <+codebase.gitUser> +echo <+codebase.gitUserAvatar> +echo <+codebase.pullRequestLink> +echo <+codebase.pullRequestBody> +echo <+codebase.state> +``` +See [Built-in CIE Codebase Variables Reference](../../continuous-integration/ci-technical-reference/built-in-cie-codebase-variables-reference.md). + +### Account + +#### <+account.identifier> + +The entity [identifier](../20_References/entity-identifier-reference.md) of the Harness account. + +![](./static/harness-variables-22.png) + +#### <+account.name> + +Harness account name. + +#### <+account.companyName> + +The name of the company for the account. + +#### Custom Account Variables + +See [Add Account, Org, and Project-level Variables](add-a-variable.md). + +### Org + +#### <+org.identifier> + +The entity [identifier](../20_References/entity-identifier-reference.md) of an organization. + +![](./static/harness-variables-23.png) + +#### <+org.name> + +The name of the Org. + +#### <+org.description> + +The description of the Org. + +#### Custom Org Variables + +See [Add Account, Org, and Project-level Variables](add-a-variable.md). + +### Project + +#### <+project.name> + +The name of the Harness Project. + +#### <+project.description> + +The description of the Harness Project. + +#### <+project.tags> + +All Harness Tags attached to the Project. + +#### <+project.identifier> + +The entity [identifier](../20_References/entity-identifier-reference.md) of the Harness Project. + +#### Custom Project Variables + +See [Add Account, Org, and Project-level Variables](add-a-variable.md). 
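+
+As with the Account expressions above, you can echo the Org and Project expressions in a Shell Script step (a minimal sketch):
+
+```
+echo "account identifier: "<+account.identifier>
+echo "org identifier: "<+org.identifier>
+echo "project identifier: "<+project.identifier>
+```
+Each expression resolves to the setting of the Account, Org, or Project in which the Pipeline runs.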
+ +### Pipeline + +#### Pipeline-level Variables + +Here's a quick video that explains how to create and reference Pipeline, Stage, and Service variables: + +#### <+pipeline.identifier> + +The [Entity Identifier](../20_References/entity-identifier-reference.md) (Id) for the Pipeline. + +![](./static/harness-variables-24.png) + +#### <+pipeline.executionId> + +Every execution of a Pipeline is given a universally unique identifier (UUID). The UUID can be referenced anywhere. + +For example, in the following execution URL the UUID follows `executions` and is `kNHtmOaLTu66f_QNU-wdDw`: + + +``` +https://app.harness.io/ng/#/account/12345678910/cd/orgs/default/projects/CD_Quickstart/pipelines/Helm_Quickstart/executions/kNHtmOaLTu66f_QNU-wdDw/pipeline +``` +#### <+pipeline.execution.url> + +The execution URL of the Pipeline. This is the same URL you see in your browser when you are viewing the Pipeline execution. + +For example: + + +``` +https://app.harness.io/ng/#/account/H5W8iol5TNWc4G9h5A2MXg/cd/orgs/default/projects/CD_Docs/pipelines/Triggers/executions/EpE_zuNVQn2FXjhIkyFQ_w/pipeline +``` +#### <+pipeline.name> + +The name of the current Pipeline. + +![](./static/harness-variables-25.png) + +#### <+pipeline.sequenceId> + +The incremental sequential ID for the execution of a Pipeline. A `<+pipeline.executionId>` does not change, but a `<+pipeline.sequenceId>` is incremented with each run of the Pipeline. + +The first run of a Pipeline receives a sequence ID of 1 and each subsequent execution is incremented by 1. + +For CD Pipelines the ID is named Execution. For CI Pipelines the ID is named Builds. + +![](./static/harness-variables-26.png) + +You can use `<+pipeline.sequenceId>` to tag a CI build when you push it to a repo, and then use `<+pipeline.sequenceId>` to pull the same build and tag in a subsequent stage. See [CI Pipeline Quickstart](../../continuous-integration/ci-quickstarts/ci-pipeline-quickstart.md). 
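+
+For example, here is a sketch of tagging an image with the sequence ID and pulling the same tag later (the repo name `myrepo/myapp` is hypothetical):
+
+```
+# In a CI stage: push the image tagged with this run's sequence ID
+docker push myrepo/myapp:<+pipeline.sequenceId>
+
+# In a later stage: the expression resolves to the same number, so the same tag is pulled
+docker pull myrepo/myapp:<+pipeline.sequenceId>
+```
+Because `<+pipeline.sequenceId>` is constant for the entire execution, both stages refer to the identical build.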
+ +#### <+pipeline.startTs> + +The start time of a Pipeline execution in [Unix Epoch format](https://www.epoch101.com/). See [Trigger How-tos](https://docs.harness.io/category/triggers). + +#### <+pipeline.triggerType> + +The type of Trigger. See [Trigger How-tos](https://docs.harness.io/category/triggers). + +#### <+pipeline.triggeredBy.name> + +The name of the user or the Trigger name if the Pipeline is triggered using a Webhook. See [Trigger Pipelines using Git Events](../11_Triggers/triggering-pipelines.md). + +If a user name is not present in the event payload, the `<+pipeline.triggeredBy.name>` expression will resolve as empty. For example, in the SaaS edition of Bitbucket, a user name is not present. + +#### <+pipeline.triggeredBy.email> + +The email of the user who triggered the Pipeline. This returns NULL if the Pipeline is triggered using a webhook. See [Trigger How-tos](https://docs.harness.io/category/triggers). + +### Deployment and Step Status + +Deployment status values are a Java enum. The list of values can be seen in the Deployments **Status** filter: + +![](./static/harness-variables-27.png) + +You can use any status value in a JEXL condition. For example, `<+pipeline.stages.stage1.status> == "FAILED"`. + +#### Step status + +The expression `<+execution.steps.[step Id].status>` resolves to the status of a step. For example, `<+execution.steps.mystep.status>`. + +You must use the expression after the step in Execution. + +### Stage + +#### Stage-level Variables + +Here's a quick video that explains how to create and reference Pipeline, Stage, and Service variables: + +Once you've created a stage, its settings are in the **Overview** tab. For example, here's the **Overview** tab for a Deploy stage: + +![](./static/harness-variables-28.png) + +In **Advanced**, you can add **Stage Variables**. + +Stage variables are custom variables you can add and reference in your stage and Pipeline. They're available across the Pipeline. 
You can override their values in later stages. + +You can even reference stage variables in the files fetched at runtime. + +For example, you could create a stage variable `name` and then reference its identifier in the Kubernetes values.yaml file used by this stage: `name: <+stage.variables.name>`: + + +``` +name: <+stage.variables.name> +replicas: 2 + +image: <+artifact.image> +... +``` +When you run this Pipeline, the value for `name` is used for the values.yaml file. The value can be a Fixed Value, Expression, or Runtime Input. + +You reference stage variables **within their stage** using the expression `<+stage.variables.[variable name]>`. + +You reference stage variables **outside their stage** using the expression `<+pipeline.stages.[stage name].variables.[variable name]>`. + +#### <+stage.name> + +The name of the stage where the expression is evaluated. + +![](./static/harness-variables-30.png) + +#### <+stage.description> + +The description of the stage where the expression is evaluated. + +#### <+stage.tags> + +The tags on the stage where the expression is evaluated. See [Tags Reference](../20_References/tags-reference.md). + +These tags are different from Docker image tags. + +#### <+stage.identifier> + +The [entity identifier](../20_References/entity-identifier-reference.md) of the stage where the expression is evaluated. + +#### <+stage.output.hosts> + +Lists all of the target hosts when deploying to multiple hosts. + +When you are deploying to multiple hosts, such as with an SSH, WinRM, or Deployment Template stage, you can run the same step on all of the target hosts. 
+
+To run the step on all hosts, you use the Repeat [Looping Strategy](../8_Pipelines/looping-strategies-matrix-repeat-and-parallelism.md) and identify all the hosts for the stage as the target:
+
+```
+repeat:
+  items: <+stage.output.hosts>
+```
+Here's an example with a Shell Script step:
+
+![](./static/harness-variables-31.png)
+
+For examples, see the looping strategies used in the [Secure Shell (SSH) deployment tutorial](https://docs.harness.io/article/mpx2y48ovx-ssh-ng).
+
+### Service
+
+Currently, there are two versions of Services and Environments, v1 and v2. Services and Environments v1 is being replaced by Services and Environments v2.
+
+The use of variable expressions is different between v1 and v2.
+
+For more information, go to [Services and Environments Overview](https://docs.harness.io/article/9ryi1ay01f-services-and-environments-overview).
+
+#### Service-level Variables for Service v2
+
+To reference a Service variable, use the expression `<+serviceVariables.[variable name]>`.
+
+For example, `<+serviceVariables.myvar>`.
+
+#### Service-level Variables for Service v1
+
+Here's a quick video that explains how to create and reference Pipeline, Stage, and Service variables:
+
+#### <+serviceConfig.serviceDefinition.spec.variables.[var\_name]>
+
+The value of the Service-level variable in `[var_name]`.
+
+![](./static/harness-variables-32.png)
+
+Use this expression anywhere after the Service step in the Pipeline.
+
+To reference the variables, click the copy button:
+
+![](./static/harness-variables-33.png)
+
+There are two options:
+
+* **Copy variable name:** use this option if you will only be referencing this variable in the current Stage. Expression:
+
+  `<+serviceConfig.serviceDefinition.spec.variables.[name]>`
+* **Copy fully qualified name:** use this option if you will be referencing this variable in another Stage. 
Example: + + `<+pipeline.stages.[stage_name].spec.serviceConfig.serviceDefinition.spec.variables.[name]>` + +You can use these expressions in any setting in your Pipeline. You simply select the Expression option and enter the expression: + +![](./static/harness-variables-34.png) + +To override the Service variable in a script, you simply reference its name and use a new value. + +#### <+service.name> + +The name of the Service where the expression is evaluated. + +![](./static/harness-variables-35.png) + +#### <+service.description> + +The description of the Service where the expression is evaluated. + +#### <+service.tags> + +The tags on the Service where the expression is evaluated. + +To reference a specific tag use `<+service.tags.[tag_key]>`. + +#### <+service.identifier> + +The [entity identifier](../20_References/entity-identifier-reference.md) of the Service where the expression is evaluated. + +#### <+service.type> + +Resolves to stage Service type, such as Kubernetes. + +![](./static/harness-variables-36.png) + +#### <+service.gitOpsEnabled> + +Resolves to a boolean value to indicate whether the GitOps option is enabled (true) or not (false). + +![](./static/harness-variables-37.png) + +For details on using the GitOps option, go to [Harness GitOps ApplicationSet and PR Pipeline Tutorial](https://docs.harness.io/article/lf6a27usso-harness-git-ops-application-set-tutorial). + +### Manifest + +There are generic and deployment type-specific expressions for manifests. + +Manifest settings are referenced by **name**. + +You can always determine the expressions you can use by looking at the Service YAML. + +For example, the expression `<+manifests.mymanifest.valuesPaths>` can be created by using the manifest name and the valuesPaths key in the YAML: + + +``` +... 
+ manifests: + - manifest: + identifier: mymanifest + type: K8sManifest + spec: + store: + type: Harness + spec: + files: + - account:/Templates + valuesPaths: + - account:/values.yaml + skipResourceVersioning: false +... +``` +Let's look at a few generic manifest expressions. + +#### <+manifests.[manifest name].identifier> + +Resolves to the manifest Id in Harness. + + +``` +... + manifests: + - manifest: + identifier: mymanifest +... +``` +#### <+manifests.[manifest name].type> + +Resolves to the manifest type. For example, `K8sManifest`: + + +``` +... + manifests: + - manifest: + identifier: mymanifest + type: K8sManifest +... +``` +#### <+manifests.[manifest name].store> + +Resolves to where the manifest is stored. For example, this manifest is stored in the [Harness File Store](https://docs.harness.io/article/oaihv6nry9-add-inline-manifests-using-file-store): + + +``` +... + manifests: + - manifest: + identifier: mymanifest + type: K8sManifest + spec: + store: + type: Harness + spec: + files: + - account:/Templates +... +``` +### Artifact + +If an artifact expression is in a manifest or step and you have not selected an artifact in a Service Definition, or set the artifact is set as a Runtime Input, you will be prompted to select an artifact at runtime. This is true even if the Stage does not deploy an artifact (such as a Custom Stage or a Stage performing a [Kustomize](https://docs.harness.io/article/uiqe6jz9o1-kustomize-quickstart) deployment). If you want to reference an artifact that isn't the primary deployment artifact without being prompted, you can use an expression with quotes, like `docker pull <+artifact<+".metadata.image">>`.The artifact expressions will resolve to settings and values specified in a Service's **Artifacts** section. 
+
+For example, here's how the common artifact expressions resolve for a Kubernetes deployment with a Docker image on Docker Hub:
+
+* **<+artifact.tag>:** `stable`
+* **<+artifact.image>:** `index.docker.io/library/nginx:stable`
+* **<+artifact.imagePath>:** `library/nginx`
+* **<+artifact.imagePullSecret>:** `eJjcmV0em1hbiIsInBhc3N3b3JkIjoiIzhDNjk3QVhUdSJ9fQ==:`
+* **<+artifact.type>:** `DockerRegistry`
+* **<+artifact.connectorRef>:** `DockerHub`
+
+Here's a script you can add to a [Shell Script](https://docs.harness.io/article/k5lu0u6i1i-using-shell-scripts) step to view the artifact info:
+
+```
+echo "artifact.tag: "<+artifact.tag>
+echo "artifact.image: "<+artifact.image>
+echo "artifact.imagePath: "<+artifact.imagePath>
+echo "artifact.imagePullSecret: "<+artifact.imagePullSecret>
+echo "artifact.type: "<+artifact.type>
+echo "artifact.connectorRef: "<+artifact.connectorRef>
+```
+Here's the example log from the deployment:
+
+```
+Executing command ...
+artifact.tag: stable
+artifact.image: index.docker.io/library/nginx:stable
+artifact.imagePath: library/nginx
+artifact.imagePullSecret: eyJodHRwczovL2luZGV4LmRvY2tlci5pby92MS8iOnsidXNlcm5hbWUiOiJjcmV0em1hbiIsInBhc3N3b3JkIjoiIzhDNjk3QVhUdSJ9fQ==
+artifact.type: DockerRegistry
+artifact.connectorRef: DockerHub
+Command completed with ExitCode (0)
+```
+#### <+artifact.tag>
+
+Not Harness Tags. This expression evaluates to the tag of the artifact pushed, pulled, or deployed. For example, AMI tags, or, if you are deploying the Docker image `nginx:stable-perl`, then `stable-perl` is the tag.
+
+#### <+artifact.image>
+
+The full location of the Docker image. For example, `docker.io/bitnami/nginx:1.22.0-debian-11-r0`.
+
+For non-containerized artifacts, use `<+artifact.path>`, described [below](#artifact_path).
+
+To see just the image name, use `<+artifact.imagePath>`.
+
+You use `<+artifact.image>` or `<+artifact.imagePath>` in your Values YAML file when you want to deploy an artifact you have added to the **Artifacts** section of a CD stage Service Definition.
+
+For example, here's the **Artifacts** section with an artifact:
+
+![](./static/harness-variables-38.png)
+
+Here's the Values YAML file referencing the artifact in **Artifacts**:
+
+```
+name: example
+replicas: 2
+
+image: <+artifact.image>
+# dockercfg: <+artifact.imagePullSecret>
+
+createNamespace: true
+namespace: <+infra.namespace>
+
+...
+```
+See [Example Kubernetes Manifests using Go Templating](https://docs.harness.io/article/qvlmr4plcp-example-kubernetes-manifests-using-go-templating).
+
+#### <+artifact.path>
+
+The full path to the non-containerized artifact. This expression is used in non-containerized deployments.
+
+#### <+artifact.filePath>
+
+The file name of the non-containerized artifact. This expression is used in non-containerized deployments. For example, a ZIP file in AWS S3.
+
+#### <+artifact.imagePath>
+
+The image name, such as `nginx`. To see the entire image location, use `<+artifact.image>`.
+
+#### <+artifact.imagePullSecret>
+
+In some cases, your Kubernetes cluster might not have the permissions needed to access a private Docker registry. For these cases, the values.yaml or manifest file in the Service Definition **Manifests** section must use the `dockercfg` parameter.
+
+If the Docker image is added in the Service Definition **Artifacts** section, then you reference it like this: `dockercfg: <+artifact.imagePullSecret>`.
+
+values.yaml:
+
+```
+name: <+stage.variables.name>
+replicas: 2
+
+image: <+artifact.image>
+dockercfg: <+artifact.imagePullSecret>
+
+createNamespace: true
+namespace: <+infra.namespace>
+...
+```
+See [Pull an Image from a Private Registry for Kubernetes](https://docs.harness.io/article/o1gf8jslsq-pull-an-image-from-a-private-registry-for-kubernetes).
+
+#### <+artifact.type>
+
+The type of repository used to add this artifact in the Service **Artifacts**. For example, Dockerhub, Ecr, or Gcr.
+
+#### <+artifact.connectorRef>
+
+The [entity identifier](../20_References/entity-identifier-reference.md) for the Connector used to connect to the artifact repo.
+
+![](./static/harness-variables-39.png)
+
+#### <+artifact.label.get("")>
+
+This expression resolves to the Docker labels of a Docker image.
+
+For example, here are the labels for a Docker image:
+
+* `maintainer=dev@someproject.org`
+* `build_date=2017-09-05`
+* `multi.author=John Doe`
+* `key-value=xyz`
+* `multi.key.value=abc`
+
+In a Harness Shell Script step, or any setting where you want to use the labels, you can reference them:
+
+```
+echo <+artifact.label.get("maintainer")>
+echo <+artifact.label.get("build_date")>
+echo <+artifact.label.get("multi.author")>
+echo <+artifact.label.get("key-value")>
+echo <+artifact.label.get("multi.key.value")>
+```
+When you run the Pipeline, the expressions will resolve to their respective label values:
+
+![](./static/harness-variables-40.png)
+
+#### <+artifact.primary.identifier>
+
+The Id of the Primary artifact added in a Service **Artifacts** section.
+
+![](./static/harness-variables-41.png)
+
+#### Sidecar Artifacts
+
+Sidecar artifact expressions use the **Sidecar Identifier** to reference the sidecar artifact.
+
+![](./static/harness-variables-42.png)
+
+The sidecar identifier is set when you add the sidecar artifact. 
It can be seen in the artifact listing:
+
+![](./static/harness-variables-43.png)
+
+Here are the sidecar expressions:
+
+* `<+artifacts.sidecars.[sidecar_identifier].imagePath>`
+* `<+artifacts.sidecars.[sidecar_identifier].image>`
+* `<+artifacts.sidecars.[sidecar_identifier].type>`
+* `<+artifacts.sidecars.[sidecar_identifier].tag>`
+* `<+artifacts.sidecars.[sidecar_identifier].connectorRef>`
+
+### Environment
+
+#### Environment-level Variables for Service v2
+
+Currently, there are two versions of Services and Environments, v1 and v2. Services and Environments v1 is being replaced by Services and Environments v2.
+
+The use of variable expressions is different between v1 and v2.
+
+For more information, go to [Services and Environments Overview](https://docs.harness.io/article/9ryi1ay01f-services-and-environments-overview).
+
+To reference an Environment-level variable, use the expression `<+env.variables.[variable name]>`.
+
+For example, here is an Environment variable named `envvar`.
+
+![](./static/harness-variables-44.png)
+
+You would reference it as `<+env.variables.envvar>`.
+
+#### <+env.name>
+
+The name of the stage Environment.
+
+![](./static/harness-variables-45.png)
+
+#### <+env.identifier>
+
+The [entity identifier](../20_References/entity-identifier-reference.md) of the stage's Environment.
+
+#### <+env.description>
+
+The description of the Environment.
+
+#### <+env.type>
+
+The Environment Type, such as Production or Non-Production.
+
+### Infrastructure
+
+#### <+infra.name>
+
+The name of the Infrastructure Definition used in the Pipeline stage.
+
+![](./static/harness-variables-46.png)
+
+#### <+infra.connectorRef>
+
+The name of the Connector used in the Infrastructure Definition.
+
+#### <+INFRA\_KEY>
+
+The infrastructure key. The key is a unique string that identifies a deployment target infrastructure. It is typically used in the **Release Name** setting to add labels to the release for tracking.
+ +For example, in a Deploy stage's Infrastructure Definition, the `<+INFRA_KEY>` is used in the **Release Name** to give the release a unique name: + +![](./static/harness-variables-47.png) + +When you deploy, Harness adds the Release Name as a label. For example, in a Kubernetes deployment you can see `harness.io/release-name=release-2f9eadcc06e2c2225265ab3cbb1160bc5eacfd4f`: + + +``` +... +Pod Template: + Labels: app=hello + deployment=hello + harness.io/release-name=release-2f9eadcc06e2c2225265ab3cbb1160bc5eacfd4f + Containers: + the-container: + Image: monopole/hello:1 +... +``` +Harness can now track the release for comparisons and rollback. + +#### <+infra.namespace> + +The namespace used in the Infrastructure Definition. + +#### <+infra.releaseName> + +The release name used in the Infrastructure Definition. + +### Instances + +The following instance expressions are supported in SSH, WinRM, and custom deployments using Deployment Templates. These deployments can be done on Physical Data Centers, AWS, and Azure. + +For details on these deployment types, go to [Secure Shell (SSH) deployment tutorial](https://docs.harness.io/article/mpx2y48ovx-ssh-ng), [WinRM deployment tutorial](https://docs.harness.io/article/l8795ji7u3-win-rm-tutorial), and [Custom deployments using Deployment Templates tutorial](https://docs.harness.io/article/6k9t49p6mn-custom-deployment-tutorial). + +To use these instance expressions in a step, you must use the Repeat [Looping Strategy](../8_Pipelines/looping-strategies-matrix-repeat-and-parallelism.md) and identify all the hosts for the stage as the target: + + +``` +repeat: + items: <+stage.output.hosts> +``` +![](./static/harness-variables-48.png) + +For examples, see [Run a script on multiple target instances](https://docs.harness.io/article/c5mcm36cp8-run-a-script-on-multiple-target-instances). 
+
+For Microsoft Azure, AWS, or any platform-agnostic Physical Data Center (PDC):
+
+* `<+instance.hostName>`
+* `<+instance.host.instanceName>`
+* `<+instance.name>`
+
+For Microsoft Azure or AWS:
+
+* `<+instance.host.privateIp>`
+* `<+instance.host.publicIp>`
+
+#### Deployment Templates
+
+For [Deployment Templates](https://docs.harness.io/article/6k9t49p6mn-custom-deployment-tutorial), you can use `<+instance...>` expressions to reference host properties.
+
+The `<+instance...>` expressions refer to the **Instance Attributes** in the Deployment Template:
+
+![](./static/harness-variables-49.png)
+
+The following expressions refer to the instances collected by the mandatory **instancename** field:
+
+* `<+instance.hostName>`
+* `<+instance.host.instanceName>`
+* `<+instance.name>`
+
+The expression `<+instance.host.properties.[property name]>` can be used to reference the other properties you added to **Instance Attributes**.
+
+For example, in the image above you can see the `artifact` field name mapped to the `artifactBuildNo` property.
+
+To reference `artifact`, you would use `<+instance.host.properties.artifact>`.
+
+`instance.name` has the same value as `instance.hostName`. Both are available for backward compatibility.
+
+#### <+instance.hostName>
+
+The host/container/pod name where the microservice/application is deployed.
+
+If you use this variable in a Pipeline, such as in a Shell Script step, Harness will apply the script to all target instances. You do not have to loop through instances in your script.
+
+#### <+instance.host.instanceName>
+
+The same as `<+instance.hostName>`.
+
+#### <+instance.name>
+
+The name of the instance on which the service is deployed.
+
+If you use this variable in a Pipeline, such as in a Shell Script step, Harness will apply the script to all target instances. You do not have to loop through instances in your script.
+ +#### <+instance.host.privateIp> + +The private IP of the host where the service is deployed. + +If you use this variable in a Pipeline, such as in a Shell Script step, Harness will apply the script to all target instances. You do not have to loop through instances in your script. + +#### <+instance.host.publicIp> + +The public IP of the host where the service is deployed. + +If you use this variable in a Pipeline, such as in a Shell Script step, Harness will apply the script to all target instances. You do not have to loop through instances in your script. + +### Triggers + +#### <+trigger.artifact.build> + +Resolves to the artifact version (such as a Docker Tag) that initiated an [On New Artifact Trigger](../11_Triggers/trigger-on-a-new-artifact.md). + +When you add an On New Artifact Trigger, you select the artifact to listen on and its **Tag** setting is automatically populated with `<+trigger.artifact.build>`. + +![](./static/harness-variables-50.png) + +The `<+trigger.artifact.build>` is used for **Tag** to ensure that the new artifact version that executed the Trigger is used for the deployment. + +When a new tag is added to the artifact, the Trigger is fired and the Pipeline executes. Harness then resolves `<+trigger.artifact.build>` to the tag that fired the Trigger. This ensures that the new tag is used when pulling the artifact and that version is deployed. + +#### Git Trigger and Payload Expressions + +Harness includes built-in expressions for referencing trigger details such as a PR number. + +For example: + +* `<+trigger.type>` + + Webhook. +* `<+trigger.sourceRepo>` + + Github, Gitlab, Bitbucket, Custom +* `<+trigger.event>` + + PR, PUSH, etc. + +For a complete list, see [Triggers Reference](../8_Pipelines/w_pipeline-steps-reference/triggers-reference.md). + +#### Triggers and RBAC + +Harness RBAC is applied to Triggers in Harness, but it is not applied to the repos used by the Triggers. 
+ +For example, you might have an [On New Artifact Trigger](../11_Triggers/trigger-on-a-new-artifact.md) that is started when a new artifact is added to the artifact repo, or a [Webhook Trigger](../11_Triggers/triggering-pipelines.md) that is started when a PR is merged. + +In Harness, you can select who is able to create and use these Triggers within Harness, but you must use your repos' RBAC to control who can add the artifacts or initiate the events that start the Harness Trigger. + +### Kubernetes + +#### ${HARNESS\_KUBE\_CONFIG\_PATH} + +The path to a Harness-generated kubeconfig file containing the credentials you provided to Harness. kubectl commands can use these credentials if you export this path to the `KUBECONFIG` environment variable. + +Harness only generates this kubeconfig file when a Delegate is outside of the target cluster and is making a remote connection. When you set up the Kubernetes Cluster Connector to connect to the cluster, you select the **Specify master URL and credentials** option. The master URL and credentials you supply in the Connector are put in the kubeconfig file and used by the remote Delegate to connect to the target cluster. + +Consequently, you can only use `${HARNESS_KUBE_CONFIG_PATH}` when you are using a Delegate outside the target cluster and a Kubernetes Cluster Connector with the **Specify master URL and credentials** option. + +If you are running the script using an in-cluster Delegate with the **Use the credentials of a specific Harness Delegate** credentials option, then there are no credentials to store in a kubeconfig file, since the Delegate is already an in-cluster process.
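To handle both cases in one script, you can guard on whether the generated file actually exists. This is an illustrative sketch (the `use_kubeconfig` helper and its messages are not part of Harness):

```shell
#!/bin/sh
# Pick kubectl credentials based on where the Delegate runs.
# ${HARNESS_KUBE_CONFIG_PATH} is only generated for a remote Delegate
# using the "Specify master URL and credentials" Connector option.
use_kubeconfig() {
  if [ -n "${HARNESS_KUBE_CONFIG_PATH:-}" ] && [ -f "${HARNESS_KUBE_CONFIG_PATH}" ]; then
    export KUBECONFIG="${HARNESS_KUBE_CONFIG_PATH}"
    echo "Using generated kubeconfig: ${KUBECONFIG}"
  else
    echo "No generated kubeconfig found; assuming in-cluster credentials"
  fi
}

use_kubeconfig
# kubectl get pods -n default   # now uses the selected credentials
```

With an in-cluster Delegate the variable is unset, and the script simply falls through to the Delegate's own service account.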
+ +You can use this variable in a [Shell Script](https://docs.harness.io/article/k5lu0u6i1i-using-shell-scripts) step to set the environment variable at the beginning of your kubectl script: + +`export KUBECONFIG=${HARNESS_KUBE_CONFIG_PATH}` + +For example: + +``` +export KUBECONFIG=${HARNESS_KUBE_CONFIG_PATH} +kubectl get pods -n default +``` + +The `${HARNESS_KUBE_CONFIG_PATH}` expression can be used in scripts in Shell Script steps. It cannot be used in other scripts, such as a Terraform script. + +### Tag Expressions + +You can reference Tags using Harness expressions. + +You simply reference the tagged entity and then use `tags.[tag name]`, like `<+pipeline.tags.docs>`. + +For example, here are several different references: + +* `<+pipeline.tags.[tag name]>` +* `<+stage.tags.[tag name]>` +* `<+pipeline.stages.s1.tags.[tag name]>` +* `<+serviceConfig.service.tags.[tag name]>` + +### See also + +* [Codebase Variables Reference](../../continuous-integration/ci-technical-reference/built-in-cie-codebase-variables-reference.md) +* [Fixed Values, Runtime Inputs, and Expressions](../20_References/runtime-inputs.md)
+ diff --git a/docs/platform/12_Variables-and-Expressions/static/add-a-variable-00.png b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-00.png new file mode 100644 index 00000000000..2543afc7627 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-00.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/add-a-variable-01.png b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-01.png new file mode 100644 index 00000000000..64e74a0b9b2 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-01.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/add-a-variable-02.png b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-02.png new file mode 100644 index 00000000000..ea2fcbe4433 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-02.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/add-a-variable-03.png b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-03.png new file mode 100644 index 00000000000..9958c23d407 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-03.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/add-a-variable-04.png b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-04.png new file mode 100644 index 00000000000..a07bf3579b3 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-04.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/add-a-variable-05.png b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-05.png new file mode 100644 index 00000000000..30d98d1691e Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-05.png differ diff --git 
a/docs/platform/12_Variables-and-Expressions/static/add-a-variable-06.png b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-06.png new file mode 100644 index 00000000000..c869c04d016 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-06.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/add-a-variable-07.png b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-07.png new file mode 100644 index 00000000000..8cb5a946385 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-07.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/add-a-variable-08.png b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-08.png new file mode 100644 index 00000000000..fa97fbd201f Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-08.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/add-a-variable-09.png b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-09.png new file mode 100644 index 00000000000..3511573e245 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-09.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/add-a-variable-10.png b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-10.png new file mode 100644 index 00000000000..1f25ce85df6 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-10.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/add-a-variable-11.png b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-11.png new file mode 100644 index 00000000000..174590b8d86 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-11.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/add-a-variable-12.png 
b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-12.png new file mode 100644 index 00000000000..24b46a3a0bb Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-12.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/add-a-variable-13.png b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-13.png new file mode 100644 index 00000000000..2f3b45f2c24 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/add-a-variable-13.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-14.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-14.png new file mode 100644 index 00000000000..9bd09920e85 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-14.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-15.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-15.png new file mode 100644 index 00000000000..4f9620fe0f8 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-15.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-16.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-16.png new file mode 100644 index 00000000000..c2e77666c7b Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-16.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-17.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-17.png new file mode 100644 index 00000000000..c2e77666c7b Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-17.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-18.png 
b/docs/platform/12_Variables-and-Expressions/static/harness-variables-18.png new file mode 100644 index 00000000000..9c327843b09 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-18.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-19.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-19.png new file mode 100644 index 00000000000..c507308e486 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-19.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-20.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-20.png new file mode 100644 index 00000000000..775d47c294d Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-20.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-21.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-21.png new file mode 100644 index 00000000000..0d9f0c6f896 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-21.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-22.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-22.png new file mode 100644 index 00000000000..5f03ab87424 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-22.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-23.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-23.png new file mode 100644 index 00000000000..95a8bb9d4ea Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-23.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-24.png 
b/docs/platform/12_Variables-and-Expressions/static/harness-variables-24.png new file mode 100644 index 00000000000..069457c5c84 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-24.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-25.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-25.png new file mode 100644 index 00000000000..138b746c278 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-25.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-26.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-26.png new file mode 100644 index 00000000000..783b30be2d5 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-26.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-27.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-27.png new file mode 100644 index 00000000000..392940cc709 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-27.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-28.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-28.png new file mode 100644 index 00000000000..613f6117a6b Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-28.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-29.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-29.png new file mode 100644 index 00000000000..613f6117a6b Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-29.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-30.png 
b/docs/platform/12_Variables-and-Expressions/static/harness-variables-30.png new file mode 100644 index 00000000000..d04a6235424 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-30.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-31.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-31.png new file mode 100644 index 00000000000..26ff87edceb Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-31.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-32.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-32.png new file mode 100644 index 00000000000..44f66bc3ce7 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-32.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-33.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-33.png new file mode 100644 index 00000000000..ed9c3441f0d Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-33.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-34.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-34.png new file mode 100644 index 00000000000..006614e155e Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-34.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-35.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-35.png new file mode 100644 index 00000000000..641d2f15ee4 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-35.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-36.png 
b/docs/platform/12_Variables-and-Expressions/static/harness-variables-36.png new file mode 100644 index 00000000000..0ba788da7b2 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-36.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-37.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-37.png new file mode 100644 index 00000000000..c9f289776d0 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-37.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-38.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-38.png new file mode 100644 index 00000000000..2f321de87a5 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-38.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-39.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-39.png new file mode 100644 index 00000000000..9331f67f758 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-39.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-40.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-40.png new file mode 100644 index 00000000000..5b02c7871e4 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-40.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-41.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-41.png new file mode 100644 index 00000000000..bfaf0731507 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-41.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-42.png 
b/docs/platform/12_Variables-and-Expressions/static/harness-variables-42.png new file mode 100644 index 00000000000..5db1cf8d21e Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-42.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-43.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-43.png new file mode 100644 index 00000000000..eff4b2524a2 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-43.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-44.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-44.png new file mode 100644 index 00000000000..0bc250c782c Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-44.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-45.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-45.png new file mode 100644 index 00000000000..94fcd68a7ba Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-45.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-46.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-46.png new file mode 100644 index 00000000000..50fa73791f7 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-46.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-47.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-47.png new file mode 100644 index 00000000000..edaa5abea18 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-47.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-48.png 
b/docs/platform/12_Variables-and-Expressions/static/harness-variables-48.png new file mode 100644 index 00000000000..c1ea8e38b77 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-48.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-49.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-49.png new file mode 100644 index 00000000000..a42708ef6c0 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-49.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/harness-variables-50.png b/docs/platform/12_Variables-and-Expressions/static/harness-variables-50.png new file mode 100644 index 00000000000..6107523a6df Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/harness-variables-50.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/name.png b/docs/platform/12_Variables-and-Expressions/static/name.png new file mode 100644 index 00000000000..da3f18a752c Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/name.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/rolloutdeployment1.png b/docs/platform/12_Variables-and-Expressions/static/rolloutdeployment1.png new file mode 100644 index 00000000000..76642e5dcd5 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/rolloutdeployment1.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/rolloutdeployment2.png b/docs/platform/12_Variables-and-Expressions/static/rolloutdeployment2.png new file mode 100644 index 00000000000..ae4415dc226 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/rolloutdeployment2.png differ diff --git a/docs/platform/12_Variables-and-Expressions/static/value.png b/docs/platform/12_Variables-and-Expressions/static/value.png new file mode 100644 index 
00000000000..80692368f18 Binary files /dev/null and b/docs/platform/12_Variables-and-Expressions/static/value.png differ diff --git a/docs/platform/13_Templates/_category_.json b/docs/platform/13_Templates/_category_.json new file mode 100644 index 00000000000..cf759c1d841 --- /dev/null +++ b/docs/platform/13_Templates/_category_.json @@ -0,0 +1 @@ +{"label": "Templates", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Templates"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "m8tm1mgn2g"}} \ No newline at end of file diff --git a/docs/platform/13_Templates/add-a-stage-template.md b/docs/platform/13_Templates/add-a-stage-template.md new file mode 100644 index 00000000000..7b9282c8af9 --- /dev/null +++ b/docs/platform/13_Templates/add-a-stage-template.md @@ -0,0 +1,195 @@ +--- +title: Create a Stage Template +description: The Harness Template Library enables you to standardize and create Templates that you can use across Harness Pipelines and teams. A Stage Template is a Harness CD, CI, or Approval Stage Template that… +# sidebar_position: 2 +helpdocs_topic_id: s3wrqjsg43 +helpdocs_category_id: m8tm1mgn2g +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The Harness Template Library enables you to standardize and create Templates that you can use across Harness Pipelines and teams. + +A Stage Template is a Harness CD, CI, or Approval Stage Template that can be used in any Pipeline in any Project. + +This topic walks you through the steps to create a CD Stage Template, but the steps are the same for the other Stage types. + +### Objectives + +You'll learn how to:  + +* Create a Deploy Stage Template. +* Define Stage Template parameters. +* Use the Deploy Stage Template in a Pipeline. + +### Before you begin + +* Review [Template Library Overview](template.md). 
+* Review [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) to establish a general understanding of Harness. +* The Stage Template in this quickstart is added to a CD Pipeline. If you are new to Harness CD, see [CD Quickstarts](https://ngdocs.harness.io/category/c9j6jejsws-cd-quickstarts). +* You can also create CI Build Stage Templates and Manual and Jira Approval Stage Templates. See [CIE Quickstarts](../../continuous-integration/ci-quickstarts/ci-pipeline-quickstart.md), [Using Manual Harness Approval Stages](../9_Approvals/adding-harness-approval-stages.md), and [Adding Jira Approval Stages and Steps](../9_Approvals/adding-jira-approval-stages.md). +* The Stage Template in this quickstart uses Runtime Inputs. Runtime Inputs are placeholders for values that will be provided when you start a Pipeline execution. See [Fixed Values, Runtime Inputs, and Expressions](../20_References/runtime-inputs.md). + +### Review: Templates + +* You can add Templates to Template Libraries at any [scope](../4_Role-Based-Access-Control/1-rbac-in-harness.md#rbac-scope). +* [Tags](../20_References/tags-reference.md) can be used to group Templates. You can search or filter Templates using these tags. +* You can have nested Templates. You can refer to a stage Template from your Pipeline Template. + +### Step 1: Create a Template + +First, we'll create a Project-level Template in the **Deployments** module. You can do this in any Project. + +Navigate to the **Deployments** module, and in **Projects**, select the desired project. + +![](./static/add-a-stage-template-48.png) + +Next, select **Templates** under Project Setup. + +Click **New Template**. + +Select **Stage** to create a Stage Template. + +![](./static/add-a-stage-template-49.png) + +The **Create New Stage Template** settings appear. + +![](./static/add-a-stage-template-50.png) + +In **Name**, enter a name for the stage. You can enter **Quickstart**.
+ +In **Version Label**, enter the version of the stage. You can enter **v1**. + +Click **Save**. + +### Step 2: Add Stage Parameters + +The **Select Stage Type** settings appear. + +![](./static/add-a-stage-template-51.png) + +Select **Deploy**. The Deploy stage type is a CD Stage that enables you to deploy any Service to your target environment. Other options include Build for CI and Approval for Manual and Jira Approval Stages. More options will be added soon. + +The **About Your Stage** settings appear. Select the type of deployment this Stage must perform. Service is selected by default. A Stage can deploy Services and other workloads. + +![](./static/add-a-stage-template-52.png) + +Click **Set Up Stage**. The Template Studio page appears. + +In **Specify Service**, select **Runtime input**. + +![](./static/add-a-stage-template-53.png) + +Harness Services represent your microservices or applications logically. You can propagate the same Service to as many stages as you need. + +**Use Runtime Inputs instead of variable expressions:** when you want to template settings in a Stage or step Template, use [Runtime Inputs](../20_References/runtime-inputs.md) instead of variable expressions. When Harness tries to resolve variable expressions to specific Stage-level settings using fully-qualified names, it can cause issues at runtime. Every Pipeline where the Stage or step Template is inserted must use the same names for fully-qualified name references to operate. With Runtime Inputs, you can supply values for a setting at deployment runtime. + +In **Deployment Type**, Kubernetes is selected by default. Deployment Type defines how your Service will be deployed. + +Click **Next**. + +In **Specify Environment**, select **Runtime input**. Environments represent your deployment targets logically (QA, Prod, etc.). You can add the same Environment to as many stages as you need. + +In **Infrastructure Definition**, select **Kubernetes**.
Infrastructure Definition represents your target infrastructure physically. These are the actual clusters, hosts, and so on. By separating Environments and Infrastructure Definitions, you can use the same Environment in multiple stages while changing the target infrastructure settings with each stage. + +Under **Cluster Details**, select **Runtime input** in both the **Connector** and **Namespace** fields. The namespace must already exist during deployment. Harness will not create a new namespace if you enter one here. + +Click **Next**. The Execution Strategies dialog box appears. + +![](./static/add-a-stage-template-54.png) + +Select **Rolling** and click **Use Strategy**. + +In **Execution**, you can see that the **Rollout Deployment** step is added automatically. + +Your Template is now ready. + +Click **Save**, add a comment, and click **Save** again. + +The Template is published successfully. + +#### Option: Variables + +You can add variables to your Template as needed. + +![](./static/add-a-stage-template-55.png) + +You can add the following types of values to your variables: + +* **Fixed values** - These cannot be overridden. +* **Default values in the Template** - These can be overridden. +* **Expressions** - These can be provided during consumption or at runtime. +* **Combination of variables and fixed values** - These variables will be automatically created as part of the template. + +### Step 3: Add the Stage Template to a Pipeline + +Now that you have the CD Stage Template, you can use it in any Pipeline in your Project. + +To add a Stage Template to a Pipeline, open the Pipeline, and then click **Add Stage**. + +The **Select Stage Type** settings appear. + +![](./static/add-a-stage-template-56.png) + +Click **Use Template**. The next page lists all the Project-level Templates. + +Select the Quickstart Template that you created. + +![](./static/add-a-stage-template-57.png) + +Click the **Activity Log** to track all Template events.
It shows you details like who created the Template and Template version changes. + +In **Details**, click **Version Label** and select **Always use the Stable version** of the Template. + +![](./static/add-a-stage-template-58.png) + +Selecting this option makes sure that any changes that you make to this version are propagated automatically to the Pipelines using this Template. + +Click **Use Template**. + +The **About your stage** dialog appears. Enter **Quickstart** and click **Set Up Stage**. + +![](./static/add-a-stage-template-59.png) + +The Template Stage is added to your Pipeline. + +The Template icon in the stage indicates that the stage is linked to the Template, not copied from it. + +![](./static/add-a-stage-template-60.png) + +If you had used **Copy to Pipeline**, this icon would not be there, and you could change settings in the stage. + +You can now enter all the Runtime Inputs for this Pipeline execution. + +![](./static/add-a-stage-template-61.png) + +Click **Save**. + +You'll notice that you can change and remove the Template as needed. + +### Option: Copy to Pipeline + +You can copy the contents of a specific Template to your Pipeline using the **Copy to Pipeline** option. This does not add any reference to the Template. Copying a Template to a Pipeline is different from linking a Template to your Pipeline. You can't change any stage parameters when you link to a Template from your Pipeline. + +To copy a Template, go to your Pipeline. Click **Add Stage**. + +The **Select Stage Type** settings appear. + +![](./static/add-a-stage-template-62.png) + +Click **Use Template**. Select the Template you want to copy. + +![](./static/add-a-stage-template-63.png) + +Click **Copy to Pipeline**. + +Enter a name for your stage. Click **Set Up Stage**. + +![](./static/add-a-stage-template-64.png) + +The Template contents are now copied to your Pipeline stage. + +You can change any settings in the stage that you have copied from a Template.
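For reference, the Stage Template built in this quickstart corresponds roughly to YAML like the following. This is a sketch: the exact schema can vary by Harness version, the identifiers are illustrative, and `<+input>` marks the settings you set to **Runtime input**:

```yaml
template:
  name: Quickstart
  identifier: Quickstart
  versionLabel: v1
  type: Stage
  spec:
    type: Deployment
    spec:
      serviceConfig:
        serviceRef: <+input>          # Specify Service
      infrastructure:
        environmentRef: <+input>      # Specify Environment
        infrastructureDefinition:
          type: KubernetesDirect
          spec:
            connectorRef: <+input>    # Cluster Details > Connector
            namespace: <+input>       # Cluster Details > Namespace
```

Viewing the Template's YAML in Template Studio shows the authoritative structure for your account.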
+ +### Next steps + +* [Run Step Template Quickstart](run-step-template-quickstart.md) +* [HTTP Step Template Quickstart](harness-template-library.md) + diff --git a/docs/platform/13_Templates/create-a-remote-pipeline-template.md b/docs/platform/13_Templates/create-a-remote-pipeline-template.md new file mode 100644 index 00000000000..4996ed39f1a --- /dev/null +++ b/docs/platform/13_Templates/create-a-remote-pipeline-template.md @@ -0,0 +1,127 @@ +--- +title: Create a Remote Pipeline Template +description: This topic explains how to add a remote Pipeline Template in Harness. +# sidebar_position: 2 +helpdocs_topic_id: 0qu91h5rwu +helpdocs_category_id: m8tm1mgn2g +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the feature flag `NG_TEMPLATE_GITX`. Contact Harness Support to enable the feature. + +Harness enables you to add Templates to create reusable logic and Harness entities (like Steps, Stages, and Pipelines) in your Pipelines. You can link these Templates in your Pipelines or share them with your teams for improved efficiency. + +Templates enhance developer productivity, reduce onboarding time, and enforce standardization across the teams that use Harness. + +A Pipeline Template lets you distribute reusable pipelines across your team or among multiple teams. Instead of building pipelines from scratch, Pipeline Templates simplify the process by having parameters already built in.
+ +For example, you can automate your build and deploy services by adding a Pipeline Template. You can link the following Templates to your Pipeline Template: + +* Build stage - To push the artifact to the registry, run tests, and security scans. +* Staging deploy stage - To deploy to Dev, QA. +* Approval stage - To add approval stages for PROD. +* Prod deploy stage - To deploy to Production. + +You can create a Template and save it either in Harness or in a Git repository, using the Inline or Remote option respectively. + +This topic walks you through the steps to create a Remote Pipeline Template. + +### Before you begin + +* Review [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) +* See [Templates Overview](template.md) +* See [CIE Quickstarts](../../continuous-integration/ci-quickstarts/ci-pipeline-quickstart.md) + +### Permissions + +To create a Remote Pipeline Template, make sure you have **Create/Edit** and **Access** permissions for Templates. + +### Remote Pipeline Template Overview + +Harness Pipeline Templates give you the ability to enforce consistency. You can save your Pipeline Templates in different Git repositories. These are called Remote Pipeline Templates. + +For example, if you have a core Pipeline that you want all of your teams to use, you can put the Template in a core repo and then refer to it. Now you can reuse this Template. + +For information on inline Pipeline Templates, see [Create a Pipeline Template](create-pipeline-template.md). + +### Use a Remote Pipeline Template + +Harness Templates let you reuse a Pipeline Template to create a Pipeline, or share it with your teams for enhanced efficiency. + +Whenever you use a Remote Pipeline Template, Harness resolves the repositories when your Pipeline starts up. + +You can have one of the following scenarios when using a Template in your Pipeline: + +* The Remote Pipeline Template and the Pipeline exist in the same Git repo. +* The Remote Pipeline Template and the Pipeline exist in different Git repos. + +Let us see how you can use a Template in each of these situations. + +#### Remote Pipeline Template and the Pipeline exist in the same Git repo + +If your Remote Pipeline Template and your Pipeline are in the same Git repository, make sure they are both in the same branch. + +#### Remote Pipeline Template and the Pipeline exist in different Git repos + +If your Remote Pipeline Template and your Pipeline are in different Git repositories, make sure your Template is in the default branch of its repo. + +### Step 1: Create a Remote Pipeline Template + +You can create a Pipeline Template from your Account, Org, or Project. This topic explains the steps to create a Pipeline Template from the Project scope. + +1. In your Harness Account, go to your Project. +2. In **Project SETUP**, click **Templates**. +3. Click **New Template** and then click **Pipeline**. The **Create New Pipeline Template** settings appear. +4. In **Name**, enter a name for the Template. +5. In **Version Label**, enter a version for the Template. +6. Click **Remote**. +7. In **Git Connector**, select or create a Git Connector to the repo for your Project. For steps, see [Code Repo Connectors](https://docs.harness.io/category/code-repo-connectors). Important: the Connector must use the **Enable API access** option and **Username and Token** authentication. Harness requires the token for API access. Generate the token in your account on the Git provider and add it to Harness as a Secret.
Next, use the token in the credentials for the Git Connector. +![](./static/create-a-remote-pipeline-template-24.png) +For GitHub, the token must have the following scopes: +![](./static/create-a-remote-pipeline-template-25.png) +8. In **Repository**, select your repository. If your repository isn't listed, enter its name; only a few repositories are prefilled here. Create the repository in Git before entering it in **Repository**. Harness does not create the repository for you. +9. In **Git Branch**, select your branch. If your branch isn't listed, enter its name; only a few branches are prefilled here. Create the branch in your repository before entering it in **Git Branch**. Harness does not create the branch for you. +10. Harness auto-populates the **YAML Path**. You can change this path and the file name. +11. Click **Start**. + +### Step 2: Add a Stage + +1. Click **Add Stage**. The **Select Stage Type** settings appear. +2. Select **Deploy**. The Deploy stage type is a CD Stage that enables you to deploy any Service to your target environment. +The **About Your Stage** settings appear. +3. In **Stage Name**, enter a name for your Stage. +Select the entity that this stage should deploy. +4. In **Deployment Type**, click **Kubernetes**. +5. Click **Set Up Stage**. + +### Step 3: Add Service details + +1. In **Select Service**, select an existing Service that you want to deploy from the **Specify Service** drop-down list, or create a new one. You can also use [Fixed Values, Runtime Inputs, and Expressions](../20_References/runtime-inputs.md). +2. Click **Continue**. +3. In **Specify Environment**, select an existing environment or add a new one. +4. In **Specify Infrastructure**, select an existing infrastructure or add a new one. Click **Continue**. +The **Execution Strategies** settings appear. + +### Step 4: Define Execution Strategies + +1. In **Execution Strategies**, select the deployment strategy for your Pipeline Template. +This topic uses the example of Rolling deployment. +For more information on different execution strategies, see [Deployment Concepts and Strategies](https://docs.harness.io/article/0zsf97lo3c-deployment-concepts). +2. Click **Use Strategy**. +3. Click **Save**. The **Save Template to Git** settings appear.![](./static/create-a-remote-pipeline-template-26.png) + +### Step 5: Save the Remote Pipeline Template to Git + +1. In **Select Branch to Commit**, you can select one of the following: + 1. **Commit to an existing branch**: you can start a pull request if you like. + 2. **Commit to a new branch**: enter the new branch name. You can start a pull request if you like. +2. Click **Save**. Your Remote Pipeline Template is saved to the repo branch.![](./static/create-a-remote-pipeline-template-27.png) +3. Click the YAML file to see the YAML for the Pipeline Template. +4. Edit the YAML. For example, change the name of the Template. +5. Commit your changes to Git. +6. Return to Harness and refresh the page. +A **Template Updated** message appears.![](./static/create-a-remote-pipeline-template-28.png) + +### Next steps + +* [Use a Template](use-a-template.md) + diff --git a/docs/platform/13_Templates/create-a-remote-stage-template.md b/docs/platform/13_Templates/create-a-remote-stage-template.md new file mode 100644 index 00000000000..dd93acd13e7 --- /dev/null +++ b/docs/platform/13_Templates/create-a-remote-stage-template.md @@ -0,0 +1,119 @@ +--- +title: Create a Remote Stage Template +description: This topic explains how to add a remote Stage Template in Harness. +# sidebar_position: 2 +helpdocs_topic_id: e4xthq6sx0 +helpdocs_category_id: m8tm1mgn2g +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the feature flag `NG_TEMPLATE_GITX`. 
Contact Harness Support to enable the feature. + +Harness enables you to add Templates to create reusable logic and Harness entities (like Steps, Stages, and Pipelines) in your Pipelines. You can link these Templates in your Pipelines or share them with your teams for improved efficiency. + +Templates enhance developer productivity, reduce onboarding time, and enforce standardization across the teams that use Harness. + +You can create a Template and save it either in Harness or in a Git repository, using the Inline or Remote option respectively. + +![](./static/create-a-remote-stage-template-87.png) +This topic walks you through the steps to create a Remote Stage Template. + +### Objectives + +You will learn how to: + +* Create a Remote Stage Template. +* Define Template parameters. + +### Before you begin + +* Review [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) +* See [Templates Overview](template.md) +* See [CIE Quickstarts](../../continuous-integration/ci-quickstarts/ci-pipeline-quickstart.md) + +### Permissions + +To create a Remote Stage Template, make sure you have **Create/Edit** and **Access** permissions for Templates. + +### Remote Stage Template overview + +A Stage Template is a Harness CD, CI, or Approval Stage Template that can be used in any Pipeline in any Project. Harness lets you create Stage Templates that you can use when creating a new Pipeline or adding a stage to your existing one. + +All your templates can be seen in **Templates** based on their scope. We also call this the Template Library in this topic. + +A Remote Stage Template is one that you save in your Git repositories. + +For information on inline Stage Templates, see [Create a Stage Template](add-a-stage-template.md). + +### Use a Template in a Pipeline + +Harness resolves the repositories when your Pipeline starts up. After that, the same resource is used during the execution of the Pipeline. 
Whenever you use the templates in your Pipelines, once the templates are fully expanded, the final Pipeline runs as if it were defined entirely in the source repo. + +You can have one of the following scenarios when using a Template in your Pipeline: + +* Remote Stage Template and the Pipeline exist in the same Git repo. +* Remote Stage Template and the Pipeline exist in different Git repos. +* Pipeline exists in Harness and the Stage Template exists in a Git repo. + +Let us see how you can use a Template in each of these scenarios. + +#### Remote Stage Template and the Pipeline exist in the same Git repo + +If your Remote Stage Template and your Pipeline are in the same Git repository, make sure they are both present in the same branch. + +#### Remote Stage Template and the Pipeline exist in different Git repos + +If your Remote Stage Template and your Pipeline are in different Git repositories, make sure your Template is present in the default branch of its repo. + +#### Pipeline exists in Harness and the Stage Template exists in a Git repo + +To use the Template in your inline Pipeline, make sure your Template is present in the default branch of your Git repository. + +### Step 1: Create a Remote Stage Template + +You can create a Stage Template from your Account, Org, or Project. This topic explains the steps to create a Stage Template from the Project scope. + +1. In your Harness Account, go to your Project. +2. In **Project Setup**, click **Templates**. +3. Click **New Template** and then click **Stage**. The **Create New Stage Template** settings appear. +4. In **Name**, enter a name for the Template. +5. In **Version Label**, enter a version for the Template. +6. Click **Remote**. +7. 
In **Git Connector**, select or create a Git Connector to the repo for your Project. For steps, see [Code Repo Connectors](https://docs.harness.io/category/code-repo-connectors). Important: The Connector must use the **Enable API access** option and **Username and Token** authentication. Harness requires the token for API access. Generate the token in your account on the Git provider and add it to Harness as a Secret. Next, use the token in the credentials for the Git Connector. +![](./static/create-a-remote-stage-template-88.png) +For GitHub, the token must have the following scopes: +![](./static/create-a-remote-stage-template-89.png) +8. In **Repository**, select your repository. If your repository isn't listed, enter its name; only a few repositories are prefilled here. Create the repository in Git before entering it in **Repository**. Harness does not create the repository for you. +9. In **Git Branch**, select your branch. If your branch isn't listed, enter its name; only a few branches are prefilled here. Create the branch in your repository before entering it in **Git Branch**. Harness does not create the branch for you. +10. Harness auto-populates the **YAML Path**. You can change this path and the file name. +11. Click **Start**. +Your Stage Template is created and the **Select Stage Type** settings appear. + +![](./static/create-a-remote-stage-template-90.png) +### Step 2: Add the Stage parameters + +1. Select **Deploy** in the **Select Stage Type** settings. +The Deploy stage type is a CD Stage that enables you to deploy any Service to your target environment. +The **About your Stage** settings appear.![](./static/create-a-remote-stage-template-91.png) +2. Select the type of deployment this Stage must perform. +A stage can deploy Services and other workloads. The default selection is **Service**. +3. Click **Set Up Stage**. The Template Studio page appears. +4. 
In **Select Service**, select an existing service or add a new one. Click **Continue**.![](./static/create-a-remote-stage-template-92.png) +5. In **Specify Environment**, select an existing environment or add a new one. +6. In **Specify Infrastructure**, select an existing infrastructure or add a new one. Click **Continue**. +7. In **Execution Strategies**, select **Rolling** and click **Use Strategy**. +In Execution, you can see that the **Rollout Deployment** step is added automatically. +8. Click **Save**. The **Save Template to Git** settings appear.![](./static/create-a-remote-stage-template-93.png) +9. In **Select Branch to Commit**, you can select one of the following: + 1. **Commit to an existing branch**: you can start a pull request if you like. + 2. **Commit to a new branch**: enter the new branch name. You can start a pull request if you like. +10. Click **Save**. Your Remote Stage Template is saved to the repo branch.![](./static/create-a-remote-stage-template-94.png) +11. Click the YAML file to see the YAML for the Stage Template. +12. Edit the YAML. For example, change the name of the Template. +13. Commit your changes to Git. +14. Return to Harness and refresh the page. +A **Template Updated** message appears.![](./static/create-a-remote-stage-template-95.png) + +### Next steps + +* [Use a Template](use-a-template.md) + diff --git a/docs/platform/13_Templates/create-a-remote-step-template.md b/docs/platform/13_Templates/create-a-remote-step-template.md new file mode 100644 index 00000000000..0003c7722f3 --- /dev/null +++ b/docs/platform/13_Templates/create-a-remote-step-template.md @@ -0,0 +1,117 @@ +--- +title: Create a Remote Step Template +description: This topic explains how to add a remote Step Template in Harness. 
+# sidebar_position: 2 +helpdocs_topic_id: u1ozbrk1rh +helpdocs_category_id: m8tm1mgn2g +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the feature flag `NG_TEMPLATE_GITX`. Contact Harness Support to enable the feature. + +Harness enables you to add Templates to create reusable logic and Harness entities (like Steps, Stages, and Pipelines) in your Pipelines. You can link these Templates in your Pipelines or share them with your teams for improved efficiency. + +Templates enhance developer productivity, reduce onboarding time, and enforce standardization across the teams that use Harness. + +You can create a Template and save it either in Harness or in a Git repository, using the **Inline** or **Remote** option respectively. + +![](./static/create-a-remote-step-template-16.png) +This topic walks you through the steps to create a Remote Step Template. + +### Objectives + +You'll learn how to: + +* Create a Remote Run Step Template. +* Define Template parameters. + +### Before you begin + +* Review [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) +* See [Templates Overview](template.md) +* See [CIE Quickstarts](../../continuous-integration/ci-quickstarts/ci-pipeline-quickstart.md) + +### Permissions + +To create a Remote Step Template, make sure you have **Create/Edit** and **Access** permissions for Templates. + +### Remote Step Template overview + +Harness Templates let you design reusable content, logic, and parameters so that the application remains the major focus of your Pipelines. Instead of creating Pipelines from scratch each time, Harness lets you select from pre-built Templates and simply link them to your Pipelines. This makes developing Pipelines easier by reducing duplication and increasing reusability. 
+ +You can share your work with your team and reuse it in your Pipelines. + +All your templates can be seen in **Templates** based on their scope. We also call this the Template Library in this topic. + +Harness lets you save your Templates in Git repositories. For example, if you have a core Step that you want all of your Pipelines to use, you can put the Template in a core repo and then refer to it. Now you can reuse this Step Template in multiple Pipelines. + +For information on inline Step Templates, see [Create a Step Template](run-step-template-quickstart.md). + +### Use a Template in a Pipeline + +Harness resolves the repositories when your Pipeline starts up. After that, the same resource is used during the execution of the Pipeline. Whenever you use the templates in your Pipelines, once the templates are fully expanded, the final Pipeline runs as if it were defined entirely in the source repo. + +You can have one of the following scenarios when using a Template in your Pipeline: + +* Remote Step Template and the Pipeline exist in the same Git repo. +* Remote Step Template and the Pipeline exist in different Git repos. +* Pipeline exists in Harness and the Step Template exists in a Git repo. + +Let us see how you can use a Template in each of these scenarios. + +#### Remote Step Template and the Pipeline exist in the same Git repo + +If your Remote Step Template and your Pipeline are in the same Git repository, make sure they are both present in the same branch. + +#### Remote Step Template and the Pipeline exist in different Git repos + +If your Remote Step Template and your Pipeline are in different Git repositories, make sure your Template is present in the default branch of its repo. 
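+ +Whichever repo the Template lives in, linking it in a Pipeline looks the same. The sketch below is illustrative only; `my_run_step` and `v1` are placeholder values, not names from this topic: + +```yaml +# Illustrative sketch: a Pipeline step that links a Step Template by reference. +# templateRef and versionLabel identify the Template and the version to use. +- step: +    name: Run Step +    identifier: run_step +    template: +      templateRef: my_run_step +      versionLabel: v1 +```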
+ +#### Pipeline exists in Harness and the Step Template exists in a Git repo + +To use the Template in your inline Pipeline, make sure your Template is present in the default branch of your Git repository. + +### Step 1: Create a Remote Step Template + +You can create a Step Template from your Account, Org, or Project. This topic explains the steps to create a Step Template from the Project scope. + +1. In your Harness Account, go to your Project. +2. In **Project Setup**, click **Templates**. +3. Click **New Template** and then click **Step**. The **Create New Step Template** settings appear.![](./static/create-a-remote-step-template-17.png) +4. In **Name**, enter a name for the Template. +5. In **Version Label**, enter a version for the Template. +6. Click **Remote**. +7. In **Git Connector**, select or create a Git Connector to the repo for your Project. For steps, see [Code Repo Connectors](https://docs.harness.io/category/code-repo-connectors). Important: The Connector must use the **Enable API access** option and **Username and Token** authentication. Harness requires the token for API access. Generate the token in your account on the Git provider and add it to Harness as a Secret. Next, use the token in the credentials for the Git Connector. +![](./static/create-a-remote-step-template-18.png)For GitHub, the token must have the following scopes: +![](./static/create-a-remote-step-template-19.png) +8. In **Repository**, select your repository. If your repository isn't listed, enter its name; only a few repositories are prefilled here. Create the repository in Git before entering it in **Repository**. Harness does not create the repository for you. +9. In **Git Branch**, select your branch. If your branch isn't listed, enter its name; only a few branches are prefilled here. Create the branch in your repository before entering it in **Git Branch**. 
Harness does not create the branch for you. +10. Harness auto-populates the **YAML Path**. You can change this path and the file name. +11. Click **Start**. +Your Step Template is created and you can now add steps from the Step Library.![](./static/create-a-remote-step-template-20.png) + +### Step 2: Add Step Parameters + +1. In **Step Library**, select **Shell Script** under **Utilities**. +The **Step Parameters** settings appear. +2. In **Script**, enter your script. +3. Specify your **Input Variables** and **Output Variables**. +4. In **Execution Target**, specify where you want to execute the script. +You can select **Specify on Target Host** or **On Delegate**. +For more information, see [Using Shell Scripts in CD Stages](https://docs.harness.io/article/k5lu0u6i1i-using-shell-scripts). +5. Click **Save**. The **Save Template to Git** settings appear.![](./static/create-a-remote-step-template-21.png) +6. In **Select Branch to Commit**, you can select one of the following: + 1. **Commit to an existing branch**: you can start a pull request if you like. + 2. **Commit to a new branch**: enter the new branch name. You can start a pull request if you like. +7. Click **Save**. Your Step Template is saved to the repo branch.![](./static/create-a-remote-step-template-22.png) +8. Click the YAML file to see the YAML for the Step Template. +9. Edit the YAML. For example, change the name of the Template. +10. Commit your changes to Git. +11. Return to Harness and refresh the page. +A **Template Updated** message appears.![](./static/create-a-remote-step-template-23.png) +12. Click **Update**. 
+The changes you made in Git are now applied to Harness.​​ + +### Next steps + +* [Use a Template](use-a-template.md) + diff --git a/docs/platform/13_Templates/create-a-secret-manager-template.md b/docs/platform/13_Templates/create-a-secret-manager-template.md new file mode 100644 index 00000000000..056253dd2d6 --- /dev/null +++ b/docs/platform/13_Templates/create-a-secret-manager-template.md @@ -0,0 +1,109 @@ +--- +title: Create a Secret Manager Template +description: This topic shows how to add a Secret Manager Template. +# sidebar_position: 2 +helpdocs_topic_id: n41cqkjrla +helpdocs_category_id: m8tm1mgn2g +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness enables you to add Templates to create reusable logic and Harness entities like Steps, Stages, Pipelines, and Secret Managers. + +Harness Secret Manager Template lets you add a shell script that you can execute either on a Delegate or on a remote host which is connected to the Delegate. Harness fetches and reads your secrets from the third-party Secret Manager through this shell script. + +This topic explains how to create a Secret Manager Template in Harness. + +### Objectives + +You will learn how to: + +* Create a Secret Manager Template. +* Add a shell script to the Secret Manager Template. +* Configure Input Variables for the shell script. +* Use the Secret Manager Template in a Custom Secret Manager. + +### Before you begin + +* [Templates Overview](template.md) +* [Harness Secrets Management Overview](../6_Security/1-harness-secret-manager-overview.md) + +### Required permissions + +* Make sure you have **Create/Edit** permissions for Templates. +* Make sure you have **Create/Edit** permissions for Secrets. +* Make sure you have **Create/Edit** permissions for Connectors. + +### Templates Overview + +* You can add Secret Manager Templates to Template Libraries at any [scope](../4_Role-Based-Access-Control/1-rbac-in-harness.md#rbac-scope). 
+* [Tags](../20_References/tags-reference.md) can be used to group Templates. You can search or filter Templates using these tags. +* If you change the Template inputs, you need to update the entities that reference the Template for the changes to take effect. + +### Secret Manager Template scope + +You can add Secret Manager Templates at any [scope](../4_Role-Based-Access-Control/1-rbac-in-harness.md) in Harness. + +The following table shows what it means to add Templates at different scopes or hierarchies: + +| **Scope** | **When to add Templates?** | +| --- | --- | +| **Account** | To share Secret Manager Templates with users in the Account, as well as users within the Organizations and Projects created within this Account. | +| **Organization** | To share Secret Manager Templates with users in the Organization as well as within the Projects created within the Org. | +| **Project** | To share Secret Manager Templates with users within the Project. | + +### Step 1: Create a Secret Manager Template + +You can create a Secret Manager Template in the Account, Org, or Project scope. + +This topic shows you how to create a Secret Manager Template at the Project scope. + +1. In your Harness Account, go to your Project. +2. In **Project Setup**, click **Templates** and then click **New Template**.![](./static/create-a-secret-manager-template-29.png) +3. Click **Secret Manager**. The Secret Manager Template settings appear. +4. Enter a **Name** for your Secret Manager Template. +5. In **Version Label**, enter the version of the Secret Manager Template. +For example, v1. +[Versioning](template.md) a Template enables you to create a new Template without modifying the existing one. For more information, see [Versioning](template.md). +6. Click **Start**.![](./static/create-a-secret-manager-template-30.png) + +### Step 2: Add a shell script to the Secret Manager Template + +1. 
Enter your shell script in **Script**.![](./static/create-a-secret-manager-template-31.png) +Here is an example: +``` +# Fetch the secret document from the third-party Secret Manager API. +curl -o secret.json -X GET https://vaultqa.harness.io/v1/<+spec.environmentVariables.engineName>/<+spec.environmentVariables.path> -H 'X-Vault-Token: <+secrets.getValue("vaultTokenOne")>' +# Extract the required key from the response; the result must be stored in a variable named "secret". +secret=$(jq -r '.data."<+spec.environmentVariables.key>"' secret.json) +``` +In this example, the script assigns your final value to the `secret` variable. Here are the details of the entries in the script: + * The script makes a cURL call to the API URL of the third-party Secrets Manager and writes the output to the file secret.json. + * It includes some parameters, such as engine name and path. + * It uses an existing, already configured Secrets Manager for API access. + * After getting the file, it retrieves the key from the data object using a third-party tool (jq in this example). The key is also a parameter that can be assigned later. + +In the script, make sure to include a variable to store the fetched secret, and make sure to name the variable `secret`. + +### Configure Input Variables for the shell script + +All the parameters (engine name, path, and key in this case) can be defined as Input Variables while creating or editing the Secret Manager Template. + +To do this, perform the following steps: + +1. Click **Configuration** and click **Add Input Variable**. +2. Add **Name**, **Type**, and **Value** for the Input Variables in your script. +Harness allows you to use [Fixed Values and Runtime Inputs](../20_References/runtime-inputs.md).![The image shows the configuration tab for creating a secrets manager template. The user has specified three variables whose data type is string and whose values are to be specified at run time](./static/create-a-secret-manager-template-32.png) +3. Select **Execution Target**. This is where you want to execute the script that you just added. 
If you want to run the Shell Script on a target host and not on the Harness Delegate, you must first create the required connection attributes. +To access an SSH-based Custom Secrets Manager, create an SSH credential first. See [Add SSH Keys](../6_Security/4-add-use-ssh-secrets.md) for the procedure to create SSH credentials. +This does not apply if you want to run the Custom Secrets Manager on the Harness Delegate. + 1. Select **Specify Host** to execute the script on a specific host.![](./static/create-a-secret-manager-template-33.png) + In **Target Host**, enter the host address. + In **SSH Connection Attribute**, create or select an existing secret that has the SSH credential as its value. + In **Working Directory**, enter the directory name. + 2. Select **Delegate** to execute the script on a specific Delegate. +4. Click **Save**. Your Secret Manager Template is now listed in the Template Library. + +### See also + +* [Add a Custom Secret Manager](../6_Security/9-custom-secret-manager.md) + diff --git a/docs/platform/13_Templates/create-pipeline-template.md b/docs/platform/13_Templates/create-pipeline-template.md new file mode 100644 index 00000000000..d7954ab6413 --- /dev/null +++ b/docs/platform/13_Templates/create-pipeline-template.md @@ -0,0 +1,183 @@ +--- +title: Create a Pipeline Template +description: This quickstart walks you through the steps to create a Pipeline Template. +# sidebar_position: 2 +helpdocs_topic_id: gvbaldmib5 +helpdocs_category_id: m8tm1mgn2g +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness Templates let you standardize builds for all your services and distribute them across teams. Simply put, Templates are reusable builds with a common configuration that conforms to organizational standards, making maintenance easier and less prone to errors. + +A Pipeline Template lets you distribute reusable pipelines across your team or among multiple teams. 
Instead of building pipelines from scratch, Pipeline Templates simplify the process by having parameters already built in. + +For example, you can automate your build and deploy services by adding a Pipeline Template. You can link the following Templates to your Pipeline Template: + +* Build stage - To push the artifact to the registry and run tests and security scans. +* Staging deploy stage - To deploy to Dev and QA. +* Approval stage - To add approval stages for PROD. +* Prod deploy stage - To deploy to Production. + +This topic walks you through the steps to create a Pipeline Template. + +### Before you begin + +* Review [Templates Overview](template.md) to understand different concepts of Templates. +* Review [Permissions Reference](../4_Role-Based-Access-Control/ref-access-management/permissions-reference.md) to know about the permissions required to create a Template at various scopes. +* Review [Pipelines and Stages](https://docs.harness.io/category/pipelines). + +### Limitations + +Failure strategy and notification settings can only be provided when you create a Template. + +### Review: Permissions Requirements + +You need **Create/Edit**, **Delete**, and **Access** permissions on Templates to create a Pipeline Template. See [Permissions Reference](../4_Role-Based-Access-Control/ref-access-management/permissions-reference.md). + +### Review: Pipeline Template Scope + +You can add Templates at any [scope](../4_Role-Based-Access-Control/1-rbac-in-harness.md) in Harness. + +The following table shows what it means to add Templates at different scopes or hierarchies: + +| **Scope** | **When to add Templates?** | +| --- | --- | +| **Account** | To share Step/Stage/Pipeline Templates with users in the Account, as well as users within the Organizations and Projects created within this Account. | +| **Organization** | To share Step/Stage/Pipeline Templates with users in the Organization as well as within the Projects created within the Org. 
| +| **Project** | To share Step/Stage/Pipeline Templates with users within the Project. | + +### Visual Summary + +Here is a quick overview of Pipeline Templates: + +* You can add a Pipeline Template to the Account, Org, or Project [scope](../4_Role-Based-Access-Control/1-rbac-in-harness.md#rbac-scope). +* You can either link an existing Stage Template or add a stage to your Pipeline Template. +* For any new step that you add to your Pipeline stage, you can either link to a Step Template or add a step. + +![](./static/create-pipeline-template-65.png) + +### Step 1: Add a Template + +First, we'll create a Project-level Template in the Deployments module. You can do this in any Project. + +Navigate to the **Deployments** module and in **Projects** select the desired project. + +Select **Templates** under Project Setup. + +![](./static/create-pipeline-template-66.png) + +In **Templates**, click **New Template**. + +Select **Pipeline** to create a Pipeline Template. + +![](./static/create-pipeline-template-67.png) + +The **Create New Pipeline Template** settings appear. + +In **Name**, enter a name for the Pipeline. For example, Quickstart. + +In **Version Label**, enter the version of the Template. For example, v1. [Versioning](template.md) a Template enables you to create a new Template without modifying the existing one. For more information, see [Versioning](template.md). + +![](./static/create-pipeline-template-68.png) + +You'll see the Git Repository Details option only if you're creating a Template on a Git-enabled Project. For more information on Git Sync, see [Harness Git Sync](../10_Git-Experience/git-experience-overview.md). + +In **Git Repository Details**, in **Repository Display Name**, select your Git repository and Branch. + +Once you've entered all the details, click **Continue**. + +![](./static/create-pipeline-template-69.png) + +### Step 2: Add a Stage + +Click **Add Stage**. The **Select Stage Type** settings appear. 
+ +![](./static/create-pipeline-template-70.png) + +Select **Deploy**. The Deploy stage type is a CD Stage that enables you to deploy any Service to your target environment. + +You can also select Build for CI, or Approval for Manual and Jira Approval Stages. More options will be added soon. This document uses the Deploy stage type. + +The **About Your Stage** settings appear. + +In **Stage Name**, enter a name for your Stage. + +Select the entity that this stage should deploy. Currently, for Deploy, only Service can be deployed, and it is selected by default. + +Click **Set Up Stage**. + +![](./static/create-pipeline-template-71.png) + +### Step 3: Add Service Details + +In **About the Service**, select the Service that you want to deploy from the **Specify Service** drop-down list. You can also use [Fixed Values, Runtime Inputs, and Expressions](../20_References/runtime-inputs.md). + +**Use Runtime Inputs instead of variable expressions:** when you update Template settings in a Stage or step Template, use [Runtime Inputs](../20_References/runtime-inputs.md) instead of variable expressions. When Harness tries to resolve variable expressions to specific Stage-level settings using fully-qualified names, it can cause issues at runtime. Every Pipeline where the Stage or step Template is inserted must use the same names for the fully-qualified name references to resolve. With Runtime Inputs, you can supply values for a setting at deployment runtime. + +In **Service Definition**, select the **Deployment Type**. Deployment Type defines how your Service will be deployed. + +![](./static/create-pipeline-template-72.png) + +### Step 4: Add Infrastructure Details + +In **Infrastructure**, in **Specify Environment**, select the setting for your Pipeline execution, for example, **Runtime input**. Harness Pipelines allow you to use [Fixed Values, Runtime Inputs, and Expressions](../20_References/runtime-inputs.md). Environments represent your deployment targets logically (QA, Prod, and so on).
You can add the same Environment to as many stages as you need. + +![](./static/create-pipeline-template-73.png) + +In **Infrastructure Definition**, select the method for Harness to reach your Kubernetes Cluster. Infrastructure Definitions represent the physical infrastructure of the Environment: the actual clusters, hosts, and so on. For example, the target Infrastructure Definition for a Kubernetes deployment. By separating Environments and Infrastructure Definitions, you can use the same Environment in multiple stages while changing the target infrastructure settings with each stage. + +In **Cluster Details**, enter **Connector** and **Namespace** details. Here, too, you can use [Fixed Values, Runtime Inputs, and Expressions](../20_References/runtime-inputs.md). + +In **Connector**, select a Connector from the drop-down list. To create a new Connector, see [Kubernetes Cluster Connector Settings Reference](../7_Connectors/ref-cloud-providers/kubernetes-cluster-connector-settings-reference.md) and [Add a Kubernetes Cluster Connector](../7_Connectors/add-a-kubernetes-cluster-connector.md). + +In **Namespace**, enter the namespace. For example, `default`. + +![](./static/create-pipeline-template-74.png) + +Click **Next**. The **Execution Strategies** settings appear. + +![](./static/create-pipeline-template-75.png) + +### Step 5: Define Execution Strategies + +In **Execution Strategies**, select the deployment strategy for your Pipeline Template. We've used **Rolling** in this document. For more information on different execution strategies, see [Deployment Concepts and Strategies](https://docs.harness.io/article/0zsf97lo3c-deployment-concepts). + +Click **Use Strategy**. + +Click **Save**. The Pipeline Template is published successfully. + +![](./static/create-pipeline-template-76.png) + +If you're using a Git-enabled Project, you need to provide **Save Template to Git** settings.
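For reference, the Service, Infrastructure, and Rolling strategy choices made in Steps 3 through 5 correspond roughly to a stage section like the following sketch. The connector identifier is a placeholder, and exact field names can vary between Harness versions:

```yaml
stages:
  - stage:
      name: Deploy
      identifier: Deploy
      type: Deployment
      spec:
        serviceConfig:
          serviceRef: <+input>        # Runtime input for the Service
        infrastructure:
          environmentRef: <+input>    # Runtime input for the Environment
          infrastructureDefinition:
            type: KubernetesDirect
            spec:
              connectorRef: my_k8s_connector   # placeholder Connector ID
              namespace: default
        execution:
          steps:
            - step:
                name: Rollout Deployment
                identifier: rolloutDeployment
                type: K8sRollingDeploy
                spec:
                  skipDryRun: false
```

Using `<+input>` for the Service and Environment is what makes those values Runtime Inputs, so each Pipeline that uses the Template supplies them at execution time.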
+ +![](./static/create-pipeline-template-77.png) + +In **Harness Folder**, enter the name of the folder in your repo where you want to sync. The Harness Folder is the default folder in the repository where you are syncing your Project. + +In **File Path**, enter a name for the YAML file. For example, enter `Example.yaml`. Harness will generate one automatically from the Pipeline name, but you can add your own. + +In **Commit message**, enter a message for the commit that adds this Template. + +Click **Save**, and click **Save** again. You can save the Pipeline in two ways: + +* As a new version +* As a new Template + +![](./static/create-pipeline-template-78.png) + +Click **Save as new Template**. + +The **Save as new Template** settings appear. + +![](./static/create-pipeline-template-79.png) + +Click **Continue**. The Template is published successfully. + +### Next Step + +* [Use a Template](use-a-template.md) + +### See also + +* [Create a Step Template](run-step-template-quickstart.md) +* [Create an HTTP Step Template](harness-template-library.md) +* [Create a Stage Template](add-a-stage-template.md) + diff --git a/docs/platform/13_Templates/harness-template-library.md b/docs/platform/13_Templates/harness-template-library.md new file mode 100644 index 00000000000..6e3e02d91d1 --- /dev/null +++ b/docs/platform/13_Templates/harness-template-library.md @@ -0,0 +1,99 @@ +--- +title: Create an HTTP Step Template +description: The Harness Template Library enables you to standardize and distribute reusable Step Templates across teams that use Harness. This topic walks you through the steps to create an HTTP Step template. O… +# sidebar_position: 2 +helpdocs_topic_id: zh49vfdy0a +helpdocs_category_id: m8tm1mgn2g +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The Harness Template Library enables you to standardize and distribute reusable Step Templates across teams that use Harness. + +This topic walks you through the steps to create an HTTP Step Template.
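By the end of this walkthrough, you will have a Step Template whose YAML resembles the following sketch. The URL is a placeholder, and exact field names can vary between Harness versions:

```yaml
template:
  name: Quickstart
  identifier: Quickstart
  versionLabel: V1
  type: Step
  spec:
    type: Http
    timeout: 10s
    spec:
      url: https://example.com/health   # placeholder URL
      method: GET
```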
+ +### Objectives + +You'll learn how to: + +* Create an HTTP Step Template. +* Define Template parameters. +* Use the HTTP Step Template in a Pipeline. + +### Before you begin + +* Review [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) to establish a general understanding of Harness. +* The HTTP template in this quickstart is added to a CD Pipeline. If you are new to Harness CD, see [CD Quickstarts](https://ngdocs.harness.io/category/c9j6jejsws-cd-quickstarts). +* See [CIE Quickstarts](../../continuous-integration/ci-quickstarts/ci-pipeline-quickstart.md). + +### Step 1: Create a Template + +First, we'll create a Project-level Template in the Deployments module. You can do this in any Project. + +Navigate to the **Deployments** module and, in **Projects**, select the desired project. + +![](./static/harness-template-library-34.png) +Next, select **Templates** under Project Setup. + +Click **New Template**. + +Select **Step** to create a Step Template. + +The **Create New Step Template** settings appear. + +![](./static/harness-template-library-35.png) +In **Name**, enter a name for the template. You can enter Quickstart. + +In **Version Label**, enter a name for the version of the template. You can enter V1. + +Click **Save**. The **Step Library** panel appears. + +### Step 2: Add Step Parameters + +In **Step Library**, select **HTTP** under **Utilities**. + +![](./static/harness-template-library-36.png) +The **Step Parameters** settings appear. + +![](./static/harness-template-library-37.png) +In **Timeout**, enter a timeout value for this step. You can enter `10s`. + +In **URL**, enter the URL for the HTTP call. + +In **Method**, select GET. + +Click **Save**. The new Template appears under the **Templates** list. + +### Step 3: Add the HTTP Step Template to a Pipeline + +To add a step template in a Pipeline Execution, select the step and click **Add Step**. + +The **Step Library** panel appears.
+ +In **Step Library**, select **HTTP** under **Utilities**. The **HTTP Step** settings appear. + +![](./static/harness-template-library-38.png) +Click **Use Template**. The next page lists all the Project-level templates. + +Select the Template that you created. + +![](./static/harness-template-library-39.png) +Click the **Activity Log** to track all Template events. It shows you details like who created the Template and Template version changes. + +Click **Version Label**. + +Select the **Stable** version of the template. This ensures that any changes that you make to this version are propagated automatically to the Pipelines using this template. + +Click **Use Template**. + +![](./static/harness-template-library-40.png) +In **Name**, enter Quickstart. + +Under **Template Inputs**, click **Timeout** and select **Runtime input**. + +Click **URL** and select **Runtime input**. + +**Use Runtime Inputs instead of variable expressions:** when you want to templatize settings in a Stage or step template, use [Runtime Inputs](../20_References/runtime-inputs.md) instead of variable expressions. When Harness tries to resolve variable expressions to specific Stage-level settings using fully-qualified names, it can cause issues at runtime. Every Pipeline where the Stage or step template is inserted must use the same names for the fully-qualified name references to resolve. With Runtime Inputs, you can supply values for a setting at deployment runtime. + +Click **Apply Changes**. + +Click **Save**. + diff --git a/docs/platform/13_Templates/run-step-template-quickstart.md b/docs/platform/13_Templates/run-step-template-quickstart.md new file mode 100644 index 00000000000..2ce5794fdf5 --- --- /dev/null +++ b/docs/platform/13_Templates/run-step-template-quickstart.md @@ -0,0 +1,101 @@ +--- +title: Create a Step Template +description: The Harness Template Library enables you to standardize and create step templates that can be re-used across Pipelines and teams that use Harness.
This topic walks you through the steps to create a R… +# sidebar_position: 2 +helpdocs_topic_id: 99y1227h13 +helpdocs_category_id: m8tm1mgn2g +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The Harness Template Library enables you to standardize and create step templates that can be re-used across Pipelines and teams that use Harness. + +This topic walks you through the steps to create a Run Step template. + +### Objectives + +You'll learn how to: + +* Create a Run Step Template. +* Define Template parameters. +* Use the Run Step Template in a Pipeline. + +### Before you begin + +* Review [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) to establish a general understanding of Harness. +* The Run template in this quickstart is added to a CD Pipeline. If you are new to Harness CD, see [CD Quickstarts](https://ngdocs.harness.io/category/c9j6jejsws-cd-quickstarts). +* See [CIE Quickstarts](../../continuous-integration/ci-quickstarts/ci-pipeline-quickstart.md). + +### Step 1: Create a Template + +First, we'll create a Project-level Template in the **Builds** module. You can do this in any Project. + +Navigate to the **Builds** module and, in **Projects**, select the desired project. + +Next, select **Templates** under Project Setup. + +Click **New Template**. + +![](./static/run-step-template-quickstart-80.png) +Select **Step** to create a Step Template. + +The **Create New Step Template** settings appear. + +![](./static/run-step-template-quickstart-81.png) +In **Name**, enter a name for the template. You can enter Quickstart. + +In **Version Label**, enter a name for the version of the template. You can enter V1. + +Click **Save**. The **Step Library** panel appears. + +### Step 2: Add Step Parameters + +In **Step Library**, select **Run** under **Build**. + +![](./static/run-step-template-quickstart-82.png) +The **Step Parameters** settings appear.
+ +![](./static/run-step-template-quickstart-83.png) +Click **Container Registry** and select **Runtime input**, which lets you add values when you start a pipeline execution. + +In **Image**, select **Runtime input**. You can use any Docker image from any Docker registry, including Docker images from private registries. + +In **Command**, select **Runtime input**. + +Click **Save**. The new Template appears under the **Templates** list. + +**Use Runtime Inputs instead of variable expressions:** when you want to templatize settings in a Stage or step template, use [Runtime Inputs](../20_References/runtime-inputs.md) instead of variable expressions. When Harness tries to resolve variable expressions to specific Stage-level settings using fully-qualified names, it can cause issues at runtime. Every Pipeline where the Stage or step template is inserted must use the same names for the fully-qualified name references to resolve. With Runtime Inputs, you can supply values for a setting at deployment runtime. + +### Step 3: Add the Run Step Template to a Pipeline + +To add a Run Step Template in a Pipeline Execution, select the step and click **Add Step**. + +The **Step Library** panel appears. + +In **Step Library**, select **Run** under **Build**. The **Configure Run Step** settings appear. + +![](./static/run-step-template-quickstart-84.png) +Click **Use Template**. The next page lists all the Project-level templates. + +Select the Template that you created. + +![](./static/run-step-template-quickstart-85.png) +Click the **Activity Log** to track all Template events. It shows you details like who created the Template and Template version changes. + +Click **Version Label**. + +Select **Always use the stable version** of the template. This ensures that any changes that you make to this version are propagated automatically to the Pipelines using this template.
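When a Pipeline links a Step Template, the reference is stored by identifier and version label. The following is a hedged sketch of what a linked Run step can look like in Pipeline YAML; the identifiers are placeholders, and the assumption that omitting `versionLabel` makes the reference follow the stable version should be verified against your Harness version:

```yaml
steps:
  - step:
      name: Quickstart
      identifier: Quickstart
      template:
        templateRef: Quickstart
        # versionLabel omitted: the reference follows the stable version
        # versionLabel: V1   # or pin a specific version instead
        templateInputs:
          type: Run
          spec:
            connectorRef: <+input>   # Container Registry, supplied at runtime
            image: <+input>          # Docker image, supplied at runtime
            command: <+input>        # command, supplied at runtime
```

Pinning `versionLabel` isolates a Pipeline from future Template edits, while tracking the stable version picks up changes automatically, which is the trade-off described above.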
+ +Click **Use Template**. + +![](./static/run-step-template-quickstart-86.png) +In **Container Registry**, select **Runtime input**. + +In **Image**, select **Runtime input**. + +In **Command**, select **Runtime input**. + +Click **Apply Changes**. + +Click **Save**. + diff --git a/docs/platform/13_Templates/static/add-a-stage-template-48.png b/docs/platform/13_Templates/static/add-a-stage-template-48.png new file mode 100644 index 00000000000..c4334f74e8f Binary files /dev/null and b/docs/platform/13_Templates/static/add-a-stage-template-48.png differ diff --git a/docs/platform/13_Templates/static/add-a-stage-template-49.png b/docs/platform/13_Templates/static/add-a-stage-template-49.png new file mode 100644 index 00000000000..6482550d37d Binary files /dev/null and b/docs/platform/13_Templates/static/add-a-stage-template-49.png differ diff --git a/docs/platform/13_Templates/static/add-a-stage-template-50.png b/docs/platform/13_Templates/static/add-a-stage-template-50.png new file mode 100644 index 00000000000..13b86f39fa8 Binary files /dev/null and b/docs/platform/13_Templates/static/add-a-stage-template-50.png differ diff --git a/docs/platform/13_Templates/static/add-a-stage-template-51.png b/docs/platform/13_Templates/static/add-a-stage-template-51.png new file mode 100644 index 00000000000..b6a290c1211 Binary files /dev/null and b/docs/platform/13_Templates/static/add-a-stage-template-51.png differ diff --git a/docs/platform/13_Templates/static/add-a-stage-template-52.png b/docs/platform/13_Templates/static/add-a-stage-template-52.png new file mode 100644 index 00000000000..d5e116d1249 Binary files /dev/null and b/docs/platform/13_Templates/static/add-a-stage-template-52.png differ diff --git a/docs/platform/13_Templates/static/add-a-stage-template-53.png b/docs/platform/13_Templates/static/add-a-stage-template-53.png new file mode 100644 index 00000000000..9bb693d9455 Binary files /dev/null and b/docs/platform/13_Templates/static/add-a-stage-template-53.png
differ diff --git a/docs/platform/13_Templates/static/add-a-stage-template-54.png b/docs/platform/13_Templates/static/add-a-stage-template-54.png new file mode 100644 index 00000000000..d9b0b3d0a00 Binary files /dev/null and b/docs/platform/13_Templates/static/add-a-stage-template-54.png differ diff --git a/docs/platform/13_Templates/static/add-a-stage-template-55.png b/docs/platform/13_Templates/static/add-a-stage-template-55.png new file mode 100644 index 00000000000..5bc5cef5df7 Binary files /dev/null and b/docs/platform/13_Templates/static/add-a-stage-template-55.png differ diff --git a/docs/platform/13_Templates/static/add-a-stage-template-56.png b/docs/platform/13_Templates/static/add-a-stage-template-56.png new file mode 100644 index 00000000000..a3104ee02bb Binary files /dev/null and b/docs/platform/13_Templates/static/add-a-stage-template-56.png differ diff --git a/docs/platform/13_Templates/static/add-a-stage-template-57.png b/docs/platform/13_Templates/static/add-a-stage-template-57.png new file mode 100644 index 00000000000..422f52cdbd7 Binary files /dev/null and b/docs/platform/13_Templates/static/add-a-stage-template-57.png differ diff --git a/docs/platform/13_Templates/static/add-a-stage-template-58.png b/docs/platform/13_Templates/static/add-a-stage-template-58.png new file mode 100644 index 00000000000..66fa2234f3c Binary files /dev/null and b/docs/platform/13_Templates/static/add-a-stage-template-58.png differ diff --git a/docs/platform/13_Templates/static/add-a-stage-template-59.png b/docs/platform/13_Templates/static/add-a-stage-template-59.png new file mode 100644 index 00000000000..66eb90ca47c Binary files /dev/null and b/docs/platform/13_Templates/static/add-a-stage-template-59.png differ diff --git a/docs/platform/13_Templates/static/add-a-stage-template-60.png b/docs/platform/13_Templates/static/add-a-stage-template-60.png new file mode 100644 index 00000000000..d1d9052b594 Binary files /dev/null and 
b/docs/platform/13_Templates/static/add-a-stage-template-60.png differ diff --git a/docs/platform/13_Templates/static/add-a-stage-template-61.png b/docs/platform/13_Templates/static/add-a-stage-template-61.png new file mode 100644 index 00000000000..1aead9a1c2d Binary files /dev/null and b/docs/platform/13_Templates/static/add-a-stage-template-61.png differ diff --git a/docs/platform/13_Templates/static/add-a-stage-template-62.png b/docs/platform/13_Templates/static/add-a-stage-template-62.png new file mode 100644 index 00000000000..59ae16336da Binary files /dev/null and b/docs/platform/13_Templates/static/add-a-stage-template-62.png differ diff --git a/docs/platform/13_Templates/static/add-a-stage-template-63.png b/docs/platform/13_Templates/static/add-a-stage-template-63.png new file mode 100644 index 00000000000..08f90edec13 Binary files /dev/null and b/docs/platform/13_Templates/static/add-a-stage-template-63.png differ diff --git a/docs/platform/13_Templates/static/add-a-stage-template-64.png b/docs/platform/13_Templates/static/add-a-stage-template-64.png new file mode 100644 index 00000000000..c8802e0b108 Binary files /dev/null and b/docs/platform/13_Templates/static/add-a-stage-template-64.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-pipeline-template-24.png b/docs/platform/13_Templates/static/create-a-remote-pipeline-template-24.png new file mode 100644 index 00000000000..36e33e3ddba Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-remote-pipeline-template-24.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-pipeline-template-25.png b/docs/platform/13_Templates/static/create-a-remote-pipeline-template-25.png new file mode 100644 index 00000000000..7cc582a3dc1 Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-remote-pipeline-template-25.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-pipeline-template-26.png 
b/docs/platform/13_Templates/static/create-a-remote-pipeline-template-26.png new file mode 100644 index 00000000000..3e5b8d11321 Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-remote-pipeline-template-26.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-pipeline-template-27.png b/docs/platform/13_Templates/static/create-a-remote-pipeline-template-27.png new file mode 100644 index 00000000000..00fe58c464f Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-remote-pipeline-template-27.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-pipeline-template-28.png b/docs/platform/13_Templates/static/create-a-remote-pipeline-template-28.png new file mode 100644 index 00000000000..c68008ce8f7 Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-remote-pipeline-template-28.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-stage-template-87.png b/docs/platform/13_Templates/static/create-a-remote-stage-template-87.png new file mode 100644 index 00000000000..34d7e9c23a0 Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-remote-stage-template-87.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-stage-template-88.png b/docs/platform/13_Templates/static/create-a-remote-stage-template-88.png new file mode 100644 index 00000000000..36e33e3ddba Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-remote-stage-template-88.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-stage-template-89.png b/docs/platform/13_Templates/static/create-a-remote-stage-template-89.png new file mode 100644 index 00000000000..7cc582a3dc1 Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-remote-stage-template-89.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-stage-template-90.png 
b/docs/platform/13_Templates/static/create-a-remote-stage-template-90.png new file mode 100644 index 00000000000..b5d28aee50c Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-remote-stage-template-90.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-stage-template-91.png b/docs/platform/13_Templates/static/create-a-remote-stage-template-91.png new file mode 100644 index 00000000000..7ab74ddba5f Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-remote-stage-template-91.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-stage-template-92.png b/docs/platform/13_Templates/static/create-a-remote-stage-template-92.png new file mode 100644 index 00000000000..04512cd4874 Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-remote-stage-template-92.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-stage-template-93.png b/docs/platform/13_Templates/static/create-a-remote-stage-template-93.png new file mode 100644 index 00000000000..c867008054e Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-remote-stage-template-93.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-stage-template-94.png b/docs/platform/13_Templates/static/create-a-remote-stage-template-94.png new file mode 100644 index 00000000000..347a1905b91 Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-remote-stage-template-94.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-stage-template-95.png b/docs/platform/13_Templates/static/create-a-remote-stage-template-95.png new file mode 100644 index 00000000000..7ef9592422d Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-remote-stage-template-95.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-step-template-16.png b/docs/platform/13_Templates/static/create-a-remote-step-template-16.png new file mode 
100644 index 00000000000..0454fdaace3 Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-remote-step-template-16.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-step-template-17.png b/docs/platform/13_Templates/static/create-a-remote-step-template-17.png new file mode 100644 index 00000000000..7f91ad02128 Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-remote-step-template-17.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-step-template-18.png b/docs/platform/13_Templates/static/create-a-remote-step-template-18.png new file mode 100644 index 00000000000..36e33e3ddba Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-remote-step-template-18.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-step-template-19.png b/docs/platform/13_Templates/static/create-a-remote-step-template-19.png new file mode 100644 index 00000000000..7cc582a3dc1 Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-remote-step-template-19.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-step-template-20.png b/docs/platform/13_Templates/static/create-a-remote-step-template-20.png new file mode 100644 index 00000000000..49b1197b675 Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-remote-step-template-20.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-step-template-21.png b/docs/platform/13_Templates/static/create-a-remote-step-template-21.png new file mode 100644 index 00000000000..c0a96bf8d60 Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-remote-step-template-21.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-step-template-22.png b/docs/platform/13_Templates/static/create-a-remote-step-template-22.png new file mode 100644 index 00000000000..b9f5f2be495 Binary files /dev/null and 
b/docs/platform/13_Templates/static/create-a-remote-step-template-22.png differ diff --git a/docs/platform/13_Templates/static/create-a-remote-step-template-23.png b/docs/platform/13_Templates/static/create-a-remote-step-template-23.png new file mode 100644 index 00000000000..044ff570695 Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-remote-step-template-23.png differ diff --git a/docs/platform/13_Templates/static/create-a-secret-manager-template-29.png b/docs/platform/13_Templates/static/create-a-secret-manager-template-29.png new file mode 100644 index 00000000000..2af05637d36 Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-secret-manager-template-29.png differ diff --git a/docs/platform/13_Templates/static/create-a-secret-manager-template-30.png b/docs/platform/13_Templates/static/create-a-secret-manager-template-30.png new file mode 100644 index 00000000000..b474e064a86 Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-secret-manager-template-30.png differ diff --git a/docs/platform/13_Templates/static/create-a-secret-manager-template-31.png b/docs/platform/13_Templates/static/create-a-secret-manager-template-31.png new file mode 100644 index 00000000000..b09989631c2 Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-secret-manager-template-31.png differ diff --git a/docs/platform/13_Templates/static/create-a-secret-manager-template-32.png b/docs/platform/13_Templates/static/create-a-secret-manager-template-32.png new file mode 100644 index 00000000000..857604018ea Binary files /dev/null and b/docs/platform/13_Templates/static/create-a-secret-manager-template-32.png differ diff --git a/docs/platform/13_Templates/static/create-a-secret-manager-template-33.png b/docs/platform/13_Templates/static/create-a-secret-manager-template-33.png new file mode 100644 index 00000000000..a751bc001ac Binary files /dev/null and 
b/docs/platform/13_Templates/static/create-a-secret-manager-template-33.png differ diff --git a/docs/platform/13_Templates/static/create-pipeline-template-65.png b/docs/platform/13_Templates/static/create-pipeline-template-65.png new file mode 100644 index 00000000000..ffebf08ad23 Binary files /dev/null and b/docs/platform/13_Templates/static/create-pipeline-template-65.png differ diff --git a/docs/platform/13_Templates/static/create-pipeline-template-66.png b/docs/platform/13_Templates/static/create-pipeline-template-66.png new file mode 100644 index 00000000000..4a2270acf18 Binary files /dev/null and b/docs/platform/13_Templates/static/create-pipeline-template-66.png differ diff --git a/docs/platform/13_Templates/static/create-pipeline-template-67.png b/docs/platform/13_Templates/static/create-pipeline-template-67.png new file mode 100644 index 00000000000..fd3261a4a4f Binary files /dev/null and b/docs/platform/13_Templates/static/create-pipeline-template-67.png differ diff --git a/docs/platform/13_Templates/static/create-pipeline-template-68.png b/docs/platform/13_Templates/static/create-pipeline-template-68.png new file mode 100644 index 00000000000..4c1c295dad8 Binary files /dev/null and b/docs/platform/13_Templates/static/create-pipeline-template-68.png differ diff --git a/docs/platform/13_Templates/static/create-pipeline-template-69.png b/docs/platform/13_Templates/static/create-pipeline-template-69.png new file mode 100644 index 00000000000..4a6b666c153 Binary files /dev/null and b/docs/platform/13_Templates/static/create-pipeline-template-69.png differ diff --git a/docs/platform/13_Templates/static/create-pipeline-template-70.png b/docs/platform/13_Templates/static/create-pipeline-template-70.png new file mode 100644 index 00000000000..f0bc95ec16f Binary files /dev/null and b/docs/platform/13_Templates/static/create-pipeline-template-70.png differ diff --git a/docs/platform/13_Templates/static/create-pipeline-template-71.png 
b/docs/platform/13_Templates/static/create-pipeline-template-71.png new file mode 100644 index 00000000000..cac72a0d4cf Binary files /dev/null and b/docs/platform/13_Templates/static/create-pipeline-template-71.png differ diff --git a/docs/platform/13_Templates/static/create-pipeline-template-72.png b/docs/platform/13_Templates/static/create-pipeline-template-72.png new file mode 100644 index 00000000000..edbda83265a Binary files /dev/null and b/docs/platform/13_Templates/static/create-pipeline-template-72.png differ diff --git a/docs/platform/13_Templates/static/create-pipeline-template-73.png b/docs/platform/13_Templates/static/create-pipeline-template-73.png new file mode 100644 index 00000000000..f4eb725b0d8 Binary files /dev/null and b/docs/platform/13_Templates/static/create-pipeline-template-73.png differ diff --git a/docs/platform/13_Templates/static/create-pipeline-template-74.png b/docs/platform/13_Templates/static/create-pipeline-template-74.png new file mode 100644 index 00000000000..ba22e3db980 Binary files /dev/null and b/docs/platform/13_Templates/static/create-pipeline-template-74.png differ diff --git a/docs/platform/13_Templates/static/create-pipeline-template-75.png b/docs/platform/13_Templates/static/create-pipeline-template-75.png new file mode 100644 index 00000000000..ef444f860d3 Binary files /dev/null and b/docs/platform/13_Templates/static/create-pipeline-template-75.png differ diff --git a/docs/platform/13_Templates/static/create-pipeline-template-76.png b/docs/platform/13_Templates/static/create-pipeline-template-76.png new file mode 100644 index 00000000000..6788153d1b6 Binary files /dev/null and b/docs/platform/13_Templates/static/create-pipeline-template-76.png differ diff --git a/docs/platform/13_Templates/static/create-pipeline-template-77.png b/docs/platform/13_Templates/static/create-pipeline-template-77.png new file mode 100644 index 00000000000..828054a01c7 Binary files /dev/null and 
b/docs/platform/13_Templates/static/create-pipeline-template-77.png differ diff --git a/docs/platform/13_Templates/static/create-pipeline-template-78.png b/docs/platform/13_Templates/static/create-pipeline-template-78.png new file mode 100644 index 00000000000..9508cf61302 Binary files /dev/null and b/docs/platform/13_Templates/static/create-pipeline-template-78.png differ diff --git a/docs/platform/13_Templates/static/create-pipeline-template-79.png b/docs/platform/13_Templates/static/create-pipeline-template-79.png new file mode 100644 index 00000000000..1a7a05a209a Binary files /dev/null and b/docs/platform/13_Templates/static/create-pipeline-template-79.png differ diff --git a/docs/platform/13_Templates/static/harness-template-library-34.png b/docs/platform/13_Templates/static/harness-template-library-34.png new file mode 100644 index 00000000000..c4334f74e8f Binary files /dev/null and b/docs/platform/13_Templates/static/harness-template-library-34.png differ diff --git a/docs/platform/13_Templates/static/harness-template-library-35.png b/docs/platform/13_Templates/static/harness-template-library-35.png new file mode 100644 index 00000000000..da895ec069d Binary files /dev/null and b/docs/platform/13_Templates/static/harness-template-library-35.png differ diff --git a/docs/platform/13_Templates/static/harness-template-library-36.png b/docs/platform/13_Templates/static/harness-template-library-36.png new file mode 100644 index 00000000000..f9788067438 Binary files /dev/null and b/docs/platform/13_Templates/static/harness-template-library-36.png differ diff --git a/docs/platform/13_Templates/static/harness-template-library-37.png b/docs/platform/13_Templates/static/harness-template-library-37.png new file mode 100644 index 00000000000..31bba7cc085 Binary files /dev/null and b/docs/platform/13_Templates/static/harness-template-library-37.png differ diff --git a/docs/platform/13_Templates/static/harness-template-library-38.png 
b/docs/platform/13_Templates/static/harness-template-library-38.png new file mode 100644 index 00000000000..11c67822ec5 Binary files /dev/null and b/docs/platform/13_Templates/static/harness-template-library-38.png differ diff --git a/docs/platform/13_Templates/static/harness-template-library-39.png b/docs/platform/13_Templates/static/harness-template-library-39.png new file mode 100644 index 00000000000..496f070f53d Binary files /dev/null and b/docs/platform/13_Templates/static/harness-template-library-39.png differ diff --git a/docs/platform/13_Templates/static/harness-template-library-40.png b/docs/platform/13_Templates/static/harness-template-library-40.png new file mode 100644 index 00000000000..659550813a0 Binary files /dev/null and b/docs/platform/13_Templates/static/harness-template-library-40.png differ diff --git a/docs/platform/13_Templates/static/run-step-template-quickstart-80.png b/docs/platform/13_Templates/static/run-step-template-quickstart-80.png new file mode 100644 index 00000000000..9bc34380dd4 Binary files /dev/null and b/docs/platform/13_Templates/static/run-step-template-quickstart-80.png differ diff --git a/docs/platform/13_Templates/static/run-step-template-quickstart-81.png b/docs/platform/13_Templates/static/run-step-template-quickstart-81.png new file mode 100644 index 00000000000..da895ec069d Binary files /dev/null and b/docs/platform/13_Templates/static/run-step-template-quickstart-81.png differ diff --git a/docs/platform/13_Templates/static/run-step-template-quickstart-82.png b/docs/platform/13_Templates/static/run-step-template-quickstart-82.png new file mode 100644 index 00000000000..7b8efec7b10 Binary files /dev/null and b/docs/platform/13_Templates/static/run-step-template-quickstart-82.png differ diff --git a/docs/platform/13_Templates/static/run-step-template-quickstart-83.png b/docs/platform/13_Templates/static/run-step-template-quickstart-83.png new file mode 100644 index 00000000000..2e176ab1d24 Binary files /dev/null and 
b/docs/platform/13_Templates/static/run-step-template-quickstart-83.png differ diff --git a/docs/platform/13_Templates/static/run-step-template-quickstart-84.png b/docs/platform/13_Templates/static/run-step-template-quickstart-84.png new file mode 100644 index 00000000000..1d9c1989d35 Binary files /dev/null and b/docs/platform/13_Templates/static/run-step-template-quickstart-84.png differ diff --git a/docs/platform/13_Templates/static/run-step-template-quickstart-85.png b/docs/platform/13_Templates/static/run-step-template-quickstart-85.png new file mode 100644 index 00000000000..00a98a20a2f Binary files /dev/null and b/docs/platform/13_Templates/static/run-step-template-quickstart-85.png differ diff --git a/docs/platform/13_Templates/static/run-step-template-quickstart-86.png b/docs/platform/13_Templates/static/run-step-template-quickstart-86.png new file mode 100644 index 00000000000..7e16334fd71 Binary files /dev/null and b/docs/platform/13_Templates/static/run-step-template-quickstart-86.png differ diff --git a/docs/platform/13_Templates/static/template-00.png b/docs/platform/13_Templates/static/template-00.png new file mode 100644 index 00000000000..6b47ca7fa71 Binary files /dev/null and b/docs/platform/13_Templates/static/template-00.png differ diff --git a/docs/platform/13_Templates/static/template-01.png b/docs/platform/13_Templates/static/template-01.png new file mode 100644 index 00000000000..72589da7a53 Binary files /dev/null and b/docs/platform/13_Templates/static/template-01.png differ diff --git a/docs/platform/13_Templates/static/template-02.png b/docs/platform/13_Templates/static/template-02.png new file mode 100644 index 00000000000..61f9ad1f702 Binary files /dev/null and b/docs/platform/13_Templates/static/template-02.png differ diff --git a/docs/platform/13_Templates/static/template-03.png b/docs/platform/13_Templates/static/template-03.png new file mode 100644 index 00000000000..e701b1e947a Binary files /dev/null and 
b/docs/platform/13_Templates/static/template-03.png differ diff --git a/docs/platform/13_Templates/static/template-04.png b/docs/platform/13_Templates/static/template-04.png new file mode 100644 index 00000000000..b95e76b1f1f Binary files /dev/null and b/docs/platform/13_Templates/static/template-04.png differ diff --git a/docs/platform/13_Templates/static/template-05.png b/docs/platform/13_Templates/static/template-05.png new file mode 100644 index 00000000000..673312cf7f6 Binary files /dev/null and b/docs/platform/13_Templates/static/template-05.png differ diff --git a/docs/platform/13_Templates/static/template-06.png b/docs/platform/13_Templates/static/template-06.png new file mode 100644 index 00000000000..2897bc51fc1 Binary files /dev/null and b/docs/platform/13_Templates/static/template-06.png differ diff --git a/docs/platform/13_Templates/static/template-07.png b/docs/platform/13_Templates/static/template-07.png new file mode 100644 index 00000000000..81a0b7dd5a4 Binary files /dev/null and b/docs/platform/13_Templates/static/template-07.png differ diff --git a/docs/platform/13_Templates/static/template-08.png b/docs/platform/13_Templates/static/template-08.png new file mode 100644 index 00000000000..4a6cad10c9c Binary files /dev/null and b/docs/platform/13_Templates/static/template-08.png differ diff --git a/docs/platform/13_Templates/static/template-09.png b/docs/platform/13_Templates/static/template-09.png new file mode 100644 index 00000000000..3a592309865 Binary files /dev/null and b/docs/platform/13_Templates/static/template-09.png differ diff --git a/docs/platform/13_Templates/static/template-10.png b/docs/platform/13_Templates/static/template-10.png new file mode 100644 index 00000000000..9870c0d8d88 Binary files /dev/null and b/docs/platform/13_Templates/static/template-10.png differ diff --git a/docs/platform/13_Templates/static/template-11.png b/docs/platform/13_Templates/static/template-11.png new file mode 100644 index 00000000000..6fa98980e4b 
Binary files /dev/null and b/docs/platform/13_Templates/static/template-11.png differ diff --git a/docs/platform/13_Templates/static/template-12.png b/docs/platform/13_Templates/static/template-12.png new file mode 100644 index 00000000000..98fc8c7a10f Binary files /dev/null and b/docs/platform/13_Templates/static/template-12.png differ diff --git a/docs/platform/13_Templates/static/template-13.png b/docs/platform/13_Templates/static/template-13.png new file mode 100644 index 00000000000..b6b49511d85 Binary files /dev/null and b/docs/platform/13_Templates/static/template-13.png differ diff --git a/docs/platform/13_Templates/static/template-14.png b/docs/platform/13_Templates/static/template-14.png new file mode 100644 index 00000000000..9ba70abb24b Binary files /dev/null and b/docs/platform/13_Templates/static/template-14.png differ diff --git a/docs/platform/13_Templates/static/template-15.png b/docs/platform/13_Templates/static/template-15.png new file mode 100644 index 00000000000..9deac90741c Binary files /dev/null and b/docs/platform/13_Templates/static/template-15.png differ diff --git a/docs/platform/13_Templates/static/use-a-template-41.png b/docs/platform/13_Templates/static/use-a-template-41.png new file mode 100644 index 00000000000..1e7b85fb6be Binary files /dev/null and b/docs/platform/13_Templates/static/use-a-template-41.png differ diff --git a/docs/platform/13_Templates/static/use-a-template-42.png b/docs/platform/13_Templates/static/use-a-template-42.png new file mode 100644 index 00000000000..3c231e2b0fe Binary files /dev/null and b/docs/platform/13_Templates/static/use-a-template-42.png differ diff --git a/docs/platform/13_Templates/static/use-a-template-43.png b/docs/platform/13_Templates/static/use-a-template-43.png new file mode 100644 index 00000000000..a5ae8a58208 Binary files /dev/null and b/docs/platform/13_Templates/static/use-a-template-43.png differ diff --git a/docs/platform/13_Templates/static/use-a-template-44.png 
b/docs/platform/13_Templates/static/use-a-template-44.png new file mode 100644 index 00000000000..8f372f31647 Binary files /dev/null and b/docs/platform/13_Templates/static/use-a-template-44.png differ diff --git a/docs/platform/13_Templates/static/use-a-template-45.png b/docs/platform/13_Templates/static/use-a-template-45.png new file mode 100644 index 00000000000..64af723fda0 Binary files /dev/null and b/docs/platform/13_Templates/static/use-a-template-45.png differ diff --git a/docs/platform/13_Templates/static/use-a-template-46.png b/docs/platform/13_Templates/static/use-a-template-46.png new file mode 100644 index 00000000000..210f65992a1 Binary files /dev/null and b/docs/platform/13_Templates/static/use-a-template-46.png differ diff --git a/docs/platform/13_Templates/static/use-a-template-47.png b/docs/platform/13_Templates/static/use-a-template-47.png new file mode 100644 index 00000000000..377d379c50d Binary files /dev/null and b/docs/platform/13_Templates/static/use-a-template-47.png differ diff --git a/docs/platform/13_Templates/template.md b/docs/platform/13_Templates/template.md new file mode 100644 index 00000000000..c2ce440ca93 --- /dev/null +++ b/docs/platform/13_Templates/template.md @@ -0,0 +1,216 @@ +--- +title: Templates Overview +description: Harness enables you to add Templates to create re-usable logic and Harness entities (like Steps, Stages, and Pipelines) in your Pipelines. You can link these Templates in your Pipelines or share them… +sidebar_position: 10 +helpdocs_topic_id: 6tl8zyxeol +helpdocs_category_id: sy6sod35zi +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness enables you to add Templates to create re-usable logic and Harness entities (like Steps, Stages, and Pipelines) in your Pipelines. You can link these Templates in your Pipelines or share them with your teams for improved efficiency. 
+
+Templates enhance developer productivity, reduce onboarding time, and enforce standardization across the teams that use Harness.
+
+This topic provides an overview of Templates in Harness.
+
+### Limitations
+
+Harness Templates currently have the following limitations:
+
+* Deleting an existing Template with active Pipeline references deletes the references too.
+* If you convert a runtime input in a Template to a fixed value, the input type does not change in the linked Pipeline. You must manually edit the linked Pipeline YAML and provide the fixed values.
+* If you convert a fixed value input to a runtime input in your Template, the input type does not change in the linked Pipeline. You must click the Template in the linked Pipeline to refresh it and save the Pipeline again.
+
+### What is a Template in Harness?
+
+Harness Templates let you design reusable content, logic, and parameters, keeping the application the main focus of your Pipelines. Instead of creating Pipelines from scratch each time, Harness lets you select from pre-built Templates and simply link them to your Pipelines. This makes Pipeline development easier by reducing duplication and increasing reusability.
+
+You can share your work with your team and reuse it in your Pipelines.
+
+You can add Templates to Harness CI and CD modules. All your Templates are listed in **Templates** based on their scope. This topic also refers to this as the **Template Library**.
+
+![](./static/template-00.png)
+
+You can do the following with Templates in Harness:
+
+* Add multiple versions of a specific Template.
+* Preview, Copy, Edit, and Delete a specific Template.
+* Create nested Templates. For example, you can link a Step Template to a Stage Template and link this Stage Template to a Pipeline Template.
+* Keep track of all Template events with the **Activity Log** option. It shows you details like who created the Template and Template version changes.
+* Clone Templates in Git and then sync them with Harness using [Harness Git Experience](../10_Git-Experience/git-experience-overview.md).
+
+### Why Should You Use Templates?
+
+* Templates are a convenient way to share common logic centrally without duplicating it across multiple Pipelines.
+For example, if you have tasks or operations that every Pipeline must perform, make them part of a Template, and then use that Template in your Pipelines.
+* Reduce the complexity and size of a single Pipeline.
+* Set a pattern that you and your team can follow throughout your Pipelines.
+* Save time and create generic Templates that you can use across the scopes in your Harness Account.
+* Apply a change in one file rather than in many stages.
+
+### Templates at Scopes
+
+You can add Templates at any [scope](../4_Role-Based-Access-Control/1-rbac-in-harness.md) in Harness.
+
+The following table shows what it means to add Templates at different scopes or hierarchies:
+
+| Scope | When to add Templates? |
+| --- | --- |
+| **Account** | To share Step/Stage/Pipeline Templates with users in the Account, as well as users in the Organizations and Projects created within this Account. |
+| **Organization** | To share Step/Stage/Pipeline Templates with users in the Organization, as well as within the Projects created within the Org. |
+| **Project** | To share Step/Stage/Pipeline Templates with users within the Project. |
+
+### What Are The Types of Templates in Harness?
+
+You can add the following types of Templates to your Harness Account/Org/Project:
+
+* Step
+* Stage
+* Pipeline
+
+#### Step Template
+
+Defines a linear sequence of operations for a job.
+
+![](./static/template-01.png)
+For detailed steps to add a Step Template, see [Create a Step Template](run-step-template-quickstart.md).
+
+#### Stage Template
+
+Defines a set of stages of related jobs.
+
+![](./static/template-02.png)
+For detailed steps to create a Stage Template, see [Create a Stage Template](add-a-stage-template.md).
+
+#### Pipeline Template
+
+You can create your own Pipeline Templates to standardize and distribute reusable Pipelines across your team or among multiple teams. The underlying structure of a Pipeline Template is the same as that of a Pipeline YAML.
+
+![](./static/template-03.png)
+With Pipeline Templates you can:
+
+* Create a Template based on an existing Pipeline.
+* Share the Template across scopes in Harness.
+
+For detailed steps to create a Pipeline Template, see [Create a Pipeline Template](create-pipeline-template.md).
+
+### Versioning
+
+Versioning a Template enables you to create a new Template without modifying the existing one. When you plan to introduce a major change in a Project that depends on an existing Template, you can use versioning. You can create multiple versions of a Template.
+
+You can make changes to the same version of a Template as long as the Template's inputs remain unaltered. For any change to the inputs, you must create a new version of the Template.
+
+#### Stable Version
+
+A stable version is a Template version that only introduces breaking changes in major release milestones.
+
+When using a Template, you can either link to a specific version or always use the stable version. When you mark a new version of the Template as stable, Pipelines that link to the stable version automatically pick it up.
+
+You can set any version of your Template as the stable version using the **Set as Stable** option.
+
+![](./static/template-04.png)
+### Preview a Template
+
+You can view the **Details** and **Activity Log** of your Template by clicking **Preview Template**.
+
+![](./static/template-05.png)
+Activity Log enables you to view and track all the events corresponding to your Template.
+
+![](./static/template-06.png)
+### Open/Edit a Template
+
+You can use the **Open/Edit Template** option to navigate to the Template Studio and edit the Template to suit your needs.
+
+![](./static/template-07.png)
+You can perform the following actions while editing a Template:
+
+* Modify the name and version details of the Template
+* Set the Template version (to stable or any other version)
+* View the YAML file for the Template
+* Modify Step or Stage configurations
+
+You can edit any version of your Template.
+
+![](./static/template-08.png)
+Harness enables you to choose any one of the following:
+
+* **Save** - Save the updates in the selected version where you made the changes.![](./static/template-09.png)
+* **Save as new version** - Create a new version of the selected Template and save it with the changes you just made.![](./static/template-10.png)
+* **Save as new Template** - Create a new Template from the selected Template and save it with the changes you just made.![](./static/template-11.png)
+
+### Template Settings
+
+You can set a specific version of your Template as the stable version by clicking **Template Settings**.
+
+![](./static/template-12.png)
+### Delete a Template
+
+You can delete your Templates at any point. Deleting a Template also removes any of its references in your Pipelines.
+
+![](./static/template-13.png)
+### Template Inputs
+
+You can customize Templates by using placeholder expressions and [Runtime Inputs](../20_References/runtime-inputs.md) for their parameters and data types. Each time a Pipeline that uses the Template runs, users can provide values for these inputs.
+
+![](./static/template-14.png)
+See [Fixed Values, Runtime Inputs, and Expressions](../20_References/runtime-inputs.md).
+
+### Template YAML
+
+You can use the Harness visual or YAML editors to create your Templates.
+
+![](./static/template-15.png)
+Here's an example of the YAML for a CD Stage Template:
+
+```
+template:
+  name: Quickstart
+  identifier: Quickstart
+  versionLabel: v1
+  type: Stage
+  projectIdentifier: CD_Examples
+  orgIdentifier: default
+  tags: {}
+  spec:
+    type: Deployment
+    spec:
+      serviceConfig:
+        serviceDefinition:
+          type: Kubernetes
+          spec:
+            variables: []
+        serviceRef: nginx
+      infrastructure:
+        infrastructureDefinition:
+          type: KubernetesDirect
+          spec:
+            connectorRef: docbuilds
+            namespace: default
+            releaseName: release-<+INFRA_KEY>
+            allowSimultaneousDeployments: false
+        environmentRef: helmchart
+      execution:
+        steps:
+          - step:
+              type: K8sRollingDeploy
+              name: Rolling
+              identifier: Rolling
+              spec:
+                skipDryRun: false
+              timeout: 10m
+        rollbackSteps: []
+      serviceDependencies: []
+    failureStrategies:
+      - onFailure:
+          errors:
+            - AllErrors
+          action:
+            type: StageRollback
+```
+The YAML editor is a full-fledged YAML IDE with autocomplete and other features. See [Harness YAML Quickstart](../8_Pipelines/harness-yaml-quickstart.md).
+
diff --git a/docs/platform/13_Templates/use-a-template.md b/docs/platform/13_Templates/use-a-template.md
new file mode 100644
index 00000000000..c2a944b372e
--- /dev/null
+++ b/docs/platform/13_Templates/use-a-template.md
@@ -0,0 +1,90 @@
+---
+title: Use a Template
+description: This topic describes how to use an existing Template in a Pipeline.
+# sidebar_position: 2
+helpdocs_topic_id: 1re7pz9bj8
+helpdocs_category_id: m8tm1mgn2g
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness enables you to add Templates to create reusable logic and Harness entities (like Steps, Stages, and Pipelines) in your Pipelines. You can link these Templates in your Pipelines or share them with your teams for improved efficiency. Templates enhance developer productivity, reduce onboarding time, and enforce standardization across the teams that use Harness.
+
+Harness Templates also give you the option of reusing a Pipeline Template to create a Pipeline. You do not have to create a new Template each time. Once you've [created a Template](create-pipeline-template.md), you can reuse it to create multiple Pipelines.
+
+This topic explains how to use an existing Pipeline Template in a Pipeline.
+
+### Before you begin
+
+* [Templates Overview](template.md)
+* [Create a Pipeline Template](create-pipeline-template.md)
+
+### Step: Use a Template
+
+To use a Pipeline Template, navigate to the **Deployments** module, and select **Pipelines**.
+
+Click **New Pipeline**.
+
+![](./static/use-a-template-41.png)
+Enter a **Name** for your Pipeline and click **Start with Template**.
+
+The next page lists all the available Pipeline Templates.
+
+Select the Template that you want to use.
+
+![](./static/use-a-template-42.png)
+You can filter the Templates by scope. You can also use the search bar to find the Template that you want to use.
+
+![](./static/use-a-template-43.png)
+In **Details**, you can see the following details about the selected Template:
+
+* Type
+* Description
+* Tags
+* Version Label: Select the version label. Harness recommends using the **Stable** version of the Template. This ensures that any changes that you make to this version are propagated automatically to the Pipelines using this Template.
+
+In **Template Inputs**, you can view the number of Step or Stage inputs in that Template.
+
+Click **YAML** to view the YAML details of the Template.
+
+Click **Activity Log** to track all Template events. It shows you details like who created the Template and Template version changes.
+
+Click **Use Template** to use this Template to create your Pipeline.
+
+Add the runtime input values (if required), and click **Save**. The **Pipeline is published successfully** message appears.
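+
+When you use a Template this way, the saved Pipeline references the Template rather than inlining its stages. The following is a rough, illustrative sketch of what the saved YAML might look like; the identifiers, the `account.` scope prefix, and the empty inputs map are assumptions, not taken from this topic:
+
+```
+pipeline:
+  name: MyPipeline
+  identifier: MyPipeline
+  projectIdentifier: CD_Examples
+  orgIdentifier: default
+  template:
+    templateRef: account.MyPipelineTemplate  # link to the Template, not a copy
+    versionLabel: v1                         # or the stable version
+    templateInputs: {}                       # runtime input values go here
+```
+
+Because only the reference and the runtime input values are stored, changes to the referenced Template version propagate to this Pipeline without editing the Pipeline itself.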
+ +You can also perform the following actions: + +* Change Template +* Remove Template +* Open Template in new tab +* Preview Template YAML + +![](./static/use-a-template-44.png) +Once you've made all the changes, click **Run** and then click **Run Pipeline**. The Template is deployed. + +![](./static/use-a-template-45.png) +### Option: Copy to Pipeline + +You can also copy the contents of a specific Template to your Pipeline using the **Copy to Pipeline** option. This doesn't add any reference to the Template. Copying a Template to a Pipeline is different from using a Template for your Pipeline. You can't change any step or stage parameters when you link to a Template from your Pipeline. + +Select the Pipeline Template that you want to copy. + +In **Template Inputs**, click **Copy to Pipeline**. + +![](./static/use-a-template-46.png) +In **Create new Pipeline**, enter a name and click **Start**. + +Add a Stage (if required). + +Once you've made all the changes, click **Save**. You can **Save Pipeline** or **Save as Template**. + +![](./static/use-a-template-47.png) +Click **Run** to deploy the Template. 
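+
+Because **Copy to Pipeline** inlines the Template's contents, the resulting Pipeline YAML carries the full stage definitions and no `template:` reference. A hypothetical sketch (names and the abbreviated stage spec are illustrative):
+
+```
+pipeline:
+  name: CopiedPipeline
+  identifier: CopiedPipeline
+  stages:
+    - stage:
+        name: Deploy
+        identifier: Deploy
+        type: Deployment
+        spec: {}  # copied stage configuration, fully editable in place
+```
+
+Later changes to the source Template do not affect a Pipeline created this way.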
+
+### See also
+
+* [Create a Step Template](run-step-template-quickstart.md)
+* [Create an HTTP Step Template](harness-template-library.md)
+* [Create a Stage Template](add-a-stage-template.md)
+
diff --git a/docs/platform/14_Policy-as-code/_category_.json b/docs/platform/14_Policy-as-code/_category_.json
new file mode 100644
index 00000000000..fdfaae7d2d8
--- /dev/null
+++ b/docs/platform/14_Policy-as-code/_category_.json
@@ -0,0 +1 @@
+{"label": "Policy as Code", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Policy as Code"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "zoc8fpiifm"}}
\ No newline at end of file
diff --git a/docs/platform/14_Policy-as-code/add-a-governance-policy-step-to-a-connector.md b/docs/platform/14_Policy-as-code/add-a-governance-policy-step-to-a-connector.md
new file mode 100644
index 00000000000..a0eeaa1865f
--- /dev/null
+++ b/docs/platform/14_Policy-as-code/add-a-governance-policy-step-to-a-connector.md
@@ -0,0 +1,104 @@
+---
+title: Use Harness Policy As Code For Connectors
+description: Describes steps to add policies to Connectors.
+# sidebar_position: 2
+helpdocs_topic_id: 4kuokatvyw
+helpdocs_category_id: zoc8fpiifm
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness provides governance using Open Policy Agent (OPA), Policy Management, and Rego policies.
+
+You can create a policy and apply it to all Connectors in your Account, Org, and Project. The policy is evaluated on Connector-level events like On Save, which occur during Connector creation and updates. See [Harness Governance Quickstart](harness-governance-quickstart.md).
+
+### Before you begin
+
+* [Harness Governance Overview](harness-governance-overview.md)
+* [Harness Governance Quickstart](harness-governance-quickstart.md)
+* Policies use the OPA authoring language Rego.
For more information, see [OPA Policy Authoring](https://academy.styra.com/courses/opa-rego).
+
+### Step 1: Add a Policy
+
+In Harness, go to **Account Settings**.
+
+Click **Policies**.
+
+![](./static/add-a-governance-policy-step-to-a-connector-14.png)
+Click **Policies**, and then click **New Policy**.
+
+![](./static/add-a-governance-policy-step-to-a-connector-15.png)
+The **New Policy** settings appear.
+
+Enter a **Name** for your policy and click **Apply**.
+
+![](./static/add-a-governance-policy-step-to-a-connector-16.png)
+Next, add your policy. Enter your own Rego policy. For example:
+
+```
+package docexamplepolicy
+
+deny["Connector of type vault cannot be created"] {
+  input.entity.type == "Vault"
+}
+```
+Click **Save**.
+
+### Step 2: Add the Policy to a Policy Set
+
+After you create your policy, you must add it to a Policy Set before applying it to your Connectors.
+
+In **Policies**, click **Policy Sets**, then click **New Policy Set**.
+
+The **Policy Set** settings appear.
+
+In **Name**, enter the name of the Policy Set.
+
+In **Description**, enter a description of the Policy Set.
+
+In **Entity type**, select **Connector**.
+
+In **On what event should the Policy Set be evaluated**, select **On save**.
+
+Click **Continue**.
You can select one of the following + +* **Warn & continue** - You will receive a warning if the policy is not met when the Connector is evaluated, but the Connector will be saved and you may proceed. +* **Error and exit** - You'll get an error and be exited without saving the Connector if the policy isn't met when the Connector is examined. + +![](./static/add-a-governance-policy-step-to-a-connector-19.png) +Click **Apply**, and then click **Finish**. + +Now your Policy Set is automatically set to Enforced, to make it unenforced, toggle off the **Enforced** button. + +![](./static/add-a-governance-policy-step-to-a-connector-20.png) +### Step 3: Apply a Policy to a Connector + +After you have created your Policy Set, and added your policies to it, apply the policy to a Connector.  + +Let us take the example of a [GitHub Connector](../7_Connectors/add-a-git-hub-connector.md). + +You can add a Connector from any module in your Project in Project setup, or in your Organization, or Account Resources. + +In Account Resources, click **Connectors**. + +Click **New Connector**, and then click **GitHub**. + +Enter all the required fields and click **Save and Continue**. + +Based on your selection in the Policy Evaluation criteria, you will either receive a warning or an error. + +![](./static/add-a-governance-policy-step-to-a-connector-21.png) +### See also + +* [Harness Policy As Code Overview](../../feature-flags/2-ff-using-flags/8-harness-policy-engine.md) + diff --git a/docs/platform/14_Policy-as-code/add-a-governance-policy-step-to-a-pipeline.md b/docs/platform/14_Policy-as-code/add-a-governance-policy-step-to-a-pipeline.md new file mode 100644 index 00000000000..dde4ba8db87 --- /dev/null +++ b/docs/platform/14_Policy-as-code/add-a-governance-policy-step-to-a-pipeline.md @@ -0,0 +1,219 @@ +--- +title: Add a Policy Step to a Pipeline +description: Add a Policy step to your Stage. 
+# sidebar_position: 2 +helpdocs_topic_id: xy8zsn8fa3 +helpdocs_category_id: zoc8fpiifm +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind the feature flags `OPA_PIPELINE_GOVERNANCE` and `CUSTOM_POLICY_STEP`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.Harness provides governance using Open Policy Agent (OPA), Policy Management, and Rego policies. + +You can enforce policies in two ways: + +* **Account, Org, and** **Project-specific:** you can create the policy and apply it to all Pipelines in your Account, Org, and Project. The policy is evaluated on Pipeline-level events like On Run and On Save. See [Harness Governance Quickstart](harness-governance-quickstart.md). +* **Stage-specific:** you can add a Policy step, add a new/existing Policy Set to it, and then provide a JSON payload to evaluate. + + The policy is evaluated whenever the Pipeline reaches the Policy step. + + Policy evaluation can be performed on data generated when the Pipeline is running, such as resolved expressions. + + Policy evaluation can fail Pipeline execution. + +This topic describes how to add a Policy step to a Stage. + +### Before you begin + +* If you are new to Harness Governance, see [Harness Governance Overview](harness-governance-overview.md) and [Harness Governance Quickstart](harness-governance-quickstart.md). + +### Limitations + +* The policies that can be enforced are currently restricted by the Harness entities supported by the OPA service. +* Currently, the Policy Step is only available in Continuous Delivery Stages. +* Currently, only the **Custom** entity type is supported for the Policy step. + + A Custom entity type allows flexibility to enforce policy evaluations during Pipeline execution with different input data. For example, Terraform plans and deployment Environment details. A Policy Set with a Custom type does not have an event configured. 
+* [Runtime Inputs](../20_References/runtime-inputs.md) are expanded before evaluation. You cannot perform checks to ensure a setting is always a Runtime Input, Expression, or Fixed Value. + +### Visual Summary + +Here's a quick video showing you how to use the Policy step to evaluate a custom JSON payload. + +### Step 1: Add the Policy Step + +Open a Harness Pipeline, and then add or open a new CD Stage. + +In the **Execution** phase of the Stage, click **Add Step**. + +In **Governance**, click the **Policy** step. + +![](./static/add-a-governance-policy-step-to-a-pipeline-00.png) +The Policy step is added the to Stage. + +Enter a name and timeout for the step. + +Next you will specify the Entity Type and then add the Policy and Payload to the step. + +### Step 2: Select Entity Type + +In **Entity Type**, select the Harness entity type for the step. For example, **Custom**. + +Currently, only the **Custom** entity type is supported. Additional entity types such as Pipeline will be added soon.A **Custom** entity type allows flexibility to enforce policy evaluations during Pipeline execution with different input data. For example, Terraform plans and deployment Environment details. + +A **Custom** type does not have an event configured. It is triggered when the Pipeline step is reached during Pipeline execution. + +Next, you can add the Policy Set to the step. + +### Step 3: Add Policy Sets + +A Policy Set is a set of rules (policies) that are evaluated together. + +Policy Sets are stored to the Harness OPA server for a given entity type and event in Harness. + +Policy Sets are saved at the Harness account, Organization, or Project level, and where they are saved determines the scope of the Policy Set. + +A Policy Set at the account level can be used in any Policy Step in the Orgs and Projects in the account. A Policy Set at the Project level can only be used in steps in that Project alone. + +In **Policy Set**, click **Add/Modify Policy Set**. 
+
+In this example, we'll use an existing Policy Set. For details on creating a Policy Set, see [Harness Governance Quickstart](harness-governance-quickstart.md).
+
+![](./static/add-a-governance-policy-step-to-a-pipeline-01.png)
+Navigate to a **Policy Set**, select it, and click **Apply**.
+
+The Policy Set you select must be evaluated **On Step**. Currently, only the **Custom** entity type is supported, so the Policy Set you select must have the **Custom** entity type selected.
+
+![](./static/add-a-governance-policy-step-to-a-pipeline-02.png)
+Also, select how you want the Pipeline to handle policy evaluation failures:
+
+![](./static/add-a-governance-policy-step-to-a-pipeline-03.png)
+The Policy Set is added.
+
+### Step 4: Add Payload
+
+Currently, only the **Custom** entity type is supported. The JSON payload you add is a free-form payload that can be evaluated by your Policy Set at runtime.
+
+In **Payload**, enter the payload to be evaluated by the Policy Set(s) you selected in **Policy Set**.
+
+### Option: Using Fixed Values, Runtime Inputs, and Expressions in Policy Steps
+
+The **Policy Set** and **Payload** settings allow for Fixed Values, Runtime Inputs, and Expressions.
+
+For details on how these work in Harness, see [Fixed Values, Runtime Inputs, and Expressions](../20_References/runtime-inputs.md).
+
+#### Fixed Values
+
+Fixed Values are shown in the **Policy Set** and **Payload** settings earlier in this topic.
+
+#### Runtime Inputs
+
+You can select Runtime Inputs for one or both settings. When the Pipeline is executed, you provide the Policy Set and/or Payload for the step.
+
+![](./static/add-a-governance-policy-step-to-a-pipeline-04.png)
+#### Expressions
+
+You can select Expressions for one or both settings. When the Pipeline is executed, Harness resolves the expressions for the Policy Set and/or Payload for the step.
+
+![](./static/add-a-governance-policy-step-to-a-pipeline-05.png)
+### Step 5: Test the Policy Step
+
+New to policies and Policy Sets? See [Harness Governance Quickstart](harness-governance-quickstart.md).
+
+Let's look at an example of the Policy step.
+
+We'll use an HTTP step to do a REST GET that fetches the Harness SaaS version number, and then use the Policy step to evaluate the response against a version number check policy.
+
+The policy checks whether the version is greater than v0.200.0:
+
+
+```
+package pipeline_environment
+
+deny[sprintf("version must be greater than v0.200.0 but is currently '%s'", [input.version])] {
+    version := trim(input.version, "v")
+    semver.compare(version, "0.200.0") < 0
+}
+```
+Currently, only the **Custom** entity type is supported. The JSON payload you add is a free-form payload that can be evaluated by your Policy Set at runtime. It does not need to be a Harness entity.
+
+Next, in our Pipeline we'll add an [HTTP step](https://docs.harness.io/article/0aiyvs61o5-using-http-requests-in-cd-pipelines) to check the version at the HTTP endpoint `https://app.harness.io/prod1/pm/api/v1/system/version`, and a **Policy** step that uses our policy to check the version returned from the HTTP step.
+
+Here's the YAML for a Pipeline that uses the step:
+```
+pipeline:
+  name: Policy
+  identifier: Policy
+  allowStageExecutions: false
+  projectIdentifier: CD_Examples
+  orgIdentifier: default
+  tags: {}
+  stages:
+    - stage:
+        name: Test
+        identifier: Test
+        description: ""
+        type: Approval
+        spec:
+          execution:
+            steps:
+              - step:
+                  type: Http
+                  name: Get version
+                  identifier: Get_version
+                  spec:
+                    url: https://app.harness.io/prod1/pm/api/v1/system/version
+                    method: GET
+                    headers: []
+                    outputVariables: []
+                  timeout: 10s
+              - step:
+                  type: Policy
+                  name: Version Policy
+                  identifier: Version_Policy
+                  spec:
+                    policySets:
+                      - Version
+                    type: Custom
+                    policySpec:
+                      payload: 
<+pipeline.stages.Test.spec.execution.steps.Get_version.output.httpResponseBody>
+                  timeout: 10m
+              - step:
+                  type: ShellScript
+                  name: Pass or Fail
+                  identifier: Pass_or_Fail
+                  spec:
+                    shell: Bash
+                    onDelegate: true
+                    source:
+                      type: Inline
+                      spec:
+                        script: echo <+pipeline.stages.Test.spec.execution.steps.Version_Policy.output.status>
+                    environmentVariables: []
+                    outputVariables: []
+                    executionTarget: {}
+                  timeout: 10m
+          serviceDependencies: []
+        tags: {}
+```
+The Pipeline YAML also includes a Shell Script step that displays an output expression for the Policy step.
+
+As you can see in the above **Policy** step, in **Payload**, we reference the output from the HTTP step:
+
+
+```
+<+pipeline.stages.Test.spec.execution.steps.Get_version.output.httpResponseBody>
+```
+Now when we run the Pipeline, the Policy step evaluates the JSON in **Payload** and the evaluation passes.
+
+### Policy Step Expressions
+
+You can use the following Harness expressions to output the Policy step status in a [Shell Script](https://docs.harness.io/article/k5lu0u6i1i-using-shell-scripts) step:
+
+* `<+execution.steps.[policy step Id].output.status>`
+* `<+execution.steps.[policy step Id].output.policySetDetails.Example.status>`
+
+![](./static/add-a-governance-policy-step-to-a-pipeline-06.png)
+For example, if the Policy step [Id](../20_References/entity-identifier-reference.md) is `Check`, you would reference it like this:
+
+
+```
+echo "status: "<+execution.steps.Check.output.status>
+
+echo "projectPolicySetDetails: "<+execution.steps.Check.output.policySetDetails.Example.status>
+```
+The output would be something like this:
+
+
+```
+Executing command ...
+status: pass
+projectPolicySetDetails: pass
+Command completed with ExitCode (0)
+```
diff --git a/docs/platform/14_Policy-as-code/add-a-policy-engine-step-to-a-secret.md b/docs/platform/14_Policy-as-code/add-a-policy-engine-step-to-a-secret.md
new file mode 100644
index 00000000000..a9ea18df62a
--- /dev/null
+++ b/docs/platform/14_Policy-as-code/add-a-policy-engine-step-to-a-secret.md
@@ -0,0 +1,108 @@
+---
+title: Use Harness Policy As Code For Secrets
+description: Add a Policy step to your Secret.
+# sidebar_position: 2
+helpdocs_topic_id: ozw30qez44
+helpdocs_category_id: zoc8fpiifm
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness provides governance using Open Policy Agent (OPA), Policy Management, and Rego policies.
+
+You can create a policy and apply it to all Secrets in your Account, Org, and Project. The policy is evaluated on Secret-level events like **On Save**, which occurs during Secret creation and updates. See [Harness Governance Quickstart](harness-governance-quickstart.md).
+
+This topic describes how to add a Policy step to a Secret.
+
+### Before you begin
+
+* [Harness Governance Overview](harness-governance-overview.md)
+* [Harness Governance Quickstart](harness-governance-quickstart.md)
+* Policies are written in the OPA authoring language, Rego. For more information, see [OPA Policy Authoring](https://academy.styra.com/courses/opa-rego).
+
+### Limitations
+
+* The policies that can be enforced are currently restricted by the Harness entities supported by the OPA service.
+
+### Step 1: Add a Policy
+
+In Harness, go to **Account Settings**.
+
+Click **Policies**.
+
+![](./static/add-a-policy-engine-step-to-a-secret-46.png)
+Click **Policies**, and then click **New Policy**.
+
+![](./static/add-a-policy-engine-step-to-a-secret-47.png)
+**New Policy** settings appear.
+
+Enter a **Name** for your policy and click **Apply**.
+
+![](./static/add-a-policy-engine-step-to-a-secret-48.png)
+Next, you need to add your policy. 
Enter your own Rego policy. For example:
+
+
+```
+package docexamplepolicy
+
+deny {
+    input.secret.description == "Secret description"
+ }
+```
+Click **Save**.
+
+### Step 2: Add the Policy to a Policy Set
+
+After you create your policy, you must add it to a Policy Set before applying it to your Secrets.
+
+In **Policies**, click **Policy Sets**, then click **New Policy Set**.
+
+The **Policy Set** settings appear.
+
+In **Name**, enter the name of the Policy Set.
+
+In **Description**, enter a description of the Policy Set.
+
+In **Entity type**, select **Secret**.
+
+In **On what event should the Policy Set be evaluated**, select **On Save**.
+
+Click **Continue**.
+
+![](./static/add-a-policy-engine-step-to-a-secret-49.png)
+Existing Secrets are not automatically updated with policies. Policies are applied to Secrets only on a save, when they are created or updated.
+
+In **Policy evaluation criteria**, click **Add Policy**.
+
+**Select Policy** settings appear. Select the policy you want to use from the list.
+
+![](./static/add-a-policy-engine-step-to-a-secret-50.png)
+
+Select the severity and action you want to apply when the policy isn't adhered to. You can select one of the following:
+
+* **Warn & continue:** you will receive a warning if the policy is not met when the Secret is evaluated, but the Secret will be saved and you may proceed.
+* **Error and exit:** you will get an error and the Secret will not be saved if the policy is not met when the Secret is evaluated.
+
+![](./static/add-a-policy-engine-step-to-a-secret-51.png)
+
+Click **Apply**, and then click **Finish**.
+
+To enforce your Policy Set, toggle on the **Enforced** setting.
+
+![](./static/add-a-policy-engine-step-to-a-secret-52.png)
+### Step 3: Apply the Policy to a Secret
+
+After you have created your Policy Set and added your policies to it, apply the policy to a Secret.
+
+Let's look at an example.
+
+You can add a Secret from any module in your Project in Project Setup, in your Organization, or in Account Resources.
+
+In Account Resources, click **Secrets**.
+
+Click **New Secret**, and then click **Text**.
+
+Enter all the required fields and click **Save and Continue**.
+
+Based on your selection in the Policy evaluation criteria, you will either receive a warning or an error.
+
+### See also
+
+* [Harness Policy As Code Overview](../../feature-flags/2-ff-using-flags/8-harness-policy-engine.md)
+
diff --git a/docs/platform/14_Policy-as-code/disable-a-policy-set.md b/docs/platform/14_Policy-as-code/disable-a-policy-set.md
new file mode 100644
index 00000000000..4a401eae82d
--- /dev/null
+++ b/docs/platform/14_Policy-as-code/disable-a-policy-set.md
@@ -0,0 +1,40 @@
+---
+title: Enable or Disable a Policy Set
+description: Disable a Policy Set by locating the Policy Set and toggling the Enforced setting to off.
+# sidebar_position: 2
+helpdocs_topic_id: 6lxxd5j8j5
+helpdocs_category_id: zoc8fpiifm
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+By default, a new Harness Policy Set is disabled. This default prevents someone from enforcing a Policy Set and accidentally impacting Pipelines and other Harness resources.
+
+In some cases, you might have an enabled Policy Set and need to disable it.
For example, if a Policy Set is enabled and a Pipeline or other resource unexpectedly fails the Policy Set's evaluation, you will see a failure like this:
+
+![](./static/disable-a-policy-set-53.png)
+You can contact your Harness account admin to resolve the issue or, if the Policy Set is in error, you can disable it by locating the Policy Set and toggling the **Enforced** setting to off:
+
+![](./static/disable-a-policy-set-54.png)
+### Before you begin
+
+* [Harness Policy As Code Overview](harness-governance-overview.md)
+* [Harness Policy As Code Quickstart](harness-governance-quickstart.md)
+
+### Step 1: Locate the Policy Set
+
+In your Harness account/Org/Project, click **Policies**.
+
+Click **Policy Sets**.
+
+Toggle the **Enforced** setting to off.
+
+![](./static/disable-a-policy-set-55.png)
+### Notes
+
+* To prevent issues with team members' Pipelines and resources when creating a new Policy Set, a new Policy Set is disabled by default.
+
+### See also
+
+* [Add a Policy Step to a Pipeline](add-a-governance-policy-step-to-a-pipeline.md)
+
diff --git a/docs/platform/14_Policy-as-code/harness-governance-overview.md b/docs/platform/14_Policy-as-code/harness-governance-overview.md
new file mode 100644
index 00000000000..9f6e4b35318
--- /dev/null
+++ b/docs/platform/14_Policy-as-code/harness-governance-overview.md
@@ -0,0 +1,237 @@
+---
+title: Harness Policy As Code Overview
+description: Harness uses Open Policy Agent (OPA) to store and enforce policies for the Harness platform.
+sidebar_position: 10
+helpdocs_topic_id: 1d3lmhv4jl
+helpdocs_category_id: zoc8fpiifm
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+
+:::note
+Currently, this feature is behind the Feature Flag `OPA_PIPELINE_GOVERNANCE`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+:::
+
+This topic provides an overview of how Harness Policy As Code implements governance.
+
+
+:::note
+Looking for the quickstart?
See [Harness Policy As Code Quickstart](harness-governance-quickstart.md). + +::: + +### Before you begin + +Before learning about Harness Policy As Code, you should have an understanding of the following: + +* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) + +### How does Harness use OPA? + +Harness Policy As Code uses [Open Policy Agent (OPA)](https://www.openpolicyagent.org/) as the central service to store and enforce policies for the different entities and processes across the Harness platform. + +You can centrally define and store policies and then select where (which entities) and when (which events) they will be applied. + +Currently, you can define and store policies directly in the OPA service in Harness. + +Soon, you will be able to use remote Git or other repos (e.g. OCI-compatible registries) to define and store the policies used in Harness. + +### Governance Examples with Harness OPA + +#### Example A: Pipeline > On Save + + +> When a Pipeline is saved, there needs to be an Approval step before deploying to a production environment. + +* **Success:** you configure an Approval Step in the Pipeline and then proceed to configure a prod stage. When you save the Pipeline, the policy rule is evaluated and returns `success`. +* **Warning:** a warning message appears: `You need an Approval step. If you save the Pipeline and deploy, Harness will throw an error.` +* **Failure:** you configure a Pipeline with a Deploy stage that deploys to a prod environment without an Approval stage before it. When you save the Pipeline, Harness throws an error message indicating the rule was enforced and the Pipeline fails validation. + +#### Example B: Pipeline > On Run + + +> On deployment, I need my pod CPU and memory to be pre-defined. + +* **Success:** you deploy the Pipeline and during the dry run the pod CPU and memory have been defined and populated in the deployment manifest. As a result, the dry run progresses. 
Harness indicates that the rule was evaluated and the action was valid.
+* **Failure:** pod CPU and memory were not defined in the deployment manifest. As a result, the dry run fails. Harness indicates that a rule was enforced and the deployment is prevented.
+
+### Harness OPA Server
+
+The Harness OPA server is an OPA server managed by Harness.
+
+In Harness, you add Rego policies to a Policy Set and select the Harness entities (e.g. Pipelines) for evaluation. At that point, the policies are configured on the Harness OPA Server via a Kubernetes ConfigMap.
+
+When certain events happen (e.g. saving or running a Pipeline), Harness reaches out to the Harness OPA server to evaluate the action using the Policy Set.
+
+### Harness Policies
+
+A policy is a single rule. Policies are written as code in the OPA Rego policy language.
+
+A policy itself is just the rule, and it is not enforced anywhere on its own. When a policy is added to a Policy Set, it is associated with the entity event on which it will be enforced (On Save, On Run, etc).
+
+Policies are written against an input payload. The input payload is the JSON of the entity that the policy is being enforced against (Pipeline, Feature Flag, etc).
+
+Policies are saved within the hierarchy in the Harness platform: Account > Organizations > Projects.
+
+Policy scope is determined by whether the policy is created at the account, Organization, or Project level. A policy added at the account level can be applied to all entities in the Orgs and Projects in the account. A policy added at the Project level can be applied to entities in that Project alone.
+
+Policies can be tested individually, but they are not applied individually. To enforce a policy, it must be in a Policy Set.
+
+Policies are written in the OPA policy language, Rego.
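+To make this concrete, here is a minimal sketch of what a single Rego policy looks like. The package name and the rule it enforces are hypothetical, for illustration only, not a Harness sample:
+
+
+```
+package pipeline
+
+# Hypothetical rule: deny any Pipeline whose name contains "temp"
+deny[msg] {
+  contains(input.pipeline.name, "temp")
+  msg := sprintf("pipeline name '%s' must not contain 'temp'", [input.pipeline.name])
+}
+```
+When this policy is evaluated against a Pipeline's JSON payload, a match populates `deny` with the message; an empty `deny` means the entity passes.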
+
+**New to OPA Policy Authoring?** Use the following resources to learn Rego:
+
+* **Highly recommended:** a free online course on Rego from Styra founder and OPA co-creator Tim Hendricks: [OPA Policy Authoring](https://academy.styra.com/courses/opa-rego).
+* See [Policy Language](https://www.openpolicyagent.org/docs/latest/policy-language/) from OPA. The [Rego Cheatsheet](https://dboles-opa-docs.netlify.app/docs/v0.10.7/rego-cheatsheet/) is also helpful to have on hand.
+
+#### Policy Editor
+
+Harness policies are written and tested using the built-in policy editor.
+
+![](./static/harness-governance-overview-07.png)
+
+For an example of how to use the policy editor, see [Harness Policy As Code Quickstart](harness-governance-quickstart.md).
+
+#### Policy Library
+
+The Policy Editor includes a library of policies that cover many common governance scenarios.
+
+Sample policies are also useful references while writing your own policy. When you import an example, a sample payload is also loaded for testing the policy.
+
+![](./static/harness-governance-overview-08.png)
+
+You can use the library policies as a starting point to quickly create the policy you want.
+
+#### Select Input
+
+In the Policy Editor, you can select sample entities to test your policy on. For example, Pipelines.
+
+![](./static/harness-governance-overview-09.png)
+#### Testing Terminal
+
+The Testing Terminal lets you test the policy against real inputs while you're developing it. You can select input payloads from previous evaluations to test what will happen when your policy is evaluated.
+
+![](./static/harness-governance-overview-10.png)
+
+#### Policy Input Payload User Metadata
+
+The input payload contains user metadata for the user that initiated the event. Metadata includes roles, groups, etc, and is added to every evaluation automatically.
For example: + + +``` +{ + "action": null, + "date": "2022-05-05T20:41:23.538+0000", + "metadata": { + "action": "onsave", + "roleAssignmentMetadata": [ + { + "identifier": "role_assignment_NsFQM43RqnfQJmtPWx7s", + "managedRole": true, + "managedRoleAssignment": true, + "resourceGroupIdentifier": "_all_project_level_resources", + "resourceGroupName": "All Project Level Resources", + "roleIdentifier": "_project_viewer", + "roleName": "Project Viewer" + } + ], + "timestamp": 1651783283, + "type": "pipeline", + "user": { + "disabled": false, + "email": "john.doe@harness.io", + "externallyManaged": false, + "locked": false, + "name": "john.doe@harness.io", + "uuid": "U6h_smb9QTGimsYfNdv6VA" + }, + "userGroups": [] + }, +... +``` +This enables enforcing policies with advanced and attribute-based access control use cases. + +See [Harness Role-Based Access Control Overview](../4_Role-Based-Access-Control/1-rbac-in-harness.md). + +### Harness Policy Set + +You define a set of rules (policies) that are evaluated together in a Policy Set. + +Policies are only enforced once they are added to a Policy Set. In the Policy Set, policies are grouped and associated with a Harness entity and the event that will initiate evaluation. + +Each policy in the set is also assigned a severity that determines what will happen if the policy evaluation fails (Error and Exit, Warn and Continue). + +![](./static/harness-governance-overview-11.png) + +Policy Sets are stored to the Harness OPA server for a given entity type and event in Harness. The entity (Pipelines, etc) and event (On Save, On Run, etc) associated with a Policy Set determine when the policies in that set are evaluated. + +Policy Sets are saved at the Harness account, Organization, or Project level, and where they are saved determines the scope of the Policy Set. + +A Policy Set at the account level applies to all entities in the Orgs and Projects in the account. 
A Policy Set at the Project level only applies to entities in that Project alone.
+
+### Entities and Events
+
+When you create a policy, you identify the Harness entities where the policy is applied.
+
+For example, here's a policy that requires [Harness Approval](https://docs.harness.io/article/43pzzhrcbv-using-harness-approval-steps-in-cd-stages) steps:
+
+![](./static/harness-governance-overview-12.png)
+
+Currently, governance can be applied to the following Harness entities and events.
+
+Soon, policies will be applicable to more entities, such as Connectors, Services, Environments, Cloud Cost Management, and Infrastructure Provisioners.
+
+#### Pipelines
+
+Policies are evaluated against Harness Pipelines. The input payload is an expanded version of the Pipeline YAML, including expanded references and parameters at runtime.
+
+Policy Sets can be configured to be enforced automatically on these Pipeline events:
+
+* **On Save:** policies are evaluated when the Pipeline is saved.
+* **On Run:** Policy Sets are evaluated after the preflight checks.
+
+Severities:
+
+* **On error (Error and Exit):** a message is shown and the action does not complete.
+* **On warning (Warn and Continue):** a message is shown and the action is completed.
+
+The Policy step in a Pipeline also enables evaluating policies during Pipeline execution. See [Add a Governance Policy Step to a Pipeline](add-a-governance-policy-step-to-a-pipeline.md).
+
+#### Feature Flags
+
+Policies are evaluated against Harness [Feature Flags](../../feature-flags/1-ff-onboarding/1-cf-feature-flag-overview.md).
+
+Policy Sets can be configured to evaluate policies on these Feature Flag events:
+
+* A Feature Flag is saved.
+* A Flag is created.
+* A Flag is toggled on or off.
+
+See [Use Harness Policy As Code for Feature Flags](using-harness-policy-engine-for-feature-flags.md).
+
+#### Custom
+
+You can define a policy with the entity type Custom.
+
+The Custom entity type provides flexibility to enforce policy evaluations against any input payload during Pipeline execution. This is done using the Policy step. See [Add a Governance Policy Step to a Pipeline](add-a-governance-policy-step-to-a-pipeline.md).
+
+Custom entity types are open ended. There is no pre-set JSON schema that is used for Custom policies. The payload that the policy is evaluated against is determined by you (defined in the Policy step).
+
+### Policy and Policy Set Hierarchy and Inheritance
+
+Policies and Policy Sets are saved at the Harness Account, Organization, or Project level in Harness. Where the Policy or Policy Set is saved determines its scope.
+
+* Policies saved at the Account level can be added to Policy Sets in the Account, or in Orgs and Projects within that Account.
+* A policy at the Org level can only be added to Policy Sets in that Org and its Projects.
+* A policy at the Project level can only be added to Policy Sets in that Project.
+
+![](./static/harness-governance-overview-13.png)
+
+### See also
+
+* [Harness Policy As Code Quickstart](harness-governance-quickstart.md)
+* [Add a Policy Step to a Pipeline](add-a-governance-policy-step-to-a-pipeline.md)
+* [Harness Policy As Code Overview for Feature Flags](../../feature-flags/2-ff-using-flags/8-harness-policy-engine.md)
+
diff --git a/docs/platform/14_Policy-as-code/harness-governance-quickstart.md b/docs/platform/14_Policy-as-code/harness-governance-quickstart.md
new file mode 100644
index 00000000000..5686dc870dc
--- /dev/null
+++ b/docs/platform/14_Policy-as-code/harness-governance-quickstart.md
@@ -0,0 +1,436 @@
+---
+title: Harness Policy As Code Quickstart
+description: Learn how to use OPA policies in Harness to enforce governance across your DevOps processes.
+# sidebar_position: 2
+helpdocs_topic_id: jws2znftay
+helpdocs_category_id: w6r9f17pk3
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+
+:::note
+Currently, this feature is behind the Feature Flag `OPA_PIPELINE_GOVERNANCE`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+:::
+
+Harness Policy As Code provides governance using Open Policy Agent (OPA), Policy Management, and Rego policies. You can use Harness Policy As Code to ensure that Harness entities like Pipelines meet specific compliance requirements when specific events happen (On Save, On Run, etc).
+
+This quickstart shows you how to use the Harness OPA integration to enforce Pipeline governance.
+
+We'll show you how to use OPA's Rego policy language to create policies, test them against Pipelines, enforce them when saving and running Pipelines, and review all of the policy evaluations for a Pipeline.
+
+Let's get started.
+
+### Objectives
+
+You'll learn how to:
+
+1. Create and test Rego policies in Harness.
+2. Create a Policy Set using your new policy.
+3. Run a Pipeline that fails a policy evaluation.
+4. Run a Pipeline that passes a policy evaluation.
+5. Review policy evaluations for a Pipeline.
+
+### Before you begin
+
+* **What you don't need:** this quickstart is only intended to show you how Pipeline governance works, so we use a simple Pipeline that contains only an Approval stage. You do not need a Kubernetes cluster or other host as a CD deployment target or CI build farm. You do not need a running Harness Delegate.
+* Review [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) to establish a general understanding of Harness.
+* The [Harness Policy As Code Overview](harness-governance-overview.md) provides a concise overview of Harness Policy As Code.
+* **New to OPA Policy Authoring?** OPA policies are written in OPA's Rego policy language.
We'll provide the policy you need for this quickstart, but it's also helpful to have some familiarity with Rego before writing and reading policies.
+
+  **Highly recommended:** a free online course on Rego from Styra founder and OPA co-creator Tim Hendricks: [OPA Policy Authoring](https://academy.styra.com/courses/opa-rego).
+
+  See [Policy Language](https://www.openpolicyagent.org/docs/latest/policy-language/) from OPA. The [Rego Cheatsheet](https://dboles-opa-docs.netlify.app/docs/v0.10.7/rego-cheatsheet/) is also helpful to have on hand.
+
+
+:::warning
+When you create Policy Sets, they are applied to all matching entities (for example, Pipelines). Be careful that you do not create a Policy Set that might impact existing Pipelines unintentionally.
+
+For this quickstart, we'll create a new Harness Project and only apply Policy Sets to its Pipelines. We will not impact Pipelines outside of this Project.
+
+:::
+
+#### How does Harness use OPA?
+
+The Harness OPA server is an OPA server managed by Harness.
+
+In Harness, you add Rego policies to a Policy Set and select the Harness entities for evaluation (e.g. Pipelines). At that point, the policies are configured on the Harness OPA Server via a Kubernetes ConfigMap.
+
+When certain events happen (e.g. saving or running a Pipeline), Harness reaches out to the Harness OPA server to evaluate the action using the Policy Set.
+
+For more details, see [Harness Policy As Code Overview](harness-governance-overview.md).
+
+### Step 1: Create a Project
+
+In your Harness account, click **Home**.
+
+Click **Projects**, and then click **New Project**.
+
+![](./static/harness-governance-quickstart-56.png)
+
+Name the Project **Quickstart**, and click **Save and Continue**.
+
+In **Invite Collaborators**, click **Save and Continue**. You will automatically be added as a Project Admin.
+
+Your new Project is created.
+
+![](./static/harness-governance-quickstart-57.png)
+
+Next we'll add a Pipeline that we'll evaluate later using OPA policies.
+
+### Step 2: Create a Pipeline
+
+For this quickstart, we'll use a very simple Pipeline that only contains an [Approval stage](../9_Approvals/adding-harness-approval-stages.md).
+
+Open the new Harness Project you created and click **Deployments**.
+
+Click **Pipelines**, and then click **Create a Pipeline**.
+
+Name the Pipeline **Policy Example** and click **Start**.
+
+![](./static/harness-governance-quickstart-58.png)
+
+In Pipeline Studio, click **YAML** to switch to the YAML editor.
+
+![](./static/harness-governance-quickstart-59.png)
+
+Click **Edit YAML**.
+
+Replace the existing YAML with the following YAML:
+
+
+```
+pipeline:
+  name: Policy Example
+  identifier: Policy_Example
+  projectIdentifier: Quickstart
+  orgIdentifier: default
+  tags: {}
+  stages:
+    - stage:
+        name: Test
+        identifier: Test
+        description: ""
+        type: Approval
+        spec:
+          execution:
+            steps:
+              - step:
+                  type: ShellScript
+                  name: Echo
+                  identifier: Echo
+                  spec:
+                    shell: Bash
+                    onDelegate: true
+                    source:
+                      type: Inline
+                      spec:
+                        script: echo "hello"
+                    environmentVariables: []
+                    outputVariables: []
+                    executionTarget: {}
+                  timeout: 10m
+          serviceDependencies: []
+        tags: {}
+```
+
+
+:::note
+We use the **Quickstart** `projectIdentifier` and the **default** `orgIdentifier`. If you are in a different Org, you can replace **default** with the current Org Id. You can get Ids from the URL in your browser: `.../orgs/<org Id>/projects/<project Id>/...`.
+
+:::
+
+Click **Save**. The Pipeline is now saved.
+
+Click **Visual** and you can see it's a simple Pipeline with a manual [Approval stage](../9_Approvals/adding-harness-approval-stages.md) and one [Shell Script](https://docs.harness.io/article/k5lu0u6i1i-using-shell-scripts) step that echoes `hello`.
+
+![](./static/harness-governance-quickstart-60.png)
+
+Next, we'll create a policy that requires any Pipeline with an Approval stage to also contain an Approval step in that stage.
+
+### Step 2: Create and Test a Policy
+
+In this step, we'll quickly review a Rego policy, add the policy in Harness, and then test our Pipeline using the policy.
+
+#### Review: Rego Policies
+
+Harness uses the [Rego policy language](https://www.openpolicyagent.org/docs/latest/policy-language/) for defining rules that are evaluated by the OPA engine. Basically, you use Rego to answer queries such as "does the Pipeline have an Approval step?" and so on.
+
+Your Harness Pipelines and entities might be created using the Harness UI or YAML, but your Rego policies will validate the JSON of your Pipelines or other entities.
+
+Let's look at a simple example:
+
+
+```
+package pipeline
+
+# Deny pipelines that don't have an approval step
+# NOTE: Try removing the HarnessApproval step from your input to see the policy fail
+deny[msg] {
+  # Find all stages that are Approval stages ...
+  input.pipeline.stages[i].stage.type == "Approval"
+
+  # ... that are not in the set of stages with HarnessApproval steps
+  not stages_with_approval[i]
+
+  # Show a human-friendly error message
+  msg := sprintf("Approval stage '%s' does not have a HarnessApproval step", [input.pipeline.stages[i].stage.name])
+}
+
+# Find the set of stages that contain a HarnessApproval step
+stages_with_approval[i] {
+  input.pipeline.stages[i].stage.spec.execution.steps[_].step.type == "HarnessApproval"
+}
+```
+Basically, this policy checks whether a Pipeline has an Approval stage containing Approval steps. If the Pipeline does not, then the policy `deny` is enforced.
+
+The Pipeline fails the policy if:
+
+* `deny` is true.
+* `deny` is a non-empty string.
+* `deny` is a non-empty array of strings.
+
+The Pipeline passes the policy if:
+
+* `deny` is undefined.
+* `deny` is false.
+* `deny` is an empty string.
+* `deny` is an empty array of strings.
+
+You also must consider severity, but that is discussed later.
+
+#### Create the Policy
+
+In the Harness Project, in **Project Setup**, click **Policies**.
+
+Click **Policies**, and then click **New Policy**.
+
+![](./static/harness-governance-quickstart-61.png)
+
+Name the new policy **Quickstart**, and click **Apply**. The policy editor appears.
+
+![](./static/harness-governance-quickstart-62.png)
+
+In **Library**, in **Sample Policies**, click **Pipeline - Approval**.
+
+![](./static/harness-governance-quickstart-63.png)
+
+The policy appears:
+
+
+```
+package pipeline
+
+# Deny pipelines that don't have an approval step
+# NOTE: Try removing the HarnessApproval step from your input to see the policy fail
+deny[msg] {
+  # Find all stages that are Deployments ...
+  input.pipeline.stages[i].stage.type == "Deployment"
+
+  # ... that are not in the set of stages with HarnessApproval steps
+  not stages_with_approval[i]
+
+  # Show a human-friendly error message
+  msg := sprintf("deployment stage '%s' does not have a HarnessApproval step", [input.pipeline.stages[i].stage.name])
+}
+
+# Find the set of stages that contain a HarnessApproval step
+stages_with_approval[i] {
+  input.pipeline.stages[i].stage.spec.execution.steps[_].step.type == "HarnessApproval"
+}
+```
+Click **Use this Sample**.
+
+The policy is selected and sample input is provided.
+
+![](./static/harness-governance-quickstart-64.png)
+
+Let's edit this policy to test the Pipeline we created.
+
+The Pipeline uses an Approval stage, not a Deployment stage, so we need to change the policy.
+
+On lines 7 and 13, replace `Deployment` with `Approval`:
+
+
+```
+...
+  input.pipeline.stages[i].stage.type == "Approval"
+...
+  msg := sprintf("Approval stage '%s' does not have a HarnessApproval step", [input.pipeline.stages[i].stage.name])
+```
+Now we can test the policy using our Pipeline.
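+For reference, after both replacements the complete edited policy should read:
+
+
+```
+package pipeline
+
+# Deny pipelines that don't have an approval step
+# NOTE: Try removing the HarnessApproval step from your input to see the policy fail
+deny[msg] {
+  # Find all stages that are Approval stages ...
+  input.pipeline.stages[i].stage.type == "Approval"
+
+  # ... that are not in the set of stages with HarnessApproval steps
+  not stages_with_approval[i]
+
+  # Show a human-friendly error message
+  msg := sprintf("Approval stage '%s' does not have a HarnessApproval step", [input.pipeline.stages[i].stage.name])
+}
+
+# Find the set of stages that contain a HarnessApproval step
+stages_with_approval[i] {
+  input.pipeline.stages[i].stage.spec.execution.steps[_].step.type == "HarnessApproval"
+}
+```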
+
+#### Test the Policy against your Pipeline
+
+In **Input**, delete the sample and paste the YAML for the Pipeline we created. You can copy it from earlier in this doc.
+
+Since **Input** only accepts JSON and we pasted in YAML, you will see an error like `Unexpected token p in JSON at line 1`.
+
+Click the format button (</>) to convert the YAML to JSON.
+
+![](./static/harness-governance-quickstart-65.png)
+
+Harness Pipelines can be created in YAML, but Rego evaluates JSON. In the JSON, you can see the Fully Qualified Names (FQNs) of the labels your Rego references.
+
+![](./static/harness-governance-quickstart-66.png)
+
+The input payload contains user `metadata` for the user that initiated the event. Metadata includes roles, groups, and so on, and is added to every evaluation automatically. You can use it in policies that evaluate users.
+
+Click **Test**.
+
+In **Output**, you can see that the Pipeline failed the policy because it is missing an Approval step.
+
+![](./static/harness-governance-quickstart-67.png)
+
+We'll fix the Pipeline later in this quickstart. The important thing is that we know the policy works.
+
+Click **Save**.
+
+We tested the policy, but we still need to enforce it. To enforce a policy, you add it to a Policy Set.
+
+### Step 3: Create a Policy Set
+
+Harness evaluates policies in groups called Policy Sets. A single policy can be a member of many Policy Sets.
+
+For this quickstart, we'll create a Policy Set containing just our single policy.
+
+Click **Policy Sets**.
+
+![](./static/harness-governance-quickstart-68.png)
+
+In **Policy Sets**, click **New Policy Set**.
+
+![](./static/harness-governance-quickstart-69.png)
+
+Name the new Policy Set **Quickstart**.
+
+Now we can select the Harness entity for the Policy Set, and the event that triggers evaluation.
+
+In **Entity Type that this policy set applies to**, select **Pipeline**.
+
+In **On what event should the policy set be evaluated**, select **On Run**.
+
+![](./static/harness-governance-quickstart-70.png)
+
+Click **Continue**.
+
+Now we can select the policies for this Policy Set.
+
+In **Policy to Evaluate**, click **Add Policy**.
+
+In **Select Policy**, click **Project Quickstart**, and select the **Quickstart** policy you created.
+
+![](./static/harness-governance-quickstart-71.png)
+
+Be sure to select **Error and exit**.
+
+Click **Apply**.
+
+![](./static/harness-governance-quickstart-72.png)
+
+Click **Finish**. The new Policy Set is listed.
+
+![](./static/harness-governance-quickstart-73.png)
+
+This Policy Set will be evaluated on every Pipeline run.
+
+
+:::warning
+When you create Policy Sets, they are applied to all matching entities (for example, Pipelines). Be careful that you do not create a Policy Set that might impact existing Pipelines unintentionally.
+
+:::
+
+### Step 4: Evaluate a Pipeline on Run
+
+Now that we have a Policy Set, let's see it in action.
+
+Click **Pipelines** to navigate back to the Pipeline you created earlier. Remember, it does not have an Approval step, so it will fail the Policy Set evaluation.
+
+Click **Run**, and then **Run Pipeline**.
+
+The Policy Set is evaluated and the Pipeline execution fails.
+
+In the Pipeline execution, click **Policy Evaluations**.
+
+![](./static/harness-governance-quickstart-74.png)
+
+You can see the Policy Set that failed the Pipeline, and the reason for the failure. Clicking the Policy Set name takes you to the Policy Set.
+
+Let's fix the Pipeline and try again.
+
+### Step 5: Conform a Failed Pipeline and Rerun
+
+Open the Pipeline in Pipeline Studio: click the Pipeline name in the breadcrumbs, and then click **Pipeline Studio**.
+
+Switch to the YAML editor and click **Edit YAML**.
+
+Add a new line before the `- step:` for the **Shell Script** step.
+
+![](./static/harness-governance-quickstart-75.png)
+
+On the new line, paste the YAML for a [Manual Approval](https://docs.harness.io/article/43pzzhrcbv-using-harness-approval-steps-in-cd-stages) step:
+
+
+```
+          - step:
+              type: HarnessApproval
+              name: Approval
+              identifier: Approval
+              spec:
+                approvalMessage: Please review the following information and approve the pipeline progression
+                includePipelineExecutionHistory: true
+                approvers:
+                  userGroups:
+                    - account.admin
+                  minimumCount: 1
+                  disallowPipelineExecutor: false
+                approverInputs: []
+              timeout: 1d
+```
+Note the `userGroups` setting. This is the Harness User Group that is allowed to approve this step. You can edit this in the Visual editor if you want to add your own User Group(s), but it's not necessary for this quickstart.
+
+Click **Save**.
+
+Click **Run**, and then **Run Pipeline**.
+
+This time, the Pipeline passes the Policy Set evaluation and runs.
+
+The new Approval step appears during execution.
+
+![](./static/harness-governance-quickstart-76.png)
+
+Click **Approve** to finish running the Pipeline.
+
+### Step 6: Review Policy Evaluations
+
+You can review policy evaluations in a few places.
+
+#### Review in Pipeline Execution
+
+On the Pipeline deployment summary (**Execution History**), click **Policy Evaluations**.
+
+![](./static/harness-governance-quickstart-77.png)
+
+You can see Policy Set evaluations listed.
+
+![](./static/harness-governance-quickstart-78.png)
+
+#### Review in Governance Overview
+
+Click **Project Setup**, and then click **Policies**.
+
+Click **Evaluations**.
+
+You can see the evaluation you just performed.
+
+### Summary
+
+In this tutorial, you:
+
+1. Created and tested a Rego policy in Harness.
+2. Created a Policy Set from your new policy.
+3. Ran a Pipeline that failed a policy evaluation.
+4. Ran a Pipeline that passed a policy evaluation.
+5. Reviewed policy evaluations for a Pipeline.
+ +### See also + +* [Add a Policy Engine Step to a Pipeline](add-a-governance-policy-step-to-a-pipeline.md) +* [Harness Policy As Code Overview](harness-governance-overview.md) +* [Harness Policy As Code Overview for Feature Flags](../../feature-flags/2-ff-using-flags/8-harness-policy-engine.md) + diff --git a/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-14.png b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-14.png new file mode 100644 index 00000000000..8f8c2497a99 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-14.png differ diff --git a/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-15.png b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-15.png new file mode 100644 index 00000000000..fc03366de4a Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-15.png differ diff --git a/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-16.png b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-16.png new file mode 100644 index 00000000000..7655f12a6f9 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-16.png differ diff --git a/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-17.png b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-17.png new file mode 100644 index 00000000000..62e0f985207 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-17.png differ diff --git a/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-18.png b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-18.png new file 
mode 100644 index 00000000000..f4c946462ad Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-18.png differ diff --git a/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-19.png b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-19.png new file mode 100644 index 00000000000..987601d7458 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-19.png differ diff --git a/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-20.png b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-20.png new file mode 100644 index 00000000000..9255dfaeefb Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-20.png differ diff --git a/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-21.png b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-21.png new file mode 100644 index 00000000000..ab423f44325 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-connector-21.png differ diff --git a/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-pipeline-00.png b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-pipeline-00.png new file mode 100644 index 00000000000..76fc3906ee9 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-pipeline-00.png differ diff --git a/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-pipeline-01.png b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-pipeline-01.png new file mode 100644 index 00000000000..68803ee5bf7 Binary files /dev/null and 
b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-pipeline-01.png differ diff --git a/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-pipeline-02.png b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-pipeline-02.png new file mode 100644 index 00000000000..092373498be Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-pipeline-02.png differ diff --git a/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-pipeline-03.png b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-pipeline-03.png new file mode 100644 index 00000000000..7812eb53043 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-pipeline-03.png differ diff --git a/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-pipeline-04.png b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-pipeline-04.png new file mode 100644 index 00000000000..05316ad202c Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-pipeline-04.png differ diff --git a/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-pipeline-05.png b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-pipeline-05.png new file mode 100644 index 00000000000..f998e758888 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-pipeline-05.png differ diff --git a/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-pipeline-06.png b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-pipeline-06.png new file mode 100644 index 00000000000..515615560db Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/add-a-governance-policy-step-to-a-pipeline-06.png differ diff --git 
a/docs/platform/14_Policy-as-code/static/add-a-policy-engine-step-to-a-secret-46.png b/docs/platform/14_Policy-as-code/static/add-a-policy-engine-step-to-a-secret-46.png new file mode 100644 index 00000000000..8f8c2497a99 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/add-a-policy-engine-step-to-a-secret-46.png differ diff --git a/docs/platform/14_Policy-as-code/static/add-a-policy-engine-step-to-a-secret-47.png b/docs/platform/14_Policy-as-code/static/add-a-policy-engine-step-to-a-secret-47.png new file mode 100644 index 00000000000..fc03366de4a Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/add-a-policy-engine-step-to-a-secret-47.png differ diff --git a/docs/platform/14_Policy-as-code/static/add-a-policy-engine-step-to-a-secret-48.png b/docs/platform/14_Policy-as-code/static/add-a-policy-engine-step-to-a-secret-48.png new file mode 100644 index 00000000000..7655f12a6f9 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/add-a-policy-engine-step-to-a-secret-48.png differ diff --git a/docs/platform/14_Policy-as-code/static/add-a-policy-engine-step-to-a-secret-49.png b/docs/platform/14_Policy-as-code/static/add-a-policy-engine-step-to-a-secret-49.png new file mode 100644 index 00000000000..384092f1f19 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/add-a-policy-engine-step-to-a-secret-49.png differ diff --git a/docs/platform/14_Policy-as-code/static/add-a-policy-engine-step-to-a-secret-50.png b/docs/platform/14_Policy-as-code/static/add-a-policy-engine-step-to-a-secret-50.png new file mode 100644 index 00000000000..2f5949fc31f Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/add-a-policy-engine-step-to-a-secret-50.png differ diff --git a/docs/platform/14_Policy-as-code/static/add-a-policy-engine-step-to-a-secret-51.png b/docs/platform/14_Policy-as-code/static/add-a-policy-engine-step-to-a-secret-51.png new file mode 100644 index 00000000000..dbd784e87bc Binary files 
/dev/null and b/docs/platform/14_Policy-as-code/static/add-a-policy-engine-step-to-a-secret-51.png differ diff --git a/docs/platform/14_Policy-as-code/static/add-a-policy-engine-step-to-a-secret-52.png b/docs/platform/14_Policy-as-code/static/add-a-policy-engine-step-to-a-secret-52.png new file mode 100644 index 00000000000..db6b54542d1 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/add-a-policy-engine-step-to-a-secret-52.png differ diff --git a/docs/platform/14_Policy-as-code/static/disable-a-policy-set-53.png b/docs/platform/14_Policy-as-code/static/disable-a-policy-set-53.png new file mode 100644 index 00000000000..0fb5b4bd68a Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/disable-a-policy-set-53.png differ diff --git a/docs/platform/14_Policy-as-code/static/disable-a-policy-set-54.png b/docs/platform/14_Policy-as-code/static/disable-a-policy-set-54.png new file mode 100644 index 00000000000..12ea69daeac Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/disable-a-policy-set-54.png differ diff --git a/docs/platform/14_Policy-as-code/static/disable-a-policy-set-55.png b/docs/platform/14_Policy-as-code/static/disable-a-policy-set-55.png new file mode 100644 index 00000000000..76610703c90 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/disable-a-policy-set-55.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-overview-07.png b/docs/platform/14_Policy-as-code/static/harness-governance-overview-07.png new file mode 100644 index 00000000000..d34341e416c Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-overview-07.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-overview-08.png b/docs/platform/14_Policy-as-code/static/harness-governance-overview-08.png new file mode 100644 index 00000000000..1b8d90c200b Binary files /dev/null and 
b/docs/platform/14_Policy-as-code/static/harness-governance-overview-08.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-overview-09.png b/docs/platform/14_Policy-as-code/static/harness-governance-overview-09.png new file mode 100644 index 00000000000..a09e2824564 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-overview-09.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-overview-10.png b/docs/platform/14_Policy-as-code/static/harness-governance-overview-10.png new file mode 100644 index 00000000000..621ab521b58 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-overview-10.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-overview-11.png b/docs/platform/14_Policy-as-code/static/harness-governance-overview-11.png new file mode 100644 index 00000000000..86a27ab03e6 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-overview-11.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-overview-12.png b/docs/platform/14_Policy-as-code/static/harness-governance-overview-12.png new file mode 100644 index 00000000000..c1bb9b075d4 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-overview-12.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-overview-13.png b/docs/platform/14_Policy-as-code/static/harness-governance-overview-13.png new file mode 100644 index 00000000000..9af9151a931 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-overview-13.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-56.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-56.png new file mode 100644 index 00000000000..cfb0caac9d2 Binary files /dev/null and 
b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-56.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-57.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-57.png new file mode 100644 index 00000000000..57cb2dbe837 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-57.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-58.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-58.png new file mode 100644 index 00000000000..29867ca991a Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-58.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-59.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-59.png new file mode 100644 index 00000000000..9090684d726 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-59.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-60.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-60.png new file mode 100644 index 00000000000..85114daaf13 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-60.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-61.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-61.png new file mode 100644 index 00000000000..d14317bfa32 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-61.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-62.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-62.png new file mode 100644 index 00000000000..6a9f3c8ee23 Binary files /dev/null and 
b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-62.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-63.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-63.png new file mode 100644 index 00000000000..654c2dc68d2 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-63.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-64.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-64.png new file mode 100644 index 00000000000..821e9f5a968 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-64.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-65.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-65.png new file mode 100644 index 00000000000..cc23f92fed4 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-65.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-66.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-66.png new file mode 100644 index 00000000000..5931f8f9fef Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-66.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-67.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-67.png new file mode 100644 index 00000000000..b4fdde1a7ff Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-67.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-68.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-68.png new file mode 100644 index 00000000000..07ebf40998d Binary files /dev/null and 
b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-68.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-69.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-69.png new file mode 100644 index 00000000000..3f5c98173dd Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-69.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-70.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-70.png new file mode 100644 index 00000000000..72c4f65b8df Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-70.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-71.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-71.png new file mode 100644 index 00000000000..76c3de5fc52 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-71.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-72.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-72.png new file mode 100644 index 00000000000..5c962229f7c Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-72.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-73.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-73.png new file mode 100644 index 00000000000..90bed8cc897 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-73.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-74.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-74.png new file mode 100644 index 00000000000..a087e1f872d Binary files /dev/null and 
b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-74.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-75.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-75.png new file mode 100644 index 00000000000..d5834e89cda Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-75.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-76.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-76.png new file mode 100644 index 00000000000..303e96cf722 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-76.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-77.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-77.png new file mode 100644 index 00000000000..f8ff5fc8143 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-77.png differ diff --git a/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-78.png b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-78.png new file mode 100644 index 00000000000..ae65f619002 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/harness-governance-quickstart-78.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-22.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-22.png new file mode 100644 index 00000000000..7c1824fa215 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-22.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-23.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-23.png new file 
mode 100644 index 00000000000..f74802250af Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-23.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-24.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-24.png new file mode 100644 index 00000000000..c440a28237a Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-24.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-25.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-25.png new file mode 100644 index 00000000000..89b95c31d0f Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-25.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-26.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-26.png new file mode 100644 index 00000000000..741ddbefbd5 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-26.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-27.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-27.png new file mode 100644 index 00000000000..d8852ace761 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-27.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-28.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-28.png new file mode 100644 index 00000000000..f8008346dee Binary files /dev/null and 
b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-28.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-29.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-29.png new file mode 100644 index 00000000000..7c656644045 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-29.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-30.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-30.png new file mode 100644 index 00000000000..c0b90866cfb Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-30.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-31.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-31.png new file mode 100644 index 00000000000..fbb2be6568e Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-31.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-32.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-32.png new file mode 100644 index 00000000000..2ad632bf79d Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-32.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-33.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-33.png new file mode 100644 index 00000000000..8d70445b8d1 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-33.png differ 
diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-34.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-34.png new file mode 100644 index 00000000000..fb320bbbe77 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-34.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-35.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-35.png new file mode 100644 index 00000000000..48a1ac24464 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-35.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-36.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-36.png new file mode 100644 index 00000000000..06efde103fe Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-36.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-37.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-37.png new file mode 100644 index 00000000000..2f798d6ea53 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-37.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-38.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-38.png new file mode 100644 index 00000000000..cdabc512120 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-38.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-39.png 
b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-39.png new file mode 100644 index 00000000000..01dd5a30c2e Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-39.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-40.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-40.png new file mode 100644 index 00000000000..a79e03acf75 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-40.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-41.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-41.png new file mode 100644 index 00000000000..bc40ac87fb7 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-41.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-42.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-42.png new file mode 100644 index 00000000000..5cafc12f97b Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-42.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-43.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-43.png new file mode 100644 index 00000000000..c3aa1cdf357 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-43.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-44.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-44.png new file 
mode 100644 index 00000000000..c7c6de6a749 Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-44.png differ diff --git a/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-45.png b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-45.png new file mode 100644 index 00000000000..a443f70e7ae Binary files /dev/null and b/docs/platform/14_Policy-as-code/static/using-harness-policy-engine-for-feature-flags-45.png differ diff --git a/docs/platform/14_Policy-as-code/using-harness-policy-engine-for-feature-flags.md b/docs/platform/14_Policy-as-code/using-harness-policy-engine-for-feature-flags.md new file mode 100644 index 00000000000..43d44e68735 --- /dev/null +++ b/docs/platform/14_Policy-as-code/using-harness-policy-engine-for-feature-flags.md @@ -0,0 +1,252 @@ +--- +title: Use Harness Policy As Code for Feature Flags +description: This topic gives steps to create, update, and view policies and policy sets for Feature Flags. +# sidebar_position: 2 +helpdocs_topic_id: vb6ilyz194 +helpdocs_category_id: zoc8fpiifm +helpdocs_is_private: false +helpdocs_is_published: true +--- + + +:::note +Currently, this feature is behind the Feature Flags `OPA_PIPELINE_GOVERNANCE`, `CUSTOM_POLICY_STEP`, and `OPA_FF_GOVERNANCE`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. + +::: + +This topic describes how to create policies using Harness Policy As Code and apply them to your Feature Flags. Harness Policy As Code uses the Open Policy Agent (OPA) to store policies on the Harness platform. For more information about how OPA and Harness Policy As Code work, see [Harness Policy As Code Overview](../../feature-flags/2-ff-using-flags/8-harness-policy-engine.md). + +### Before you begin + +* Ensure you have read and understood [Harness Policy As Code Overview](../../feature-flags/2-ff-using-flags/8-harness-policy-engine.md). 
+* Ensure you have [created your project and environment](../../feature-flags/2-ff-using-flags/1-ff-creating-flag/4-create-a-feature-flag.md) in the Harness platform. +* Policies are written in OPA's authoring language, Rego. If you're new to Rego, use the following resources to learn it: + + Free online course on Rego from Styra co-founder and OPA co-creator Tim Hinrichs: [OPA Policy Authoring](https://academy.styra.com/courses/opa-rego). + + See [Policy Language](https://www.openpolicyagent.org/docs/latest/policy-language/) from OPA. The [Rego Cheat Sheet](https://dboles-opa-docs.netlify.app/docs/v0.10.7/rego-cheatsheet/) is also helpful to have on hand. + +### Step: Create and Apply a Policy + +To create and apply a policy, follow the steps below: + +#### Step 1: Create a Policy + +The first step in using policies with your Feature Flags is to create a policy. + +1. In the Harness Platform, click **Feature Flags** and select your project. + +![](./static/using-harness-policy-engine-for-feature-flags-22.png) + +2. In **Project Setup**, click **Policies**. + + +:::note +You can view an overview of your policies and how many times they have been evaluated on the [Overview](../../feature-flags/2-ff-using-flags/8-harness-policy-engine.md) page. + +::: + +![Screenshot of the Policies Overview page on the Harness Platform](./static/using-harness-policy-engine-for-feature-flags-23.png) + +3. Click **Policies**, then click **New Policy**. + + ![](./static/using-harness-policy-engine-for-feature-flags-24.png) + +4. On the **New Policy** page, enter the **Name** of the Policy and click **Apply**. This is the Policy name that appears on the Policy Overview page. + +5. Enter your own Rego policy, or use a pre-existing policy from the policy library. + + +:::note +Policies are written in Rego. For more information, see the [OPA documentation for Policy Language](https://www.openpolicyagent.org/docs/latest/policy-language/). 
+ +::: + +##### Use Your Own Rego Policy + +To use your own Rego policy: + +1. Enter your Rego policy into the policy editor. For example: + + +``` +package feature_flags + +# Deny flags that aren't booleans +deny[sprintf("feature flag '%s' isn't of type boolean", [input.flag.identifier])] { + input.flag.kind != "boolean" +} +``` +2. Click **Save**. + +![](./static/using-harness-policy-engine-for-feature-flags-25.png) + +##### Use an Existing Rego Policy from the Harness Policy Library + +To select a pre-existing policy: + +1. In the right-hand panel, click **Library**. +2. In the **Entity** drop-down menu, select **Flags**. + + ![](./static/using-harness-policy-engine-for-feature-flags-26.png) + +3. Select a pre-existing flag policy from the list. The Rego code populates the **Library** editor. +4. Click **Use this Sample**. + + ![](./static/using-harness-policy-engine-for-feature-flags-27.png) + +5. In **File Overwrite**, click **Confirm** to add the sample to your editor. + + ![](./static/using-harness-policy-engine-for-feature-flags-28.png) + +6. Click **Save**. + +##### Use the Testing Terminal to Check Your Code + +1. To check that your policy code is valid, test your policy against a previous **Policy Evaluation** in the **Testing Terminal**: + + +:::note +You can only test a policy in the Testing Terminal if you have previously run a Policy Evaluation. If you are creating your first-ever policy for the Project, continue to [Step 2: Add the Policy to a Policy Set](#step-2-add-the-policy-to-a-policy-set). After you have applied your first policy to a Feature Flag, you can then use the Testing Terminal. + +::: + +2. In the **Testing Terminal**, click **Select Input**. + + ![](./static/using-harness-policy-engine-for-feature-flags-29.png) + +3. Select **Feature Flag** as the **Entity Type**. **Event Type** and **Action** are automatically completed. + +4. Select the **Feature Flag** you want to test, then click **Apply**. 
This automatically populates the **Testing Terminal** using the details of the Feature Flag you selected. + + ![](./static/using-harness-policy-engine-for-feature-flags-30.png) + +5. Click **Test**. Depending on whether the updated policy successfully applies to the existing Feature Flag, you receive one of the following: +* **Input failed Policy Evaluation**: The Feature Flag doesn’t adhere to the updated policy. + + ![](./static/using-harness-policy-engine-for-feature-flags-31.png) + +* **Input succeeded Policy Evaluation**: The Feature Flag adheres to the updated policy. + +![](./static/using-harness-policy-engine-for-feature-flags-32.png) + +#### Step 2: Add the Policy to a Policy Set + +After you create an individual policy, you must add it to a Policy Set before you can apply it to your Feature Flags. + +1. In **Policies**, click **Policy Sets**, then click **New Policy Set**. + +![](https://lh4.googleusercontent.com/f2GbzvnKR5dw5iVaHRfr695eq16qFYya38-I9tSzDH37UZRPljOzGaLmGuGBLdtsWvtQzWDgL8uNRfmLjy-gsWepN1HKw8XXrgpAFo71o13aT0VAp-JJ3noiRvPlumo_-NfG0crI) + +2. In **Name**, enter the name of the Policy Set. +3. (Optional) In **Description**, enter a description of the Policy Set. +4. In **Entity type** that this policy applies to, select **Feature Flag**. +5. In **On what event should the Policy Set be evaluated**, select **On save**, then click **Continue**. + +![](./static/using-harness-policy-engine-for-feature-flags-33.png) + + +:::note +Policies are not automatically applied to existing Feature Flags. Policies are applied to a Feature Flag only when it is saved, that is, when it is created, updated, or toggled on or off. + +::: + +6. In **Policy evaluation criteria**, click **Add Policy**, then click your Project to display all the policies you created for that project. +7. Select the policy you want to use. 
In the drop-down menu next to the policy name, select the severity and action you want to apply when the policy isn’t adhered to: +* **Warn & continue**: If a policy isn’t met when the Feature Flag is evaluated, you receive a warning, but the flag is saved and you can continue. +* **Error and exit**: If a policy isn’t met when the Feature Flag is evaluated, you receive an error and the flag is not saved. + +![](./static/using-harness-policy-engine-for-feature-flags-34.png) + +8. Click **Apply**, then click **Finish**. +9. The Policy Set is automatically set to Enforced. To make it unenforced, toggle off the **Enforced** button. + + +:::note +You need to enforce the policy before it evaluates your Feature Flags. + +::: + +![](./static/using-harness-policy-engine-for-feature-flags-35.png) + +### Step: Apply a Policy to a Feature Flag + +After you have created your Policy Set and added your policies to it, apply the policy to a Feature Flag. + +1. In the Harness Platform, click **Feature Flags**. +2. Click **+ Flag**. +3. [Create a new Feature Flag](../../feature-flags/2-ff-using-flags/1-ff-creating-flag/4-create-a-feature-flag.md#before-you-begin). Make sure the flag [adheres to the policy you are testing](using-harness-policy-engine-for-feature-flags.md#step-1-create-a-policy). +4. Click **Save and Close**. The result is one of the following: +* **Success**: When you save the flag, the policy rule is evaluated, **Flag created** is returned, and the flag is saved. 
+* **Failure**: + + If you selected **Warn & continue** when creating the policy, the flag is saved but you receive the following warning message: + ![](./static/using-harness-policy-engine-for-feature-flags-36.png) + + + If you selected **Error and exit** when creating the policy, the flag doesn’t save and you receive the following error message: + +![](./static/using-harness-policy-engine-for-feature-flags-37.png) + +After you have successfully created a Policy Set and applied it to your feature flags, you can: + +* [Edit a Policy](using-harness-policy-engine-for-feature-flags.md#edit-a-policy) +* [Edit a Policy Set](using-harness-policy-engine-for-feature-flags.md#edit-a-policy-set) +* [View a History of Policy Evaluations](using-harness-policy-engine-for-feature-flags.md#view-a-history-of-policy-evaluations) + +### Edit a Policy + +After you have created a policy, you can edit it by renaming it or updating its rules in the policy editor. + +1. In Feature Flags, click **Policies**. +2. Click **Policies**, click the three dots next to the policy you want to change, and then click **Edit**. + + ![](./static/using-harness-policy-engine-for-feature-flags-38.png) + +3. To update the policy name, click **Edit Policy**. + +![](./static/using-harness-policy-engine-for-feature-flags-39.png) + +4. Enter the new name and click **Apply**. + +![](./static/using-harness-policy-engine-for-feature-flags-40.png) + +5. To update the policy rules, edit the Rego code in the policy editor. + +![](./static/using-harness-policy-engine-for-feature-flags-41.png) + +6. Test the updated policy in the Testing Terminal against a previous Policy Evaluation to ensure it is valid. For more information about how to do this, see [Use the Testing Terminal to Check Your Code](using-harness-policy-engine-for-feature-flags.md#step-1-create-a-policy). +7. When you've made all the changes, click **Save**. 
+ +![](./static/using-harness-policy-engine-for-feature-flags-42.png) + +### Edit a Policy Set + +You can edit a Policy Set to amend the name or add a new policy. + +1. In Feature Flags, click **Policies**. +2. Click **Policy Sets**, click the three dots next to the Policy Set you want to change, and then click **Edit**. + +![](./static/using-harness-policy-engine-for-feature-flags-43.png) + +3. The Policy Set's settings are displayed. Follow the steps in [Add the Policy to a Policy Set](using-harness-policy-engine-for-feature-flags.md#step-2-add-the-policy-to-a-policy-set) to edit the details. + +4. Click **Apply**, then click **Finish**. + +### View a History of Policy Evaluations + +You can view all failures, warnings, and successes of evaluations for each of your Policy Sets. + +1. In Feature Flags, click **Policies**. On the Overview page, you can view the total number of: +* Policy Sets. +* Policy Sets in effect. +* Policies across all Policy Sets. +* Policy evaluations. +* Passed, failed, and warning results from evaluations. +2. Click **Evaluation**. + +![](./static/using-harness-policy-engine-for-feature-flags-44.png) + +3. To view further details of a particular evaluation, click it to expand the relevant entry. 
+ +![](./static/using-harness-policy-engine-for-feature-flags-45.png) + +### See also + +* [Harness Policy As Code Overview](../../feature-flags/2-ff-using-flags/8-harness-policy-engine.md) + diff --git a/docs/platform/15_Audit-Trail/_category_.json b/docs/platform/15_Audit-Trail/_category_.json new file mode 100644 index 00000000000..3f6b059edc7 --- /dev/null +++ b/docs/platform/15_Audit-Trail/_category_.json @@ -0,0 +1 @@ +{"label": "Audit Trail", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Audit Trail"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "cgjcdl8vdn"}} \ No newline at end of file diff --git a/docs/platform/15_Audit-Trail/audit-trail.md b/docs/platform/15_Audit-Trail/audit-trail.md new file mode 100644 index 00000000000..6d9aa862b5e --- /dev/null +++ b/docs/platform/15_Audit-Trail/audit-trail.md @@ -0,0 +1,87 @@ +--- +title: View Audit Trail +description: Describes how to use the Audit Trail feature to track, debug, and investigate changes to the resources in your Harness accounts. +# sidebar_position: 2 +helpdocs_topic_id: r5ytrnpcgr +helpdocs_category_id: cgjcdl8vdn +helpdocs_is_private: false +helpdocs_is_published: true +--- + +With Audit Trail in Harness, you can view and track changes to your Harness resources within your Harness account. + +The audit data retention period is 2 years. Harness reserves the right to delete audit data after 2 years. You can request a longer retention period by contacting Harness, for example, if you require audit data for legal discovery. + +This topic shows you how to view Audit Trails for your Harness account. 
+ +### Before you begin + +* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Access Management (RBAC) Overview](../4_Role-Based-Access-Control/1-rbac-in-harness.md) + +### Step: View an Audit Trail + +You can view Audit Trail data at the Account, Org, or Project scope. This topic explains how to view Audit Trail data at the account scope. + +Account-scoped events are not displayed if you are viewing the Audit Trail at the Org or Project scope. + +In Harness, go to **Account Settings**. + +Click **Audit Trail**. + +The **Audit Trail** page appears, displaying a record for each event that changed the setup of your Harness account, Modules, or Harness entities. By default, Harness shows the Audit logs for the previous 7 days. + +It may take a few minutes for events to appear on the Audit Trail. Wait a minute and then refresh your browser if you don't notice an event right away. + +![](./static/audit-trail-00.png) + +For each event record, this view shows the: + +* Date and time (**Time**). +* The user who made the change (**User**). +* **Action** performed by the user, such as create, update, or delete. +* Harness entity affected (**Resource**). +* **Organization** corresponding to the affected entity. +* **Project** corresponding to the affected entity. +* **Module** corresponding to the affected entity. +* **Event Summary** with the YAML difference. + +![](./static/audit-trail-01.png) + +From here, you have multiple options to [Modify the Audit Trail View](#modify_the_audit_trail_view). + +#### Exclude Audit Trail Records + +You can view all the records, or filter the displayed records by selecting one of the following: + +* **Exclude Login Events** - Excludes login events, such as successful or unsuccessful logins and 2FA events, from the displayed records. 
+* **Exclude System Events** - Excludes system events from the displayed records. + +![](./static/audit-trail-02.png) + +These can be applied with or without your [custom filters](#option-add-a-filter) for Audit Trails. + +### Step: Set Date/Time Range + +You can restrict the Audit Trail's displayed events by date and time. + +Use the Date Picker to restrict events to a predefined date range, or to a custom date/time range: + +![](./static/audit-trail-03.png) + +Selecting **Custom Date Range** enables you to set arbitrary limits by date and time of day. + +### Option: Add a Filter + +To add a Filter, perform the following steps: + +1. In Harness, in **Account Settings**, click **Audit Trail**. +2. In **Account Audit Trail**, click the filter icon. + ![](./static/audit-trail-04.png) +3. In the **New Filter** settings, scope down the viewable audit events by selecting one or more of the following: + * User + * Organization + * Project + * Resource Type + * Action +4. In **Filter Name**, enter a name for your filter. +5. In **Who can view and edit the filter?**, select **Only Me** or **Everyone** based on the visibility you want to set for this filter. +6. Click **Save**. Your filter is now created. + ![](./static/audit-trail-05.png) + +Click **Apply** to view the Audit Events according to the filter you just created. + +![](./static/audit-trail-06.png) + +By default, the events of the last 7 days are returned for the filter. To view more results, adjust the date range accordingly. 
+ +![](./static/audit-trail-07.png) \ No newline at end of file diff --git a/docs/platform/15_Audit-Trail/static/audit-trail-00.png b/docs/platform/15_Audit-Trail/static/audit-trail-00.png new file mode 100644 index 00000000000..d2af875ccdd Binary files /dev/null and b/docs/platform/15_Audit-Trail/static/audit-trail-00.png differ diff --git a/docs/platform/15_Audit-Trail/static/audit-trail-01.png b/docs/platform/15_Audit-Trail/static/audit-trail-01.png new file mode 100644 index 00000000000..7e1bd9e522f Binary files /dev/null and b/docs/platform/15_Audit-Trail/static/audit-trail-01.png differ diff --git a/docs/platform/15_Audit-Trail/static/audit-trail-02.png b/docs/platform/15_Audit-Trail/static/audit-trail-02.png new file mode 100644 index 00000000000..6a33ccd23b5 Binary files /dev/null and b/docs/platform/15_Audit-Trail/static/audit-trail-02.png differ diff --git a/docs/platform/15_Audit-Trail/static/audit-trail-03.png b/docs/platform/15_Audit-Trail/static/audit-trail-03.png new file mode 100644 index 00000000000..48681e5faff Binary files /dev/null and b/docs/platform/15_Audit-Trail/static/audit-trail-03.png differ diff --git a/docs/platform/15_Audit-Trail/static/audit-trail-04.png b/docs/platform/15_Audit-Trail/static/audit-trail-04.png new file mode 100644 index 00000000000..b552dbdf037 Binary files /dev/null and b/docs/platform/15_Audit-Trail/static/audit-trail-04.png differ diff --git a/docs/platform/15_Audit-Trail/static/audit-trail-05.png b/docs/platform/15_Audit-Trail/static/audit-trail-05.png new file mode 100644 index 00000000000..6e7f56fb2d9 Binary files /dev/null and b/docs/platform/15_Audit-Trail/static/audit-trail-05.png differ diff --git a/docs/platform/15_Audit-Trail/static/audit-trail-06.png b/docs/platform/15_Audit-Trail/static/audit-trail-06.png new file mode 100644 index 00000000000..84d604bb22f Binary files /dev/null and b/docs/platform/15_Audit-Trail/static/audit-trail-06.png differ diff --git 
a/docs/platform/15_Audit-Trail/static/audit-trail-07.png b/docs/platform/15_Audit-Trail/static/audit-trail-07.png new file mode 100644 index 00000000000..48681e5faff Binary files /dev/null and b/docs/platform/15_Audit-Trail/static/audit-trail-07.png differ diff --git a/docs/platform/16_APIs/_category_.json b/docs/platform/16_APIs/_category_.json new file mode 100644 index 00000000000..73b09750e76 --- /dev/null +++ b/docs/platform/16_APIs/_category_.json @@ -0,0 +1 @@ +{"label": "APIs", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "APIs"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "pm96bpz4kf"}} \ No newline at end of file diff --git a/docs/platform/16_APIs/api-quickstart.md b/docs/platform/16_APIs/api-quickstart.md new file mode 100644 index 00000000000..13a22342ebc --- /dev/null +++ b/docs/platform/16_APIs/api-quickstart.md @@ -0,0 +1,296 @@ +--- +title: Harness API Quickstart +description: This document explains the steps to get started with Harness NG APIs. +# sidebar_position: 2 +helpdocs_topic_id: f0aqiv3td7 +helpdocs_category_id: pm96bpz4kf +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Everything you can create in the Harness Manager you can create using our REST APIs. + +This quickstart shows how to onboard Harness resources using the APIs. We'll add a new Project, Connector, and Pipeline using curl and the APIs. + + +:::note +**Looking for the API reference docs?** See [Harness API Reference Docs](https://harness.io/docs/api/). + +::: + +### Objectives + +You'll learn how to: + +* Authenticate with Harness via API using API keys. +* Onboard Harness Projects, Connectors, and Pipelines using the Harness API. + + +:::note +The API requests in this topic use curl, but Harness supports multiple languages, such as Go, Java, and Node.js. 
The [Harness API Reference Docs](https://harness.io/docs/api/) provide examples for all supported languages. + +::: + +### Before you begin + +* [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) +* [Access Management (RBAC) Overview](../4_Role-Based-Access-Control/1-rbac-in-harness.md) +* This quickstart walks you through adding the Harness API keys needed to authenticate with the API. To review Harness API keys, see [Add and Manage API Keys](../4_Role-Based-Access-Control/7-add-and-manage-api-keys.md). +* Make sure your Harness account has the required permissions to Create, Edit, Delete, and View the Harness resources you are creating via API. Any of the following default roles is sufficient: Account Administrator, Organization Admin, Project Admin. For more, see [Permissions Reference](../4_Role-Based-Access-Control/ref-access-management/permissions-reference.md). + +### Step 1: Create a Harness API Key and PAT + +The Harness API uses API keys to authenticate requests. You create the API key in your Harness Manager User Profile, add a Personal Access Token (PAT) to the key, and then use the PAT in your API requests. + + +:::note +For an overview of Harness API keys, see [Add and Manage API Keys](../4_Role-Based-Access-Control/7-add-and-manage-api-keys.md). + +::: + +Let's create the API key and its Personal Access Token. + +Here's a quick visual summary: + +![](./static/api-quickstart-00.gif) + +#### Create API Key + +In Harness, navigate to your **Profile**. + +![](./static/api-quickstart-01.png) + +In **My API Keys**, click **API Key**. The API Key settings appear. + +![](./static/api-quickstart-02.png) + +Enter a **Name**, **Description,** and **Tags** for your API key. + +Click **Save**. The new API Key is created. + +#### Create Personal Access Token + +Next, we'll add the Personal Access Token (PAT) that you will use when you make API requests. + +Click **Token** below the API Key you just created. 
+ +![](./static/api-quickstart-03.png) + +In the **New Token** settings, enter a Name, Description, and Tags. + +To set an expiration date for this token, select **Set Expiration Date** and enter the date in **Expiration Date (mm/dd/yyyy)**. + +Click **Generate Token**. + +Your new token is generated. + +![](./static/api-quickstart-04.png) + + +:::warning +Please copy and store your token value somewhere safe. You won't be able to see it again. + +Your API keys carry many privileges, so be sure not to share them in publicly accessible areas. Make sure you always use the updated API Key value after you rotate the token. For more details, see [Rotate Token](../4_Role-Based-Access-Control/7-add-and-manage-api-keys.md#rotate-token). +::: + + +#### Service Account Tokens + +You can also use a Service Account Token instead of a PAT. See [Add and Manage Service Accounts](../4_Role-Based-Access-Control/6-add-and-manage-service-account.md). + +### Step 2: Create a Project via API + +Now that we have our token, we can create a Harness Project. A Harness [Project](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts#organizations_and_projects) is a group of Harness modules and their Pipelines. + +To send the API request, you will need your Harness account Id and the token you created. + +You can find the account Id in every Harness URL, after `account`: + + +``` +https://app.harness.io/ng/#/account/{accountid}/home/get-started +``` +Open a terminal to run the API request. + +We're going to create this Project in the built-in **default** Organization. If you want to use a different Org, just replace the instances of **default** in the command. 
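If you plan to run several of the requests below from a shell, you can optionally export the two values you will reuse once, instead of editing each command. This is a convenience sketch; the values shown are placeholders, not real credentials:

```
# Placeholder values; substitute your own account Id and PAT.
export HARNESS_ACCOUNT_ID="your-account-id"
export HARNESS_API_KEY="your-pat-token"

# Later requests can then reference the variables instead of inline values.
PROJECTS_URL="https://app.harness.io/gateway/ng/api/projects?accountIdentifier=${HARNESS_ACCOUNT_ID}&orgIdentifier=default"
echo "$PROJECTS_URL"
```

With these set, `--header "x-api-key: ${HARNESS_API_KEY}"` can stand in for the `{api-key}` placeholder in each command.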
+ +Copy the following curl command and paste it into a text editor: + + +``` +curl --location --request POST 'https://app.harness.io/gateway/ng/api/projects?accountIdentifier={accountIdentifier}&orgIdentifier=default' \ +--header 'Content-Type: application/yaml' \ +--header 'x-api-key: {api-key}' \ +--data-raw 'project: + name: APISample + orgIdentifier: default + color: "#0063F7" + description: "" + identifier: APISample + tags: {} + modules: []' +``` +Replace `{accountIdentifier}` with your Harness account Id and `{api-key}` with the PAT you created. + +Paste the updated curl command into a terminal and run it. + +The successful response will be something like this: + + +``` +{"status":"SUCCESS","data":{"project":{"orgIdentifier":"default","identifier":"APISample","name":"APISample","color":"#0063F7","modules":["CD","CI","CV","CF","CE","CORE","PMS","TEMPLATESERVICE"],"description":"","tags":{}},"createdAt":1636410020671,"lastModifiedAt":1636410020671},"metaData":null,"correlationId":"3aa01bdd-e45c-4eb2-a65d-7673ec287fcc"} +``` +Open Harness to see your new Project (you might need to refresh the Project page): + +![](./static/api-quickstart-05.png) + +### Step 3: Create a Connector via API + +A Harness Connector contains the information necessary to integrate and work with 3rd party tools. + +Harness uses Connectors at Pipeline runtime to authenticate and perform operations with a 3rd party tool. + +Let's create a [Docker Registry Connector](../7_Connectors/ref-cloud-providers/docker-registry-connector-settings-reference.md) that connects to DockerHub anonymously. 
+ +Copy the following curl command: + + +``` +curl --location --request POST 'https://app.harness.io/gateway/ng/api/connectors?accountIdentifier={accountIdentifier}' \ +--header 'Content-Type: text/yaml' \ +--header 'x-api-key: {api-key}' \ +--data-raw 'connector: + name: dockerhub + identifier: dockerhub + description: "" + tags: {} + orgIdentifier: default + projectIdentifier: APISample + type: DockerRegistry + spec: + dockerRegistryUrl: https://index.docker.io/v2/ + providerType: DockerHub + auth: + type: Anonymous' +``` +Replace `{accountIdentifier}` with your Harness account Id and `{api-key}` with the PAT you created. + +Paste the updated curl command into a terminal and run it. + +The successful response will be something like this: + + +``` +{"status":"SUCCESS","data":{"connector":{"name":"dockerhub","identifier":"dockerhub","description":"","orgIdentifier":"default","projectIdentifier":"APISample","tags":{},"type":"DockerRegistry","spec":{"dockerRegistryUrl":"https://index.docker.io/v2/","providerType":"DockerHub","auth":{"type":"Anonymous"},"delegateSelectors":[]}},"createdAt":1636476303660,"lastModifiedAt":1636476303657,"status":null,"activityDetails":{"lastActivityTime":1636476303657},"harnessManaged":false,"gitDetails":{"objectId":null,"branch":null,"repoIdentifier":null,"rootFolder":null,"filePath":null},"entityValidityDetails":{"valid":true,"invalidYaml":null}},"metaData":null,"correlationId":"fab579bc-bc6f-46d1-95be-d6ed02844cd4"} +``` +Take a look at your new Connector in Harness: + +![](./static/api-quickstart-06.png) + +### Step 4: Create a Pipeline + +A CD Pipeline is an end-to-end process that delivers a new version of your software. + +Let's create a simple CD Pipeline that contains a Shell Script step that echoes "hello world!". 
+ +The Pipeline uses [Runtime Inputs](../20_References/runtime-inputs.md) (`<+input>`) for most settings. + +Copy the following curl command: + + +``` +curl --location --request POST 'https://app.harness.io/gateway/pipeline/api/pipelines?accountIdentifier={accountIdentifier}&orgIdentifier=default&projectIdentifier=APISample' \ +--header 'Content-Type: application/yaml' \ +--header 'x-api-key: {api-key}' \ +--data-raw 'pipeline: + name: apiexample + identifier: apiexample + projectIdentifier: APISample + orgIdentifier: default + tags: {} + stages: + - stage: + name: demo + identifier: demo + description: "" + type: Deployment + spec: + serviceConfig: + serviceRef: <+input> + serviceDefinition: + type: Kubernetes + spec: + variables: [] + infrastructure: + environmentRef: <+input> + infrastructureDefinition: + type: KubernetesDirect + spec: + connectorRef: <+input> + namespace: <+input> + releaseName: release-<+INFRA_KEY> + allowSimultaneousDeployments: false + execution: + steps: + - step: + type: ShellScript + name: shell + identifier: shell + spec: + shell: Bash + onDelegate: true + source: + type: Inline + spec: + script: echo "hello world!" + environmentVariables: [] + outputVariables: [] + executionTarget: {} + timeout: 10m + rollbackSteps: [] + tags: {} + failureStrategies: + - onFailure: + errors: + - AllErrors + action: + type: StageRollback' +``` +Replace `{accountIdentifier}` with your Harness account Id and `{api-key}` with the PAT you created. + +Paste the updated curl command into a terminal and run it. + +The successful response will be something like this: + + +``` +{"status":"SUCCESS","data":"apiexample","metaData":null,"correlationId":"6375a5cc-f1ce-4a82-9428-0cf6ead6140c"} +``` +Take a look at your new Pipeline in Harness: + +![](./static/api-quickstart-07.png) + +You're all done. 
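Each request above returns a JSON body with a top-level `status` field. When scripting these calls, it helps to check that field before moving on; here is a minimal sketch, assuming `python3` is available for JSON parsing (the response shown is a sample, not live output):

```
# A sample response body, shaped like the ones shown above.
RESPONSE='{"status":"SUCCESS","data":"apiexample","metaData":null,"correlationId":"example-id"}'

# Pull out the top-level status field and report if the call failed.
STATUS=$(printf '%s' "$RESPONSE" | python3 -c 'import json, sys; print(json.load(sys.stdin)["status"])')
if [ "$STATUS" != "SUCCESS" ]; then
  echo "API call failed: $RESPONSE" >&2
fi
echo "$STATUS"
```

In a real script, `RESPONSE` would be captured from the curl command itself, for example `RESPONSE=$(curl -s ...)`.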
+ +In this tutorial, you learned how to: + +* Authenticate with Harness via API using API keys. +* Onboard Harness Projects, Connectors, and Pipelines using the Harness API. + +To explore the Harness API, see [Harness API Reference Docs](https://harness.io/docs/api/). + +### Notes + +* **Rate Limiting:** The Harness API does not currently impose rate limits per account. Harness reserves the right to introduce or change limits to optimize performance for all API consumers. +* **Cross-origin Resource Sharing (CORS):** Harness APIs support CORS. This allows interactions between resources from different origins, which is normally prohibited to prevent malicious behavior. Each request must provide credentials (personal access tokens and service access tokens are both supported options). +* **Errors:** Harness uses conventional HTTP response codes to indicate the success or failure of an API request. + + + +| HTTP Status Code | Summary | +| --- | --- | +| 200 - OK | The request has been processed successfully on the server. | +| 400 - Bad Request | The request was not processed successfully due to incorrect syntax or missing parameters. | +| 401 - Unauthorized | The request was unauthorized due to an invalid API Key. | +| 402 - Request Failed | The request cannot be processed. | +| 403 - Forbidden | The API Key does not have permission to perform the request. | +| 404 - Not Found | The requested resource does not exist. | +| 500, 502, 503, 504 - Server Errors | The Harness server encountered an unexpected error. | + diff --git a/docs/platform/16_APIs/harness-rest-api-reference.md b/docs/platform/16_APIs/harness-rest-api-reference.md new file mode 100644 index 00000000000..baa308930da --- /dev/null +++ b/docs/platform/16_APIs/harness-rest-api-reference.md @@ -0,0 +1,71 @@ +--- +title: Use the Harness REST API +description: Use the Harness REST API to automate Harness operations. 
+# sidebar_position: 2
+helpdocs_topic_id: bn72tvbj6r
+helpdocs_category_id: pm96bpz4kf
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+The Harness REST API lets you automate Harness operations, including your builds, deployments, feature flags, and more.
+
+The Harness REST API reference docs are located at [https://harness.io/docs/api](https://harness.io/docs/api/).
+
+![](./static/harness-rest-api-reference-08.png)
+You can try the API within the reference docs, or anywhere else (Postman, for example), but you'll need an API key from your Harness account first.
+
+When using the API key within the API reference docs, your credentials are saved until the end of the browser session.
+
+### Step 1: Create a Harness API Key and PAT
+
+The Harness API uses API keys to authenticate requests. You create the API key in your Harness Manager User Profile, add a Personal Access Token (PAT) to the key, and then use the PAT in your API requests.
+
+For an overview of Harness API keys, see [Add and Manage API Keys](../4_Role-Based-Access-Control/7-add-and-manage-api-keys.md). Let's create the API key and its Personal Access Token.
+
+Here's a quick visual summary:
+
+![](./static/harness-rest-api-reference-09.gif)
+
+#### Create API Key
+
+In Harness, navigate to your **Profile**.
+
+![](./static/harness-rest-api-reference-10.png)
+In **My API Keys**, click **API Key**. The API Key settings appear.
+
+![](./static/harness-rest-api-reference-11.png)
+Enter a **Name**, **Description**, and **Tags** for your API Key.
+
+Click **Save**. The new API Key is created.
+
+#### Create Personal Access Token
+
+Next, we'll add the Personal Access Token (PAT) that you will use when you make API requests.
+
+Click **Token** below the API Key you just created.
+
+![](./static/harness-rest-api-reference-12.png)
+In the **New Token** settings, enter a Name, Description, and Tags. 
+
+To set an expiration date for this token, select **Set Expiration Date** and enter the date in **Expiration Date (mm/dd/yyyy)**.
+
+Click **Generate Token**.
+
+Your new token is generated.
+
+![](./static/harness-rest-api-reference-13.png)
+Copy and store your token value somewhere safe. You won't be able to see it again.
+
+Your API keys carry many privileges, so be sure not to share them in publicly accessible areas. Make sure you always use the updated API Key value after you rotate the token. For more details, see [Rotate Token](../4_Role-Based-Access-Control/7-add-and-manage-api-keys.md#rotate-token).
+
+#### Service Account Tokens
+
+You can also use a Service Account Token instead of a PAT. See [Add and Manage Service Accounts](../4_Role-Based-Access-Control/6-add-and-manage-service-account.md).
+
+### Step 2: Use the API
+
+Now you're ready to use the Harness API in the reference docs or anywhere else.
+
+See [https://harness.io/docs/api](https://harness.io/docs/api/). 
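With the PAT in hand, every request authenticates by passing the token in the `x-api-key` header. The following sketch composes such a request as a dry run; the account ID, token value, and the `ng/api/projects` endpoint path are illustrative placeholders, so check the API reference for the exact endpoint you need:

```shell
# Hypothetical values -- substitute your real account ID and the PAT you generated.
ACCOUNT_ID="your_account_id"
HARNESS_API_KEY="pat.example.token"

# Build the curl invocation; the PAT travels in the x-api-key header.
CMD="curl -s --header \"x-api-key: ${HARNESS_API_KEY}\" \"https://app.harness.io/gateway/ng/api/projects?accountIdentifier=${ACCOUNT_ID}\""

# Printed as a dry run for review; eval "$CMD" would actually send the request.
echo "$CMD"
```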
+ diff --git a/docs/platform/16_APIs/static/api-quickstart-00.gif b/docs/platform/16_APIs/static/api-quickstart-00.gif new file mode 100644 index 00000000000..5f972aa4301 Binary files /dev/null and b/docs/platform/16_APIs/static/api-quickstart-00.gif differ diff --git a/docs/platform/16_APIs/static/api-quickstart-01.png b/docs/platform/16_APIs/static/api-quickstart-01.png new file mode 100644 index 00000000000..7ba1cd1ceff Binary files /dev/null and b/docs/platform/16_APIs/static/api-quickstart-01.png differ diff --git a/docs/platform/16_APIs/static/api-quickstart-02.png b/docs/platform/16_APIs/static/api-quickstart-02.png new file mode 100644 index 00000000000..5700db382bd Binary files /dev/null and b/docs/platform/16_APIs/static/api-quickstart-02.png differ diff --git a/docs/platform/16_APIs/static/api-quickstart-03.png b/docs/platform/16_APIs/static/api-quickstart-03.png new file mode 100644 index 00000000000..c3102e03904 Binary files /dev/null and b/docs/platform/16_APIs/static/api-quickstart-03.png differ diff --git a/docs/platform/16_APIs/static/api-quickstart-04.png b/docs/platform/16_APIs/static/api-quickstart-04.png new file mode 100644 index 00000000000..308cb34100b Binary files /dev/null and b/docs/platform/16_APIs/static/api-quickstart-04.png differ diff --git a/docs/platform/16_APIs/static/api-quickstart-05.png b/docs/platform/16_APIs/static/api-quickstart-05.png new file mode 100644 index 00000000000..d92010b7da6 Binary files /dev/null and b/docs/platform/16_APIs/static/api-quickstart-05.png differ diff --git a/docs/platform/16_APIs/static/api-quickstart-06.png b/docs/platform/16_APIs/static/api-quickstart-06.png new file mode 100644 index 00000000000..a8b92ac1791 Binary files /dev/null and b/docs/platform/16_APIs/static/api-quickstart-06.png differ diff --git a/docs/platform/16_APIs/static/api-quickstart-07.png b/docs/platform/16_APIs/static/api-quickstart-07.png new file mode 100644 index 00000000000..df0c308bee9 Binary files /dev/null and 
b/docs/platform/16_APIs/static/api-quickstart-07.png differ diff --git a/docs/platform/16_APIs/static/harness-rest-api-reference-08.png b/docs/platform/16_APIs/static/harness-rest-api-reference-08.png new file mode 100644 index 00000000000..88455a5f24b Binary files /dev/null and b/docs/platform/16_APIs/static/harness-rest-api-reference-08.png differ diff --git a/docs/platform/16_APIs/static/harness-rest-api-reference-09.gif b/docs/platform/16_APIs/static/harness-rest-api-reference-09.gif new file mode 100644 index 00000000000..5f972aa4301 Binary files /dev/null and b/docs/platform/16_APIs/static/harness-rest-api-reference-09.gif differ diff --git a/docs/platform/16_APIs/static/harness-rest-api-reference-10.png b/docs/platform/16_APIs/static/harness-rest-api-reference-10.png new file mode 100644 index 00000000000..7ba1cd1ceff Binary files /dev/null and b/docs/platform/16_APIs/static/harness-rest-api-reference-10.png differ diff --git a/docs/platform/16_APIs/static/harness-rest-api-reference-11.png b/docs/platform/16_APIs/static/harness-rest-api-reference-11.png new file mode 100644 index 00000000000..5700db382bd Binary files /dev/null and b/docs/platform/16_APIs/static/harness-rest-api-reference-11.png differ diff --git a/docs/platform/16_APIs/static/harness-rest-api-reference-12.png b/docs/platform/16_APIs/static/harness-rest-api-reference-12.png new file mode 100644 index 00000000000..c3102e03904 Binary files /dev/null and b/docs/platform/16_APIs/static/harness-rest-api-reference-12.png differ diff --git a/docs/platform/16_APIs/static/harness-rest-api-reference-13.png b/docs/platform/16_APIs/static/harness-rest-api-reference-13.png new file mode 100644 index 00000000000..308cb34100b Binary files /dev/null and b/docs/platform/16_APIs/static/harness-rest-api-reference-13.png differ diff --git a/docs/platform/17_Settings/_category_.json b/docs/platform/17_Settings/_category_.json new file mode 100644 index 00000000000..306a54afe0b --- /dev/null +++ 
b/docs/platform/17_Settings/_category_.json
@@ -0,0 +1 @@
+{"label": "Settings", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Settings"}, "customProps": {"position": 30, "helpdocs_category_id": "fe0577j8ie"}}
\ No newline at end of file
diff --git a/docs/platform/17_Settings/default-settings.md b/docs/platform/17_Settings/default-settings.md
new file mode 100644
index 00000000000..1546a989e86
--- /dev/null
+++ b/docs/platform/17_Settings/default-settings.md
@@ -0,0 +1,52 @@
+---
+title: Default Settings
+description: This topic explains the default settings.
+# sidebar_position: 2
+helpdocs_topic_id: k6ib32mh82
+helpdocs_category_id: fe0577j8ie
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Currently, this feature is behind the feature flag `NG_SETTINGS`. Contact Harness Support to enable the feature.
+
+Harness Default Settings lets you configure parameters in your Account, Org, or Project scope for specific Harness modules.
+
+This topic explains how to view and edit Default Settings for your modules.
+
+Harness supports the configuration of Default Settings for the following modules:
+
+* Core
+* Deployments
+* Builds
+
+### Required Permissions
+
+* Make sure you have the **view** and **edit** permissions for Default Settings.
+
+### Review
+
+Default Settings include module-specific configurable parameters that you can customize to suit your needs.
+
+For example, enabling/disabling a feature at a specific scope.
+
+You can group a set of parameters into settings.
+
+### View and Edit Default Settings
+
+This topic explains how to view and edit Default Settings at the Account scope.
+
+1. In your Harness Account, go to Account Resources.![](./static/default-settings-00.png)
+2. Click **Default Settings**. The **Account Default Settings** appear. 
+Harness onboards the module-specific settings in **Account Default Settings**.![](./static/default-settings-01.png)
+
+#### Allow Override
+
+* If you select **Allow Override** at the parent scope, your setting can be overridden at the child scope.
+* To keep the settings of a child scope the same as those of the parent scope, disable **Allow Override**. For example, do this if you want the same settings for all the Organizations and Projects within an Account.
+
+#### Restore to Default
+
+Harness has default values for the parameters in Default Settings. You can change these values as per your needs.
+
+When you change any default setting, you can change it back to the default value for that scope using the **Restore to Default** option.
+
diff --git a/docs/platform/17_Settings/static/default-settings-00.png b/docs/platform/17_Settings/static/default-settings-00.png
new file mode 100644
index 00000000000..6d79ca1683f
Binary files /dev/null and b/docs/platform/17_Settings/static/default-settings-00.png differ
diff --git a/docs/platform/17_Settings/static/default-settings-01.png b/docs/platform/17_Settings/static/default-settings-01.png
new file mode 100644
index 00000000000..99c615e2695
Binary files /dev/null and b/docs/platform/17_Settings/static/default-settings-01.png differ
diff --git a/docs/platform/18_Dashboards/_category_.json b/docs/platform/18_Dashboards/_category_.json
new file mode 100644
index 00000000000..2d384c2bfa1
--- /dev/null
+++ b/docs/platform/18_Dashboards/_category_.json
@@ -0,0 +1 @@
+{"label": "Dashboards", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Dashboards"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "id0hnxv6sg"}}
\ No newline at end of file
diff --git a/docs/platform/18_Dashboards/add-custom-fields.md b/docs/platform/18_Dashboards/add-custom-fields.md
new file mode 100644 
index 00000000000..c6acfceef51
--- /dev/null
+++ b/docs/platform/18_Dashboards/add-custom-fields.md
@@ -0,0 +1,111 @@
+---
+title: Add Custom Fields to Custom Dashboards
+description: This topic talks about how to add custom fields (dimensions and measures) to your dashboard.
+# sidebar_position: 2
+helpdocs_topic_id: i4mtqea5es
+helpdocs_category_id: id0hnxv6sg
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Dashboards provide the flexibility to add custom fields to the query. Using custom fields, you can build new ad hoc custom dimensions and measures in Explore. Adding custom fields to the query allows you to gain deeper insights into your data.
+
+This topic explains how to add custom fields (dimensions and measures) to your dashboard query and how to use them to improve your data analysis.
+
+### Before you begin
+
+* [Create Dashboards](create-dashboards.md)
+
+### Review: Scope of Custom Fields
+
+Custom fields are a combination of first-class dimensions and predefined functions like concat, contains, case, and so on. They can't be used to pull data from any third-party or external source.
+
+### Step: Add Custom Fields
+
+To get started with the custom fields, you need to select an Explore for your tile.
+
+1. Create a Dashboard. See [Step 1: Create a Dashboard](create-dashboards.md#step-1-create-a-dashboard).
+2. Add tiles to your Dashboard. See [Step 2: Add Tiles to a Dashboard](create-dashboards.md#step-2-add-tiles-to-a-dashboard).
+3. Select an Explore for your tile.
+4. Give your tile a name. This will be the name of the tile on the dashboard.
+5. In **Custom Fields**, click **Add**.![](./static/add-custom-fields-27.png)
+6. You can create the following types of custom fields:
+
+	* Custom Dimension
+	* Custom Measure
+	* Table Calculation
+	* For more information, see [Custom Field Types](https://connect.looker.com/library/document/adding-custom-fields?version=22.2#custom_field_types). 
+
+### Create Custom Dimension
+
+Perform the following steps to create a Custom Dimension.
+
+1. In **Custom Fields**, click **Add**, and then click **Custom Dimension**.
+2. In **Edit custom dimension**, in **Expression**, enter the expression for your dimension. For supported functions and operators, see [Functions and operators](https://docs.looker.com/exploring-data/creating-looker-expressions/looker-functions-and-operators).
+3. (Optional) Select the format for your dimension.
+4. In **Name**, enter a name for your dimension. The name will appear in Custom Fields to identify your dimension.![](./static/add-custom-fields-28.png)
+5. Once you're done, click **Save**.
+6. Once you have set up your query, click **Run**.
+7. Click **Save** to save the query as a tile on your dashboard.
+
+#### Examples: Custom Dimension
+
+Let's take a look at some custom dimension examples with the corresponding visualizations.
+
+##### Example 1: Filter Based on the String Data Type
+
+This custom field is a wildcard search on all cloud cost management accounts for the string `ce`. Any account, project, or subscription without this string is bucketed under **Others**. You can use custom fields to get more specific and granular data for your analysis.
+
+
+```
+case(when(matches_filter(${unified_table.aws_gcp_azure_account_project_subscription}, `%ce%`),
+  ${unified_table.aws_gcp_azure_account_project_subscription}),
+  "Others"
+)
+```
+![](./static/add-custom-fields-29.png)
+##### Example 2: Group Resources Across Your Environment
+
+This example shows how you can group resources across your cloud environment. 
+
+
+```
+case(
+  when(${unified_table.product} = "Amazon Simple Storage Service" OR ${unified_table.product} = "Cloud Storage", "Storage"),
+  when(${unified_table.product} = "Amazon DynamoDB" OR ${unified_table.product} = "Cloud SQL", "DB"),
+  when(${unified_table.product} = "Amazon Elastic Compute Cloud" OR ${unified_table.product} = "Compute Engine", "Compute"),
+  "Other"
+)
+```
+![](./static/add-custom-fields-30.png)
+### Create Custom Measure
+
+Perform the following steps to create a Custom Measure.
+
+1. In **Custom Fields**, click **Add**, and then click **Custom Measure**.
+2. In **Edit custom measure**, in **Field to measure**, select the field for which you want to create a measure. For example, Resource ID.
+3. Select the **Measure type**. For example, **Count distinct** or **List of unique values**.
+4. In **Name**, enter a name for your custom measure. The name will appear in Custom Fields to identify your measure.![](./static/add-custom-fields-31.png)
+5. (Optional) You can add filters to further narrow the results.
+6. Once you're done, click **Save**.
+7. Once you have set up your query, click **Run**.![](./static/add-custom-fields-32.png)
+8. Click **Save** to save the query as a tile on your dashboard.
+
+### Use Table Calculation
+
+Table calculations make it easy to create ad hoc metrics. They are similar to the formulas found in spreadsheet tools like Excel. Table calculations appear as green columns in the data table, rather than as blue columns (dimensions) or orange columns (measures).
+
+Table calculations can perform mathematical, logical (true/false), lexical (text-based), and date-based calculations on the dimensions, measures, and other table calculations in your query.
+
+Perform the following steps to use Table Calculation for your custom fields.
+
+In **Custom Fields**, click **Add**, and then click **Table Calculation**. 
For details, see [Using Table Calculation](https://connect.looker.com/library/document/using-table-calculations?version=22.2).
+
+### Next steps
+
+* [Create Visualizations and Graphs](create-visualizations-and-graphs.md)
+* [Create Conditional Alerts](create-conditional-alerts.md)
+* [Schedule and Share Dashboards](share-dashboards.md)
+* [Use Dashboard Actions](use-dashboard-actions.md)
+* [Download Dashboard Data](download-dashboard-data.md)
+
diff --git a/docs/platform/18_Dashboards/create-conditional-alerts.md b/docs/platform/18_Dashboards/create-conditional-alerts.md
new file mode 100644
index 00000000000..b0e21054d27
--- /dev/null
+++ b/docs/platform/18_Dashboards/create-conditional-alerts.md
@@ -0,0 +1,62 @@
+---
+title: Create Conditional Alerts
+description: This topic describes how to create conditional alerts for your dashboards.
+# sidebar_position: 2
+helpdocs_topic_id: ro0i58mvby
+helpdocs_category_id: id0hnxv6sg
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Conditional alerts let you trigger notifications when specific conditions are met or exceeded. Notifications can be sent to specific recipients at a desired frequency. The alert conditions use the dashboard filters that exist when the alert is created.
+
+* Alerts are set on dashboard tiles.
+* Dashboards check whether each alert's conditions have been met or exceeded based on the alert's frequency, and then notify users of this change.
+* To create alerts, your dashboard must be out of edit mode.
+
+### Before you begin
+
+* [Create Dashboards](create-dashboards.md)
+
+### Visual Summary
+
+The following video explains how to create conditional alerts for a dashboard:
+
+### Step: Create Conditional Alerts
+
+To create an alert on a dashboard tile, perform the following steps:
+
+1. Click the tile's bell icon.![](./static/create-conditional-alerts-16.png)
+2. The default alert title indicates which conditions need to be true for the alert to be triggered. 
If you want to rename your alert, enter a custom title.
+3. In the **Condition** drop-down, set the components that tell the dashboard how to check the tile data for changes and the kinds of changes that trigger an alert notification.
+The alert condition is represented by these components:
+	* The list of fields or table calculations that appear in the dashboard tile's visualization
+	* The change that the selected field, fields, table calculation, or table calculations must undergo to trigger the alert notification
+	* The magnitude of the change that would trigger the alert notification
+These conditions include:
+	* **Is greater than**
+	* **Is less than**
+	* **Is equal to**
+	* **Is greater than or equal to**
+	* **Is less than or equal to**
+If the query contains a date or time field, additional conditions are available:
+	* **Increases by**
+	* **Decreases by**
+	* **Changes by** (a combination of **Increases by** and **Decreases by**)
+4. In **Where to send it**, enter the email address(es) of the recipients.
+5. Set the frequency at which the dashboard will check your data for changes to send an alert notification (if the alert conditions are met). These are the available frequency options:
+	* **Monthly** on a specified **Day** of the month (the default is the **1st** of the month) at a specified **Time** (the default is **05:00**)
+	* **Weekly** on a specified **Day** of the week (the default is **Sun** for Sunday) at a specified **Time** (the default is **05:00**)
+	* **Daily** at a specified **Time** (the default is **05:00**)
+	* **Hourly** at a specified interval (the default is to check the data every hour) with specified **Start** and **End** times (the default is **05:00** and **17:00**). 
With hourly intervals you can have Looker check the data at these intervals:
+	+ **Hour**
+	+ **2 hours**
+	+ **3 hours**
+	+ **4 hours**
+	+ **6 hours**
+	+ **8 hours**
+	+ **12 hours**
+	* **Minutes** at a specified interval (the default is to check the data every 15 minutes) with specified **Start** and **End** times (the default is **05:00** and **17:00**). With minute-based intervals, you can have Looker check the data at these intervals:
+	+ **15 minutes**
+	+ **30 minutes**
+
+**Start** and **End** times are inclusive. For example, if you set **Check every** to **12 hours** with a **Start** time of **05:00** and an **End** time of **17:00**, the dashboard will check the data at 05:00 *and* 17:00.
+6. Click **Save Alert**.![](./static/create-conditional-alerts-17.png)
+7. Hover over the bell icon that appears on the dashboard tile. A numeric indicator shows how many alerts you have created for that tile.![](./static/create-conditional-alerts-18.png)
+
diff --git a/docs/platform/18_Dashboards/create-dashboards.md b/docs/platform/18_Dashboards/create-dashboards.md
new file mode 100644
index 00000000000..c8f545e7698
--- /dev/null
+++ b/docs/platform/18_Dashboards/create-dashboards.md
@@ -0,0 +1,94 @@
+---
+title: Create Dashboards
+description: This topic describes how to create a Dashboard.
+# sidebar_position: 2
+helpdocs_topic_id: ardf4nbvcy
+helpdocs_category_id: id0hnxv6sg
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+The Dashboard allows you to organize, explore, and present structured data logically. You can use this data to improve deployments and inform operations and business decisions.
+
+This topic describes how to create your own Dashboard.
+
+### Visual Summary
+
+The following video explains how to create a Dashboard:
+
+### Step 1: Create a Dashboard
+
+To create a Dashboard, you first need to create a Folder. Dashboards are created inside folders.
+
+1. In Harness, click **Dashboards**.
+2. 
In **Dashboards**, click **+ Dashboard**.
+
+   ![](./static/create-dashboards-06.png)
+
+3. In **About the Dashboard**, in **Folder**, select **Organization Shared Folder**.
+4. In **Name**, enter a name for your dashboard. For example, GCP.
+5. (Optional) In **Tags**, type a name for your tag and press Enter to create it, then click **Continue**.
+6. Click **Edit Dashboard**.
+
+   ![](./static/create-dashboards-07.png)
+
+7. Click **Add Tile**.
+
+   ![](./static/create-dashboards-08.png)
+
+### Step 2: Add Tiles to a Dashboard
+
+Once you create a dashboard, the next step is to add tiles and text to the dashboard. As you add tiles to a dashboard, CCM Dashboards automatically sizes them and places them at the bottom of the dashboard, but you can move and resize tiles however you like. You can also edit tiles after you've created them to adjust the names of the tiles, the visualizations, or the underlying queries.
+
+1. Click **Add Tile** from the top left of the dashboard pane and then click **Visualization**, or click the **Add Tile** button in the center of the dashboard pane.
+
+   ![](./static/create-dashboards-09.png)
+
+2. Select an Explore to get started.
+	* An Explore is a starting point for a query, designed to explore a particular subject area.
+	* The data shown in an Explore is determined by the dimensions and measures you select from the field picker.
+	+ **Dimension**: A dimension can be thought of as a group or bucket of data.
+	+ **Measure**: A measure is information about that bucket of data.
+
+   ![](./static/create-dashboards-10.png)
+
+3. Click the Explore that corresponds to the fields you want to include in your dashboard. For example, AWS.
+
+### Step 3: Choose an Explore to Build Your Query
+
+The next step is to understand how to build a query and how to pull data in the dashboard to see the details and gain deeper insights into your data. To get started, you need to select an Explore for your tile.
+
+1. Give your query a name. 
This will be the name of the tile on the dashboard.
+2. Select the filters for your query.
+3. Select the dimensions and measures for your query. In this example, AWS Account, AWS Region, and AWS Total Cost are selected.
+4. Configure your visualization options. For more information, see [Create Visualizations and Graphs](create-visualizations-and-graphs.md).
+5. Once you have set up your query, click **Run**.
+6. Click **Save** to save the query as a tile on your dashboard.
+
+   ![](./static/create-dashboards-11.png)
+
+7. Once you're done adding all the required tiles to your dashboard, click **Save**.![](./static/create-dashboards-12.png)
+
+### Option: Working with Folders
+
+You can create folders for organizing your dashboards.
+
+![](./static/create-dashboards-13.png)
+
+You can edit the names of the folders in the **Folders** page.
+
+![](./static/create-dashboards-14.png)
+
+You can move a dashboard between folders.
+
+![](./static/create-dashboards-15.png)
+
+### Next steps
+
+* [Create Visualizations and Graphs](create-visualizations-and-graphs.md)
+* [Create Conditional Alerts](create-conditional-alerts.md)
+* [Schedule and Share Dashboards](share-dashboards.md)
+* [Use Dashboard Actions](use-dashboard-actions.md)
+* [Download Dashboard Data](download-dashboard-data.md)
+* [Add Custom Fields](add-custom-fields.md)
+
diff --git a/docs/platform/18_Dashboards/create-visualizations-and-graphs.md b/docs/platform/18_Dashboards/create-visualizations-and-graphs.md
new file mode 100644
index 00000000000..84874b722ba
--- /dev/null
+++ b/docs/platform/18_Dashboards/create-visualizations-and-graphs.md
@@ -0,0 +1,61 @@
+---
+title: Create Visualizations and Graphs
+description: This topic explains how to create visualizations that best show off your data. 
+# sidebar_position: 2 +helpdocs_topic_id: n2jqctdt7c +helpdocs_category_id: id0hnxv6sg +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can add an eye-catching chart to any query result set on the Explore. The Dashboard keeps query details and visualization configuration data together, so when you share a query, recipients get the picture as well as the data. + +This topic explains how to create visualizations that best show off your data. + +### Before you begin + +* [Create Dashboards](create-dashboards.md) + +### Step: Choose a Visualization Type + +Once you’ve created and run your query, click the Visualization tab in the Explore to configure visualization options for the query. Use the chart buttons to pick the visualization type that’s right for the data. + +1. Create and run your query. +2. Click the **Visualization** tab to start configuring your visualization options. +3. Use the chart buttons to pick the visualization type that’s right for the data. For more information, see [Visualization types](https://docs.looker.com/exploring-data/visualizing-query-results/visualization-types). +![](./static/create-visualizations-and-graphs-19.png) + +### Step: Fine-Tune Your Visualizations + +You can customize a visualization to make the data more readable and to add visual styling, for example: + +* Customize visualizations with chart settings +* Include multiple visualization types on a single chart +* Create stacked charts with multiple visualization types + +#### Customize Visualizations with Chart Settings + +To see the visualization options available for a particular visualization type, click that type on the [Visualization types](https://docs.looker.com/exploring-data/visualizing-query-results/visualization-types) documentation page. + +1. In **Visualization**, click **Edit**. The edit options vary depending on the visualization type. +2. Click the **Plot** tab. 
For details, see [Visualization types](https://docs.looker.com/exploring-data/visualizing-query-results/visualization-types). + +#### Include Multiple Visualization Types on a Single Chart + +You can also create charts that include more than one visualization type: + +1. In **Visualization**, click **Edit**. The edit options vary depending on the visualization type. +2. Click the **Series** tab. +3. In the **Customizations** section, you’ll see an entry for each series in the chart. Click the arrow next to the series you want to change to display its customization options. +4. In the **Type** box, select the type of visualization to use for that series. +Charts with multiple series types always layer line and scatter series in front, then they layer area, column, and bar series. +You can alter the layering order of column, bar, and area series by changing the series’ positions in the data table and clicking the **Run** button. The leftmost series will layer on top and the rightmost series will layer on the bottom. + +#### Create Stacked Charts with Multiple Visualization Types + +You can include stacked series in a chart with multiple visualization types. All the series of the same type as the chart overall will be stacked together; series of other types will not stack. + +1. In **Visualization**, click **Edit**. The edit options vary depending on the visualization type. +2. Click the **Series** tab. +3. To create a stacked chart that uses multiple y-axes, drag any series to a different axis in the **Y** menu. The stacked series will appear together, but all other series can be moved independently. See [Creating stacked charts](https://docs.looker.com/exploring-data/visualizing-query-results#creating_stacked_charts_with_multiple_visualization_types). 
+ diff --git a/docs/platform/18_Dashboards/dashboard-best-practices.md b/docs/platform/18_Dashboards/dashboard-best-practices.md new file mode 100644 index 00000000000..ae7bb447004 --- /dev/null +++ b/docs/platform/18_Dashboards/dashboard-best-practices.md @@ -0,0 +1,24 @@ +--- +title: Best Practices For Building Dashboards +description: To create dashboards that are effective and efficient, you need to consider their performance. As our dashboards can load large amounts of data, building them for optimal performance will save you ti… +# sidebar_position: 2 +helpdocs_topic_id: qydl5ju9lx +helpdocs_category_id: id0hnxv6sg +helpdocs_is_private: false +helpdocs_is_published: true +--- + +To create dashboards that are effective and efficient, you need to consider their performance. As our dashboards can load large amounts of data, building them for optimal performance will save you time and energy.  + +Here are some best practices you can follow for building your dashboards: + +* **Data volume:** Be mindful of how much data you might return in your dashboards; remember that the more data returned, the more memory will be used in your dashboard settings. +* **Dashboard elements**: Limit the number of queries you use in a single dashboard to 25 or less, if possible. If you need to display more data, you can [create multiple dashboards](create-dashboards.md) and link to them, or you can concatenate similar measures into a single value visualization. +* **Dashboard settings**: As much as possible, avoid setting the auto-refresh to less than 15 minutes and don’t run on load if your dashboard uses filters. You can also make dashboard filters required to prevent users from running the dashboard without filters. +* **Use caching**: If you are interested in looking at historical data from the previous day using the same set of filters, you can reach out to us to enable caching for you. This helps avoid unnecessary querying and improves the response time of your dashboards. 
+* **Post-query processes**: Be aware that using a lot of post-query processing, such as merging results, custom fields, or table calculations, can slow down your dashboard. We recommend limiting the number of post-query processes to four. If you are using the same processes across multiple dashboards, you can have them hardcoded into your models; reach out to us if you'd like to do this.
+* **Pivoted dimensions**: Pivoting a lot of dimensions also uses a lot of memory when a dashboard is loaded. If the dimension you are pivoting has many unique values, there will be a column for each value. Instead of showing everything at once, we recommend filtering the dashboard to select the values you're most interested in comparing.
+* **Columns and rows**: Having a lot of columns and rows can also slow down your dashboards due to memory issues. Be mindful of how many you need and also filter at the dashboard level to reduce the number of results in an element.
+* **Shared filters**: By using shared filters across multiple tiles, you can reduce the total number of queries the dashboard runs, which can help speed it up.
+* **Testing the dashboard**: Always test your dashboard after you've updated it to make sure you don't miss any changes in its performance.
+
diff --git a/docs/platform/18_Dashboards/download-dashboard-data.md b/docs/platform/18_Dashboards/download-dashboard-data.md
new file mode 100644
index 00000000000..a610bc78aa6
--- /dev/null
+++ b/docs/platform/18_Dashboards/download-dashboard-data.md
@@ -0,0 +1,105 @@
+---
+title: Download Dashboard Data
+description: This page describes how to download content from By Harness Dashboards.
+# sidebar_position: 2
+helpdocs_topic_id: op59lb1pxv
+helpdocs_category_id: id0hnxv6sg
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This page describes how to download content (visualizations or data) from the **Dashboards**. 
You can download data using the following options: + +* Download Data from a Dashboard +* Download Data from a Dashboard Tile + +### Before you begin + +* [Create Dashboards](create-dashboards.md) +* [Create Visualizations and Graphs](create-visualizations-and-graphs.md) + +### Download Data from a Dashboard + +Perform the following steps to download data from a Dashboard. + +1. To download the entire dashboard, select **Download** from the dashboard’s three-dot menu.![](./static/download-dashboard-data-20.png) +2. Select PDF or CSV as your download format. + +#### Download a Dashboard as a PDF + +You can download your entire dashboard as a PDF. The PDF contains the dashboard title, any dashboard filters, all the dashboard tiles, and the time zone the dashboard was run in. The PDF also includes a timestamp showing when the dashboard was downloaded. + +1. In **Download AWS Cost Dashboard**, in **Format**, select **PDF**. + + ![](./static/download-dashboard-data-21.png) + +2. Select an option from the **Paper Size** drop-down menu. The **Fit Page to Dashboard** option is the default; it sizes the PDF to match the layout of the dashboard on the screen. Other paper size options size the PDF to match a standard paper size and fit the dashboard within it. + + Depending on the layout of the dashboard, large visualizations or groups of overlapping tiles may need to be resized to fit on a given page size. + + ![](./static/download-dashboard-data-22.png) + +3. If you select something other than **Fit Page to Dashboard** in the **Paper Size** drop-down, an **Orientation** option appears. You can choose to orient the dashboard in portrait or landscape. +4. Do not select **Expand tables to show all rows**. This option is relevant only for table visualizations. If selected, the PDF will show all the rows available in the table visualization, not just the rows displayed in the dashboard tile thumbnail. +5. 
Select or leave unselected **Arrange dashboard tiles in a single column**. If you select this option, the PDF displays dashboard tiles in a single vertical column. If you do not select this option, the dashboard tiles appear as they are arranged in the dashboard. +6. Click **Open in Browser** to see an image of the PDF in a new tab of your browser. This also downloads a PDF to the Download folder. +7. Click **Cancel** if you no longer want to download the dashboard. +8. Click **Download** to initiate the download. A new tab in your browser will open, showing the status of your download. + + ![](./static/download-dashboard-data-23.png) + + +#### Download a Dashboard as CSVs + +You can download all the tiles from your dashboard as a zipped collection of CSV files. + +1. In **Download AWS Cost Dashboard**, select **CSV** from the **Format** drop-down menu. + + ![](./static/download-dashboard-data-24.png) + +2. Click **Cancel** if you no longer want to download the dashboard. +3. Click **Download** to initiate the download of your zipped CSV collection. + +### Download Data from a Dashboard Tile + +Perform the following steps to download the data from a dashboard tile: + +1. Click the three-dot icon (Tile action) on the tile and click **Download data**.![](./static/download-dashboard-data-25.png) +2. Select the format for your download. Data can be downloaded from dashboard tiles in the following formats: + * TXT (tab-separated values) + * Excel spreadsheet (Excel 2007 or later) + * CSV + * JSON + * HTML + * Markdown + * PNG (image of visualization) + Depending on the format you select, some options in the **Advanced data options** menu may not be available. + + ![](./static/download-dashboard-data-26.png) + +3. (Optional) For more options, click the arrow next to **Advanced data options**. + 1. In **Results**, select **As displayed in the data table**. + 2. 
In **Data** **Values**, choose how you want the downloaded query results to appear: + + * If you choose **Unformatted**, special formatting is not applied to your query results, such as rounding long numbers or adding special characters that may have been put in place. + * If you choose **Formatted**, the data appears more similar to the Dashboard experience in Harness. + 3. In the Number of rows to include, choose how much data you want to download: + + * **Current results table**: Number of rows specified by the row limit + * **All Results**: All results returned by the query + * **Custom**: Custom number of rows +4. Once you’ve selected your options, click the **Download** button to download a file to your computer, or click **Open in Browser** to view the file in the browser. + +### See also + +You can choose to download the following By Harness CCM Dashboards: + +* [View AWS Cost Dashboard](https://docs.harness.io/article/u3yxrebj6r-aws-dashboard) +* [View AWS Reservation Efficiency Dashboard](https://docs.harness.io/article/o86lf6qgr2-aws-reservation-coverage-and-service-cost) +* [View Azure Cost Dashboard](https://docs.harness.io/article/n7vpieto0n-azure-cost-dashboard) +* [View GCP Dashboard](https://docs.harness.io/article/tk55quhfi4-gcp-dashboard) +* [View Cluster Cost Dashboard](https://docs.harness.io/article/uai4ud1ibi-cluster-cost-dashboard) +* [View Multi-cloud Cost Overview Dashboard](https://docs.harness.io/article/ff5f08g4v4-multi-cloud-cost-overview-dashboard) +* [Orphaned EBS Volumes and Snapshots Dashboard](https://docs.harness.io/article/itn49ytd8u-orphaned-ebs-volumes-and-snapshots-dashboard) +* [View AWS EC2 Inventory Cost Dashboard](https://docs.harness.io/article/xbekog2ith-view-aws-ec-2-inventory-cost-dashboard) +* [View AWS EC2 Instance Metrics Dashboard](https://docs.harness.io/article/mwhraec911-view-aws-ec-2-instance-metrics) + diff --git a/docs/platform/18_Dashboards/share-dashboards.md 
b/docs/platform/18_Dashboards/share-dashboards.md new file mode 100644 index 00000000000..b392fb32c7f --- /dev/null +++ b/docs/platform/18_Dashboards/share-dashboards.md @@ -0,0 +1,174 @@ +--- +title: Schedule and Share Dashboards +description: This topic describes how to schedule and share dashboards. +# sidebar_position: 2 +helpdocs_topic_id: 35gfke0rl8 +helpdocs_category_id: id0hnxv6sg +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Dashboards let you schedule immediate or recurring deliveries. This topic describes how to schedule and share dashboards. + +### Before you begin + +* [Create Dashboards](create-dashboards.md) +* Make sure the dashboard is not in edit mode. + +### Step: Schedule a Delivery + +Perform the following steps to schedule a delivery: + +1. Click the three-dot menu in the upper right of the dashboard and select **Schedule delivery**.![](./static/share-dashboards-00.png) +2. In **Schedule**, the top of the schedule window shows the name automatically given to the delivery. The name defaults to the dashboard’s name. To edit the delivery’s name, click the name (indicated by the dotted underscore) and make your edits.![](./static/share-dashboards-01.png) +3. In **Schedule**, the following options are available: + * Settings + * Filters + * Advanced options + +### Settings + +The Settings tab allows you to customize your delivery’s recurrence, destination, format, and more. + +#### Recurrence + +Customize the timing of your delivery in the **Recurrence** section. + +##### Send now + +If you select **Send now** from the **Recurrence** drop-down menu, a one-time delivery of the dashboard will be sent once you fill in the required fields and click the **Send now** button at the bottom of the window.
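**Hourly** and **Minutes** recurrences (described below) repeat within a **Start**/**End** window, and the end time is not inclusive. A minimal Python sketch of that interval arithmetic, purely to illustrate the documented behavior (this is not a Harness API):

```python
from datetime import datetime, timedelta

def delivery_times(start, end, interval_minutes):
    """Scheduled send times in [start, end): the End time is exclusive,
    so the last delivery is the final interval strictly before it."""
    times = []
    t = start
    while t < end:  # End is NOT inclusive
        times.append(t)
        t += timedelta(minutes=interval_minutes)
    return times

# Hourly between 12:00 a.m. and 11:00 p.m.: last send is 10:00 p.m.
day = datetime(2023, 1, 1)
hourly = delivery_times(day, day.replace(hour=23), 60)
print(hourly[-1].time())  # 22:00:00
```

With a 30-minute recurrence over the same window, the last delivery lands at 10:30 p.m.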
+ +##### Time-based and date-based schedules + +Select one of the following options from the **Recurrence** drop-down menu: + +* Monthly +* Weekly +* Daily +* Hourly +* Minutes +* Specific months +* Specific days + +The timing options change depending on the option you’ve chosen. + +The **Time**, **Start**, and **End** fields use a 24-hour clock. If the time you want is not available in the drop-down menu, click within the field and manually enter your desired time, such as 9:15, 15:37, and so on. + +**Hourly** and **Minutes** schedules repeat daily within the **Start** and **End** timeframe you set. The end time for **Hourly** and **Minutes** intervals is not inclusive. The last delivery will be sent at the last selected interval prior to the specified end time. For example, if a dashboard is scheduled **Hourly** between 12:00 a.m. and 11:00 p.m., it will be sent on the hour, every hour, from 12:00 a.m. to 10:00 p.m. Or, if a recurrence is every 30 minutes between 12:00 a.m. and 11:00 p.m., the last delivery will be sent at 10:30 p.m. + +#### Destination + +In **Email addresses**, enter the email address(es) of the recipients and press **Enter**. + +#### Format + +The **Format** field contains a drop-down menu of available formats: + +* **CSV zip file**: The unformatted data from the dashboard delivered as a collection of comma-separated values (CSV) files in a zipped directory. For deliveries to email, the ZIP file is delivered as an email attachment. +* **PDF**: An image of the dashboard as a single PDF file. The default layout displays tiles as they are arranged in the dashboard, but other layout and sizing options are available under **Advanced options**. For deliveries to email, the file is delivered as an email attachment. +* **PNG visualization**: An image of the dashboard as a single PNG file. The default layout displays tiles as they are arranged in the dashboard, but other layout options are available under **Advanced options**. 
For deliveries to email, the image appears inline within the body of the email. +![](./static/share-dashboards-02.png) + +### Filters + +The Filters tab in the schedule and send window shows any filters applied to the dashboard as well as their values. + +In this tab, you can edit the values for any existing filters applied to the dashboard and the new values will be applied to the delivery. The dashboard itself will not be affected. + +![](./static/share-dashboards-03.png) +### Advanced Options + +The **Advanced options** tab provides additional customization for your delivery. The options available depend on the selected format of your delivery. + +#### Custom Message + +Enter a message you would like included in the body of emails to recipients. + +#### Include Links + +Select this option to include in the data delivery emails a View full dashboard link that goes to the dashboard. + +#### Results + +This field is available only for CSV ZIP file formats of dashboard deliveries. It contains two options: With visualizations options applied or As displayed in the data table. You can choose one or the other. + +Choose the With visualizations options applied option to apply some of the visualization settings from the dashboard tiles to your dashboard delivery. This causes the files in your delivery to appear similar to table charts. Any of the following settings in the Plot, Series, and Formatting menus that are configured for a visualization will be applied to the data delivery: + +* Show Row Numbers +* Hide Totals +* Hide Row Totals +* Limit Displayed Rows to a maximum of 500 rows shown or hidden +* Show Full Field Name +* Custom labels for each column + +Choose the As displayed in the data table option to deliver the data as it appears in the data table of each dashboard tile’s Explore from here window. + +#### Values + +This field is available only for CSV ZIP file formats of dashboard deliveries. It contains two options: Formatted or Unformatted. 
You can choose one or the other. + +* Select Formatted if you want the data to appear similar to how it appears in the Explore experience in Dashboard, although some features (such as linking) aren’t supported by all file types. +* Select Unformatted if you do not want to apply any special formatting of your query results, such as rounding long numbers or adding special characters. This is often preferred when data is being fed into another tool for processing. + +#### Expand Tables to Show all Rows + +This option is available only for PDF formats of dashboard deliveries. + +Select the Expand tables to show all rows box to display all rows of any table visualizations in the dashboard — rather than just those rows that display in the dashboard tile thumbnails. + +#### Arrange Dashboard Tiles in a Single Column + +This option is available only for the PDF and PNG visualization formats of dashboard deliveries. + +Select the **Arrange dashboard tiles in a single column** box to format your PDF or your PNG visualization in a single column layout. This layout displays dashboard tiles in a single vertical column. + +#### Paper Size + +This option is available only for PDF formats of dashboard deliveries. + +You have the option to specify the optimal size and orientation of dashboard PDFs by selecting from the Paper size drop-down menu. + +#### Delivery Time Zone + +By default, dashboard uses the time zone associated with your account to determine when to send your data delivery. + +If you want to specify a different time zone, select the time zone from the drop-down menu. The time zone you select does not affect the data in your dashboard, just the timing of the delivery. + +![](./static/share-dashboards-04.png) +### Save a Schedule Delivery + +* If you set the **Recurrence** field to **Send now**, click the **Send now** button at the bottom of the window for a one-time delivery to the listed destination. 
+* If you set the Recurrence field to anything other than **Send now**, click the **Save** button to save your schedule. + +### Edit a Schedule + +You can edit only the schedules you have created. To edit a schedule: + +1. Click the three-dot menu at the top right of the dashboard. +2. Select **Schedule delivery** from the drop-down menu. +3. In **Schedules**, click the three-dot menu that applies to the schedule you would like to edit. +4. Choose **Edit** from the drop-down menu.![](./static/share-dashboards-05.png) +5. Make your edits and click **Save**. + +### Duplicate a Schedule + +To duplicate a schedule: + +1. Click the three-dot menu at the top right of the dashboard. +2. Select **Schedule delivery** from the drop-down menu. +3. In **Schedules**, click the three-dot menu that applies to the schedule you would like to duplicate. +4. Choose **Duplicate** from the drop-down menu. **Copy** is appended to the schedule name. +5. Make your edits and click **Save**. + diff --git a/docs/platform/18_Dashboards/static/add-custom-fields-27.png b/docs/platform/18_Dashboards/static/add-custom-fields-27.png new file mode 100644 index 00000000000..6030662d746 Binary files /dev/null and b/docs/platform/18_Dashboards/static/add-custom-fields-27.png differ diff --git a/docs/platform/18_Dashboards/static/add-custom-fields-28.png b/docs/platform/18_Dashboards/static/add-custom-fields-28.png new file mode 100644 index 00000000000..862c5dab6df Binary files /dev/null and b/docs/platform/18_Dashboards/static/add-custom-fields-28.png differ diff --git a/docs/platform/18_Dashboards/static/add-custom-fields-29.png b/docs/platform/18_Dashboards/static/add-custom-fields-29.png new file mode 100644 index 00000000000..0e3aafd30f3 Binary files /dev/null and b/docs/platform/18_Dashboards/static/add-custom-fields-29.png differ diff --git a/docs/platform/18_Dashboards/static/add-custom-fields-30.png b/docs/platform/18_Dashboards/static/add-custom-fields-30.png new file mode 100644 index 
00000000000..f32e4e79694 Binary files /dev/null and b/docs/platform/18_Dashboards/static/add-custom-fields-30.png differ diff --git a/docs/platform/18_Dashboards/static/add-custom-fields-31.png b/docs/platform/18_Dashboards/static/add-custom-fields-31.png new file mode 100644 index 00000000000..824c9b2c664 Binary files /dev/null and b/docs/platform/18_Dashboards/static/add-custom-fields-31.png differ diff --git a/docs/platform/18_Dashboards/static/add-custom-fields-32.png b/docs/platform/18_Dashboards/static/add-custom-fields-32.png new file mode 100644 index 00000000000..d2e32f5d834 Binary files /dev/null and b/docs/platform/18_Dashboards/static/add-custom-fields-32.png differ diff --git a/docs/platform/18_Dashboards/static/create-conditional-alerts-16.png b/docs/platform/18_Dashboards/static/create-conditional-alerts-16.png new file mode 100644 index 00000000000..1771e1b0021 Binary files /dev/null and b/docs/platform/18_Dashboards/static/create-conditional-alerts-16.png differ diff --git a/docs/platform/18_Dashboards/static/create-conditional-alerts-17.png b/docs/platform/18_Dashboards/static/create-conditional-alerts-17.png new file mode 100644 index 00000000000..0550c8affee Binary files /dev/null and b/docs/platform/18_Dashboards/static/create-conditional-alerts-17.png differ diff --git a/docs/platform/18_Dashboards/static/create-conditional-alerts-18.png b/docs/platform/18_Dashboards/static/create-conditional-alerts-18.png new file mode 100644 index 00000000000..d550919c74e Binary files /dev/null and b/docs/platform/18_Dashboards/static/create-conditional-alerts-18.png differ diff --git a/docs/platform/18_Dashboards/static/create-dashboards-06.png b/docs/platform/18_Dashboards/static/create-dashboards-06.png new file mode 100644 index 00000000000..f1f08a98c83 Binary files /dev/null and b/docs/platform/18_Dashboards/static/create-dashboards-06.png differ diff --git a/docs/platform/18_Dashboards/static/create-dashboards-07.png 
b/docs/platform/18_Dashboards/static/create-dashboards-07.png new file mode 100644 index 00000000000..16190f58b98 Binary files /dev/null and b/docs/platform/18_Dashboards/static/create-dashboards-07.png differ diff --git a/docs/platform/18_Dashboards/static/create-dashboards-08.png b/docs/platform/18_Dashboards/static/create-dashboards-08.png new file mode 100644 index 00000000000..ead4e83394a Binary files /dev/null and b/docs/platform/18_Dashboards/static/create-dashboards-08.png differ diff --git a/docs/platform/18_Dashboards/static/create-dashboards-09.png b/docs/platform/18_Dashboards/static/create-dashboards-09.png new file mode 100644 index 00000000000..a718796462e Binary files /dev/null and b/docs/platform/18_Dashboards/static/create-dashboards-09.png differ diff --git a/docs/platform/18_Dashboards/static/create-dashboards-10.png b/docs/platform/18_Dashboards/static/create-dashboards-10.png new file mode 100644 index 00000000000..0e81e867335 Binary files /dev/null and b/docs/platform/18_Dashboards/static/create-dashboards-10.png differ diff --git a/docs/platform/18_Dashboards/static/create-dashboards-11.png b/docs/platform/18_Dashboards/static/create-dashboards-11.png new file mode 100644 index 00000000000..cddbbce1d11 Binary files /dev/null and b/docs/platform/18_Dashboards/static/create-dashboards-11.png differ diff --git a/docs/platform/18_Dashboards/static/create-dashboards-12.png b/docs/platform/18_Dashboards/static/create-dashboards-12.png new file mode 100644 index 00000000000..2e6e09e8b04 Binary files /dev/null and b/docs/platform/18_Dashboards/static/create-dashboards-12.png differ diff --git a/docs/platform/18_Dashboards/static/create-dashboards-13.png b/docs/platform/18_Dashboards/static/create-dashboards-13.png new file mode 100644 index 00000000000..5df37e3ad19 Binary files /dev/null and b/docs/platform/18_Dashboards/static/create-dashboards-13.png differ diff --git a/docs/platform/18_Dashboards/static/create-dashboards-14.png 
b/docs/platform/18_Dashboards/static/create-dashboards-14.png new file mode 100644 index 00000000000..a504de37b38 Binary files /dev/null and b/docs/platform/18_Dashboards/static/create-dashboards-14.png differ diff --git a/docs/platform/18_Dashboards/static/create-dashboards-15.png b/docs/platform/18_Dashboards/static/create-dashboards-15.png new file mode 100644 index 00000000000..640cefee50e Binary files /dev/null and b/docs/platform/18_Dashboards/static/create-dashboards-15.png differ diff --git a/docs/platform/18_Dashboards/static/create-visualizations-and-graphs-19.png b/docs/platform/18_Dashboards/static/create-visualizations-and-graphs-19.png new file mode 100644 index 00000000000..5bcde84182d Binary files /dev/null and b/docs/platform/18_Dashboards/static/create-visualizations-and-graphs-19.png differ diff --git a/docs/platform/18_Dashboards/static/download-dashboard-data-20.png b/docs/platform/18_Dashboards/static/download-dashboard-data-20.png new file mode 100644 index 00000000000..8fba6ccaecd Binary files /dev/null and b/docs/platform/18_Dashboards/static/download-dashboard-data-20.png differ diff --git a/docs/platform/18_Dashboards/static/download-dashboard-data-21.png b/docs/platform/18_Dashboards/static/download-dashboard-data-21.png new file mode 100644 index 00000000000..b50ca8a3d91 Binary files /dev/null and b/docs/platform/18_Dashboards/static/download-dashboard-data-21.png differ diff --git a/docs/platform/18_Dashboards/static/download-dashboard-data-22.png b/docs/platform/18_Dashboards/static/download-dashboard-data-22.png new file mode 100644 index 00000000000..70fe828ea06 Binary files /dev/null and b/docs/platform/18_Dashboards/static/download-dashboard-data-22.png differ diff --git a/docs/platform/18_Dashboards/static/download-dashboard-data-23.png b/docs/platform/18_Dashboards/static/download-dashboard-data-23.png new file mode 100644 index 00000000000..5433a3e771e Binary files /dev/null and 
b/docs/platform/18_Dashboards/static/download-dashboard-data-23.png differ diff --git a/docs/platform/18_Dashboards/static/download-dashboard-data-24.png b/docs/platform/18_Dashboards/static/download-dashboard-data-24.png new file mode 100644 index 00000000000..30efcec38dd Binary files /dev/null and b/docs/platform/18_Dashboards/static/download-dashboard-data-24.png differ diff --git a/docs/platform/18_Dashboards/static/download-dashboard-data-25.png b/docs/platform/18_Dashboards/static/download-dashboard-data-25.png new file mode 100644 index 00000000000..b7c1e1c669b Binary files /dev/null and b/docs/platform/18_Dashboards/static/download-dashboard-data-25.png differ diff --git a/docs/platform/18_Dashboards/static/download-dashboard-data-26.png b/docs/platform/18_Dashboards/static/download-dashboard-data-26.png new file mode 100644 index 00000000000..0d757912e0b Binary files /dev/null and b/docs/platform/18_Dashboards/static/download-dashboard-data-26.png differ diff --git a/docs/platform/18_Dashboards/static/share-dashboards-00.png b/docs/platform/18_Dashboards/static/share-dashboards-00.png new file mode 100644 index 00000000000..7f4e7154a58 Binary files /dev/null and b/docs/platform/18_Dashboards/static/share-dashboards-00.png differ diff --git a/docs/platform/18_Dashboards/static/share-dashboards-01.png b/docs/platform/18_Dashboards/static/share-dashboards-01.png new file mode 100644 index 00000000000..c11acadf33f Binary files /dev/null and b/docs/platform/18_Dashboards/static/share-dashboards-01.png differ diff --git a/docs/platform/18_Dashboards/static/share-dashboards-02.png b/docs/platform/18_Dashboards/static/share-dashboards-02.png new file mode 100644 index 00000000000..c9b0b1ce816 Binary files /dev/null and b/docs/platform/18_Dashboards/static/share-dashboards-02.png differ diff --git a/docs/platform/18_Dashboards/static/share-dashboards-03.png b/docs/platform/18_Dashboards/static/share-dashboards-03.png new file mode 100644 index 
00000000000..f46b8febd5e Binary files /dev/null and b/docs/platform/18_Dashboards/static/share-dashboards-03.png differ diff --git a/docs/platform/18_Dashboards/static/share-dashboards-04.png b/docs/platform/18_Dashboards/static/share-dashboards-04.png new file mode 100644 index 00000000000..9023192a26f Binary files /dev/null and b/docs/platform/18_Dashboards/static/share-dashboards-04.png differ diff --git a/docs/platform/18_Dashboards/static/share-dashboards-05.png b/docs/platform/18_Dashboards/static/share-dashboards-05.png new file mode 100644 index 00000000000..aa083c836be Binary files /dev/null and b/docs/platform/18_Dashboards/static/share-dashboards-05.png differ diff --git a/docs/platform/18_Dashboards/static/use-dashboard-actions-33.png b/docs/platform/18_Dashboards/static/use-dashboard-actions-33.png new file mode 100644 index 00000000000..733cd6aec8a Binary files /dev/null and b/docs/platform/18_Dashboards/static/use-dashboard-actions-33.png differ diff --git a/docs/platform/18_Dashboards/use-dashboard-actions.md b/docs/platform/18_Dashboards/use-dashboard-actions.md new file mode 100644 index 00000000000..5f19b1a7403 --- /dev/null +++ b/docs/platform/18_Dashboards/use-dashboard-actions.md @@ -0,0 +1,32 @@ +--- +title: Use Dashboard Actions +description: This topic describes how to use different dashboard actions. +# sidebar_position: 2 +helpdocs_topic_id: y1oh7mkwmh +helpdocs_category_id: id0hnxv6sg +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes how to use different dashboard actions. The document uses [AWS Cost Dashboard](https://docs.harness.io/article/u3yxrebj6r-aws-dashboard) as an example. You can use **Dashboard actions** in the same way for other **By Harness Dashboards** too. 
For example, [GCP Cost Dashboard](https://docs.harness.io/article/tk55quhfi4-gcp-dashboard), [Azure Cost Dashboard](https://docs.harness.io/article/n7vpieto0n-azure-cost-dashboard), [Cluster Cost Dashboard](https://docs.harness.io/article/uai4ud1ibi-cluster-cost-dashboard), and so on. + +### Step: Use Dashboard Actions + +Perform the following steps to use Dashboard actions: + +1. In Harness, click **Dashboards**. +2. In **All Dashboards**, select **By Harness** and click **AWS Cost Dashboard**. +3. In AWS Cost Dashboard, click **Dashboard actions** (3-dot menu to the right of the filter button).![](./static/use-dashboard-actions-33.png)The **Dashboard actions** provide the following options: + + + +| **Option** | **Description** | +| --- | --- | +| Download | Downloads the dashboard in PDF or CSV format. See [Download Dashboard Data](download-dashboard-data.md). | +| Reset filters | Resets the filter to its default value. By default, AWS Cost Dashboard displays the last 30 days' data. | +| Each tile's time zone | Updates the time zone of the dashboard. The time zone applied to your dashboard can affect the results shown, because of slight differences in the exact hours used for time-based data. If you are interested in the data as it applies to a different region, change the time zone of your dashboard to reflect that region. You can choose one of the following options: + * Choose **Each tile’s time zone** to run all tiles in the time zone in which they were saved. + * Choose **Viewer time zone** to run all tiles in the time zone selected in your account settings. + * Choose any of the time zones listed in the drop-down to run all tiles in that time zone. After you select your time zone, click **Update** in the dashboard time zone window; the dashboard will update for the new time zone. Once you navigate away from the dashboard, the dashboard will return to its default time zone setting. 
| + diff --git a/docs/platform/19_Terraform/_category_.json b/docs/platform/19_Terraform/_category_.json new file mode 100644 index 00000000000..7939e0b9924 --- /dev/null +++ b/docs/platform/19_Terraform/_category_.json @@ -0,0 +1 @@ +{"label": "Terraform", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Terraform"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "7cude5tvzh"}} \ No newline at end of file diff --git a/docs/platform/19_Terraform/harness-terraform-provider.md b/docs/platform/19_Terraform/harness-terraform-provider.md new file mode 100644 index 00000000000..307490e5374 --- /dev/null +++ b/docs/platform/19_Terraform/harness-terraform-provider.md @@ -0,0 +1,253 @@ +--- +title: Harness Terraform Provider Quickstart +description: This topic shows how to get started with the Harness Terraform Provider. +# sidebar_position: 2 +helpdocs_topic_id: 7cude5tvzh +helpdocs_category_id: w6r9f17pk3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Terraform is an infrastructure as code (IaC) tool that allows you to build, change, and version infrastructure safely and efficiently.​ + +Harness Terraform Provider is a library that you can use to create Harness Infrastructure. You can administer and use Harness functionality from within your Terraform setup using Harness Terraform Provider. + +This quickstart shows you how to write your configurations in Terraform and provision your Harness resources using the Harness Terraform Provider. + +### Before you begin + +* [Introduction to Terraform](https://www.terraform.io/intro) +* [Terraform Registry](https://www.terraform.io/registry) +* [Terraform Configuration Language](https://www.terraform.io/language) + +### Prerequisites + +* You must have a Harness Account. +* You must have an admin setup for your Harness Account. 
+* You must have a Personal Access Token (PAT). +For detailed steps on how to generate a PAT, see [Create a Personal Access Token](../4_Role-Based-Access-Control/7-add-and-manage-api-keys.md#create-personal-access-token). + +### Important + +* Harness Terraform Provider lets you provision the following: + + Organizations + + Projects + + Permissions setup + + Connectors + + Secrets + + Pipelines +* You cannot provision users using Harness Terraform Provider. +You can provision users through SCIM using [Okta](../3_Authentication/6-provision-users-with-okta-scim.md), [OneLogin](../3_Authentication/7provision-users-and-groups-with-one-login-scim.md), or [Azure AD](../3_Authentication/8-provision-users-and-groups-using-azure-ad-scim.md). +* You cannot run or monitor your Pipelines using Harness Terraform Provider. + +### Why use Harness Terraform Provider? + +Harness Terraform Provider lets you use Terraform scripts to configure Harness. Through scripts, it supports the creation of all the key Harness objects, including Projects, Pipelines, Connectors, and Secrets. + +It thus lets you create a repeatable and scalable structure of Harness infrastructure. + +### Visual summary + +Here is a quick overview of Harness Terraform Provider: + +* You write your Terraform configuration in a `.tf` file. +You declare all the resources that represent your infrastructure objects in this file. +For more details, see [Configuration Language](https://www.terraform.io/language). +* Next, you initialize, plan, and apply your resources using the following commands: + + `terraform init` + + `terraform plan` + + `terraform apply` +* Once you confirm `terraform apply`, a state file, `terraform.tfstate`, is generated. +* Your resources are provisioned successfully.![](./static/harness-terraform-provider-00.png) + +### Install Harness Terraform Provider + +To install the Harness Terraform Provider, copy and paste the following code into your Terraform configuration. 
+ +Enter your Harness account ID in `account_id`. + +The account ID is in every URL when using Harness: + +`https://app.harness.io/ng/#/account/{accountid}/home/get-started` + +Enter your Personal Access Token (PAT) in `platform_api_key`. + +For detailed steps on how to generate a PAT, see [Create a Personal Access Token](../4_Role-Based-Access-Control/7-add-and-manage-api-keys.md#create-personal-access-token). + + +``` +terraform { + required_providers { + harness = { + source = "harness/harness" + version = "" + } + } + } +provider "harness" { + endpoint = "https://app.harness.io/gateway" + account_id = " AWSCodeDeployRole
and AWSCodeDeployDeployerAccess. DescribeRegions required also. | HTTPS: 443. | [AWS Managed (Predefined) Policies for AWS CodeDeploy](https://docs.aws.amazon.com/codedeploy/latest/userguide/auth-and-access-control-iam-identity-based-access-control.html#managed-policies) | +| AWS EC2 | Policy: AmazonEC2FullAccess. DescribeRegions required also. | HTTP: 80. HTTP: 443. TCP: 9090. | [Controlling Access to Amazon EC2 Resources](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UsingIAM.html) | +| AWS ELB, ALB, ECS | Policy for Elastic Load Balancer, Application Load Balancer, and Elastic Container Service. DescribeRegions required also. | Well-known ports: 25, 80, 443, 465, and 587. | [Amazon ECS Service Scheduler IAM Role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_IAM_role.html) | +| AWS S3 | Policy: AmazonS3ReadOnlyAccess. DescribeRegions required also. | HTTP: 443. | [Creating an IAM User in Your AWS Account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console) | +| Azure | Client (Application) and Tenant (Directory) IDs, and Key. | Windows VMs (WinRM ports): HTTP: 5985, HTTPS: 5986. | [Get application ID and authentication key](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal#get-application-id-and-authentication-key) | +| Bamboo | Username and password for account. | HTTP: 443. TCP: 8085. | [Bamboo permissions](https://confluence.atlassian.com/bamboo/bamboo-permissions-369296034.html) | +| Bugsnag | Data Access API Auth Token. | The Bugsnag Data Access API is exposed on the same TCP port as the dashboard, 49080. | [Data Access API Authentication](https://bugsnagapiv2.docs.apiary.io/#introduction/authentication) | +| Datadog | API Key. | HTTPS: 443. | [Open Ports](https://docs.datadoghq.com/agent/network/?tab=agentv6#open-ports) | +| Docker Registry | User permission level. | TCP: 8083. 
| [Permission levels](https://docs.docker.com/v17.09/datacenter/dtr/2.0/user-management/permission-levels/) | +| Dynatrace | Access token. | HTTPS: 443. | [Access tokens](https://www.dynatrace.com/support/help/get-started/introduction/why-do-i-need-an-access-token-and-an-environment-id/#anchor-access-tokens) | +| ELK Elasticsearch | User (Read permission) or Token Header and Token Value. | TCP: 9200. | [User authentication](https://www.elastic.co/guide/en/elastic-stack-overview/current/setting-up-authentication.html) | +| Github Repo | User account: repository owner. Organization account: read and write. | HTTP: 443. | [Permission levels for a user account repository](https://help.github.com/articles/permission-levels-for-a-user-account-repository/) [Repository permission levels for an organization](https://help.github.com/articles/repository-permission-levels-for-an-organization/) | +| Google Cloud Platform (GCP) | Policies:
  • Kubernetes Engine Admin.
  • Storage Object Viewer.
| SSH: 22. | [Understanding Roles](https://cloud.google.com/iam/docs/understanding-roles?_ga=2.123080387.-954998919.1531518087#curated_roles) | +| JFrog Artifactory | Privileged User: Read permission. | HTTP: 443. | [Managing Permissions](https://www.jfrog.com/confluence/display/RTF/Managing+Permissions) | +| Jenkins | Matrix-based: Read permission. Execute permission, if jobs are triggered from a Harness stage. | HTTPS: 443. | [Matrix-based security](https://www.jenkins.io/doc/book/security/managing-security/) | +| Kubernetes Cluster | One of the following:
  • Same cluster as the Kubernetes delegate. Use this option if you installed the Harness delegate in your cluster.
  • Username and password.
  • CA certificate, client certificate, and client key. Key passphrase and key algorithm are optional.
  • For OpenShift: Kubernetes service account token.
| Depends on where the cluster is hosted, such as GCP or AWS. | [Authenticating](https://kubernetes.io/docs/reference/access-authn-authz/authentication/) | +| Logz | Token-based. | HTTPS: 443. | [Announcing the Logz.io Search API](https://logz.io/blog/announcing-the-logz-io-search-api/) | +| OpenShift | Kubernetes service account token. | HTTPS: 443. | [Enabling Service Account Authentication](https://docs.openshift.com/container-platform/3.6/dev_guide/service_accounts.html#enabling-service-account-authentication) | +| New Relic | API key. | HTTPS: 443. | [Access to REST API keys](https://docs.newrelic.com/docs/apis/getting-started/intro-apis/access-rest-api-keys) | +| Nexus | User account with Repository View Privilege or read access for the repository. | TCP: 8081. | [Nexus Managing Security](https://help.sonatype.com/repomanager2/configuration/managing-security) | +| Tanzu Application Service (formerly Pivotal Cloud Foundry) | User account with Admin, Org Manager, or Space Manager role. The user account must be able to update spaces, orgs, and applications. | HTTP: 80 or 443. | [Orgs, Spaces, Roles, and Permissions](https://docs.pivotal.io/pivotalcf/2-2/concepts/roles.html#roles) | +| Prometheus | None. | Depends on where the Prometheus server is hosted. For example, on AWS, port 9090 might be required. | [Configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/) | +| SMTP | None. | TCP: 25. | | +| Splunk | User account with Read permissions on eventtypes objects. | TCP: 8089 for API. | [Set permissions for objects in a Splunk app](http://dev.splunk.com/view/webframework-developapps/SP-CAAAE88) | +| Sumo Logic | User account with access ID and key and query permissions. | HTTPS: 443. | [API Authentication](https://help.sumologic.com/APIs/General-API-Information/API-Authentication) | +| WinRM | User account in the same Active Directory domain as the Windows instances the connection uses. | HTTP: 5985. HTTPS: 5986 and 443. SSH: 22. 
| [Installation and Configuration for Windows Remote Management](https://docs.microsoft.com/en-us/windows/desktop/winrm/installation-and-configuration-for-windows-remote-management) | + diff --git a/docs/platform/20_References/renaming-entities-and-resources.md b/docs/platform/20_References/renaming-entities-and-resources.md new file mode 100644 index 00000000000..2a6ec7d46bc --- /dev/null +++ b/docs/platform/20_References/renaming-entities-and-resources.md @@ -0,0 +1,65 @@ +--- +title: Entity Name and Id Rules +description: This topic covers the rules about identical names and Ids for Harness entities. +# sidebar_position: 2 +helpdocs_topic_id: 7rsydu6iq2 +helpdocs_category_id: fb16ljb8lu +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Entities and Resources in Harness have names and Ids ([entity Identifier](harness-entity-reference.md)). For example, here's a GitHub Connector with the name **scm-hge** and the Id **scmhge**: + +![](./static/renaming-entities-and-resources-11.png) + +Entity Ids are used to refer to Harness entities in [Harness variable expressions](../12_Variables-and-Expressions/harness-variables.md). + +Ids are immutable once the entity has been created, but names can be changed at any time. 
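In YAML, the name and the Id appear as separate keys. A minimal sketch using the Connector above (only the keys relevant here are shown; the `type` value is illustrative):

```yaml
connector:
  name: scm-hge        # display name; can be renamed at any time
  identifier: scmhge   # entity Id; fixed once the Connector is created
  type: Github
```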
+ +There are some important rules to know: + +* [Entities of the same type in the same Project cannot use Identical Ids](renaming-entities-and-resources.md#entities-of-the-same-type-in-the-same-project-cannot-use-identical-ids) +* [You can use the Same Id in Different Orgs and Projects](renaming-entities-and-resources.md#you-can-use-the-same-id-in-different-orgs-and-projects) +* [Different Types of Entities can have Identical Ids](renaming-entities-and-resources.md#different-types-of-entities-can-have-identical-ids) + +### Entities of the same type in the same Project cannot use Identical Ids + +Entities of the same type in the same Project can have the same names but must have different Ids. For example, these secrets are both named **foo**, but they have different Ids: + +![](./static/renaming-entities-and-resources-12.png) + +If you try to add a Connector to a Project that already has a Connector with the same Id, you will get an error: + +![](./static/renaming-entities-and-resources-13.png) + +### You can use the Same Id in Different Orgs and Projects + +Entities in different Harness Orgs can use identical Ids, and so can entities in different Projects in the same Org. + +For information on Organizations and Projects, see [Organizations and Projects Overview](../1_Organizations-and-Projects/1-projects-and-organizations.md). + +For example, here are two GitHub Connectors with the same names and Ids but in different Harness Orgs: + +![](./static/renaming-entities-and-resources-14.png) + +You can also have identical entities in different Projects in the same Org: + +![](./static/renaming-entities-and-resources-15.png) + +### Different Types of Entities can have Identical Ids + +Two entities of the same type, like Connectors, cannot have identical Ids in the same Project. + +Two entities of different types, like Connectors and Pipelines, or even Pipelines and Stages, can have identical Ids in the same Project. 
+ +For example, in this Project there are four different types of entities with identical Ids: + +![](./static/renaming-entities-and-resources-16.png) + +### See also + +* [Organizations and Projects Overview](../1_Organizations-and-Projects/1-projects-and-organizations.md) +* [Entity Deletion Reference](entity-deletion-reference.md) +* [Entity Retention Policy](entity-retention-policy.md) +* [Entity Identifier Reference](entity-identifier-reference.md) + diff --git a/docs/platform/20_References/runtime-inputs.md b/docs/platform/20_References/runtime-inputs.md new file mode 100644 index 00000000000..c5c2dd70d03 --- /dev/null +++ b/docs/platform/20_References/runtime-inputs.md @@ -0,0 +1,145 @@ +--- +title: Fixed Values, Runtime Inputs, and Expressions +description: Most settings in Harness Pipelines allow you to use Fixed Values, Runtime Inputs, and Expressions. This topic describes each of these options. Fixed Values. Fixed Values are simply values that you en… +# sidebar_position: 2 +helpdocs_topic_id: f6yobn7iq0 +helpdocs_category_id: fb16ljb8lu +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Most settings in Harness Pipelines allow you to use Fixed Values, Runtime Inputs, and Expressions. + +![](./static/runtime-inputs-02.png) + +This topic describes each of these options. + +### Fixed Values + +Fixed Values are simply values that you enter manually when you configure a setting and that do not change at runtime. + +These are settings you don't need to change based on some other step or runtime operation. + +For example, here's a **Timeout** setting: + +![](./static/runtime-inputs-03.png) + +You can enter a value for this setting such as `10m 30s`. That value is fixed and nothing that happens at runtime will change it. + +### Runtime Inputs + +When you use Runtime Inputs, you are setting placeholders for values that will be provided when you start a Pipeline execution. 
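In Pipeline YAML, the difference between a Fixed Value and a Runtime Input for a setting like **Timeout** looks like this (a minimal sketch):

```yaml
timeout: 10m 30s     # Fixed Value: entered once, never changes at runtime
# timeout: <+input>  # Runtime Input: Harness prompts for a value at execution
```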
+ +![](./static/runtime-inputs-04.png) + +You can template (or templatize) your Pipeline using Runtime Inputs, enabling users to select different values for each execution. For example, you can turn the Infrastructure Definition settings into Runtime Inputs and have users provide Dev, QA, and Prod values with each execution. + +This templating is different from the Harness Template Library feature. + +Furthermore, you can create Input Sets for the Runtime Inputs. Harness Input Sets are collections of runtime variables and values that can be provided to Pipelines before execution. You set up Input Sets for different Pipeline use cases, and then simply select the Input Set you want to use at runtime. + +See [Run Pipelines using Input Sets and Overlays](../8_Pipelines/run-pipelines-using-input-sets-and-overlays.md). + +Sometimes, the inputs and settings for all of the stages in a Pipeline aren't known before you deploy. Some inputs and settings can depend on the execution of the previous stages in the Pipeline. + +For example, you might have an Approval step as part of the stage or Pipeline. Once the approval is received, you want to resume the next stage of the Pipeline execution by providing new inputs. + +To do this, when you add certain stage settings to your Pipeline, use Runtime Inputs. + +#### How Do Runtime Inputs Work? + +You select Runtime Input as the option for a setting. + +The Runtime Input is identified using the expression `<+input>`. + +When you run a Pipeline, you provide the value for the input. + +You can enter a value for the variable or use a Harness expression. + +Later, if you choose to rerun a Pipeline, the Pipeline will run using the Runtime Inputs you provided the last time it ran. + +#### Use Runtime Inputs in a Stage or Pipeline + +Using Runtime Inputs templates some or all of a stage or Pipeline's settings. The same Pipeline can be run using different values for all of the Runtime Inputs. 
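For example, a stage's Infrastructure Definition with its environment-specific settings turned into Runtime Inputs might look like this (a hypothetical sketch; the Connector Id is illustrative):

```yaml
infrastructure:
  infrastructureDefinition:
    type: KubernetesDirect
    spec:
      connectorRef: my_k8s_connector  # hypothetical Connector Id
      namespace: <+input>             # prompted at runtime: Dev, QA, or Prod
      releaseName: <+input>
```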
+ +#### CI Example + +You can use Runtime Inputs in a CI stage's Infrastructure. Here's an example using a Runtime Input in the **Namespace** setting. + +![](./static/runtime-inputs-05.png) + +#### CD Example + +You can use Runtime Inputs for the Service in a CD stage's Service settings. + +![](./static/runtime-inputs-06.png) +### Using Runtime Inputs During Execution + +Currently, this feature is behind the feature flag `NG_EXECUTION_INPUT`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. + +You can add runtime input to a pipeline that runs when a stage or a step is executed. If a custom stage is set up for runtime input, you can enter a shell script when prompted by Harness during execution. If a Harness Approval step is set up for runtime input, when the pipeline executes, you can specify the Harness User Groups that will approve that step. + +#### Limitations and Requirements + +The following limitations and requirements apply to this feature: + +* ServiceV2 and EnvironmentV2 are not supported for runtime input. +* As a user, you must have the Pipeline Execute permission to be able to submit runtime input during execution. + +#### Using Runtime Input During Execution With a Shell Script + +If a runtime input was specified for a step with execution input, you are prompted to enter the values before the step begins. Harness prompts you for values in the Step details. The Pipeline runs when the values are entered. + +When you create a pipeline with a custom stage, add a variable to the step and select **Runtime input**. To specify an input as runtime input during execution, add the following to the input by substituting the value within the quotes: + +`<+input>.default("abc").runtimeInput()` + +The **Value** field is populated with `<+input>`. + +In the **Configure Options** window, click the checkbox for **Request input value when the stage/step is being executed** and click **Submit**. 
+ +![](./static/runtime-inputs-07.png) + +In **Execution**, click **Add Step for a Shell Script**, select **Shell Script**, and add a name for the shell script. In the **Script** window, add the variable that was created. For example: + +`<` + +![](./static/runtime-inputs-08.png) + +Save and run the pipeline. + +#### Using Runtime Input with an Approval Step + +To add a runtime input with an Approval step, create a new pipeline, add an Approval stage, and click **Set Up Stage**. + +In the workflow for Execution, click **Approval**. + +In **Manual Approval**, enter a name for this step. + +![](./static/runtime-inputs-09.png) + +For **Approvers**, click **User Groups** and select **Runtime input**. + +Click the settings icon for **User Groups**. + +In **Configure Options**, select **Request input value when the stage/step is being executed**, and click **Submit**. + +Save the pipeline and run it for the approval step. + +### Expressions + +With Expressions you can use Harness input, output, and execution variables in a setting. + +All of these variables represent settings and values in the Pipeline before and during execution. + +See [Built-in Harness Variables Reference](../12_Variables-and-Expressions/harness-variables.md). + +When you select **Expression**, you type `<+` and the list of available variables appears. + +![](./static/runtime-inputs-10.png) + +Simply click a variable name to use it as the value for this setting. + +At runtime, Harness will replace the variable with the runtime value. 
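For instance, a Shell Script step might use expressions inside its script. The step layout below is a hypothetical sketch, but `<+pipeline.name>` and `<+service.name>` are standard Harness variables:

```yaml
- step:
    type: ShellScript
    name: Print Context
    identifier: print_context
    spec:
      shell: Bash
      source:
        type: Inline
        spec:
          script: echo "Executing <+pipeline.name> for service <+service.name>"
```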
+ +### See also + +* [Platform Technical Reference](https://docs.harness.io/category/akr4ga1dfq-platform-technical-reference) + diff --git a/docs/platform/20_References/static/entity-deletion-reference-00.png b/docs/platform/20_References/static/entity-deletion-reference-00.png new file mode 100644 index 00000000000..bec50903938 Binary files /dev/null and b/docs/platform/20_References/static/entity-deletion-reference-00.png differ diff --git a/docs/platform/20_References/static/entity-deletion-reference-01.png b/docs/platform/20_References/static/entity-deletion-reference-01.png new file mode 100644 index 00000000000..73efc41451c Binary files /dev/null and b/docs/platform/20_References/static/entity-deletion-reference-01.png differ diff --git a/docs/platform/20_References/static/entity-identifier-reference-17.png b/docs/platform/20_References/static/entity-identifier-reference-17.png new file mode 100644 index 00000000000..eb8232a201a Binary files /dev/null and b/docs/platform/20_References/static/entity-identifier-reference-17.png differ diff --git a/docs/platform/20_References/static/renaming-entities-and-resources-11.png b/docs/platform/20_References/static/renaming-entities-and-resources-11.png new file mode 100644 index 00000000000..e1bc1837809 Binary files /dev/null and b/docs/platform/20_References/static/renaming-entities-and-resources-11.png differ diff --git a/docs/platform/20_References/static/renaming-entities-and-resources-12.png b/docs/platform/20_References/static/renaming-entities-and-resources-12.png new file mode 100644 index 00000000000..a744d477bb5 Binary files /dev/null and b/docs/platform/20_References/static/renaming-entities-and-resources-12.png differ diff --git a/docs/platform/20_References/static/renaming-entities-and-resources-13.png b/docs/platform/20_References/static/renaming-entities-and-resources-13.png new file mode 100644 index 00000000000..c48ffb54028 Binary files /dev/null and 
b/docs/platform/20_References/static/renaming-entities-and-resources-13.png differ diff --git a/docs/platform/20_References/static/renaming-entities-and-resources-14.png b/docs/platform/20_References/static/renaming-entities-and-resources-14.png new file mode 100644 index 00000000000..500639c746f Binary files /dev/null and b/docs/platform/20_References/static/renaming-entities-and-resources-14.png differ diff --git a/docs/platform/20_References/static/renaming-entities-and-resources-15.png b/docs/platform/20_References/static/renaming-entities-and-resources-15.png new file mode 100644 index 00000000000..b20b12669c4 Binary files /dev/null and b/docs/platform/20_References/static/renaming-entities-and-resources-15.png differ diff --git a/docs/platform/20_References/static/renaming-entities-and-resources-16.png b/docs/platform/20_References/static/renaming-entities-and-resources-16.png new file mode 100644 index 00000000000..820779fdd23 Binary files /dev/null and b/docs/platform/20_References/static/renaming-entities-and-resources-16.png differ diff --git a/docs/platform/20_References/static/runtime-inputs-02.png b/docs/platform/20_References/static/runtime-inputs-02.png new file mode 100644 index 00000000000..dcae07d40dd Binary files /dev/null and b/docs/platform/20_References/static/runtime-inputs-02.png differ diff --git a/docs/platform/20_References/static/runtime-inputs-03.png b/docs/platform/20_References/static/runtime-inputs-03.png new file mode 100644 index 00000000000..ccaa0708ad9 Binary files /dev/null and b/docs/platform/20_References/static/runtime-inputs-03.png differ diff --git a/docs/platform/20_References/static/runtime-inputs-04.png b/docs/platform/20_References/static/runtime-inputs-04.png new file mode 100644 index 00000000000..3598c5be0a4 Binary files /dev/null and b/docs/platform/20_References/static/runtime-inputs-04.png differ diff --git a/docs/platform/20_References/static/runtime-inputs-05.png 
b/docs/platform/20_References/static/runtime-inputs-05.png new file mode 100644 index 00000000000..3975fa1a048 Binary files /dev/null and b/docs/platform/20_References/static/runtime-inputs-05.png differ diff --git a/docs/platform/20_References/static/runtime-inputs-06.png b/docs/platform/20_References/static/runtime-inputs-06.png new file mode 100644 index 00000000000..cc74cf2cba9 Binary files /dev/null and b/docs/platform/20_References/static/runtime-inputs-06.png differ diff --git a/docs/platform/20_References/static/runtime-inputs-07.png b/docs/platform/20_References/static/runtime-inputs-07.png new file mode 100644 index 00000000000..6f717fe1980 Binary files /dev/null and b/docs/platform/20_References/static/runtime-inputs-07.png differ diff --git a/docs/platform/20_References/static/runtime-inputs-08.png b/docs/platform/20_References/static/runtime-inputs-08.png new file mode 100644 index 00000000000..59ffea83fae Binary files /dev/null and b/docs/platform/20_References/static/runtime-inputs-08.png differ diff --git a/docs/platform/20_References/static/runtime-inputs-09.png b/docs/platform/20_References/static/runtime-inputs-09.png new file mode 100644 index 00000000000..ad276c310f3 Binary files /dev/null and b/docs/platform/20_References/static/runtime-inputs-09.png differ diff --git a/docs/platform/20_References/static/runtime-inputs-10.png b/docs/platform/20_References/static/runtime-inputs-10.png new file mode 100644 index 00000000000..87e5371b8a0 Binary files /dev/null and b/docs/platform/20_References/static/runtime-inputs-10.png differ diff --git a/docs/platform/20_References/static/tags-reference-18.png b/docs/platform/20_References/static/tags-reference-18.png new file mode 100644 index 00000000000..992222678df Binary files /dev/null and b/docs/platform/20_References/static/tags-reference-18.png differ diff --git a/docs/platform/20_References/tags-reference.md b/docs/platform/20_References/tags-reference.md new file mode 100644 index 
00000000000..5328b31fb94 --- /dev/null +++ b/docs/platform/20_References/tags-reference.md @@ -0,0 +1,51 @@ +--- +title: Tags Reference +description: You can add Tags to Harness entities and then use the Tags to search for all matching entities. For example, you can add a Tag to a Harness Project and then filter the list of Projects by Tag. What a… +# sidebar_position: 2 +helpdocs_topic_id: i8t053o0sq +helpdocs_category_id: phddzxsa5y +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can add Tags to Harness entities and then use the Tags to search for all matching entities. For example, you can add a Tag to a Harness Project and then filter the list of Projects by Tag. + +### What are Tags? + +Tags are simply metadata added to Harness entities. They are strings that can contain any characters. + +Harness Tags are applied to entities and then used to filter them. Multiple Tags can be added to an entity, creating a list of Tags. + +For example, the Tag **docs** has been added to two Projects and so a search for **doc** returns Projects with name and Tags that match: + +![](./static/tags-reference-18.png) +### Limitations + +* [Runtime inputs](runtime-inputs.md) (`<+input>`) are not supported in Tags. +* Harness variable expressions cannot be used in Tags. See [Built-in Harness Variables Reference](../12_Variables-and-Expressions/harness-variables.md). + +### Delegate Tags and General Tags + +Delegate Tags are different from general Tags in the following ways: + +* Delegate Tags are Tags added to Delegates. +* Delegate Tags are not used in searches. +* Delegates are only tagged with Delegate Tags. General Tags are not applied to Delegates. + +### Tag Expressions + +You can reference Tags using [Harness expressions](../12_Variables-and-Expressions/harness-variables.md). 
+ +You simply reference the tagged entity and then use `tags.[tag name]`, like `<+pipeline.tags.docs>`. + +For example, here are several different references: + +* `<+pipeline.tags.[tag name]>` +* `<+stage.tags.[tag name]>` +* `<+pipeline.stages.s1.tags.[tag name]>` +* `<+serviceConfig.service.tags.[tag name]>` + +### Related Reference Material + +* [Built-in Harness Variables Reference](../12_Variables-and-Expressions/harness-variables.md) + diff --git a/docs/platform/20_References/whitelist-harness-domains-and-ips.md b/docs/platform/20_References/whitelist-harness-domains-and-ips.md new file mode 100644 index 00000000000..41f5adb50e6 --- /dev/null +++ b/docs/platform/20_References/whitelist-harness-domains-and-ips.md @@ -0,0 +1,36 @@ +--- +title: Allowlist Harness Domains and IPs +description: Harness SaaS Delegates only need outbound access to the Harness domain name (most commonly, app.harness.io) and, optionally, to logging.googleapis.com. The URL logging.googleapis.com is used to provi… +# sidebar_position: 2 +helpdocs_topic_id: ooelo06uy5 +helpdocs_category_id: fb16ljb8lu +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness SaaS Delegates only need outbound access to the Harness domain name (most commonly, **app.harness.io**) and, optionally, to **logging.googleapis.com**. + +The URL logging.googleapis.com is used to provide logs to Harness support. + +### Harness Manager + +Users of the Harness Manager browser client need access to **app.harness.io** and **static.harness.io**. This is not a Harness Delegate requirement. It's simply for users to use the browser-based Harness Manager. + +### Vanity URL + +If you are using a Harness vanity URL, like **mycompany.harness.io**, you can allowlist it also. + +### Allowlist Harness SaaS IPs + +The following list is optional. You can allowlist these IPs if needed. 
+ + +``` +35.201.91.229 +162.159.134.64 +162.159.135.64 +2606:4700:7::a29f:8640 +2606:4700:7::a29f:8740 +``` +Harness will not change IPs without 30 days' notice to all customers. If a security emergency requires a change, all customers will be notified. + diff --git a/docs/platform/2_Delegates/_category_.json b/docs/platform/2_Delegates/_category_.json new file mode 100644 index 00000000000..16b055e6578 --- /dev/null +++ b/docs/platform/2_Delegates/_category_.json @@ -0,0 +1 @@ +{"label": "Delegates", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Delegates"}, "customProps": {"position": 20, "helpdocs_category_id": "seizygxv7b", "helpdocs_parent_category_id": "9i5thr0ot2"}} \ No newline at end of file diff --git a/docs/platform/2_Delegates/delegate-guide/_category_.json b/docs/platform/2_Delegates/delegate-guide/_category_.json new file mode 100644 index 00000000000..de390f57236 --- /dev/null +++ b/docs/platform/2_Delegates/delegate-guide/_category_.json @@ -0,0 +1 @@ +{"label": "Delegate Guide", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Delegate Guide"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "m9iau0y3hv"}} \ No newline at end of file diff --git a/docs/platform/2_Delegates/delegate-guide/automate-delegate-installation.md b/docs/platform/2_Delegates/delegate-guide/automate-delegate-installation.md new file mode 100644 index 00000000000..3b1b28ccd75 --- /dev/null +++ b/docs/platform/2_Delegates/delegate-guide/automate-delegate-installation.md @@ -0,0 +1,107 @@ +--- +title: Automate delegate installation +description: Automate Delegate installation and registration. 
+# sidebar_position: 2 +helpdocs_topic_id: 9deaame3qz +helpdocs_category_id: m9iau0y3hv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can automate delegate installation and registration by duplicating the downloaded delegate configuration file, renaming the delegate, and applying the new file. You can script this process to duplicate delegates as needed. + +When you apply the new delegate file, the delegate registers with Harness under the new name. + +This topic describes the process used to duplicate, rename, and register a new delegate. You will likely want to script this process. + +### Review: Automation and high availability (HA) + +High availability does not require delegate automation. Automation can be useful, however, when multiple delegates are required to perform concurrent tasks, or depending on the compute resources you assign to delegates. A rule of thumb is one delegate for every 300 to 500 service instances. + +In addition to compute considerations, you can implement high availability for delegates. This means installing multiple delegates in your environment. + +For example, in Kubernetes deployments, you can set up two delegates, each in its own pod in the same target Kubernetes cluster. To do so, edit the Kubernetes delegate `spec` you download from Harness to provide multiple replica pods. + + +``` +... +apiVersion: apps/v1beta1 +kind: StatefulSet +metadata: + labels: + harness.io/app: harness-delegate + harness.io/account: xxxx + harness.io/name: test + name: test-zeaakf + namespace: harness-delegate +spec: + replicas: 2 + selector: + matchLabels: + harness.io/app: harness-delegate +... +``` +In this example, the `spec` section of the harness-kubernetes.yaml file was changed to provide two replica pods. High availability is provided without automation. + +For the Kubernetes delegate, you only need one delegate in the cluster. Simply increase the number of replicas, and nothing else. 
Do not add another delegate to the cluster in an attempt to achieve HA. + +If you want to install Kubernetes delegates in separate clusters, do not use the same harness-kubernetes.yaml and name for both delegates. Download a new Kubernetes YAML `spec` from Harness for each delegate you want to install. This avoids name conflicts. + +In every case, the delegates must be identical in terms of permissions, keys, connectivity, and so on. + +With two or more delegates running in the same target environment, high availability is provided by default. The failure of a single delegate does not stop Harness from performing deployments. You can also increase availability further by running three delegates in case you lose two, and so on. + +### Limitations + +* Two delegates in different locations with different connectivity do not support high availability. For example, if you have a delegate in a development environment and another in a production environment, the development delegate does not communicate with the production delegate. The reverse is also true. If the sole delegate in an environment stops running, Harness ceases operation. + +### Step 1: Duplicate the delegate config file + +These steps assume you have already installed and registered a Delegate. If you haven't, see the [Delegate installation topics](https://docs.harness.io/category/9i5thr0ot2). + +Duplicate the configuration file for a delegate you have installed and registered with your Harness account. + +Ensure that the delegate environment variables are set correctly. + +The delegate configuration file contains environment variables for account, Organization, and Project. The account variable is always set with your Harness account Id. + +If your delegate is registered at the account level, the Organization and Project variables will be empty. If your delegate is registered at the Organization level, the Project variable will be empty. 
+ +If your delegate configuration file uses other environment variables, review them to make certain that you want them duplicated. + +The Delegate Environment Variables are described in the relevant Delegate installation topics. + +### Step 2: Rename the new delegate + +The process you use to rename a delegate depends on its type. For Docker delegates, you change the name in one environment variable in the Docker Compose file. For the Kubernetes delegate, you change multiple instances of the name. + +#### Kubernetes delegate renaming + +In the Kubernetes delegate config file, several labels must be updated: + +* `Secret.metadata.name` +* `StatefulSet.metadata.labels.harness.io/name` +* `StatefulSet.metadata.name` +* `StatefulSet.spec.selector.matchLabels.harness.io/name` +* `StatefulSet.spec.template.metadata.labels.harness.io/name` +* `StatefulSet.spec.template.spec.env.name: DELEGATE_NAME` + +The `DELEGATE_NAME` environment variable looks like this: + + +``` +... + - name: DELEGATE_NAME + value: string +... +``` +#### Docker delegate renaming + +To rename the Docker delegate, simply rename the value for the `DELEGATE_NAME` environment variable. + + +``` +... + - DELEGATE_NAME=my-new-delegate +... +``` +### Step 3: Install the new delegate + +After you update the delegate names, you can apply the configuration file. The delegate installs and registers with Harness. 
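The duplicate-rename-apply sequence can be scripted. The sketch below assumes the downloaded spec is named `harness-delegate.yaml` and the original delegate name is `my-delegate` (both are assumptions); a single substitution covers the Secret name, the StatefulSet name and labels, and the `DELEGATE_NAME` variable, because they all contain the delegate name:

```shell
#!/usr/bin/env bash
set -euo pipefail

OLD_NAME="my-delegate"     # name used in the downloaded spec (assumption)
NEW_NAME="my-delegate-2"   # name the duplicated delegate registers under

# Stand-in for the downloaded spec; in practice this file comes from Harness.
cat > harness-delegate.yaml <<'EOF'
metadata:
  name: my-delegate
env:
- name: DELEGATE_NAME
  value: my-delegate
EOF

# Duplicate the config file, renaming every occurrence of the delegate name.
sed "s/${OLD_NAME}/${NEW_NAME}/g" harness-delegate.yaml > "harness-delegate-${NEW_NAME}.yaml"

# Applying the new file installs and registers the renamed delegate:
#   kubectl apply -f harness-delegate-my-delegate-2.yaml
```

In a real script, you would skip the heredoc and point `sed` at the full spec downloaded from Harness Manager.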
+ +### See also + +* [Run Scripts on Delegates](run-scripts-on-delegates.md) + diff --git a/docs/platform/2_Delegates/delegate-guide/build-custom-delegate-images-with-third-party-tools.md b/docs/platform/2_Delegates/delegate-guide/build-custom-delegate-images-with-third-party-tools.md new file mode 100644 index 00000000000..23dbb877554 --- /dev/null +++ b/docs/platform/2_Delegates/delegate-guide/build-custom-delegate-images-with-third-party-tools.md @@ -0,0 +1,424 @@ +--- +title: Build custom delegate images with third-party tools +description: This document explains how to build and host custom delegate images that include the tools you select. +# sidebar_position: 2 +helpdocs_topic_id: c2hjcqvpq8 +helpdocs_category_id: m9iau0y3hv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness Manager installs and configures delegates with the binaries that most CI/CD pipelines require. In some cases, however, a preconfigured image isn’t the right fit. For example, preconfigured images can: + +* Introduce the vulnerabilities of the binaries they include. +* Restrict you to the use of the included third-party tools and versions. + +This document explains how you can: + +* Build and host a custom delegate image that includes the tools you select. +* Use your custom delegate in CI/CD pipelines. + +This is not a runtime process. For information on how to install tools on the delegate in runtime, see [Install Delegates with Third-Party Tools](install-delegates-with-third-party-tools.md). + +### Select the delegate image + +You can build on either of the following Harness-provided images. + + + +| | | +| --- | --- | +| **Image** | **Description** | +| Harness Delegate Docker image | A publicly available Docker image providing Harness Delegate. | +| Harness Minimal Delegate Docker image | A minimal delegate image available in Docker Hub at . | + +You can use the `latest` version minimal image from the Docker repository. 
+
+![](./static/build-custom-delegate-images-with-third-party-tools-07.png)
+### Build the delegate image
+
+When you build a custom delegate image, you modify the selected image with user privileges and the binaries you need. This section explains the build script used for the process. In this example, the script builds a custom image for deployment by Kubernetes and by Terraform.
+
+The first lines of the script provide information about the base image and user privileges. This example uses the minimal image with delegate minor version 77029.
+
+
+```
+FROM harness/delegate:22.10.77029.minimal
+USER root
+```
+The delegate container is granted root user privileges.
+
+The first `RUN` block installs or updates the `unzip` and `yum-utils` tools. The `--nodocs` option prevents the installation of documentation on the image.
+
+
+```
+RUN microdnf update \
+  && microdnf install --nodocs \
+    unzip \
+    yum-utils
+```
+The second `RUN` block uses the `yum` utility to create a configuration file for the HashiCorp repository, and then uses the `microdnf` package manager to install the required Terraform components:
+
+
+```
+RUN yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo \
+  && microdnf install -y terraform
+```
+The final `RUN` block retrieves the Kubernetes `kubectl` command-line tool that is required to manipulate clusters.
The Linux `chmod +x` instruction makes the utility executable:
+
+
+```
+RUN mkdir /opt/harness-delegate/tools && cd /opt/harness-delegate/tools \
+  && curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" && chmod +x kubectl
+```
+
+The final instruction defines the Linux `$PATH` environment variable that provides the location of the installed tools:
+
+
+```
+ENV PATH=/opt/harness-delegate/tools/:$PATH
+```
+The complete script is as follows:
+
+
+```
+FROM harness/delegate:22.10.77029.minimal
+USER root
+
+RUN microdnf update \
+  && microdnf install --nodocs \
+    unzip \
+    yum-utils
+
+RUN yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo \
+  && microdnf install -y terraform
+
+RUN mkdir /opt/harness-delegate/tools && cd /opt/harness-delegate/tools \
+  && curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" && chmod +x kubectl
+
+ENV PATH=/opt/harness-delegate/tools/:$PATH
+
+```
+### Upload the image to Docker Hub
+
+The next step is to upload your custom image to Docker Hub. For information on working with Docker repositories, see [Manage repositories](https://docs.docker.com/docker-hub/repos/) in the Docker documentation.
+
+### Modify the delegate manifest
+
+Before you can deploy a delegate, you must:
+
+* Update the image path to the repository location of the custom image.
+* Suspend delegate auto-upgrade functionality.
+
+Delegate auto-upgrade is not compatible with custom images.
+
+### Update the image path
+
+Open the delegate manifest file and locate the container `spec` (`spec.containers`). Change the image path to reflect the repository location of your uploaded image as shown in the following YAML.
+
+
+```
+  spec:
+    terminationGracePeriodSeconds: 600
+    restartPolicy: Always
+    containers:
+    - image: example/org:custom-delegate
+      imagePullPolicy: Always
+      name: delegate
+      securityContext:
+        allowPrivilegeEscalation: false
+        runAsUser: 0
+```
+
+For purposes of this example, the image was uploaded to `example/org:custom-delegate`.
+
+### Suspend delegate auto-upgrade
+
+Before you deploy a custom delegate, you must suspend its auto-upgrade functionality. This step prevents your image from being automatically upgraded, which would remove the installed binaries.
+
+To suspend auto-upgrade, in the delegate manifest, locate the `CronJob` resource. In the resource `spec`, set the `suspend` field to `true` as shown in the following YAML:
+
+
+```
+apiVersion: batch/v1beta1
+kind: CronJob
+metadata:
+  labels:
+    harness.io/name: custom-del-upgrader-job
+  name: custom-del-upgrader-job
+  namespace: harness-delegate-ng
+spec:
+  suspend: true
+  schedule: "0 */1 * * *"
+  concurrencyPolicy: Forbid
+  startingDeadlineSeconds: 20
+
+```
+### Example manifest file
+
+For the complete file, expand the following example.
+ +Example manifest +``` +apiVersion: v1 +kind: Namespace +metadata: + name: harness-delegate-ng + +--- + +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: harness-delegate-ng-cluster-admin +subjects: + - kind: ServiceAccount + name: default + namespace: harness-delegate-ng +roleRef: + kind: ClusterRole + name: cluster-admin + apiGroup: rbac.authorization.k8s.io + +--- + +apiVersion: v1 +kind: Secret +metadata: + name: custom-del-account-token + namespace: harness-delegate-ng +type: Opaque +data: + DELEGATE_TOKEN: "" + +--- + +# If delegate needs to use a proxy, please follow instructions available in the documentation +# https://ngdocs.harness.io/article/5ww21ewdt8-configure-delegate-proxy-settings + +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + harness.io/name: custom-del + name: custom-del + namespace: harness-delegate-ng +spec: + replicas: 1 + selector: + matchLabels: + harness.io/name: custom-del + template: + metadata: + labels: + harness.io/name: custom-del + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "3460" + prometheus.io/path: "/api/metrics" + spec: + terminationGracePeriodSeconds: 600 + restartPolicy: Always + containers: + - image: foobar/org:custom-delegate + imagePullPolicy: Always + name: delegate + securityContext: + allowPrivilegeEscalation: false + runAsUser: 0 + ports: + - containerPort: 8080 + resources: + limits: + cpu: "0.5" + memory: "2048Mi" + requests: + cpu: "0.5" + memory: "2048Mi" + livenessProbe: + httpGet: + path: /api/health + port: 3460 + scheme: HTTP + initialDelaySeconds: 10 + periodSeconds: 10 + failureThreshold: 2 + startupProbe: + httpGet: + path: /api/health + port: 3460 + scheme: HTTP + initialDelaySeconds: 30 + periodSeconds: 10 + failureThreshold: 15 + envFrom: + - secretRef: + name: custom-del-account-token + env: + - name: JAVA_OPTS + value: "-Xms64M" + - name: ACCOUNT_ID + value: + - name: MANAGER_HOST_AND_PORT + value: 
https://app.harness.io/gratis + - name: DEPLOY_MODE + value: KUBERNETES + - name: DELEGATE_NAME + value: custom-del + - name: DELEGATE_TYPE + value: "KUBERNETES" + - name: DELEGATE_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: INIT_SCRIPT + value: "" + - name: DELEGATE_DESCRIPTION + value: "" + - name: DELEGATE_TAGS + value: "" + - name: NEXT_GEN + value: "true" + - name: CLIENT_TOOLS_DOWNLOAD_DISABLED + value: "true" + - name: LOG_STREAMING_SERVICE_URL + value: "https://app.harness.io/gratis/log-service/" + +--- + +apiVersion: v1 +kind: Service +metadata: + name: delegate-service + namespace: harness-delegate-ng +spec: + type: ClusterIP + selector: + harness.io/name: custom-del + ports: + - port: 8080 + +--- + +kind: Role +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: upgrader-cronjob + namespace: harness-delegate-ng +rules: + - apiGroups: ["batch", "apps", "extensions"] + resources: ["cronjobs"] + verbs: ["get", "list", "watch", "update", "patch"] + - apiGroups: ["extensions", "apps"] + resources: ["deployments"] + verbs: ["get", "list", "watch", "create", "update", "patch"] + +--- + +kind: RoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: custom-del-upgrader-cronjob + namespace: harness-delegate-ng +subjects: + - kind: ServiceAccount + name: upgrader-cronjob-sa + namespace: harness-delegate-ng +roleRef: + kind: Role + name: upgrader-cronjob + apiGroup: "" + +--- + +apiVersion: v1 +kind: ServiceAccount +metadata: + name: upgrader-cronjob-sa + namespace: harness-delegate-ng + +--- + +apiVersion: v1 +kind: Secret +metadata: + name: custom-del-upgrader-token + namespace: harness-delegate-ng +type: Opaque +data: + UPGRADER_TOKEN: "NjUxM2FlZWUxODVhMjUyZDdjMDYxNTRmMjU4YWRjYWM=" + +--- + +apiVersion: v1 +kind: ConfigMap +metadata: + name: custom-del-upgrader-config + namespace: harness-delegate-ng +data: + config.yaml: | + mode: Delegate + dryRun: false + workloadName: custom-del + namespace: 
harness-delegate-ng + containerName: delegate + delegateConfig: + accountId: gVcEoNyqQNKbigC_hA3JqA + managerHost: https://app.harness.io/gratis + +--- + +apiVersion: batch/v1beta1 +kind: CronJob +metadata: + labels: + harness.io/name: custom-del-upgrader-job + name: custom-del-upgrader-job + namespace: harness-delegate-ng +spec: + suspend: true + schedule: "0 */1 * * *" + concurrencyPolicy: Forbid + startingDeadlineSeconds: 20 + jobTemplate: + spec: + template: + spec: + serviceAccountName: upgrader-cronjob-sa + restartPolicy: Never + containers: + - image: harness/upgrader:latest + name: upgrader + imagePullPolicy: Always + envFrom: + - secretRef: + name: custom-del-upgrader-token + volumeMounts: + - name: config-volume + mountPath: /etc/config + volumes: + - name: config-volume + configMap: + name: custom-del-upgrader-config + +``` +### Deploy the delegate + +You can deploy the delegate from Harness Manager or by applying the modified delegate manifest file to your cluster. + +![](./static/build-custom-delegate-images-with-third-party-tools-08.png) + +You can confirm the successful deployment and registration of the delegate in Harness Manager. Check the delegate information to ensure that auto-upgrade is not enabled. + +### Create pipelines + +You can use your registered delegate to run Kubernetes and Terraform pipelines. + +For information about creating a Kubernetes pipeline, see [Kubernetes deployment tutorial](https://docs.harness.io/article/knunou9j30). + +![](./static/build-custom-delegate-images-with-third-party-tools-09.png) + +For information about creating a Terraform Plan, see [Provision with the Terraform Apply Step](https://docs.harness.io/article/hdclyshiho). 
+ diff --git a/docs/platform/2_Delegates/delegate-guide/configure-delegate-proxy-settings.md b/docs/platform/2_Delegates/delegate-guide/configure-delegate-proxy-settings.md new file mode 100644 index 00000000000..e666dd28080 --- /dev/null +++ b/docs/platform/2_Delegates/delegate-guide/configure-delegate-proxy-settings.md @@ -0,0 +1,61 @@ +--- +title: Configure delegate proxy settings +description: All of the Delegates include proxy settings you can use to change how the Delegate connects to the Harness Manager. By default, the Harness Delegate uses HTTP and HTTPS in its Proxy Scheme settings.… +# sidebar_position: 2 +helpdocs_topic_id: 5ww21ewdt8 +helpdocs_category_id: m9iau0y3hv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +All of the Delegates include proxy settings you can use to change how the Delegate connects to the Harness Manager. + +By default, the Harness Delegate uses HTTP and HTTPS in its Proxy Scheme settings. + +### Kubernetes Proxy Settings + +The proxy settings are in the **harness-delegate.yaml** file: + + +``` +... + - name: PROXY_HOST + value: "" + - name: PROXY_PORT + value: "" + - name: PROXY_SCHEME + value: "" + - name: NO_PROXY + value: "" + - name: PROXY_MANAGER + value: "true" + - name: PROXY_USER + valueFrom: + secretKeyRef: + name: doc-example-proxy + key: PROXY_USER + - name: PROXY_PASSWORD + valueFrom: + secretKeyRef: + name: doc-example-proxy + key: PROXY_PASSWORD +... +``` +The `PROXY_MANAGER` setting determines whether the Delegate bypasses proxy settings to reach the Harness Manager in the cloud. If you want to bypass, enter `false`. + +#### In-Cluster Kubernetes Delegate with Proxy + +If an in-cluster Kubernetes Delegate has a proxy configured, then `NO_PROXY` must contain the cluster master IP. This enables the Delegate to skip the proxy for in-cluster connections. 
+
+### Subnet Masks not Supported
+
+You cannot use Delegate proxy settings to specify the Cluster Service Network CIDR notation and make the Delegate bypass the proxy to talk to the K8s API.
+
+Harness does not allow any methods of representing a subnet mask.
+
+The mask should be set in the cluster itself. For example, the following command retrieves the cluster IP of the Kubernetes API service:
+
+
+```
+kubectl -n default get service kubernetes -o json | jq -r '.spec.clusterIP'
+```
diff --git a/docs/platform/2_Delegates/delegate-guide/custom-delegate.md b/docs/platform/2_Delegates/delegate-guide/custom-delegate.md
new file mode 100644
index 00000000000..db349dcfb18
--- /dev/null
+++ b/docs/platform/2_Delegates/delegate-guide/custom-delegate.md
@@ -0,0 +1,87 @@
+---
+title: Create a custom delegate that includes custom tools
+description: Create your own custom Delegate and include the tools needed for your builds and deployments.
+# sidebar_position: 2
+helpdocs_topic_id: nbi9uj9wm4
+helpdocs_category_id: m9iau0y3hv
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+[Harness Delegates](../delegates-overview.md) are installed from the Harness Manager and typically contain the binaries you need for your CI/CD Pipelines.
+
+In some cases, you might want to add more tools or even create your own custom Delegate and include the tools needed for your builds and deployments.
+
+This topic explains the different ways to create a custom Delegate.
+
+### Before you begin
+
+* [Delegates Overview](../delegates-overview.md)
+* [Supported Platforms and Technologies](https://docs.harness.io/article/1e536z41av-supported-platforms-and-technologies)
+* [Delegate Installation Overview](https://docs.harness.io/article/re8kk0ex4k-delegate-installation-overview)
+
+### Option: Use the INIT\_SCRIPT environment variable
+
+In the Delegate config file, locate the `INIT_SCRIPT` environment variable.
+
+For example, here it is in the Kubernetes Delegate harness-delegate.yaml file:
+
+
+```
+...
+apiVersion: apps/v1
+kind: StatefulSet
+...
+spec:
+...
+  spec:
+    ...
+    env:
+    ...
+    - name: INIT_SCRIPT
+      value: |-
+        echo install wget
+        apt-get install wget
+        echo wget installed
+...
+```
+In `value`, enter your script. For a list of common scripts, see [Common Delegate Initialization Scripts](https://newdocs.helpdocs.io/article/auveebqv37-common-delegate-profile-scripts).
+
+For steps on using the `INIT_SCRIPT` environment variable, see [Run Scripts on Delegates](run-scripts-on-delegates.md).
+
+You can see all of the environment variables for the Delegates in the following topics:
+
+* [Install a Kubernetes Delegate](install-a-kubernetes-delegate.md)
+* [Install a Docker Delegate](../delegate-install-docker/install-a-docker-delegate.md)
+
+### Option: Add a delegate image
+
+Harness Delegate Docker images are public and you can use them to compose your own Delegate image.
+
+The Harness Delegate Docker images are located on [Docker Hub](https://hub.docker.com/r/harness/delegate/tags).
+
+For example, you can download and install the tool libraries you want and then pull `delegate:latest` to add the latest Delegate image.
+
+Or you can create your own image based on the Delegate image.
+
+Here's an example that installs Node.js on top of a Delegate image and sets an environment variable key:value pair.
+
+
+```
+...
+FROM harness/delegate:latest
+
+RUN apt-get update && apt-get -y install nodejs
+
+ENV key=value
+...
+```
+You can see all of the environment variables for the Delegates in the following topics:
+
+* [Install a Kubernetes Delegate](install-a-kubernetes-delegate.md)
+* [Install a Docker Delegate](../delegate-install-docker/install-a-docker-delegate.md)
+
+### See also
+
+* [Delegate How-tos](https://docs.harness.io/category/9i5thr0ot2-delegates).
+ diff --git a/docs/platform/2_Delegates/delegate-guide/delegate-auto-update.md b/docs/platform/2_Delegates/delegate-guide/delegate-auto-update.md new file mode 100644 index 00000000000..731ffa3ac8c --- /dev/null +++ b/docs/platform/2_Delegates/delegate-guide/delegate-auto-update.md @@ -0,0 +1,40 @@ +--- +title: Delegate auto-update +description: Harness Delegate is installed with automatic updates enabled. Harness recommends that you accept automatic updates to the delegate image. If you prefer to disable auto-update, use one of the followin… +# sidebar_position: 2 +helpdocs_topic_id: iusry91f4u +helpdocs_category_id: m9iau0y3hv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness Delegate is installed with automatic updates enabled. Harness recommends that you accept automatic updates to the delegate image.  + +If you prefer to disable auto-update, use one of the following options: + +* Modify the delegate YAML to prevent installation of the auto-update component. +* Suspend auto-updates to the installed delegate image. + +**To suspend auto-update on an installed image** + +1. Run the following command to suspend auto-update on the installed image: +`kubectl patch cronjobs -p '{"spec" : {"suspend" : true }}' -n ` +2. In the delegate manifest, locate the **CronJob** resource. In the resource `spec`, set the `suspend` field to `true`: +`spec:` +--`suspend: true` + +**To prevent installation of the auto-update feature** + +* Remove the `cronJob` section before you apply the manifest. + +### Delegate YAML changes + +Harness does not recommend the use of delegate images that are not current. However, if you require an earlier image version, check the repository on [Docker Hub](https://hub.docker.com/). + +**To update the delegate YAML** + +1. Replace **delegate name** with the name you gave your delegate. The Harness Delegate image is the latest release image by default. +2. Replace **account id** with your Harness account ID. 
+ +For an example of a complete Delegate YAML file, see [Example Kubernetes Manifest: Harness Delegate](../delegate-reference/example-kubernetes-manifest-harness-delegate.md). + diff --git a/docs/platform/2_Delegates/delegate-guide/delegate-how-tos.md b/docs/platform/2_Delegates/delegate-guide/delegate-how-tos.md new file mode 100644 index 00000000000..eafe0890fde --- /dev/null +++ b/docs/platform/2_Delegates/delegate-guide/delegate-how-tos.md @@ -0,0 +1,45 @@ +--- +title: Delegate how-to +description: Harness Delegate is a service you run in your own environment, such as your local network, VPC, or cluster. For example, you can run the Delegate in the deployment target cluster for a CD Pipelin… +# sidebar_position: 2 +helpdocs_topic_id: 0slo2gklsy +helpdocs_category_id: m9iau0y3hv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The Delegate Guide includes the following how-to topics. + +### Basic installation + +* [Install Harness Delegate on Kubernetes](../delegate-install-kubernetes/install-harness-delegate-on-kubernetes.md) +* [Install Harness Delegate Using Helm](../delegate-install-kubernetes/install-harness-delegate-using-helm.md) +* [Install a Docker Delegate](../delegate-install-docker/install-a-docker-delegate.md) +* [Install a Legacy Kubernetes Delegate](install-a-kubernetes-delegate.md) + +### Advanced installation + +* [Automate Delegate Installation](automate-delegate-installation.md) +* [Non-Root Delegate Installation](non-root-delegate-installation.md) +* [Install a Delegate with Third-Party Custom Tool Binaries](install-a-delegate-with-3-rd-party-tool-custom-binaries.md) + +### Delegate customization + +* [Create a Custom Delegate that Includes Custom Tools](custom-delegate.md) + +### Configuring delegates + +* [Run Initialization Scripts on Delegates](run-scripts-on-delegates.md) +* [Configure Delegate Proxy Settings](configure-delegate-proxy-settings.md) + +### Delegate management + +* [Select Delegates with 
Tags](select-delegates-with-selectors.md)
+* [Delegate Registration and Verification](delegate-registration.md)
+* [Delete a Delegate](delete-a-delegate.md)
+
+### Secure delegates
+
+* [Secure Delegates with Tokens](secure-delegates-with-tokens.md)
+* [Truststore Override for Delegates](trust-store-override-for-delegates.md)
+
diff --git a/docs/platform/2_Delegates/delegate-guide/delegate-registration.md b/docs/platform/2_Delegates/delegate-guide/delegate-registration.md
new file mode 100644
index 00000000000..43fd07a0af1
--- /dev/null
+++ b/docs/platform/2_Delegates/delegate-guide/delegate-registration.md
@@ -0,0 +1,59 @@
+---
+title: Delegate registration and verification
+description: To set up a Harness Delegate, you install the Delegate in your environment and the Delegate automatically registers with your Harness account. The Delegate config file (for example, Kubernetes Delega…
+# sidebar_position: 2
+helpdocs_topic_id: 39tx85rekj
+helpdocs_category_id: m9iau0y3hv
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+To set up a Harness Delegate, you install the Delegate in your environment and the Delegate automatically registers with your Harness account.
+
+The Delegate config file (for example, the Kubernetes Delegate YAML file) contains your Harness account ID. That's how the Delegate knows where to register.
+
+### Installing and registering delegates
+
+To install a Delegate, follow the steps in the relevant Delegate installation topic, such as [Install a Kubernetes Delegate](install-a-kubernetes-delegate.md) or [Install a Docker Delegate](../delegate-install-docker/install-a-docker-delegate.md).
+
+Once you have installed the Delegate in your environment, click **Verify** in the Delegate wizard and Harness will verify that it is receiving heartbeats from the Delegate.
+
+![](./static/delegate-registration-01.png)
+This means Harness is waiting for the Delegate you installed to register.
+
+Registration can take a few minutes.
+
+Once the Delegate registers, the **Verify** screen will indicate that the Delegate is running.
+
+### Verifying delegate registration manually
+
+The Verify screen also includes troubleshooting steps.
+
+Here are a few of the steps for the Kubernetes Delegate.
+
+Check the status of the Delegate on your cluster:
+
+
+```
+kubectl describe pod <pod-name> -n harness-delegate
+```
+Check the Delegate logs:
+
+
+```
+kubectl logs -f <pod-name> -n harness-delegate
+```
+If the pod isn't up, you might see the following error in your cluster:
+
+
+```
+CrashLoopBackOff: Kubernetes Cluster Resources are not available.
+```
+Make sure the Kubernetes Cluster Resources (CPU, Memory) are sufficient.
+
+If the Delegate didn't reach a healthy state, try this:
+
+
+```
+kubectl describe pod <pod-name> -n harness-delegate
+```
diff --git a/docs/platform/2_Delegates/delegate-guide/delete-a-delegate.md b/docs/platform/2_Delegates/delegate-guide/delete-a-delegate.md
new file mode 100644
index 00000000000..491c5adff7f
--- /dev/null
+++ b/docs/platform/2_Delegates/delegate-guide/delete-a-delegate.md
@@ -0,0 +1,84 @@
+---
+title: Delete a delegate
+description: This topic describes how to delete a Harness Delegate from a Kubernetes cluster and Harness.
+# sidebar_position: 2
+helpdocs_topic_id: tl6ql57em6
+helpdocs_category_id: m9iau0y3hv
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This topic describes how to delete a delegate from your Kubernetes cluster and Harness installation.
+
+### Identify the delegate type
+
+Harness Delegate is installed as a Kubernetes [Deployment](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/deployment-v1/) object. A legacy delegate, on the other hand, is installed as a Kubernetes [StatefulSet](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/stateful-set-v1/) object. This means that the process used to delete a legacy delegate differs from the process used to delete Harness Delegate.
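One way to identify the type is to read the workload `kind` straight from the manifest. A minimal sketch, using a fabricated manifest fragment — `kind: Deployment` indicates Harness Delegate, while `kind: StatefulSet` indicates a legacy delegate:

```shell
# Fabricated manifest fragment for illustration; point grep at your
# real delegate manifest instead.
cat > delegate-manifest.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-del
EOF

# Print the workload kind declared in the manifest.
grep '^kind:' delegate-manifest.yaml
```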
+
+You can verify the delegate you're using by looking at its manifest file or by running `kubectl get all -n harness-delegate-ng`.
+
+To delete a legacy delegate, skip to the "Delete a legacy delegate" section.
+
+### Delete a delegate
+
+Use the following process to delete a delegate.
+
+#### Step 1: Delete the deployment for the delegate
+
+To delete a delegate from your Kubernetes cluster, you delete the **Deployment** object that represents its deployment.
+
+`kubectl delete deployment -n harness-delegate-ng <deployment-name>`
+
+Use the following command to retrieve a list of deployments:
+
+`kubectl get deployments`
+
+The deployment name is specified in the `metadata.name` field of the Kubernetes manifest you used to install the delegate.
+
+
+```
+...
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  labels:
+    harness.io/name: doc-demos
+  name: doc-demos
+  namespace: harness-delegate-ng
+...
+```
+In this example, the `name` field is specified as `doc-demos`.
+
+Next, delete the Updater **CronJob**:
+
+`kubectl delete cronjob -n harness-delegate-ng <deployment-name>-upgrader-job`
+
+For example, if the **Deployment** name is `quickstart-delegate`:
+
+`kubectl delete cronjob -n harness-delegate-ng quickstart-delegate-upgrader-job`
+
+#### Step 2: Delete the delegate in Harness
+
+Locate the delegate in the Harness account/Project/Org, click more options (⋮), and then click **Delete**.
+
+![](./static/delete-a-delegate-15.png)
+### Delete a legacy delegate
+
+Use the following process to delete a legacy delegate.
+
+#### Step 1: Delete the StatefulSet for the delegate
+
+To delete a legacy delegate from your Kubernetes cluster, you delete the **StatefulSet** object that represents its deployment.
+
+A **StatefulSet** resource ensures that the desired number of pods are running and available at all times. If you delete a pod that belongs to a **StatefulSet** without deleting the **StatefulSet** itself, the pod is recreated.
+ +For example, you can use the following command to delete the **StatefulSet** that created a delegate pod named `quickstart-vutpmk-0`: + +`$ kubectl delete statefulset -n harness-delegate-ng quickstart-vutpmk` + +The name of the delegate pod includes the name of the **StatefulSet** followed by the pod identifier `-0`. + +#### Step 2: Delete the delegate in Harness + +Locate the delegate in the Harness account/Project/Org, click more options (⋮), and then click **Delete**. + +![](./static/delete-a-delegate-16.png) \ No newline at end of file diff --git a/docs/platform/2_Delegates/delegate-guide/enable-root-user-privileges-to-add-custom-binaries.md b/docs/platform/2_Delegates/delegate-guide/enable-root-user-privileges-to-add-custom-binaries.md new file mode 100644 index 00000000000..65b7c1fb79b --- /dev/null +++ b/docs/platform/2_Delegates/delegate-guide/enable-root-user-privileges-to-add-custom-binaries.md @@ -0,0 +1,75 @@ +--- +title: Enable root user privileges to add custom binaries +description: You can install Harness Delegate with or without root user privileges. By default, the Harness Delegate container runs as root user. The Delegate installer provides the option to install the Delegate… +# sidebar_position: 2 +helpdocs_topic_id: lbndemc7qi +helpdocs_category_id: m9iau0y3hv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can install Harness Delegate with or without root user privileges. By default, the Harness Delegate container runs as root user.  + +The Delegate installer provides the option to install the Delegate with non-root user privileges. Non-root user access supports the security principle of minimum access. But without root user access, you cannot modify the Delegate image with custom binaries. + +This topic explains how to use the Delegate installer to install with or without root user privileges. 
This topic also explains how to modify an installed Delegate to enable root user privileges and the installation of custom binaries.
+
+### Delegate images
+
+Harness provides the following Delegate images. Each image includes a set of tools that target a particular scenario.
+
+| **Delegate Image** | **Description** |
+| --- | --- |
+| harness/delegate:*YY.MM.xxxxx* | Includes the Delegate and its dependencies. Includes client tools such as `kubectl`, Helm, and ChartMuseum. |
+| harness/delegate:*YY.MM.xxxxx*.minimal | Includes the Delegate and its dependencies. |
+
+For detailed information on the contents of Docker Delegate images, see [Support for Docker Delegate Images](support-for-delegate-docker-images.md).
+
+### Select user privileges in the installer
+
+The easiest way to set user privileges for the Delegate container is to use the Delegate installer.
+
+![](./static/enable-root-user-privileges-to-add-custom-binaries-10.png)
+**To set container privileges in the Delegate installer**
+
+1. Advance to the **Delegate Setup** page.
+![](./static/enable-root-user-privileges-to-add-custom-binaries-11.png)
+2. Clear or select the checkbox as follows:
+* To set non-root user privileges, clear **Run delegate with root access**.
+* To set root user privileges, select **Run delegate with root access**.
+
+The Delegate is installed with the specified privilege level.
+
+### Specify user privileges in delegate YAML
+
+To add binaries to a Delegate image that was installed without root user privileges, you can change the Delegate manifest file to allow them.
To do so, locate the container `spec` and ensure it includes the following `securityContext` object:
+
+
+```
+spec:
+  containers:
+  - image: harness/delegate:ng
+    imagePullPolicy: Always
+    name: harness-delegate-instance
+    securityContext:
+      allowPrivilegeEscalation: false
+      runAsUser: 0
+```
+### Use INIT\_SCRIPT with the microdnf package manager
+
+To add binaries, use the `microdnf` package manager included on the Delegate image. This utility is required to run installations and other operations on images.
+
+Use the `INIT_SCRIPT` environment variable to specify the custom binaries you want `microdnf` to install.
+
+
+```
+- name: INIT_SCRIPT
+  value: |-
+    microdnf install -y zip unzip
+```
+In this example, the value of `INIT_SCRIPT` is the `microdnf install` instruction that installs the `zip` and `unzip` packages.
+
+Note that the `apt-get` command-line tool and profile scripts target an earlier Ubuntu-based image and are not supported for these images.
+
diff --git a/docs/platform/2_Delegates/delegate-guide/install-a-delegate-with-3-rd-party-tool-custom-binaries.md b/docs/platform/2_Delegates/delegate-guide/install-a-delegate-with-3-rd-party-tool-custom-binaries.md
new file mode 100644
index 00000000000..a88965fa61d
--- /dev/null
+++ b/docs/platform/2_Delegates/delegate-guide/install-a-delegate-with-3-rd-party-tool-custom-binaries.md
@@ -0,0 +1,247 @@
+---
+title: Install a delegate with third-party tool custom binaries
+description: Use a Delegate image that includes no binaries and use the Delegate YAML environment variables to install the binaries you want.
+# sidebar_position: 2
+helpdocs_topic_id: ql86a0iqta
+helpdocs_category_id: m9iau0y3hv
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness delegates include binaries for the third-party SDKs that are required for Harness-supported integrations including Kubernetes, Helm, and so on.
The binaries are listed below in [Table: Certified SDK Versions for Deployment Types](#table_certified_sdk_versions_for_deployment_types). + +Harness includes multiple binary versions to support customers using code that requires versions other than the latest. + +##### Problem + +Older binary versions might include minor vulnerabilities that are detected in vulnerability scans. You might want to avoid vulnerabilities by selecting the binary versions you install. + +You might also want to install tools that Harness does not include. + +##### Solution + +To support this customization, Harness provides a delegate image that does not include any third-party SDK binaries. We call this image the No Tools Image. + +Using the No Tools Image and delegate YAML, you can install the specific SDK versions you want. You install software on the Delegate using the `INIT_SCRIPT` environment variable in the delegate YAML. + +This topic explains how to use the No Tools delegate image and install specific SDK versions. + +##### Required SDKs for Harness + +If you use the No Tools Image, you must install certain SDKs so that Harness can perform its tasks. These SDKs are covered in this topic and listed below in [Table: Certified SDK Versions for Deployment Types](#table_certified_sdk_versions_for_deployment_types). + +### Step 1: Edit delegate YAML + +To install a delegate, you download its YAML file and run it in your environment. + +Before you run the delegate, you edit the YAML file to change the following: + +* Delegate environment variables +* Delegate image + +These steps are below. + +### Step 2: Add Harness-required SDKs + +In the delegate container `spec`, use the `INIT_SCRIPT` environment variable to download the certified SDK versions required by Harness. + +The SDKs you need to add depend on what type of deployment you are doing with Harness. 
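As a sketch, the resulting container `spec` entry might look like the following. The install commands are drawn from the certified-SDK table in this topic; the `mkdir` step is an assumption, added because the target directory may not exist on the No Tools Image:

```
env:
  - name: INIT_SCRIPT
    value: |-
      mkdir -p /opt/harness-delegate/custom-client-tools
      ## Kubectl
      curl -LO https://dl.k8s.io/release/v1.24.3/bin/linux/amd64/kubectl
      chmod +x ./kubectl
      mv kubectl /opt/harness-delegate/custom-client-tools/kubectl
```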
+
+For more information on how to use the `INIT_SCRIPT` environment variable, see [Run Initialization Scripts on Delegates](run-scripts-on-delegates.md).
+
+#### Table: Certified SDK versions for deployment types
+
+The following table lists each of the certified SDKs for each deployment type.
+
+You must add the certified SDKs for your deployment type. Harness requires these tools to perform tasks.
+
+##### Kubernetes deployments
+
+For Kubernetes deployments, include the SDKs and tools that your manifest type requires.
+
+| | | | |
+| --- | --- | --- | --- |
+| **Manifest Type** | **Required Tool/SDK** | **Certified Version** | **Installation Command** |
+| Kubernetes | `kubectl` | v1.24.3 | ```curl -L https://dl.k8s.io/release/v1.24.3/bin/linux/amd64/kubectl -o kubectl && chmod +x ./kubectl && mv kubectl /opt/harness-delegate/custom-client-tools/kubectl```|
+| | `go-template` | v0.4.1 | ```mkdir -p /opt/harness-delegate/client-tools/go-template/v0.4.1/ && curl -L https://app.harness.io/public/shared/tools/go-template/release/v0.4.1/bin/linux/amd64/go-template -o go-template && chmod +x ./go-template && mv go-template /opt/harness-delegate/client-tools/go-template/v0.4.1/go-template```|
+| Helm | `kubectl` | v1.24.3 | ```curl -L https://dl.k8s.io/release/v1.24.3/bin/linux/amd64/kubectl -o kubectl && chmod +x ./kubectl && mv kubectl /opt/harness-delegate/custom-client-tools/kubectl```|
+| | `helm` | v3.9.2 | ```curl -L https://get.helm.sh/helm-v3.9.2-linux-amd64.tar.gz -o helm-v3.9.2.tar.gz && tar -xvzf helm-v3.9.2.tar.gz && chmod +x ./linux-amd64/helm && mv ./linux-amd64/helm /opt/harness-delegate/custom-client-tools/helm3```|
+| Helm (chart is stored in GCS or S3) | `kubectl` | v1.24.3 | ```curl -L https://dl.k8s.io/release/v1.24.3/bin/linux/amd64/kubectl -o kubectl && chmod +x ./kubectl && mv kubectl /opt/harness-delegate/custom-client-tools/kubectl```|
+| | `helm` | v3.9.2 | ```curl -L 
https://get.helm.sh/helm-v3.9.2-linux-amd64.tar.gz -o helm-v3.9.2.tar.gz && tar -xvzf helm-v3.9.2.tar.gz && chmod +x ./linux-amd64/helm && mv ./linux-amd64/helm /opt/harness-delegate/custom-client-tools/helm3```|
+| | `chartmuseum` | v0.8.2 and v0.12.0 | Install the default version from the Harness CDN:<br/>```curl -L https://app.harness.io/public/shared/tools/chartmuseum/release/v0.8.2/bin/linux/amd64/chartmuseum -o chartmuseum && chmod +x ./chartmuseum && mv chartmuseum /opt/harness-delegate/client-tools/chartmuseum/v0.8.2/chartmuseum```<br/>Install the newer version from the Harness CDN (to use this version, `USE_LATEST_CHARTMUSEUM_VERSION` must be enabled):<br/>```curl -L https://app.harness.io/public/shared/tools/chartmuseum/release/v0.12.0/bin/linux/amd64/chartmuseum -o chartmuseum && chmod +x ./chartmuseum && mv chartmuseum /opt/harness-delegate/client-tools/chartmuseum/v0.12.0/chartmuseum```<br/>Install a custom version from the official release. The binary must be moved to one of the predefined paths, `/opt/harness-delegate/client-tools/chartmuseum/v0.8.2/chartmuseum` or, if `USE_LATEST_CHARTMUSEUM_VERSION` is enabled, `/opt/harness-delegate/client-tools/chartmuseum/v0.12.0/chartmuseum`:<br/>```curl -L https://get.helm.sh/chartmuseum-v0.14.0-linux-amd64.tar.gz -o chartmuseum-v0.14.tar.gz && tar xzvf chartmuseum-v0.14.tar.gz && chmod +x ./linux-amd64/chartmuseum && mv ./linux-amd64/chartmuseum /opt/harness-delegate/client-tools/chartmuseum/v0.8.2/chartmuseum```|
+| Kustomize | `kubectl` | v1.24.3 | ```curl -L https://dl.k8s.io/release/v1.24.3/bin/linux/amd64/kubectl -o kubectl && chmod +x ./kubectl && mv kubectl /opt/harness-delegate/custom-client-tools/kubectl```|
+| | `kustomize` | v4.5.4 | ```curl -L https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv4.5.4/kustomize_v4.5.4_linux_amd64.tar.gz -o kustomize_v4.5.4.tar.gz && tar -xvzf kustomize_v4.5.4.tar.gz && chmod +x ./kustomize && mv kustomize /opt/harness-delegate/custom-client-tools/kustomize```|
+| OpenShift | `kubectl` | 
v1.24.3 | ```curl -L https://dl.k8s.io/release/v1.24.3/bin/linux/amd64/kubectl -o kubectl && chmod +x ./kubectl && mv kubectl /opt/harness-delegate/custom-client-tools/kubectl```|
+| | `oc` | v4 | ```curl -L https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/linux/oc.tar.gz -o oc.tar.gz && tar -xvzf oc.tar.gz && chmod +x ./oc && mv oc /opt/harness-delegate/custom-client-tools/oc```|
+| Terraform | `terraform-config-inspect` | v1.0 | ```mkdir -p /opt/harness-delegate/client-tools/tf-config-inspect/v1.0/ && curl -L https://app.harness.io/storage/harness-download/harness-terraform-config-inspect/v1.0/linux/amd64/terraform-config-inspect -o terraform-config-inspect && chmod +x ./terraform-config-inspect && mv terraform-config-inspect /opt/harness-delegate/client-tools/tf-config-inspect/v1.0/terraform-config-inspect```|
+| | | v1.1 | ```mkdir -p /opt/harness-delegate/client-tools/tf-config-inspect/v1.1/ && curl -L https://app.harness.io/storage/harness-download/harness-terraform-config-inspect/v1.1/linux/amd64/terraform-config-inspect -o terraform-config-inspect && chmod +x ./terraform-config-inspect && mv terraform-config-inspect /opt/harness-delegate/client-tools/tf-config-inspect/v1.1/terraform-config-inspect```|
+| WinRm | `harness-pywinrm` | v0.4-dev | This library is available for download from the Harness CDN:<br/>```https://app.harness.io/public/shared/tools/harness-pywinrm/release/v0.4-dev/bin/linux/amd64/harness-pywinrm```|
+
+##### Native Helm deployments
+
+For Native Helm deployments, include the following SDKs/tools.
+
+
+| | | | |
+| --- | --- | --- | --- |
+| **Manifest Type** | **Required Tool/SDK** | **Certified Version** | **Installation Command** |
+| Helm Chart | `helm` | v3.9.2 | ```curl -L https://get.helm.sh/helm-v3.9.2-linux-amd64.tar.gz -o helm-v3.9.2.tar.gz && tar -xvzf helm-v3.9.2.tar.gz && chmod +x ./linux-amd64/helm && mv ./linux-amd64/helm /opt/harness-delegate/custom-client-tools/helm3```|
+| | `kubectl`<br/>Required if the Kubernetes version is 1.16 or later. | v1.24.3 | ```curl -L https://dl.k8s.io/release/v1.24.3/bin/linux/amd64/kubectl -o kubectl && chmod +x ./kubectl && mv kubectl /opt/harness-delegate/custom-client-tools/kubectl```|
+
+##### SCM required if the OPTIMIZED\_GIT\_FETCH\_FILES feature flag is enabled
+
+Harness performs a `git clone` to fetch files. If the fetch times out, the repository might be too large for the network connection to fetch it before the timeout.
+
+To fetch very large repos, Harness can enable the feature flag `OPTIMIZED_GIT_FETCH_FILES` on your account. When this feature flag is enabled, Harness uses provider-specific APIs to improve performance.
+
+If this feature flag is enabled, your delegate YAML must include the following download.
+
+
+```
+## scm
+mkdir -p /opt/harness-delegate/client-tools/scm/36d92fd8/
+curl -L https://app.harness.io/public/shared/tools/scm/release/36d92fd8/bin/linux/amd64/scm -o scm
+chmod +x ./scm
+mv scm /opt/harness-delegate/client-tools/scm/36d92fd8/scm
+```
+Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+#### Example of Kubernetes delegate manifest with required SDK downloads
+
+The following delegate YAML contains examples of downloads for all Harness SDKs.
+
+You can edit the YAML to include only the SDKs and versions Harness requires for your deployment type.
+
+
+```
+...
+        - name: DELEGATE_TYPE
+          value: "KUBERNETES"
+        - name: DELEGATE_NAMESPACE
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.namespace
+        - name: INIT_SCRIPT
+          value: |
+            mkdir -p /opt/harness-delegate/custom-client-tools/
+            mkdir -p /opt/harness-delegate/download-tools/
+            cd /opt/harness-delegate/download-tools
+
+            ## Kubectl
+            curl -L https://dl.k8s.io/release/v1.24.3/bin/linux/amd64/kubectl -o kubectl
+            chmod +x ./kubectl
+            mv kubectl /opt/harness-delegate/custom-client-tools/kubectl
+
+            ## Helm V3
+            curl -L https://get.helm.sh/helm-v3.9.2-linux-amd64.tar.gz -o helm-v3.9.2.tar.gz
+            tar -xvzf helm-v3.9.2.tar.gz
+            chmod +x ./linux-amd64/helm
+            mv ./linux-amd64/helm /opt/harness-delegate/custom-client-tools/helm3
+
+            ## Kustomize
+            curl -L https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv4.5.4/kustomize_v4.5.4_linux_amd64.tar.gz -o kustomize_v4.5.4.tar.gz
+            tar -xvzf kustomize_v4.5.4.tar.gz
+            chmod +x ./kustomize
+            mv kustomize /opt/harness-delegate/custom-client-tools/kustomize
+
+            ## OpenShift OC
+            curl -L https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/linux/oc.tar.gz -o oc.tar.gz
+            tar -xvzf oc.tar.gz
+            chmod +x ./oc
+            mv oc /opt/harness-delegate/custom-client-tools/oc
+
+            ## go-template
+            mkdir -p /opt/harness-delegate/client-tools/go-template/v0.4.1/
+            curl -L https://app.harness.io/public/shared/tools/go-template/release/v0.4.1/bin/linux/amd64/go-template -o go-template
+            chmod +x ./go-template
+            mv go-template /opt/harness-delegate/client-tools/go-template/v0.4.1/go-template
+
+            ## scm
+            mkdir -p /opt/harness-delegate/client-tools/scm/36d92fd8/
+            curl -L https://app.harness.io/public/shared/tools/scm/release/36d92fd8/bin/linux/amd64/scm -o scm
+            chmod +x ./scm
+            mv scm /opt/harness-delegate/client-tools/scm/36d92fd8/scm
+
+            ## Replace default version of chartmuseum v0.12.0 with v0.14
+            ## USE_LATEST_CHARTMUSEUM_VERSION is enabled for account
+            curl -L https://get.helm.sh/chartmuseum-v0.14.0-linux-amd64.tar.gz -o 
chartmuseum-v0.14.tar.gz
+            tar xzvf chartmuseum-v0.14.tar.gz
+            chmod +x ./linux-amd64/chartmuseum
+            mv ./linux-amd64/chartmuseum /opt/harness-delegate/client-tools/chartmuseum/v0.12.0/chartmuseum
+
+            cd /opt/harness-delegate
+...
+```
+### Step 3: Disable default SDK downloads
+
+In the delegate container `spec`, add the following environment variables to prevent the delegate from downloading the default SDK binary versions.
+
+
+```
+...
+  spec:
+    containers:
+    ...
+      env:
+      ...
+        - name: INSTALL_CLIENT_TOOLS_IN_BACKGROUND
+          value: "false"
+        - name: CLIENT_TOOLS_DOWNLOAD_DISABLED
+          value: "true"
+...
+```
+### Step 4: Add your custom tools
+
+In the delegate container `spec`, use the `INIT_SCRIPT` environment variable to download any additional tools you want to add.
+
+### Step 5: Update environment variables for common tools
+
+You can set up custom paths to certain third-party binaries using environment variables.
+
+| | |
+| --- | --- |
+| **Tool** | **Environment Variable Name** |
+| Helm v3 | HELM3\_PATH |
+| Helm v2 | HELM\_PATH<br/>If you are performing a [Native Helm deployment](https://docs.harness.io/article/lbhf2h71at-native-helm-quickstart), do not use `HELM_PATH` for the Helm 2 binary. Harness requires the Helm 2 binary on the delegate in its standard path, for example: `/usr/local/bin/helm`. |
+| Kustomize | KUSTOMIZE\_PATH |
+| Kubectl | KUBECTL\_PATH |
+| OpenShift (OC) | OC\_PATH |
+
+The following example uses these environment variables:
+
+
+```
+...
+        - name: HELM3_PATH
+          value: /opt/harness-delegate/custom-client-tools/helm3
+        - name: KUSTOMIZE_PATH
+          value: /opt/harness-delegate/custom-client-tools/kustomize
+        - name: KUBECTL_PATH
+          value: /opt/harness-delegate/custom-client-tools/kubectl
+        - name: OC_PATH
+          value: /opt/harness-delegate/custom-client-tools/oc
+...
+```
+### Step 6: Update delegate image with 'no tools' image tag
+
+In the delegate container `spec`, edit the image to use the `ubi-no-tools` tag.
+
+
+```
+...
+ spec: + containers: + - image: harness/delegate:ubi-no-tools + imagePullPolicy: Always +... +``` +### See also + +* [Common Delegate Initialization Scripts](../delegate-reference/common-delegate-profile-scripts.md) + diff --git a/docs/platform/2_Delegates/delegate-guide/install-a-kubernetes-delegate.md b/docs/platform/2_Delegates/delegate-guide/install-a-kubernetes-delegate.md new file mode 100644 index 00000000000..765771d1905 --- /dev/null +++ b/docs/platform/2_Delegates/delegate-guide/install-a-kubernetes-delegate.md @@ -0,0 +1,427 @@ +--- +title: Install a legacy Kubernetes delegate +description: Install a harness Kubernetes Delegate. +# sidebar_position: 2 +helpdocs_topic_id: f9bd10b3nj +helpdocs_category_id: m9iau0y3hv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The Harness Delegate is a service you run in your own environment, such as your local network, VPC, or cluster. + +For example, you can run the Delegate in the deployment target cluster for a CD Pipeline or the build farm cluster for a CI Pipeline. + +The Delegate connects all of your artifact, infrastructure, collaboration, verification, and other providers with the Harness Manager. + +Most importantly, the Delegate performs all Harness operations. + +There are several types of Delegates. This topic describes how to install the Kubernetes Delegate. + + +:::note +If you are migrating from Harness FirstGen to Harness NextGen, you must install new Delegates in Harness NextGen. Harness FirstGen Delegates won't work with Harness NextGen. +::: + + +### Limitations + +Currently, Harness Kubernetes Delegates don't install with the default settings in GKE Auto Pilot Mode. Please use the Manual mode when creating the cluster to make sure it meets the Delegate requirements. + +The Delegate requires access to all the Connectors and Harness Secrets needed to run a Pipeline. 
This means that the Delegate requires permissions to do the following: + +* Access all the secrets used by all the Connectors used in a Pipeline. +* Create and update secrets in Kubernetes. This is necessary to pull the images needed to run individual Steps. + +### Visual summary + +The following diagram shows how the Delegate enables Harness to integrate with all of your deployment resources: + +![](./static/install-a-kubernetes-delegate-12.png) +Here's a 10min video that walks you through adding a Harness Kubernetes Cluster Connector and Harness Kubernetes Delegate. The Delegate is added to the target cluster and then the Kubernetes Cluster Connector uses the Delegate to connect to the cluster: + +### Inline or standalone installation + +You can install a Delegate whenever you are adding a Connector to a Pipeline or you can install one outside a Pipeline in **Resources**. + +The steps involved are the same. + +### Installation location + +You can install the Kubernetes Delegate inside or outside your deployment target cluster (CD) or build farm cluster (CIE). + +* **Inside the cluster:** you can install the Kubernetes Delegate inside the target or build farm cluster. Later, when you add a Kubernetes Cluster Connector, the Connector can inherit its credentials from the Kubernetes Delegate. +* **Outside the cluster:** you can install the Kubernetes Delegate outside the target or build farm cluster. Later, when you add a Kubernetes Cluster Connector, the Connector cannot inherit its credentials from the Kubernetes Delegate. In this case, the Kubernetes Cluster Connector must use an alternate method for credentials. For example, the master URL of the target cluster and a Service Account with the required credentials. + +### Step 1: Ensure Kubernetes prerequisites + +To install a Kubernetes Delegate, you must have access to a Kubernetes cluster. You'll install the Harness Delegate as YAML or Helm Chart. 
+
+For connectivity, see [Delegate Requirements and Limitations](../delegate-reference/delegate-requirements-and-limitations.md).
+
+You'll need the following Kubernetes permissions to install the delegate:
+
+* Permission to create a namespace (for the Harness Delegate namespace).
+* Permission to create statefulSets (to create the Harness Delegate pod).
+
+### Step 2: Select the Kubernetes delegate type
+
+Inline or standalone, click **New Delegate**.
+
+Delegate selection options appear.
+
+![](./static/install-a-kubernetes-delegate-13.png)
+Click **Kubernetes**, and then click **Continue**.
+
+Enter a name and description for the Delegate that will let others know what it is used for, or where it's installed.
+
+### Step 3: Add delegate name
+
+
+:::note
+**Do not run Delegates with the same name in different clusters.** See [Troubleshooting](https://docs.harness.io/article/jzklic4y2j-troubleshooting).
+:::
+
+
+Add a name for the Delegate. The name will be added to the Delegate YAML as the `name` metadata of the StatefulSet.
+
+
+:::note
+Legacy Delegates are deployed as StatefulSet objects. By default, the StatefulSet.serviceName field is empty ("") and does not need to be specified. Delegates do not require service names.
+:::
+
+
+:::note
+The combined length of the Delegate name and the service name must not exceed 255 bytes. If the maximum length is exceeded, the Delegate might not appear in the Harness Manager UI. For more information on StatefulSet.serviceName, see [StatefulSetSpec](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/stateful-set-v1/#StatefulSetSpec) in the [Kubernetes API](https://kubernetes.io/docs/reference/kubernetes-api/).
+:::
+
+
+Add Tags to the Delegate. By default, Harness adds a Tag using the name you enter, but you can add more. Simply type them in and press Enter.
+
+These Tags are useful for selecting the Delegate when creating a Connector.
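The 255-byte limit on the combined delegate name and service name noted above can be checked before you apply the YAML. A quick sketch (the names here are placeholders; `${#var}` counts characters, which matches bytes for ASCII names):

```shell
# Check that the combined delegate name + serviceName stays within the
# 255-byte limit. Placeholder values; substitute your own names.
delegate_name="my-build-farm-delegate"
service_name=""   # legacy delegates default to an empty serviceName
combined=$(( ${#delegate_name} + ${#service_name} ))
if [ "$combined" -le 255 ]; then
  echo "ok: $combined bytes"
else
  echo "too long: $combined bytes" >&2
fi
```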
+
+### Step 4: Select delegate size
+
+In **Delegate Size**, select the size of Delegate you want to install.
+
+Your Kubernetes cluster must have the unallocated resources required to run the Harness Delegate workload:
+
+* Laptop - 1.6GB memory, 0.5 CPU
+* Small - 3.3GB memory, 1 CPU
+* Medium - 6.6GB memory, 2 CPU
+* Large - 13.2GB memory, 4 CPU
+
+**Important:** these sizing requirements are for the Delegate only. Your cluster will require more memory for Kubernetes, the operating system, and other services.
+
+#### Important resource considerations
+
+These requirements are for the Delegate only. Your cluster will have system, Kubernetes, and other resource consumers. Make sure that the cluster has enough memory, storage, and CPU for all of its resource consumers.
+
+Most importantly, when the Delegate is installed inside the target deployment or build farm cluster, the cluster must also support the resources needed by the services you are deploying or building.
+
+For example, if you use the Small option that requires 3.3GB of memory, don't use a cluster with only 4GB of memory. It won't be enough to run the Delegate and other resources.
+
+### Step 5: Download and install the script
+
+Click **Download Script**. The YAML file for the Kubernetes Delegate, and its README, will download to your computer as an archive.
+
+Open a terminal and navigate to where the Delegate file is located.
+
+Extract the YAML file's folder from the download and then navigate to the folder that you extracted:
+
+
+```
+tar -zxvf harness-delegate-kubernetes.tar.gz
+
+cd harness-delegate-kubernetes
+```
+You'll connect to your cluster using the terminal so you can simply copy the YAML file over.
+
+In the same terminal, log into your Kubernetes cluster. In most platforms, you select the cluster, click **Connect**, and copy the access command.
+
+Let's quickly confirm that the cluster you created can connect to the Harness platform.
Enter a read-only command such as `kubectl get nodes` to confirm that your kubeconfig context can reach the cluster.
+
+Next, install the Harness Delegate using the **harness-delegate.yaml** file you just downloaded. In the terminal connected to your cluster, run this command:
+
+
+```
+kubectl apply -f harness-delegate.yaml
+```
+The successful output is something like this:
+
+
+```
+% kubectl apply -f harness-delegate.yaml
+namespace/harness-delegate unchanged
+clusterrolebinding.rbac.authorization.k8s.io/harness-delegate-cluster-admin unchanged
+secret/k8s-quickstart-proxy unchanged
+statefulset.apps/k8s-quickstart-sngxpn created
+service/delegate-service unchanged
+```
+Run this command to verify that the Delegate pod was created:
+
+
+```
+kubectl get pods -n harness-delegate-ng
+```
+It'll take a moment for the Delegate to appear in Harness' **Delegates** list.
+
+You're ready to connect Harness to your artifact server and cluster. After those quick steps, you'll begin creating your deployment.
+
+### Review: Delegate role requirements
+
+The YAML provided for the Harness Delegate defaults to the `cluster-admin` role because that ensures anything can be applied. If you can't use `cluster-admin` because you are using a cluster in your company, you'll need to edit the Delegate YAML.
+
+The set of permissions should include `list`, `get`, `create`, `watch` (to fetch the pod events), and `delete` permissions for each of the entity types Harness uses.
+
+If you don't want to use `resources: ["*"]` for the Role, you can list out the resources you want to grant. Harness needs `configMap`, `secret`, `event`, `deployment`, and `pod` at a minimum for deployments, as stated above.
+
+In the Delegate installation settings, you also have the option to select cluster read-only access and namespace-specific access.
When you select these options, the YAML generated by Harness is changed to reflect the limited access: + +![](./static/install-a-kubernetes-delegate-14.png) +### Step 6: Verify + +For an overview of verification, see [Delegate Registration and Verification](delegate-registration.md). + +In the Delegate wizard, click **Verify** and Harness will verify that it is receiving heartbeats from the Delegate. + +Your Delegate is installed. + +### Option: Troubleshooting + +Harness will provide a lot of troubleshooting steps. Here are a few: + +Check the status of the Delegate on your cluster: + + +``` +kubectl describe pod -n harness-delegate-ng +``` +Check the Delegate logs: + + +``` +kubectl logs -f -n harness-delegate-ng +``` +If the pod isn't up, you might see the following error in your cluster: + + +``` +CrashLoopBackOff: Kubernetes Cluster Resources are not available. +``` +Make sure the Kubernetes Cluster Resources (CPU, Memory) are enough. + +If the Delegate didn’t reach a healthy state, try this: + + +``` +kubectl describe pod -n harness-delegate-ng +``` +### Kubernetes delegate environment variables + +The following table lists each of the environment variables in the Harness Kubernetes Delegate YAML. + + + +| | | | +| --- | --- | --- | +| **Environment variable** | **Description** | **Example** | +| `JAVA_OPTS` | JVM options for the Delegate. Use this variable to override or add JVM parameters. | +``` +- name: JAVA_OPTS value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2 -Xms64M" +``` + | +| `ACCOUNT_ID` | The Harness account Id for the account where this Delegate will attempt to register.This value is added automatically to the Delegate config file (YAML, etc) when you add the Delegate. | +``` +- name: ACCOUNT_ID value: H5W8iol5TNWc4G9h5A2MXg +``` + | +| `DELEGATE_TOKEN` | The Harness account token used to register the Delegate. 
| +``` +- name: DELEGATE_TOKEN value: d239xx88bf7xxxxxxx836ea +``` + | +| `MANAGER_HOST_AND_PORT` | The Harness SaaS manager URL. `https` indicates port 443. | +``` +- name: MANAGER_HOST_AND_PORT value: https://app.harness.io +``` + | +| `WATCHER_STORAGE_URL` | The URL for the Watcher versions. | +``` +- name: WATCHER_STORAGE_URL value: https://app.harness.io/public/prod/premium/watchers +``` + | +| `WATCHER_CHECK_LOCATION` | The Delegate version location for the Watcher to check for. | +``` +- name: WATCHER_CHECK_LOCATION value: current.version +``` + | +| `REMOTE_WATCHER_URL_CDN` | The CDN URL for Watcher builds. | +``` +- name: REMOTE_WATCHER_URL_CDN value: https://app.harness.io/public/shared/watchers/builds +``` + | +| `DELEGATE_STORAGE_URL` | The URL where published Delegate jars are stored. | +``` +- name: DELEGATE_STORAGE_URL value: https://app.harness.io +``` + | +| `DELEGATE_CHECK_LOCATION` | The storage location hosting the published Delegate versions. | +``` +- name: DELEGATE_CHECK_LOCATION value: delegateprod.txt +``` + | +| `DEPLOY_MODE` | Deployment mode: Kubernetes, Docker, etc. | +``` +- name: DEPLOY_MODE value: KUBERNETES +``` + | +| `DELEGATE_NAME` | The name of the Delegate. This is the name that will appear in Harness when the Delegate is registered.You can automate Delegate creation by omitting the name, and then have a script copying the Delegate YAML file and add a unique name to `value` for each new Delegate you want to register.See [Automate Delegate Installation](automate-delegate-installation.md). | +``` +- name: DELEGATE_NAME value: qa +``` + | +| `NEXT_GEN` | Indicates that this Delegate will register in [Harness NextGen](https://docs.harness.io/article/ra3nqcdbaf-compare-first-gen-and-next-gen).If this variable is set to `false`, the Delegate will attempt to register in Harness FirstGen. 
| +``` +- name: NEXT_GEN value: "true" +``` + | +| `DELEGATE_DESCRIPTION` | The description added to the Delegate in the Harness Manager or YAML before registering.It appears in the Delegate details page in the Harness Manager. | +``` +- name: DELEGATE_DESCRIPTION value: "" +``` + | +| `DELEGATE_TYPE` | The type of Delegate. | +``` +- name: DELEGATE_TYPE value: "KUBERNETES" +``` + | +| `DELEGATE_TAGS` | The Tags added to the Delegate in the Harness Manager or YAML before registering.Tags are generated by Harness using the Delegate name but you can also add your own Tags.Tags appear in the Delegate details page in the Harness Manager.See [Tags Reference](../../20_References/tags-reference.md) and [Select Delegates with Tags](select-delegates-with-selectors.md). | +``` +- name: DELEGATE_TAGS value: "" +``` + | +| `DELEGATE_TASK_LIMIT` | The maximum number of tasks the Delegate can perform at once.All of the operations performed by the Delegate are categorized as different types of tasks. | +``` +- name: DELEGATE_TASK_LIMIT value: "50" +``` + | +| `DELEGATE_ORG_IDENTIFIER` | The Harness Organization [Identifier](../../20_References/entity-identifier-reference.md) where the Delegate will register.Delegates at the account-level do not have a value for this variable. | +``` +- name: DELEGATE_ORG_IDENTIFIER value: "engg" +``` + | +| `DELEGATE_PROJECT_IDENTIFIER` | The Harness Project [Identifier](../../20_References/entity-identifier-reference.md) where the Delegate will register.Delegates at the account or Org-level do not have a value for this variable. | +``` +- name: DELEGATE_PROJECT_IDENTIFIER value: "myproject" +``` + | +| `PROXY_*` | All of the Delegates include proxy settings you can use to change how the Delegate connects to the Harness Manager.The `secretKeyRef` are named using the Delegate name. 
| +``` +- name: PROXY_HOST value: ""- name: PROXY_PORT value: ""- name: PROXY_SCHEME value: ""- name: NO_PROXY value: ""- name: PROXY_MANAGER value: "true"- name: PROXY_USER valueFrom: secretKeyRef: name: mydel-proxy key: PROXY_USER- name: PROXY_PASSWORD valueFrom: secretKeyRef: name: mydel-proxy key: PROXY_PASSWORD +``` + | +| `INIT_SCRIPT` | You can run scripts on the Delegate using `INIT_SCRIPT`.For example, if you wanted to install software on the Delegate pod, you can enter the script in `INIT_SCRIPT` and then apply the Delegate YAML.A multiline script must follow the YAML spec for [literal scalar style](https://yaml.org/spec/1.2-old/spec.html#id2795688).See [Run Scripts on Delegates](run-scripts-on-delegates.md). | +``` +- name: INIT_SCRIPT value: |- echo install wget apt-get install wget echo wget installed +``` + | +| `POLL_FOR_TASKS` | Enables or disables polling for Delegate tasks.By default, the Delegate uses Secure WebSocket (WSS) for tasks. If the `PROXY_*` settings are used and the proxy or some intermediary does not allow WSS, then set `POLL_FOR_TASKS` to true to enable polling. | +``` +- name: POLL_FOR_TASKS value: "false" +``` + | +| `HELM_DESIRED_VERSION` | By default, Harness Delegates are installed with and use Helm 3.You can set the Helm version in the Harness Delegate YAML file using the `HELM_DESIRED_VERSION` environment property. Include the `v` with the version. For example, `HELM_DESIRED_VERSION: v2.13.0`. | +``` +- name: HELM_DESIRED_VERSION value: "" +``` + | +| `USE_CDN` | Makes the Delegate use a CDN for new versions. | +``` +- name: USE_CDN value: "true" +``` + | +| `CDN_URL` | The CDN URL for Delegate versions. | +``` +- name: CDN_URL value: https://app.harness.io +``` + | +| `JRE_VERSION` | The Java Runtime Environment version used by the Delegate. | +``` +- name: JRE_VERSION value: 1.8.0_242 +``` + | +| `HELM3_PATH`,`HELM_PATH` | When you Install and run a new Harness Delegate, Harness includes Helm 3 support automatically. 
But in some cases, you might want to use one of the custom Helm binaries available from [Helm releases](https://github.com/helm/helm/releases). For a Helm 3 binary, enter the local path to the binary in `HELM3_PATH`. For a Helm 2 binary, enter the local path to the binary in `HELM_PATH`. If you are performing a [Native Helm deployment](https://docs.harness.io/article/lbhf2h71at-native-helm-quickstart), do not use `HELM_PATH` for the Helm 2 binary. Harness will look for the Helm 2 binary on the Delegate in its standard path, such as `/usr/local/bin/helm`. |
+```
+- name: HELM3_PATH value: ""- name: HELM_PATH value: ""
+```
+ |
+| `KUSTOMIZE_PATH` | The Harness Delegate ships with the 3.5.4 release of Kustomize. If you want to use a different release of Kustomize, add it to a location on the Delegate, update `KUSTOMIZE_PATH`, and (re)start the Delegate. |
+```
+- name: KUSTOMIZE_PATH value: ""
+```
+ |
+| `KUBECTL_PATH` | You can use `KUBECTL_PATH` to change the kubectl config path. The default is `~/.kube/config`. |
+```
+- name: KUBECTL_PATH value: ""
+```
+ |
+| `GRPC_SERVICE_ENABLED`,`GRPC_SERVICE_CONNECTOR_PORT` | By default, the Delegate requires HTTP/2 for gRPC (gRPC Remote Procedure Calls) to be enabled for connectivity between the Delegate and Harness Manager. |
+```
+- name: GRPC_SERVICE_ENABLED value: "true"- name: GRPC_SERVICE_CONNECTOR_PORT value: "8080"
+```
+ |
+| `VERSION_CHECK_DISABLED` | By default, the Delegate always checks for new versions (via the Watcher). |
+```
+- name: VERSION_CHECK_DISABLED value: "false"
+```
+ |
+| `DELEGATE_NAMESPACE` | The namespace for the Delegate is taken from the `StatefulSet` namespace. |
+```
+- name: DELEGATE_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace
+```
+ |
+
+### Notes
+
+#### Empty serviceName
+
+By default, Harness does not include a value for `serviceName` in the `StatefulSet` in the Delegate YAML:
+
+
+```
+...
+apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + harness.io/name: myDelegate + name: remove + namespace: harness-delegate-ng +spec: + replicas: 2 + podManagementPolicy: Parallel + selector: + matchLabels: + harness.io/name: myDelegate + serviceName: "" + template: + metadata: + labels: + harness.io/name: myDelegate +... +``` +You do not need to change `serviceName`, but you can if you have a static code analysis tool that flags it or some other use case. + +Simply add the Delegate name as the value using the syntax `harness.io/name: [Delegate name]`. + +For example, if your Delegate name is `myDelegate`, you would add `harness.io/name: myDelegate`: + + +``` +... + serviceName: + harness.io/name: myDelegate +... +``` diff --git a/docs/platform/2_Delegates/delegate-guide/install-delegates-with-third-party-tools.md b/docs/platform/2_Delegates/delegate-guide/install-delegates-with-third-party-tools.md new file mode 100644 index 00000000000..81be432f876 --- /dev/null +++ b/docs/platform/2_Delegates/delegate-guide/install-delegates-with-third-party-tools.md @@ -0,0 +1,247 @@ +--- +title: Install delegates with third-party tools +description: Harness Manager installs and configures delegates with the binaries that most CI/CD pipelines require. In some cases, however, you might want to add tools to the delegate image or create your own del… +# sidebar_position: 2 +helpdocs_topic_id: x0i1ydkv34 +helpdocs_category_id: m9iau0y3hv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness Manager installs and configures delegates with the binaries that most CI/CD pipelines require. In some cases, however, you might want to add tools to the delegate image or create your own delegate and customize the tool set for your builds and deployments. This document describes methods you can use to install third-party tools on the delegate image at runtime.  + +For basic information about Harness Delegate, see [Delegate Overview](../delegates-overview.md).  
+
+### Considerations
+
+Consider the following in your choice of approach:
+
+How the delegate detects the client tool binary:
+
+* `$PATH` environment variable
+* Repository index file
+
+How client tool binaries are moved to the delegate container:
+
+* `emptyDir` volume mount with initialization container
+* Permanent volume mount
+* Docker tools image with shared volume
+* Custom delegate image
+
+### $PATH environment variable
+
+The easiest way to install a single version of a binary on an image is to combine the use of the Linux `$PATH` and `INIT_SCRIPT` environment variables. This approach supports most use cases. The use of the `$PATH` environment variable also solves the problem of how the delegate detects available tools.
+
+#### Process
+
+The process is simple and includes two basic steps:
+
+* Define the `$PATH` environment variable to specify the locations and filenames of custom tools.
+* Define the `INIT_SCRIPT` environment variable to export the `$PATH` location of the binary you want to install.
+
+#### Cost
+
+This approach limits you to the use of one version of each software package.
+
+#### Benefit
+
+This approach is easily implemented in YAML.
+
+### emptyDir volume mount with INIT container
+
+The easiest way to install multiple versions of a binary on an image is to transfer custom tools from an initialization (`INIT`) container. This strategy is ideal for large deployments that implement complex use cases.
+
+#### Process
+
+To implement this solution, modify the harness-delegate.yaml file to perform the following operations:
+
+* Mount an `emptyDir` volume to the delegate container.
+* Download tools to the target path from an `INIT` container.
+
+#### Cost
+
+This approach delays delegate startup. The delegate cannot run until the `INIT` container completes the download process.
+
+#### Benefit
+
+You can implement this approach without additional resources like permanent storage.
+
+### Modify the harness-delegate.yaml file
+
+In this approach, you modify the harness-delegate.yaml file with declarative definitions of the following Kubernetes objects:
+
+* `securityContext`
+* `volumeMounts`
+* `initContainers`
+* `volumes`
+
+The following sections describe each modification.
+
+#### Update the security context
+
+Edit the delegate YAML to ensure that the files the `INIT` container downloads have the correct permissions. The `INIT` container must:
+
+* Run with root user privileges.
+* Share the security context of the running delegate user.
+
+The following example shows the specification of the `securityContext.fsGroup` and `securityContext.runAsUser` values:
+
+
+```
+securityContext:
+  fsGroup: 1001
+  runAsUser: 1001
+```
+For more information about the fields of a pod security context, see the [Kubernetes API Reference](https://jamesdefabia.github.io/docs/api-reference/v1/definitions/#_v1_podsecuritycontext).
+
+#### Add the emptyDir volume mount
+
+Declare a volume mount and specify the mount path and name. The following example specifies the mounting of the `emptyDir` volume.
+
+
+```
+volumeMounts:
+- mountPath: /opt/harness-delegate/client-tools
+  name: client-tools
+```
+
+Mounting the volume at the default client tools location eliminates the need for further configuration.
+
+You can alternatively mount the volume to any directory and configure the delegate to discover the tools.
+
+For more information on volume mounts, see [Volumes](https://kubernetes.io/docs/concepts/storage/volumes/) in the [Kubernetes](https://kubernetes.io/docs/home/) documentation.
+
+#### Add initialization containers
+
+Declare one or more `INIT` containers. Each `INIT` container must mount the same `emptyDir` volume.
+ + +``` + initContainers: + - name: install-kubectl + image: curlimages/curl + command: ['sh', '-c', "mkdir -m 777 -p /client-tools/kubectl/v1.13.2 \ + && curl -#s -L -o /client-tools/kubectl/v1.13.2/kubectl https://app.harness.io/public/shared/tools/kubectl/release/v1.13.2/bin/linux/amd64/kubectl \ + && chmod +x /client-tools/kubectl/v1.13.2/kubectl"] + args: + - chown 1001 /client-tools; + volumeMounts: + - mountPath: /client-tools + name: client-tools + - name: install-helm3 + image: curlimages/curl + command: ['sh', '-c', "mkdir -m 777 -p /client-tools/helm/v3.8.0 \ + && curl -#s -L -o /client-tools/helm/v3.8.0/helm https://app.harness.io/public/shared/tools/helm/release/v3.8.0/bin/linux/amd64/helm \ + && chmod +x /client-tools/helm/v3.8.0/helm"] + args: + - chown 1001 /client-tools; + volumeMounts: + - mountPath: /client-tools + name: client-tools +``` +#### Define the emptyDir shared volume + +Define the `emptyDir` volume that the `INIT` containers share. The `emptyDir` type volume is ephemeral and is destroyed with its pod. + + +``` + volumes: + - name: client-tools + emptyDir: {} +``` +For more information about the Kubernetes emptyDir volume, see [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir).  + +#### Example + +The following example includes a segment of the delegate YAML file that contains the required changes. For the complete file, see the sample [harness-delegate.yaml](../delegate-reference/example-harness-delegate-yaml.md) in the *Delegate Reference*. + + +``` + ... + # Update the security context to match delegate running user. + # This provides downloaded files with the correct permissions. + # Running the INIT container with root permissions should also be okay. + securityContext: + fsGroup: 1001 + runAsUser: 1001 + ... + # Mount a shared emptyDir volume from below. Here it's mounted to the default client tools location to avoid additional configuration. 
+ # Or you can mount it to any directory and configure the delegate to discover the tools. + volumeMounts: + - mountPath: /opt/harness-delegate/client-tools + name: client-tools + ... + # Add one or more INIT containers with the same emptyDir volume mounted + initContainers: + - name: install-kubectl + image: curlimages/curl + command: ['sh', '-c', "mkdir -m 777 -p /client-tools/kubectl/v1.13.2 \ + && curl -#s -L -o /client-tools/kubectl/v1.13.2/kubectl https://app.harness.io/public/shared/tools/kubectl/release/v1.13.2/bin/linux/amd64/kubectl \ + && chmod +x /client-tools/kubectl/v1.13.2/kubectl"] + args: + - chown 1001 /client-tools; + volumeMounts: + - mountPath: /client-tools + name: client-tools + - name: install-helm3 + image: curlimages/curl + command: ['sh', '-c', "mkdir -m 777 -p /client-tools/helm/v3.8.0 \ + && curl -#s -L -o /client-tools/helm/v3.8.0/helm https://app.harness.io/public/shared/tools/helm/release/v3.8.0/bin/linux/amd64/helm \ + && chmod +x /client-tools/helm/v3.8.0/helm"] + args: + - chown 1001 /client-tools; + volumeMounts: + - mountPath: /client-tools + name: client-tools + ... + # Define the emptyDir shared volume. + volumes: + - name: client-tools + emptyDir: {} +``` +  + +### Mount a permanent volume + +If you prefer to store your client tools apart from the delegate, try mounting a permanent volume to the delegate container.  + +#### Process + +* Create a permanent volume +* Mount the volume to the delegate + +#### Cost + +This approach is complex because it requires the allocation of a permanent store. + +#### Benefit + +You can download and install client tools without adding to delegate start time. Moreover,  you can replace or update tools during delegate runtime. You might also be able to update tools without restarting the delegate. A permanent volume mount is also a “one-and-done” approach; you can mount the installed volume to multiple delegates.  + +#### Example + +There are many ways to implement this approach. 
The following YAML declares a permanent volume mount for NFS storage. + + +``` +volumeMounts: +- mountPath: "/opt/harness-delegate/client-tools" + name: nfs +volumes: +- name: nfs + persistentVolumeClaim: + claimName: nfs-ng +``` +      + +For sample YAML files for NFS servers, volumes, and for a full harness-delegate.yaml file that includes a mounted NFS volume, see the *Delegate Reference*. + +### Docker tools image with shared volume + +This approach is less flexible than mounting a permanent volume but it is also easier to implement. This approach works best in cases in which you create an image using a stable set of client tools. You can then use a shared volume to give the delegate container access. + +### Custom delegate image + +This approach is preferable in cases where you’re already creating your own delegate images. You build the tools you need into your custom image. The drawback to this approach is that it does not support delegate auto-update. + diff --git a/docs/platform/2_Delegates/delegate-guide/non-root-delegate-installation.md b/docs/platform/2_Delegates/delegate-guide/non-root-delegate-installation.md new file mode 100644 index 00000000000..481d6d9c8bd --- /dev/null +++ b/docs/platform/2_Delegates/delegate-guide/non-root-delegate-installation.md @@ -0,0 +1,111 @@ +--- +title: Non-root delegate installation +description: By default, Harness Delegates use root access. You can install a different Docker image tag of the Delegate if you want to install and run the Delegate as non-root. Harness Delegate images are public… +# sidebar_position: 2 +helpdocs_topic_id: h2kydm6qme +helpdocs_category_id: m9iau0y3hv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +By default, Harness Delegates use root access. You can install a different Docker image tag of the Delegate if you want to install and run the Delegate as non-root. 
+
+Harness Delegate images are publicly hosted on [Docker Hub](https://hub.docker.com/r/harness/delegate/tags). This topic describes how to install and run a Delegate as non-root.
+
+### Before you begin
+
+* [Delegate Requirements and Limitations](../delegate-reference/delegate-requirements-and-limitations.md)
+* [Delegate Installation Overview](https://docs.harness.io/article/re8kk0ex4k-delegate-installation-overview)
+
+### Limitations
+
+* The Harness Delegate does NOT require root account access. Kubernetes and Docker Delegates run as root by default.
+* If you do not run the Delegate as root, be aware that you cannot install any software using a [Delegate Initialization Script](../delegate-reference/common-delegate-profile-scripts.md).
+
+### Step 1: Download the Delegate Config File
+
+Download the Delegate config file as part of its installation.
+
+For examples, see:
+
+* [Install a Docker Delegate](../delegate-install-docker/install-a-docker-delegate.md)
+* [Install a Kubernetes Delegate](install-a-kubernetes-delegate.md)
+
+### Option: Pick a Non-Root Type
+
+Harness Delegate images are publicly hosted on [Docker Hub](https://hub.docker.com/r/harness/delegate/tags) and Harness has non-root options for different platforms:
+
+![](./static/non-root-delegate-installation-27.png)
+Unless you are using OpenShift or a Universal Base Image (UBI), you will want to use `delegate:non-root`.
+
+### Step 2: Update the Delegate Image
+
+In the Delegate config file, update the image tag to use the non-root image: `harness/delegate:non-root`.
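If you prefer to make the edit from the command line, a one-line `sed` substitution works. This is a sketch only: the file name `docker-compose.yaml` and the current `harness/delegate:latest` tag are assumptions, so adjust both to match your downloaded config.

```shell
# Demo on a scratch file; in practice, run the sed command against your
# downloaded config file (the file name and current tag are assumptions).
printf 'image: harness/delegate:latest\n' > /tmp/docker-compose.yaml
# Swap the default image tag for the non-root tag in place.
sed -i 's|harness/delegate:latest|harness/delegate:non-root|' /tmp/docker-compose.yaml
cat /tmp/docker-compose.yaml
# → image: harness/delegate:non-root
```

The same substitution applies to the Kubernetes config file if it pins the `latest` tag.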
+ +For example, here's the Docker Delegate config file updated: + + +``` +version: "3.7" +services: + harness-ng-delegate: + restart: unless-stopped + deploy: + resources: + limits: + cpus: "0.5" + memory: 2048M + image: harness/delegate:non-root + environment: + - ACCOUNT_ID=xxx + - DELEGATE_TOKEN=xxx + - MANAGER_HOST_AND_PORT=https://app.harness.io + - WATCHER_STORAGE_URL=https://app.harness.io/public/prod/premium/watchers +... +``` +Here's the Kubernetes Delegate config file updated: + + +``` +... +--- + +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + harness.io/name: foo + name: foo + namespace: harness-delegate-ng +spec: + replicas: 1 + podManagementPolicy: Parallel + selector: + matchLabels: + harness.io/name: foo + serviceName: "" + template: + metadata: + labels: + harness.io/name: foo + spec: + containers: + - image: harness/delegate:non-root + imagePullPolicy: Always + name: harness-delegate-instance + ports: + - containerPort: 8080 + +... +``` +### Step 3: Install the Delegate + +Install the Delegate as described in topics such as the following: + +* [Install a Docker Delegate](../delegate-install-docker/install-a-docker-delegate.md) +* [Install a Kubernetes Delegate](install-a-kubernetes-delegate.md) + +### See also + +* [Automate Delegate Installation](automate-delegate-installation.md) + diff --git a/docs/platform/2_Delegates/delegate-guide/run-scripts-on-delegates.md b/docs/platform/2_Delegates/delegate-guide/run-scripts-on-delegates.md new file mode 100644 index 00000000000..f2cafa35f2d --- /dev/null +++ b/docs/platform/2_Delegates/delegate-guide/run-scripts-on-delegates.md @@ -0,0 +1,148 @@ +--- +title: Install software on the delegate with initialization scripts +description: You can use delegate setup files to run startup scripts on delegate host, container, or pod during the installation process. 
You can also add script after the Delegate is installed, and then simply r…
+# sidebar_position: 2
+helpdocs_topic_id: yte6x6cyhn
+helpdocs_category_id: m9iau0y3hv
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+You can use delegate setup files to run startup scripts on the delegate host, container, or pod during the installation process.
+
+You can also add a script after the Delegate is installed, and then simply restart the Delegate.
+
+This topic describes how to set up the Delegate config files for running scripts.
+
+### Limitations
+
+* Editing or deleting scripts does not automatically remove binaries that were installed earlier. You must remove those binaries manually, or restart the pod or VM.
+* You cannot use Harness secrets in scripts. Connectivity to Harness is established only after the script is run and the delegate is registered with Harness.
+
+### Application installation by delegate type
+
+* **Legacy Delegate**. Use `INIT_SCRIPT` to install applications. The Delegate Profiles feature is deprecated.
+* **Harness Delegate**. You can use `INIT_SCRIPT` or add initialization to the delegate image.
+
+### Review: What can I run in a script?
+
+Harness supports any command that is supported on the host, container, or pod that runs the delegate. Linux shell commands are most common. If `kubectl`, Helm, or Docker is running on the host/container/pod where you install the Delegate, you have access to those commands. The Kubernetes and Docker Delegates include Helm.
+
+The Harness Delegate base image is built on Ubuntu 18.04 or later. This means the delegate script supports default Ubuntu packages.
+
+Harness Delegate installation packages include `TAR` and `cURL`. You can use `cURL` and `TAR` in your delegate scripts without installing these tools.
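To illustrate, the bundled `tar` binary can package and unpack a tool with no package installation at all; the `mytool` name and the `/tmp` paths below are illustrative only:

```shell
# Create a sample "tool", archive it, and unpack it elsewhere.
# No installs needed: tar and curl ship with the delegate package.
mkdir -p /tmp/tool-src/bin /tmp/tool-dest
printf '#!/bin/sh\necho hello from mytool\n' > /tmp/tool-src/bin/mytool
chmod +x /tmp/tool-src/bin/mytool
tar -czf /tmp/mytool.tgz -C /tmp/tool-src bin
tar -xzf /tmp/mytool.tgz -C /tmp/tool-dest
/tmp/tool-dest/bin/mytool
# → hello from mytool
```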
For example, the following script uses the bundled `cURL` and installs its other dependencies with `apt-get`:
+
+
+```
+apt-get install -y unzip python
+curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
+unzip awscli-bundle.zip
+./awscli-bundle/install -b ~/bin/aws
+```
+#### When do scripts run?
+
+Delegate scripts are applied under the following conditions:
+
+* **New Delegate** - If you add a Delegate script when you create the Delegate, the commands are executed before the Delegate is started.
+* **Running Delegate** - If you apply a Delegate script to a running Delegate, either by applying it as a new script or by switching the Delegate’s current script, the script commands are executed when the Delegate is restarted, but before the Delegate comes up.
+
+### Step 1: Download the delegate config file
+
+When you install a Delegate, you are prompted to download its config file. For Kubernetes and Docker Delegates, this is a YAML file.
+
+![](./static/run-scripts-on-delegates-28.png)
+Download the file and open it in a text editor.
+
+### Step 2: Add a script to the delegate INIT\_SCRIPT environment variable
+
+In the Delegate config file, locate the `INIT_SCRIPT` environment variable.
+
+For example, here it is in the Kubernetes Delegate harness-delegate.yaml file:
+
+
+```
+...
+apiVersion: apps/v1
+kind: StatefulSet
+...
+spec:
+...
+  spec:
+  ...
+    env:
+    ...
+    - name: INIT_SCRIPT
+      value: |-
+        echo install wget
+        apt-get install wget
+        echo wget installed
+...
+```
+In `value`, enter your script. For a list of common scripts, see [Common Delegate Initialization Scripts](../delegate-reference/common-delegate-profile-scripts.md).
+
+A multiline script must follow the YAML spec for [literal scalar style](https://yaml.org/spec/1.2-old/spec.html#id2795688). The script should not be in quotes. For the Docker Delegate, Harness uses a Docker Compose file, so you add your script like this:
+
+
+```
+...
+
+  - |
+    INIT_SCRIPT=
+    echo Init Script Example
+    echo Done!!
+...
+```
+A Docker Compose file doesn't use the exact same YAML formatting as Kubernetes manifests and so the script formatting is slightly different.
+
+### Step 3: Install the delegate
+
+Follow the remaining Delegate installation steps.
+
+See:
+
+* [Install a Kubernetes Delegate](install-a-kubernetes-delegate.md)
+* [Install a Docker Delegate](../delegate-install-docker/install-a-docker-delegate.md)
+
+### Step 4: Verify the script
+
+Check the Delegate pod/host/container to see if the script ran correctly.
+
+For example, here is a simple hello world script in a Docker Delegate:
+
+
+```
+...
+  - |
+    INIT_SCRIPT=
+    echo hello world!
+...
+```
+The Docker Delegate file is a Docker Compose file, so its YAML formatting differs from a Kubernetes manifest. Once the Delegate is installed, run `docker ps` to get the container ID, and then run `docker logs [container ID]`.
+
+In the logs, you will see that your script has run before the Delegate starts.
+
+The script is run between `Starting initialization script for Delegate` and `Completed executing initialization script`:
+
+
+```
+% docker logs 9d405639948f
+Watcher not running
+Delegate not running
+
+Starting initialization script for Delegate
+hello world!
+Completed executing initialization script
+Checking Watcher latest version...
+The current version 1.0.72500 is not the same as the expected remote version 1.0.72702
+Downloading Watcher 1.0.72702 ...
+######################################################################## 100.0%
+Checking Delegate latest version...
+Downloading Delegate ...
+#=#=# +Watcher started + +``` +### See also + +* [Automate Delegate Installation](automate-delegate-installation.md) + diff --git a/docs/platform/2_Delegates/delegate-guide/secure-delegates-with-tokens.md b/docs/platform/2_Delegates/delegate-guide/secure-delegates-with-tokens.md new file mode 100644 index 00000000000..8db8ca022ac --- /dev/null +++ b/docs/platform/2_Delegates/delegate-guide/secure-delegates-with-tokens.md @@ -0,0 +1,125 @@ +--- +title: Secure delegates with tokens +description: Secure Delegate to Harness communication by replacing the default Delegate token with new tokens. +# sidebar_position: 2 +helpdocs_topic_id: omydtsrycn +helpdocs_category_id: m9iau0y3hv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Delegate tokens are used by Harness to encrypt communication between Harness Delegates and the Harness Manager. By default, when a new Harness account is created, all Harness Delegates in that account include the same token. + +You can further secure Delegate to Harness communication by replacing the default Delegate token with new tokens. You can add and revoke Delegate tokens per your governance policies and replace revoked tokens with custom tokens when needed. + +### Step 1: Generate a new token + +You can generate a new token when you create a Delegate or as a separate process. + +#### Generate a token when creating a delegate + +When you create a new Delegate, you can generate a new token. + +In **Delegate Setup**, in **Delegate Tokens**, click **Add**, and then name and apply the new token: + +![](./static/secure-delegates-with-tokens-02.png) + +The new token is created and its value is copied to your system clipboard. The new token also appears in the list using the name you gave it. + +Save the new token value. You cannot retrieve the token value after this. + +Now you can update the Delegate(s) with the new token. + +In **Delegate Tokens**, select the new token. 
+
+#### Generate a token without creating a delegate
+
+In Harness, click **Project Setup > Delegates** in a Project or **Account Settings > Account Resources > Delegates** for the entire account.
+
+Click **Tokens**. Here you can see, create, and revoke all Delegate tokens.
+
+Click **New Token**.
+
+Here's an Account Settings example:
+
+![](./static/secure-delegates-with-tokens-03.png)
+
+Enter a name for the new token, and then click **Apply**.
+
+You can copy the token and save it somewhere safe, if needed.
+
+![](./static/secure-delegates-with-tokens-04.png)
+
+The new token is created and its value is copied to your system clipboard. The new token also appears in the list using the name you gave it.
+
+Save the new token value. You cannot retrieve the token value after this.
+
+When you install a new Delegate, you can select the token to use:
+
+![](./static/secure-delegates-with-tokens-05.png)
+### Option: Update and restart existing delegate
+
+You can update an existing Delegate with a new token value and then restart the Delegate.
+
+#### Kubernetes delegate
+
+The Delegate is set up using the **harness-delegate.yaml** you downloaded originally.
+
+Edit the file with the new token, and then run `kubectl apply -f harness-delegate.yaml` to restart the Delegate pods.
+
+Paste the token in the Delegate `ACCOUNT_SECRET` setting in the `StatefulSet` spec:
+
+
+```
+...
+---
+
+apiVersion: apps/v1
+kind: StatefulSet
+...
+  env:
+...
+  - name: ACCOUNT_SECRET
+    value: [enter new token here]
+...
+```
+Run `kubectl apply -f harness-delegate.yaml`.
+
+The Delegate pods restart automatically and pick up the updated settings.
+
+#### Docker delegate
+
+You will destroy and recreate the container using the **docker-compose.yml** you downloaded originally.
+
+Paste the token in the Delegate settings:
+
+
+```
+version: "3.7"
+services:
+  harness-ng-delegate:
+    restart: unless-stopped
+    deploy:
+      resources:
+        limits:
+          cpus: "0.5"
+          memory: 2048M
+    image: harness/delegate:latest
+    environment:
+      - ACCOUNT_ID=12345678910
+      - ACCOUNT_SECRET=[enter new token here]
+      - MANAGER_HOST_AND_PORT=https://app.harness.io
+      - WATCHER_STORAGE_URL=https://app.harness.io/public/pro
+...
+```
+Create a new container: `docker-compose -f docker-compose.yml up -d`.
+
+You can verify that the environment variable has the new token using `docker exec [container ID] env`.
+
+### Option: Revoke tokens
+
+On the **Tokens** page, click **Revoke** to revoke any token.
+
+![](./static/secure-delegates-with-tokens-06.png)
+Click **Revoke**. The token is revoked. The Harness Manager will not accept connections from any Delegates using this revoked token.
+
diff --git a/docs/platform/2_Delegates/delegate-guide/select-delegates-with-selectors.md b/docs/platform/2_Delegates/delegate-guide/select-delegates-with-selectors.md
new file mode 100644
index 00000000000..95ffd902c34
--- /dev/null
+++ b/docs/platform/2_Delegates/delegate-guide/select-delegates-with-selectors.md
@@ -0,0 +1,98 @@
+---
+title: Select delegates with delegate selectors and tags
+description: Use Delegate Tags to select specific Delegates in Connectors, steps, and more.
+# sidebar_position: 2
+helpdocs_topic_id: nnuf8yv13o
+helpdocs_category_id: m9iau0y3hv
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness runs tasks by using Harness Delegates to connect your environment to your resources. Harness selects the best delegate based on previous use or round-robin selection. See [How Does Harness Manager Pick Delegates?](../delegates-overview.md#how-does-harness-manager-pick-delegates)
+
+In some cases, you might want Harness to select specific delegates.
In these cases, you can use the **Delegate Selector** settings in Pipelines, Connectors, and so on, with corresponding delegate tags.
+
+### Review: Delegate tags
+
+A delegate tag is added to your delegate automatically when you set it up in Harness. The tag is added using the name you give your Delegate.
+
+You can also add more tags in the **Tags** field during the setup process:
+
+![](./static/select-delegates-with-selectors-17.png)
+
+For detailed information on how delegates are selected during execution, see [Delegates Overview](../delegates-overview.md).
+
+You can select a delegate based on its tags in the **Delegate Selector** settings of Harness entities like pipelines and connectors.
+
+### Review: Delegate selector priority
+
+You can use delegate selectors at multiple places, such as the pipeline, stage, and step levels.
+
+It's important to know which delegate selectors are given priority so that you can be sure the correct delegate is used.
+
+The delegate selector priority is:
+
+1. Step
+2. Step Group
+3. Stage
+4. Pipeline
+5. Connector
+
+![](./static/select-delegates-with-selectors-18.png)
+The step level has the highest priority. Any delegate selected in a step's **Delegate Selector** setting overrides any Delegates selected in 2-5 above.
+
+A connector can be used in multiple places in a pipeline, such as a stage infrastructure's **Cloud Provider** setting or even in certain step settings.
+
+### Option: Step and step group delegate selector
+
+Delegates can be selected for steps and [step groups](https://docs.harness.io/article/ihnuhrtxe3-run-steps-in-parallel-using-a-step-group) in their **Advanced** settings.
+
+Here is a step example:
+
+![](./static/select-delegates-with-selectors-19.png)
+Here is a step group example:
+
+![](./static/select-delegates-with-selectors-20.png)
+
+### Option: Select a delegate for a connector using tags
+
+When you add a connector, you are given the option of connecting to your third-party account using any available delegate or specific delegates.
+
+![](./static/select-delegates-with-selectors-21.png)
+You select specific delegates using their tags.
+
+You only need to select one of a delegate's tags to select it. All delegates with the tag are selected.
+
+Here, the tag is **test1**, and you can see multiple delegates match it:
+
+![](./static/select-delegates-with-selectors-22.png)
+### Option: Pipeline delegate selector
+
+Delegates can be selected for an entire pipeline in the pipeline **Advanced Options** settings.
+
+![](./static/select-delegates-with-selectors-23.png)
+### Option: Stage delegate selector
+
+Delegates can be selected for an entire stage in the stage **Advanced** settings.
+
+![](./static/select-delegates-with-selectors-24.png)
+### Option: Infrastructure connector
+
+Delegates can be selected for the connector used in a stage's Infrastructure settings, such as a CD stage's **Cluster Details** > **Connector** setting.
+
+![](./static/select-delegates-with-selectors-25.png)
+### Option: Select a delegate for a step using tags
+
+You can select one or more delegates for each pipeline step.
+
+In each step, in **Advanced**, there is the **Delegate Selector** option:
+
+![](./static/select-delegates-with-selectors-26.png)
+You only need to select one of a delegate's tags to select it. All delegates with the tag are selected.
+
+### Option: Modify tags using Harness API
+
+See [Delegate Group Tags Resource](https://harness.io/docs/api/tag/Delegate-Group-Tags-Resource/).
+ +### See also + +* [Delegates Overview](../delegates-overview.md) + diff --git a/docs/platform/2_Delegates/delegate-guide/static/build-custom-delegate-images-with-third-party-tools-07.png b/docs/platform/2_Delegates/delegate-guide/static/build-custom-delegate-images-with-third-party-tools-07.png new file mode 100644 index 00000000000..ca0eee3f393 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/build-custom-delegate-images-with-third-party-tools-07.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/build-custom-delegate-images-with-third-party-tools-08.png b/docs/platform/2_Delegates/delegate-guide/static/build-custom-delegate-images-with-third-party-tools-08.png new file mode 100644 index 00000000000..2148d4411b0 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/build-custom-delegate-images-with-third-party-tools-08.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/build-custom-delegate-images-with-third-party-tools-09.png b/docs/platform/2_Delegates/delegate-guide/static/build-custom-delegate-images-with-third-party-tools-09.png new file mode 100644 index 00000000000..70f55e8f4e6 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/build-custom-delegate-images-with-third-party-tools-09.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/delegate-registration-01.png b/docs/platform/2_Delegates/delegate-guide/static/delegate-registration-01.png new file mode 100644 index 00000000000..ac504839fb4 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/delegate-registration-01.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/delete-a-delegate-15.png b/docs/platform/2_Delegates/delegate-guide/static/delete-a-delegate-15.png new file mode 100644 index 00000000000..ea51330a0ee Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/delete-a-delegate-15.png differ 
diff --git a/docs/platform/2_Delegates/delegate-guide/static/delete-a-delegate-16.png b/docs/platform/2_Delegates/delegate-guide/static/delete-a-delegate-16.png new file mode 100644 index 00000000000..ea51330a0ee Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/delete-a-delegate-16.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/enable-root-user-privileges-to-add-custom-binaries-10.png b/docs/platform/2_Delegates/delegate-guide/static/enable-root-user-privileges-to-add-custom-binaries-10.png new file mode 100644 index 00000000000..7f9cb5b75ca Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/enable-root-user-privileges-to-add-custom-binaries-10.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/enable-root-user-privileges-to-add-custom-binaries-11.png b/docs/platform/2_Delegates/delegate-guide/static/enable-root-user-privileges-to-add-custom-binaries-11.png new file mode 100644 index 00000000000..e2c50d3ff03 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/enable-root-user-privileges-to-add-custom-binaries-11.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/install-a-kubernetes-delegate-12.png b/docs/platform/2_Delegates/delegate-guide/static/install-a-kubernetes-delegate-12.png new file mode 100644 index 00000000000..ac2d948fbe3 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/install-a-kubernetes-delegate-12.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/install-a-kubernetes-delegate-13.png b/docs/platform/2_Delegates/delegate-guide/static/install-a-kubernetes-delegate-13.png new file mode 100644 index 00000000000..a3beb6cbcbd Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/install-a-kubernetes-delegate-13.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/install-a-kubernetes-delegate-14.png 
b/docs/platform/2_Delegates/delegate-guide/static/install-a-kubernetes-delegate-14.png new file mode 100644 index 00000000000..d08a40ef1ac Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/install-a-kubernetes-delegate-14.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/non-root-delegate-installation-27.png b/docs/platform/2_Delegates/delegate-guide/static/non-root-delegate-installation-27.png new file mode 100644 index 00000000000..8bbba59d0d4 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/non-root-delegate-installation-27.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/run-scripts-on-delegates-28.png b/docs/platform/2_Delegates/delegate-guide/static/run-scripts-on-delegates-28.png new file mode 100644 index 00000000000..52ac8e8e4fd Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/run-scripts-on-delegates-28.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/secure-delegates-with-tokens-02.png b/docs/platform/2_Delegates/delegate-guide/static/secure-delegates-with-tokens-02.png new file mode 100644 index 00000000000..e2babd75851 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/secure-delegates-with-tokens-02.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/secure-delegates-with-tokens-03.png b/docs/platform/2_Delegates/delegate-guide/static/secure-delegates-with-tokens-03.png new file mode 100644 index 00000000000..d859e7ce173 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/secure-delegates-with-tokens-03.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/secure-delegates-with-tokens-04.png b/docs/platform/2_Delegates/delegate-guide/static/secure-delegates-with-tokens-04.png new file mode 100644 index 00000000000..1d3acac90ab Binary files /dev/null and 
b/docs/platform/2_Delegates/delegate-guide/static/secure-delegates-with-tokens-04.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/secure-delegates-with-tokens-05.png b/docs/platform/2_Delegates/delegate-guide/static/secure-delegates-with-tokens-05.png new file mode 100644 index 00000000000..8d99482c4ad Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/secure-delegates-with-tokens-05.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/secure-delegates-with-tokens-06.png b/docs/platform/2_Delegates/delegate-guide/static/secure-delegates-with-tokens-06.png new file mode 100644 index 00000000000..bddd4aa38e8 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/secure-delegates-with-tokens-06.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-17.png b/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-17.png new file mode 100644 index 00000000000..ca18e987bb7 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-17.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-18.png b/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-18.png new file mode 100644 index 00000000000..88a9343c5cd Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-18.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-19.png b/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-19.png new file mode 100644 index 00000000000..cb716ba0396 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-19.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-20.png 
b/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-20.png new file mode 100644 index 00000000000..02439376feb Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-20.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-21.png b/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-21.png new file mode 100644 index 00000000000..dc318d93d2c Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-21.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-22.png b/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-22.png new file mode 100644 index 00000000000..221625bee0e Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-22.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-23.png b/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-23.png new file mode 100644 index 00000000000..ae40d893763 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-23.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-24.png b/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-24.png new file mode 100644 index 00000000000..334563dc786 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-24.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-25.png b/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-25.png new file mode 100644 index 00000000000..f7d038ce90d Binary files /dev/null and 
b/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-25.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-26.png b/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-26.png new file mode 100644 index 00000000000..62188171722 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/select-delegates-with-selectors-26.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/static/trust-store-override-for-delegates-00.png b/docs/platform/2_Delegates/delegate-guide/static/trust-store-override-for-delegates-00.png new file mode 100644 index 00000000000..bddd48af084 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-guide/static/trust-store-override-for-delegates-00.png differ diff --git a/docs/platform/2_Delegates/delegate-guide/support-for-delegate-docker-images.md b/docs/platform/2_Delegates/delegate-guide/support-for-delegate-docker-images.md new file mode 100644 index 00000000000..64da3d1bb41 --- /dev/null +++ b/docs/platform/2_Delegates/delegate-guide/support-for-delegate-docker-images.md @@ -0,0 +1,57 @@ +--- +title: Support for Docker delegate Images +description: Harness Delegate is packaged and distributed in different types of images, and run in different types of containers. This document describes the support for Docker-based images. +# sidebar_position: 2 +helpdocs_topic_id: 6nwxxv14gr +helpdocs_category_id: m9iau0y3hv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness Delegate is packaged and distributed in different types of images, and run in different types of containers. These include Kubernetes, ECS, and Docker containers. Harness offers multi-architecture support; images are built for amd64 and arm64 architectures. Harness also supports a delegate typed as legacy. + +This topic lists the image tags that are associated with different images by delegate type. 
This topic further lists the client libraries that are included in amd64 and arm64 images by tag. You can use the image tags that are provided in the following tables to select different delegates. + +### Legacy Delegate + +Image tags are included in the images distributed from [Docker Hub](https://hub.docker.com/r/harness/delegate/tags). + +Images that are deprecated are not updated with new features or with fixes. + + + +| | | | +| --- | --- | --- | +| **Delegate image by tag** | **Base image** | **Description** | +| `latest` | `ubuntu:20.04` | Includes all client libraries. | +| `ubi` | `redhat/ubi8-minimal:8.4` | Includes all client libraries. | +| `ubi.minimal` | `redhat/ubi8-minimal:8.4` | n/a | +| `ubi-no-tools` | n/a | Deprecated. | +| `non-root` | n/a | Deprecated. | +| `non-root-openshift` | n/a | Deprecated. | + +### Delegate + +Image tags for Harness Delegate are included in the images distributed from [Docker Hub](https://hub.docker.com/r/harness/delegate-immutable/tags). + +In the following image tags, the *xxxxx* placeholder is the delegate version, for example, *year.month.delegate\_version*. + + + +| | | | +| --- | --- | --- | +| **Delegate image by tag** | **Base image** | **Description** | +| *yy.mm.xxxxx* | `redhat/ubi8-minimal:8.4` | This image includes all client libraries. | +| *yy.mm.xxxxx*.minimal | `redhat/ubi8-minimal:8.4` | n/a | + +### Client Libraries + +The following table shows the client libraries that are included in the images of amd64 and arm64 by tag. 
+ + + +| | | | +| --- | --- | --- | +| **Image tag** | **amd64 client library** | **arm64 client library** | +| `latest`, `ubi`, `yy.mm.xxxxx` | `kubectl: v1.13.2`, `kubectl: v1.19.2`, `go-template: v0.4`, `go-template: v0.4.1`, `harness-pywinrm: v0.4-dev`, `helm: v2.13.1`, `helm: v3.1.2`, `helm: v3.8.0`, `chartmuseum: v0.12.0`, `chartmuseum: v0.8.2`, `tf-config-inspect: v1.0`, `tf-config-inspect: v1.1`, `oc: v4.2.16`, `kustomize: v3.5.4`, `kustomize: v4.0.0`, `scm` | `kubectl: v1.13.2`, `kubectl: v1.19.2`, `go-template: v0.4.1`, `helm: v2.13.1`, `helm: v3.1.2`, `helm: v3.8.0`, `chartmuseum: v0.12.0`, `tf-config-inspect: v1.1`, `oc: v4.2.16`, `kustomize: v3.5.4`, `kustomize: v4.0.0`, `scm` | + diff --git a/docs/platform/2_Delegates/delegate-guide/trust-store-override-for-delegates.md b/docs/platform/2_Delegates/delegate-guide/trust-store-override-for-delegates.md new file mode 100644 index 00000000000..bc15bd217e2 --- /dev/null +++ b/docs/platform/2_Delegates/delegate-guide/trust-store-override-for-delegates.md @@ -0,0 +1,179 @@ +--- +title: Truststore override for delegates +description: Replace or use a different default truststore with Harness Delegates. +# sidebar_position: 2 +helpdocs_topic_id: nh6tdfse6g +helpdocs_category_id: m9iau0y3hv +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness Delegates perform most Harness tasks. Delegates make outbound TLS/SSL connections to the Harness SaaS platform to obtain these task assignments. The TLS/SSL connection from the delegate to Harness requires a trusted certificate. + +Harness Delegate ships with a Java Runtime Environment (JRE) that includes a default trusted certificate in its [truststore](https://docs.oracle.com/cd/E19830-01/819-4712/ablqw/index.html) (located at `jdk8u242-b08-jre/lib/security/cacerts`). This truststore includes multiple trusted certificates; however, you might want to limit them to conform to your company's security protocols.
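To see exactly which certificates that default truststore trusts, you can list its contents with `keytool`. A minimal sketch only — the cacerts path is the delegate JRE location noted above, and the store password `changeit` is an assumption based on the conventional JKS default; adjust both for your image:

```shell
# List every trusted certificate in the delegate JRE's default truststore.
# "changeit" is the conventional default JKS password (an assumption --
# substitute yours if it differs).
keytool -list -v \
  -keystore jdk8u242-b08-jre/lib/security/cacerts \
  -storepass changeit | grep 'Owner:'
```

Each `Owner:` line identifies one trusted CA, which is useful when deciding which certificates your restricted truststore must keep.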
+ +Harness' only requirement is that the JRE truststore includes the certificate delegates use to trust Harness (app.harness.io). + +This topic describes how to limit the truststore used with Harness Delegates and ensure the trusted certificate Harness requires is included in the delegate truststore. + +### Before you begin + +* [Delegates Overview](../delegates-overview.md) +* [Install a Kubernetes Delegate](install-a-kubernetes-delegate.md) + +### Required: Harness trusted certificate + +TLS/SSL communication between the Harness Delegate and Harness SaaS uses a certificate from the DigiCert Global Root CA: + +![](./static/trust-store-override-for-delegates-00.png) +For Delegates to communicate with Harness, this root CA certificate must be installed in the delegate truststore. + +The public key for the certificate is publicly available for download: + + +``` +-----BEGIN CERTIFICATE----- +MIIDrzCCApegAwIBAgIQCDvgVpBCRrGhdWrJWZHHSjANBgkqhkiG9w0BAQUFADBh +MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 +d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBD +QTAeFw0wNjExMTAwMDAwMDBaFw0zMTExMTAwMDAwMDBaMGExCzAJBgNVBAYTAlVT +MRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5j +b20xIDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IENBMIIBIjANBgkqhkiG +9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4jvhEXLeqKTTo1eqUKKPC3eQyaKl7hLOllsB +CSDMAZOnTjC3U/dDxGkAV53ijSLdhwZAAIEJzs4bg7/fzTtxRuLWZscFs3YnFo97 +nh6Vfe63SKMI2tavegw5BmV/Sl0fvBf4q77uKNd0f3p4mVmFaG5cIzJLv07A6Fpt +43C/dxC//AH2hdmoRBBYMql1GNXRor5H4idq9Joz+EkIYIvUX7Q6hL+hqkpMfT7P +T19sdl6gSzeRntwi5m3OFBqOasv+zbMUZBfHWymeMr/y7vrTC0LUq7dBMtoM1O/4 +gdW7jVg/tRvoSSiicNoxBN33shbyTApOB6jtSj1etX+jkMOvJwIDAQABo2MwYTAO +BgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUA95QNVbR +TLtm8KPiGxvDl7I90VUwHwYDVR0jBBgwFoAUA95QNVbRTLtm8KPiGxvDl7I90VUw +DQYJKoZIhvcNAQEFBQADggEBAMucN6pIExIK+t1EnE9SsPTfrgT1eXkIoyQY/Esr +hMAtudXH/vTBH1jLuG2cenTnmCmrEbXjcKChzUyImZOMkXDiqw8cvpOp/2PV5Adg
+06O/nVsJ8dWO41P0jmP6P6fbtGbfYmbW0W5BjfIttep3Sp+dWOIrWcBAI+0tKIJF +PnlUkiaY4IBIqDfv8NZ5YBberOgOzW6sRBc4L0na4UU+Krk2U886UAb3LujEV0ls +YSEY1QSteDwsOoBrp+uvFRTp2InBuThs4pFsiv9kuXclVzDAGySj4dzp30d8tbQk +CAUw7C29C79Fv1C5qfPrmAESrciIxpg0X40KPMbp1ZWVbd4= +-----END CERTIFICATE----- +``` +This topic describes how to import this certificate into a new truststore. + +#### Third-party certificates + +Harness Delegate also connects to the third-party tools you use with Harness. You should also include those certificates in the Delegate truststore. + +For example, to pull a Docker image from an artifact server like Nexus or DockerHub, the truststore must include the certificates that those tools require. + +### Step 1: Stop the delegate + +You don't need to stop the Kubernetes delegate. You can run `kubectl apply` after you update the Kubernetes delegate YAML file. + +### Step 2: Create truststore with the Harness trusted certificate + +Let's walk through the steps of creating a new truststore and importing the Harness trusted certificate. + +Copy the following public key to a file and save it. 
+ + +``` +-----BEGIN CERTIFICATE----- +MIIDrzCCApegAwIBAgIQCDvgVpBCRrGhdWrJWZHHSjANBgkqhkiG9w0BAQUFADBh +MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 +d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBD +QTAeFw0wNjExMTAwMDAwMDBaFw0zMTExMTAwMDAwMDBaMGExCzAJBgNVBAYTAlVT +MRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5j +b20xIDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IENBMIIBIjANBgkqhkiG +9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4jvhEXLeqKTTo1eqUKKPC3eQyaKl7hLOllsB +CSDMAZOnTjC3U/dDxGkAV53ijSLdhwZAAIEJzs4bg7/fzTtxRuLWZscFs3YnFo97 +nh6Vfe63SKMI2tavegw5BmV/Sl0fvBf4q77uKNd0f3p4mVmFaG5cIzJLv07A6Fpt +43C/dxC//AH2hdmoRBBYMql1GNXRor5H4idq9Joz+EkIYIvUX7Q6hL+hqkpMfT7P +T19sdl6gSzeRntwi5m3OFBqOasv+zbMUZBfHWymeMr/y7vrTC0LUq7dBMtoM1O/4 +gdW7jVg/tRvoSSiicNoxBN33shbyTApOB6jtSj1etX+jkMOvJwIDAQABo2MwYTAO +BgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUA95QNVbR +TLtm8KPiGxvDl7I90VUwHwYDVR0jBBgwFoAUA95QNVbRTLtm8KPiGxvDl7I90VUw +DQYJKoZIhvcNAQEFBQADggEBAMucN6pIExIK+t1EnE9SsPTfrgT1eXkIoyQY/Esr +hMAtudXH/vTBH1jLuG2cenTnmCmrEbXjcKChzUyImZOMkXDiqw8cvpOp/2PV5Adg +06O/nVsJ8dWO41P0jmP6P6fbtGbfYmbW0W5BjfIttep3Sp+dWOIrWcBAI+0tKIJF +PnlUkiaY4IBIqDfv8NZ5YBberOgOzW6sRBc4L0na4UU+Krk2U886UAb3LujEV0ls +YSEY1QSteDwsOoBrp+uvFRTp2InBuThs4pFsiv9kuXclVzDAGySj4dzp30d8tbQk +CAUw7C29C79Fv1C5qfPrmAESrciIxpg0X40KPMbp1ZWVbd4= +-----END CERTIFICATE----- +``` +In this example, we'll name the file **DigiCertGlobalRootCA.pem**. + +Run the following command to create a truststore: + + +``` +keytool -import -file DigiCertGlobalRootCA.pem -alias DigiCertRootCA -keystore trustStore.jks +``` +This command prompts you for a password; you can choose your own password. + +This command creates a file named **trustStore.jks** and imports the DigiCert Global Root CA certificate. + +**Note where the trustStore.jks is located.** You will provide this path to the delegate as an environment variable.
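Before wiring the new truststore into the delegate, you can confirm the import succeeded. A quick check, using the same file and alias as above:

```shell
# List the new truststore; enter the password you chose during creation.
# A successful import shows a trustedCertEntry for the DigiCertRootCA alias.
keytool -list -keystore trustStore.jks -alias DigiCertRootCA
```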
+ +### Step 3: Add third-party certificates to the truststore + +You should import any certificates required by the third-party tools you use with Harness. + +In most cases, you can navigate to the third-party tool's website portal and download the certificate using a **Copy** or **Export** button in the browser. Save the certificate as a PEM (.pem) file and import it into the truststore. + +To add multiple certificates in the trustStore.jks you created, run the `keytool -import` command multiple times with the different aliases and certificate PEM files for the certificates you are importing. + +### Step 4: Update the delegate JAVA\_OPTS environment variable + +Update the delegate JAVA\_OPTS environment variable to point to the location of the new truststore file. + +#### Kubernetes delegate + +Edit the Kubernetes delegate YAML file. It's named **harness-delegate.yaml**. + +Open the delegate YAML file in a text editor. + +In the `StatefulSet` manifest, under `env`, locate `JAVA_OPTS`. + +Here's what the default setting looks like: + + +``` +... +apiVersion: apps/v1 +kind: StatefulSet +... +spec: + ... + spec: + ... + env: + - name: JAVA_OPTS + value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2 -Xms64M" +... +``` +Update the `JAVA_OPTS` environment variable with the location of the new trustStore.jks file and the password. + +For example: + + +``` + ... + env: + - name: JAVA_OPTS + value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2 -Xms64M -Djavax.net.ssl.trustStore= -Djavax.net.ssl.trustStoreType=jks -Djavax.net.ssl.trustStorePassword=" +... +``` +Next, you can apply the delegate YAML file, described in the next step. + +### Step 5: Start the delegate + +Now that the `JAVA_OPTS` environment variable is updated, you can start the delegate. 
+ +#### Kubernetes delegate + +Apply the Kubernetes delegate YAML file you edited: + + +``` +kubectl apply -f harness-delegate.yaml +``` +The delegate starts and appears on the **Harness Delegates** page. + diff --git a/docs/platform/2_Delegates/delegate-install-docker/_category_.json b/docs/platform/2_Delegates/delegate-install-docker/_category_.json new file mode 100644 index 00000000000..584c8f57331 --- /dev/null +++ b/docs/platform/2_Delegates/delegate-install-docker/_category_.json @@ -0,0 +1 @@ +{"label": "Docker Delegates", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Docker Delegates"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "cybg19aoxt"}} \ No newline at end of file diff --git a/docs/platform/2_Delegates/delegate-install-docker/install-a-docker-delegate.md b/docs/platform/2_Delegates/delegate-install-docker/install-a-docker-delegate.md new file mode 100644 index 00000000000..58c2cdaa4cc --- /dev/null +++ b/docs/platform/2_Delegates/delegate-install-docker/install-a-docker-delegate.md @@ -0,0 +1,121 @@ +--- +title: Install a Docker Delegate +description: Install a Docker Delegate. +# sidebar_position: 2 +helpdocs_topic_id: cya29w2b99 +helpdocs_category_id: cybg19aoxt +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The Harness Delegate is a worker process you run in your deployment target environment, such as your local network, VPC, or cluster. The Delegate connects all of your artifact, infrastructure, collaboration, verification and other providers with the Harness Manager. + +Most importantly, the Delegate performs all deployment operations. + +There are several types of Delegates. This topic describes how to install the Docker Delegate. 
+ +### Before you begin + +* [Delegate Installation Overview](https://docs.harness.io/article/re8kk0ex4k-delegate-installation-overview) +* [Projects and Organizations](../../1_Organizations-and-Projects/1-projects-and-organizations.md) + +### Review: System Requirements + +The Docker Delegate has the following system requirements: + +* Default 0.5 CPU. +* Default 768MB RAM: there is a cap of 768MB per Delegate, but when the Delegate is updating there might be two Delegates running. Hence, the minimum is 1.5GB. Ensure that you provide the minimum memory for the Delegate and enough memory for the host/node system. + +### Step 1: Download the Docker Delegate + +The Delegate can be installed at the Harness account, Organization, or Project level. + +You can install a Delegate as part of setting up a Connector or independent of another process. + +Once you have selected **New Delegate** on a Delegates page or as part of setting up a Connector, the Delegates selection page appears. + +![](./static/install-a-docker-delegate-00.png) +Click **Docker** and then click **Continue**. + +### Step 2: Name and Tag the Delegate + +Enter a name for the Delegate. This is the name you will see when the Delegate is listed in Harness. + +**Do not run Delegates with the same name in different clusters.** See [Troubleshooting](https://docs.harness.io/article/jzklic4y2j-troubleshooting). Add Tags to the Delegate. By default, Harness adds a Tag using the name you enter, but you can add more. Simply type them in and press Enter. + +These Tags are useful for selecting the Delegate when creating a Connector. + +![](./static/install-a-docker-delegate-01.png) +Click **Continue**. + +### Step 3: Run the Docker Delegate Script + +If your system already has a Delegate image, then Harness doesn't pull the latest image when you run `docker-compose`.
Simply run `docker pull harness/delegate` to pull the latest. Now you can see the YAML file for the Delegate: + +![](./static/install-a-docker-delegate-02.png) +Click **Download YAML file** and copy the Docker compose file to a machine where you have Docker installed. + +Run the following command to install the Delegate in Docker: + + +``` +docker-compose -f docker-compose.yaml up -d +``` +The Delegate installs. Type `docker ps` to see the container: + + +``` +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +6b242707a57a harness/delegate:latest "/bin/sh -c './entry…" 3 days ago Up 32 seconds local-docker_harness-ng-del +``` +#### Verification + +For an overview of verification, see [Delegate Registration and Verification](../delegate-guide/delegate-registration.md). + +In the Delegate wizard, click **Verify** and Harness will verify that it is receiving heartbeats from the Delegate. + +Your Delegate is installed. + +You can see the registered Delegate in the Delegate list. + +![](./static/install-a-docker-delegate-03.png) +Note the **Connected** status. If there is a connectivity error, you will see **Not Connected**. If there's an error, ensure the Docker host can connect to `https://app.harness.io`. + +That's it. The Delegate is installed, registered, and connected. + +### Harness Docker Delegate Environment Variables + +The following table lists each of the environment variables in the Harness Docker Delegate YAML. + + + +| | | | +| --- | --- | --- | +| **Name** | **Description** | **Example** | +| `ACCOUNT_ID` | The Harness account Id for the account where this Delegate will attempt to register. | ```- ACCOUNT_ID=H5W8i2828282828Xg```| +| `DELEGATE_TOKEN` | The Harness account token used to register the Delegate. | ```- DELEGATE_TOKEN=d229ee88bf7bbxxxx6ea```| +| `MANAGER_HOST_AND_PORT` | The Harness SaaS manager URL. `https` indicates port 443.
|```- MANAGER_HOST_AND_PORT=https://app.harness.io```| +| `WATCHER_STORAGE_URL` | The URL for the Watcher versions. See [Delegate Installation Overview](https://docs.harness.io/article/re8kk0ex4k-delegate-installation-overview). | ```- WATCHER_STORAGE_URL=https://app.harness.io/public/prod/premium/watchers```| +| `WATCHER_CHECK_LOCATION` | The Delegate version location for the Watcher to check. | ```- name: WATCHER_CHECK_LOCATION value: current.version```| +| `REMOTE_WATCHER_URL_CDN` | The CDN URL for Watcher builds. | ```- name: REMOTE_WATCHER_URL_CDN value: https://app.harness.io/public/shared/watchers/builds```| +| `DELEGATE_STORAGE_URL` | The URL where published Delegate jars are stored. | ```- name: DELEGATE_STORAGE_URL value: https://app.harness.io```| +| `DELEGATE_CHECK_LOCATION` | The storage location hosting the published Delegate versions. | ```- name: DELEGATE_CHECK_LOCATION value: delegateprod.txt```| +| `DEPLOY_MODE` | Deployment mode: Kubernetes, Docker, etc. | ```- name: DEPLOY_MODE value: DOCKER```| +| `DELEGATE_NAME` | The name of the Delegate. This is the name that will appear in Harness when the Delegate is registered. You can automate Delegate creation by omitting the name and then having a script copy the Delegate YAML file and add a unique name to `value` for each new Delegate you want to register. See [Automate Delegate Installation](../delegate-guide/automate-delegate-installation.md). | ```- name: DELEGATE_NAME value: qa```| +| `NEXT_GEN` | Indicates that this Delegate will register in [Harness NextGen](https://docs.harness.io/article/ra3nqcdbaf-compare-first-gen-and-next-gen). If it is set to `false`, the Delegate will attempt to register in Harness FirstGen. | ```- name: NEXT_GEN value: "true"```| +| `DELEGATE_DESCRIPTION` | The description added to the Delegate in the Harness Manager or YAML before registering. It appears in the Delegate details page in the Harness Manager.
| ```- name: DELEGATE_DESCRIPTION value: ""```| +| `DELEGATE_TYPE` | The type of Delegate. | ```- name: DELEGATE_TYPE value: "DOCKER"```| +| `DELEGATE_TAGS` | The Tags added to the Delegate in the Harness Manager or YAML before registering. Tags are generated by Harness using the Delegate name, but you can also add your own Tags. Tags appear in the Delegate details page in the Harness Manager. See [Tags Reference](../../20_References/tags-reference.md) and [Select Delegates with Tags](../delegate-guide/select-delegates-with-selectors.md). | ```- name: DELEGATE_TAGS value: ""```| +| `DELEGATE_TASK_LIMIT` | The maximum number of tasks the Delegate can perform at once. All of the operations performed by the Delegate are categorized as different types of tasks. | ```- name: DELEGATE_TASK_LIMIT value: "50"```| +| `DELEGATE_ORG_IDENTIFIER` | The Harness Organization [Identifier](../../20_References/entity-identifier-reference.md) where the Delegate will register. Delegates at the account level do not have a value for this variable. | ```- name: DELEGATE_ORG_IDENTIFIER value: "engg"```| +| `DELEGATE_PROJECT_IDENTIFIER` | The Harness Project [Identifier](../../20_References/entity-identifier-reference.md) where the Delegate will register. Delegates at the account or Org level do not have a value for this variable. | ```- name: DELEGATE_PROJECT_IDENTIFIER value: "myproject"```| +| `PROXY_MANAGER` | Indicates whether to use the Harness Manager or a proxy. If you use `true` (the default), you are indicating that you proxy outbound traffic to Harness.
| ```- PROXY_MANAGER=true```| +| `INIT_SCRIPT` | You can run scripts on the Delegate using `INIT_SCRIPT`. For example, if you want to install software on the Delegate container, you can enter the script in `INIT_SCRIPT` and then apply the Delegate YAML. A multiline script must follow the YAML spec for [literal scalar style](https://yaml.org/spec/1.2-old/spec.html#id2795688). See [Run Scripts on Delegates](../delegate-guide/run-scripts-on-delegates.md). | ```- INIT_SCRIPT= echo hello world!```| +| `USE_CDN` | Makes the Delegate use a CDN for new versions. | ```- name: USE_CDN value: "true"```| +| `CDN_URL` | The CDN URL for Delegate versions. | ```- name: CDN_URL value: https://app.harness.io```| +| `VERSION_CHECK_DISABLED` | By default, the Delegate always checks for new versions (via the Watcher). | ```- name: VERSION_CHECK_DISABLED value: "false"```| + +### See also + +* [Automate Delegate Installation](../delegate-guide/automate-delegate-installation.md) + diff --git a/docs/platform/2_Delegates/delegate-install-docker/static/install-a-docker-delegate-00.png b/docs/platform/2_Delegates/delegate-install-docker/static/install-a-docker-delegate-00.png new file mode 100644 index 00000000000..ad1c5532363 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-docker/static/install-a-docker-delegate-00.png differ diff --git a/docs/platform/2_Delegates/delegate-install-docker/static/install-a-docker-delegate-01.png b/docs/platform/2_Delegates/delegate-install-docker/static/install-a-docker-delegate-01.png new file mode 100644 index 00000000000..b1e8af17ede Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-docker/static/install-a-docker-delegate-01.png differ diff --git a/docs/platform/2_Delegates/delegate-install-docker/static/install-a-docker-delegate-02.png b/docs/platform/2_Delegates/delegate-install-docker/static/install-a-docker-delegate-02.png new file mode 100644 index 00000000000..f217f9918b6 Binary files /dev/null and
b/docs/platform/2_Delegates/delegate-install-docker/static/install-a-docker-delegate-02.png differ diff --git a/docs/platform/2_Delegates/delegate-install-docker/static/install-a-docker-delegate-03.png b/docs/platform/2_Delegates/delegate-install-docker/static/install-a-docker-delegate-03.png new file mode 100644 index 00000000000..df4d9b1fddb Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-docker/static/install-a-docker-delegate-03.png differ diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/_category_.json b/docs/platform/2_Delegates/delegate-install-kubernetes/_category_.json new file mode 100644 index 00000000000..642d2ce7c76 --- /dev/null +++ b/docs/platform/2_Delegates/delegate-install-kubernetes/_category_.json @@ -0,0 +1 @@ +{"label": "Kubernetes Delegates", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Kubernetes Delegates"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "uimq4rlif9"}} \ No newline at end of file diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/install-harness-delegate-on-kubernetes.md b/docs/platform/2_Delegates/delegate-install-kubernetes/install-harness-delegate-on-kubernetes.md new file mode 100644 index 00000000000..b5616911d5a --- /dev/null +++ b/docs/platform/2_Delegates/delegate-install-kubernetes/install-harness-delegate-on-kubernetes.md @@ -0,0 +1,129 @@ +--- +title: Install Harness Delegate on Kubernetes +description: This document explains how to install Harness Delegate on Kubernetes using a Helm chart or a Kubernetes manifest. +# sidebar_position: 2 +helpdocs_topic_id: 2132l9r4gt +helpdocs_category_id: uimq4rlif9 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This document explains how to install Harness Delegate into Harness NextGen. 
The delegate is installed into Kubernetes environments using Helm or a Kubernetes manifest. This document steps through both installation methods. + +Harness Delegate offers some configurable settings to support proxied environments and delegate auto-upgrade. + +Harness supports a version skew of up to *n*-2 delegate versions, for which *n* is the current version of the installed delegate. For example, with monthly releases, Harness supports your June installation through August. Harness Delegate includes an algorithm that automatically expires delegates three months after installation. These limitations help to ensure compatibility between the delegate and Harness components, including Harness Manager. + +Delegate auto-upgrade status and expiration dates are shown in Harness Manager for each installed delegate where they apply. + +For an introduction to delegates, see [Delegate Overview](../delegates-overview.md). For more information about the delegate automatic update process, see [Delegate Auto-Update](../delegate-guide/delegate-auto-update.md). + +### Install Process + +Harness Delegate is deployed using Harness Manager. This document describes the requirements for the process, explains the installation screens, and provides steps you can use to verify or troubleshoot the process. + +The delegate is added to the target cluster. Kubernetes Cluster Connector uses the delegate to connect to the cluster. By default, Harness delegates install and run with cluster root access. + +The following diagram shows the integration of Harness Delegate into a Kubernetes deployment. + +![](./static/install-harness-delegate-on-kubernetes-09.png) +### Requirements + +This section describes the requirements for Harness Delegate. + +#### Permissions + +Harness Delegate requires the following access and permissions: + +* A machine configured for access to the Harness SaaS URL: . 
+* Access to the target Kubernetes cluster with installation by Kubernetes manifest (YAML) or Helm chart. +* A [ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) that permits creation of the following: + + A namespace to host the delegate + + [Deployment](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/deployment-v1/) resources, including the [StatefulSet](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/stateful-set-v1/) objects required to manage the delegate + +#### Compute Resources + +The compute resources that the delegate workload requires depend on the scale of your deployment and the number of replica pods to be deployed. + +* Deploy to laptop: 0.5 CPU with 2.0 GB memory +* Small-scale deployment: 1.0 CPU with 4.0 GB memory +* Medium-scale deployment: 2.0 CPU with 8.0 GB memory +* Large-scale deployment: 4.0 CPU with 16.0 GB memory + +### Installation + +Harness Delegate is installed using a Kubernetes manifest or by deploying a Helm chart. The installation process requires you to configure the deployment and, if you are using a proxy, to configure proxy settings. Harness deploys the delegate and listens for a heartbeat to confirm the delegate is running. If you receive a message that the delegate could not be installed, see the final section of this document for links to troubleshooting information. + +For basic information on Harness Delegate, see [Delegate Requirements and Limitations](../delegates-overview.md). + +**To install the Delegate** + +1. Open the target project and select **Delegates**. ![](./static/install-harness-delegate-on-kubernetes-10.png) +In this example, **Harness Project** is the target of the deployment. +2. Click **Create a Delegate**.![](./static/install-harness-delegate-on-kubernetes-11.png) +3. Review the prerequisites and click **Continue**. +4. Enter the name of your delegate. +The name is populated into the **ID** field.
You can change the name of the delegate after it is deployed; you cannot change the delegate ID. +5. (Optional) Enter a description and create tags to be associated with your delegate. +6. In **Delegate Size**, select the size of the deployment. + + ![](./static/install-harness-delegate-on-kubernetes-12.png) + + In this example, the target deployment is of medium size. + +7. In **Delegate Permissions**, select the access level you want to grant the delegate. +In this example, the delegate is granted default access with cluster-wide read/write access. + +You can install the delegate using a Helm chart. + +![](./static/install-harness-delegate-on-kubernetes-13.png) +Or you can use a Kubernetes manifest. + +![](./static/install-harness-delegate-on-kubernetes-14.png) +8. Select the installer you prefer and click **Continue**. + +### Install by Helm Chart + +Some delegate values are configured in the harness-delegate-values.yaml file. You can download the values file to configure a proxy or customize other editable values. + +![](./static/install-harness-delegate-on-kubernetes-15.png) + +1. (Optional) To download the harness-delegate-values.yaml file, click **Download YAML file**. + + For detailed information about configuring a proxy for the delegate, see [Configure Delegate Proxy Settings](../delegate-guide/configure-delegate-proxy-settings.md). + + For information about additional editable fields in the harness-delegate-values.yaml file, see [Delegate Environment Variables](../delegate-reference/delegate-environment-variables.md). + +2. Click **Continue**.![](./static/install-harness-delegate-on-kubernetes-16.png) +3. Copy the provided commands and apply the chart. + +If the deployment is successful, the installer verifies the delegate heartbeat. If there are problems, see [Delegate Issues](https://docs.harness.io/article/jzklic4y2j-troubleshooting#delegate_issues). After the issue is resolved, try the `helm upgrade` command again. 
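The chart commands the installer provides follow the standard Helm workflow. The sketch below is illustrative only: `<repo-url>` and the `my-delegate` release name are placeholders, and the chart name `harness-delegate-ng` is an assumption based on the chart naming used elsewhere in these docs. Always copy the exact commands the installer displays.

```shell
# Illustrative sketch only; copy the exact commands from the installer UI.
# <repo-url> and my-delegate are placeholders. The chart name is an assumption.
helm repo add harness-delegate <repo-url>
helm repo update

# First-time install, applying the downloaded and edited values file:
helm install my-delegate harness-delegate/harness-delegate-ng \
  --namespace harness-delegate-ng --create-namespace \
  -f harness-delegate-values.yaml

# After resolving a failed attempt, upgrade the existing release in place:
helm upgrade my-delegate harness-delegate/harness-delegate-ng \
  --namespace harness-delegate-ng \
  -f harness-delegate-values.yaml
```

Using `helm upgrade` on the existing release preserves the release history, so you do not need to uninstall a failed release before retrying. These commands require a live cluster and the chart repository, so they are not runnable as-is.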
+ +If you require additional assistance, contact Harness Support. + +### Install by Kubernetes Manifest + +You can download the delegate manifest file to configure a proxy or customize other editable values. + +![](./static/install-harness-delegate-on-kubernetes-17.png) +1. (Optional) To download the harness-delegate.yml file, click **Download YAML file**. +For detailed information about configuring a proxy for the delegate, see [Configure Delegate Proxy Settings](../delegate-guide/configure-delegate-proxy-settings.md). +For a sample manifest, see [Example Kubernetes Manifest: Harness Delegate](../delegate-reference/example-kubernetes-manifest-harness-delegate.md). +2. Click **Continue**.![](./static/install-harness-delegate-on-kubernetes-18.png) +3. Copy the provided command and apply the YAML. + +If the deployment is successful, the installer verifies the delegate heartbeat. If the deployment is not successful, see [Troubleshooting](https://docs.harness.io/article/jzklic4y2j) for instructions. After you resolve the issue, apply the YAML again. + +If you require additional assistance, contact Harness Support. + +### Confirm Installation in Harness Manager + +When installation is complete, check Harness Manager to verify the status of the delegate auto-update feature. If auto-update is not enabled, confirm the delegate's expiration date. + +You can find auto-update information in Harness Manager. Check the list of delegates by name. + +![](./static/install-harness-delegate-on-kubernetes-19.png) +You can find expiration information listed with the details for the delegate. 
+ +![](./static/install-harness-delegate-on-kubernetes-20.png) \ No newline at end of file diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/install-harness-delegate-using-helm.md b/docs/platform/2_Delegates/delegate-install-kubernetes/install-harness-delegate-using-helm.md new file mode 100644 index 00000000000..ff2b45a2889 --- /dev/null +++ b/docs/platform/2_Delegates/delegate-install-kubernetes/install-harness-delegate-using-helm.md @@ -0,0 +1,219 @@ +--- +title: Install Harness Delegate Using Helm +description: Harness Delegate is a service that runs in the target environment for a deployment – typically a local network, a VPC, or a cluster. Harness Delegate connects artifacts, infrastructure, and providers… +# sidebar_position: 2 +helpdocs_topic_id: zo44dwgmin +helpdocs_category_id: uimq4rlif9 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness Delegate is a service that runs in the target environment for a deployment – typically a local network, a VPC, or a cluster. Harness Delegate connects artifacts, infrastructure, and providers with Harness Manager, and is responsible for performing deployment operations. + +This document explains how to install Harness Delegate using the [Helm](https://helm.sh/) package manager. + +### Install Process + +Harness Delegate for Helm is deployed by using an installer. This document describes the requirements for the process, explains the screens of the installer, and provides steps you can use to verify or troubleshoot the process. + +By default, Harness Delegate installs and runs with cluster root access. + +### Requirements + +Harness Delegate for Helm has the following requirements. + +#### Permissions + +Harness Delegate for Helm requires the following access and permissions: + +* A machine configured for access to the Harness SaaS URL: `https://app.harness.io`. +* Access to the target Kubernetes cluster with installation by application manifest (YAML) or Helm chart. 
+* A [**ClusterRole**](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) that permits creation of the following: + + A namespace to host Harness Delegate + + [**Deployment**](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/deployment-v1/) resources, including the [**StatefulSet**](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/stateful-set-v1/) objects required to manage Harness Delegate + +For more information about the permissions you need to install a Delegate, see [Review: Delegate Role Requirements](../delegate-guide/install-a-kubernetes-delegate.md#review-delegate-role-requirements). + +#### Compute Resources + +The compute resources that the Delegate workload requires depend on the scale of your deployment and the number of replica pods to be deployed. + +* **Deploy to laptop:** 0.5 CPU with 2.0 GB memory +* **Small-scale deployment:** 1.0 CPU with 4.0 GB memory +* **Medium-scale deployment:** 2.0 CPU with 8.0 GB memory +* **Large-scale deployment:** 4.0 CPU with 16.0 GB memory + +### Installation + +Harness Delegate for Helm is installed by deploying a Helm chart. The installation process requires you to configure the deployment and, if you are using a proxy, to configure proxy settings. Harness deploys the Delegate and listens for a heartbeat to confirm the Delegate is running. If you receive a message that the Delegate could not be installed, see the final section of this document for troubleshooting information. + +For basic information on Harness Delegate, see [Delegate Requirements and Limitations](../delegates-overview.md). + +#### Configure the Deployment + +1. Select the **Kubernetes** option. + + ![](./static/install-harness-delegate-using-helm-00.png) + +2. Review **Kubernetes Prerequisites** to ensure that your cluster can support the deployment. Click **Continue**. + + ![](./static/install-harness-delegate-using-helm-01.png) + +3. In **Delegate Name**, give your Delegate a name. 
The Delegate name is populated into the **ID** field.
    + A valid Delegate name is a lowercase character string that does not begin or end with a number. You can use the dash mark (“-”) to separate segments of the string; other special characters are not permitted. Delegate names must be unique in the cluster. +4. (Optional) Click the pencil icon to change the identifier of the Delegate resource. +5. (Optional) Click **+ Description** to create a description to be associated with the Delegate. +6. (Optional) Click **+ Tags** to label the Delegate resource with tags. This increases searchability. +7. Select the size of the deployment.
    + The size of the deployment determines the CPU and memory resources allocated to running the Delegate. The following table summarizes the options: + +| **Size** | **Replicas** | **CPU/Memory** | **Description** | +| --- | --- | --- | --- | +| **Laptop** | 1 | 0.5/2.0 GB | Run up to 2 parallel deployments or builds. Not intended for production workloads. | +| **Small** | 2 | 1.0/4.0 GB | Run up to 10 parallel deployments or builds. For small-scale production workloads. | +| **Medium** | 4 | 2.0/8.0 GB | Run up to 20 parallel deployments or builds. For medium-scale production workloads. | +| **Large** | 8 | 4.0/16.0 GB | Run up to 40 parallel deployments or builds. For large-scale production workloads. | + +8. Select the **Helm Chart** installer option. Harness Delegate for Helm is installed using a Helm chart. + + ![](./static/install-harness-delegate-using-helm-02.png) + +9. In **Delegate Tokens**, specify one or more token names to be associated with the Delegate. In the example above, `default_token` is specified. +10. In **Delegate Permissions**, select the permissions you want to grant the Delegate. + +| **User role** | **Scope** | **Access** | **Description** | +| --- | --- | --- | --- | +| Install Delegate with cluster-wide read/write access | cluster | read/write | The Delegate is installed with cluster-wide access in the namespace you are prompted to specify during the installation process. This Delegate binds to the default **cluster-admin** [ClusterRole](https://kubernetes.io/docs/reference/kubernetes-api/authorization-resources/cluster-role-v1/). The Delegate reads and writes tasks across all namespaces in the cluster. 
| +| Install Delegate with cluster-wide read access | cluster | read | The Delegate is installed with cluster-wide access in the namespace you are prompted to specify during the installation process. This Delegate binds to the default **view** [ClusterRole](https://kubernetes.io/docs/reference/kubernetes-api/authorization-resources/cluster-role-v1/) and is limited to performing read-only tasks in the cluster. | +| Install Delegate with specific namespace access | namespace | read/write | The Delegate is installed with namespace access in the namespace you specify during the installation process. This requires you to modify the command-line instructions so that the Delegate is installed and operates in the given namespace. | + +For detailed information about Kubernetes default user roles, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in [Kubernetes Documentation](https://kubernetes.io/docs/home/). + +### Configure Proxy Settings + +The configuration of proxy settings is an optional step in the installation process. You can download the Delegate YAML and configure settings to modify how the Delegate connects to Harness Manager. Skip this step if your environment is not configured to use the Kubernetes proxy service. + +![](./static/install-harness-delegate-using-helm-03.png) +Proxy settings are configured using the following process: + +* Download the Delegate YAML. +* Open the YAML file and configure the desired proxies. +* Save and install the modified YAML file. + +For more information about how proxies work in Kubernetes, see [Service](https://kubernetes.io/docs/concepts/services-networking/service/) in [Kubernetes Documentation](https://kubernetes.io/docs/home/). + +#### Delegate Proxy Settings + +The following proxy settings determine how Harness Delegate connects to Harness Manager. If `PROXY_MANAGER` is set to `true`, any other proxy settings are ignored. 
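As a sketch, the proxy block of the downloaded values file might look like the following. Every value shown here is invented for illustration; substitute your own proxy details.

```
# Hypothetical example values; replace with your own proxy details.
proxyHost: proxy.example.com      # hostname of the proxy
proxyPort: "8080"                 # port the Delegate connects through
proxyUser: delegate-proxy-user    # service account authenticated for proxy access
proxyPassword: ""                 # supply out of band; avoid committing secrets
proxyScheme: HTTP                 # HTTP or HTTPS
noProxy: ".example.com,10.0.0.1"  # suffixes and addresses that bypass the proxy
```

Leaving `proxyPassword` empty in the committed file and injecting it at deploy time avoids storing credentials in plain text.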
+ +For in-cluster Delegates with configured proxies, the `noProxy` value must be the master IP address of the cluster. This allows the Delegate to bypass the proxy for in-cluster connections. + +| **Value** | **Description** | +| --- | --- | +| `proxyHost` | The hostname of the proxy, for example, `proxy.example.com`. | +| `proxyPort` | The proxy port number to which Harness Delegate connects. | +| `proxyUser` | The name of the service account to be authenticated for proxy access. | +| `proxyPassword` | The password for the service account to be authenticated. | +| `proxyScheme` | The network addressing scheme that the proxy uses. This must be HTTP or HTTPS. | +| `noProxy` | A comma-separated list of domain suffixes that are not subject to the proxy, for example, `.example.com`. | + +**To configure Delegate proxy settings** + +1. Click **Download YAML File**. +2. Navigate to the download location and open the `harness-helm-values.yaml` file. +3. Modify the proxy settings and save the file. + + ![](./static/install-harness-delegate-using-helm-04.png) + + For information on using proxy settings with Helm Delegate, see [Configure Delegate Proxy Settings](../delegate-guide/configure-delegate-proxy-settings.md). + +#### Deploy and Verify + +Deploy the configured Harness Delegate using the `helm` CLI from a machine configured for access to the target cluster. A successful deployment is confirmed by listening for a heartbeat. The installer monitors the process and displays status messages. + +**To deploy Harness Delegate** + +1. Copy and run the provided commands. + + ![](./static/install-harness-delegate-using-helm-05.png) + + Harness Manager waits for a heartbeat to confirm that Harness Delegate is installed and running. + + ![](./static/install-harness-delegate-using-helm-06.png) + + Installation can take several minutes. + +2. If Harness Delegate was successfully installed, click **Done**. 
+ + If Harness Delegate cannot be installed, the following message appears: + + ![](./static/install-harness-delegate-using-helm-07.png) + + If the Delegate cannot be installed, see the following section for information on common problems. + +### Troubleshooting + +Installation failures can result from common problems including unhealthy pods and a lack of compute resources. + +You can retrieve the information required to triage and resolve most failure conditions with the following `kubectl` instructions. + +| **To check** | **Use `kubectl` command** | **To resolve** | +| --- | --- | --- | +| **Delegate pod status** | `kubectl describe pods -n <namespace>` or `kubectl describe pod <pod_name> -n <namespace>` | Check to ensure the pod is ready and available. Check pod status to confirm that the Delegate pod was scheduled to a node and is running. Resolve issues that keep pods in Pending or Waiting status. If the state of the pod is `CrashLoopBackOff` and Kubernetes cluster resources are not available, increase the cluster resources for CPU units and memory. See [Debug Pods](https://kubernetes.io/docs/home/) in [Kubernetes Documentation](https://kubernetes.io/docs/home/). | +| **Delegate logs** | `kubectl logs -f <pod_name> -n <namespace>` | Examine the logs for the namespace. See [Troubleshooting Clusters](https://kubernetes.io/docs/tasks/debug/debug-cluster/) in [Kubernetes Documentation](https://kubernetes.io/docs/home/). | + +When the issue is resolved, apply the Delegate YAML a second time. + +![](./static/install-harness-delegate-using-helm-08.png) + +From **Apply YAML and verify connection**, copy the instructions to the command line. + +For further information on troubleshooting, see [Troubleshooting Harness](https://docs.harness.io/article/jzklic4y2j-troubleshooting). + +### Delegate Field Reference + +The following table lists and describes fields and values that are used by Harness Delegate for Helm. 
+ + + +| **Field** | **Configurable** | **Description** | +| --- | --- | --- | +| **accountId** | No | The ID of the Harness account with which the Delegate is associated. | +| **delegateToken** | No | The secret that identifies the account. | +| **delegateName** | Yes | A user-specified name for the Delegate consisting of lowercase alphanumeric characters. A hyphen ("-") is permitted as a character separator. The name must not begin or end with a number. Example: `my-sample-delegate` | +| **delegateDockerImage** | No | The location and name of the Docker image that contains the Delegate. | +| **managerEndpoint** | No | The HTTPS location of the manager endpoint. | +| **tags** | Yes | A comma-separated list of tags that identify the Delegate. | +| **description** | Yes | A description of the Delegate. | +| **k8sPermissionsType** | Yes | The **ClusterRole** granted the Delegate. Specify one of the following valid values: `CLUSTER_ADMIN`, `CLUSTER_VIEWER`, `NAMESPACE_ADMIN`. Specify `CLUSTER_ADMIN` for cluster-wide read/write privileges. Specify `CLUSTER_VIEWER` for cluster-wide view privileges. Specify `NAMESPACE_ADMIN` to restrict the Delegate to read/write access to a given namespace. | +| **replicas** | No | The desired number of pods that run the Delegate. The default value depends on the size of the deployment that is selected in the installer. | +| **cpu** | No | The CPU resource units allocated to the Delegate. The default value depends on the size of the deployment that is selected in the installer. | +| **memory** | No | The desired amount of memory resources allocated to the Delegate. The default value depends on the size of the deployment that is selected in the installer. | +| **initScript** | Yes | The path and filename to an optional initialization script that runs before the Delegate starts. | +| **javaOpts** | Yes | Specify the `JAVA_OPTS` environment variable to pass custom settings to the JVM. 
| + +The following table lists and describes additional configurable fields in the values.yaml file. + +| **Field** | **Description** | +| --- | --- | +| **serviceAccount.name** | Specify this field to set a user-specified name for the service account. Otherwise, the service account name defaults to the chart name, `harness-delegate-ng`. | +| **serviceAccount.annotations** | Use this field to annotate the service account with metadata. For information about annotations, see [Annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) in [Kubernetes Documentation](https://kubernetes.io/docs/home/). | +| **autoscaling** | Use this object to enable and specify pod autoscaling. To enable autoscaling, **autoscaling.enabled** must be set to `true`. By default, this field is set to `false`. For information on autoscaling, see [Horizontal Pod Autoscaling](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) in [Kubernetes Documentation](https://kubernetes.io/docs/home/). | +| **pollForTasks** | Set this value to `true` to enable polling for tasks using REST API methods. By default, this value is set to `false` and the use of a socket connection is assumed. | +| **upgrader** | Set **upgrader.enabled** to `true` to enable automatic upgrading of the Delegate image by Harness. 
| + diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-09.png b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-09.png new file mode 100644 index 00000000000..d433e17f3d4 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-09.png differ diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-10.png b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-10.png new file mode 100644 index 00000000000..acd5957534e Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-10.png differ diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-11.png b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-11.png new file mode 100644 index 00000000000..09968cabd18 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-11.png differ diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-12.png b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-12.png new file mode 100644 index 00000000000..ebd55535c36 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-12.png differ diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-13.png b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-13.png new file mode 100644 index 00000000000..9f61f61762d Binary files 
/dev/null and b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-13.png differ diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-14.png b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-14.png new file mode 100644 index 00000000000..3b31204021e Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-14.png differ diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-15.png b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-15.png new file mode 100644 index 00000000000..df9ba2809a4 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-15.png differ diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-16.png b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-16.png new file mode 100644 index 00000000000..e217fcba4e4 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-16.png differ diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-17.png b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-17.png new file mode 100644 index 00000000000..0a29863c3f3 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-17.png differ diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-18.png 
b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-18.png new file mode 100644 index 00000000000..0d8a701d2f0 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-18.png differ diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-19.png b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-19.png new file mode 100644 index 00000000000..c82822eed9d Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-19.png differ diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-20.png b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-20.png new file mode 100644 index 00000000000..ffb72f0ef4e Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-on-kubernetes-20.png differ diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-00.png b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-00.png new file mode 100644 index 00000000000..869907d193a Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-00.png differ diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-01.png b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-01.png new file mode 100644 index 00000000000..c998b4fa89b Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-01.png differ diff --git 
a/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-02.png b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-02.png new file mode 100644 index 00000000000..46b97ab3d10 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-02.png differ diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-03.png b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-03.png new file mode 100644 index 00000000000..043317b9c82 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-03.png differ diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-04.png b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-04.png new file mode 100644 index 00000000000..27b2aefc2bb Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-04.png differ diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-05.png b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-05.png new file mode 100644 index 00000000000..c4c7b9525d5 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-05.png differ diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-06.png b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-06.png new file mode 100644 index 00000000000..fc3af564838 Binary files /dev/null and 
b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-06.png differ diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-07.png b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-07.png new file mode 100644 index 00000000000..db90082c843 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-07.png differ diff --git a/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-08.png b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-08.png new file mode 100644 index 00000000000..c4c7b9525d5 Binary files /dev/null and b/docs/platform/2_Delegates/delegate-install-kubernetes/static/install-harness-delegate-using-helm-08.png differ diff --git a/docs/platform/2_Delegates/delegate-reference/_category_.json b/docs/platform/2_Delegates/delegate-reference/_category_.json new file mode 100644 index 00000000000..368f9181d77 --- /dev/null +++ b/docs/platform/2_Delegates/delegate-reference/_category_.json @@ -0,0 +1 @@ +{"label": "Delegate Reference", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Delegate Reference"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "vm60533pvt"}} \ No newline at end of file diff --git a/docs/platform/2_Delegates/delegate-reference/common-delegate-profile-scripts.md b/docs/platform/2_Delegates/delegate-reference/common-delegate-profile-scripts.md new file mode 100644 index 00000000000..579b668e079 --- /dev/null +++ b/docs/platform/2_Delegates/delegate-reference/common-delegate-profile-scripts.md @@ -0,0 +1,241 @@ +--- +title: Common Delegate Initialization Scripts +description: This functionality is 
limited temporarily to the platforms and settings you can see. More functionality for this feature is coming soon. This topic provides information on script availability and som… +# sidebar_position: 2 +helpdocs_topic_id: auveebqv37 +helpdocs_category_id: vm60533pvt +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can run scripts on Harness Delegate pods, hosts, and containers to install applications or run commands. + +For more information about running scripts, see [Install Software on the Delegate with Initialization Scripts](../delegate-guide/run-scripts-on-delegates.md). This topic provides information on script availability and some common delegate initialization scripts. + +### Limitations + +* When you edit or delete scripts, the binaries that were already installed by those scripts are not automatically removed. To remove them, you must restart or clean up the pod or VM. +* You cannot use Harness secrets in scripts. This is because the script runs before the delegate is registered with and establishes a connection to Harness. + +### Review: What Can I Run In a Script? + +You can add any commands supported on the host/container/pod running the delegate. Linux shell commands are most common. If `kubectl`, Helm, or Docker is running on the host/container/pod where you install the delegate, then you can use their commands. Kubernetes and Docker delegates include Helm. + +The base image for the delegate is Ubuntu 18.04 or later. This means you can use any default Ubuntu package in delegate scripts. + +#### Legacy Delegates + +Legacy Delegates include `cURL`, `tar`, and `unzip` as part of their installation package. This means you can use `cURL`, `tar`, and `unzip` in delegate scripts without installing them. 
For example, the following script works without the installation of any packages: + + +``` +/usr/bin/apt-get install -y python +curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" +unzip awscli-bundle.zip +./awscli-bundle/install -b ~/bin/aws +``` +#### Harness Delegate + +Harness Delegate is packaged with `cURL` and `tar`. + +#### When is the Script Executed? + +Delegate scripts are applied under the following conditions: + +* **New Delegate.** Scripts added on delegate creation run before the delegate starts. +* **Running Delegate.** Scripts applied during delegate runtime, either by application as a new script or by switching the Delegate’s current script, run on delegate restart, before the delegate reaches steady state. + +### Terraform + +Here is an example of a script for installing Terraform: + + +``` +# Install TF +curl -O -L https://releases.hashicorp.com/terraform/0.12.25/terraform_0.12.25_linux_amd64.zip +unzip terraform_0.12.25_linux_amd64.zip +mv ./terraform /usr/bin/ +# Check TF install +terraform --version +``` +### Helm 2 + +The following script installs Helm and Tiller in the Delegate's cluster: + + +``` +# Add the Helm version that you want to install +HELM_VERSION=v2.14.0 +# v2.13.0 +# v2.12.0 +# v2.11.0 + +export DESIRED_VERSION=${HELM_VERSION} + +echo "Installing Helm $DESIRED_VERSION ..." + +curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash + +# If Tiller is already installed in the cluster +helm init --client-only + +# If Tiller is not installed in the cluster +# helm init +``` +The `helm init` command is used with Helm 2 to install Tiller into a Kubernetes cluster. 
The command does not exist in Helm 3; nor is Tiller used in Helm 3.
+
+`DESIRED_VERSION` is used by a function in the [Helm install script](https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get).
+
+If Helm is installed in a different cluster than the delegate, make sure the `kubeconfig` in the delegate cluster references the correct cluster. Use the following command to set the context:
+
+
+```
+kubectl config use-context cluster_name
+```
+If you are using TLS for communication between Helm and Tiller, ensure that you use the `--tls` parameter with your commands. For more information, see [Using SSL Between Helm and Tiller](https://docs.helm.sh/using_helm/#using-ssl-between-helm-and-tiller) from Helm, and the section **Securing your Helm Installation** in that document.
+
+The following example shows how to add a Helm chart from a private repository using the secrets `repoUsername` and `repoPassword` from Harness [Text Secrets](../../6_Security/2-add-use-text-secrets.md).
+
+
+```
+# Other installation method
+# curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
+# chmod 700 get_helm.sh
+# ./get_helm.sh
+
+curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
+
+helm init --client-only
+
+helm repo add --username <+secrets.getValue("repoUsername")> --password <+secrets.getValue("repoPassword")> nginx https://charts.bitnami.com/bitnami
+
+helm repo update
+```
+The `helm init` command does not exist in Helm 3. This command is used with Helm 2 to install Tiller into a Kubernetes cluster. Tiller is not used in Helm 3.
+
+### Helm 3
+
+You do not need to add a script for Helm 3. Harness includes Helm 3 support in any Delegate that can connect to the target Kubernetes cluster.
+
+### Pip
+
+Ensure that you run `apt-get update` before running any `apt-get` commands.
+```
+apt-get update
+# Install pip
+apt-get -y install python-pip
+# Check pip install
+pip --version
+```
+### Unzip
+
+Ensure that you run `apt-get update` before running any `apt-get` commands.
+```
+apt-get update
+# Install Unzip
+apt-get install unzip
+```
+### AWS CLI
+
+The following script installs the [AWS CLI version 2](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-linux.html) on the Delegate host.
+
+
+```
+# Download and unpack the AWS CLI version 2 bundle
+curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
+unzip awscliv2.zip
+# Install
+sudo ./aws/install
+# Check AWS CLI install
+aws --version
+```
+### AWS Describe Instance
+
+The following script describes the EC2 instance based on its private DNS hostname:
+
+
+```
+aws ec2 describe-instances --filters "Name=network-interface.private-dns-name,Values=ip-10-0-0-205.ec2.internal" --region "us-east-1"
+```
+The value for the `Values` parameter is the hostname of the delegate.
+
+### AWS List All Instances in a Region
+
+The following script lists all of the EC2 instances in the region you supply:
+
+
+```
+aws ec2 describe-instances --query 'Reservations[*].Instances[*].[InstanceId,State.Name,InstanceType,PrivateIpAddress,PublicIpAddress,Tags[?Key==`Name`].Value[]]' --region "us-east-1" --output json | tr -d '\n[] "' | perl -pe 's/i-/\ni-/g' | tr ',' '\t' | sed -e 's/null/None/g' | grep '^i-' | column -t
+```
+### Git CLI
+
+Run `apt-get update` before you run `apt-get` commands.
+```
+apt-get update
+# Install Git with auto approval
+yes | apt-get install git
+# Check git install
+git --version
+```
+### Cloud Foundry CLI
+
+Harness supports Cloud Foundry CLI version 6 only. Support for version 7 is pending.
+
+Below is one example of CF CLI installation, but the version of the CF CLI you install on the Delegate should always match the PCF features you are using in your Harness PCF deployment.
+
+For example, if you are using buildpacks in the manifest.yml of your Harness service, the CLI you install on the delegate should be version 3.6 or later.
+
+The following example script installs Cloud Foundry CLI on a delegate:
+
+
+```
+sudo wget -O /etc/yum.repos.d/cloudfoundry-cli.repo https://packages.cloudfoundry.org/fedora/cloudfoundry-cli.repo
+
+sudo yum -y install cf-cli
+```
+The `-y` parameter answers yes to the installation prompt.
+
+When the script has been applied and you click the timestamp for the Delegate, the output is similar to this:
+
+
+```
+Running transaction
+  Installing : cf-cli-6.46.1-1.x86_64 1/1
+  Verifying : cf-cli-6.46.1-1.x86_64 1/1
+
+Installed:
+  cf-cli.x86_64 0:6.46.1-1
+
+Complete!
+```
+For information on installing the CLI on different distributions, see [Installing the cf CLI](https://docs.pivotal.io/pivotalcf/2-3/cf-cli/install-go-cli.html) from PCF.
+
+### Docker Installation
+
+To install Docker on the Delegate, use the following script:
+
+
+```
+apt-get update
+apt-get install -y apt-utils dnsutils docker.io
+```
+Ensure that you run `apt-get update` before running any `apt-get` commands.
+
+### PowerShell
+
+You can run PowerShell scripts on Harness Delegate, even though the delegate must be run on Linux. Linux supports PowerShell using [PowerShell Core](https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-core-on-windows?view=powershell-7).
+
+For information about how to create your script, see [Installing PowerShell on Linux](https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-core-on-linux?view=powershell-7) from Microsoft.
+
+The scripts you run must be supported by the version of PowerShell you install.
+
+Here is an example for Ubuntu 16.04:
+
+
+```
+# Download the Microsoft repository GPG keys
+wget -q https://packages.microsoft.com/config/ubuntu/16.04/packages-microsoft-prod.deb
+
+# Register the Microsoft repository GPG keys
+sudo dpkg -i packages-microsoft-prod.deb
+
+# Update the list of products
+sudo apt-get update
+
+# Install PowerShell
+sudo apt-get install -y powershell
+
+# Start PowerShell
+pwsh
+```
+If apt-get is not installed on your Delegate host, you can use snap (`snap install powershell --classic`). See [Install PowerShell Easily via Snap in Ubuntu 18.04](http://ubuntuhandbook.org/index.php/2018/07/install-powershell-snap-ubuntu-18-04/).
+
diff --git a/docs/platform/2_Delegates/delegate-reference/delegate-environment-variables.md b/docs/platform/2_Delegates/delegate-reference/delegate-environment-variables.md
new file mode 100644
index 00000000000..a8f422a5e8d
--- /dev/null
+++ b/docs/platform/2_Delegates/delegate-reference/delegate-environment-variables.md
@@ -0,0 +1,46 @@
+---
+title: Delegate environment variables
+description: The following table describes the environment variables that apply to the Delegate manifest. Some of these variables are included in the YAML by default; you can specify others based on your use case…
+# sidebar_position: 2
+helpdocs_topic_id: b032tf34k9
+helpdocs_category_id: vm60533pvt
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+The following table describes the environment variables that apply to the Delegate manifest. Some of these variables are included in the YAML by default; you can specify others based on your use case.
+
+| **Name** | **Description** | **Example** |
+| --- | --- | --- |
+| `JAVA_OPTS` | JVM options for the Delegate. Use this variable to override or add JVM parameters.
| `- name: JAVA_OPTS``value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2 -Xms64M"` |
+| `ACCOUNT_ID` | The Harness account ID for the account where this Delegate will attempt to register. This value is added automatically to the Delegate config file (YAML, etc.) when you add the Delegate. | `- name: ACCOUNT_ID``value: H5W8iol5TNWc4G9h5A2MXg` |
+| `ACCOUNT_SECRET` | The Harness account token used to register the Delegate. | `- name: ACCOUNT_SECRET``value: d239xx88bf7xxxxxxx836ea` |
+| `MANAGER_HOST_AND_PORT` | The Harness SaaS manager URL. HTTPS indicates port 443. | `- name: MANAGER_HOST_AND_PORT``value: https://app.harness.io` |
+| `WATCHER_STORAGE_URL` | The URL for the Watcher versions. See [Delegate Installation Overview](https://docs.harness.io/article/re8kk0ex4k-delegate-installation-overview). | `- name: WATCHER_STORAGE_URL``value: https://app.harness.io/public/prod/premium/watchers` |
+| `WATCHER_CHECK_LOCATION` | The Delegate version location for the Watcher to check for. | `- name: WATCHER_CHECK_LOCATION``value: current.version` |
+| `REMOTE_WATCHER_URL_CDN` | The CDN URL for Watcher builds. | `- name: REMOTE_WATCHER_URL_CDN``value: https://app.harness.io/public/shared/watchers/builds` |
+| `DELEGATE_STORAGE_URL` | The URL where published Delegate jars are stored. | `- name: DELEGATE_STORAGE_URL``value: https://app.harness.io` |
+| `DELEGATE_CHECK_LOCATION` | The storage location hosting the published Delegate versions. | `- name: DELEGATE_CHECK_LOCATION``value: delegateprod.txt` |
+| `DEPLOY_MODE` | Deployment mode: Kubernetes, Docker, etc. | `- name: DEPLOY_MODE``value: KUBERNETES` |
+| `DELEGATE_NAME` | The name of the Delegate.
This is the name that appears in Harness when the Delegate is registered. You can automate Delegate creation by omitting the name, and then have a script copy the Delegate YAML file and add a unique name value for each new Delegate you want to register. See [Automate Delegate Installation](../delegate-guide/automate-delegate-installation.md). | `- name: DELEGATE_NAME``value: qa` |
+| `NEXT_GEN` | Indicates that this Delegate will register in [Harness NextGen](https://docs.harness.io/article/ra3nqcdbaf-compare-first-gen-and-next-gen). If it is set to false, the Delegate will attempt to register in Harness FirstGen. | `- name: NEXT_GEN``value: "true"` |
+| `DELEGATE_DESCRIPTION` | The description added to the Delegate in the Harness Manager or YAML before registering. It appears in the Delegate details page in the Harness Manager. | `- name: DELEGATE_DESCRIPTION``value: ""` |
+| `DELEGATE_TYPE` | The type of Delegate. | `- name: DELEGATE_TYPE``value: "KUBERNETES"` |
+| `DELEGATE_TAGS` | The Tags added to the Delegate in the Harness Manager or YAML before registering. Tags are generated by Harness using the Delegate name, but you can also add your own tags. You can specify multiple tags in YAML as a comma-separated list. Tags appear in the Delegate details page in the Harness Manager. See [Tags Reference](../../20_References/tags-reference.md) and [Select Delegates with Tags](../delegate-guide/select-delegates-with-selectors.md). | `- name: DELEGATE_TAGS``value: ""`or,`- name: DELEGATE_TAGS``value: has_jq, has_gcloud` |
+| `DELEGATE_TASK_LIMIT` | The maximum number of tasks the Delegate can perform at once. All of the operations performed by the Delegate are categorized as different types of tasks. | `- name: DELEGATE_TASK_LIMIT``value: "50"` |
+| `DELEGATE_ORG_IDENTIFIER` | The Harness Organization [Identifier](../../20_References/entity-identifier-reference.md) where the Delegate will register. Delegates at the account level do not have a value for this variable.
| `- name: DELEGATE_ORG_IDENTIFIER``value: "engg"` |
+| `DELEGATE_PROJECT_IDENTIFIER` | The Harness Project [Identifier](../../20_References/entity-identifier-reference.md) where the Delegate will register. Delegates at the account or Org level do not have a value for this variable. | `- name: DELEGATE_PROJECT_IDENTIFIER``value: "myproject"` |
+| `PROXY_*` | All of the Delegates include proxy settings you can use to change how the Delegate connects to the Harness Manager. The secretKeyRef entries are named using the Delegate name. | `- name: PROXY_HOST``value: ""``- name: PROXY_PORT``value: ""``- name: PROXY_SCHEME``value: ""``- name: NO_PROXY``value: ""``- name: PROXY_MANAGER``value: "true"``- name: PROXY_USER``valueFrom:``secretKeyRef:``name: mydel-proxy``key: PROXY_USER``- name: PROXY_PASSWORD``valueFrom:``secretKeyRef:``name: mydel-proxy``key: PROXY_PASSWORD` |
+| `INIT_SCRIPT` | You can run scripts on the Delegate using INIT\_SCRIPT. INIT\_SCRIPT is typically not used for Delegates. For the Delegate, initialization should be baked into the image, not executed on startup. | `- name: INIT_SCRIPT``value: |-``echo install wget``apt-get install wget``echo wget installed` |
+| `POLL_FOR_TASKS` | Enables or disables polling for Delegate tasks. By default, the Delegate uses Secure WebSocket (WSS) for tasks. If the PROXY\_\* settings are used and the proxy or some intermediary does not allow WSS, then set POLL\_FOR\_TASKS to true to enable polling. | `- name: POLL_FOR_TASKS``value: "false"` |
+| `HELM_DESIRED_VERSION` | By default, Harness Delegates are installed with and use Helm 3. You can set the Helm version in the Harness Delegate YAML file using the HELM\_DESIRED\_VERSION environment property. Include the v with the version. For example, HELM\_DESIRED\_VERSION: v2.13.0. | `- name: HELM_DESIRED_VERSION``value: ""` |
+| `USE_CDN` | Makes the Delegate use a CDN for new versions. | `- name: USE_CDN``value: "true"` |
+| `CDN_URL` | The CDN URL for Delegate versions.
| `- name: CDN_URL``value: https://app.harness.io` |
+| `JRE_VERSION` | The Java Runtime Environment version used by the Delegate. | `- name: JRE_VERSION``value: 1.8.0_242` |
+| `GRPC_SERVICE_ENABLED,``GRPC_SERVICE_CONNECTOR_PORT` | By default, the Delegate requires HTTP/2 for gRPC (gRPC Remote Procedure Calls) to be enabled for connectivity between the Delegate and Harness Manager. | `- name: GRPC_SERVICE_ENABLED``value: "true"``- name: GRPC_SERVICE_CONNECTOR_PORT``value: "8080"` |
+| `VERSION_CHECK_DISABLED` | By default, the Delegate always checks for new versions (via the Watcher). | `- name: VERSION_CHECK_DISABLED``value: "false"` |
+| `DELEGATE_NAMESPACE` | The namespace for the Delegate is taken from the StatefulSet namespace. | `- name: DELEGATE_NAMESPACE``valueFrom:``fieldRef:``fieldPath: metadata.namespace` |
+
diff --git a/docs/platform/2_Delegates/delegate-reference/delegate-requirements-and-limitations.md b/docs/platform/2_Delegates/delegate-reference/delegate-requirements-and-limitations.md
new file mode 100644
index 00000000000..d9013b7a133
--- /dev/null
+++ b/docs/platform/2_Delegates/delegate-reference/delegate-requirements-and-limitations.md
@@ -0,0 +1,82 @@
+---
+title: Delegate Requirements and Limitations
+description: This topic lists the limitations and requirements of the Harness Delegate. Before you begin. Delegates Overview. Delegate Limitations. Deployment limits -- Deployment limits are set by account type.. Y…
+# sidebar_position: 2
+helpdocs_topic_id: k7sbhe419w
+helpdocs_category_id: vm60533pvt
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This topic lists the limitations and requirements of the Harness Delegate.
+
+### Before you begin
+
+* [Delegates Overview](../delegates-overview.md)
+
+### Delegate Limitations
+
+* **Deployment limits:** Deployment limits are set by account type.
+* You might need to install multiple Delegates depending on how many Continuous Delivery tasks you do concurrently, and on the compute resources you are providing to each Delegate. Typically, you will need one Delegate for every 300-500 service instances across your applications.
+A service instance is when you use Harness to deploy the underlying infrastructure for the instance.
+For example, an instance of a Kubernetes workload where Harness creates the pods, or an instance of an ECS task where Harness creates the service for the task.
+
+### System Requirements
+
+The Delegate is installed in your network and connects to the Harness Manager.
+
+One Delegate size does not fit all use cases, so Harness lets you pick from several options:
+
+![](./static/delegate-requirements-and-limitations-00.png)
+
+Remember that the memory and CPU requirements are for the Delegate only. Your Delegate host/pod/container will need more computing resources for its operating system and other services such as Docker or Kubernetes.
+
+The Delegate runs on a Linux/UNIX server or container.
+
+Ensure that you provide the minimum memory for the Delegate and enough memory for the host/node system. For example, an AWS EC2 instance type such as m5a.xlarge has 16GB of RAM: 8 for the Delegate and 8 for the remaining operations.
+
+The Shell Script Delegate requires cURL 7.64.1 or later.
+
+The Delegate needs access to artifact servers, deployment environments, and cloud providers, as shown in the following illustration:
+
+![](./static/delegate-requirements-and-limitations-01.png)
+### Allowlist Harness Domains and IPs
+
+Harness SaaS Delegates only need outbound access to the Harness domain name (most commonly, **app.harness.io**) and, optionally, to **logging.googleapis.com**.
The URL logging.googleapis.com is used to provide logs to Harness support.
+
+See [Allowlist Harness Domains and IPs](../../20_References/whitelist-harness-domains-and-ips.md).
+
+### Network Requirements
+
+The following network requirements are for connectivity between the Harness Delegate you run in your network and the **Harness Manager** (SaaS or On-Prem), and for your browser connection to the Harness Manager.
+
+All network connections from your local network to Harness SaaS are outbound-only.
+
+* HTTPS port 443 outbound from the Delegate to Harness.
+* HTTP/2 for gRPC (gRPC Remote Procedure Calls).
+* Delegate requirements: The Delegate will need API/SSH/HTTP access to the providers you add to Harness, such as:
+    + Cloud Providers.
+    + Verification Providers.
+    + Artifact Servers (repos).
+    + Source repositories.
+    + Collaboration Providers.
+    + SSH access to target physical and virtual servers.
+
+#### gRPC Limitations
+
+If you do not enable gRPC connections, the following limitations apply:
+
+* [Cloud Cost Management (CCM)](https://docs.harness.io/category/exgoemqhji-ccm) will not collect events.
+* If the `ARTIFACT_PERPETUAL_TASK` feature flag is enabled in your account, Harness performs perpetual artifact collection. If you do not enable gRPC connections, this will not work.
+
+Contact [Harness Support](mailto:support@harness.io) to enable or disable feature flags.
+
+### Permissions and Ports
+
+See [Permissions and Ports for Harness Connections](../../20_References/permissions-and-ports-for-harness-connections.md).
+
+### Add Certificates and Other Software to Delegate
+
+For steps on adding certs or other software to the Delegate, see [Common Delegate Initialization Scripts](common-delegate-profile-scripts.md).
+
+### Delegate Access Requirements
+
+* The Harness Delegate does NOT require root account access, but the Kubernetes and Docker Delegates run as root by default.
If you do not need to install applications using Delegate Profiles, then you can use a non-root account or install the application without the Delegate. +See [Non-Root Delegate Installation](../delegate-guide/non-root-delegate-installation.md). +* If you do not run the Delegate as root, be aware that you cannot install any software using a [Delegate Initialization Script](common-delegate-profile-scripts.md). + diff --git a/docs/platform/2_Delegates/delegate-reference/example-harness-delegate-yaml.md b/docs/platform/2_Delegates/delegate-reference/example-harness-delegate-yaml.md new file mode 100644 index 00000000000..d8cc4d2768a --- /dev/null +++ b/docs/platform/2_Delegates/delegate-reference/example-harness-delegate-yaml.md @@ -0,0 +1,285 @@ +--- +title: Example -- harness-delegate.yaml +description: This example harness-delegate.yaml file implements the approach of using the Kubernetes emptyDir object with an initialization ( INIT ) container to move binaries to the delegate image. For more info… +# sidebar_position: 2 +helpdocs_topic_id: 2ayo3dqret +helpdocs_category_id: vm60533pvt +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This example harness-delegate.yaml file implements the approach of using the Kubernetes `emptyDir` object with an initialization (`INIT`) container to move binaries to the delegate image. + +For more information, see [Install Delegates with Third-Party Tools](../delegate-guide/install-delegates-with-third-party-tools.md) in the *Delegate Guide*. 
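As background, the `emptyDir`-plus-init-container pattern that the full example below uses can be reduced to a minimal sketch. The pod name, the copied file, and the `busybox` image here are illustrative stand-ins, not part of the Harness manifest:

```
# Minimal sketch: an init container writes a binary into a shared
# emptyDir volume, and the main (delegate) container mounts the same
# volume, so the binary is present before the main process starts.
apiVersion: v1
kind: Pod
metadata:
  name: init-volume-sketch      # illustrative name
spec:
  initContainers:
    - name: fetch-tool
      image: curlimages/curl    # same image the full example uses
      command: ["sh", "-c", "cp /usr/bin/curl /tools/"]  # stand-in for a real download
      volumeMounts:
        - mountPath: /tools
          name: tools
  containers:
    - name: main
      image: busybox            # stand-in for the delegate image
      command: ["sh", "-c", "ls /tools && sleep 3600"]
      volumeMounts:
        - mountPath: /tools
          name: tools
  volumes:
    - name: tools
      emptyDir: {}
```

Because init containers must complete before the main containers start, this ordering guarantee is what makes the approach safe for staging binaries.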
+ + +``` +apiVersion: v1 +kind: Namespace +metadata: + name: harness-delegate-ng + +--- + +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: harness-delegate-ng-cluster-admin +subjects: + - kind: ServiceAccount + name: default + namespace: harness-delegate-ng +roleRef: + kind: ClusterRole + name: cluster-admin + apiGroup: rbac.authorization.k8s.io + +--- + +apiVersion: v1 +kind: Secret +metadata: + name: markom-secret-account-token + namespace: harness-delegate-ng +type: Opaque +data: + ACCOUNT_SECRET: "ZTUzNzllZGUzNjk0ZWVmYTA1N2JmMmI1ZTEzNjQ1YzU=" + +--- + +# If delegate needs to use a proxy, please follow instructions available in the documentation +# https://ngdocs.harness.io/article/5ww21ewdt8-configure-delegate-proxy-settings + +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + harness.io/name: markom-secret + name: markom-secret + namespace: harness-delegate-ng +spec: + replicas: 1 + selector: + matchLabels: + harness.io/name: markom-secret + template: + metadata: + labels: + harness.io/name: markom-secret + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "3460" + prometheus.io/path: "/api/metrics" + spec: + terminationGracePeriodSeconds: 600 + restartPolicy: Always + securityContext: + fsGroup: 1001 + runAsUser: 1001 + containers: + - image: harness/delegate:22.07.75836.minimal + imagePullPolicy: Always + name: delegate + ports: + - containerPort: 8080 + resources: + limits: + cpu: "0.5" + memory: "2048Mi" + requests: + cpu: "0.5" + memory: "2048Mi" + livenessProbe: + httpGet: + path: /api/health + port: 3460 + scheme: HTTP + initialDelaySeconds: 10 + periodSeconds: 10 + failureThreshold: 2 + startupProbe: + httpGet: + path: /api/health + port: 3460 + scheme: HTTP + initialDelaySeconds: 30 + periodSeconds: 10 + failureThreshold: 15 + envFrom: + - secretRef: + name: markom-secret-account-token + env: + - name: JAVA_OPTS + value: "-Xms64M" + - name: ACCOUNT_ID + value: D3fzqqYxSmGYPzWMvroIWw + - 
name: MANAGER_HOST_AND_PORT + value: https://qa.harness.io/gratis + - name: DEPLOY_MODE + value: KUBERNETES + - name: DELEGATE_NAME + value: markom-secret + - name: DELEGATE_TYPE + value: "KUBERNETES" + - name: DELEGATE_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: INIT_SCRIPT + value: "" + - name: DELEGATE_DESCRIPTION + value: "" + - name: DELEGATE_TAGS + value: "" + - name: NEXT_GEN + value: "true" + - name: CLIENT_TOOLS_DOWNLOAD_DISABLED + value: "true" + - name: LOG_STREAMING_SERVICE_URL + value: "https://qa.harness.io/gratis/log-service/" + volumeMounts: + - mountPath: /opt/harness-delegate/client-tools + name: client-tools + initContainers: + - name: install-kubectl + image: curlimages/curl + command: ['sh', '-c', "mkdir -m 777 -p /client-tools/kubectl/v1.13.2 \ + && curl -#s -L -o /client-tools/kubectl/v1.13.2/kubectl https://app.harness.io/public/shared/tools/kubectl/release/v1.13.2/bin/linux/amd64/kubectl \ + && chmod +x /client-tools/kubectl/v1.13.2/kubectl"] + args: + - chown 1001 /client-tools; + volumeMounts: + - mountPath: /client-tools + name: client-tools + - name: install-helm3 + image: curlimages/curl + command: ['sh', '-c', "mkdir -m 777 -p /client-tools/helm/v3.8.0 \ + && curl -#s -L -o /client-tools/helm/v3.8.0/helm https://app.harness.io/public/shared/tools/helm/release/v3.8.0/bin/linux/amd64/helm \ + && chmod +x /client-tools/helm/v3.8.0/helm"] + args: + - chown 1001 /client-tools; + volumeMounts: + - mountPath: /client-tools + name: client-tools + volumes: + - name: client-tools + emptyDir: {} + +--- + +apiVersion: v1 +kind: Service +metadata: + name: delegate-service + namespace: harness-delegate-ng +spec: + type: ClusterIP + selector: + harness.io/name: markom-secret + ports: + - port: 8080 + +--- + +kind: Role +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: upgrader-cronjob + namespace: harness-delegate-ng +rules: + - apiGroups: ["batch", "apps", "extensions"] + resources: ["cronjobs"] + verbs: 
["get", "list", "watch", "update", "patch"] + - apiGroups: ["extensions", "apps"] + resources: ["deployments"] + verbs: ["get", "list", "watch", "create", "update", "patch"] + +--- + +kind: RoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: markom-secret-upgrader-cronjob + namespace: harness-delegate-ng +subjects: + - kind: ServiceAccount + name: upgrader-cronjob-sa + namespace: harness-delegate-ng +roleRef: + kind: Role + name: upgrader-cronjob + apiGroup: "" + +--- + +apiVersion: v1 +kind: ServiceAccount +metadata: + name: upgrader-cronjob-sa + namespace: harness-delegate-ng + +--- + +apiVersion: v1 +kind: Secret +metadata: + name: markom-secret-upgrader-token + namespace: harness-delegate-ng +type: Opaque +data: + UPGRADER_TOKEN: "ZTUzNzllZGUzNjk0ZWVmYTA1N2JmMmI1ZTEzNjQ1YzU=" + +--- + +apiVersion: v1 +kind: ConfigMap +metadata: + name: markom-secret-upgrader-config + namespace: harness-delegate-ng +data: + config.yaml: | + mode: Delegate + dryRun: false + workloadName: markom-secret + namespace: harness-delegate-ng + containerName: delegate + delegateConfig: + accountId: D3fzqqYxSmGYPzWMvroIWw + managerHost: https://qa.harness.io/gratis + +--- + +apiVersion: batch/v1beta1 +kind: CronJob +metadata: + labels: + harness.io/name: markom-secret-upgrader-job + name: markom-secret-upgrader-job + namespace: harness-delegate-ng +spec: + schedule: "0 */1 * * *" + concurrencyPolicy: Forbid + startingDeadlineSeconds: 20 + jobTemplate: + spec: + suspend: true + template: + spec: + serviceAccountName: upgrader-cronjob-sa + restartPolicy: Never + containers: + - image: us.gcr.io/qa-target/upgrader:1.0.0 + name: upgrader + imagePullPolicy: Always + envFrom: + - secretRef: + name: markom-secret-upgrader-token + volumeMounts: + - name: config-volume + mountPath: /etc/config + volumes: + - name: config-volume + configMap: + name: markom-secret-upgrader-config + +``` diff --git 
a/docs/platform/2_Delegates/delegate-reference/example-kubernetes-manifest-harness-delegate.md b/docs/platform/2_Delegates/delegate-reference/example-kubernetes-manifest-harness-delegate.md
new file mode 100644
index 00000000000..bad9c8e5950
--- /dev/null
+++ b/docs/platform/2_Delegates/delegate-reference/example-kubernetes-manifest-harness-delegate.md
@@ -0,0 +1,258 @@
+---
+title: Example Kubernetes manifest for Harness Delegate
+description: The following provides an example of a Kubernetes manifest used to configure Harness Delegate. apiVersion -- v1 kind -- Namespace metadata -- name -- harness-delegate-ng --- apiVersion -- rbac.authorization.k8…
+# sidebar_position: 2
+helpdocs_topic_id: cjtk5rw8z4
+helpdocs_category_id: vm60533pvt
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+The following provides an example of a Kubernetes manifest used to configure Harness Delegate.
+
+
+```
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: harness-delegate-ng
+
+---
+
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: harness-delegate-ng-cluster-admin
+subjects:
+  - kind: ServiceAccount
+    name: default
+    namespace: harness-delegate-ng
+roleRef:
+  kind: ClusterRole
+  name: cluster-admin
+  apiGroup: rbac.authorization.k8s.io
+
+---
+
+apiVersion: v1
+kind: Secret
+metadata:
+  name: immutable-delegate-account-token
+  namespace: harness-delegate-ng
+type: Opaque
+data:
+  ACCOUNT_SECRET: 
+
+---
+
+# If delegate needs to use a proxy, please follow instructions available in the documentation
+# https://ngdocs.harness.io/article/5ww21ewdt8-configure-delegate-proxy-settings
+
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  labels:
+    harness.io/name: 
+  name: 
+  namespace: harness-delegate-ng
+spec:
+  replicas: 2
+  selector:
+    matchLabels:
+      harness.io/name: 
+  template:
+    metadata:
+      labels:
+        harness.io/name: 
+      annotations:
+        prometheus.io/scrape: "true"
+        prometheus.io/port: "3460"
+        prometheus.io/path: "/api/metrics"
+    spec:
+      terminationGracePeriodSeconds: 600
+      restartPolicy: Always
+      containers:
+      - image: 
+        imagePullPolicy: Always
+        name: delegate
+        ports:
+          - containerPort: 8080
+        resources:
+          limits:
+            cpu: "0.5"
+            memory: "2048Mi"
+          requests:
+            cpu: "0.5"
+            memory: "2048Mi"
+        livenessProbe:
+          httpGet:
+            path: /api/health
+            port: 3460
+            scheme: HTTP
+          initialDelaySeconds: 120
+          periodSeconds: 10
+          failureThreshold: 2
+        envFrom:
+        - secretRef:
+            name: immutable-delegate-account-token
+        env:
+        - name: JAVA_OPTS
+          value: "-Xms64M"
+        - name: ACCOUNT_ID
+          value: 
+        - name: MANAGER_HOST_AND_PORT
+          value: 
+        - name: DEPLOY_MODE
+          value: KUBERNETES
+        - name: DELEGATE_NAME
+          value: 
+        - name: DELEGATE_TYPE
+          value: "KUBERNETES"
+        - name: DELEGATE_NAMESPACE
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.namespace
+        - name: INIT_SCRIPT
+          value: ""
+        - name: DELEGATE_DESCRIPTION
+          value: ""
+        - name: DELEGATE_TAGS
+          value: ""
+        - name: DELEGATE_ORG_IDENTIFIER
+          value: ""
+        - name: DELEGATE_PROJECT_IDENTIFIER
+          value: ""
+        - name: NEXT_GEN
+          value: "true"
+        - name: CLIENT_TOOLS_DOWNLOAD_DISABLED
+          value: "true"
+        - name: LOG_STREAMING_SERVICE_URL
+          value: "https://app.harness.io/log-service/ OR https://app.harness.io/gratis/log-service/"
+
+---
+
+apiVersion: v1
+kind: Service
+metadata:
+  name: delegate-service
+  namespace: harness-delegate-ng
+spec:
+  type: ClusterIP
+  selector:
+    harness.io/name: 
+  ports:
+    - port: 8080
+
+```
diff --git a/docs/platform/2_Delegates/delegate-reference/sample-create-a-permanent-volume-nfs-server.md
b/docs/platform/2_Delegates/delegate-reference/sample-create-a-permanent-volume-nfs-server.md new file mode 100644 index 00000000000..4ba3d775231 --- /dev/null +++ b/docs/platform/2_Delegates/delegate-reference/sample-create-a-permanent-volume-nfs-server.md @@ -0,0 +1,80 @@ +--- +title: Sample - Create a permanent volume - NFS server +description: This Kubernetes manifest creates a permanent volume for NFS. apiVersion -- apps/v1 kind -- Deployment metadata -- name -- nfs-server spec -- replicas -- 1 selector -- matchLabels -- role -- nfs-server template -- metada… +# sidebar_position: 2 +helpdocs_topic_id: 3onmos2n3v +helpdocs_category_id: vm60533pvt +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This Kubernetes manifest creates a permanent volume for NFS. + + +``` +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nfs-server +spec: + replicas: 1 + selector: + matchLabels: + role: nfs-server + template: + metadata: + labels: + role: nfs-server + spec: + containers: + - name: nfs-server + image: k8s.gcr.io/volume-nfs:0.8 + ports: + - name: nfs + containerPort: 2049 + - name: mountd + containerPort: 20048 + - name: rpcbind + containerPort: 111 + securityContext: + privileged: true + volumeMounts: + - mountPath: /exports + name: markom-pvc + volumes: + - name: markom-pvc + persistentVolumeClaim: + claimName: nfs-pv-markom + +--- + +kind: Service +apiVersion: v1 +metadata: + name: nfs-server +spec: + ports: + - name: nfs + port: 2049 + - name: mountd + port: 20048 + - name: rpcbind + port: 111 + selector: + role: nfs-server + +--- + +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: nfs-pv-markom + labels: + demo: nfs-pv-provisioning +spec: + accessModes: [ "ReadWriteOnce" ] + resources: + requests: + storage: 1Gi + +``` diff --git a/docs/platform/2_Delegates/delegate-reference/sample-harness-delegate-yaml-with-nfs-volume-mounted.md 
b/docs/platform/2_Delegates/delegate-reference/sample-harness-delegate-yaml-with-nfs-volume-mounted.md new file mode 100644 index 00000000000..7d10b317137 --- /dev/null +++ b/docs/platform/2_Delegates/delegate-reference/sample-harness-delegate-yaml-with-nfs-volume-mounted.md @@ -0,0 +1,157 @@ +--- +title: Sample harness-delegate.yaml with NFS volume mounted +description: This sample harness-delegate.yaml declares a mounted NFS volume. apiVersion -- v1 kind -- Namespace metadata -- name -- harness-delegate-ng --- apiVersion -- rbac.authorization.k8s.io/v1 kind -- ClusterRoleBindi… +# sidebar_position: 2 +helpdocs_topic_id: hipzqa4ntk +helpdocs_category_id: vm60533pvt +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This sample harness-delegate.yaml declares a mounted NFS volume. + + +``` +apiVersion: v1 +kind: Namespace +metadata: + name: harness-delegate-ng + +--- + +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: harness-delegate-ng-cluster-admin +subjects: + - kind: ServiceAccount + name: default + namespace: harness-delegate-ng +roleRef: + kind: ClusterRole + name: cluster-admin + apiGroup: rbac.authorization.k8s.io + +--- + +apiVersion: v1 +kind: Secret +metadata: + name: markom-secret-account-token + namespace: harness-delegate-ng +type: Opaque +data: + ACCOUNT_SECRET: "ZTUzNzllZGUzNjk0ZWVmYTA1N2JmMmI1ZTEzNjQ1YzU=" + +--- + +# If delegate needs to use a proxy, please follow instructions available in the documentation +# https://ngdocs.harness.io/article/5ww21ewdt8-configure-delegate-proxy-settings + +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + harness.io/name: markom-secret + name: markom-secret + namespace: harness-delegate-ng +spec: + replicas: 1 + selector: + matchLabels: + harness.io/name: markom-secret + template: + metadata: + labels: + harness.io/name: markom-secret + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "3460" + prometheus.io/path: "/api/metrics" + spec: 
+ terminationGracePeriodSeconds: 600 + restartPolicy: Always + containers: + - image: harness/delegate:22.07.75836.minimal + imagePullPolicy: Always + name: delegate + ports: + - containerPort: 8080 + resources: + limits: + cpu: "0.5" + memory: "2048Mi" + requests: + cpu: "0.5" + memory: "2048Mi" + livenessProbe: + httpGet: + path: /api/health + port: 3460 + scheme: HTTP + initialDelaySeconds: 10 + periodSeconds: 10 + failureThreshold: 2 + startupProbe: + httpGet: + path: /api/health + port: 3460 + scheme: HTTP + initialDelaySeconds: 30 + periodSeconds: 10 + failureThreshold: 15 + envFrom: + - secretRef: + name: markom-secret-account-token + env: + - name: JAVA_OPTS + value: "-Xms64M" + - name: ACCOUNT_ID + value: D3fzqqYxSmGYPzWMvroIWw + - name: MANAGER_HOST_AND_PORT + value: https://qa.harness.io/gratis + - name: DEPLOY_MODE + value: KUBERNETES + - name: DELEGATE_NAME + value: markom-secret + - name: DELEGATE_TYPE + value: "KUBERNETES" + - name: DELEGATE_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: INIT_SCRIPT + value: "" + - name: DELEGATE_DESCRIPTION + value: "" + - name: DELEGATE_TAGS + value: "" + - name: NEXT_GEN + value: "true" + - name: CLIENT_TOOLS_DOWNLOAD_DISABLED + value: "true" + - name: LOG_STREAMING_SERVICE_URL + value: "https://qa.harness.io/gratis/log-service/" + volumeMounts: + - mountPath: "/opt/harness-delegate/client-tools" + name: nfs + volumes: + - name: nfs + persistentVolumeClaim: + claimName: nfs-ng + +--- + +apiVersion: v1 +kind: Service +metadata: + name: delegate-service + namespace: harness-delegate-ng +spec: + type: ClusterIP + selector: + harness.io/name: markom-secret + ports: + - port: 8080 + +``` diff --git a/docs/platform/2_Delegates/delegate-reference/sample-kubernetes-manifest-nfs-volume.md b/docs/platform/2_Delegates/delegate-reference/sample-kubernetes-manifest-nfs-volume.md new file mode 100644 index 00000000000..0a366577e21 --- /dev/null +++ 
b/docs/platform/2_Delegates/delegate-reference/sample-kubernetes-manifest-nfs-volume.md @@ -0,0 +1,45 @@ +--- +title: Sample -- Kubernetes manifest - NFS volume +description: This Kubernetes manifest creates an NFS volume. For a sample manifest for an NFS server, see Sample -- Create a Permanent Volume - NFS Server. apiVersion -- v1 kind -- PersistentVolumeClaim metadata -- name -- … +# sidebar_position: 2 +helpdocs_topic_id: 6929n499sf +helpdocs_category_id: vm60533pvt +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This Kubernetes manifest creates an NFS volume. For a sample manifest for an NFS server, see [Sample: Create a Permanent Volume - NFS Server](sample-create-a-permanent-volume-nfs-server.md). + + +``` +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: nfs-ng +spec: + accessModes: + - ReadWriteMany + storageClassName: "" + resources: + requests: + storage: 1Gi + volumeName: nfs-ng + +--- + +apiVersion: v1 +kind: PersistentVolume +metadata: + name: nfs-ng +spec: + capacity: + storage: 1Gi + accessModes: + - ReadWriteMany + nfs: + server: nfs-server.default.svc.cluster.local + path: "/" + mountOptions: + - nfsvers=4.2 + +``` diff --git a/docs/platform/2_Delegates/delegate-reference/static/delegate-requirements-and-limitations-00.png b/docs/platform/2_Delegates/delegate-reference/static/delegate-requirements-and-limitations-00.png new file mode 100644 index 00000000000..9125b15f8cb Binary files /dev/null and b/docs/platform/2_Delegates/delegate-reference/static/delegate-requirements-and-limitations-00.png differ diff --git a/docs/platform/2_Delegates/delegate-reference/static/delegate-requirements-and-limitations-01.png b/docs/platform/2_Delegates/delegate-reference/static/delegate-requirements-and-limitations-01.png new file mode 100644 index 00000000000..7c7263d19bb Binary files /dev/null and b/docs/platform/2_Delegates/delegate-reference/static/delegate-requirements-and-limitations-01.png differ diff --git 
a/docs/platform/2_Delegates/delegates-overview.md b/docs/platform/2_Delegates/delegates-overview.md new file mode 100644 index 00000000000..0eddee3f100 --- /dev/null +++ b/docs/platform/2_Delegates/delegates-overview.md @@ -0,0 +1,230 @@ +--- +title: Delegate overview +description: Harness Delegate is a service you run in your local network or VPC to connect your artifact, infrastructure, collaboration, verification and other providers with Harness Manager. +sidebar_position: 1 +helpdocs_topic_id: 2k7lnc7lvl +helpdocs_category_id: sy6sod35zi +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness Delegate is a service you run in your local network or VPC to connect your artifact, infrastructure, collaboration, verification, and other providers with Harness Manager. + +The first time you connect Harness to a third-party resource, Harness Delegate is installed in your target infrastructure, for example, a Kubernetes cluster. + +After the Delegate is installed, you connect to third-party resources. The Delegate performs all operations, including deployment and integration. + +Harness Delegate is built for parallelism and performs tasks and deployments in parallel. The following table includes performance benchmarks for one NG Delegate executing perpetual tasks and parallel deployments in a Kubernetes environment. + +| **Delegate** | **Compute resources** | **Task type** | **Running in parallel** | +| --- | --- | --- | --- | +| NextGen Delegate | 0.5 CPU, 2 GiB | Perpetual | 40 tasks | +| NextGen Delegate | 0.5 CPU, 2 GiB | Kubernetes deployment | 10 deployments | +| NextGen Delegate | 1.0 CPU, 4 GiB | Kubernetes deployment | 20 deployments | + +### Limitations and requirements + +See [Delegate Requirements and Limitations](delegate-reference/delegate-requirements-and-limitations.md). + +### Data the delegate sends to Harness Manager + +Harness Delegate connects to Harness Manager over an outbound HTTPS/WSS connection.
+ +![](./static/delegates-overview-00.png) +The Delegate connects to Harness Manager (via SaaS) over a Secure WebSockets channel (WebSockets over TLS). The channel is used to send notifications of Delegate task events and to exchange connection heartbeats. The channel is not used to send task data itself. + +* **Heartbeat** - The Delegate sends a [heartbeat](https://en.wikipedia.org/wiki/Heartbeat_(computing)) to notify Harness Manager that it is running. +* **Deployment data** - The Delegate sends information retrieved from API calls to Harness Manager for display on the **Deployments** page. +* **Time series and log data for Continuous Verification** - The Delegate connects to the verification providers you configure and sends the data retrieved from those providers to Harness Manager for display in Harness Continuous Verification. + +### Where do I install the delegate? + +* **Evaluating Harness** - When evaluating Harness, you might want to install the Delegate locally. Ensure that it has access to the artifact sources, deployment environments, and verification providers you want to use with Harness. + +* **Development, QA, and Production** - The Delegate should be installed behind your firewall and in the same VPC as the microservices you are deploying. The Delegate must have access to the artifact servers, deployment environments, and cloud providers it needs. + +### Root vs non-root + +Harness Delegate does not have a root image. There are two non-root images that use the same version. For example: + +* `harness/delegate:22.03.74411` +* `harness/delegate:22.03.74411.minimal` + +The first image includes client tools such as `kubectl`, Helm, and ChartMuseum. The second image, to which the `minimal` suffix is appended, does not include those client tools. + +If you want to add tools to the image, Harness recommends creating a custom image.
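As a minimal sketch of such a custom image, you could extend the `minimal` image and install only the client tools you need. Everything below is an illustrative assumption, not a Harness-published recipe: the base tag, the kubectl version, the availability of `curl` in the base image, and the non-root UID are all placeholders to adapt to your environment.

```
# Hypothetical example: extend the minimal delegate image with kubectl.
# The base tag and kubectl version below are placeholders; pin the versions
# you actually use, and verify the base image provides curl (an assumption here).
FROM harness/delegate:22.03.74411.minimal

USER root
RUN curl -sLO "https://dl.k8s.io/release/v1.24.3/bin/linux/amd64/kubectl" \
    && install -m 0755 kubectl /usr/local/bin/kubectl \
    && rm kubectl
# Switch back to a non-root user; the UID is an assumption about the base image.
USER 1001
```

You would then push the image to your registry and reference it in place of the stock delegate image in your deployment manifest.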
+ +### Install Harness Delegate + +For basic information on installing Harness Delegate, see the following topics: + +* [Install Harness Delegate on Kubernetes](delegate-install-kubernetes/install-harness-delegate-on-kubernetes.md) +* [Install Harness Delegate Using Helm](delegate-install-kubernetes/install-harness-delegate-using-helm.md) +* [Install a Docker Delegate](delegate-install-docker/install-a-docker-delegate.md) +* [Install a Legacy Kubernetes Delegate](delegate-guide/install-a-kubernetes-delegate.md) + +For advanced installation topics, see the following: + +* [Automate Delegate Installation](delegate-guide/automate-delegate-installation.md) +* [Non-Root Delegate Installation](delegate-guide/non-root-delegate-installation.md) +* [Install a Delegate with Third-Party Custom Tool Binaries](delegate-guide/install-a-delegate-with-3-rd-party-tool-custom-binaries.md) + +### Delegate sizes + +One Delegate size does not fit all use cases, so Harness lets you pick from several options: + +![](./static/delegates-overview-01.png) +Remember that the memory and CPU requirements are for the Delegate only. Your Delegate host/pod/container needs additional computing resources for its operating system and other services such as Docker or Kubernetes. + +### How does Harness Manager identify delegates? + +All Delegates are identified by your Harness account ID. Depending on the type of Delegate, there are additional factors. + +For Delegates running on virtual machines, such as the Shell Script and Docker Delegates running on an AWS EC2 instance, the Delegate is identified by the combination of **Hostname** and **IP**. + +Therefore, if the hostname or IP changes on the VM, the Delegate cannot be identified by the Harness Manager. The IP used is the private IP. The Delegate connects to the Harness Manager, but the Harness Manager does not initiate a connection to the Delegate, so the public IP address of the Delegate is typically not needed.
+ +For Kubernetes Delegates, the IP can change if a pod is rescheduled, for example. Consequently, Kubernetes Delegates are identified by a suffix using a unique six-letter code in their **Hostname** (the first six letters that occur in your account ID). + +### How does Harness Manager pick delegates? + +Delegates are used by Harness for all operations. For example: + +* **Connectors:** Connectors are used for all third-party connections. +* **Pipeline Services and Infrastructure:** Connectors are used in Pipeline Service connections to repos and Pipeline Infrastructure connections to target environments (deployment targets, build farms, etc.). +* **Pipeline Steps:** You can select a Delegate in each Pipeline step to ensure that the step only uses that Delegate to perform its operation. + +In all of these Delegate uses, you can select one or more specific Delegates to perform the operation (using Delegate Tags). If you do not specify specific Delegates, Harness assigns the task to a Delegate. + +#### Task assignment + +In cases where you have selected specific Delegates to perform the task, Harness uses those Delegates only. If these Delegates cannot perform the task, Harness does not use another Delegate. + +In cases where you do not select specific Delegates, Harness uses any available Delegate to perform the task. Harness uses the following process and criteria to pick a Delegate. + +When a task is ready to be assigned, the Harness Manager first validates its list of Delegates to see which Delegate should be assigned the task. + +The following information describes how the Harness Manager validates and assigns tasks to a Delegate: + +* **Heartbeats** - Running Delegates send heartbeats to the Harness Manager at one-minute intervals. If the Manager does not have a heartbeat for a Delegate when a task is ready to be assigned, it does not assign the task to that Delegate.
+* **Tags** - For more information, see [Select Delegates with Tags](delegate-guide/select-delegates-with-selectors.md). +* **Allowlisting** - Once a Delegate has been validated for a task, it is allowlisted for that task and will likely be used again for that task. The allowlisting criterion is the URL associated with the task, such as a connection to a cloud platform, repo, or API. A Delegate is allowlisted for all tasks using that URL. The Time-To-Live (TTL) for the allowlisting is 6 hours, and the TTL is reset with each successful task validation. +* **Blocklisting** - If a Delegate fails to perform a task, that Delegate is blocklisted for that task and will not be tried again. The TTL is 5 minutes. This is true even if there is only one Delegate and even if the Delegate is selected for that task with a Selector, such as with a Shell Script step in a Stage. + +#### Delegate selection in pipelines + +As stated above, Delegates are selected in Service and Infrastructure Connectors and in steps. + +For example, in the **Infrastructure** section of a stage, there is a **Connector** setting. For Harness CD, this is the Connector to the target infrastructure. For Harness CI, this is the Connector to the build farm. + +![](./static/delegates-overview-02.png) +When you add Connectors to Harness, you can select several or all Delegates for the Connector to use. + +Each CD step in the stage Execution has a **Delegate Selector** setting. + +![](./static/delegates-overview-03.png) +Here you use Delegate Tags to select the Delegate(s) to use.
+ +Harness will try this Delegate first for the step task because this Delegate has been successful in the target environment. + +Most CI steps use Connectors to pull the image of the container where the step will run. The Delegates used for the step's Connector are not necessarily used for running the step. In general, the Delegate(s) used for the Connector in the **Infrastructure** build farm are used to run the step. + +### Delegate high availability (HA) + +You might need to install multiple Delegates depending on how many Continuous Delivery tasks you do concurrently, and on the compute resources you are providing to each Delegate. Typically, you will need one Delegate for every 300-500 service instances across your applications. + +In addition to compute considerations, you can enable High Availability (HA) for Harness Delegates. HA simply involves installing multiple Delegates in your environment. + +For example, your Kubernetes deployment could include two Kubernetes Delegates, each running in its own pod in the same target cluster. To add Delegates to your deployment, increase the desired count of Delegate replica pods in the **spec** section of the harness-kubernetes.yaml file that you download from Harness: + + +``` +... +apiVersion: apps/v1beta1 +kind: StatefulSet +metadata: + labels: + harness.io/app: harness-delegate + harness.io/account: xxxx + harness.io/name: test + name: test-zeaakf + namespace: harness-delegate +spec: + replicas: 2 + selector: + matchLabels: + harness.io/app: harness-delegate +... +``` +For the Kubernetes Delegate, you only need one Delegate in the cluster. Simply increase the number of replicas, and nothing else. Do not add another Delegate to the cluster in an attempt to achieve HA. If you want to install Kubernetes Delegates in separate clusters, do not use the same **harness-kubernetes.yaml** and name for both Delegates. Download a new Kubernetes YAML spec from Harness for each Delegate you want to install.
This will avoid name conflicts. In every case, Delegates must be identical in terms of permissions, keys, connectivity, etc. With two or more Delegates running in the same target environment, HA is provided by default. One Delegate can go down without impacting Harness' ability to perform deployments. If you want more availability, you can set up three Delegates to handle the loss of two Delegates, and so on. + +Two Delegates in different locations with different connectivity do not support HA. For example, if you have a Delegate in a Dev environment and another in a Prod environment, the Dev Delegate will not communicate with the Prod Delegate or vice versa. If either Delegate went down, Harness would not operate in that environment. + +### Delegate scope + +Delegates are scoped in two ways: + +#### Project/Org/Accounts + +You can add Delegates at the Project, Org, and Account level. Delegate availability then becomes subject to the implicit Harness Project, Org, and Account hierarchy. + +For example, let's look at two users, Alex and Uri, and the Delegates (D*n*) available to them: + +![](./static/delegates-overview-04.png) +Alex's Pipelines can use Delegates D1, D2, or D4. + +Uri's Pipelines can use Delegates D1, D3, or D5. + +### Delegate tags + +When Harness makes a connection via its Delegates, it selects the best Delegate according to [How does Harness Manager pick delegates?](#how-does-harness-manager-pick-delegates). + +To ensure a specific Delegate is used by a Harness entity, you can add Tags to Delegates and then reference the Tags in commands and Connectors. + +See [Select Delegates with Tags](delegate-guide/select-delegates-with-selectors.md). + +### Delegate log file + +The Delegate creates a new log file each day, named **delegate.log**, and its maximum size is 50MB. + +Every day the log file is saved with the day's date and a new log file is created.
+ +If a log file grows beyond 50MB in a day, the log file is renamed with today's date and a new log file is created. + +Harness keeps log files for today and the previous 10 days (up to 1 GB in total). + +### Delegate permissions + +You can set permissions on Delegates using [Harness RBAC](../4_Role-Based-Access-Control/1-rbac-in-harness.md). + +You create roles and then assign them to Harness Users. + +The role permissions for Delegates are: + +* **Delegate permissions:** Create/Edit, Delete, View. +* The Delegate **View** permission cannot be disabled. Every user has permission to view the Delegate. + +Access to a Delegate can also be restricted by downstream resource types: + +* **Pipelines:** Execute +* **Secrets:** Access +* **Connectors:** Access + +This means that if a role does not have these permissions, the User with that role cannot use the related Delegates in these Pipelines, Secrets, or Connectors. + +### Third-party tools installed with the delegate + +See [Supported Platforms and Technologies](https://docs.harness.io/article/1e536z41av-supported-platforms-and-technologies).
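The task-assignment rules described earlier in this topic (heartbeats, tag selectors, blocklisting, and allowlisting) can be illustrated with a short sketch. This is not Harness code; every name, field, and timeout here is a hypothetical model of the selection logic, and `https://repo.example` is a made-up task URL:

```python
from datetime import datetime, timedelta

# Heartbeats arrive roughly every minute; treat a few missed beats as "down".
# The exact timeout is an assumption for illustration, not a Harness internal.
HEARTBEAT_TIMEOUT = timedelta(minutes=3)

def pick_delegates(delegates, task, now):
    """Hypothetical sketch of the assignment rules described above."""
    candidates = []
    for d in delegates:
        # Skip delegates whose heartbeat is stale.
        if now - d["last_heartbeat"] > HEARTBEAT_TIMEOUT:
            continue
        # If the task selects tags, only delegates with all those tags qualify.
        if task["tags"] and not set(task["tags"]) <= set(d["tags"]):
            continue
        # Skip delegates blocklisted for this task's URL (recent failure).
        if task["url"] in d["blocklist"]:
            continue
        candidates.append(d)
    # Prefer delegates already allowlisted (previously validated) for this URL.
    candidates.sort(key=lambda d: task["url"] not in d["allowlist"])
    return candidates

now = datetime(2022, 9, 1, 12, 0)
delegates = [
    {"name": "d1", "last_heartbeat": now - timedelta(minutes=1),
     "tags": ["k8s"], "allowlist": set(), "blocklist": set()},
    {"name": "d2", "last_heartbeat": now - timedelta(minutes=1),
     "tags": ["k8s"], "allowlist": {"https://repo.example"}, "blocklist": set()},
    {"name": "d3", "last_heartbeat": now - timedelta(minutes=30),
     "tags": ["k8s"], "allowlist": set(), "blocklist": set()},
]
task = {"url": "https://repo.example", "tags": ["k8s"]}
print([d["name"] for d in pick_delegates(delegates, task, now)])  # ['d2', 'd1']
```

The stale delegate is filtered out, and the allowlisted delegate is tried first, mirroring the priority the documentation describes.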
+ diff --git a/docs/platform/2_Delegates/install-delegate/_category_.json b/docs/platform/2_Delegates/install-delegate/_category_.json new file mode 100644 index 00000000000..07dcaffe965 --- /dev/null +++ b/docs/platform/2_Delegates/install-delegate/_category_.json @@ -0,0 +1 @@ +{"label": "Kubernetes Delegates", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Kubernetes Delegates"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "uimq4rlif9", "helpdocs_parent_category_id": "seizygxv7b"}} \ No newline at end of file diff --git a/docs/platform/2_Delegates/static/delegates-overview-00.png b/docs/platform/2_Delegates/static/delegates-overview-00.png new file mode 100644 index 00000000000..bb68cc40467 Binary files /dev/null and b/docs/platform/2_Delegates/static/delegates-overview-00.png differ diff --git a/docs/platform/2_Delegates/static/delegates-overview-01.png b/docs/platform/2_Delegates/static/delegates-overview-01.png new file mode 100644 index 00000000000..2b9e42bf9d7 Binary files /dev/null and b/docs/platform/2_Delegates/static/delegates-overview-01.png differ diff --git a/docs/platform/2_Delegates/static/delegates-overview-02.png b/docs/platform/2_Delegates/static/delegates-overview-02.png new file mode 100644 index 00000000000..68ba4f41bad Binary files /dev/null and b/docs/platform/2_Delegates/static/delegates-overview-02.png differ diff --git a/docs/platform/2_Delegates/static/delegates-overview-03.png b/docs/platform/2_Delegates/static/delegates-overview-03.png new file mode 100644 index 00000000000..00434f84f41 Binary files /dev/null and b/docs/platform/2_Delegates/static/delegates-overview-03.png differ diff --git a/docs/platform/2_Delegates/static/delegates-overview-04.png b/docs/platform/2_Delegates/static/delegates-overview-04.png new file mode 100644 index 00000000000..e9821c6464a Binary files /dev/null and 
b/docs/platform/2_Delegates/static/delegates-overview-04.png differ diff --git a/docs/platform/3_Authentication/1-authentication-overview.md b/docs/platform/3_Authentication/1-authentication-overview.md new file mode 100644 index 00000000000..fae2aa07382 --- /dev/null +++ b/docs/platform/3_Authentication/1-authentication-overview.md @@ -0,0 +1,112 @@ +--- +title: Authentication Overview +description: An overview of how to control access to your organization's Harness account by SSO (single sign-on) provider, email domain, 2FA (two-factor authentication), and password policies (strength, expiration, and lockout). +# sidebar_position: 2 +helpdocs_topic_id: gdob5gvyco +helpdocs_category_id: sy6sod35zi +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic provides an overview of Authentication in Harness. It describes various ways to authenticate users. + +### Before you begin + +* Make sure you have permissions to **Create/Edit, Delete** Authentication Settings. + +### Review: Authentication Settings + +Harness Access control includes: + +* Authentication — This checks who the user is. +* Authorization — This checks what the user can do. +* Auditing — This logs what the user does. + +This topic focuses on Authentication. For more on Authorization, see [Access Management (RBAC) Overview](../4_Role-Based-Access-Control/1-rbac-in-harness.md). + +Users in Administrator groups can use Authentication Settings to restrict access to an organization's Harness account. The options you choose will apply to all your account's users. 
These options include: + +* [Enable Public OAuth Providers](#enable-public-oauth-providers) +* [Enable SAML Providers](#enable-security-assertion-markup-language-saml-providers) +* [Enforce Password Policies](#enforce-password-policies) + * [Enforce Password Strength](#enforce-password-strength) + * [Enforce Password Expiration](#enforce-password-expiration) + * [Enforce Lockout After Failed Logins](#enforce-lockout-after-failed-logins) +* [Enforce Two Factor Authentication](#enforce-two-factor-authentication) +* [Restrict Email Domains](#restrict-email-domains) + +### Configure Authentication + +* In **Home**, click **Authentication** under **ACCOUNT SETUP**. +* The **Authentication: Configuration** page appears. ![](./static/authentication-overview-41.png) +* You can choose one of the following as the default Authentication method: + + Login via a Harness Account or Public OAuth Providers + + SAML Provider + +#### Enable Public OAuth Providers + +In the **Use Public OAuth Providers** section, you can enable Harness logins via a range of single sign-on mechanisms. Enable this slider to expose sliders for enabling individual OAuth providers. +For more on OAuth Providers, see [Single Sign-On with OAuth](../3_Authentication/4-single-sign-on-sso-with-oauth.md). ![](./static/authentication-overview-42.png) +#### Enable Security Assertion Markup Language (SAML) Providers + +Select **SAML Provider** to enable a SAML Provider. To do this, you should first disable any configured public OAuth providers. +For more on adding a SAML Provider, see [Single Sign-On with SAML](../3_Authentication/3-single-sign-on-saml.md).
+ +### Enforce Password Policies + +You'll see specific controls to govern the following password requirements: + + Enforce password strength + + Periodically expire passwords + + Enforce Two Factor Authentication + +#### Enforce Password Strength + +Select **Enforce password strength** to open the dialog shown below. ![](./static/authentication-overview-43.png) +* Here you can specify and enforce any or all of the following options: + + Minimum password length. + + Include at least one uppercase letter. + + Include at least one lowercase letter. + + Include at least one digit. + + Include at least one special character. + +If you enforce **Have at least one special character**, each password must include one (or more) of the following characters: `` ~!@#$%^&*_-+=`|\(){}[]:;"'<>,.?/ `` + +#### Enforce Password Expiration + +Select **Periodically expire passwords** to set an interval at which users must refresh their Harness passwords. In the same dialog, you can also set an advance notification interval. + +![](./static/authentication-overview-44.png) +#### Enforce Lockout After Failed Logins + +Select **Enforce lockout policy** to open the dialog shown below. It offers independent controls over the lockout trigger (how many failed logins), lockout time (in days), and notifications to locked-out users and to Harness user groups. + +![](./static/authentication-overview-45.png) +You can see a summary on the main Authentication page: + +![](./static/authentication-overview-46.png) +### Enforce Two Factor Authentication + +Select **Enforce Two Factor Authentication** to enforce 2FA for all users in Harness. This option governs all logins, whether through SSO providers or Harness username/password combinations. For more information on Two-Factor Authentication, see [Two-Factor Authentication](../3_Authentication/2-two-factor-authentication.md).
+ +![](./static/authentication-overview-47.png) +### Set Up Vanity URL + +You can access `app.harness.io` using your own unique subdomain URL. + +The subdomain URL will be in the following format, with `{company}` being the name of your account: + + `https://{company}.harness.io` + +Contact [Harness Support](mailto:support@harness.io) to set up your Account's subdomain URL. The subdomain URL cannot be changed later. Harness automatically detects your Account ID from the subdomain URL and redirects you to the Account's login mechanism. + +### Restrict Email Domains + +Select **Only allow users with the following email domains:** to allow (whitelist) only certain domains in login credentials. In the dialog shown below, build your allowlist by typing your chosen domains into the **Domains** multi-select field. + +![](./static/authentication-overview-48.png) +Click **Save**. The success message **Domain restrictions have been updated successfully** appears at the top of the page, along with the domains you have whitelisted in the panel. + +![](./static/authentication-overview-49.png) +Your resulting allowlist imposes a further filter on logins to Harness via both SSO providers and Harness username/passwords. You can modify your domain selections by clicking the Edit icon. + +![](./static/authentication-overview-50.png) \ No newline at end of file diff --git a/docs/platform/3_Authentication/2-two-factor-authentication.md b/docs/platform/3_Authentication/2-two-factor-authentication.md new file mode 100644 index 00000000000..f95c4adc99c --- /dev/null +++ b/docs/platform/3_Authentication/2-two-factor-authentication.md @@ -0,0 +1,76 @@ +--- +title: Two-Factor Authentication +description: This document explains Two-Factor Authentication.
+# sidebar_position: 2 +helpdocs_topic_id: ipsux8n7gm +helpdocs_category_id: fe0577j8ie +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can add an extra layer of security by using 2-step verification, also known as Two-Factor Authentication (2FA). + +This document explains the basic steps to set up Two-Factor Authentication in Harness. + +### Before you begin + +* Make sure you have permissions to **Create/Edit, Delete** Authentication Settings. + +### Set Up Two-Factor Authentication + +You can manage Two-Factor Authentication (2FA) in two ways: + +* **Individual user:** you can set up 2FA for your own **User Profile** without impacting other user accounts. +* **All account users:** if you have **Create/Edit** permissions for Authentication Settings, you can enforce 2FA for all users in Harness. First, you set up 2FA for your own account, and then you can enforce 2FA account-wide in the Harness account's **Login Settings**. + + +:::note +If 2FA is disabled at the account level, you can still enable 2FA for your user account. If 2FA is enabled account-wide, you cannot turn it off for your user account. When you enforce 2FA, users receive an email where they can scan a QR Code using their smartphones and a token generator app. The next time they log in with their username and password, they are prompted to use 2FA to complete the login. + +::: + +### Set Up Two-Factor Authentication For Your Profile + +1. Click on your **User Profile** icon at the bottom-left corner to go to the Profile page. + +![](./static/two-factor-authentication-00.png) + +2. The Profile page appears. +3. Toggle the **Two-Factor Authentication** indicator. The **Enable Two-Factor Authentication** page appears. +4. Using your smartphone's 2FA token generator app, such as Google Authenticator, scan the QR Code and add it to the list in your app. +You can now see **Harness-Inc** in your 2FA token generator app, which provides authentication codes.
+ +2FA token generator apps also include a method for adding a site using a Secret Key in cases where you cannot scan the QR Code. The 2FA dialog includes a Secret Key for those cases. +5. Click **Enable**. The next time you log in by entering your username and password, you are prompted to provide the 2FA authentication code. +6. Obtain the code from your 2FA token generator app, and enter it. You can then log into your Harness account. + +### Set Up Account-Wide Two-Factor Authentication + +Once you have set up 2FA for your account, you can enforce it for all users and groups in the account. When 2FA is enforced, account users will experience the following changes: + +* **New members** will need to set up 2FA during signup. +* **Existing members** who do not have 2FA enabled will receive an email with a QR Code, and instructions on how to set up 2FA. + +To require that all account users and groups use 2FA, do the following: + +1. Enable 2FA for your account as described in [Set Up Two-Factor Authentication for Your Profile](#set-up-two-factor-authentication-for-your-profile). + +2. Select **ACCOUNT SETUP** > **Authentication**. The **Authentication: Configuration** page appears. + + ![](./static/two-factor-authentication-01.png) + +3. Slide the **Enforce Two Factor Authentication** setting on. + + If you have not yet [set up 2FA for your own profile](#set-up-two-factor-authentication-for-your-profile), this prompt reminds you to protect your own login before proceeding: + + ![](./static/two-factor-authentication-02.png) + +4. Click **Go to settings** to display a QR Code and secret key that you can store to safeguard your own ability to log in: +![](./static/two-factor-authentication-03.png) + +5. Return to **ACCOUNT SETUP** > **Authentication** to enable account-wide two-factor authentication. +6. Slide the **Enforce Two Factor Authentication** setting on.
This prompt asks you to confirm that 2FA should be enabled for all users: + + ![](./static/two-factor-authentication-04.png) + +7. Click **Confirm**. + diff --git a/docs/platform/3_Authentication/3-single-sign-on-saml.md b/docs/platform/3_Authentication/3-single-sign-on-saml.md new file mode 100644 index 00000000000..506dd8acbc3 --- /dev/null +++ b/docs/platform/3_Authentication/3-single-sign-on-saml.md @@ -0,0 +1,772 @@ +--- +title: Single Sign-On (SSO) with SAML +description: This document explains single sign-on with a SAML provider. +# sidebar_position: 2 +helpdocs_topic_id: mlpksc7s6c +helpdocs_category_id: fe0577j8ie +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness supports Single Sign-On (SSO) with SAML, integrating with your SAML SSO provider to enable you to log your users into Harness as part of your SSO infrastructure. This document explains how to set up SAML authentication. + + +:::note +If the [Harness Self-Managed Enterprise Edition](https://docs.harness.io/article/tb4e039h8x-harness-on-premise-overview) version is not accessed using an HTTPS load balancer, SAML authentication will fail. Make sure you access the Harness Self-Managed Enterprise Edition version using an HTTPS load balancer, and not an HTTP load balancer. +::: + + +### Supported Formats + +The XML SAML file used with Harness must use UTF-8. + +UTF-8 BOM is not supported. Some text editors, such as Notepad++, save in UTF-8 BOM by default. + +### SAML SSO With Harness Overview + +To set up SAML SSO with Harness, you add a SAML SSO provider to your Harness account and enable it as the default authentication method. + +Harness SAML SSO involves the following: + +* **Harness User email addresses** - Users are invited to Harness using their email addresses. Once they log into Harness, their email addresses are registered with Harness as Harness Users. To use SAML SSO, Harness Users must use the same email addresses to register in Harness and the SAML provider.
+
+
+:::note
+Ensure that you have at least two corresponding user accounts when setting up and testing SAML SSO in Harness. This allows you to set up the account with a Harness Administrator account and test it with a Harness user account.
+:::
+
+
+* **SAML provider user email addresses** - To use the SAML provider to verify Harness Users, the email addresses used in the SAML provider must match the email addresses for the registered Harness Users you want to verify.
+* **Harness SAML Endpoint URL** - This URL is where the SAML provider will post the SAML authentication response to your Harness account. This URL is provided by Harness in the **Single Sign-On (SSO) Provider** dialog. You enter this URL in your SAML SSO provider app to integrate it with Harness.
+* **SAML metadata file** - This file is provided by your SAML provider app. You upload this file into the Harness **Single Sign-On (SSO) Provider** dialog to integrate the app with Harness.
+
+### SAML SSO With Okta
+
+To set up Harness with Okta as a SAML SSO provider, you exchange the necessary information between your Okta app and Harness.
+
+
+:::note
+Users are not created as part of the SAML SSO integration. Users are invited to Harness using their email addresses. Once they log into Harness, their email addresses are registered as Harness Users. For more information, see [SAML SSO with Harness Overview](#saml-sso-with-harness-overview).
+:::
+
+
+This section describes the steps you must perform to use an Okta app for Harness SAML SSO:
+
+#### Okta User Accounts
+
+To set up SAML support in your Okta Harness app, ensure that the app has corresponding Users in Harness:
+
+1. In Harness, add the users you want to set up for SAML SSO by inviting them to Harness using the same email addresses that they use in your SAML provider.
+2. In Okta, assign them to your SAML provider app. 
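As noted under Support Formats, the metadata XML you later upload to Harness must be UTF-8 without a BOM, and some editors add a BOM silently. A minimal sketch (not part of any Harness tooling) that detects and strips a UTF-8 BOM from a metadata file before you upload it:

```python
import codecs

def strip_utf8_bom(path: str) -> bool:
    """Remove a UTF-8 BOM from the file if present.

    Returns True if a BOM was found and stripped, False otherwise."""
    with open(path, "rb") as f:
        data = f.read()
    if data.startswith(codecs.BOM_UTF8):
        with open(path, "wb") as f:
            f.write(data[len(codecs.BOM_UTF8):])
        return True
    return False
```

Run it on the downloaded metadata file (for example, `strip_utf8_bom("metadata.xml")`) and the upload will no longer fail on the BOM check.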
+ + +:::tip +The only user property that must match between a Harness User and its corresponding SAML provider user account is its **email address**.
+Sometimes users might have mixed case email addresses in Okta. In these situations, Harness converts the email address to lowercase when adding them to Harness.
+:::
+
+
+#### Create App Integration in Okta
+
+1. Log in to your Okta administrator account and click **Applications**.
+2. Click **Create App Integration**.
+
+   ![](./static/single-sign-on-saml-53.png)
+
+3. The **Create a new app integration** dialog appears. Select **SAML 2.0** and click **Next**.
+
+   ![](./static/single-sign-on-saml-54.png)
+
+4. In **General Settings**, enter a name in the **Application label** field, and click **Next**.
+
+   ![](./static/single-sign-on-saml-55.png)
+
+5. You are redirected to the **Configure SAML** tab. Copy the SAML Endpoint URL from Harness and paste it into **Single sign on URL**. To get this URL from Harness, perform the following steps:
+   1. Log in to Harness.
+   2. In **Home**, click **Authentication** under **ACCOUNT SETUP**. The **Authentication: Configuration** page appears.
+   3. Select **+SAML Provider**. The **Add SAML Provider** dialog appears.
+   4. Enter a name for your SAML Provider in the **Name** field.
+   5. Under **Select a SAML Provider**, select the SAML Provider you want to set up (in this case **Okta**).
+
+      ![](./static/single-sign-on-saml-56.png)
+
+   6. Once you do this, you can see additional controls to set up the SAML Provider.
+
+      ![](./static/single-sign-on-saml-57.png)
+
+   7. Copy the Endpoint URL under **Enter the SAML Endpoint URL as your Harness application's ACS URL** and paste it in **Single sign on URL** in your Okta SAML provider app's **Configure SAML** tab.
+
+      ![](./static/single-sign-on-saml-58.png)
+
+6. In **Audience URI (SP Entity ID)**, enter **app.harness.io**. The SAML application identifier should always be `app.harness.io`.
+7. In **Default RelayState**, enter a valid URL. This is the page where users land after a successful SAML sign-in to the service provider (SP).
+8. 
In **Name ID format**, enter the username format you are sending in the SAML Response. The default format is **Unspecified**.
+9. In **Application username**, enter the default value to use for the username with the application.
+10. In **Attribute Statements (optional)**, enter a name in the **Name** field, select **Name Format** as **Basic**, and select the **Value** as **user.email**.
+When you create a new SAML integration or modify an existing one, you can define custom attribute statements. These statements are inserted into the SAML assertions shared with your app. For more information, see [Define Attribute Statements](https://help.okta.com/oie/en-us/Content/Topics/Apps/Apps_App_Integration_Wizard_SAML.htm#).
+11. In **Group Attribute Statements (optional)**, enter a name in the **Name** field, select **Name format (optional)** as **Basic**, select an appropriate **Filter** and enter its value.
+If your Okta org uses groups to categorize users, you can add group attribute statements to the SAML assertion shared with your app. For more information, see [Define Group Attribute Statements](https://help.okta.com/oie/en-us/Content/Topics/Apps/Apps_App_Integration_Wizard_SAML.htm#).
+
+![](./static/single-sign-on-saml-59.png)
+
+12. Click **Next** and then click **Finish**.
+
+#### Okta SAML Metadata File
+
+You must download the **Identity Provider metadata** XML from your Okta app and upload the file into Harness. To do this, perform the following steps:
+
+1. In the Harness Okta app that you just created, click the **Sign On** tab, and then click **Edit**.
+2. Click **Actions** to download the SAML metadata file from your Okta provider app.
+
+   ![](./static/single-sign-on-saml-60.png)
+
+3. In the Harness **Add SAML Provider** dialog, under **Upload the Identity Provider metadata XML, downloaded from your Okta app**, click **Choose File**, and add the SAML metadata file you downloaded from your Okta application.
+4. Uncheck **Enable Authorization**.
+5. 
Select **Add Entity ID** and enter your custom Entity ID. The default Entity ID is **app.harness.io**. The value you enter here will override the default Entity ID.
+6. Click **Add**. The new SSO provider is displayed under **Login via SAML**.
+
+   ![](./static/single-sign-on-saml-61.png)
+
+#### Enable and Test SSO with Okta
+
+Now that Okta is set up in Harness as a SAML SSO provider, you can enable and test it.
+
+1. To enable the SSO provider, select **Login via SAML**.
+2. In the resulting **Enable SAML Provider** dialog, click **TEST** to verify the SAML connection you've configured.
+
+   ![](./static/single-sign-on-saml-62.png)
+
+   A new browser tab will prompt you to log into your SAML provider (in this case **Okta**).
+
+3. Upon a successful test, Harness displays the **SAML test successful** banner on top.
+
+   ![](./static/single-sign-on-saml-63.png)
+
+4. Click **CONFIRM** to enable SAML SSO in Harness.
+
+
+:::warning
+If you are locked out of Harness because of an SSO issue, you can log into Harness using the [Harness Local Login](#harness-local-login).
+:::
+
+5. To fully test SSO, log into Harness using another User account. Use a Chrome Incognito window so that, if there are any errors, you can still disable SSO from your Harness Administrator account.
+6. Open an Incognito window in Chrome.
+7. Log into Harness using a Harness User account that has a corresponding user account email address in the SAML SSO provider. You will be redirected to your SAML SSO provider's login page.
+8. Log into your SSO Provider using an email address for a Harness User. The password does not have to be the same.
+You are automatically returned to the Harness Manager and logged into the dashboard using your SSO credentials.
+
+#### SAML Authorization with Okta
+
+Once you have enabled Harness SSO with your Okta app, you can set up and enable SAML authorization in Harness using Okta. 
+
+To set up SAML authorization in Harness, link a Harness User Group to a user group in Okta. When a user from Okta logs into Harness, they are automatically added to the linked Harness User Group and inherit all the RBAC settings for that User Group.
+
+You need two Okta components to set up SAML authorization in Harness:
+
+* **Group Name** - The name of the Okta group containing the users you want to authorize in Harness. The email addresses registered in this group must be the same as the email addresses these users have registered in Harness.
+* **Group Attribute Name** - The Group Attribute Name associated with the Okta app you use for authentication. The Group Attribute Name is different from the Okta group name. Your company might have many groups set up in Okta. The Group Attribute Name is used to filter groups.
+
+In Harness, you will enter the Group Attribute Name in the SAML SSO provider settings, and then you will enter the group name in the Harness User Group to link it to the Okta group.
+
+![](./static/single-sign-on-saml-64.png)
+
+
+:::note
+Remember that email addresses are how Harness identifies Users. Always ensure that the email addresses of your registered Harness Users match the Okta users you want to authenticate and authorize.
+:::
+
+
+To set up SAML Authorization with Okta, do the following:
+
+1. Set up SAML SSO in Harness as described in [SAML SSO with Okta](#saml-sso-with-okta).
+You will be authorizing the same Harness Users that are authenticated using your SAML provider, so ensure that the email addresses registered in Harness are the same email addresses registered in your SAML provider.
+2. Add a user group to your SAML provider, and then add users to the group. The group name will be used to link the Harness User Group later. To do this, perform the following steps:
+   1. Log in to Okta using an Admin account.
+   2. Click **Groups** under **Directory**. Click **Add Group**. The **Add Group** dialog appears. 
+
+      ![](./static/single-sign-on-saml-65.png)
+
+   3. Enter a **Name** and **Group Description** for your group. Click **Add Group**.
+   4. You are redirected to the **Groups** page. Search for the group you created and click it.
+   5. Click **Manage People**. Find and add members to your group.
+
+      ![](./static/single-sign-on-saml-66.png)
+
+      After you add the members to the group, the screen looks like this:
+
+      ![](./static/single-sign-on-saml-67.png)
+
+      Both members are already registered in Harness using the same email addresses in both Harness and the SAML provider.
+3. Ensure that the SAML provider group is assigned to the same SAML provider app you use for Harness SAML SSO. To do this:
+   1. In the Okta app, click **Groups** under **Directory**.
+   2. Find and select the group that you just created.
+   3. Click **Manage Apps**.
+
+      ![](./static/single-sign-on-saml-68.png)
+
+   4. In the subsequent screen, find the Okta app that you created earlier and click **Assign**.
+   5. Click **Done**.
+   6. Click **Applications** under **Applications**.
+   7. Find and select the Okta application that you created.
+   8. Under **Assignments**, click **Groups**. You can see your group listed here.
+
+      ![](./static/single-sign-on-saml-69.png)
+
+4. Configure the **Group Attribute Name** in your SAML app. You will use the Group Attribute Name later in Harness when you enable SAML authorization.
+
+   Here are the steps for adding a **Group Attribute Name** in Okta:
+   1. In **Okta**, click **Applications**, and then click the name of the app you use for Harness SAML SSO to open its settings.
+   2. In the app, click the **General** tab.
+   3. For **SAML Settings**, click **Edit**.
+
+      ![](./static/single-sign-on-saml-70.png)
+
+   4. In **General Settings**, click **Next**.
+
+      ![](./static/single-sign-on-saml-71.png)
+
+   5. In **GROUP ATTRIBUTE STATEMENTS (OPTIONAL)**, in **Name**, enter the Group Attribute Name you want to use, such as **dept**.
+   6. 
In **Filter**, select **Matches regex** and enter period asterisk (**.\***) in the field. When you are done, it will look something like this:
+
+      ![](./static/single-sign-on-saml-72.png)
+
+   7. Click **Next** and then click **Finish**.
+5. Enable Authorization in Harness. Now that you have assigned a group to your SAML app and added a Group Attribute Name, you can enable authorization in Harness.
+   1. In **Home**, click **Authentication** under **ACCOUNT SETUP**. The **Authentication: Configuration** page appears.
+   2. Click to expand the **Login via SAML** section.
+
+      ![](./static/single-sign-on-saml-73.png)
+
+   3. You can see the SSO Provider you have set up listed in this section. Click the vertical ellipsis (**︙**) next to the SSO Provider you have set up for SSO authentication, and click **Edit**.
+
+      ![](./static/single-sign-on-saml-111.png)
+
+   4. In the **Edit SAML Provider** dialog, click **Enable Authorization**.
+   5. In **Group Attribute Name**, enter the Group Attribute Name you added earlier to your SAML app in your SAML provider.
+   ![](./static/single-sign-on-saml-74.png)
+
+   6. Click **Add**. The SAML SSO Provider is now set up to use the Group Attribute Name for authorization.
+6. Link the SAML SSO Provider to the Harness User Group. You can create a new User Group or use an existing Group so long as your Harness User account is a member and that User account is registered using the same email address you used to register with your SAML provider. For detailed instructions on creating a User Group in Harness, see [Add and Manage User Groups](../4_Role-Based-Access-Control/4-add-user-groups.md). To link your group, perform the following steps:
+   1. In **Home**, click **Access Control** under **ACCOUNT SETUP**.
+   2. Click **User Groups** and then click the User Group you want to link the SAML SSO Provider to.
+   3. Click **Link to SSO Provider Group**.
+
+      ![](./static/single-sign-on-saml-75.png)
+
+   4. 
In the **Link to SSO Provider Group** dialog, in **Search SSO Settings**, select the SAML SSO Provider you have set up.
+   5. In the **Group Name**, enter the name of the group you assigned to your app in your SAML provider.
+
+      ![](./static/single-sign-on-saml-76.png)
+
+   6. Click **Save**.
+7. Test the SAML Authorization by repeating steps 5-8 from [Enable and Test SSO with Okta](#enable-and-test-sso-with-okta).
+8. In your Harness account in the other browser window, check the User Group you linked with your SAML provider. The user that logged in is now added to the User Group, receiving the authorization associated with that User Group.
+You can link multiple Harness User Groups with the SAML SSO Provider you set up in Harness.
+You can also remove a link between a Harness User Group and a Harness SAML SSO Provider without losing the User Group members. In the Harness User Group, simply click **Delink Group**:
+
+![](./static/single-sign-on-saml-77.png)
+
+The **Delink Group** dialog appears.
+
+![](./static/single-sign-on-saml-78.png)
+
+Click **Retain all members in the user group** and click **Save**. The link to the SAML SSO Provider is removed and the Users are still members of the User Group.
+
+
+:::note
+You cannot delete a SAML SSO Provider from Harness that is linked to a Harness Group. You must first remove the link to the SSO Provider from the Group.
+:::
+
+
+### SAML SSO with Azure
+
+This section describes the Azure-specific steps you must perform to use an Azure app for Harness SAML SSO:
+
+
+:::note
+Make sure the email address used in Harness matches the email address in the Azure app for every user.
+
+:::
+
+The following image shows the basic exchange of information between Harness and your Azure app's Single sign-on settings:
+
+![](./static/single-sign-on-saml-79.png)
+
+#### Azure User Accounts
+
+The Harness User accounts and their corresponding Azure user accounts must have the same email addresses.
+
+1. 
Ensure that you have at least two corresponding user accounts in both Harness and your Azure app when setting up and testing SAML SSO. This allows you to set up the account with a Harness Administrator account and test it with a Harness user account.
+
+The following image shows a Harness User Group containing two users and their corresponding accounts in the Azure app that will be used for SAML SSO.
+
+![](./static/single-sign-on-saml-80.png)
+
+#### Endpoint URL for Azure
+
+You must enter the **Harness SAML Endpoint URL** from Harness in your Azure app **Reply URL**:
+
+1. In your Azure app, click **Single sign-on**. The SSO settings for the Azure app are displayed.
+
+   ![](./static/single-sign-on-saml-81.png)
+
+2. In **Basic SAML Configuration**, click the edit icon (pencil).
+3. Enter **app.harness.io** in the **Identifier (Entity ID)** field.
+
+   ![](./static/single-sign-on-saml-82.png)
+
+Next, you will use the **SAML SSO Provider** settings in Harness to set up your Azure app **Single sign-on**.
+
+
+:::note
+For [Harness Self-Managed Enterprise Edition](https://docs.harness.io/article/tb4e039h8x-harness-on-premise-overview), replace **app.harness.io** with your custom URL.
+If you use a custom Harness subdomain in any Harness version, like **example.harness.io**, use that URL.
+:::
+
+4. In **Home**, click **Authentication** under **ACCOUNT SETUP**. The **Authentication: Configuration** page appears.
+5. Click **+SAML Provider**. The **Add SAML Provider** page appears.
+
+   ![](./static/single-sign-on-saml-83.png)
+
+6. In **Name**, enter a name for the SAML SSO Provider.
+7. Select **Azure** under **Select a SAML Provider**. The settings for Azure setup are displayed:
+
+   ![](./static/single-sign-on-saml-84.png)
+
+8. Copy the **Harness SAML Endpoint URL** from the **Add SAML Provider** dialog, and paste it in the **Reply URL** in your Azure app.
+
+   ![](./static/single-sign-on-saml-85.png)
+
+9. Click **Save** on the Azure App SAML Settings page. 
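The two values exchanged in these steps, the `app.harness.io` Entity ID and the Harness SAML Endpoint URL used as the Reply URL, are the service-provider half of the SAML handshake. As a hedged illustration (not Harness' actual metadata; the helper and placeholder URL are assumptions for the sketch), they correspond to SP metadata like this generates:

```python
import xml.etree.ElementTree as ET

MD_NS = "urn:oasis:names:tc:SAML:2.0:metadata"

def sp_metadata(entity_id: str, acs_url: str) -> str:
    """Build a minimal SAML SP metadata document for the given
    Entity ID and ACS (Reply) URL."""
    ET.register_namespace("", MD_NS)
    root = ET.Element(f"{{{MD_NS}}}EntityDescriptor", entityID=entity_id)
    spsso = ET.SubElement(
        root, f"{{{MD_NS}}}SPSSODescriptor",
        protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol",
    )
    # The ACS is where the IdP posts the SAML response (step 8's Reply URL).
    ET.SubElement(
        spsso, f"{{{MD_NS}}}AssertionConsumerService",
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST",
        Location=acs_url, index="0",
    )
    return ET.tostring(root, encoding="unicode")

# Entity ID per step 3; the ACS URL is the Harness SAML Endpoint URL you copied.
xml_doc = sp_metadata("app.harness.io", "<HARNESS_SAML_ENDPOINT_URL>")
```

This is only meant to make the mapping concrete: Identifier (Entity ID) becomes `entityID`, and the Reply URL becomes the `AssertionConsumerService` location.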
+
+#### User Attributes and Claims
+
+Next, you need to ensure that Harness Users' email addresses will identify them when they log in via Azure. To do this, you set up the **Single sign-on** section of your Azure app to use the **User name** email address as the method to identify users.
+
+The Azure users that are added to your Azure app must have their email addresses listed as their **User name**.
+
+To set the **User name** email address as the method for identifying users, the Azure app's **Single sign-on** section must use **user.userprincipalname** as the **Unique User Identifier**, and **user.userprincipalname** must use **Email address** as the **name identifier format**.
+
+To set this up in your Azure app, do the following:
+
+1. In your Azure app, in the **Single sign-on** blade, in **User Attributes & Claims**, click the edit icon (pencil). The **User Attributes & Claims** settings appear.
+
+   ![](./static/single-sign-on-saml-86.png)
+
+2. For **Unique User identifier value**, click the edit icon. The **Manage claims** settings appear.
+
+   ![](./static/single-sign-on-saml-87.png)
+
+3. Click **Choose name identifier format**, and select **Email address**.
+4. In **Source attribute**, select **user.userprincipalname**.
+5. Click **Save**, and then close **User Attributes & Claims**.
+
+
+:::note
+If your Azure users are set up with their email addresses in some field other than **User name**, just ensure that the field is mapped to the **Unique User Identifier** in the Azure app and the **name identifier format** is **Email address**.
+:::
+
+
+#### Azure SAML Metadata File
+
+You must download the **Federation Metadata XML** from your Azure app and upload the file into Harness.
+
+1. Download the **Federation Metadata XML** from your Azure app and upload it using **Upload the identity Provider metadata xml downloaded from your Azure App** in the **Add SAML Provider** settings in Harness. 
+
+   ![](./static/single-sign-on-saml-88.png)
+
+2. Select **Add Entity ID** and enter your custom Entity ID. The default Entity ID is **app.harness.io**. The value you enter here will override the default Entity ID.
+3. Click **Add**. The new Azure SAML Provider is added.
+
+   ![](./static/single-sign-on-saml-89.png)
+
+#### Enable and Test SSO with Azure
+
+Now that Azure is set up in Harness as a SAML SSO provider, you can enable and test it.
+
+You can test the Azure app SSO from within Azure if you are logged into Azure using an Azure user account that has the following:
+
+* A corresponding Harness User account with the same email address.
+* Membership in the Azure app's **Users and groups** settings.
+* The Global Administrator Directory role in Azure.
+
+To test Azure SSO using Azure, do the following:
+
+1. In the Azure app, click **Single sign-on**, and then at the bottom of the **Single sign-on** settings, click **Test**.
+
+   ![](./static/single-sign-on-saml-90.png)
+
+2. In the **Test** panel, click **Sign in as current user**. If the settings are correct, you are logged into Harness. If you cannot log into Harness, the **Test** panel will provide debugging information. See also [Debug SAML-based single sign-on to applications in Azure Active Directory](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-v1-debug-saml-sso-issues?WT.mc_id=UI_AAD_Enterprise_Apps_Testing_Experience) from Azure.
+
+To test Azure SSO using Harness, do the following:
+
+1. In **Harness**, in **ACCOUNT SETUP** > **Authentication**, select **Login via SAML** to enable SAML SSO using the Azure provider.
+2. Open a new Chrome Incognito window to test the SSO login using a Harness User account other than the one you are currently logged in with.
+3. Using one of the user account email addresses that are shared by Harness and Azure, log into Harness. 
When you log into Harness, you are prompted with the Microsoft Sign in dialog.
+4. Enter the Azure user name for the user (most often, the email address), enter the Azure password, and click **Sign in**.
+
+#### SAML Authorization with Azure
+
+Once you have enabled Harness SSO with your Azure app, you can set up and enable SAML authorization in Harness using Azure.
+
+To set up SAML authorization in Harness, you link a Harness User Group to a user group assigned to your Azure app. When a user from your Azure app logs into Harness, they are automatically added to the linked Harness User Group and inherit all the RBAC settings for that Harness User Group.
+
+Below is the Harness SAML setting you need from Azure to set up SAML authorization in Harness:
+
+* **Group Attribute Name** - In Azure, this value is obtained from the **Group Claims** in the Azure app **User Attributes & Claims** settings.
+
+For the Harness **Group Attribute Name**, here is the Harness **SAML Provider** setting on the left and its corresponding Azure **Group Claims** settings on the right:
+
+![](./static/single-sign-on-saml-91.png)
+
+To set up Azure Authorization in Harness, do the following:
+
+1. In Azure, add the **Group Claim** (Name and Namespace) to the Azure app.
+   1. In your Azure app, click **Single sign-on**, and then in **User Attributes & Claims**, click edit (pencil icon).
+
+      ![](./static/single-sign-on-saml-92.png)
+
+   2. In **User Attributes & Claims**, edit the groups claim. The **Group Claims** settings appear.
+
+      ![](./static/single-sign-on-saml-93.png)
+
+   3. Click the **All groups** option and then enable **Customize the name of the group claim**.
+   4. In **Name**, enter a name to use to identify the Harness Group Attribute Name.
+   5. In **Namespace**, enter a namespace name.
+   6. Click **Save**. **User Attributes & Groups** now displays the group claim you created.
+   7. Close **User Attributes & Groups**.
+2. 
In Harness, enter the Group Claim name and namespace in the SAML SSO Provider **Group Attribute Name** field.
+   1. Open the SAML SSO Provider dialog, and enable the **Enable Authorization** setting. You need to enable **Enable Authorization** in order to select this SSO Provider when you link a Harness User Group for authorization.
+   2. Enter the Group Claim name and namespace in the **Group Attribute Name** field in the same format as a Claim Name (`namespace/name`). The SAML SSO Provider dialog will look something like this:
+
+      ![](./static/single-sign-on-saml-94.png)
+
+   3. Click **Save**. Authorization and the Group Attribute Name are set up. Next, you need to set up your Azure and Harness groups.
+3. In Azure, ensure the Azure users with corresponding Harness accounts belong to an Azure group. Here is an Azure group named **ExampleAzureGroup** with two members:
+
+   ![](./static/single-sign-on-saml-95.png)
+
+4. Ensure that the Azure group is assigned to the Azure app. Here you can see the **ExampleAzureGroup** group in the Azure app's **Users and groups**:
+
+   ![](./static/single-sign-on-saml-96.png)
+
+5. Link the Harness User Group to the Azure group using the Azure group Object ID.
+   1. In Azure, copy the Azure group **Object ID**.
+
+      ![](./static/single-sign-on-saml-97.png)
+
+   2. In Harness, create a new User Group or open an existing User Group.
+   3. In **Home**, click **Access Control** under **ACCOUNT SETUP**.
+   4. Click **User Groups** and then click the User Group you want to link the SAML SSO Provider to.
+   5. Click **Link to SSO Provider Group**.
+
+      ![](./static/single-sign-on-saml-98.png)
+
+   6. In the **Link to SSO Provider Group** dialog, in **SSO Provider**, select the Azure SSO Provider you set up, and in **Group Name**, paste the Object ID you copied from Azure. When you are done, the dialog will look something like this:
+
+      ![](./static/single-sign-on-saml-99.png)
+
+   7. Click **Save**. 
The User Group is now linked to the SAML SSO Provider and Azure group Object ID.
+6. Test Authorization.
+   1. Open a new Chrome Incognito window to test the authorization using a Harness User account other than the one you are currently logged in with.
+   2. Log into Harness using the user email address, and sign in using the Azure username and password. If you are already logged into Azure in Chrome, you might be logged into Harness automatically.
+   3. In the linked Harness User Group, ensure that the Harness User account you logged in with was added.
+
+The Harness User is now added and the RBAC settings for the Harness User Group are applied to its account. For more information, see [Add and manage User Groups](../4_Role-Based-Access-Control/4-add-user-groups.md).
+
+### SAML SSO with Azure Active Directory
+
+You can use Azure Active Directory (AD) for SAML SSO with Harness, enabling AD users to log into Harness using their AD credentials.
+
+For detailed steps on adding SAML SSO with Active Directory, follow the steps in the tutorial [Azure Active Directory single sign-on (SSO) integration with Harness](https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/harness-tutorial) from Microsoft.
+
+
+:::note
+Users are not created as part of the SAML SSO integration. Users are invited to Harness using their email addresses. Once they log into Harness, their email addresses are registered as Harness Users. For more information, see [SAML SSO with Harness Overview](#saml-sso-with-harness-overview).
+:::
+
+
+#### Users in over 150 Groups
+
+When users have large numbers of group memberships, the number of groups listed in the token can increase the token size. Azure Active Directory limits the number of groups it will emit in a token to 150 for SAML assertions.
+
+If a user is a member of a larger number of groups, the groups are omitted and a link to the Graph endpoint to obtain group information is included instead. 
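As a hedged sketch of what this overage looks like from the relying party's side: assuming Azure AD replaces the groups attribute with a `http://schemas.microsoft.com/claims/groups.link` attribute whose value is the Graph URL (the attribute names here are assumptions based on Azure AD's documented claim names, not Harness internals), a consumer of the assertion can distinguish the two cases like this:

```python
import xml.etree.ElementTree as ET

# Claim Azure AD emits instead of group values when the 150-group limit is hit.
GROUPS_LINK = "http://schemas.microsoft.com/claims/groups.link"
SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"

def extract_groups(attribute_statement_xml: str, group_attr: str):
    """Return (groups, graph_link): the group values found in a SAML
    AttributeStatement, or the Graph URL emitted in their place when
    the user belongs to too many groups."""
    ns = {"saml": SAML_NS}
    root = ET.fromstring(attribute_statement_xml)
    groups, link = [], None
    for attr in root.iter(f"{{{SAML_NS}}}Attribute"):
        values = [v.text for v in attr.findall("saml:AttributeValue", ns)]
        if attr.get("Name") == GROUPS_LINK:
            link = values[0] if values else None
        elif attr.get("Name") == group_attr:
            groups.extend(values)
    return groups, link
```

If `graph_link` is non-empty, the group list must be fetched from that Graph endpoint instead of being read from the assertion, which is why Harness asks for the app credentials below.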
+
+To invoke the API, Harness needs the **Client ID** and **Client Secret** of your registered app.
+
+To get this information, perform the following steps:
+
+1. In your Azure account, go to **App registrations**.
+2. Click your app. Copy the **Application (client) ID** and paste it in **Client ID** in your Harness account.
+3. In your Azure account, go to **App registrations**. Click **Certificates and Secrets**.
+4. Click **New Client Secret**.
+5. Add a description and click **Add**.
+6. Make sure to copy this secret and save it as an encrypted text secret. For detailed steps to create an encrypted text in Harness, see [Use Encrypted text Secrets](../6_Security/2-add-use-text-secrets.md).
+7. Select the above secret reference in the **Client Secret** field in your Harness account.
+
+   ![](./static/single-sign-on-saml-100.png)
+
+### SAML SSO with OneLogin
+
+To set up OneLogin as a SAML SSO provider on Harness, you exchange the necessary information between the OneLogin Harness application and Harness. The following sections cover Authentication steps, followed by Authorization steps.
+
+#### OneLogin Authentication on Harness
+
+Enabling OneLogin authentication on Harness requires configuration on both platforms, as described in these sections:
+
+##### Exchange Harness Consumer URL and OneLogin Metadata
+
+1. In **Home**, click **Authentication** under **ACCOUNT SETUP**. The **Authentication: Configuration** page appears.
+2. Click **+SAML Provider**. The **Add SAML Provider** page appears.
+3. In **Name**, enter a name for the SAML SSO Provider.
+4. Select **OneLogin** under **Select a SAML Provider**. The settings for OneLogin setup are displayed.
+5. Copy the provided URL under **Enter the SAML Endpoint URL, as your Harness OneLogin application's ACS URL**, to the clipboard.
+6. In OneLogin, add the **Harness** app (for a SaaS setup) or the **Harness (On Prem)** app (for a Harness Self-Managed Enterprise Edition setup). To do so, perform the following steps:
+   1. Log in to OneLogin. 
+   2. Click **Administration**.
+   3. Under the **Applications** tab, click **Applications**.
+   4. Click **Add App**.
+   5. Find **Harness** or **Harness (On Prem)** based on your setup, and then click the app.
+
+      ![](./static/single-sign-on-saml-101.png)
+
+7. In **Configuration**, paste this URL into the **SCIM Base URL** field.
+
+   ![](./static/single-sign-on-saml-102.png)
+
+8. Skip all other **Application Details** fields, and click **Save**.
+9. Navigate to OneLogin's **Applications** > **SSO** tab. At the upper right, select **More Actions** > **SAML Metadata**.
+
+   ![](./static/single-sign-on-saml-103.png)
+
+10. From the resulting dialog, download the .xml authentication file that you'll need to upload to Harness.
+
+##### Assign Users to Roles
+
+1. In OneLogin, select **Users** > **Users**.
+
+
+:::tip
+ If you prefer to assign *groups* to roles, instead start at **Users** > **Groups**, and modify the following instructions accordingly.
+
+:::
+
+2. Search for a user that you want to add to Harness.
+
+   ![](./static/single-sign-on-saml-104.png)
+
+3. Click to select the user.
+4. The **Users** page appears. Click the **Applications** tab.
+5. Click the **+** button at the upper right to assign an Application.
+6. Select the Application, then click **Continue**.
+7. Repeat this section for other users (or groups) that you want to add to Harness.
+
+##### Enable OneLogin as a Harness SSO Provider
+
+1. In **Home**, click **Authentication** under **ACCOUNT SETUP**. The **Authentication: Configuration** page appears.
+2. Click to expand the **Login via SAML** section.
+
+   ![](./static/single-sign-on-saml-105.png)
+
+3. You can see the SSO Provider you have set up listed in this section. Click the vertical ellipsis (**︙**) next to the SSO Provider you have set up for SSO authentication, and click **Edit**.
+4. Use **Choose File** to upload the .xml file that you obtained from OneLogin.
+5. Deselect **Enable Authorization**.
+6. 
Select **Add Entity ID** and enter your custom Entity ID. The default Entity ID is **app.harness.io**. The value you enter here will override the default Entity ID.
+7. Click **Add**.
+8. Click the **Login via SAML** toggle to enable your new provider.
+9. In the resulting **Enable SAML Provider** dialog, click **TEST** to verify the SAML connection you've configured.
+
+   ![](./static/single-sign-on-saml-106.png)
+
+10. Once the test is successful, click **Confirm** to finish setting up OneLogin authentication.
+
+#### OneLogin Authorization on Harness
+
+Once you've enabled OneLogin authentication on Harness, refer to the sections below to enable authorization between the two platforms:
+
+##### Assign Roles to Users
+
+Harness’ SAML authorization replicates OneLogin Roles as Harness User Groups. Here is how to begin mapping between these entities.
+
+1. From OneLogin's menu, select **Users** > **Users**.
+2. Find and select a user assigned to Harness to whom you want to assign OneLogin Roles.
+3. Click the **Applications** tab.
+4. Select the specific Roles you want to assign to this user.
+5. Click **Save User** at the upper right.
+6. Repeat this section for other users to whom you want to assign Roles.
+
+##### Define Parameters
+
+1. Select **Applications** > **Parameters**, then select the `+` button to add a new Parameter.
+2. In the resulting **New Field** dialog, assign a **Field name** (for example **Groups**).
+
+   ![](./static/single-sign-on-saml-107.png)
+
+3. Select **Include in SAML assertion** and **Multi-value parameter**. Then click **Save**.
+4. Back on the **Parameters** tab, select your new **Groups** field.
+5. In the resulting **Edit Field Groups** dialog, set **Default if no value selected** to **User Roles**. Below that, select **Semicolon Delimited input (Multi-value output)**. Then select **Save**.
+
+   ![](./static/single-sign-on-saml-108.png)
+
+6. Select **Save** again at the **Parameters** page's upper right. 
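With **Semicolon Delimited input (Multi-value output)** selected, the role list arrives as one semicolon-delimited attribute value. A minimal sketch of how such a value resolves to individual group names (the sample value is purely illustrative):

```python
def split_roles(raw: str) -> list:
    """Split a semicolon-delimited multi-value SAML attribute into role names,
    trimming whitespace and dropping empty segments (e.g. a trailing ';')."""
    return [part.strip() for part in raw.split(";") if part.strip()]

# Illustrative value as it might arrive in the Groups attribute:
roles = split_roles("Harness Admins; Deployers;")
```

Each resulting name is what you would later enter as the **Group Name** when linking a Harness User Group.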
+
+##### Sync Users in Harness
+
+1. In **Home**, click **Authentication** under **ACCOUNT SETUP**. The **Authentication: Configuration** page appears.
+2. Click to expand the **Login via SAML** section.
+
+   ![](./static/single-sign-on-saml-109.png)
+
+3. You can see the SSO Provider you have set up listed in this section. Click the vertical ellipsis (**︙**) next to the SSO Provider you have set up for SSO authentication, and click **Edit**.
+
+   ![](./static/single-sign-on-saml-111.png)
+
+4. In the **Edit SAML Provider** dialog, click **Enable Authorization**.
+5. In **Group Attribute Name**, enter the name of the **Field Group** you configured in OneLogin.
+6. Click **Save**.
+7. Under **ACCOUNT SETUP**, click **User Groups**.
+8. Click on the User Group you want to link the SAML SSO Provider to.
+9. Click **Link to SSO Provider Group**.
+10. In the **Link to SSO Provider Group** dialog, in **Search SSO Settings**, select the SAML SSO Provider you have set up.
+11. In **Group Name**, enter the name of the Field Group you configured in OneLogin.
+12. Click **Save**.
+
+##### Test the Integration
+
+After you've synced Users between OneLogin and Harness, users will be assigned to the designated Harness User Group upon your next login to Harness. To test whether OneLogin authentication and authorization on Harness are fully functional, do the following:
+
+1. In Chrome, open an Incognito window, and navigate to Harness.
+2. Log into Harness using the email address of a Harness User that is also used in the SAML provider group linked to the Harness User Group.
+When the user submits their email address in Harness Manager, the user is redirected to the SAML provider to log in.
+3. Log into the SAML provider using the same email address that the user is registered with in Harness.
+Once the user logs in, the user is redirected to Harness and logged into Harness using the SAML credentials.
+4. 
In your Harness account in the other browser window, check the User Group you linked with your SAML provider. The user that logged in is now added to the User Group, receiving the authorization associated with that User Group.
+
+
+:::note
+You cannot delete a SAML SSO Provider from Harness that is linked to a Harness Group. You must first remove the link to the SSO Provider from the Group.
+:::
+
+
+### SAML SSO with Keycloak
+
+To set up SAML support in your Keycloak Harness app, make sure that the app has corresponding Users in Harness:
+
+
+:::note
+Users are not created as part of the SAML SSO integration. Users are invited to Harness using their email addresses. Once they log into Harness, their email addresses are registered as Harness Users. For more information, see SAML SSO with Harness Overview.
+:::
+
+
+This section describes the steps you must perform to use a Keycloak app for Harness SAML SSO:
+
+#### Keycloak User Accounts
+
+1. In Harness, add the users you want to set up for SAML SSO by inviting them to Harness using the same email addresses that they use in your SAML provider.
+2. In Keycloak, assign them to your SAML provider app.
+
+#### Set Up a Client in Keycloak
+
+1. Log in to your Keycloak account.
+2. In your [Master Realm](https://wjw465150.gitbooks.io/keycloak-documentation/content/server_admin/topics/realms/master.html), click **Clients**. For steps to create a new Realm, see [Create a New Realm](https://wjw465150.gitbooks.io/keycloak-documentation/content/server_admin/topics/realms/create.html).
+
+   ![](./static/single-sign-on-saml-113.png)
+
+3. Click **Create Client**. The **Create Client** settings appear.
+4. In **Client type**, select **SAML**.
+5. In **Client ID**, enter `app.harness.io`.
+6. In **Name**, enter a name for your client.
+7. Turn off **Always display in console**. Turning this option off ensures that this client is not displayed in the Account Console when you do not have an active session.
+
+   ![](./static/single-sign-on-saml-114.png)
+
+8. In **Root URL**, **Home URL**, and **Valid post logout redirect URIs**, enter `https://devtest.harnesscse.com`.
+9. In **Master SAML Processing URL**, enter your app's redirect SAML login URL.
+For example, `https://app.harness.io/gateway/api/users/saml-login?accountId=`.
+10. Click **Save**. Your client is now listed in Clients.
+11. Click on the client you just created. The client details appear.
+12. Make sure the **Name ID format** is set to **email**.
+13. Make sure the following settings are turned on:
+	1. Force POST binding
+	2. Include AuthnStatement
+	3. Sign assertions
+
+   ![](./static/single-sign-on-saml-115.png)
+
+14. In **Signature Algorithm**, select `RSA_SHA256`.
+15. In **SAML signature key name**, select **NONE**.
+16. In **Canonicalization method**, select **Exclusive**.
+17. Click **Save**.
+
+#### Create a Role
+
+1. In your Client, click **Roles**.
+2. Click **Create Role**.
+3. In **Role Name**, enter a name for the role. Click **Save**.
+
+#### Create a User
+
+1. In your Keycloak account, click **Users**.
+2. Click **Add user**. The **Create User** settings appear.
+3. In **Email**, enter the email address of the user.
+4. Turn on **Email verified**.
+5. In **First name**, enter the first name of the user.
+6. In **Last name**, enter the last name of the user.
+7. Turn on **Enabled**. This ensures that a disabled user cannot log in.
+8. Click **Join Groups**. Search for your user groups and join them.
+9. Click **Create**.
+10. Click on the user you just created and click **Credentials**.
+11. Add a password for this user.
+12. Click **Role mapping**. Assign the **admin** role to this user.
+
+#### Set up Keycloak SAML SSO in Harness
+
+1. In your Harness Account, go to **ACCOUNT SETUP** and click **Authentication**.
+2. Click **SAML Provider**. The **Add SAML Provider** settings appear.
+3. In **Name**, enter a name for your SAML provider.
+4. In **Select a SAML Provider**, click **Other**. 
+
+Once you do this, you can see additional controls to set up the SAML Provider.
+5. Copy the Endpoint URL under **Enter the SAML Endpoint URL as your Harness application's ACS URL** and paste it into **Assertion Consumer Service POSTBinding URL** in your Keycloak client's **Advanced** tab.
+
+   ![](./static/single-sign-on-saml-116.png)
+
+6. You must download the Identity Provider metadata XML from your Keycloak realm and upload the file into Harness.
+To do this, in your Keycloak account, click **Realm Settings**.
+7. Click **SAML 2.0 Identity Provider Metadata**. Save the metadata file.
+
+   ![](./static/single-sign-on-saml-117.png)
+
+8. In Harness' **Add SAML Provider** dialog, under **Upload the Identity Provider metadata XML**, click **Upload**.
+9. Add the SAML metadata file you downloaded from your Keycloak realm settings.
+10. Select **Add Entity ID** and enter your custom Entity ID. The default Entity ID is `app.harness.io`. The value you enter here will override the default Entity ID.
+11. Click **Add**.
+The new SSO provider is displayed under **Login via SAML**.
+
+
+:::note
+Harness does not support authorization with Keycloak.
+:::
+
+
+#### Enable and Test SSO with Keycloak
+
+Now that Keycloak is set up in Harness as a SAML SSO provider, you can enable and test it.
+
+1. To enable the SSO provider, select **Login via SAML**.
+2. In the resulting **Enable SAML Provider** dialog, click **TEST** to verify the SAML connection you've configured.
+3. Upon a successful test, Harness will display the **SAML test successful** banner at the top.
+4. Click **CONFIRM** to enable SAML SSO in Harness.
+
+### Harness Local Login
+
+To prevent lockouts or in the event of OAuth downtime, a User in the Harness Administrators Group can use the [**Local Login**](http://app.harness.io/auth/#/local-login) URL (http://app.harness.io/auth/#/local-login) to log in and update the OAuth settings.
+
+![](./static/single-sign-on-saml-118.png)
+
+1. 
Log in using **Harness Local Login**. +2. Change the settings to enable users to log in. + + +:::note +You can disable Local Login using the feature flag `DISABLE_LOCAL_LOGIN`. Contact [Harness Support](mailto:support@harness.io) to enable the feature flag. +::: diff --git a/docs/platform/3_Authentication/4-single-sign-on-sso-with-oauth.md b/docs/platform/3_Authentication/4-single-sign-on-sso-with-oauth.md new file mode 100644 index 00000000000..51f8054da47 --- /dev/null +++ b/docs/platform/3_Authentication/4-single-sign-on-sso-with-oauth.md @@ -0,0 +1,127 @@ +--- +title: Single Sign-On (SSO) with OAuth +description: This document explains single sign-on with various OAuth providers. +# sidebar_position: 2 +helpdocs_topic_id: rb33l4x893 +helpdocs_category_id: fe0577j8ie +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness supports Single Sign-On (SSO) with OAuth 2.0 identity providers, such as GitHub, Bitbucket, GitLab, LinkedIn, Google, and Azure. This integration allows you to use an OAuth 2.0 provider to authenticate your Harness Users. + +![](./static/single-sign-on-sso-with-oauth-119.png) +Once OAuth 2.0 SSO is enabled, Harness Users can simply log into Harness using their GitHub, Google, or other provider's email address. + +### Before you begin + +* See Authentication Overview. +* See Access Management (RBAC) Overview. +* For information on OAuth 2.0 see [OAuth 2 Simplified](https://aaronparecki.com/oauth-2-simplified/) from Aaron Parecki. + +### Requirements + +To set up OAuth 2.0 successfully, the following requirements should be met: + +* Each Harness User should be registered with Harness using their email address. Users are registered once they have logged into Harness. Harness Users are required to register the first time they log into Harness. +* A Harness User's email address should also be used to authenticate with the OAuth 2.0 provider you plan to enable in Harness for SSO. 
+
+For example, if a Harness User is registered with Harness using the email address **JohnOAuth20@outlook.com**, and OAuth SSO is enabled in Harness using Bitbucket as the provider, then the user must also be registered with Bitbucket using **JohnOAuth20@outlook.com**.
+
+#### GitHub Primary Email Required For Harness Login
+
+GitHub supports [primary](https://docs.github.com/en/github/setting-up-and-managing-your-github-user-account/managing-email-preferences/changing-your-primary-email-address) and secondary email addresses:
+
+![](./static/single-sign-on-sso-with-oauth-120.png)
+
+If you use GitHub for OAuth 2.0 SSO with Harness, the primary email must be used for the Harness account and login.
+
+### Setup Overview
+
+Only Harness Users that are members of User Groups with Create/Edit permissions for Authentication Settings can set up and enable OAuth 2.0 SSO.
+
+Setting up Harness OAuth 2.0 SSO involves the following high-level steps:
+
+1. Ensure that the email addresses of registered Harness Users are also registered with the OAuth 2.0 provider you will use for Harness OAuth 2.0 SSO. This applies to all users you plan to invite to Harness after you enable Harness OAuth 2.0 SSO.
+2. Enable Harness OAuth 2.0 SSO, and select the OAuth 2.0 providers to use for SSO.
+3. Test SSO by having a user log into Harness using each enabled OAuth 2.0 provider.
+
+#### How Do I Prevent Lockouts?
+
+The following steps can help you prevent lockouts when setting up SSO in Harness:
+
+* When you enable OAuth 2.0 SSO, using a Harness User account that is a member of the Administrator Group, remain logged in until you have tested SSO using a separate User account. If there is any error, you can disable OAuth 2.0 SSO.
+* Ensure that one or more Harness Users in the Administrators Group are registered with Harness using the same email address they use to log into the OAuth 2.0 provider you plan to use for SSO. Repeat this test for each enabled OAuth 2.0 provider. 
+
+If you accidentally get locked out of Harness, email [support@harness.io](mailto:support@harness.io), call 855-879-7727, or contact [Harness Sales](https://harness.io/company/contact-sales).
+
+#### Harness Local Login
+
+To prevent lockouts or in the event of OAuth downtime, a User in the Harness Administrators Group can use the [**Local Login**](http://app.harness.io/auth/#/local-login) URL (http://app.harness.io/auth/#/local-login) to log in and update the OAuth settings.
+
+![](./static/single-sign-on-sso-with-oauth-121.png)
+
+1. Log in using **Harness Local Login**.
+2. Change the settings to enable users to log in.
+
+You can disable Local Login using the feature flag `DISABLE_LOCAL_LOGIN`. Contact [Harness Support](mailto:support@harness.io) to enable the feature flag.
+
+### Set Up OAuth 2.0 SSO
+
+To set up OAuth 2.0 SSO, do the following:
+
+1. Log into Harness using a Harness User account that is a member of the Administrator User Group with Create/Edit and Delete permissions for Authentication Settings. For information on Harness RBAC, see [Access Management (RBAC) Overview](../4_Role-Based-Access-Control/1-rbac-in-harness.md).
+
+   The email address used to log into Harness should also be registered with the OAuth 2.0 providers you intend to enable for Harness SSO.
+
+2. Click **Home**, and then click **Access Control** under **ACCOUNT SETUP**. The **Access Control** page appears.
+
+   ![](./static/single-sign-on-sso-with-oauth-122.png)
+
+3. In the **Users** tab, you can see the list of all the **Active Users** and their **Email**.
+4. Before you set up SSO, confirm that your users' email addresses registered with Harness are the same email addresses they use to log into the OAuth 2.0 provider you're enabling for Harness SSO.
+5. Click **Authentication** under **ACCOUNT SETUP**. The **Authentication: Configuration** page appears.
+6. If it's not already enabled, enable **Use Public OAuth Providers**.
+7. 
Enable each public OAuth 2.0 provider you want to use for SSO. + + ![](./static/single-sign-on-sso-with-oauth-123.png) + + +### Log In With An OAuth 2.0 Provider + +The first time you log into Harness using OAuth 2.0 SSO, you will be redirected to the OAuth 2.0 provider. Enter the same email address you used for Harness, along with the OAuth 2.0 provider-specific password. Next, you are redirected back to Harness and automatically logged in. + +For all future logins, if you are already logged into your OAuth 2.0 provider in the same browser as Harness, simply enter your email address in Harness and log in automatically. + +Let's look at an example: + +**ExampleUser** is registered in Harness with the email address **exampleharnessUser@gmail.com**: + +![](./static/single-sign-on-sso-with-oauth-124.png) +The email address **exampleharnessUser@gmail.com** is also registered with Google: + +![](./static/single-sign-on-sso-with-oauth-125.png) +And Google is enabled as the Harness SSO provider. + +ExampleUser logs into Harness with the email address **exampleharnessUser@gmail.com**: + +![](./static/single-sign-on-sso-with-oauth-126.png) +When the user clicks Google, the browser is redirected to the Google website: + +![](./static/single-sign-on-sso-with-oauth-127.png) +The user enters the email address as **exampleharnessUser@gmail.com** and clicks **Next**. The user enters the password and clicks **Next**. + +Google verifies the email address and password and returns the browser to Harness, where Example User is logged in automatically. + +Harness OAuth 2.0 login successful! + +Each time you use the OAuth provider to log into Harness, you will be required to log into the OAuth provider first. This is the standard OAuth process. + +### Limit OAuth 2.0 SSO Domain Names + +By default, any member invited to Harness by a Harness Administrator can log in using an OAuth 2.0 SSO identity provider that's enabled on Harness. 
However, you can limit which email domain names can be used to log into Harness. + +For example, you might set up Google as a Harness OAuth 2.0 SSO provider, but you want only users who have **example.io** in their (login) email address to be able to log in via Google. + +To filter domain names in this way, see our [Authentication Overview](../3_Authentication/1-authentication-overview.md) topic's section on [Restrict Email Domains](../3_Authentication/1-authentication-overview.md#restrict-email-domains). + +### Next steps + +* [Two-Factor Authentication](../3_Authentication/2-two-factor-authentication.md) + diff --git a/docs/platform/3_Authentication/5-single-sign-on-sso-with-ldap.md b/docs/platform/3_Authentication/5-single-sign-on-sso-with-ldap.md new file mode 100644 index 00000000000..947b0a76237 --- /dev/null +++ b/docs/platform/3_Authentication/5-single-sign-on-sso-with-ldap.md @@ -0,0 +1,360 @@ +--- +title: Provision Users and Single Sign-On (SSO) with LDAP +description: This topic explains how to configure Single Sign-On with LDAP in Harness. +# sidebar_position: 2 +helpdocs_topic_id: 142gh64nux +helpdocs_category_id: fe0577j8ie +helpdocs_is_private: false +helpdocs_is_published: true +--- + + +:::note +Currently, this feature is behind the feature flag `NG_ENABLE_LDAP_CHECK`. Contact Harness Support to enable the feature. + +::: + +Harness supports Single Sign-On (SSO) with LDAP implementations, including Active Directory and OpenLDAP. Integrating Harness with your LDAP directory enables you to log your LDAP users into Harness as part of Harness' SSO infrastructure. + +Once you integrate your Harness account with LDAP, you can create a Harness User Group and sync it with your LDAP directory users and groups. Then the users in your LDAP directory can log into Harness using their LDAP emails and passwords. + + +### Important + +* Make sure that the FirstGen Delegate is active to configure LDAP settings. 
+
+### Lightweight Directory Access Protocol (LDAP) overview
+
+Lightweight Directory Access Protocol (LDAP) is an application protocol for working with various directory services.
+
+Directory services, such as Active Directory, store user and account information, along with security information such as passwords, and then share that information with other devices on the network. This lets you use LDAP to authenticate, access, and find information.
+
+Harness supports Single Sign-On through Active Directory and OpenLDAP.
+
+### Harness LDAP setup overview
+
+Here is an overview of the steps to set up SSO with LDAP in Harness:
+
+![](./static/single-sign-on-sso-with-ldap-21.png)
+
+1. Add LDAP as an SSO Provider in Harness. This step involves authenticating with your LDAP server and defining how Harness will query it for users and groups.
+2. Add a Harness User Group and link it to your LDAP directory. Harness syncs all the users in that LDAP user group automatically and manages user authorization.
+3. Enable the LDAP Provider you set up in Harness as the Harness SSO provider.
+4. To verify the LDAP SSO, log into Harness using one of the synchronized LDAP users.
+
+### Ports and permissions
+
+The following ports and permissions are required to add LDAP as a Harness SSO provider.
+
+#### Ports
+
+The Harness LDAP connection is between the Harness delegate and your LDAP server. The delegate uses the following ports:
+
+| Protocol | Port |
+| --- | --- |
+| **HTTPS** | 443 |
+| **LDAP without SSL** | 389 |
+| **Secure LDAP (LDAPS)** | 636 |
+
+
+:::note
+By default, LDAP traffic is transmitted unsecured. For Windows Active Directory, you can make LDAP traffic confidential and secure by using SSL/TLS. You can enable LDAP over SSL by installing a certificate from a Microsoft certification authority (CA) or a non-Microsoft CA. 
+
+:::
+
+#### Permissions
+
+Authentication with an LDAP server is called the Bind operation. The Bind operation exchanges authentication information between the LDAP client (the Harness delegate) and your LDAP server. The security-related semantics of this operation are defined in RFC 4513.
+
+When you configure Harness with LDAP, you will enter a Bind DN (distinguished name) for the LDAP directory user account used to authenticate.
+
+The specific permissions needed by Harness depend on the LDAP directory service you are using.
+
+* **Windows Active Directory:** By default, all Active Directory users in the **Authenticated Users** group have Read permissions to the entire Active Directory infrastructure. If you have limited this, ensure that the account used to connect Harness can enumerate the Active Directory LDAP users and groups by assigning it **Read MemberOf** rights to **User** objects. Changing the default is not a trivial task and requires you to change the basic authorization settings of your Active Directory. For more information, see [Configure User Access Control and Permissions](https://docs.microsoft.com/en-us/windows-server/manage/windows-admin-center/configure/user-access-control) from Microsoft.
+* **OpenLDAP:** The default access control policy is to allow read access by all clients. If you change this default, ensure that the account used to connect Harness to OpenLDAP is granted the **Authenticated users** entity. For more information, see [Access Control](https://www.openldap.org/doc/admin24/access-control.html) from OpenLDAP.
+
+### Add LDAP SSO provider
+
+Adding your LDAP Provider to Harness initially involves establishing a connection from Harness (specifically, the Harness delegate) to your LDAP server and querying your LDAP directory for the users and groups you want to sync with Harness for SSO. 
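+
+Before the delegate can query your directory, the ports listed earlier must be reachable from the delegate host. A quick reachability check, with a placeholder hostname you would replace with your own:
+
+```
+# Plain LDAP (389):
+nc -vz ldap.example.com 389
+
+# Secure LDAP (636) — openssl also prints the certificate chain
+# that the delegate must trust:
+openssl s_client -connect ldap.example.com:636 -brief </dev/null
+```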
+
+#### Query your LDAP directory
+
+If you need to query your LDAP server before or during the Harness LDAP SSO setup, use the **ldapsearch** CLI tool (Linux/Mac), [LDAP Admin](http://www.ldapadmin.org/) (Windows), the **dsquery** CLI tool (Windows), **Active Directory Users and Computers** (Windows), or [Windows PowerShell](https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ee617195(v=technet.10)).
+
+For example, the following ldapsearch command will query an Active Directory LDAP directory running on an AWS EC2 instance and return LDAP Data Interchange Format (LDIF) output, which you can redirect to a file if needed:
+
+
+```
+ldapsearch -h example.com -p 389 -x -b "DC=example,DC=com"
+```
+The output will include the distinguished names, objectClass, and canonical names for the objects in the LDAP directory.
+
+The same query using dsquery to query Active Directory is:
+
+
+```
+dsquery * -limit 0 >>all-objects.txt
+```
+To query for all users using dsquery:
+
+
+```
+dsquery * -limit 0 -filter "(&(objectClass=User)(objectCategory=Person))" -attr * >>all-users.txt
+```
+#### Add LDAP for Harness SSO
+
+To add your LDAP directory as a Harness SSO provider, perform the following steps:
+
+1. In your Harness Account, click **Account Settings**.
+2. Click **Authentication**.
+3. Select **LDAP Provider**.
+
+   ![](./static/single-sign-on-sso-with-ldap-22.png)
+
+   The LDAP Provider settings appear.
+4. Enter a **Name** for your LDAP Provider.
+5. To use the LDAP SSO configuration for authorization, select **Enable Authorization**.
+Use this setting if you want to synchronize LDAP users into Harness through linked Harness user groups. If you link the LDAP SSO configuration with a Harness user group without enabling authorization, Harness does not synchronize LDAP users into the user group periodically. The manual synchronization option also remains unavailable. 
You can choose to leave authorization disabled when creating the LDAP configuration and enable it at a later time.
+6. Click **Continue**.
+
+#### Add Connection Settings
+
+1. In **Host**, enter the hostname for the LDAP server. Harness uses DNS to resolve the hostname. You can also use the public IP address of the host.
+2. In **Port**, enter `389` for standard LDAP. If you want to connect over Secure LDAP (LDAPS), use port 636, and enable the **Use SSL** setting.
+3. Select **Use SSL** if you entered port 636 in **Port** and are connecting over Secure LDAP (LDAPS).
+4. Select **Enable Referrals** if you have referrals configured for your LDAP authentication.
+5. In **Max Referral Hops**, enter the maximum number of referral hops.
+6. In **Connection Timeout**, enter the number of milliseconds to wait for an LDAP connection before timing out. For example, `5000` is equal to 5 seconds.
+7. Select or clear **Recursive Membership Search** to enable or disable nested LDAP queries, which can optimize LDAP Group Sync performance. If you clear this setting, Harness does not run nested LDAP queries and performs only a flat group search.
+8. In **Response Time**, enter the number of milliseconds to wait for an LDAP response before timing out. For example, `5000` is equal to 5 seconds.
+9. In **Bind DN**, enter the distinguished name of the directory object used for the Bind operation.
+	The Bind operation is the authentication exchange between Harness and the LDAP server. Typically, this is the user object for the administrator.
+	For example:
+	`cn=Administrator,CN=Users,DC=example,DC=com`
+	This user will be used for all LDAP queries performed by Harness.
+10. In **Password**, enter the password to log into the LDAP host.
+	This is the password associated with the user identified in **Bind DN**.
+
+	![](./static/single-sign-on-sso-with-ldap-23.png)
+
+11. Click **Test Connection**. Once the connection is successful, click **Continue**. 
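+
+If **Test Connection** fails, you can reproduce the same bind outside Harness using ldapsearch with the Bind DN and password you entered above (placeholder values shown):
+
+```
+# A successful simple bind returns directory entries; a bad password
+# returns "ldap_bind: Invalid credentials (49)".
+ldapsearch -H ldap://ldap.example.com:389 \
+  -D "cn=Administrator,CN=Users,DC=example,DC=com" \
+  -w 'bind-password' \
+  -b "DC=example,DC=com" "(objectClass=user)" cn mail
+```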
+
+#### Add a User Query
+
+The details you enter in this section will be used to search for users in the LDAP directory. These users are added to Harness.
+
+With User Queries, Harness lets you set the scope within which it can perform the LDAP user search.
+
+1. Click **New User Query**.
+2. In **Base DN**, enter the relative distinguished name (RDN) for the Users object in the directory.
+If you are logged into the Active Directory server, you can enter **dsquery user** at the command line and you will see a distinguished name for a user object, such as:
+
+   ```
+   CN=John Doe,CN=Users,DC=mycompany,DC=com
+   ```
+   The **Base DN** is the relative distinguished name (RDN) following the user common name (CN). Typically, this is the Base DN you should enter:
+
+   ```
+   CN=Users,DC=mycompany,DC=com
+   ```
+   Once you have the Base DN, you can ensure that it provides all of the attributes for your LDAP users with the **dsquery** command piped to **dsget user**:
+
+   ```
+   dsquery user dc=mycompany,dc=com | dsget user -samid -fn -ln -dn
+   ```
+   The result will include all of the users and help you with setting up your query.
+
+3. In **Search Filter**, enter the search filter for the attribute to use when looking for users belonging to the **Base DN**.
+The search filter defines the conditions that must be fulfilled for the LDAP search using the entry in **Base DN**.
+Typically, **Search Filter** is either:
+**(objectClass=user)** or **(objectClass=person)**
+In dsquery, if the command **dsquery \* -filter "(objectClass=user)"** returns the LDAP users, then **(objectClass=user)** is the correct filter.
+4. In **Name Attribute**, enter the common name attribute for the users in your LDAP directory. Typically, this is **cn**. To list all attributes for a user, enter the following dsquery:
+
+   ```
+   dsquery * "CN=users,DC=mycompany,DC=com" -filter "(samaccountname=user_name)" -attr *
+   ```
+5. 
In **Email Attribute**, enter the LDAP user attribute that contains the users' email address. Harness uses email addresses to identify users.
+Typically, the attribute name is **userPrincipalName** (most common), **email**, or **mail**.
+6. In **Group Membership Attribute**, enter **memberOf** to return a list of all of the groups of which each user is a member. The dsquery for all groups the user John Doe (john.doe) is a member of would be:
+
+   ```
+   dsquery user -samid john.doe | dsget user -memberof | dsget group -samid
+   ```
+7. Click **Test**.
+
+   ![](./static/single-sign-on-sso-with-ldap-24.png)
+
+8. Once your test is successful, click **Continue**.
+
+#### Add a Group Query
+
+The details you enter in this section will be used to search for user groups in the LDAP directory. These user groups are added to Harness.
+
+With Group Queries, Harness lets you set the scope within which it can perform the LDAP user group search.
+
+1. In **Base DN**, enter the distinguished name of the LDAP group you want to add. This should be the LDAP group containing the users you searched for in **User Queries**.
+   To see a list of all the groups in your LDAP directory, use this dsquery command:
+   ```
+   dsquery group DC=mycompany,DC=com -o dn
+   ```
+   To ensure that the group contains the members you want, use the dsget command:
+   ```
+   dsget group "CN=Group_Name,CN=Users,DC=mycompany,DC=com" -members | dsget user -samid -upn -desc
+   ```
+   Typically, you will want to pick the Users group that gives future searches for groups a wide scope. For example:
+   ```
+   CN=Users,DC=mycompany,DC=com
+   ```
+   Later, when you search for LDAP groups as part of adding group members to Harness, your search will be performed within the scope of the group you set in **Base DN**.
+
+2. In **Search Filter**, enter **(objectClass=group)** because you are searching for an LDAP group.
+3. In **Name Attribute**, enter **cn** for the name attribute.
+4. 
In **Description Attribute**, enter **description** to sync the LDAP group description.
+   To see the description in your LDAP directory, use dsquery:
+   ```
+   dsquery * -filter "(objectCategory=group)" -attr sAMAccountName description
+   ```
+5. Click **Test**.
+
+   ![](./static/single-sign-on-sso-with-ldap-25.png)
+
+6. Once your test is successful, click **Save**.
+
+Your new LDAP Provider is listed under SSO Providers.
+
+![](./static/single-sign-on-sso-with-ldap-26.png)
+
+
+:::note
+Once LDAP is set up and enabled in Harness, you cannot add a second LDAP SSO entry in Harness. The UI for adding LDAP will be disabled.
+
+:::
+
+### Add a Harness User Group with LDAP users
+
+Once you have configured an LDAP SSO Provider for Harness, you can create a Harness User Group and sync it to your LDAP directory.
+
+To do this, perform the following steps:
+
+1. In your Harness Account, click **Account Settings**.
+2. Click **Access Control** and click **User Groups**.
+3. Click **New User Group**.
+4. Enter a **Name** for your User Group and click **Save**.
+Your User Group is listed in User Groups.
+5. Click on your User Group and then click **Link to SSO Provider Group**.
+6. Search and select your LDAP Provider.
+7. In **LDAP Group Search Query**, search for your LDAP group.
+
+   ![](./static/single-sign-on-sso-with-ldap-27.png)
+
+8. Select your LDAP group from the list and click **Save**.
+
+
+:::note
+Once you link your SSO Provider Group in Harness, it will take a few minutes to sync the LDAP group users with the Harness group. Harness syncs with the LDAP server every 15 minutes. If you add users to your LDAP directory, you will not see them immediately in Harness. Once Harness syncs with your LDAP directory, the users are added to the Harness group.
+
+:::
+
+
+:::note
+If you want to use the LDAP SSO configuration for authorization, enable authorization on the LDAP SSO configuration and link the SSO configuration to the user group. 
Harness synchronizes LDAP users into the user group only if you enable authorization. + +::: + +Later, when you enable LDAP SSO in Harness, and users in this group log into Harness, Harness will verify their email addresses and passwords using its connection to the LDAP provider. + + +:::note +Harness treats LDAP group names as case-sensitive. QA, Qa, qA, will all create new groups. + +::: + +Users added to the LDAP-linked Harness User Group are also added as Harness Users. + + +:::note +If the Harness User Group is removed, the User account remains, and when the User logs into Harness, its email address and password are verified by the LDAP provider. The User can also be added to any other Harness User Group. + +::: + +### Enable LDAP for SSO in Harness + +You can enable the LDAP SSO Provider you configured in Harness and begin using LDAP as the login method for Harness users. + + +:::warning +Before you enable LDAP for SSO and log out of Harness to test it, ensure that your LDAP users have the passwords associated with their email addresses. If they do not have the passwords, they will be locked out of Harness. Active Directory passwords are stored using non-reversible encryption. You can also add a new user to your LDAP group, record its password, wait 15 minutes for the corresponding Harness group to refresh, and then log into Harness using the new user. +Contact Harness Support at [support@harness.io](mailto:support@harness.io) if there is a lockout issue. +::: + + +To enable the LDAP provider you just added, perform the following steps: + +1. In your Harness Account, click **Account Settings**. +2. Click **Authentication**. +3. Select **Login via** **LDAP**. +4. Verify your **Email** and **Password** in **Verify and Enable LDAP Configuration**. Click **Test**. +5. Click **Enable** once your test is successful. + + +:::note +Users provisioned with LDAP are added to the Account scope and are sent an email invitation to log into Harness. 
If SAML is also set up with Harness, then users can log in via SAML. See [Single Sign-On (SSO) with SAML](../3_Authentication/3-single-sign-on-saml.md). + +::: + +With a Harness user group synced with an LDAP group and LDAP SSO enabled, you can now log into Harness using LDAP users from the LDAP group. + +### Delink a User Group from LDAP + +To delink a Harness user group from its linked LDAP provider, perform the following steps: + +1. In your Harness Account, click **Account Settings**. +2. Click **Access Control** and click **User Groups**. +3. Click the User Group you wish to delink. +4. Click **Delink Group**. The **Delink Group** confirmation appears. + +   ![](./static/single-sign-on-sso-with-ldap-28.png) + +5. To retain the members in the Harness User Group, select **Retain all members in the User Group**. +If LDAP SSO is enabled in Harness, the users can still log into Harness. If LDAP SSO is disabled, the users cannot log into Harness. +6. Click **Save**. + + +:::note +Delinking a User does not remove the User from Harness. It removes them from the LDAP-linked User Group. To remove the User, go to the **Users** page, find the individual User account, and delete the User. +::: + + +#### Synchronize LDAP users into a user group manually + +Harness provides you with an option to synchronize LDAP users with a Harness user group manually. + +Before you begin synchronization, make sure that you have linked the Harness user group to the LDAP SSO configuration. + +To synchronize LDAP users with a Harness user group manually, perform the following tasks: + +1. In your Harness account, click **Account Settings**, and then click **Authentication**. +2. In the **Login via LDAP** section, click the three dots on the LDAP SSO configuration, and then click **Synchronize User Groups**. +3. Verify that Harness is synchronizing LDAP users into the user group. +In **Account Settings**, click **Access Control**, and then click the **User Groups** tab. 
Then, click the user group that is linked to the LDAP SSO configuration, and verify that LDAP users are listed in the group. + +### Harness Local Login + +To prevent lockouts, a User in the Harness Administrators Group can use the [**Local Login**](http://app.harness.io/auth/#/local-login) URL to log in and update the settings. + +1. Log in using **Harness Local Login**. +2. Change the settings to enable users to log in. + + +:::note +You can disable Local Login using the feature flag `DISABLE_LOCAL_LOGIN`. Contact [Harness Support](mailto:support@harness.io) to enable the feature flag. +::: diff --git a/docs/platform/3_Authentication/6-provision-users-with-okta-scim.md b/docs/platform/3_Authentication/6-provision-users-with-okta-scim.md new file mode 100644 index 00000000000..572eacf1cad --- /dev/null +++ b/docs/platform/3_Authentication/6-provision-users-with-okta-scim.md @@ -0,0 +1,246 @@ +--- +title: Provision Users with Okta (SCIM) +description: Explains how to provision and manage Harness Users and User Groups using Okta's SCIM integration. +# sidebar_position: 2 +helpdocs_topic_id: umv2xdnofv +helpdocs_category_id: fe0577j8ie +helpdocs_is_private: false +helpdocs_is_published: true +--- + +System for Cross-Domain Identity Management (SCIM) is an open standard protocol for the automation of user provisioning. + +Automatic provisioning refers to creating users and user groups in Harness. In addition to creating these, automatic provisioning includes the maintenance and removal of users and user groups as and when required. + +This topic describes how to build a SCIM endpoint using Okta and integrate it with Harness. + +### Before you begin + +* This topic assumes you understand System for Cross-domain Identity Management (SCIM). For an overview, see the article [Introduction to System for Cross-domain Identity Management (SCIM)](https://medium.com/@pamodaaw/system-for-cross-domain-identity-management-scim-def45ea83ae7). 
+* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Access Management (RBAC) Overview](../4_Role-Based-Access-Control/1-rbac-in-harness.md) +* Make sure you are an Administrator in your Okta account and have the **Account Admin** permissions in Harness. +* Make sure you have a Harness [API Key](../4_Role-Based-Access-Control/7-add-and-manage-api-keys.md) and a valid Token under it. The API Key must have all permissions on the Users and User Groups. + +### Review: Harness Okta SCIM Integration + +By using Okta as your identity provider, you can efficiently provision and manage users in your Harness Account, Org and Project. Harness' [SCIM](https://www.okta.com/blog/2017/01/what-is-scim/) integration enables Okta to serve as a single identity manager, for adding and removing users, and for provisioning User Groups. This is especially efficient for managing many users. + +In exchange for the convenience of Okta-provisioned users and groups, you must configure several aspects of Okta, as described in the following sections. You will also have restrictions on modifying Okta-provisioned users and groups natively within Harness, as described in [Limitations](#limitations). + +#### Features Supported + +Once you have set up the SCIM integration between Okta and Harness (as described below), Administrators will be able to perform the following Harness actions within Okta: + +* [Create users](#option-create-users), individually, in your Harness app. +* [Assign Okta-defined groups](#option-assign-groups) to your Harness app. +* [Group push](#group-push-to-harness) already-assigned groups to Harness. +* [Update User Attributes](#option-update-user-attributes) from Okta to Harness. +* [Deactivate Users](#deactivate-users) in Okta and Harness. + +### Limitations + +When you provision Harness User Groups and users from Okta, you will not be able to modify some of their attributes in Harness Manager. You must do so in Okta. 
+ +Operations that you *cannot* perform on Okta-provisioned User Groups within Harness Manager are: + +* Managing users within the User Group. +* Adding users to the User Group. +* Removing users from the User Group. +* Renaming the User Group. +* Deleting the User Group. + +If a User Group provisioned from Okta duplicates the name of an existing Harness User Group, Harness will maintain both groups. To prevent confusion, you are free to rename the native User Group (but not the Okta-provisioned group). + +Where a User Group has been provisioned from Okta, you cannot use Harness Manager to edit the member users' details (**Email Address**, **Full Name**, or **User Groups** assignments). + +You must use Okta to assign these users to other User Groups (to grant corresponding permissions). You must also use Okta to delete these users from Harness, by removing them from the corresponding Okta app. + +When you use Okta to directly assign users to Harness, those users initially have no User Group assignments in Harness. With this method, you are free to use Harness Manager to add and modify User Group assignments. + +### Step 1: Create App Integration in Okta + +To automate the provisioning of users and groups, you must add a Harness app to your Okta administrator account. To do that, perform the following steps: + +Log in to your Okta administrator account and click **Applications**. + +Click **Create App Integration**. + +![](./static/provision-users-with-okta-scim-05.png) + +The **Create a new app integration** dialog appears. Select **SAML 2.0** and click **Next**. + +![](./static/provision-users-with-okta-scim-06.png) + +In **General Settings**, enter a name in the **Application label** field, and click **Next**. + +The SAML settings appear. + +Enter your **Single sign on URL**. 
To get the Single sign on URL, add your account ID to the end of the following URL: `https://app.harness.io/gateway/api/users/saml-login?accountId=` + +In **Audience URI (SP Entity ID)**, enter `app.harness.io`. + +In **Attribute Statements (optional)**, enter a name in the **Name** field, select **Name Format** as **Basic**, and select the **Value** as **user.email**. + +In **Group Attribute Statements (optional)**, enter a name in the **Name** field, select **Name format (optional)** as **Basic**, select an appropriate **Filter**, and enter its value. + +Click **Next**. + +The **Feedback** options appear. Select an option and click **Finish**. + +![](./static/provision-users-with-okta-scim-08.png) + +Click **General** and then click **Edit** in **App Settings**. + +Select **Enable SCIM provisioning** in **Provisioning**. Click **Save**. + +![](./static/provision-users-with-okta-scim-09.png) + +### Step 2: Authorize Okta Integration + +In your Okta administrator account, click **Applications > Applications**. + +Search for your application. + +Click **Provisioning** and then click **Integration**. + +Click **Edit**. + +In **SCIM connector base URL**, enter the Base URL for your API endpoint. + +To get the **SCIM connector base URL**, add your account ID to the end of the following URL: `https://app.harness.io/gateway/ng/api/scim/account/` + +Enter `userName` in **Unique identifier field for users** and select **Supported provisioning actions**. + +Select **Authentication Mode** as **HTTP Header** and enter your API Token in **Bearer**. + +For information on how to create an API Token in Harness, see [Add and Manage API Keys](../4_Role-Based-Access-Control/7-add-and-manage-api-keys.md). + +![](./static/provision-users-with-okta-scim-10.png) + +Click **Test Connection** and then **Save** after the test is successful. + +![](./static/provision-users-with-okta-scim-11.png) + +Your Okta app is now authorized with Harness. 
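After **Test Connection** succeeds, you can sanity-check the same SCIM connector base URL and bearer token outside Okta with a plain HTTP request. This is only a sketch: the account ID and token below are placeholder values, and the live `curl` call is left commented out so nothing is invoked accidentally.

```shell
# Placeholder values -- substitute your real Harness account ID and API token.
ACCOUNT_ID="AbC123xYz"
TOKEN="pat.example.token"

# The SCIM connector base URL is your account ID appended to the fixed prefix:
BASE="https://app.harness.io/gateway/ng/api/scim/account/${ACCOUNT_ID}"
echo "${BASE}"

# With a valid token, a GET on the Users resource should return SCIM JSON.
# Uncomment to call the live API:
# curl -s -H "Authorization: Bearer ${TOKEN}" "${BASE}/Users"
```

If the commented request returns an authentication error, recheck the token entered in **Bearer**; if it returns 404, recheck the account ID segment of the URL.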
+ +Next, click **To App** settings in **Provisioning** and enable **Create Users**, **Update User Attributes**, and **Deactivate Users**. + +![](./static/provision-users-with-okta-scim-12.png) + +Click **Save**. + +### Option: Create Users + +To directly assign your Harness app to individual (existing) Okta users, thereby provisioning the users in your Harness Account, perform the following steps: + +In your Okta application, click **Assignments**. + +Click **People**. + +Click **Assign** > **Assign to People**. The Assignments settings appear. + +Select users and click **Assign**. + +Click **Save and Go Back**. + +Click **Done** after you have assigned all the intended users. + +Users with the Harness app assignment now appear in **People**. + +![](./static/provision-users-with-okta-scim-13.png) + +You can edit or delete users from here. + +The user is now listed in your Harness account. + +![](./static/provision-users-with-okta-scim-14.png) + +### Option: Assign Groups + +To assign the Harness app to Okta-defined groups of users, perform the following steps: + +In your Okta application, click **Assignments**. + +Click **Groups**. + +Click **Assign** > **Assign to Groups**. The Assignments settings appear. + +Select groups and click **Assign**. + +Click **Save and Go Back**. + +Click **Done** after you have assigned all the intended groups. + +Groups with the Harness app assignment now appear in **Groups**. + +You can edit or delete groups from here. + +#### Group Push to Harness + +To provision your application's assigned groups in Harness: + +Click **Push Groups** in your application, then select **Push Groups** > **Find Groups by Name**. + +![](./static/provision-users-with-okta-scim-15.png) + +Search for the group(s) you want to provision. + +![](./static/provision-users-with-okta-scim-16.png) + +Click **Save**. You can see the status of this Push Group in your application. 
+ +![](./static/provision-users-with-okta-scim-17.png) + +If an error prevents adding, deleting, or updating an individual user to Harness, you must retry provisioning the user in Okta later, after resolving the issues. For more information, see [Troubleshooting Group Push](https://help.okta.com/en-us/Content/Topics/users-groups-profiles/usgp-group-push-troubleshoot.htm). + +This group is now listed in your Harness account. + +![](./static/provision-users-with-okta-scim-18.png) + +When provisioning user groups through SCIM, Harness replaces any `.`, `-`, or space in your user group name and uses it as the group identifier. For example, if your group name is `example-group` in your SCIM provider, its identifier in Harness would be `example_group`. + +### Option: Update User Attributes + +You can edit a user's profile in Okta to update the following attribute values for the corresponding user in Harness: + +* Given name +* Family name +* Primary email +* Primary email type +* Display name + +To update user attributes: + +1. From your Okta administrator account, select **Directory** > **People**. +2. Locate the user you want to edit, and click their name to display their profile. +3. Click the **Profile** tab, then click the **Edit** button. +4. Update the desired attributes, then click **Save**. + +   ![](./static/provision-users-with-okta-scim-19.png) + +Only the five fields listed at the top of this section will be synced to Harness users. You can update values in other fields, but those values will be saved for this user only in Okta. They won't be reflected in Harness. +The Display name in Okta is displayed as the user name in Harness. + +#### Deactivate Users + +You can deactivate users in Okta to delete their Harness accounts, as follows: + +1. From Okta's top menu, select **Directory** > **People**, then navigate to the user you want to deactivate. +2. From that user's profile, select **More Actions** > **Deactivate**. +3. 
Click **Deactivate** in the resulting confirmation dialog. + +Deactivating a user removes them from all their provisioned apps, including Harness. While a user account is deactivated, you cannot make changes to it. However, as shown below, you can reactivate users by clicking **Activate** on their profile page. + +### What If I Already Have App Integration for Harness FirstGen? + +If you currently have a Harness FirstGen App Integration setup in your IDP and are now trying to set up one for Harness NextGen, make sure the user information is also included in the FirstGen App Integration before attempting to log into Harness NextGen through SSO. + +Harness authenticates users using either the FirstGen App Integration or the NextGen App Integration. If you have set up both, Harness continues to use your existing App Integration in FirstGen to authenticate users that attempt to log in using SSO. + +Let us look at the following example: + +1. An App Integration is already set up for FirstGen with 2 users as members: +`user1@example.com` and `user2@example.com`. +2. Now you set up a separate App Integration for Harness NextGen and add `user1@example.com` and `user_2@example.com` as the members. +3. You provision these users to Harness NextGen through SCIM. +4. `user1@example.com` and `user_2@example.com` try to log in to Harness NextGen through SSO. +5. The FirstGen App Integration is used for user authentication. +`user1@example.com` is a member of the FirstGen App Integration and hence is authenticated and successfully logged in to Harness NextGen. 
+`user_2@example.com` is not a member of the FirstGen App Integration, hence the authentication fails and the user cannot log in to Harness NextGen. + +![](./static/provision-users-with-okta-scim-20.png) + +### Assigning Permissions Post-Provisioning + +Permissions can be assigned manually or via the Harness API: + +* [Add and Manage Roles](../4_Role-Based-Access-Control/9-add-manage-roles.md) +* [Add and Manage Resource Groups](../4_Role-Based-Access-Control/8-add-resource-groups.md) +* [Permissions Reference](../4_Role-Based-Access-Control/ref-access-management/permissions-reference.md) + diff --git a/docs/platform/3_Authentication/7provision-users-and-groups-with-one-login-scim.md b/docs/platform/3_Authentication/7provision-users-and-groups-with-one-login-scim.md new file mode 100644 index 00000000000..d14839300de --- /dev/null +++ b/docs/platform/3_Authentication/7provision-users-and-groups-with-one-login-scim.md @@ -0,0 +1,254 @@ +--- +title: Provision Users and Groups with OneLogin (SCIM) +description: Explains how to provision and manage Harness Users and User Groups using OneLogin's SCIM integration. +# sidebar_position: 2 +helpdocs_topic_id: y402mpkrxq +helpdocs_category_id: fe0577j8ie +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can use OneLogin to provision users and groups in Harness. + +Harness' SCIM integration enables OneLogin to serve as a single identity manager for adding and removing users. This is especially efficient for managing large numbers of users. + +This topic describes how to set up OneLogin provisioning for Harness Users and User Groups. + +### Before you begin + +* This topic assumes you understand the System for Cross-domain Identity Management (SCIM). For an overview, see the article [Introduction to System for Cross-domain Identity Management (SCIM)](https://medium.com/@pamodaaw/system-for-cross-domain-identity-management-scim-def45ea83ae7). 
+* [Learn Harness' Key Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Access Management (RBAC) Overview](../4_Role-Based-Access-Control/1-rbac-in-harness.md) +* Make sure you are an Administrator in your OneLogin account and have the **Account Admin** permissions in Harness. +* Make sure you have a Harness [API Key](../4_Role-Based-Access-Control/7-add-and-manage-api-keys.md) and a valid Token under it. The API Key must have all permissions on the Users and User Groups. + +### Limitations + +This integration does not support updating a provisioned user's **Email** in OneLogin. Once the user is provisioned in Harness, the user's email address must remain the same. If you change the email address in OneLogin and then try to remove the user from Harness, the removal will fail. + +Once a user is provisioned in Harness, you cannot delete the user in the Harness Manager. You must delete the user in OneLogin. + +The provisioned user cannot use the Harness OneLogin app to log into Harness unless OneLogin is also set up for [OneLogin SAML authentication in Harness](https://docs.harness.io/article/zy8yjcrqzg-single-sign-on-sso-with-saml#saml_sso_with_one_login). They must use their email address and password. + +### Step 1: Add Harness App to OneLogin​ + +The first step is adding the Harness app to your OneLogin **Applications**. + +1. In **Applications**, click **Add App**. +2. Search for **Harness**. The Harness Application appears. +3. Click the Harness app to open its Configuration page and click **Save**. + +When you are done, the Harness OneLogin app appears. + +For more information on adding apps, see OneLogin's documentation: [Introduction to App Management](https://onelogin.service-now.com/support/?id=kb_article&sys_id=6ac91143db109700d5505eea4b9619a2#add). + +### Step 2: SCIM Base URL + +Next, add a special Harness account URL to the OneLogin app's SCIM Base URL. + +1. Log into your Harness account. +2. 
Copy the Harness account ID from the **Account Overview** of your Harness account. + + ![](./static/provision-users-and-groups-with-one-login-scim-128.png) + +3. Add your account ID to the end of the following URL: `https://app.harness.io/gateway/ng/api/scim/account/` + + +:::note +For Harness On-Prem, the URL will use your custom domain name and `gateway` is omitted. For example, if your On-Prem domain name is **harness.mycompany.com**: `https://harness.mycompany.com/ng/api/scim/account/` +::: + +4. Copy the full URL. +5. In OneLogin, open the Harness OneLogin app. +6. Click **Configuration**. +7. In **SCIM Base URL**, paste the Harness URL you copied. + +Next, we will use a Harness API access key for the **SCIM Bearer Token** setting in your Harness OneLogin app. + +### Step 3: SCIM Bearer Token + +The SCIM Bearer Token value is used to authenticate requests and responses sent between the OneLogin SCIM provisioning service and Harness. + +1. In Harness Manager, create an API token by following the instructions in [Add and Manage API Keys](../4_Role-Based-Access-Control/7-add-and-manage-api-keys.md). +2. Copy the new API token. +3. In OneLogin, paste the API token in the **SCIM Bearer Token** setting in your Harness OneLogin app. +4. Ensure that the API Status is enabled and click **Save**. + +### Step 4: Set Up Harness OneLogin App Provisioning + +Next, you will set the required provisioning settings for the Harness OneLogin app. + +Ensure these settings are set up exactly as shown below. + +1. In the Harness OneLogin app, click **Provisioning**. +2. In **Workflow**, ensure the following are selected: +* Enable provisioning +* Create user +* Delete user +* Update user +* When users are deleted in OneLogin, or the user's app access is removed, perform the following action: **Delete**. +* When user accounts are suspended in OneLogin, perform the following action: **Suspend**. 
+ +When you are done, it will look like this: + +![](./static/provision-users-and-groups-with-one-login-scim-129.png) + +3. Click **Save**. + +### Option: Provision OneLogin Users to Harness + +Next, we will add users to the Harness OneLogin app. Once OneLogin SSO is enabled in Harness, these users will be provisioned in Harness automatically. + +1. In OneLogin, click **Users**. +2. Click a user. +3. In **User Info**, ensure that the user has **First name**, **Last name**, and **Email** completed. + + +:::note +Only **First name**, **Last name**, and **Email** are permitted for Harness OneLogin SCIM provisioning. Do not use any additional User Info settings. +::: + + +4. Click **Applications**. +5. In the **Applications** table, click the add button **(+)**. +6. In the **Assign new login** settings, select the Harness OneLogin App and click **Continue**. +7. In **NameID**, enter the email address for the user. This is the same email address you confirmed in **User Info**. +8. Click **Save**. The status in the **Applications** table is now **Pending**. +9. Click **Pending**. The **Create User in Application** settings appear. +10. Click **Approve**. The provisioning status changes to **Provisioned**. + +If provisioning fails, you might see something like the following error: + +![](./static/provision-users-and-groups-with-one-login-scim-130.png) + +The most common reason is incorrect **SCIM Base URL** or **SCIM Bearer Token** settings in the OneLogin app. + +If an error prevents adding, deleting, or updating an individual user to Harness, you must retry provisioning the user in OneLogin later, after resolving the issues. For more information, see **Review and Approve Provisioning Tasks for Your SCIM Test App** in [Test Your SCIM Implementation](https://developers.onelogin.com/scim/test-your-scim). + +### Verify Provisioning in Harness + +Now that you have provisioning confirmation from OneLogin, let's verify that the provisioned user is in Harness. + +1. 
In Harness, click **Account Settings**, and then select **Access Control**. +2. Click **Users**. +3. Locate the provisioned user. + +   ![](./static/provision-users-and-groups-with-one-login-scim-132.png) + +The provisioned users will receive an email invite from Harness to sign up and log in. + +### Option: Provision OneLogin Roles to Harness Groups + +You can create, populate, and delete Harness User Groups using OneLogin. + +Because OneLogin does not currently support group deletion via SCIM, you must remove User Groups using OneLogin. If you try to delete OneLogin-provisioned User Groups within Harness, you will get the error message, `Cannot Delete Group Imported From SCIM`. Once the group is removed from OneLogin, contact Harness Support to have it removed from Harness. + +To perform Harness User Group provisioning using OneLogin, you assign the Harness OneLogin app and OneLogin users to a OneLogin role. + +Next, you create a rule in the Harness OneLogin app that creates groups in Harness using the role. + +The OneLogin roles become User Groups in Harness. + +You cannot provision OneLogin users to Harness User Groups if they are already provisioned in Harness. Simply remove them from Harness and then provision them using the steps below. + +#### Add User Provisioning to the Harness OneLogin App + +1. Ensure the Harness OneLogin app is added and configured as described in steps 1 through 5 in this topic. +2. In OneLogin, open the Harness OneLogin app. +3. In **Parameters**, in **Optional Parameters**, click **Groups**. +4. In **Edit Field Groups**, select **Include in User Provisioning** and click **Save**. +5. Click **Save** to save the Harness OneLogin app. + +Next, we'll create the OneLogin role that will be used as your Harness User Group. + +#### Create OneLogin Role + +1. In OneLogin, click **Users** and select **Roles**. +2. Click **New Role**. +3. Enter a name for the new role and click **Save**. +4. In **Roles**, open the new role. +5. Click **Users**. +6. 
In **Check existing or add new users to this role**, enter the name(s) of the users to add. +7. When you have located each user name, click **Check**. +8. For each user, click **Add to Role**. When you are done, the user(s) are listed in **Users Add Manually**. +9. Click **Save**. You are returned to the Roles page. +10. Open the role. +11. In the role, click **Applications**. +12. Click the **Add Apps** button. +13. In **Select Apps to Add**, click the Harness OneLogin app. +14. Click **Save**. + +Now that the role has users and the Harness OneLogin app, we can add the Harness OneLogin app to each OneLogin user. + +#### Add Harness OneLogin app to Users + +For each of the OneLogin users you have added to the role, you will now add the Harness OneLogin app. + +1. In OneLogin, click **Users**, and then select each user you want to add. +2. On the user's page, click **Applications**. +3. Click the **Add App** button. +4. In **Assign new login**, select the Harness OneLogin app, and click **Continue**. +5. In the **Edit** settings, in **Groups**, select the role you created and click **Add**. +6. Click **Save**. + +Now that each user is associated with the Harness OneLogin app and role, you will add a rule to the Harness OneLogin app. The rule will set groups in the Harness OneLogin app using the role you created. + +#### Add Rule to Harness OneLogin App + +Next, you create a rule in the Harness OneLogin app to create groups using the role you created. + +1. Click **Application**, and then select the Harness OneLogin app. +2. In the app, click **Rules**. +3. Click **Add Rule**. +4. Name the rule. +5. In **Actions**, select **Set Groups in [Application name]**. +6. Select **Map from OneLogin**. +7. In **For each**, select **role**. +8. In **with value that matches**, enter the name of the role you created, or enter the regex `.*`. +9. Click **Save**. +10. Click **Save** to save the app. 
+ +If you have created users prior to adding the mapping rule, click **Reapply Mappings** in your Harness application's **Users** settings. + +Now that the app has a rule to set groups in Harness using the role you created, you can begin provisioning users using the app. + +#### Provision Users in Application + +Each of the OneLogin users that you added the Harness OneLogin app to can now be provisioned. + +1. In the Harness OneLogin app, click **Users**. The users are listed as **Pending**. +2. Click each user and then click **Approve**. + +The Provisioning State for each user is changed to **Provisioned**. + +#### See the Provisioned User Group in Harness + +Now that you have provisioned users using the Harness OneLogin app, you can see the new group and users in Harness. + +1. In Harness, click **Account Settings**, and then select **Access Control**. +2. Click **User Groups**. +3. Locate the name of the User Group. It is named after the role you created. Click the **User Group**. + +You can see the User Group and Users that are provisioned. + +Repeat the steps in this process for additional users. + +When provisioning user groups through SCIM, Harness replaces any `.`, `-`, or space in your role name and uses it as the group identifier. For example, if your role name is `example-group` in your SCIM provider, its identifier in Harness would be `example_group`. + +### What If I Already Have App Integration for Harness FirstGen? + +If you currently have a Harness FirstGen App Integration setup in your IDP and are now trying to set up one for Harness NextGen, make sure the user information is also included in the FirstGen App Integration before attempting to log into Harness NextGen through SSO. + +Harness authenticates users using either the FirstGen App Integration or the NextGen App Integration. If you have set up both, Harness continues to use your existing App Integration in FirstGen to authenticate users that attempt to log in using SSO. + +Let us look at the following example: + +1. 
An App Integration is already set up for FirstGen with 2 users as members: +`user1@example.com` and `user2@example.com`. +2. Now you set up a separate App Integration for Harness NextGen and add `user1@example.com` and `user_2@example.com` as the members. +3. You provision these users to Harness NextGen through SCIM. +4. `user1@example.com` and `user_2@example.com` try to log in to Harness NextGen through SSO. +5. The FirstGen App Integration is used for user authentication. +`user1@example.com` is a member of the FirstGen App Integration and hence is authenticated and successfully logged in to Harness NextGen. +`user_2@example.com` is not a member of the FirstGen App Integration, hence the authentication fails and the user cannot log in to Harness NextGen. + +![](./static/provision-users-and-groups-with-one-login-scim-133.png) + +### Assign Permissions Post-Provisioning + +Permissions can be assigned manually or via the Harness API: + +* [Add and Manage Roles](../4_Role-Based-Access-Control/9-add-manage-roles.md) +* [Add and Manage Resource Groups](../4_Role-Based-Access-Control/8-add-resource-groups.md) +* [Permissions Reference](../4_Role-Based-Access-Control/ref-access-management/permissions-reference.md) + diff --git a/docs/platform/3_Authentication/8-provision-users-and-groups-using-azure-ad-scim.md b/docs/platform/3_Authentication/8-provision-users-and-groups-using-azure-ad-scim.md new file mode 100644 index 00000000000..705ebbcb2c3 --- /dev/null +++ b/docs/platform/3_Authentication/8-provision-users-and-groups-using-azure-ad-scim.md @@ -0,0 +1,113 @@ +--- +title: Provision Users and Groups using Azure AD (SCIM) +description: Explains how to use Harness' SCIM integration with Azure Active Directory (AD) to automatically provision users and/or groups. 
+# sidebar_position: 2 +helpdocs_topic_id: 6r8hin2z20 +helpdocs_category_id: fe0577j8ie +helpdocs_is_private: false +helpdocs_is_published: true +--- + +System for Cross-Domain Identity Management (SCIM) is an open standard protocol for the automation of user provisioning. + +Automatic provisioning refers to creating users and user groups in Harness. In addition to creating these, automatic provisioning includes the maintenance and removal of users and user groups as and when required. + +This topic explains how to configure Azure Active Directory (Azure AD) to automatically provision users or groups to Harness. + +### Before you begin + +* This topic assumes you understand the System for Cross-domain Identity Management (SCIM). For an overview, see the article [Introduction to System for Cross-domain Identity Management (SCIM)](https://medium.com/@pamodaaw/system-for-cross-domain-identity-management-scim-def45ea83ae7). +* Make sure you are an Administrator in your Azure AD account and have the **Account Admin** permissions in Harness. +* Make sure you have a Harness [API Key](../4_Role-Based-Access-Control/7-add-and-manage-api-keys.md) and a valid Token under it. The API Key must have all permissions on the Users and User Groups. + +### Review: Harness Azure AD SCIM Integration + +By using Azure AD as your identity provider, you can efficiently provision and manage users in your Harness Account, Org and Project. Harness' [SCIM](https://www.okta.com/blog/2017/01/what-is-scim/) integration enables Azure AD to serve as a single identity manager, for adding and removing users, and for provisioning User Groups. This is especially efficient for managing many users. + +In exchange for the convenience of Azure AD-provisioned users and groups, you must configure several aspects of Azure AD, as described in the following sections. 
You will also have restrictions on modifying Azure AD-provisioned users and groups natively within Harness, as described in [Limitations](#limitations). + +#### Features Supported + +Once you have set up the SCIM integration between Azure AD and Harness (as described below), administrators can perform the following Harness actions within Azure AD: + +* Create users individually in your Harness app. +* Assign Azure AD-defined groups to your Harness app. +* Push already-assigned groups to Harness. +* Update user attributes from Azure AD to Harness. +* Deactivate users in Azure AD and Harness. + +### Limitations + +When you provision Harness User Groups and users from Azure AD, you cannot modify some of their attributes in Harness Manager. You must do so in Azure AD. + +Operations that you *cannot* perform on Azure AD-provisioned User Groups within Harness Manager are: + +* Managing users within the User Group. +* Adding users to the User Group. +* Removing users from the User Group. +* Renaming the User Group. +* Deleting the User Group. + +If a User Group provisioned from Azure AD duplicates the name of an existing Harness User Group, Harness maintains both groups. To prevent confusion, you are free to rename the native User Group (but not the Azure AD-provisioned group). + +Where a User Group has been provisioned from Azure AD, you cannot use Harness Manager to edit the member users' details (**Email Address**, **Full Name**, or **User Groups** assignments). + +You must use Azure AD to assign these users to other User Groups (to grant the corresponding permissions). You must also use Azure AD to delete these users from Harness, by removing them from the corresponding Azure AD app. + +When you use Azure AD to directly assign users to Harness, those users initially have no User Group assignments in Harness. With this method, you are free to use Harness Manager to add and modify User Group assignments. 
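One naming detail to keep in mind alongside the duplicate-group limitation above: as noted in the provisioning steps below, Harness derives a group identifier from the SCIM group name by replacing `.`, `-`, and spaces (for example, `example-group` becomes `example_group`). The following Python sketch mimics that documented normalization so you can predict identifier collisions between provisioned and native groups. It is illustrative only, based on the behavior described in this topic; the function name is made up and this is not Harness source code.

```python
import re

def harness_group_identifier(group_name: str) -> str:
    """Sketch of the documented SCIM group-name normalization:
    '.', '-', and spaces in the group name are replaced with
    underscores to form the Harness group identifier.
    Illustrative only -- not the actual Harness implementation."""
    return re.sub(r"[.\- ]", "_", group_name)

# A provisioned 'example-group' and a native 'example_group'
# would end up with the same identifier:
print(harness_group_identifier("example-group"))  # example_group
print(harness_group_identifier("dev.team US"))    # dev_team_US
```

Checking your Azure AD group names against this rule before provisioning helps you avoid creating a provisioned group whose identifier collides with an existing native group.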
+ +### Step 1: Add Harness from the Gallery + +Before you configure Harness for automatic user provisioning with Azure AD, you need to add Harness from the Azure AD application gallery to your list of managed SaaS applications. + +1. In the [Azure portal](https://portal.azure.com/), in the left pane, select **Azure Active Directory**.![](./static/provision-users-and-groups-using-azure-ad-scim-29.png) +2. Select **Enterprise applications** > **All applications**. +3. Click **New application** to add a new application.![](./static/provision-users-and-groups-using-azure-ad-scim-30.png) +4. In the search box, enter **Harness**, select **Harness** in the results list, and then select the **Add** button to add the application. You can now provision users to Harness. + +### Step 2: Provision Users to Harness + +1. In your Azure portal, go to **Enterprise applications** > **All applications**. +2. In the applications list, select **Harness**. +3. Select **Provisioning**.![](./static/provision-users-and-groups-using-azure-ad-scim-31.png) +4. In the **Provisioning Mode** drop-down list, select **Automatic**. +5. Under **Admin Credentials**, do the following: + 1. In the **Tenant URL** box, enter `https://app.harness.io/gateway/ng/api/scim/account/<account_id>`, replacing `<account_id>` with your Harness account ID. + You can obtain your Harness account ID from the **Account Overview** of your Harness account.![](./static/provision-users-and-groups-using-azure-ad-scim-32.png) + 2. In the **Secret Token** box, enter the SCIM Authentication Token value. This is the Harness API token under your API Key. Make sure this key's permissions are inherited from the **Account Administrator** User Group. + For more information on how to create an API token, see [Add and Manage API Keys](../4_Role-Based-Access-Control/7-add-and-manage-api-keys.md). + 3. 
Select **Test Connection** to ensure that Azure AD can connect to Harness.![](./static/provision-users-and-groups-using-azure-ad-scim-33.png) + If the connection fails, ensure that your Harness account has Admin permissions, and then try again. +6. In **Settings**, in the **Notification Email** box, enter the email address of a person or group that should receive the provisioning error notifications.![](./static/provision-users-and-groups-using-azure-ad-scim-34.png) +7. Select **Save**. +8. Under **Mappings**, enable **Provision Azure Active Directory Groups** and **Provision Azure Active Directory Users**.![](./static/provision-users-and-groups-using-azure-ad-scim-35.png) +9. Click **Provision Azure Active Directory Users**. +10. Under **Attribute Mappings**, review the user attributes that are synchronized from Azure AD to Harness. The attributes selected as *Matching* are used to match the user accounts in Harness for update operations. Select **Save** to commit any changes.![](./static/provision-users-and-groups-using-azure-ad-scim-36.png) +11. In **Provisioning**, click **Provision Azure Active Directory Groups**. When provisioning user groups through SCIM, Harness replaces any `.`, `-`, or space in your group name with an underscore (`_`) and uses the result as the group identifier. For example, if your group name is `example-group` in your SCIM provider, its identifier in Harness would be `example_group`. +12. Under **Attribute Mappings**, review the group attributes that are synchronized from Azure AD to Harness. The attributes selected as *Matching* properties are used to match the groups in Harness for update operations. Select **Save** to commit any changes.![](./static/provision-users-and-groups-using-azure-ad-scim-37.png) +13. To configure scoping filters, see [Attribute-based application provisioning with scoping filters](https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/define-conditional-rules-for-provisioning-user-accounts). +14. 
In **Provisioning**, under **Settings**, to enable the Azure AD provisioning service for Harness, toggle the **Provisioning Status** switch to **On**.![](./static/provision-users-and-groups-using-azure-ad-scim-38.png) +15. Under **Settings**, in the **Scope** drop-down list, select how you want to sync the users or groups that you're provisioning to Harness.![](./static/provision-users-and-groups-using-azure-ad-scim-39.png) +16. Click **Save**. + +This operation starts the initial sync of the users or groups you're provisioning. The initial sync takes longer than subsequent ones. Syncs occur approximately every 40 minutes, as long as the Azure AD provisioning service is running. To monitor progress, go to the **Synchronization Details** section. You can also follow links to a provisioning activity report, which describes all actions performed by the Azure AD provisioning service on Harness. + +For more information about how to read the Azure AD provisioning logs, see [Report on automatic user account provisioning](https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/check-status-user-account-provisioning). + +If an error prevents an individual user from being added, updated, or deleted in Harness, Azure retries the operation in the next sync cycle. To resolve the failure, administrators must check the [provisioning logs](https://learn.microsoft.com/en-us/azure/active-directory/reports-monitoring/concept-provisioning-logs?context=azure/active-directory/manage-apps/context/manage-apps-context) to determine the root cause and take the appropriate action. For more information, see [Errors and retries](https://learn.microsoft.com/en-us/azure/active-directory/app-provisioning/how-provisioning-works#errors-and-retries). + +### What If I Already Have App Integration for Harness FirstGen? 
+ +If you currently have a Harness FirstGen App Integration set up in your IdP and are now trying to set up one for Harness NextGen, make sure the user information is also included in the FirstGen App Integration before attempting to log into Harness NextGen through SSO. + +Harness authenticates users using either the FirstGen App Integration or the NextGen App Integration. If you have set up both, Harness continues to use your existing App Integration in FirstGen to authenticate users that attempt to log in using SSO. Consider the following example: + +1. An App Integration is already set up for FirstGen with two users as members: +`user1@example.com` and `user2@example.com`. +2. Now you set up a separate App Integration for Harness NextGen and add `user1@example.com` and `user_2@example.com` as the members. +3. You provision these users to Harness NextGen through SCIM. +4. `user1@example.com` and `user_2@example.com` try to log in to Harness NextGen through SSO. +5. The FirstGen App Integration is used for user authentication. +`user1@example.com` is a member of the FirstGen App Integration, so it is authenticated and logs in to Harness NextGen successfully. +`user_2@example.com` is not a member of the FirstGen App Integration, so authentication fails and the user cannot log in to Harness NextGen.![](./static/provision-users-and-groups-using-azure-ad-scim-40.png) + diff --git a/docs/platform/3_Authentication/9-switch-account.md b/docs/platform/3_Authentication/9-switch-account.md new file mode 100644 index 00000000000..68547b862f1 --- /dev/null +++ b/docs/platform/3_Authentication/9-switch-account.md @@ -0,0 +1,54 @@ +--- +title: Switch Account +description: This topic explains the authentication mechanism when the Account is switched for a user. +# sidebar_position: 2 +helpdocs_topic_id: 918lei069y +helpdocs_category_id: fe0577j8ie +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can be a member of more than one Harness Account. 
+ +This topic explains how to switch between multiple Accounts in Harness. + +### View and switch Accounts + +You can check whether you are a member of multiple Accounts by clicking your User Profile. + +![](./static/switch-account-51.png) +Click **Switch Account**. The **Switch Account** settings appear. + +![](./static/switch-account-52.png) +All the Accounts that you are a member of are listed here. + +You can set a specific Account as the default by clicking **Set as Default**. + +Switching Accounts might require re-authentication, depending on the configured authentication settings. The following table shows which settings need re-authentication when you switch Accounts: +
+| Authentication Setting of Current Account | Authentication Setting of Switched Account | Need to Re-authenticate |
+| --- | --- | --- |
+| Username and Password | Username and Password<br />Username and Password + OAuth<br />OAuth (All Providers)<br />OAuth (Google + GitHub)<br />OAuth (Google)<br />SAML (SSO Settings 1)<br />SAML (SSO Settings 2)<br />LDAP (Settings 1)<br />LDAP (Settings 2)<br />Whitelisted domains<br />2FA at Account scope + OAuth | Yes |
+| Username and Password + OAuth | Username and Password<br />Username and Password + OAuth<br />OAuth (All Providers)<br />OAuth (Google + GitHub)<br />OAuth (Google)<br />SAML (SSO Settings 1)<br />SAML (SSO Settings 2)<br />LDAP (Settings 1)<br />LDAP (Settings 2)<br />Whitelisted domains<br />2FA at Account scope + OAuth | Yes |
+| OAuth (All Providers) | Username and Password<br />Username and Password + OAuth<br />OAuth (Google + GitHub)<br />OAuth (Google)<br />SAML (SSO Settings 1)<br />SAML (SSO Settings 2)<br />LDAP (Settings 1)<br />LDAP (Settings 2)<br />Whitelisted domains<br />2FA at Account scope + OAuth | Yes |
+| OAuth (All Providers) | OAuth (All Providers) | No |
+| OAuth (Google + GitHub) | Username and Password<br />Username and Password + OAuth<br />OAuth (All Providers)<br />OAuth (Google)<br />SAML (SSO Settings 1)<br />SAML (SSO Settings 2)<br />LDAP (Settings 1)<br />LDAP (Settings 2)<br />Whitelisted domains<br />2FA at Account scope + OAuth | Yes |
+| OAuth (Google + GitHub) | OAuth (Google + GitHub) | No |
+| OAuth (Google) | Username and Password<br />Username and Password + OAuth<br />OAuth (All Providers)<br />OAuth (Google + GitHub)<br />SAML (SSO Settings 1)<br />SAML (SSO Settings 2)<br />LDAP (Settings 1)<br />LDAP (Settings 2)<br />Whitelisted domains<br />2FA at Account scope + OAuth | Yes |
+| OAuth (Google) | OAuth (Google) | No |
+| SAML (SSO Settings 1) | Username and Password<br />Username and Password + OAuth<br />OAuth (All Providers)<br />OAuth (Google + GitHub)<br />OAuth (Google)<br />SAML (SSO Settings 2)<br />LDAP (Settings 1)<br />LDAP (Settings 2)<br />Whitelisted domains<br />2FA at Account scope + OAuth | Yes |
+| SAML (SSO Settings 1) | SAML (SSO Settings 1) | No |
+| SAML (SSO Settings 2) | Username and Password<br />Username and Password + OAuth<br />OAuth (All Providers)<br />OAuth (Google + GitHub)<br />OAuth (Google)<br />SAML (SSO Settings 1)<br />LDAP (Settings 1)<br />LDAP (Settings 2)<br />Whitelisted domains<br />2FA at Account scope + OAuth | Yes |
+| SAML (SSO Settings 2) | SAML (SSO Settings 2) | No |
+| LDAP (Settings 1) | Username and Password<br />Username and Password + OAuth<br />OAuth (All Providers)<br />OAuth (Google + GitHub)<br />OAuth (Google)<br />SAML (SSO Settings 1)<br />SAML (SSO Settings 2)<br />LDAP (Settings 2)<br />Whitelisted domains<br />2FA at Account scope + OAuth | Yes |
+| LDAP (Settings 1) | LDAP (Settings 1) | No |
+| LDAP (Settings 2) | Username and Password<br />Username and Password + OAuth<br />OAuth (All Providers)<br />OAuth (Google + GitHub)<br />OAuth (Google)<br />SAML (SSO Settings 1)<br />SAML (SSO Settings 2)<br />LDAP (Settings 1)<br />Whitelisted domains<br />2FA at Account scope + OAuth | Yes |
+| LDAP (Settings 2) | LDAP (Settings 2) | No |
+| Whitelisted domains | OAuth<br />OAuth (All Providers)<br />OAuth (Google + GitHub)<br />OAuth (Google)<br />SAML (SSO Settings 1)<br />SAML (SSO Settings 2)<br />LDAP (Settings 1)<br />LDAP (Settings 2)<br />2FA at Account scope + OAuth | Yes |
+| Whitelisted domains | Whitelisted domains | No |
+| 2FA at Account scope + OAuth | Username and Password<br />Username and Password + OAuth<br />OAuth (All Providers)<br />OAuth (Google + GitHub)<br />OAuth (Google)<br />SAML (SSO Settings 1)<br />SAML (SSO Settings 2)<br />LDAP (Settings 1)<br />LDAP (Settings 2)<br />Whitelisted domains | Yes |
+| 2FA at Account scope + OAuth | 2FA at Account scope + OAuth | No |
+ diff --git a/docs/platform/3_Authentication/_category_.json b/docs/platform/3_Authentication/_category_.json new file mode 100644 index 00000000000..83b27c7fd50 --- /dev/null +++ b/docs/platform/3_Authentication/_category_.json @@ -0,0 +1 @@ +{"label": "Authentication", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Authentication"}, "customProps": {"position": 30, "helpdocs_category_id": "fe0577j8ie"}} \ No newline at end of file diff --git a/docs/platform/3_Authentication/static/authentication-overview-41.png b/docs/platform/3_Authentication/static/authentication-overview-41.png new file mode 100644 index 00000000000..cf4e50eb5a9 Binary files /dev/null and b/docs/platform/3_Authentication/static/authentication-overview-41.png differ diff --git a/docs/platform/3_Authentication/static/authentication-overview-42.png b/docs/platform/3_Authentication/static/authentication-overview-42.png new file mode 100644 index 00000000000..14073373bc9 Binary files /dev/null and b/docs/platform/3_Authentication/static/authentication-overview-42.png differ diff --git a/docs/platform/3_Authentication/static/authentication-overview-43.png b/docs/platform/3_Authentication/static/authentication-overview-43.png new file mode 100644 index 00000000000..38a4f25250f Binary files /dev/null and b/docs/platform/3_Authentication/static/authentication-overview-43.png differ diff --git a/docs/platform/3_Authentication/static/authentication-overview-44.png b/docs/platform/3_Authentication/static/authentication-overview-44.png new file mode 100644 index 00000000000..543a515e971 Binary files /dev/null and b/docs/platform/3_Authentication/static/authentication-overview-44.png differ diff --git a/docs/platform/3_Authentication/static/authentication-overview-45.png b/docs/platform/3_Authentication/static/authentication-overview-45.png new file mode 100644 index 00000000000..abd06dde647 Binary files /dev/null and 
b/docs/platform/3_Authentication/static/authentication-overview-45.png differ diff --git a/docs/platform/3_Authentication/static/authentication-overview-46.png b/docs/platform/3_Authentication/static/authentication-overview-46.png new file mode 100644 index 00000000000..22ea3a6ef8c Binary files /dev/null and b/docs/platform/3_Authentication/static/authentication-overview-46.png differ diff --git a/docs/platform/3_Authentication/static/authentication-overview-47.png b/docs/platform/3_Authentication/static/authentication-overview-47.png new file mode 100644 index 00000000000..d84b83a9a15 Binary files /dev/null and b/docs/platform/3_Authentication/static/authentication-overview-47.png differ diff --git a/docs/platform/3_Authentication/static/authentication-overview-48.png b/docs/platform/3_Authentication/static/authentication-overview-48.png new file mode 100644 index 00000000000..5d86b1c83ee Binary files /dev/null and b/docs/platform/3_Authentication/static/authentication-overview-48.png differ diff --git a/docs/platform/3_Authentication/static/authentication-overview-49.png b/docs/platform/3_Authentication/static/authentication-overview-49.png new file mode 100644 index 00000000000..76a343e93e7 Binary files /dev/null and b/docs/platform/3_Authentication/static/authentication-overview-49.png differ diff --git a/docs/platform/3_Authentication/static/authentication-overview-50.png b/docs/platform/3_Authentication/static/authentication-overview-50.png new file mode 100644 index 00000000000..96645069fb5 Binary files /dev/null and b/docs/platform/3_Authentication/static/authentication-overview-50.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-29.png b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-29.png new file mode 100644 index 00000000000..8c8fec50206 Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-29.png 
differ diff --git a/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-30.png b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-30.png new file mode 100644 index 00000000000..6694e34fe30 Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-30.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-31.png b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-31.png new file mode 100644 index 00000000000..1cf64259d5b Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-31.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-32.png b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-32.png new file mode 100644 index 00000000000..44721f2fca3 Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-32.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-33.png b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-33.png new file mode 100644 index 00000000000..8116c995e80 Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-33.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-34.png b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-34.png new file mode 100644 index 00000000000..8caf14d49f9 Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-34.png differ diff --git 
a/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-35.png b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-35.png new file mode 100644 index 00000000000..bbedcbf87b9 Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-35.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-36.png b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-36.png new file mode 100644 index 00000000000..d3d505361b9 Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-36.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-37.png b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-37.png new file mode 100644 index 00000000000..0089cd419ae Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-37.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-38.png b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-38.png new file mode 100644 index 00000000000..3140c2caa87 Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-38.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-39.png b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-39.png new file mode 100644 index 00000000000..7b619886312 Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-39.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-40.png 
b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-40.png new file mode 100644 index 00000000000..37010d53df1 Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-and-groups-using-azure-ad-scim-40.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-and-groups-with-one-login-scim-128.png b/docs/platform/3_Authentication/static/provision-users-and-groups-with-one-login-scim-128.png new file mode 100644 index 00000000000..44721f2fca3 Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-and-groups-with-one-login-scim-128.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-and-groups-with-one-login-scim-129.png b/docs/platform/3_Authentication/static/provision-users-and-groups-with-one-login-scim-129.png new file mode 100644 index 00000000000..56ce2475241 Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-and-groups-with-one-login-scim-129.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-and-groups-with-one-login-scim-130.png b/docs/platform/3_Authentication/static/provision-users-and-groups-with-one-login-scim-130.png new file mode 100644 index 00000000000..81a01aadf8a Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-and-groups-with-one-login-scim-130.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-and-groups-with-one-login-scim-131.png b/docs/platform/3_Authentication/static/provision-users-and-groups-with-one-login-scim-131.png new file mode 100644 index 00000000000..81a01aadf8a Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-and-groups-with-one-login-scim-131.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-and-groups-with-one-login-scim-132.png 
b/docs/platform/3_Authentication/static/provision-users-and-groups-with-one-login-scim-132.png new file mode 100644 index 00000000000..7585f70d6ca Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-and-groups-with-one-login-scim-132.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-and-groups-with-one-login-scim-133.png b/docs/platform/3_Authentication/static/provision-users-and-groups-with-one-login-scim-133.png new file mode 100644 index 00000000000..37010d53df1 Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-and-groups-with-one-login-scim-133.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-with-okta-scim-05.png b/docs/platform/3_Authentication/static/provision-users-with-okta-scim-05.png new file mode 100644 index 00000000000..96c1cdd059f Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-with-okta-scim-05.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-with-okta-scim-06.png b/docs/platform/3_Authentication/static/provision-users-with-okta-scim-06.png new file mode 100644 index 00000000000..2d82817919d Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-with-okta-scim-06.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-with-okta-scim-07.png b/docs/platform/3_Authentication/static/provision-users-with-okta-scim-07.png new file mode 100644 index 00000000000..2d82817919d Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-with-okta-scim-07.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-with-okta-scim-08.png b/docs/platform/3_Authentication/static/provision-users-with-okta-scim-08.png new file mode 100644 index 00000000000..68113d1877e Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-with-okta-scim-08.png differ diff --git 
a/docs/platform/3_Authentication/static/provision-users-with-okta-scim-09.png b/docs/platform/3_Authentication/static/provision-users-with-okta-scim-09.png new file mode 100644 index 00000000000..13f5bfa0c1f Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-with-okta-scim-09.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-with-okta-scim-10.png b/docs/platform/3_Authentication/static/provision-users-with-okta-scim-10.png new file mode 100644 index 00000000000..c9b4df51bff Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-with-okta-scim-10.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-with-okta-scim-11.png b/docs/platform/3_Authentication/static/provision-users-with-okta-scim-11.png new file mode 100644 index 00000000000..786b87feacf Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-with-okta-scim-11.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-with-okta-scim-12.png b/docs/platform/3_Authentication/static/provision-users-with-okta-scim-12.png new file mode 100644 index 00000000000..bed969e647b Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-with-okta-scim-12.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-with-okta-scim-13.png b/docs/platform/3_Authentication/static/provision-users-with-okta-scim-13.png new file mode 100644 index 00000000000..9ac7998e2fa Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-with-okta-scim-13.png differ diff --git a/docs/platform/3_Authentication/static/provision-users-with-okta-scim-14.png b/docs/platform/3_Authentication/static/provision-users-with-okta-scim-14.png new file mode 100644 index 00000000000..cd15020b297 Binary files /dev/null and b/docs/platform/3_Authentication/static/provision-users-with-okta-scim-14.png differ diff --git 
b/docs/platform/3_Authentication/static/two-factor-authentication-04.png differ
diff --git a/docs/platform/4_Role-Based-Access-Control/1-rbac-in-harness.md b/docs/platform/4_Role-Based-Access-Control/1-rbac-in-harness.md
new file mode 100644
index 00000000000..0d3875737b0
--- /dev/null
+++ b/docs/platform/4_Role-Based-Access-Control/1-rbac-in-harness.md
@@ -0,0 +1,230 @@
+---
+title: Harness Role-Based Access Control Overview
+description: This topic explains the concept of Harness Role-Based Access Control.
+# sidebar_position: 2
+helpdocs_topic_id: vz5cq0nfg2
+helpdocs_category_id: w4rzhnf27d
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Access management for resources is a critical function for your Accounts, Organizations, and Projects. Harness Role-Based Access Control (RBAC) helps you manage who has access to your Harness resources, what they can do with those resources, and in what scope they have access.
+
+### What is Harness Role-Based Access Control?
+
+Harness RBAC is an authorization system that provides fine-grained access management of Harness resources. Permissions to resources are determined by the roles assigned to Users, User Groups, and Service Accounts.
+
+For example, certain users may have permission to execute pipelines, whereas other users may have permission to view the pipelines but not execute them.
+
+### Visual Summary
+
+Here is a quick overview of Harness RBAC:
+
+* The Account Administrator invites Users to the Account.
+* The Account Administrator creates User Groups.
+* The Account Administrator creates Service Accounts.
+* Role Assignments happen on individual Users, User Groups, or Service Accounts.
+* Each User can be a member of multiple User Groups and hence can have multiple role assignments.
+* Each User Group and Service Account can have multiple role assignments.
+* You can assign roles at any [scope](#rbac-scope).
+
+![](./static/rbac-in-harness-00.png)
+
+### Harness RBAC Components
+
+* **Users:** Individual users within the Harness system. One User can belong to many User Groups.
+For more information on creating a new User, see [Add and Manage Users](../4_Role-Based-Access-Control/3-add-users.md).
+* **User Groups:** User Groups contain multiple Harness Users. Each User Group has assigned roles. You can create User Groups at the Account/Org/Project scope.
+For more information on creating a new User Group, see [Add and Manage User Groups](../4_Role-Based-Access-Control/4-add-user-groups.md).
+* **Service Accounts:** A Service Account is a set of [API Keys](../4_Role-Based-Access-Control/7-add-and-manage-api-keys.md) with a set of permissions assigned to them via role assignment. API Keys are used for authenticating and authorizing remote services attempting to perform operations in Harness via its APIs. API Keys that are part of a Service Account inherit the permissions assigned to that Service Account (equivalent to those of users).
+For more information on creating a new Service Account, see [Add and Manage Service Accounts](../4_Role-Based-Access-Control/6-add-and-manage-service-account.md).
+* **Resource Groups:** A [Resource Group](#resource-group) is a set of Harness resources that a User or User Group can access. You can create Resource Groups at the Account/Org/Project scope.
+For more information on creating a new Resource Group, see [Add and Manage Resource Groups](../4_Role-Based-Access-Control/8-add-resource-groups.md).
+* **Roles:** A [Role](#role) is a group of permissions you assign to a User Group. You can create Roles at the Account/Org/Project scope.
+For more information on creating a new Role, see [Add and Manage Roles](../4_Role-Based-Access-Control/9-add-manage-roles.md).
+* **Principal:** A Principal is a **User**, **User Group**, or **Service Account** to which you provide access.
[Role assignments](#role-assignment) are done on any of these principals, also known as **Subjects**.![](./static/rbac-in-harness-01.png)

### What Can You Do with RBAC?

Here are a few examples of what RBAC can be used for:

* Allow Users, User Groups, or Service Accounts to manage and access resources through the Account/Org/Project Admin role.
* Allow Users, User Groups, or Service Accounts to view resources through the Account/Org/Project Viewer role.
* Allow Users, User Groups, or Service Accounts to manage and access specific resources through Custom Roles.

### How Does RBAC Work?

With RBAC, you control access by assigning permissions on groups of resources to Users, User Groups, and Service Accounts:

* Permissions that you want to assign to a User, User Group, or Service Account are grouped together in a Role.
* Resources that you want to control access to are grouped together in a Resource Group.
* An account administrator assigns a Role and a Resource Group to a Principal (a User, User Group, or Service Account). This assignment is called a [Role Assignment](#role-assignment).
* The Role Assignment grants the Principal the permissions from the Role on the set of resources in the Resource Group.

### RBAC Scope

Harness Accounts let you group Organizations and Projects that share the same goal. Each of these has its own scope of access.

In Harness, you can specify scopes at three levels:

* Account
* Organization
* Project

Scopes are structured in a parent-child relationship, and you can assign roles at any of these levels.

![](./static/rbac-in-harness-02.png)
The following table shows what it means to add users and resources at different scopes in the hierarchy:

| **Scope** | **When to add users?** | **When to add resources?** |
| --- | --- | --- |
| **Account** | To manage administrative functions or have total access and authority over the whole hierarchy, add users to the Account scope. | Add resources to the Account scope to allow sharing across the entire hierarchy. |
| **Organization** | To have visibility and control over all of the Projects within this Org, add users to the Org scope. | Add resources to the Org scope to allow sharing across Projects within this Org while isolating them from other Organizations. |
| **Project** | To manage or contribute to this Project, add users to the Project scope. | Add resources to the Project scope to give the Project teams total control over them. |

To learn more about Organizations and Projects, see [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md).

### Resource Group

A Resource Group is a collection of resources that are all managed by the same set of users and share the same access control policies.

Resource Groups can be of two types:

* **All Resources** – the collection of all resources of a given type.
* **Named Resources** – a specific set of individual resources.

![](./static/rbac-in-harness-03.png)
Harness includes the following default Resource Groups at each scope:

| **Scope** | **Resource Group** | **Description** |
| --- | --- | --- |
| **Account** | **All Resources Including Child Scopes** | Includes all resources within the Account's scope, as well as those within the scope of the Orgs and Projects within the Account. |
| **Account** | **All Account Level Resources** | Includes all resources within the Account's scope. Excludes resources that are within the scope of an Org or Project. |
| **Org** | **All Resources Including Child Scopes** | Includes all resources within the Org's scope, as well as those within the scope of all Projects within the Org. |
| **Org** | **All Organization Level Resources** | Includes all resources within the Org's scope. Excludes resources that are within the scope of a Project. |
| **Project** | **All Project Level Resources** | Includes all resources within the scope of a Project. |

You can also create custom Resource Groups within any scope.

For more information, see [Add and Manage Resource Groups](../4_Role-Based-Access-Control/8-add-resource-groups.md).

### Role

A Role is a set of permissions that allow or deny specific operations on a specific set of resources. A Role defines access to resources within a single scope: Project, Org, or Account.

Harness provides the following default roles at the Account, Org, and Project scope:

| **Scope** | **Role** |
| --- | --- |
| **Account** | Account Admin |
| **Account** | Account Viewer |
| **Account** | Feature Flag Manage Role |
| **Org** | Organization Admin |
| **Org** | Organization Viewer |
| **Org** | Feature Flag Manage Role |
| **Project** | Project Admin |
| **Project** | Project Viewer |
| **Project** | Pipeline Executor |
| **Project** | Feature Flag Manage Role |

For more information, see [Add and Manage Roles](../4_Role-Based-Access-Control/9-add-manage-roles.md).

### Role Assignment

A role assignment consists of the following elements:

* Principal
* Role
* Resource Group
* Scope

Following are a few key points about role assignment in Harness:

* A Role Assignment assigns a Role and a Resource Group to a Principal.
* The Principal gets access to resources through Role Assignments.
* The Principal in a role assignment can be an individual User, a User Group, or a Service Account.
* Each Principal can have multiple role assignments.
* Depending on where you wish to set up access control, you can assign roles at the Account, Org, or Project scope.
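Conceptually, a role assignment is just a tuple of these four elements. The following sketch models that with hypothetical names; it is an illustration only, not the Harness API or data format:

```python
from dataclasses import dataclass

# Illustrative model of a Harness role assignment; class and field
# names here are hypothetical, not part of any Harness API.

@dataclass(frozen=True)
class Principal:
    kind: str  # "User", "UserGroup", or "ServiceAccount"
    name: str

@dataclass(frozen=True)
class RoleAssignment:
    principal: Principal  # who gets access
    role: str             # which permissions, e.g. "Project Viewer"
    resource_group: str   # on which set of resources
    scope: str            # "account", "org", or "project"

# A single principal can hold multiple role assignments.
qa_group = Principal("UserGroup", "QA Engineers")
assignments = [
    RoleAssignment(qa_group, "Project Viewer", "All Project Level Resources", "project"),
    RoleAssignment(qa_group, "Pipeline Executor", "All Project Level Resources", "project"),
]
```

Here the `QA Engineers` group holds two assignments at the Project scope; its members get the union of what both grant.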
![](./static/rbac-in-harness-04.png)
The following table explains the different role assignments with the default roles and resource groups. In each case, a User Group with the role assignment gets the listed permissions:

| **Role Assignment (Default Role + Default Resource Group)** | **Description** |
| --- | --- |
| **Account Admin + All Resources Including Child Scopes** | All permissions on all resources in the Account scope, as well as in the Organization and Project scopes within the entire Account. |
| **Account Admin + All Account Level Resources** | All permissions on all resources in the Account scope only. |
| **Account Viewer + All Resources Including Child Scopes** | View permissions on all resources in the Account, Organization, and Project scopes within the entire Account. |
| **Account Viewer + All Account Level Resources** | View permissions on all resources in the Account scope only. |
| **Feature Flag Manage Role + All Resources Including Child Scopes** (Account) | Create/Edit permissions on Feature Flags and Target Management in the Account, Organization, and Project scopes within the entire Account. |
| **Feature Flag Manage Role + All Account Level Resources** | Create/Edit permissions on Feature Flags and Target Management in the Account scope only. |
| **Organization Admin + All Resources Including Child Scopes** | All permissions on all resources in the Organization as well as the Projects within the Organization. |
| **Organization Admin + All Organization Level Resources** | All permissions on all resources in the Organization scope only. |
| **Organization Viewer + All Resources Including Child Scopes** | View permissions on all resources in the Organization as well as the Projects within the Organization. |
| **Organization Viewer + All Organization Level Resources** | View permissions on all resources in the Organization scope only. |
| **Feature Flag Manage Role + All Resources Including Child Scopes** (Organization) | Create/Edit permissions on Feature Flags and Target Management in the Organization and the Projects within it. |
| **Feature Flag Manage Role + All Organization Level Resources** | Create/Edit permissions for Feature Flags and Target Management in the Organization scope only. |
| **Project Admin + All Project Level Resources** | All permissions on all resources within the Project scope. |
| **Project Viewer + All Project Level Resources** | View permissions on all resources in the Project. |
| **Feature Flag Manage Role + All Project Level Resources** | Create/Edit permissions for Feature Flags and Target Management within the Project scope. |
| **Pipeline Executor + All Project Level Resources** | View permission on Resource Groups, Projects, Users, User Groups, and Roles; View and Access permissions on Secrets, Connectors, Environments, and Services; View and Execute permissions on Pipelines. |

### Permissions

When a Harness User is a member of multiple User Groups, the sum of all the role assignments determines the effective permissions for the user.

For example, consider a user with the following role assignments:

* **Account Admin** role for **All Resources Including Child Scopes**.
* **Organization Viewer** role for **All Resources Including Child Scopes**.

The sum of these role assignments is effectively the **Account Admin** role for **All Resources Including Child Scopes**. In this case, the **Organization Viewer** role assignment has no additional effect.

By default, users have **View** permissions for all resources at all scopes (Account/Org/Project).
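The "sum of role assignments" rule above is a set union. A minimal sketch in Python; the role names come from this topic, but the permission strings are made up purely for illustration:

```python
# Effective permissions are the union of the permissions granted by every
# role assignment a user holds. Permission identifiers are illustrative,
# not Harness's internal permission names.

ROLE_PERMISSIONS = {
    "Account Admin": {"view", "create_edit", "delete", "execute"},
    "Organization Viewer": {"view"},
}

def effective_permissions(assigned_roles):
    """Union of permissions across all of a user's role assignments."""
    perms = set()
    for role in assigned_roles:
        perms |= ROLE_PERMISSIONS[role]
    return perms

# Account Admin already grants everything Organization Viewer grants,
# so adding the viewer assignment changes nothing for this user.
assert effective_permissions(["Account Admin", "Organization Viewer"]) == \
       effective_permissions(["Account Admin"])
```

This is why a broad role "wins" over a narrower one: the union of a superset with its subset is just the superset.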
### Blog Post

The following blog post walks you through user and role management in Harness:

[User and Role Management in the Harness Software Delivery Platform](https://harness.io/blog/continuous-delivery/user-role-management/)

### Next steps

* [Get Started with RBAC](https://docs.harness.io/article/e1ww0jmacp-getting-started-with-rbac)
* [Add and Manage Users](../4_Role-Based-Access-Control/3-add-users.md)
* [Add and Manage User Groups](../4_Role-Based-Access-Control/4-add-user-groups.md)
* [Add and Manage Service Accounts](../4_Role-Based-Access-Control/6-add-and-manage-service-account.md)
* [Add and Manage Resource Groups](../4_Role-Based-Access-Control/8-add-resource-groups.md)
* [Add and Manage Roles](../4_Role-Based-Access-Control/9-add-manage-roles.md)
* [Attribute-Based Access Control](../4_Role-Based-Access-Control/2-attribute-based-access-control.md)
* [Permissions Reference](../4_Role-Based-Access-Control/ref-access-management/permissions-reference.md)

diff --git a/docs/platform/4_Role-Based-Access-Control/10-set-up-rbac-pipelines.md b/docs/platform/4_Role-Based-Access-Control/10-set-up-rbac-pipelines.md
new file mode 100644
index 00000000000..98487c7b386
--- /dev/null
+++ b/docs/platform/4_Role-Based-Access-Control/10-set-up-rbac-pipelines.md
@@ -0,0 +1,140 @@
---
title: Harness Role-Based Access Control Quickstart
description: This document explains how to set up RBAC for Pipelines.
# sidebar_position: 2
helpdocs_topic_id: lrz2e4t1ko
helpdocs_category_id: w4rzhnf27d
helpdocs_is_private: false
helpdocs_is_published: true
---

Harness Role-Based Access Control (RBAC) helps you manage who has access to your Harness resources, what they can do with those resources, and in what scope they have access.

[Role Assignments](./1-rbac-in-harness.md#role-assignment) to Users, User Groups, and Service Accounts at a specific scope determine their permissions.
This quickstart shows how to configure Role-Based Access Control (RBAC) for Pipeline creation and execution, and for Connector administration.

### Objectives

You will learn how to:

* Create custom Roles.
* Create custom Resource Groups.
* Set up role-based access control for a Pipeline Owner.
* Set up role-based access control for a Connector Admin.

### Before you begin

* [Learn Harness' Key Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts)
* [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md)
* For information on creating a Pipeline and adding a Stage, see [Add a Stage](../8_Pipelines/add-a-stage.md#step-1-create-a-pipeline).
* Make sure you have **Admin** rights for the Account/Org/Project where you want to configure access management.

### Prerequisites

* You must have **View**, **Manage**, and **Invite** permissions for **Users**.
* You must have **View** and **Manage** permissions for **User Groups**.
* You must have **View**, **Create/Edit**, and **Delete** permissions for **Resource Groups**.
* You must have **View**, **Create/Edit**, and **Delete** permissions for **Roles**.
* You must have created your Organizations and Projects. See [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md).

### RBAC Components

To manage access control in Harness, you must have the following components in place:

* **Principal**: a [User](./3-add-users.md), [User Group](./4-add-user-groups.md), or [Service Account](./6-add-and-manage-service-account.md).
* **Resource Group**: a list of resources, within a specific scope, on which a Principal can perform actions. See [Add and Manage Resource Groups](./8-add-resource-groups.md).
* **Role**: a set of permissions that is assigned to a Principal for specific Resource Groups. See [Add and Manage Roles](./9-add-manage-roles.md).
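Putting the three components together, an access check asks: does any role assignment for this principal grant the required permission on a resource group that contains the resource? A minimal sketch, with all names purely illustrative (not the Harness implementation):

```python
# Illustrative access check combining Principal, Role, and Resource Group.
# Hypothetical structures and names; not the Harness implementation.

def is_authorized(assignments, principal, permission, resource):
    """True if any role assignment grants `permission` on `resource`."""
    return any(
        a["principal"] == principal
        and permission in a["role_permissions"]   # granted by the Role
        and resource in a["resource_group"]       # covered by the Resource Group
        for a in assignments
    )

assignments = [{
    "principal": "Pipeline Owners",  # a User Group
    "role_permissions": {"pipeline_view", "pipeline_execute"},
    "resource_group": {"build-pipeline", "deploy-pipeline"},
}]

assert is_authorized(assignments, "Pipeline Owners", "pipeline_execute", "deploy-pipeline")
assert not is_authorized(assignments, "Pipeline Owners", "pipeline_delete", "deploy-pipeline")
```

Both conditions must hold at once: the right permission from the Role, on a resource inside the Resource Group.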
+ +Harness provides a set of built-in Resource Groups and Roles for you to easily manage access control. For more information, see [Role Assignments](./1-rbac-in-harness.md#role-assignment). + +However, you can always create your own custom Resource Groups and Roles to manage access control as per your needs. + +For example, you can give access to **Create** Pipelines within all the Projects under Org O1, but not **Delete** or **Execute** them. + +Let us look at a few examples to create a few custom Resource Groups and Roles and set up RBAC accordingly. + +### Set Up RBAC for Pipeline Owner + +Let us set up access control for a custom Role called Pipeline Owner. + +Following are the components required for this RBAC setup: + +* **Principal**: a User Group named `Pipeline Owners`. +* **Resource Group**: a custom Resource Group named `All Pipeline Resources`. +* **Role**: a custom Role named `Pipeline Admin`. + +The following table shows the Role Assignment for a Pipeline Owner: + + + +| | | | | | +| --- | --- | --- | --- | --- | +| **Custom Role Name** | **Custom Resource Group Name** | **Resource Scope** | **Resources** | **Permissions** | +| **Pipeline Admin** | **All Pipeline Resources** | **All (including all Organizations and Projects)** |
Pipelines<br/>Secrets<br/>Connectors<br/>Delegates<br/>Environments<br/>Templates<br/>Variables | View, Create/Edit, Delete, Execute Pipelines<br/>View, Create/Edit, Access Secrets<br/>View, Create/Edit, Delete, Access Connectors<br/>View, Create/Edit Delegates<br/>View, Create/Edit, Access Environments<br/>View, Create/Edit, Access Templates<br/>View, Create/Edit Variables |
  • | + +#### Step 1: Create a User Group + +1. In your Harness Account, click **Account Settings**. +2. Click **Access Control**. +3. In **User Groups,** click **New User** **Group**. The New User Group settings appear. +4. Enter a **Name** for your **User Group**. In this case, enter Pipeline Owners. +5. Enter **Description** and [**Tags**](../20_References/tags-reference.md) for your **User Group**. +6. Select Users under **Add Users**. +7. Click **Save.** + +Your User Group is now listed under User Groups. + +#### Step 2: Create a Custom Resource Group + +1. In your Harness Account, click **Account Settings**. +2. Click **Access Control**. +3. In **Resource Groups**, click **New Resource** **Group**. The New Resource Group settings appear. +4. Enter a **Name** for your **Resource Group**. In this case, enter **All Pipeline Resources**. +5. Enter **Description** and **Tags** for your **Resource Group**. +6. Click **Save**. +7. In **Resource Scope**, select **All (including all Organizations and Projects)**. This would mean the Principal can access the specified resources within the Account as well as those within the Organizations and their Projects.![](./static/set-up-rbac-pipelines-41.png) +8. In Resources, select **Specified**.![](./static/set-up-rbac-pipelines-42.png) +9. Select the following resources: + 1. Environments + 2. Variables + 3. Templates + 4. Secrets + 5. Delegates + 6. Connectors + 7. Pipelines +10. Click **Save**. + +#### Step 3: Create a Custom Role + +1. In your Harness Account, click **Account Settings**. +2. Click **Access Control**. +3. In **Roles**, click **New Role**. The New Role settings appear. +4. Enter a **Name** for your **Role**. In this case, enter **Pipeline Admin.** +5. Enter optional **Description** and **Tags** for your **Role**. +6. Click **Save**. +7. Select the following permissions for the resources: + 1. View, Create/Edit, Delete, Execute Pipelines + 2. View, Create/Edit, Access Secrets + 3. 
View, Create/Edit, Delete, Access Connectors
	4. View, Create/Edit Delegates
	5. View, Create/Edit, Access Environments
	6. View, Create/Edit, Access Templates
	7. View, Create/Edit Variables

#### Step 4: Assign Role Permission to the User Group

Let us now complete the [Role Assignment](./1-rbac-in-harness.md#role-assignment) for the User Group to finish the RBAC setup for the Pipeline Owner.

1. In your Harness Account, click **Account Settings**.
2. Click **Access Control**.
3. In **User Groups**, locate the User Group you just created and click **Role**.![](./static/set-up-rbac-pipelines-43.png)
The **Add Role** settings appear.
4. In **Assign Role Bindings**, click **Add**.
5. In **Role**, select the custom Role that you created.
6. In **Resource Group**, select the custom Resource Group you just created.![](./static/set-up-rbac-pipelines-44.png)
7. Click **Apply**.

### Next steps

* [Permissions Reference](ref-access-management/permissions-reference.md)

diff --git a/docs/platform/4_Role-Based-Access-Control/2-attribute-based-access-control.md b/docs/platform/4_Role-Based-Access-Control/2-attribute-based-access-control.md
new file mode 100644
index 00000000000..8f6591a1b47
--- /dev/null
+++ b/docs/platform/4_Role-Based-Access-Control/2-attribute-based-access-control.md
@@ -0,0 +1,113 @@
---
title: Attribute-Based Access Control
description: This topic explains Attribute-Based Access Control.
# sidebar_position: 2
helpdocs_topic_id: uzzjd4fy67
helpdocs_category_id: w4rzhnf27d
helpdocs_is_private: false
helpdocs_is_published: true
---

Harness Attribute-Based Access Control (ABAC) lets you grant access based on attributes associated with your Harness Resources.

This topic shows you how to configure ABAC in Harness.
### Before you begin

* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts)
* [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md)
* Make sure you have Admin rights for the Account/Org/Project where you want to configure access management.

### Attribute-Based Access Control Overview

Harness [Role-Based Access Control](../4_Role-Based-Access-Control/1-rbac-in-harness.md) (RBAC) helps you manage who has access to your Harness resources, what they can do with those resources, and in what scope they have access.

RBAC is role-based: permissions to resources are determined by the roles assigned to Users, User Groups, and Service Accounts.

Harness ABAC builds on top of Harness RBAC by adding [role assignment](../4_Role-Based-Access-Control/1-rbac-in-harness.md#role-assignment) conditions based on attributes in the context of specific actions. It provides more fine-grained access management and can simplify the management of hundreds of role assignments.

With Harness ABAC, a role assignment can include an additional check on the type of your Harness Resources. The resource type filters down the resources in a Resource Group for access control.

For example, you can create a Resource Group containing all Production Environments and grant access to it through a Role Assignment.

### Why and when should I use Harness ABAC?

Harness ABAC helps you achieve the following:

* Provide more fine-grained access control.
* Reduce the number of Role Assignments.
* Use attributes that have specific business meaning.

Following are a few examples of scenarios where you might configure ABAC:

* Create/Edit, Delete, and Access permissions only for Non-Production Environments.
* Create/Edit, Delete, and Access permissions only for Secret Managers.
* View permission only for Code Repositories.

### Where can I configure ABAC?
+ +You can add attributes to configure ABAC for the following Harness resources: + +* Connectors: Type of Connectors +* Environments: Type of Environments + +The following table shows the attributes you can select for Harness ABAC: + + + +| | | +| --- | --- | +| **Resource** | **Attributes** | +| **Connectors** |
Cloud Providers<br/>Secret Managers<br/>Cloud Costs<br/>Artifact Repositories<br/>Code Repositories<br/>Monitoring and Logging Systems<br/>Ticketing Systems |
| **Environments** | Production<br/>Non-Production |

### Step 1: Add a new Resource Group

You can configure ABAC in the Account, Org, or Project scope. This topic shows you how to configure ABAC in the Project scope.

To configure ABAC for resources, perform the following steps:

1. In your Harness Account, go to **PROJECT SETUP** in your Project.
2. Click **Access Control** and click **Resource Groups**.
3. Click **New Resource Group**.
4. Enter a **Name** for your Resource Group.
5. Click **Save**.

### Step 2: Select Resources and add Attributes

1. In **Resources**, select **ENVIRONMENTS**.
2. In **SHARED RESOURCES**, select **Connectors**.![](./static/attribute-based-access-control-05.png)
3. You can filter and include resources in your Resource Group in the following ways:
	1. **All**: selects all resources of the given type within the chosen scope.
	2. **By Type**: selects a specific type of resource within the chosen scope.
	3. **Specified**: selects individual resources within the chosen scope.
4. Select **By Type** against Environments and click **Add**. The Add Environment Types settings appear.![](./static/attribute-based-access-control-06.png)
5. Select **Non-Production** and click **Add**.
6. Select **By Type** against Connectors and click **Add**. The Add Connector Type settings appear.![](./static/attribute-based-access-control-07.png)
7. Select **Secret Managers** and click **Add**.
8. Click **Save**.

### Step 3: Add a new Role

1. In your Harness Account, go to **PROJECT SETUP** in your Project.
2. Click **Access Control** and click **Roles**.
3. Click **New Role**. The New Role settings appear.
4. Enter a **Name** for your Role and click **Save**.
5. Select all permissions for Environments and Connectors.![](./static/attribute-based-access-control-08.png)
6. Click **Apply Changes**.
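The **By Type** selection is what makes this attribute-based: the Resource Group is defined by a resource attribute rather than by naming individual resources. A small sketch of that filtering, with illustrative resource names and type strings (not Harness's internal identifiers):

```python
# Illustrative "By Type" attribute filter: group membership is computed
# from a resource attribute instead of an explicit list of names.

connectors = [
    {"name": "aws-prod", "type": "CloudProvider"},
    {"name": "vault", "type": "SecretManager"},
]
environments = [
    {"name": "prod", "type": "Production"},
    {"name": "dev", "type": "NonProduction"},
]

def by_type(resources, wanted_types):
    """Names of resources whose 'type' attribute matches the selected types."""
    return [r["name"] for r in resources if r["type"] in wanted_types]

# Resource group matching Step 2 above: Non-Production Environments
# plus Secret Manager Connectors.
group = {
    "connectors": by_type(connectors, {"SecretManager"}),
    "environments": by_type(environments, {"NonProduction"}),
}
assert group == {"connectors": ["vault"], "environments": ["dev"]}
```

New resources with a matching attribute join the group automatically, which is why ABAC reduces the number of role assignments you have to maintain.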
### Step 4: Assign Role and Resource Group

Let us now complete the [Role Assignment](../4_Role-Based-Access-Control/1-rbac-in-harness.md#role-assignment) for the User Group to finish the ABAC setup.

1. In your Harness Account, go to **PROJECT SETUP** in your Project.
2. Click **Access Control** and click **User Groups**.
3. Locate the User Group to which you want to assign the Role and Resource Group you just created.
4. Click **Role**.![](./static/attribute-based-access-control-09.png)
5. In **Role**, select the Role that you created.
6. In **Resource Group**, select the Resource Group you just created.
7. Click **Apply**.![](./static/attribute-based-access-control-10.png)
The members of the User Group now have all permissions for Non-Production Environments and for Connectors of type Secret Manager.

diff --git a/docs/platform/4_Role-Based-Access-Control/3-add-users.md b/docs/platform/4_Role-Based-Access-Control/3-add-users.md
new file mode 100644
index 00000000000..69b9019807f
--- /dev/null
+++ b/docs/platform/4_Role-Based-Access-Control/3-add-users.md
@@ -0,0 +1,106 @@
---
title: Add and Manage Users
description: This document shows steps to create a new user.
# sidebar_position: 2
helpdocs_topic_id: hyoe7qcaz6
helpdocs_category_id: w4rzhnf27d
helpdocs_is_private: false
helpdocs_is_published: true
---

A Harness User is any individual registered with Harness with a unique email address. A User can be a part of multiple Accounts.

This topic explains the steps to create and manage Users within Harness.

### Before you begin

* Make sure you have **Manage** permissions for Users.

### Step: Add New User

You must first invite Users to your Account/Org/Project before you can add them to User Groups and assign Role Bindings. For more information on User Groups and Role Bindings, see [Add and Manage User Groups](./4-add-user-groups.md) and [Role Assignment](./1-rbac-in-harness.md#role-assignment).
1. Click **Account Settings**, and then click **Access Control**.
2. In **Users**, click **New User**. The New User settings appear.
![](./static/add-users-11.png)
3. Enter the email address(es) that the User will use to log into the Harness platform.
4. If you have Roles and Resource Groups defined, select the Roles and Resource Groups for this user. To add Roles and Resource Groups, see [Add Roles](./9-add-manage-roles.md) and [Add Resource Groups](./8-add-resource-groups.md).
5. Click **Save**. The user receives a verification email at the address(es) you provided. When the user logs into Harness, the user creates a password, the email address is verified, and the user name is updated.

You can add up to 50,000 users in Harness Non-Community Edition.

#### User invites

For any new user that you add to your Harness Account, Org, or Project, Harness checks the following and sends invites accordingly:

1. If your authentication mechanism is set to **Login via a Harness Account or Public OAuth Providers**, the invited user gets an email invitation. The user is added to the **Pending Users** list until the user accepts the invitation.
2. If your authentication mechanism is set to SAML, LDAP, or OAuth, and the feature flag `PL_NO_EMAIL_FOR_SAML_ACCOUNT_INVITES` is enabled, Harness adds the invited user to the Active Users list. Harness does not send any emails to the user when this feature flag is enabled.
3. If your authentication mechanism is set to SAML, LDAP, or OAuth, and the feature flag `AUTO_ACCEPT_SAML_ACCOUNT_INVITES` is enabled, Harness sends a notification email to the user and adds the user to the Active Users list.

If you enable both feature flags, `PL_NO_EMAIL_FOR_SAML_ACCOUNT_INVITES` takes precedence over `AUTO_ACCEPT_SAML_ACCOUNT_INVITES`, and Harness does not send any emails to users.

### Step: Delete User

Click **Users** under **Access Control**.
+ +Click **Delete** on the top right corner to delete a specific user. + +![](./static/add-users-12.png) +### Step: Manage User + +To edit Role Bindings for a User, do the following: + + In **Access Control**, click **Users.** + +Click on the user you want to edit. The user details appear. + +![](./static/add-users-13.png) +Click **Delete** on the right to remove a User Group. + +Click **Role** to change Role Bindings for this User. + +#### Group Memberships + +You can view the group membership of a specific user on the user details page by clicking **Group Memberships**. + +![](./static/add-users-14.png) +Harness lets you select one of the following scopes to view the user's group membership: + +* **All**: lists the user's group membership across all the scopes. +* **Account only**: lists the user's group membership only in the Account scope. +* **Organization** **only**: lists the user's group membership in the scope of the selected Organization. +* **Organization and Projects**: lists the user's group membership in the scope of the selected Organization and Project. + +To add the user to a new user group, click **Add to a new User Group**. + +Click **Remove** to remove the user as a member from a specific user group. + +![](./static/add-users-15.png) +#### Role Bindings + +You can view the role bindings for a specific user on the user details page by clicking **Role Bindings**. + +Here, you can view a given user's role bindings across all scopes and user groups. + +![](./static/add-users-16.png) +Harness lets you select one of the following scopes to view the user's role bindings: + +* **All**: lists the user's role bindings across all the scopes. +* **Account only**: lists the user's role bindings only in the Account scope. +* **Organization** **only**: lists the user's role bindings in the scope of the selected Organization. +* **Organization and Projects**: lists the user's role bindings in the scope of the selected Organization and Project. 
+ +To add a new role binding for a user, click **Role**. + +### See also + +* [Add and Manage User Groups](./4-add-user-groups.md) +* [Add and Manage Roles](./9-add-manage-roles.md) +* [Add and Manage Resource Groups](./8-add-resource-groups.md) +* [Permissions Reference](./ref-access-management/permissions-reference.md) + diff --git a/docs/platform/4_Role-Based-Access-Control/4-add-user-groups.md b/docs/platform/4_Role-Based-Access-Control/4-add-user-groups.md new file mode 100644 index 00000000000..c2e00f9455c --- /dev/null +++ b/docs/platform/4_Role-Based-Access-Control/4-add-user-groups.md @@ -0,0 +1,119 @@ +--- +title: Add and Manage User Groups +description: This document shows steps to create new user groups and assign roles to them. +# sidebar_position: 2 +helpdocs_topic_id: dfwuvmy33m +helpdocs_category_id: w4rzhnf27d +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness User Groups help you manage user access. Each member of a User Group inherits the [role bindings](./1-rbac-in-harness.md#role-assignment) assigned to that group. + +This topic explains the steps to create and manage User Groups within Harness. + +### Before you begin + +* Make sure you have **Manage** Permissions for User Groups. + +### Step: Add New User Group + +This topic assumes you have a Harness Project set up. If not, see [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md). + +You can add a User Group in Project/Organization/Account scope. To do this, go to Project SETUP, Organization, or Account Resources. + +In your **Project/Org/Account**, and click **Project/Org/Account SETUP**. + +Click **Access Control**. + +In **User Groups** click **New User Group**. The New User Group settings appear. + +![](./static/add-user-groups-49.png) +Enter a **Name** for your **User Group**. + +Enter **Description** and [**Tags**](../20_References/tags-reference.md) for your **User Group**. + +Select Users under **Add Users**. 
+ +Click **Save**. + +Your User Group is now listed under User Groups. You can assign Roles to your User Group by clicking on **Role**. + +### Step: Delete User Group + +Click **User Groups** under **Access** **Control**. + +Click **Delete** on the top right corner for the User Group you want to delete. + +![](./static/add-user-groups-50.png) +### Step: Manage User Group + +Click the **User Groups** in **Access Control**. + +Click the User Group you want to edit. The User Group details appear. + +![](./static/add-user-groups-51.png) +Click **Members** to invite Users to this Group. + +Click **Remove** to delete User from this Group. + +Click **Role** to change Role Bindings for this User Group. + +### Step: Assign Roles + +Harness lets you inherit User Groups created at a higher scope by using **Assign Roles**. For example, you can inherit and use User Group(s) created at the Account scope in the Org or Project scope. + +![](./static/add-user-groups-52.png) +To inherit the User Group at the child scope, you must have view User Group permissions at the parent scope and manage User Group permissions at the child scope.​​You can modify the inherited User Group's role bindings in the child scope, but not the member or notification settings. Changes to the User Group in the parent scope will be reflected in the child scope as well.​ + +You can inherit a User Group from any parent scope to a child scope. + +This topic shows you how to inherit a User Group from the Account scope to the Project scope. + +In Harness, go to your Project and click **Access Control** in **Project Setup**. + +Click **User Groups**. + +Click **Assign Roles**. The Assign Roles settings appear. + +![](./static/add-user-groups-53.png) +In User Group(s), click **Select User Group(s)**. All the User Group(s) that you have permission to view across the scopes are listed. 
+ +![](./static/add-user-groups-54.png) +Select the User Group(s) that you want to inherit from any of the parent scopes to your Project. Click **Apply Selected**. + +Click **Add** to assign Roles and Resource Groups to this User Group in your Project scope. + +Select **Roles** and **Resource Groups** and click **Apply**. + +The User Group is now listed in User Groups. + +You can get the list of child scopes where the User Group is inherited by clicking on the User Group at the parent scope. + +![](./static/add-user-groups-55.png) +### Option: Notification Preferences + +You can set notification channels for your User Group members using **Notification Preferences**. + +When the User Group is assigned an Alert Notification Rule, the channels you set here will be used to notify them. + +To add notification preferences to Harness User Groups, perform the following steps: + +1. In your **Account**/**Organization**/**Project** click Access Control. +2. Click **User Groups**. +3. Select the User Group to which you want to add notification preferences. +4. In **Notification Preferences**, click **Channel**. +5. Configure one or more notification settings from the following options and click **Save:** + * **Email/Alias** – Enter any group email addresses where Harness can send notifications. For more details, see [Send Notifications Using Email](../5_Notifications/add-smtp-configuration.md#option-send-notifications-for-a-user-group-using-email). + * **Slack Webhook URL** – Enter the Slack channel Incoming Webhook URL. For more details, see [Send Notifications Using Slack](../5_Notifications/send-notifications-using-slack.md). + * **PagerDuty Integration Key** – Enter the key for a PagerDuty Account/Service to which Harness can send notifications. You can copy/paste this key from **Integrations** of your service in **Services** > **Service Directory.**![](./static/add-user-groups-56.png) + * **Microsoft Teams Webhook URL** - Enter the Microsoft Teams Incoming Webhook URL. 
+ +### See also + +* [Add and Manage Users](./3-add-users.md) +* [Harness Default User Groups](./5-harness-default-user-groups.md) +* [Add and Manage Roles](./9-add-manage-roles.md) +* [Add and Manage Resource Groups](./8-add-resource-groups.md) +* [Permissions Reference](./ref-access-management/permissions-reference.md) + diff --git a/docs/platform/4_Role-Based-Access-Control/5-harness-default-user-groups.md b/docs/platform/4_Role-Based-Access-Control/5-harness-default-user-groups.md new file mode 100644 index 00000000000..a482c7a462f --- /dev/null +++ b/docs/platform/4_Role-Based-Access-Control/5-harness-default-user-groups.md @@ -0,0 +1,67 @@ +--- +title: Harness Default User Groups +description: Describes the default User Groups at each scope. +# sidebar_position: 2 +helpdocs_topic_id: n3cel7d8re +helpdocs_category_id: w4rzhnf27d +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness has default User Group in each scope. These groups have all the users at the respective scope as their members. + +Whenever you create a new Organization or Project, Harness creates a default User Group in its scope. + +For example, if you add a new Organization to your Account, Harness creates a default User Group in the Organization. This group will have all the users within the scope of the Organization. + +This topic explains the default User Groups at each scope and how you can do the Role Assignment for each of them. + +### Harness default User Groups overview + +Harness adds the scope-specific default User group to all your existing Accounts, Organizations, and Projects as well as to any Organization and Project that you create later. + +The users that you add in the Account scope will **not** be assigned the **Account Viewer** role by default. 
The user's default role assignment is the same as the role assignment of the default User Group in the Account. The following table explains the default User Groups at the individual scopes:
+
+
+
+| | | |
+| --- | --- | --- |
+| **Scope** | **Default User Group Name** | **Description** |
+| Account | **All Account Users** | All the users in the Account scope are members of this User Group. |
+| Organization | **All Organization Users** | All the users in the Organization scope are members of this User Group. |
+| Project | **All Project Users** | All the users in the Project scope are members of this User Group. |
+
+* When you add a new Organization or Project, the default role assignment for the default User Group is that of **Organization Viewer** and **Project Viewer** respectively.![](./static/harness-default-user-groups-57.png)
+* When you add a user to an existing Account, Organization, or Project, Harness adds the user to the default User Group in the scope where you added the user.
+For example, if you add a user to your existing Organization `OrgExample`, Harness will add the user to the All Organization Users group within `OrgExample`.
+* Admins can do the required [role assignment](./1-rbac-in-harness.md#role-assignment) for the default User Groups.
+* When you add a user to your Harness Account, the user's default role assignment is the same as the role assignment of the default User Group in the Account.
+For example, the **All Account Users** group in your Account has the role assignment of **Account Viewer**. All the members of this group can view resources within the scope of this Account. Now, you add a new user to this Account.
+Harness adds this user to the **All Account Users** group, and the role assignment of this user is **Account Viewer**, which is inherited from the default User Group of this Account.
+
+You cannot create, edit, or delete the default User Groups. Harness manages them.
+
+### Assign Role-Bindings for default User Group in a new Organization
+
+1. In your Harness Account, click **Account Settings.**
+2. Click **Organizations**.
+3. Click **New Organization**. The new Organization settings appear.
+4. In **Name**, enter a name for your Organization.
+5. Enter a **Description** and [Tags](https://harness.helpdocs.io/article/i8t053o0sq-tags-reference) for your new Org.
+6. Click **Save and Continue**.![](./static/harness-default-user-groups-58.png)
+7. Click **Finish**.
+Your Organization now appears in the list of Organizations.
+8. Click your Organization, and then click **Access Control**.
+9. Click **User Groups**.
+**All Organization Users** is the default User Group with a default role assignment of **Organization Viewer**.![](./static/harness-default-user-groups-59.png)
+10. To assign another role to this User Group, click **Role**.
+The **Add Role** settings appear.![](./static/harness-default-user-groups-60.png)
+11. Click **Add**.
+12. In **Roles**, select a Harness built-in Role or a custom Role that you have created for the desired permissions.
+For more information on built-in and custom Roles, see [Add and Manage Roles](./9-add-manage-roles.md).
+13. In **Resource Groups**, select a Harness built-in Resource Group or a custom Resource Group that you have created for the desired resources.
+For more information on built-in and custom Resource Groups, see [Add and Manage Resource Groups](./8-add-resource-groups.md).
+14. Click **Apply**.
+
+All the existing members and any new members that you add to this Organization will have the role bindings that you just added.
+
+
diff --git a/docs/platform/4_Role-Based-Access-Control/6-add-and-manage-service-account.md b/docs/platform/4_Role-Based-Access-Control/6-add-and-manage-service-account.md
new file mode 100644
index 00000000000..2040e8abb99
--- /dev/null
+++ b/docs/platform/4_Role-Based-Access-Control/6-add-and-manage-service-account.md
@@ -0,0 +1,67 @@
+---
+title: Add and Manage Service Accounts
+description: Steps to add and manage Service Accounts.
+# sidebar_position: 2
+helpdocs_topic_id: e5p4hdq6bd
+helpdocs_category_id: w4rzhnf27d
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Admin Users of an account can create a Service Account with specific Role Bindings.
+
+
+### Before you begin
+* Make sure you are an Account Admin to Create, Edit, Delete, and Manage Service Accounts.
+For more details, see [API Permissions Reference](./ref-access-management/api-permissions-reference.md).
+
+### Create a Service Account
+
+In Harness, click **Home**.
+
+In **ACCOUNT SETUP**, click **Access Control**.
+
+Click **Service Accounts**, and then click **+ New Service Account**.
+
+In the **New Service Account** settings page, enter a **Name**.
+
+Enter **Email**, **Description**, and **Tags** for this Service Account.
+
+![](./static/add-and-manage-service-account-45.png)
+Click **Save**. Your Service Account is created.
+
+Click **+Role** to assign Role Bindings to the Service Account you just created.
+
+![](./static/add-and-manage-service-account-46.png)
+For step-by-step instructions to add Roles and Resource Groups, see [Add and Manage Roles](./9-add-manage-roles.md) and [Add and Manage Resource Groups](./8-add-resource-groups.md).
+
+For step-by-step instructions to add an API key to the Service Account that you just created, see [Add and Manage API Keys](./7-add-and-manage-api-keys.md).
+
+### Edit a Service Account
+
+In Harness, click **Home**.
+
+In **ACCOUNT SETUP**, click **Access Control**.
+
+Click **Service Accounts**. All the Service Accounts are listed.
+
+Click the more options button (**︙**) next to the Service Account you want to edit.
+
+![](./static/add-and-manage-service-account-47.png)
+Click **Edit**.
+
+Follow the steps in [Create a Service Account](./6-add-and-manage-service-account.md#create-a-service-account) to modify any of the configured settings.
+
+### Delete a Service Account
+
+In Harness, click **Home**.
+
+In **ACCOUNT SETUP**, click **Access Control**.
+
+Click **Service Accounts**. All the Service Accounts are listed.
+
+Click the more options button (**︙**) next to the Service Account you want to delete.
+
+![](./static/add-and-manage-service-account-48.png)
+Click **Delete**.
+
diff --git a/docs/platform/4_Role-Based-Access-Control/7-add-and-manage-api-keys.md b/docs/platform/4_Role-Based-Access-Control/7-add-and-manage-api-keys.md
new file mode 100644
index 00000000000..479ebfc6ade
--- /dev/null
+++ b/docs/platform/4_Role-Based-Access-Control/7-add-and-manage-api-keys.md
@@ -0,0 +1,167 @@
+---
+title: Add and Manage API Keys
+description: Steps to add and manage API Keys.
+# sidebar_position: 2
+helpdocs_topic_id: tdoad7xrh9
+helpdocs_category_id: w4rzhnf27d
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Before you can access the Harness API, you must obtain an access token that grants access to that API. The access token allows you to make authorized API calls to Harness. Tokens can be created at the Account/Org/Project level.
+
+### Before you begin
+* Make sure you are an Account Admin to Create, Edit, Delete, and Manage Service Accounts.
+For more details on permissions for API Keys, see [API Permissions Reference](ref-access-management/api-permissions-reference.md).
+
+### Harness API Key
+
+You can create API Keys at the Account/Org/Project scope and can get multiple access tokens under them. Harness lets you create two kinds of access tokens:
+
+* **Personal Access Token** - You can create an API Key and generate tokens under it from your user profile.
+* **Service Account Token** - As an Account Admin, you can create a Service Account with specified Role Bindings and can then create API Keys and generate tokens under it.
+
+Based on its type, the token inherits permissions from the User/Service Account role binding.
+
+### Create Personal Access Token
+
+To generate a Personal Access Token, you need to first create an API Key in your user profile.
+
+1. In Harness, navigate to your **Profile**.
+2. Click **API Key**. The API Key settings appear.
+3. Enter **Name**, **Description**, and **Tags** for your API Key.
+4. Click **Save**. The new API Key is created.
+
+#### Generate Personal Access Token
+
+1. To generate a Token for this API Key, click **Token** below the API Key you just created.
+2. In the **New Token** settings, enter **Name**, **Description**, and **Tags**.
+3. To set an expiration date for this token, select **Set Expiration Date**.
+4. Enter a date in **Expiration Date (mm/dd/yyyy)**.
+5. Click **Generate Token**.
+6. Your new Token is generated.![](./static/add-and-manage-api-keys-20.png)
+
+You cannot see this token value after you close this dialog. Make sure to copy and store the generated token value securely.
+
+### Create Service Account Token
+
+To generate a Service Account Token, you need to first create a [Service Account](./6-add-and-manage-service-account.md) and create an API Key under it.
+
+1. In Harness, click **Home**.
+2. In **ACCOUNT SETUP**, click **Access Control**.
+3. Click **Service Accounts** and then click the service account to which you want to add a new API Key. For step-by-step instructions to add a new Service Account, see [Add and Manage Service Accounts](./6-add-and-manage-service-account.md).
+4. In the Service Account's settings page, click **API Key**.
+5. In the **New API Key** settings, enter **Name**, **Description**, and **Tags**.
+6. Click **Save**. The new API Key is created.
+
+#### Generate Service Account Token
+
+1.
To generate a Token for this API Key, click **Token** below the API Key you just created.
+2. In the **New Token** settings, enter **Name**, **Description**, and **Tags**.
+3. To set an expiration date for this token, select **Set Expiration Date**.
+4. Enter a date in **Expiration Date (mm/dd/yyyy)**.
+5. Click **Generate Token**.
+6. Your new Token is generated.![](./static/add-and-manage-api-keys-21.png)
+
+You cannot see this token value after you close this dialog. Make sure to copy and store the generated token value securely.
+
+### Edit Token
+
+#### Edit a Personal Access Token
+
+1. In Harness, navigate to your profile.
+2. In **My API Keys**, expand the token that you want to edit.
+3. Click the more options button (**︙**) next to the token you want to edit.![](./static/add-and-manage-api-keys-22.png)
+4. Click **Edit**.
+5. Follow the steps in [Create Personal Access Token](#create-personal-access-token) to modify any of the configured settings.
+
+#### Edit a Service Account Token
+
+1. In your Harness Account, click **Account Settings**.
+2. Click **Access Control**.
+3. Click **Service Accounts** and then click the service account that has the token you want to edit. All the API keys are listed.
+4. Click the API key whose token you want to edit. You can see the list of all the tokens.
+5. Click the more options button (**︙**) next to the token you want to edit.![](./static/add-and-manage-api-keys-23.png)
+6. Click **Edit**.
+7. Follow the steps in [Create Service Account Token](#create-service-account-token) to modify any of the configured settings.
+
+### Rotate Token
+
+It is a recommended security practice to rotate your tokens periodically. You can rotate your tokens in Harness for symmetric encryption.
+
+#### Rotate a Personal Access Token
+
+1. In Harness, navigate to your profile.
+2. In **My API Keys**, expand the token that you want to rotate.
+3. Click the more options button (**︙**) next to the token you want to rotate.
+4.
Click **Rotate Token**.![](./static/add-and-manage-api-keys-24.png)
+5. In the Rotate Token Settings screen, enter **Expiration Date** and click **Rotate Token**.
+6. Your new token is generated. **Copy and store the token securely before you close this dialog.**![](./static/add-and-manage-api-keys-25.png)
+
+#### Rotate a Service Account Token
+
+1. In Harness, click **Home**.
+2. In **ACCOUNT SETUP**, click **Access Control**.
+3. Click **Service Accounts** and then click the service account that has the token you want to rotate. All the API keys are listed.
+4. Click the API key whose token you want to rotate. You can see the list of all the tokens.
+5. Click the more options button (**︙**) next to the token you want to rotate.
+6. Click **Rotate Token**.![](./static/add-and-manage-api-keys-26.png)
+7. In the Rotate Token Settings screen, enter **Expiration Date** and click **Rotate Token**.
+8. Your new token is generated. **Copy and store the token securely before you close this dialog.**![](./static/add-and-manage-api-keys-27.png)
+
+### Delete Token
+
+#### Delete a Personal Access Token
+
+1. In Harness, navigate to your profile.
+2. In **My API Keys**, expand the token that you want to delete.
+3. Click the more options button (**︙**) next to the token you want to delete.
+4. Click **Delete**.
+
+#### Delete a Service Account Token
+
+1. In Harness, click **Home**.
+2. In **ACCOUNT SETUP**, click **Access Control**.
+3. Click **Service Accounts** and then click the service account that has the token you want to delete. All the API keys are listed.
+4. Click the API key whose token you want to delete. You can see the list of all the tokens.
+5. Click the more options button (**︙**) next to the token you want to delete.
+6. Click **Delete**.![](./static/add-and-manage-api-keys-28.png)
+
+### Edit API Key
+
+To edit an API Key in your user profile, perform the following steps:
+
+1. In Harness, navigate to your **Profile**.
+2.
Your API Keys are listed under **My API Keys**.
+3. Click the more options button (**︙**) next to the API Key that you want to edit.![](./static/add-and-manage-api-keys-29.png)
+4. Click **Edit**. The API Key settings appear.
+5. Follow the steps in [Create Personal Access Token](#create-personal-access-token) to modify any of the configured settings.
+
+To edit an API Key in a Service Account, perform the following steps:
+
+1. In Harness, click **Home**.
+2. In **ACCOUNT SETUP**, click **Access Control**.
+3. Click **Service Accounts.**
+4. Click the service account whose API key you want to edit. All the API Keys are listed.
+5. Click the more options button (**︙**) next to the API Key that you want to edit.![](./static/add-and-manage-api-keys-30.png)
+6. Click **Edit**.
+7. Follow the steps in [Create Service Account Token](#create-service-account-token) to modify any of the configured settings.
+
+### Delete API Key
+
+To delete an API key from your profile, perform the following steps:
+
+1. In Harness, navigate to your **Profile**.
+2. Your API Keys are listed under **My API Keys**.
+3. Click the more options button (**︙**) next to the API Key that you want to delete.
+4. Click **Delete**.![](./static/add-and-manage-api-keys-31.png)
+
+To delete an API key in a Service Account, perform the following steps:
+
+1. In Harness, click **Home**.
+2. In **ACCOUNT SETUP**, click **Access Control**.
+3. Click **Service Accounts.**
+4. Click the service account whose API key you want to delete. All the API Keys are listed.
+5. Click the more options button (**︙**) next to the API Key that you want to delete.
+6. Click **Delete**.
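Once you have a token (personal or Service Account), API requests are authenticated by sending it in the `x-api-key` request header. The sketch below builds such a request in Python; the token value and account identifier are placeholders, and the endpoint shown (listing projects) is illustrative, so check the Harness API reference for the exact paths available to you:

```python
import urllib.request

API_TOKEN = "pat.xxxxx"         # placeholder: paste your generated token here
ACCOUNT_ID = "your_account_id"  # placeholder: your Harness account identifier

# Harness REST calls carry the token in the "x-api-key" header. The endpoint
# below is illustrative -- consult the Harness API reference for real paths.
url = f"https://app.harness.io/ng/api/projects?accountIdentifier={ACCOUNT_ID}"
req = urllib.request.Request(url, headers={"x-api-key": API_TOKEN})

# Uncomment to send the request once a real token is in place:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```

A request without the header (or with an expired or rotated token) is rejected, which is why rotated tokens must be updated everywhere they are used.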
+
+
diff --git a/docs/platform/4_Role-Based-Access-Control/8-add-resource-groups.md b/docs/platform/4_Role-Based-Access-Control/8-add-resource-groups.md
new file mode 100644
index 00000000000..dd095e54478
--- /dev/null
+++ b/docs/platform/4_Role-Based-Access-Control/8-add-resource-groups.md
@@ -0,0 +1,112 @@
+---
+title: Add and Manage Resource Groups
+description: This document shows steps to create and manage resource groups and assign them to user groups.
+# sidebar_position: 2
+helpdocs_topic_id: yp4xj36xro
+helpdocs_category_id: w4rzhnf27d
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+A Resource Group is a set of Harness resources that a permission applies to. Permissions given to a user or user group as part of a Role apply to the set of resources that are part of the Resource Group.
+
+This topic will explain the steps to create and manage Resource Groups within the Harness system.
+
+### Before you begin
+
+* [Learn Harness' Key Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts)
+* Make sure you have **Create/Edit/Delete** Permissions for Resource Groups.
+
+### Visual Summary
+
+Here is a quick overview of Resource Groups at various scopes:
+
+* **Account Only** - To include all the resources within the scope of the Account. This does not include resources within the scope of Org or Project.
+* **All (including all Organizations and Projects)** - To include all the resources within the scope of the Account, as well as those within the scope of the Orgs and Projects in this Account.
+* **Specified Organizations (and their Projects)** - To include all the resources within the scope of specific Organizations and their Projects.
+ +![](./static/add-resource-groups-32.png) +### Review: Resource Groups and Scopes + +A Resource Group can contain any of the following: + +* All or selected resources from the list of resources in the Resource Group's scope - For example, a Resource Group RG1 created within Account Acc1 can contain all or selected resources created within the same Account Acc1. +* All or selected resources in the scope in which it is defined. For example, all Account level resources, all Account Level Secret Managers, all Connectors in Org A. +* All or specific resources for the entire account - For example, a Resource Group RG1 within Account Acc1 can contain all or selected resources created within Acc1, Organizations within Acc1, Projects within Organizations in Acc1.![](./static/add-resource-groups-33.png) + +Harness includes the following built-in Resource Groups at the Account, Org, and Project scope: + + + +| | | | +| --- | --- | --- | +| Scope | Resource Group | Description | +| Account | **All Resources Including Child Scopes** | Includes all the resources within the scope of the Account, as well as those within the scope of the Orgs and Projects in this Account. | +| Account | **All Account Level Resources** | Includes all the resources within the scope of the Account. This does not include resources within the scope of Org or Project. | +| Org | **All Resources Including Child Scopes** | Includes all the resources within the scope of the Org, as well as those within the scope of all the Projects created within this Org. | +| Org | **All Organization Level Resources** | Includes all the resources within the scope of the Org. This does not include resources within the scope of Projects. | +| Project | **All Project Level Resources** | Includes all the resources within the scope of the Project. | + +### Step 1: Add a New Resource Group + +Select your **Project/Org/Account**, and click **Access Control**. + +Click **Resource Groups** and then click **New Resource Group**. 
The New Resource Group settings appear.
+
+Enter a **Name** for your **Resource Group**.
+
+Enter **Description** and **Tags** for your **Resource Group**.
+
+![](./static/add-resource-groups-34.png)
+Click **Save**.
+
+### Step 2: Select a Resource Scope
+
+After you save the Resource Group, you must select the scope of the resources to include in it.
+
+![](./static/add-resource-groups-35.png)
+You can select one of the following scopes for the Resource Group:
+
+* **Account Only**
+* **All (including all Organizations and Projects)**
+* **Specified Organizations (and their Projects)**![](./static/add-resource-groups-36.png)
+For each Organization you select, you can further select **All** or **Specified** Projects within this Organization to include the resources accordingly.![](./static/add-resource-groups-37.png)
+
+Click **Apply**.
+
+### Step 3: Select Resources
+
+After you have selected the Resource Scope, you must select the resources that you want to include in this Resource Group.
+
+You can select either **All** or **Specified** resources.
+
+![](./static/add-resource-groups-38.png)
+Click **Save**.
+
+Go back to Resource Groups. Your Resource Group is now listed here.
+
+![](./static/add-resource-groups-39.png)
+### Step: Delete a Resource Group
+
+Click the **Resource Groups** tab under **Access Control.**
+
+Click **Delete** in the top right corner to remove a Resource Group.
+
+![](./static/add-resource-groups-40.png)
+### Step: Manage Resource Group
+
+Click the **Resource Groups** tab under **Access Control.**
+
+Click the Resource Group you want to edit. The Resource Group details page appears.
+
+You can add/remove resources from this page.
+
+Click **Apply Changes**.
+
+### Next steps
+
+* [Add and Manage Users](./3-add-users.md)
+* [Add and Manage User Groups](./4-add-user-groups.md)
+* [Add and Manage Roles](./9-add-manage-roles.md)
+* [Permissions Reference](./ref-access-management/permissions-reference.md)
+
diff --git a/docs/platform/4_Role-Based-Access-Control/9-add-manage-roles.md b/docs/platform/4_Role-Based-Access-Control/9-add-manage-roles.md
new file mode 100644
index 00000000000..a437ce6ce95
--- /dev/null
+++ b/docs/platform/4_Role-Based-Access-Control/9-add-manage-roles.md
@@ -0,0 +1,84 @@
+---
+title: Add and Manage Roles
+description: This document shows steps to create new Roles and assign permissions to them.
+# sidebar_position: 2
+helpdocs_topic_id: tsons9mu0v
+helpdocs_category_id: w4rzhnf27d
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+A Role is a group of permissions you can assign to a Harness User Group. These permissions determine what operations a User Group can perform on a specific Harness Resource.
+
+This topic will explain the steps to create and manage Roles within Harness.
+
+
+### Before you begin
+
+* [Learn Harness' Key Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts)
+* Make sure you have Create/Edit/Delete Permissions for Roles.
+
+The **Account Admin** Role has permissions for all the resources within the Account scope as well as its child scopes (Organizations and Projects within this Account).
+
+![](./static/add-manage-roles-17.png)
+### Step: Add a New Role
+
+This topic assumes you have a Harness Project set up. If not, see [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md).
+
+You can add a Role at the Project/Organization/Account scope. To do this, go to Project SETUP, Organization, or Account Settings. This topic explains how to create a Role in the Account scope.
+
+Select your **Project/Org/Account**, and click **Access Control**.
+
+Click **Roles**.
+
+Click **New Role**.
The **New Role** settings appear. + +![](./static/add-manage-roles-18.png) +Enter a **Name** for your **Role**. + +Enter optional **Description** and **Tags** for your **Role**. + +Click **Save**. + +### Step: Delete Role + +Click the **Roles** tab under **Access** **Control**. + +Click **Delete** in the top right corner to delete a specific role. + +### Step: Manage Role + +Click the **Roles** tab under **Access Control**. + +Click on the role you want to edit. The **Update Role Permissions** page appears. + +![](./static/add-manage-roles-19.png) +Add/Remove Resource-specific permissions from this page. Click **Apply Changes**. + +### Harness Built-in Roles + +Harness provides the following default roles at the Account, Org, and Project scope: + + + +| | | +| --- | --- | +| **Scope** | **Role** | +| **Account** | Account Admin | +| **Account** | Account Viewer | +| **Account** | Feature Flag Manage Role | +| **Org** | Organization Admin | +| **Org** | Organization Viewer | +| **Org** | Feature Flag Manage Role | +| **Project** | Project Admin | +| **Project** | Project Viewer | +| **Project** | Pipeline Executor | +| **Project** | Feature Flag Manage Role | + +### See also + +* [Add and Manage Users](./3-add-users.md) +* [Add and Manage User Groups](./4-add-user-groups.md) +* [Add and Manage Resource Groups](./8-add-resource-groups.md) +* [Permissions Reference](./ref-access-management/permissions-reference.md) + diff --git a/docs/platform/4_Role-Based-Access-Control/_category_.json b/docs/platform/4_Role-Based-Access-Control/_category_.json new file mode 100644 index 00000000000..df577d7cc7f --- /dev/null +++ b/docs/platform/4_Role-Based-Access-Control/_category_.json @@ -0,0 +1 @@ +{"label": "Role-Based Access Control", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Role-Based Access Control"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", 
"helpdocs_category_id": "w4rzhnf27d"}} \ No newline at end of file diff --git a/docs/platform/4_Role-Based-Access-Control/ref-access-management/_category_.json b/docs/platform/4_Role-Based-Access-Control/ref-access-management/_category_.json new file mode 100644 index 00000000000..8fdf990e927 --- /dev/null +++ b/docs/platform/4_Role-Based-Access-Control/ref-access-management/_category_.json @@ -0,0 +1 @@ +{"label": "Access Management Reference", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Access Management Reference"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "4przngb8nk"}} \ No newline at end of file diff --git a/docs/platform/4_Role-Based-Access-Control/ref-access-management/api-permissions-reference.md b/docs/platform/4_Role-Based-Access-Control/ref-access-management/api-permissions-reference.md new file mode 100644 index 00000000000..16a3ec45174 --- /dev/null +++ b/docs/platform/4_Role-Based-Access-Control/ref-access-management/api-permissions-reference.md @@ -0,0 +1,212 @@ +--- +title: API Permissions Reference +description: API Keys permissions reference. +# sidebar_position: 2 +helpdocs_topic_id: bhkc68vy9c +helpdocs_category_id: 4przngb8nk +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic gives you details of the Permissions available in the Harness system for API Keys and Service Accounts. + +### Service Account + +The following table lists permissions for Service Account. To know more about Service Accounts, see [Add and Manage Service Accounts](../6-add-and-manage-service-account.md). + + + +| | | +| --- | --- | +| **Scope** | **Action** | +| **Account** |
  • core\_serviceaccount\_view
  • core\_serviceaccount\_edit
  • core\_serviceaccount\_delete
|
+
+### API Key
+
+The following table lists the permissions for API Keys. To know more about API Keys, see [Add and Manage API Keys](../7-add-and-manage-api-keys.md).
+
+
+
+| | |
+| --- | --- |
+| **Scope** | **Action** |
+| **Account/Org/Project** | core\_serviceaccount\_manageapikey|
+
+### Harness API Permissions
+
+The following table lists the permissions for accessing the Harness APIs.
+
+
+
+| | | |
+| --- | --- | --- |
+| **Scope** | **Permission Identifier** | **Description** |
+| **Account/Org/Project** | core\_project\_view | View Projects |
+| **Account/Org/Project** | core\_project\_create | Create Projects |
+| **Account/Org/Project** | core\_project\_edit | Edit Projects |
+| **Account/Org/Project** | core\_project\_delete | Delete Projects |
+| **Account/Org** | core\_organization\_view | View Organizations |
+| **Account/Org** | core\_organization\_create | Create Organizations |
+| **Account/Org** | core\_organization\_edit | Edit Organizations |
+| **Account/Org** | core\_organization\_delete | Delete Organizations |
+| **Account** | core\_account\_view | View Account |
+| **Account** | core\_account\_edit | Edit Account |
+| **Account** | core\_secret\_view | View Secrets |
+| **Account/Org/Project** | core\_secret\_edit | Create or Edit Secrets |
+| **Account/Org/Project** | core\_secret\_access | Access Secrets |
+| **Account/Org/Project** | core\_secret\_delete | Delete Secrets |
+| **Account/Org/Project** | core\_connector\_view | View Connectors |
+| **Account/Org/Project** | core\_connector\_edit | Create or Edit Connectors |
+| **Account/Org/Project** | core\_connector\_access | Access Connectors |
+| **Account/Org/Project** | core\_connector\_delete | Delete Connectors |
+| **Account** | core\_smtp\_view | View SMTP Config |
+| **Account** | core\_smtp\_edit | Create or Edit SMTP Config |
+| **Account** | core\_smtp\_delete | Delete SMTP Config |
+| **Account/Org/Project** | core\_delegate\_view | View Delegates |
+| **Account/Org/Project** | core\_delegate\_edit | Create or Edit Delegates | +| **Account/Org/Project** | core\_delegate\_delete | Delete Delegates | +| **Account/Org/Project** | core\_delegateconfiguration\_view | View Delegate Configurations | +| **Account/Org/Project** | core\_delegateconfiguration\_edit | Create/Edit Delegate Configurations | +| **Account/Org/Project** | core\_delegateconfiguration\_delete | Delete Delegate Configurations | +| **Account/Org/Project** | core\_pipeline\_view | View Pipelines | +| **Account/Org/Project** | core\_pipeline\_edit | Create/Edit Pipelines | +| **Account/Org/Project** | core\_pipeline\_delete | Delete Pipelines | +| **Account/Org/Project** | core\_pipeline\_execute | Run Pipelines | +| **Account/Org/Project** | core\_service\_view | View Services | +| **Account/Org/Project** | core\_service\_edit | Create/Edit Services | +| **Account/Org/Project** | core\_service\_delete | Delete Services | +| **Account/Org/Project** | core\_service\_access | Runtime access to Services | +| **Account/Org/Project** | core\_environment\_view | View Environments | +| **Account/Org/Project** | core\_environment\_edit | Create/Edit Environments | +| **Account/Org/Project** | core\_environment\_delete | Delete Environments | +| **Account/Org/Project** | core\_environment\_access | Runtime access to Environments | +| **Account/Org/Project** | core\_environmentgroup\_view | View Environment Groups | +| **Account/Org/Project** | core\_environmentgroup\_edit | Create/Edit Environment Groups | +| **Account/Org/Project** | core\_environmentgroup\_delete | Delete Environment Groups | +| **Account/Org/Project** | core\_environmentgroup\_access | Runtime access to Environment Groups | +| **Account/Org/Project** | core\_audit\_view | View Audits | +| **Account/Org/Project** | core\_usergroup\_view | View User Groups | +| **Account/Org/Project** | core\_usergroup\_manage | Manage User Groups | +| **Account/Org/Project** | core\_user\_view | View Users 
| +| **Account/Org/Project** | core\_user\_manage | Manage Users | +| **Account/Org/Project** | core\_role\_view | View Roles | +| **Account/Org/Project** | core\_role\_edit | Create/Edit Roles | +| **Account/Org/Project** | core\_role\_delete | Delete Roles | +| **Account/Org/Project** | core\_resourcegroup\_view | View Resource Groups | +| **Account/Org/Project** | core\_resourcegroup\_edit | Create/Edit Resource Groups | +| **Account/Org/Project** | core\_resourcegroup\_delete | Delete Resource Groups | +| **Account/Org/Project** | core\_user\_invite | Invite Users | +| **Account/Org/Project** | core\_license\_view | View License | +| **Account/Org/Project** | core\_license\_edit | Edit License | +| **Account/Org/Project** | core\_serviceaccount\_view | View Service Account details | +| **Account/Org/Project** | core\_serviceaccount\_edit | Edit Service Account details | +| **Account/Org/Project** | core\_serviceaccount\_delete | Delete Service Account details | +| **Account/Org/Project** | core\_serviceaccount\_manageapikey | Manage API keys for Service Account | +| **Account** | core\_authsetting\_view | View Auth settings | +| **Account** | core\_authsetting\_edit | Edit Auth settings | +| **Account** | core\_authsetting\_delete | Delete Auth settings | +| **Account/Org/Project** | ff\_featureflag\_edit | Create/Edit Feature Flags | +| **Account/Org/Project** | ff\_featureflag\_delete | Delete Feature Flags | +| **Account/Org/Project** | ff\_featureflag\_view | View Feature Flags | +| **Account/Org/Project** | ff\_targetgroup\_view | View Target Groups | +| **Account/Org/Project** | ff\_targetgroup\_edit | Create/Edit Target Groups | +| **Account/Org/Project** | ff\_targetgroup\_delete | Delete Target Groups | +| **Account/Org/Project** | ff\_environment\_targetGroupEdit | Edit Target Groups | +| **Account/Org/Project** | ff\_target\_view | View Targets | +| **Account/Org/Project** | ff\_environment\_apiKeyView | View Feature Flag Environment API Keys | +| 
**Account/Org/Project** | ff\_environment\_apiKeyCreate | Create Feature Flag Environment API Keys | +| **Account/Org/Project** | ff\_environment\_apiKeyDelete | Delete Feature Flag Environment API Keys | +| **Account/Org/Project** | ff\_environment\_edit | Edit Feature Flag Environment Configuration | +| **Account/Org/Project** | ff\_environment\_view | View Feature Flag Environment Configuration | +| **Account/Org/Project** | ff\_featureflag\_toggle | Toggle a Feature Flag on/off | +| **Account/Org** | core\_dashboards\_view | View Dashboards | +| **Account/Org** | core\_dashboards\_edit | Edit Dashboards | +| **Account/Org/Project** | core\_template\_view | View Templates | +| **Account/Org/Project** | core\_template\_copy | Copy Templates | +| **Account/Org/Project** | core\_template\_edit | Edit Templates | +| **Account/Org/Project** | core\_template\_delete | Delete Templates | +| **Account/Org/Project** | core\_template\_access | Access Templates | +| **Account/Org/Project** | core\_governancePolicy\_edit | Create/Edit Policies | +| **Account/Org/Project** | core\_governancePolicy\_view | View Policies | +| **Account/Org/Project** | core\_governancePolicy\_delete | Delete Policies | +| **Account/Org/Project** | core\_governancePolicySets\_edit | Create/Edit Policy Sets | +| **Account/Org/Project** | core\_governancePolicySets\_view | View Policy Sets | +| **Account/Org/Project** | core\_governancePolicySets\_delete | Delete Policy Sets | +| **Account/Org/Project** | core\_governancePolicySets\_evaluate | Evaluate Policy Sets | +| **Account/Org/Project** | chi\_monitoredservice\_view | View Monitored Services | +| **Account/Org/Project** | chi\_monitoredservice\_edit | Create/Edit Monitored Services | +| **Account/Org/Project** | chi\_monitoredservice\_delete | Delete Monitored Services | +| **Account/Org/Project** | chi\_monitoredservice\_toggle | Toggle Monitored Services on/off | +| **Account/Org/Project** | chi\_slo\_view | View SLOs | +| 
**Account/Org/Project** | chi\_slo\_edit | Create/Edit SLOs | +| **Account/Org/Project** | chi\_slo\_delete | Delete SLOs | +| **Account/Org/Project** | gitops\_agent\_view | View GitOps Agents | +| **Account/Org/Project** | gitops\_agent\_edit | Edit GitOps Agents | +| **Account/Org/Project** | gitops\_agent\_delete | Delete GitOps Agents | +| **Account/Org/Project** | gitops\_application\_view | View GitOps Applications | +| **Account/Org/Project** | gitops\_application\_edit | Edit GitOps Applications | +| **Account/Org/Project** | gitops\_application\_delete | Delete GitOps Applications | +| **Account/Org/Project** | gitops\_application\_sync | Sync GitOps Applications | +| **Account/Org/Project** | gitops\_repository\_view | View GitOps Repositories | +| **Account/Org/Project** | gitops\_repository\_edit | Edit GitOps Repositories | +| **Account/Org/Project** | gitops\_repository\_delete | Delete GitOps Repositories | +| **Account/Org/Project** | gitops\_cluster\_view | View GitOps Clusters | +| **Account/Org/Project** | gitops\_cluster\_edit | Edit GitOps Clusters | +| **Account/Org/Project** | gitops\_cluster\_delete | Delete GitOps Clusters | +| **Account/Org/Project** | gitops\_gpgkey\_view | View GitOps GPG keys | +| **Account/Org/Project** | gitops\_gpgkey\_edit | Edit GitOps GPG keys | +| **Account/Org/Project** | gitops\_gpgkey\_delete | Delete GitOps GPG keys | +| **Account/Org/Project** | gitops\_cert\_view | View GitOps Certificates | +| **Account/Org/Project** | gitops\_cert\_edit | Edit GitOps Certificates | +| **Account/Org/Project** | gitops\_cert\_delete | Delete GitOps Certificates | +| **Account/Org/Project** | sto\_testtarget\_view | View Test Targets | +| **Account/Org/Project** | sto\_testtarget\_edit | Edit Test Targets | +| **Account/Org/Project** | sto\_exemption\_view | View Exemptions | +| **Account/Org/Project** | sto\_exemption\_create | Create Exemptions | +| **Account/Org/Project** | sto\_exemption\_approve | Approve Exemptions | +| 
**Account/Org/Project** | sto\_issue\_view | View Security Issues | +| **Account/Org/Project** | sto\_scan\_view | View Security Scans | +| **Account/Org/Project** | core\_file\_view | View Files | +| **Account/Org/Project** | core\_file\_edit | Edit Files | +| **Account/Org/Project** | core\_file\_delete | Delete Files | +| **Account/Org/Project** | core\_file\_access | Access Files | +| **Account/Org/Project** | core\_variable\_view | View Variables | +| **Account/Org/Project** | core\_variable\_edit | Edit Variables | +| **Account/Org/Project** | core\_variable\_delete | Delete Variables | +| **Account/Org/Project** | chaos\_chaoshub\_view | View Chaos Hubs | +| **Account/Org/Project** | chaos\_chaoshub\_edit | Edit Chaos Hubs | +| **Account/Org/Project** | chaos\_chaoshub\_delete | Delete Chaos Hubs | +| **Account/Org/Project** | chaos\_chaosinfrastructure\_view | View Chaos Infrastructures | +| **Account/Org/Project** | chaos\_chaosinfrastructure\_edit | Edit Chaos Infrastructures | +| **Account/Org/Project** | chaos\_chaosinfrastructure\_delete | Delete Chaos Infrastructures | +| **Account/Org/Project** | chaos\_chaosexperiment\_view | View Chaos Experiments | +| **Account/Org/Project** | chaos\_chaosexperiment\_edit | Edit Chaos Experiments | +| **Account/Org/Project** | chaos\_chaosexperiment\_delete | Delete Chaos Experiments | +| **Account/Org/Project** | chaos\_chaosgameday\_view | View Chaos GameDay | +| **Account/Org/Project** | chaos\_chaosgameday\_edit | Edit Chaos GameDay | +| **Account/Org/Project** | chaos\_chaosgameday\_delete | Delete Chaos GameDay | +| **Account/Org/Project** | core\_setting\_view | View Settings | +| **Account/Org/Project** | core\_setting\_edit | Edit Settings | +| **Account** | ccm\_perspective\_view | View CCM Perspective | +| **Account** | ccm\_perspective\_edit | Edit CCM Perspective | +| **Account** | ccm\_perspective\_delete | Delete CCM Perspective | +| **Account** | ccm\_budget\_view | View CCM Budgets | +| 
**Account** | ccm\_budget\_edit | Edit CCM Budgets | +| **Account** | ccm\_budget\_delete | Delete CCM Budgets | +| **Account** | ccm\_costCategory\_view | View CCM Cost Category | +| **Account** | ccm\_costCategory\_edit | Create/Edit CCM Cost Category | +| **Account** | ccm\_costCategory\_delete | Delete CCM Cost Category | +| **Account** | ccm\_autoStoppingRule\_view | View CCM Auto stopping Rules | +| **Account** | ccm\_autoStoppingRule\_edit | Create/Edit CCM Auto stopping Rules | +| **Account** | ccm\_autoStoppingRule\_delete | Delete CCM Auto stopping Rules | +| **Account** | ccm\_folder\_view | View CCM Folders | +| **Account** | ccm\_folder\_edit | Create/Edit CCM Folders | +| **Account** | ccm\_folder\_delete | Delete CCM Folders | +| **Account** | ccm\_loadBalancer\_view | View CCM Load Balancers | +| **Account** | ccm\_loadBalancer\_edit | Create/Edit CCM Load Balancers | +| **Account** | ccm\_loadBalancer\_delete | Delete CCM Load Balancers | +| **Account** | ccm\_overview\_view | View CCM Overview page | +| **Account/Org/Project** | core\_deploymentfreeze\_manage | Manage Deployment Freeze | +| **Account/Org/Project** | core\_deploymentfreeze\_override | Override a Deployment Freeze | +| **Account/Org/Project** | core\_deploymentfreeze\_global | Global Deployment Freeze | + diff --git a/docs/platform/4_Role-Based-Access-Control/ref-access-management/permissions-reference.md b/docs/platform/4_Role-Based-Access-Control/ref-access-management/permissions-reference.md new file mode 100644 index 00000000000..4e06ebd5fce --- /dev/null +++ b/docs/platform/4_Role-Based-Access-Control/ref-access-management/permissions-reference.md @@ -0,0 +1,498 @@ +--- +title: Permissions Reference +description: This document lists the default user groups and permissions present in the Harness Access Management system. 
+# sidebar_position: 2 +helpdocs_topic_id: yaornnqh0z +helpdocs_category_id: 4przngb8nk +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes the Harness permissions, default Roles, and Resource Groups available in the Harness system. + +### Default Roles + +Each Harness Account, Organization, and Project includes default Roles to help you with [RBAC](../1-rbac-in-harness.md). + +The following table lists permissions corresponding to the default roles at the Account [scope](../1-rbac-in-harness.md#rbac-scope): + + + +| | | | +| --- | --- | --- | +| **Role** | **Resource Type** | **Permissions** | +| **Account Admin** | Resource Groups |
  • **View** - Can view Resource Groups
  • **Create/Edit** - Can create and edit Resource Groups
  • **Delete** - Can delete Resource Groups
  • | +| | Service Accounts |
  • **View** - Can view Service Accounts
  • **Create/Edit** - Can create and edit Service Accounts
  • **Delete** - Can delete Service Accounts
  • **Manage** - Can create/update/delete API keys and tokens
  • | +| | Organizations |
  • **View** - Can view existing Organizations
  • **Create** - Can create new Organizations
  • **Edit** - Can edit existing Organizations
  • **Delete** - Can delete Organizations
  • + | +| | Roles |
  • **View** - Can view existing Roles
  • **Create/Edit** - Can create and edit Roles
  • **Delete** - Can delete existing Roles
  • + | +| | Account Settings |
  • **View** - Can view Account Settings
  • **Edit** - Can edit Account Settings
  • + | +| | Projects |
  • **View** - Can view existing Projects
  • **Create** - Can create new Projects
  • **Edit** - Can edit existing Projects
  • **Delete** - Can delete existing Projects
  • + | +| | Users |
  • **View** - Can view existing users
  • **Manage** - Can update/delete users and their role bindings
  • **Invite** - Can invite users to Harness
  • + | +| | Authentication Settings |
  • **View** - Can view Authentication settings
  • **Create/Edit** - Can create and edit Authentication settings
  • **Delete** - Can delete Authentication settings
  • + | +| | User Groups |
  • **View** - Can view User Groups
  • **Manage** - Can create/update/delete User Groups
  • + | +| | Governance Policy Sets |
  • **View** - Can view existing Governance Policy Sets
  • **Create/Edit** - Can create and edit Governance Policy Sets
  • **Delete** - Can delete existing Governance Policy Sets
  • **Evaluate** - Can evaluate Governance Policy Sets
  • + | +| | Variables |
  • **View** - Can view existing Variables
  • **Create/Edit** - Can create and edit Variables
  • **Delete** - Can delete existing Variables
  • + | +| | Templates |
  • **View** - Can view existing Templates
  • **Create/Edit** - Can create and edit Templates
  • **Delete** - Can delete existing Templates
  • **Access** - Can access referenced Templates at runtime
  • + | +| | Governance Policies |
  • **View** - Can view existing Governance Policies
  • **Create/Edit** - Can create and edit Governance Policies
  • **Delete** - Can delete existing Governance Policies
  • + | +| | Dashboards |
  • **View** - Can view Dashboards
  • **Manage** - Can manage and edit Dashboards
  • + | +| | Delegate Configurations |
  • **View** - Can view existing Delegate Configurations
  • **Create/Edit** - Can create and edit Delegate Configurations
  • **Delete** - Can delete existing Delegate Configurations
  • + | +| | Delegates |
  • **View** - Can view existing Delegates
  • **Create/Edit** - Can create and edit Delegates
  • **Delete** - Can delete existing Delegates
  • + | +| | Secrets |
  • **View** - Can view existing Secrets
  • **Create/Edit** - Can create and edit Secrets
  • **Delete** - Can delete existing Secrets
  • **Access** - Can access referenced Secrets at runtime
  • + | +| | Connectors |
  • **View** - Can view existing Connectors
  • **Create/Edit** - Can create and edit Connectors
  • **Delete** - Can delete existing Connectors
  • **Access** - Can access referenced Connectors at runtime
  • + | +| | Environments |
  • **View** - Can view existing Environments
  • **Create/Edit** - Can create and edit Environments
  • **Delete** - Can delete existing Environments
  • **Access** - Can access referenced Environments at runtime
  • + | +| | ChaosHub |
  • **View** - Can view Chaos experiments and Chaos Scenarios
  • **Create/Edit** - Can connect to ChaosHub Git repo
  • **Delete** - Can disconnect ChaosHub Git repo
  • + | +| | Clusters |
  • **View** - Can view existing Clusters
  • **Create/Edit** - Can create and edit Clusters
  • **Delete** - Can delete existing Clusters
  • + | +| | Agents |
  • **View** - Can view existing Agents
  • **Create/Edit** - Can create and edit Agents
  • **Delete** - Can delete existing Agents
  • + | +| | Repository Certificates |
  • **View** - Can view existing Repository Certificates
  • **Create/Edit** - Can create and edit Repository Certificates
  • **Delete** - Can delete existing Repository Certificates
  • + | +| | Applications |
  • **View** - Can view existing applications
  • **Create/Edit** - Can create and edit Applications
  • **Delete** - Can delete existing Applications
  • **Sync** - Can deploy Applications
  • + | +| | Repositories |
  • **View** - Can view existing Repositories
  • **Create/Edit** - Can create and edit Repositories
  • **Delete** - Can delete existing Repositories
  • + | +| | GnuPG Keys |
  • **View** - Can view existing GnuPG Keys
  • **Create/Edit** - Can create and edit GnuPG Keys
  • **Delete** - Can delete existing GnuPG Keys
  • + | +| | Environment Groups |
  • **View** - Can view existing Environment Groups
  • **Create/Edit** - Can create and edit Environment Groups
  • **Delete** - Can delete existing Environment Groups
  • + | +| | SLO |
  • **View** - Can view existing SLOs
  • **Create/Edit** - Can create and edit SLOs
  • **Delete** - Can delete existing SLOs
  • + | +| | Monitored Services |
  • **View** - Can view existing Monitored Services
  • **Create/Edit** - Can create and edit Monitored Services
  • **Delete** - Can delete existing Monitored Services
  • **Toggle** - Can toggle Monitored Services on/off
  • + | +| | Pipelines |
  • **View** - Can view existing Pipelines
  • **Create/Edit** - Can create and edit Pipelines
  • **Delete** - Can delete existing Pipelines
  • **Execute** - Can execute Pipelines
  • + | +| | Services |
  • **View** - Can view existing Services
  • **Create/Edit** - Can create and edit Services
  • **Delete** - Can delete existing Services
  • **Access** - Can access referenced Services at runtime
  • + | +| | Feature Flags |
  • **Toggle** - Can turn a Feature Flag on or off
  • **Create/Edit** - Can create and edit Feature Flags
  • **Delete** - Can delete existing Feature Flags
  • + | +| | Target Management |
  • **Create/Edit** - Can create and edit Targets and Target Groups to control which variation of a Feature Flag a Target sees
  • **Delete** - Can delete Targets and Target Groups
  • + | +| **Account Viewer** | Resource Groups |
  • **View** - Can view existing Resource Groups
  • + | +| | Service Accounts |
  • **View** - Can view existing Service Accounts
  • + | +| | Organizations |
  • **View** - Can view existing Organizations
  • + | +| | Roles |
  • **View** - Can view existing Roles
  • + | +| | Account Settings |
  • **View** - Can view existing Account Settings
  • + | +| | Projects |
  • **View** - Can view existing Projects
  • + | +| | Users |
  • **View** - Can view existing Users
  • + | +| | Authentication Settings |
  • **View** - Can view existing Authentication Settings
  • + | +| | User Groups |
  • **View** - Can view existing User Groups
  • + | +| | Governance Policy Sets |
  • **View** - Can view existing Governance Policy Sets
  • + | +| | Variables |
  • **View** - Can view existing Variables
  • + | +| | Templates |
  • **View** - Can view existing Templates
  • + | +| | Governance Policies |
  • **View** - Can view existing Governance Policies
  • + | +| | Dashboards |
  • **View** - Can view existing Dashboards
  • + | +| | Delegate Configurations |
  • **View** - Can view existing Delegate Configurations
  • + | +| | Delegates |
  • **View** - Can view existing Delegates
  • + | +| | Secrets |
  • **View** - Can view existing Secrets
  • + | +| | Connectors |
  • **View** - Can view existing Connectors
  • + | +| | Environments |
  • **View** - Can view existing Environments
  • + | +| | ChaosHub |
  • **View** - Can view Chaos experiments and Chaos Scenarios
  • + | +| | Clusters |
  • **View** - Can view existing Clusters
  • + | +| | Agents |
  • **View** - Can view existing Agents
  • + | +| | Repository Certificates |
  • **View** - Can view existing Repository Certificates
  • + | +| | Applications |
  • **View** - Can view existing Applications
  • + | +| | Repositories |
  • **View** - Can view existing Repositories
  • + | +| | GnuPG Keys |
  • **View** - Can view existing GnuPG Keys
  • + | +| | Environment Groups |
  • **View** - Can view existing Environment Groups
  • + | +| | SLO |
  • **View** - Can view existing SLOs
  • + | +| | Monitored Services |
  • **View** - Can view existing Monitored Services
  • + | +| | Pipelines |
  • **View** - Can view existing Pipelines
  • + | +| | Services |
  • **View** - Can view existing Services
  • + | +| **GitOps Admin Role** | Clusters |
  • **View** - Can view existing Clusters
  • **Create/Edit** - Can create and edit Clusters
  • **Delete** - Can delete existing Clusters
  • + | +| | Agents |
  • **View** - Can view existing Agents
  • **Create/Edit** - Can create and edit Agents
  • **Delete** - Can delete existing Agents
  • + | +| | Repository Certificates |
  • **View** - Can view existing Repository Certificates
  • **Create/Edit** - Can create and edit Repository Certificates
  • **Delete** - Can delete existing Repository Certificates
  • + | +| | Applications |
  • **View** - Can view existing Applications
  • **Create/Edit** - Can create and edit Applications
  • **Delete** - Can delete existing Applications
  • **Sync** - Can deploy Applications
  • + | +| | Repositories |
  • **View** - Can view existing Repositories
  • **Create/Edit** - Can create and edit Repositories
  • **Delete** - Can delete existing Repositories
  • + | +| | GnuPG Keys |
  • **View** - Can view existing GnuPG Keys
  • **Create/Edit** - Can create and edit GnuPG Keys
  • **Delete** - Can delete existing GnuPG Keys
  • + | +| **Feature Flag Manage Role** | Feature Flags |
  • **Create/Edit** - Can create and edit Feature Flags
  • + | +| | Target Management |
  • **Create/Edit** - Can create and edit Targets and Target Groups to control which variation of a Feature Flag a Target sees
  • + | + +The following table lists permissions corresponding to the default roles at the Organization [scope](../1-rbac-in-harness.md#rbac-scope): + + + +| | | | +| --- | --- | --- | +| **Role** | **Resource Type** | **Permissions** | +| **Organization Admin​** | Resource Groups |
  • **View** - Can view existing Resource Groups
  • **Create/Edit** - Can create and edit Resource Groups
  • **Delete** - Can delete existing Resource Groups
  • + | +| | Service Accounts |
  • **View** - Can view existing Service Accounts
  • **Create/Edit** - Can create and edit Service Accounts
  • **Delete** - Can delete existing Service Accounts
  • **Manage** - Can create/update/delete API keys and tokens
  • + | +| | Organizations |
  • **View** - Can view existing Organizations
  • **Create** - Can create Organizations
  • **Edit** - Can edit existing Organizations
  • **Delete** - Can delete existing Organizations
  • + | +| | Roles |
  • **View** - Can view existing Roles
  • **Create/Edit** - Can create and edit Roles
  • **Delete** - Can delete existing Roles
  • + | +| | Projects |
  • **View** - Can view existing Projects
  • **Create** - Can create Projects
  • **Edit** - Can edit existing Projects
  • **Delete** - Can delete existing Projects
  • + | +| | Users |
  • **View** - Can view existing Users
  • **Manage** - Can update/delete users and their role bindings
  • **Invite** - Can invite Users to Harness
  • + | +| | User Groups |
  • **View** - Can view existing User Groups
  • **Manage** - Can create/update/delete User Groups
  • + | +| | Governance Policy Sets |
  • **View** - Can view existing Governance Policy Sets
  • **Create/Edit** - Can create and edit Governance Policy Sets
  • **Delete** - Can delete existing Governance Policy Sets
  • **Evaluate** - Can evaluate Governance Policy Sets
  • + | +| | Variables |
  • **View** - Can view existing Variables
  • **Create/Edit** - Can create and edit Variables
  • **Delete** - Can delete existing Variables
  • + | +| | Templates |
  • **View** - Can view existing Templates
  • **Create/Edit** - Can create and edit Templates
  • **Delete** - Can delete existing Templates
  • **Access** - Can access referenced Templates at runtime
  • + | +| | Governance Policies |
  • **View** - Can view existing Governance Policies
  • **Create/Edit** - Can create and edit Governance Policies
  • **Delete** - Can delete existing Governance Policies
  • + | +| | Dashboards |
  • **View** - Can view existing Dashboards
  • **Manage** - Can manage existing Dashboards
  • + | +| | Delegate Configurations |
  • **View** - Can view existing Delegate Configurations
  • **Create/Edit** - Can create and edit Delegate Configurations
  • **Delete** - Can delete existing Delegate Configurations
  • + | +| | Delegates |
  • **View** - Can view existing Delegates
  • **Create/Edit** - Can create and edit Delegates
  • **Delete** - Can delete existing Delegates
  • + | +| | Secrets |
  • **View** - Can view existing Secrets
  • **Create/Edit** - Can create and edit Secrets
  • **Delete** - Can delete existing Secrets
  • **Access** - Can access referenced Secrets at runtime
  • + | +| | Connectors |
  • **View** - Can view existing Connectors
  • **Create/Edit** - Can create and edit Connectors
  • **Delete** - Can delete existing Connectors
  • **Access** - Can access referenced Connectors at runtime
  • + | +| | Environments |
  • **View** - Can view existing Environments
  • **Create/Edit** - Can create and edit Environments
  • **Delete** - Can delete existing Environments
  • **Access** - Can access referenced Environments at runtime
  • + | +| | ChaosHub |
  • **View** - Can view Chaos experiments and Chaos Scenarios
  • **Create/Edit** - Can connect to ChaosHub Git repo
  • **Delete** - Can disconnect ChaosHub Git repo
  • + | +| | Clusters |
  • **View** - Can view existing Clusters
  • **Create/Edit** - Can create and edit Clusters
  • **Delete** - Can delete existing Clusters
  • + | +| | Agents |
  • **View** - Can view existing Agents
  • **Create/Edit** - Can create and edit Agents
  • **Delete** - Can delete existing Agents
  • + | +| | Repository Certificates |
  • **View** - Can view existing Repository Certificates
  • **Create/Edit** - Can create and edit Repository Certificates
  • **Delete** - Can delete existing Repository Certificates
  • + | +| | Applications |
  • **View** - Can view existing applications
  • **Create/Edit** - Can create and edit Applications
  • **Delete** - Can delete existing Applications
  • **Sync** - Can deploy Applications
  • + | +| | Repositories |
  • **View** - Can view existing Repositories
  • **Create/Edit** - Can create and edit Repositories
  • **Delete** - Can delete existing Repositories
  • + | +| | GnuPG Keys |
  • **View** - Can view existing GnuPG Keys
  • **Create/Edit** - Can create and edit GnuPG Keys
  • **Delete** - Can delete existing GnuPG Keys
  • + | +| | Environment Groups |
  • **View** - Can view existing Environment Groups
  • **Create/Edit** - Can create and edit Environment Groups
  • **Delete** - Can delete existing Environment Groups
  • + | +| | SLO |
  • **View** - Can view existing SLOs
  • **Create/Edit** - Can create and edit SLOs
  • **Delete** - Can delete existing SLOs
  • + | +| | Monitored Services |
  • **View** - Can view existing Monitored Services
  • **Create/Edit** - Can create and edit Monitored Services
  • **Delete** - Can delete existing Monitored Services
  • **Toggle** - Can toggle Monitored Services on/off
  • + | +| | Pipelines |
  • **View** - Can view existing Pipelines
  • **Create/Edit** - Can create and edit Pipelines
  • **Delete** - Can delete existing Pipelines
  • **Execute** - Can execute Pipelines
  • + | +| | Services |
  • **View** - Can view existing Services
  • **Create/Edit** - Can create and edit Services
  • **Delete** - Can delete existing Services
  • **Access** - Can access referenced Services at runtime
  • + | +| | Feature Flags |
  • **Toggle** - Can turn a Feature Flag on or off
  • **Create/Edit** - Can create and edit Feature Flags
  • **Delete** - Can delete existing Feature Flags
  • + | +| | Target Management |
  • **Create/Edit** - Can create and edit Targets and Target Groups to control which variation of a Feature Flag a Target sees
  • **Delete** - Can delete Targets and Target Groups
  • + | +| **Organization Viewer** | Resource Groups |
  • **View** - Can view existing Resource Groups
  • + | +| | Service Accounts |
  • **View** - Can view existing Service Accounts
  • + | +| | Organizations |
  • **View** - Can view existing Organizations
  • + | +| | Roles |
  • **View** - Can view existing Roles
  • + | +| | Projects |
  • **View** - Can view existing Projects
  • + | +| | Users |
  • **View** - Can view existing Users
  • + | +| | User Groups |
  • **View** - Can view existing User Groups
  • + | +| | Governance Policy Sets |
  • **View** - Can view existing Governance Policy Sets
  • + | +| | Variables |
  • **View** - Can view existing Variables
  • + | +| | Templates |
  • **View** - Can view existing Templates
  • + | +| | Governance Policies |
  • **View** - Can view existing Governance Policies
  • + | +| | Dashboards |
  • **View** - Can view existing Dashboards
  • + | +| | Delegate Configurations |
  • **View** - Can view existing Delegate Configurations
  • + | +| | Delegates |
  • **View** - Can view existing Delegates
  • + | +| | Secrets |
  • **View** - Can view existing Secrets
  • + | +| | Connectors |
  • **View** - Can view existing Connectors
  • + | +| | Environments |
  • **View** - Can view existing Environments
  • + | +| | ChaosHub |
  • **View** - Can view Chaos experiments and Chaos Scenarios
  • **Create/Edit** - Can connect to ChaosHub Git repo
  • **Delete** - Can disconnect ChaosHub Git repo
  • + | +| | Clusters |
  • **View** - Can view existing Clusters
  • + | +| | Agents |
  • **View** - Can view existing Agents
  • + | +| | Repository Certificates |
  • **View** - Can view existing Repository Certificates
  • + | +| | Applications |
  • **View** - Can view existing Applications
  • + | +| | Repositories |
  • **View** - Can view existing Repositories
  • + | +| | GnuPG Keys |
  • **View** - Can view existing GnuPG Keys
  • + | +| | Environment Groups |
  • **View** - Can view existing Environment Groups
  • + | +| | SLO |
  • **View** - Can view existing SLOs
  • + | +| | Monitored Services |
  • **View** - Can view existing Monitored Services
  • + | +| | Pipelines |
  • **View** - Can view existing Pipelines
  • + | +| | Services |
  • **View** - Can view existing Services
  • + | +| **GitOps Admin Role** | Clusters |
  • **View** - Can view existing Clusters
  • **Create/Edit** - Can create and edit Clusters
  • **Delete** - Can delete existing Clusters
  • + | +| | Agents |
  • **View** - Can view existing Agents
  • **Create/Edit** - Can create and edit Agents
  • **Delete** - Can delete existing Agents
  • + | +| | Repository Certificates |
  • **View** - Can view existing Repository Certificates
  • **Create/Edit** - Can create and edit Repository Certificates
  • **Delete** - Can delete existing Repository Certificates
  • + | +| | Applications |
  • **View** - Can view existing Applications
  • **Create/Edit** - Can create and edit Applications
  • **Delete** - Can delete existing Applications
  • **Sync** - Can deploy Applications
  • + | +| | Repositories |
  • **View** - Can view existing Repositories
  • **Create/Edit** - Can create and edit Repositories
  • **Delete** - Can delete existing Repositories
  • + | +| | GnuPG Keys |
  • **View** - Can view existing GnuPG Keys
  • **Create/Edit** - Can create and edit GnuPG Keys
  • **Delete** - Can delete existing GnuPG Keys
  • + | +| **Feature Flag Manage Role** | Feature Flags |
  • **Create/Edit** - Can create and edit Feature Flags
  • + | +| | Target Management |
  • **Create/Edit** - Can create and edit Targets and Target Groups to control which variation of a Feature Flag a Target sees
  • + | + +The following table lists permissions corresponding to the default roles at the Project [scope](../1-rbac-in-harness.md#rbac-scope): + + + +| | | | +| --- | --- | --- | +| **Role** | **Resource Type** | **Permissions** | +| **Pipeline Executor** | Resource Groups |
  • **View** - Can view existing Resource Groups
  • + | +| | Roles |
  • **View** - Can view existing Roles
  • + | +| | Projects |
  • **View** - Can view existing Projects
  • + | +| | Users |
  • **View** - Can view existing Users
  • + | +| | User Groups |
  • **View** - Can view existing User Groups
  • + | +| | Variables |
  • **View** - Can view existing Variables
  • + | +| | Templates |
  • **View** - Can view existing Templates
  • **Access** - Can access referenced Templates at runtime
  • + | +| | Secrets |
  • **View** - Can view existing Secrets
  • **Access** - Can access referenced Secrets at runtime
  • + | +| | Connectors |
  • **View** - Can view existing Connectors
  • **Access** - Can access referenced Connectors at runtime
  • + | +| | Environments |
  • **View** - Can view existing Environments
  • **Access** - Can access referenced Environments at runtime
  • + | +| | Environment Groups |
  • **View** - Can view existing Environment Groups
  • + | +| | Pipelines |
  • **View** - Can view existing Pipelines
  • **Execute** - Can execute Pipelines
  • + | +| | Services |
  • **View** - Can view existing Services
  • **Access** - Can access Services
  • + | +| **Project Admin** | Resource Groups |
  • **View** - Can view Resource Groups
  • **Create/Edit** - Can create and edit Resource Groups
  • **Delete** - Can delete Resource Groups
  • + | +| | Service Accounts |
  • **View** - Can view Service Accounts
  • **Create/Edit** - Can create and edit Service Accounts
  • **Delete** - Can delete Service Accounts
  • **Manage** - Can create/update/delete API keys and tokens
  • + | +| | Roles |
  • **View** - Can view existing Roles
  • **Create/Edit** - Can create and edit Roles
  • **Delete** - Can delete existing Roles
  • + | +| | Projects |
  • **View** - Can view existing Projects
  • **Edit** - Can edit existing Projects
  • **Delete** - Can delete existing Projects
  • + | +| | Users |
  • **View** - Can view existing users
  • **Manage** - Can update/delete users and their role bindings
  • **Invite** - Can invite users to Harness
  • + | +| | User Groups |
  • **View** - Can view User Groups
  • **Manage** - Can create/update/delete User Groups
  • + | +| | Governance Policy Sets |
  • **View** - Can view existing Governance Policy Sets
  • **Create/Edit** - Can create and edit Governance Policy Sets
  • **Delete** - Can delete existing Governance Policy Sets
  • **Evaluate** - Can evaluate Governance Policy Sets
  • + | +| | Variables |
  • **View** - Can view existing Variables
  • **Create/Edit** - Can create and edit Variables
  • **Delete** - Can delete existing Variables
  • + | +| | Templates |
  • **View** - Can view existing Templates
  • **Create/Edit** - Can create and edit Templates
  • **Delete** - Can delete existing Templates
  • **Access** - Can access referenced Templates at runtime
  • + | +| | Governance Policies |
  • **View** - Can view existing Governance Policies
  • **Create/Edit** - Can create and edit Governance Policies
  • **Delete** - Can delete existing Governance Policies
  • + | +| | Delegate Configurations |
  • **View** - Can view existing Delegate Configurations
  • **Create/Edit** - Can create and edit Delegate Configurations
  • **Delete** - Can delete existing Delegate Configurations
  • + | +| | Delegates |
  • **View** - Can view existing Delegates
  • **Create/Edit** - Can create and edit Delegates
  • **Delete** - Can delete existing Delegates
  • + | +| | Dashboards |
  • **View** - Can view Dashboards
  • **Manage** - Can manage and edit Dashboards
  • + | +| | Secrets |
  • **View** - Can view existing Secrets
  • **Create/Edit** - Can create and edit Secrets
  • **Delete** - Can delete existing Secrets
  • **Access** - Can access referenced Secrets at runtime
  • + | +| | Connectors |
  • **View** - Can view existing Connectors
  • **Create/Edit** - Can create and edit Connectors
  • **Delete** - Can delete existing Connectors
  • **Access** - Can access referenced Connectors at runtime
  • + | +| | Environments |
  • **View** - Can view existing Environments
  • **Create/Edit** - Can create and edit Environments
  • **Delete** - Can delete existing Environments
  • **Access** - Can access referenced Environments at runtime
  • + | +| | ChaosHub |
  • **View** - Can view Chaos experiments and Chaos Scenarios
  • **Create/Edit** - Can connect to ChaosHub Git repo
  • **Delete** - Can disconnect ChaosHub Git repo
  • + | +| | Clusters |
  • **View** - Can view existing Clusters
  • **Create/Edit** - Can create and edit Clusters
  • **Delete** - Can delete existing Clusters
  • + | +| | Agents |
  • **View** - Can view existing Agents
  • **Create/Edit** - Can create and edit Agents
  • **Delete** - Can delete existing Agents
  • + | +| | Repository Certificates |
  • **View** - Can view existing Repository Certificates
  • **Create/Edit** - Can create and edit Repository Certificates
  • **Delete** - Can delete existing Repository Certificates
  • + | +| | Applications |
  • **View** - Can view existing applications
  • **Create/Edit** - Can create and edit Applications
  • **Delete** - Can delete existing Applications
  • **Sync** - Can deploy Applications
  • + | +| | Repositories |
  • **View** - Can view existing Repositories
  • **Create/Edit** - Can create and edit Repositories
  • **Delete** - Can delete existing Repositories
  • + | +| | GnuPG Keys |
  • **View** - Can view existing GnuPG Keys
  • **Create/Edit** - Can create and edit GnuPG Keys
  • **Delete** - Can delete existing GnuPG Keys
  • + | +| | Environment Groups |
  • **View** - Can view existing Environment Groups
  • **Create/Edit** - Can create and edit Environment Groups
  • **Delete** - Can delete existing Environment Groups
  • + | +| | SLO |
  • **View** - Can view an existing SLO
  • **Create/Edit** - Can create and edit SLO
  • **Delete** - Can delete an existing SLO
  • + | +| | Monitored Services |
  • **View** - Can view existing Monitored Services
  • **Create/Edit** - Can create and edit Monitored Services
  • **Delete** - Can delete existing Monitored Services
  • **Toggle** - Can toggle Monitored Services on or off
  • + | +| | Pipelines |
  • **View** - Can view existing Pipelines
  • **Create/Edit** - Can create and edit Pipelines
  • **Delete** - Can delete existing Pipelines
  • **Execute** - Can execute Pipelines
  • + | +| | Services |
  • **View** - Can view existing Services
  • **Create/Edit** - Can create and edit Services
  • **Delete** - Can delete existing Services
  • **Access** - Can access referenced Services at runtime
  • + | +| | Feature Flags |
  • **Toggle** - Can turn a Feature Flag on or off
  • **Create/Edit** - Can create and edit Feature Flags
  • **Delete** - Can delete existing Feature Flags
  • + | +| | Target Management |
  • **Create/Edit** - Can create and edit Targets and Target Groups to control visibility of variation of a Feature Flag
  • **Delete** - Can delete Targets and Target Groups
  • + | +| **Project Viewer** | Resource Groups |
  • **View** - Can view existing Resource Groups
  • + | +| | Service Accounts |
  • **View** - Can view existing Service Accounts
  • + | +| | Roles |
  • **View** - Can view existing Roles
  • + | +| | Projects |
  • **View** - Can view existing Projects
  • + | +| | Users |
  • **View** - Can view existing Users
  • + | +| | User Groups |
  • **View** - Can view existing User Groups
  • + | +| | Governance Policy Sets |
  • **View** - Can view existing Governance Policy Sets
  • + | +| | Variables |
  • **View** - Can view existing Variables
  • + | +| | Templates |
  • **View** - Can view existing Templates
  • + | +| | Governance Policies |
  • **View** - Can view existing Governance Policies
  • + | +| | Delegate Configurations |
  • **View** - Can view existing Delegate Configurations
  • + | +| | Delegates |
  • **View** - Can view existing Delegates
  • + | +| | Dashboards |
  • **View** - Can view existing Dashboards
  • + | +| | Secrets |
  • **View** - Can view existing Secrets
  • + | +| | Connectors |
  • **View** - Can view existing Connectors
  • + | +| | Environments |
  • **View** - Can view existing Environments
  • + | +| | ChaosHub |
  • **View** - Can view Chaos experiments and Chaos Scenarios
  • + | +| | Clusters |
  • **View** - Can view existing Clusters
  • + | +| | Agents |
  • **View** - Can view existing Agents
  • + | +| | Repository Certificates |
  • **View** - Can view existing Repository Certificates
  • + | +| | Applications |
  • **View** - Can view existing Applications
  • + | +| | Repositories |
  • **View** - Can view existing Repositories
  • + | +| | GnuPG Keys |
  • **View** - Can view existing GnuPG Keys
  • + | +| | Environment Groups |
  • **View** - Can view existing Environment Groups
  • + | +| | SLO |
  • **View** - Can view existing SLOs
  • + | +| | Monitored Services |
  • **View** - Can view existing Monitored Services
  • + | +| | Pipelines |
  • **View** - Can view existing Pipelines
  • + | +| | Services |
  • **View** - Can view existing Services
  • + | +| **GitOps Admin Role** | Clusters |
  • **View** - Can view existing Clusters
  • **Create/Edit** - Can create and edit Clusters
  • **Delete** - Can delete existing Clusters
  • + | +| | Agents |
  • **View** - Can view existing Agents
  • **Create/Edit** - Can create and edit Agents
  • **Delete** - Can delete existing Agents
  • + | +| | Repository Certificates |
  • **View** - Can view existing Repository Certificates
  • **Create/Edit** - Can create and edit Repository Certificates
  • **Delete** - Can delete existing Repository Certificates
  • + | +| | Applications |
  • **View** - Can view existing Applications
  • **Create/Edit** - Can create and edit Applications
  • **Delete** - Can delete existing Applications
  • **Sync** - Can deploy Applications
  • + | +| | Repositories |
  • **View** - Can view existing Repositories
  • **Create/Edit** - Can create and edit Repositories
  • **Delete** - Can delete existing Repositories
  • + | +| | GnuPG Keys |
  • **View** - Can view existing GnuPG Keys
  • **Create/Edit** - Can create and edit GnuPG Keys
  • **Delete** - Can delete existing GnuPG Keys
  • + | +| **Feature Flag Manage Role** | Feature Flags |
  • **Create/Edit** - Can create and edit Feature Flags
  • + | +| | Target Management |
  • **Create/Edit** - Can create and edit Targets and Target Groups to control visibility of variation of a Feature Flag
  • + | + +### Default Resource Groups + +Harness includes the following default Resource Groups at the Account, Org, and Project scope: + +| Scope | Resource Group | Description | +| --- | --- | --- | +| Account | **All Resources Including Child Scopes** | Includes all the resources within the scope of the Account, as well as those within the scope of the Orgs and Projects in this Account. | +| Account | **All Account Level Resources** | Includes all the resources within the scope of the Account. This does not include resources within the scope of Org or Project. | +| Org | **All Resources Including Child Scopes** | Includes all the resources within the scope of the Org, as well as those within the scope of all the Projects created within this Org. | +| Org | **All Organization Level Resources** | Includes all the resources within the scope of the Org. This does not include resources within the scope of Projects. | +| Project | **All Project Level Resources** | Includes all the resources within the scope of the Project. 
| + diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-20.png b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-20.png new file mode 100644 index 00000000000..21ebf2a775b Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-20.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-21.png b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-21.png new file mode 100644 index 00000000000..19791b785ec Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-21.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-22.png b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-22.png new file mode 100644 index 00000000000..59bce161d88 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-22.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-23.png b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-23.png new file mode 100644 index 00000000000..12433406226 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-23.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-24.png b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-24.png new file mode 100644 index 00000000000..42d13b12bda Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-24.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-25.png b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-25.png new file mode 100644 index 00000000000..0edcb649692 Binary files /dev/null and 
b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-25.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-26.png b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-26.png new file mode 100644 index 00000000000..42d13b12bda Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-26.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-27.png b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-27.png new file mode 100644 index 00000000000..0068bee02f0 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-27.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-28.png b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-28.png new file mode 100644 index 00000000000..0e7e6c911dc Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-28.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-29.png b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-29.png new file mode 100644 index 00000000000..243a4b4c02b Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-29.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-30.png b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-30.png new file mode 100644 index 00000000000..f6180e87f1f Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-30.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-31.png b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-31.png new file mode 100644 
index 00000000000..8b8a65742c7 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-api-keys-31.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-service-account-45.png b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-service-account-45.png new file mode 100644 index 00000000000..f2d3b6da956 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-service-account-45.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-service-account-46.png b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-service-account-46.png new file mode 100644 index 00000000000..f31169ac43f Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-service-account-46.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-service-account-47.png b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-service-account-47.png new file mode 100644 index 00000000000..f5ab8db676c Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-service-account-47.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-service-account-48.png b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-service-account-48.png new file mode 100644 index 00000000000..a153dbcb9a5 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-and-manage-service-account-48.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-manage-roles-17.png b/docs/platform/4_Role-Based-Access-Control/static/add-manage-roles-17.png new file mode 100644 index 00000000000..178c8a9cd17 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-manage-roles-17.png differ diff --git 
a/docs/platform/4_Role-Based-Access-Control/static/add-manage-roles-18.png b/docs/platform/4_Role-Based-Access-Control/static/add-manage-roles-18.png new file mode 100644 index 00000000000..13de1c613f4 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-manage-roles-18.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-manage-roles-19.png b/docs/platform/4_Role-Based-Access-Control/static/add-manage-roles-19.png new file mode 100644 index 00000000000..4ce0adfb147 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-manage-roles-19.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-32.png b/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-32.png new file mode 100644 index 00000000000..5dcabd72b66 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-32.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-33.png b/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-33.png new file mode 100644 index 00000000000..5158736cec3 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-33.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-34.png b/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-34.png new file mode 100644 index 00000000000..e5dbf191731 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-34.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-35.png b/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-35.png new file mode 100644 index 00000000000..16d90ad5998 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-35.png differ diff --git 
a/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-36.png b/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-36.png new file mode 100644 index 00000000000..05293539209 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-36.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-37.png b/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-37.png new file mode 100644 index 00000000000..b0f1dc2b086 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-37.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-38.png b/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-38.png new file mode 100644 index 00000000000..3c9cd030456 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-38.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-39.png b/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-39.png new file mode 100644 index 00000000000..e22ce323b98 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-39.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-40.png b/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-40.png new file mode 100644 index 00000000000..cbb4580b2bc Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-resource-groups-40.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-49.png b/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-49.png new file mode 100644 index 00000000000..fed59899c72 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-49.png differ diff --git 
a/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-50.png b/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-50.png new file mode 100644 index 00000000000..cb253ddc302 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-50.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-51.png b/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-51.png new file mode 100644 index 00000000000..b0f42095de8 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-51.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-52.png b/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-52.png new file mode 100644 index 00000000000..5a251c3dd95 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-52.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-53.png b/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-53.png new file mode 100644 index 00000000000..b8aeb2117df Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-53.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-54.png b/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-54.png new file mode 100644 index 00000000000..cf07441cd90 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-54.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-55.png b/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-55.png new file mode 100644 index 00000000000..0513d56794e Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-55.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-56.png 
b/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-56.png new file mode 100644 index 00000000000..a32b1d354ee Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-user-groups-56.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-users-11.png b/docs/platform/4_Role-Based-Access-Control/static/add-users-11.png new file mode 100644 index 00000000000..bc93e096f1d Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-users-11.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-users-12.png b/docs/platform/4_Role-Based-Access-Control/static/add-users-12.png new file mode 100644 index 00000000000..566a92315ee Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-users-12.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-users-13.png b/docs/platform/4_Role-Based-Access-Control/static/add-users-13.png new file mode 100644 index 00000000000..feabaf14b24 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-users-13.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-users-14.png b/docs/platform/4_Role-Based-Access-Control/static/add-users-14.png new file mode 100644 index 00000000000..29d179fa420 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-users-14.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-users-15.png b/docs/platform/4_Role-Based-Access-Control/static/add-users-15.png new file mode 100644 index 00000000000..edfdff39568 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/add-users-15.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/add-users-16.png b/docs/platform/4_Role-Based-Access-Control/static/add-users-16.png new file mode 100644 index 00000000000..55d975d28b3 Binary files /dev/null and 
b/docs/platform/4_Role-Based-Access-Control/static/add-users-16.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/attribute-based-access-control-05.png b/docs/platform/4_Role-Based-Access-Control/static/attribute-based-access-control-05.png new file mode 100644 index 00000000000..614da2b2810 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/attribute-based-access-control-05.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/attribute-based-access-control-06.png b/docs/platform/4_Role-Based-Access-Control/static/attribute-based-access-control-06.png new file mode 100644 index 00000000000..55ea7266268 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/attribute-based-access-control-06.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/attribute-based-access-control-07.png b/docs/platform/4_Role-Based-Access-Control/static/attribute-based-access-control-07.png new file mode 100644 index 00000000000..3dbca749726 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/attribute-based-access-control-07.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/attribute-based-access-control-08.png b/docs/platform/4_Role-Based-Access-Control/static/attribute-based-access-control-08.png new file mode 100644 index 00000000000..1df414540f1 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/attribute-based-access-control-08.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/attribute-based-access-control-09.png b/docs/platform/4_Role-Based-Access-Control/static/attribute-based-access-control-09.png new file mode 100644 index 00000000000..cf2c7420b56 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/attribute-based-access-control-09.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/attribute-based-access-control-10.png 
b/docs/platform/4_Role-Based-Access-Control/static/attribute-based-access-control-10.png new file mode 100644 index 00000000000..e53ba87c781 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/attribute-based-access-control-10.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/harness-default-user-groups-57.png b/docs/platform/4_Role-Based-Access-Control/static/harness-default-user-groups-57.png new file mode 100644 index 00000000000..d1e1e223f9d Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/harness-default-user-groups-57.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/harness-default-user-groups-58.png b/docs/platform/4_Role-Based-Access-Control/static/harness-default-user-groups-58.png new file mode 100644 index 00000000000..fc3ab7d7938 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/harness-default-user-groups-58.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/harness-default-user-groups-59.png b/docs/platform/4_Role-Based-Access-Control/static/harness-default-user-groups-59.png new file mode 100644 index 00000000000..f6a4337a62a Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/harness-default-user-groups-59.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/harness-default-user-groups-60.png b/docs/platform/4_Role-Based-Access-Control/static/harness-default-user-groups-60.png new file mode 100644 index 00000000000..04f9e22d712 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/harness-default-user-groups-60.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/rbac-in-harness-00.png b/docs/platform/4_Role-Based-Access-Control/static/rbac-in-harness-00.png new file mode 100644 index 00000000000..3c0a4f79bf6 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/rbac-in-harness-00.png 
differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/rbac-in-harness-01.png b/docs/platform/4_Role-Based-Access-Control/static/rbac-in-harness-01.png new file mode 100644 index 00000000000..820740536c9 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/rbac-in-harness-01.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/rbac-in-harness-02.png b/docs/platform/4_Role-Based-Access-Control/static/rbac-in-harness-02.png new file mode 100644 index 00000000000..14ab82a361f Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/rbac-in-harness-02.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/rbac-in-harness-03.png b/docs/platform/4_Role-Based-Access-Control/static/rbac-in-harness-03.png new file mode 100644 index 00000000000..04776084202 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/rbac-in-harness-03.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/rbac-in-harness-04.png b/docs/platform/4_Role-Based-Access-Control/static/rbac-in-harness-04.png new file mode 100644 index 00000000000..6b0b52e65fe Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/rbac-in-harness-04.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/set-up-rbac-pipelines-41.png b/docs/platform/4_Role-Based-Access-Control/static/set-up-rbac-pipelines-41.png new file mode 100644 index 00000000000..2cebad5d180 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/set-up-rbac-pipelines-41.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/set-up-rbac-pipelines-42.png b/docs/platform/4_Role-Based-Access-Control/static/set-up-rbac-pipelines-42.png new file mode 100644 index 00000000000..f3963d8e511 Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/set-up-rbac-pipelines-42.png differ diff --git 
a/docs/platform/4_Role-Based-Access-Control/static/set-up-rbac-pipelines-43.png b/docs/platform/4_Role-Based-Access-Control/static/set-up-rbac-pipelines-43.png new file mode 100644 index 00000000000..f7ef136881c Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/set-up-rbac-pipelines-43.png differ diff --git a/docs/platform/4_Role-Based-Access-Control/static/set-up-rbac-pipelines-44.png b/docs/platform/4_Role-Based-Access-Control/static/set-up-rbac-pipelines-44.png new file mode 100644 index 00000000000..79c04e0254f Binary files /dev/null and b/docs/platform/4_Role-Based-Access-Control/static/set-up-rbac-pipelines-44.png differ diff --git a/docs/platform/5_Notifications/_category_.json b/docs/platform/5_Notifications/_category_.json new file mode 100644 index 00000000000..d6a9f19cb1d --- /dev/null +++ b/docs/platform/5_Notifications/_category_.json @@ -0,0 +1 @@ +{"label": "Notifications", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Notifications"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "y9pmm3ig37"}} \ No newline at end of file diff --git a/docs/platform/5_Notifications/add-smtp-configuration.md b/docs/platform/5_Notifications/add-smtp-configuration.md new file mode 100644 index 00000000000..59dad6796c3 --- /dev/null +++ b/docs/platform/5_Notifications/add-smtp-configuration.md @@ -0,0 +1,126 @@ +--- +title: Add SMTP Configuration +description: Explains how to configure SMTP for email-based deployment notifications, approvals, and tracking. +# sidebar_position: 2 +helpdocs_topic_id: d43r71g20s +helpdocs_category_id: y9pmm3ig37 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can send email notifications to Harness User Groups using your SMTP accounts. 
+ +Emails can be sent automatically in response to Pipeline and stage events like Pipeline Failed and Stage Succeeded. + +Your Harness SaaS account includes an SMTP server, so you don't need to add one of your own. + +If you are using the Harness On-Prem offering, then you will need to add an SMTP server to your Harness account. This topic explains how to configure an SMTP server with your Harness account and send email notifications according to different Pipeline events. + + +### Before you begin + +* [User Group Notification Preferences](../4_Role-Based-Access-Control/4-add-user-groups.md#option-notification-preferences) + +### Limitations + +Configuring your SMTP server is required only if you are using [Harness On-Prem](https://docs.harness.io/article/tb4e039h8x-harness-on-premise-overview), or if you wish to use your own SMTP server instead of the Harness SaaS default SMTP option. + +### Step 1: Add SMTP Configuration + +In your Harness account, go to **Account Settings**. + +Click **Account Resources**. + +![](./static/add-smtp-configuration-00.png) +Click **SMTP Configuration** and then click **Setup**. + +The SMTP Configuration settings appear. + +![](./static/add-smtp-configuration-01.png) +### Step 2: Details + +Enter a **Name** for your SMTP Configuration. + +In **Host**, enter your SMTP server's URL. + +Enter the port number on which the SMTP server is listening (typically, `25`). + +Select **Enable SSL** for secure connections (SSL/TLS). + +Select **Start TLS** to enable SMTP over TLS, where the connection is upgraded to SSL/TLS using `STARTTLS`. + +In **From Address**, enter the email address from which Harness will send notification emails. + +Click **Continue**. + +### Step 3: Credentials + +Enter the username and password for the email account. + +![](./static/add-smtp-configuration-02.png) +Click **Save and Continue**. + +### Step 4: Test Connection + +In **To**, enter the email address to which you want to send notifications. 
+
+Enter a **Subject** and **Body** for the email.
+
+Click **Test**.
+
+![](./static/add-smtp-configuration-03.png)
+Click **Continue** after the test is successful.
+
+SMTP is configured for your account.
+
+![](./static/add-smtp-configuration-04.png)
+### Option: Send Notifications for a User Group using Email
+
+In your **Account**/**Organization**/**Project**, click **Access Control**.
+
+Click **User Groups**.
+
+Select the User Group to which you want to add notification preferences.
+
+In **Notification Preferences**, select **Email/Alias**.
+
+Enter the email address to which you want to send email notifications.
+
+You can also send email notifications to all the members of this user group by selecting **Send email to all users part of this group**.
+
+![](./static/add-smtp-configuration-05.png)
+Click **Save**.
+
+![](./static/add-smtp-configuration-06.png)
+### Option: Send Notification for a Pipeline
+
+You can send Pipeline event notifications using email. Event notifications are set up using the **Notify** option in your Pipeline.
+
+In Harness, go to your Pipeline and click **Notify**.
+
+Click **Notifications**. The **New Notification** settings appear.
+
+![](./static/add-smtp-configuration-07.png)
+Enter a name for your notification rule and click **Continue**.
+
+Select the Pipeline Events for which you want to send notifications. Click **Continue**.
+
+![](./static/add-smtp-configuration-08.png)
+In **Notification Method**, select **Email**.
+
+Enter the email addresses to which you want to send the notifications.
+
+Select the User Groups that you want to notify.
+
+Click **Test**.
+
+Once the test is successful, click **Finish**.
+
+![](./static/add-smtp-configuration-09.png)
+Your Notification Rule is now listed in Notifications. After this, users will receive email notifications when the events listed in the Notification Rule occur.
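+
+The SMTP Configuration fields above map directly onto a standard SMTP client. As a rough illustration (not Harness internals), the sketch below assembles and sends a notification email with Python's `smtplib`; the host, port, addresses, and credentials are hypothetical placeholders.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical values mirroring the SMTP Configuration fields above.
SMTP_HOST = "smtp.example.com"        # Host
SMTP_PORT = 587                       # Port (587 is common for STARTTLS; 25 is the default)
FROM_ADDRESS = "noreply@example.com"  # From Address

def build_notification(to_addr: str, subject: str, body: str) -> EmailMessage:
    """Assemble the notification email from the configured fields."""
    msg = EmailMessage()
    msg["From"] = FROM_ADDRESS
    msg["To"] = to_addr
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_notification(msg: EmailMessage, username: str, password: str) -> None:
    """Send over SMTP, upgrading the connection with STARTTLS (the Start TLS option)."""
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as server:
        server.starttls()                 # upgrade to TLS before authenticating
        server.login(username, password)  # Step 3: Credentials
        server.send_message(msg)

msg = build_notification("team@example.com", "Pipeline Failed", "Stage build_and_test failed.")
print(msg["Subject"])  # → Pipeline Failed
```

Calling `send_notification(msg, username, password)` would then deliver the message through the configured server.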
+ +### See also + +* [Send Notifications using Slack](send-notifications-using-slack.md) +* [Send Notifications to Microsoft Teams](send-notifications-to-microsoft-teams.md) + diff --git a/docs/platform/5_Notifications/send-notifications-to-microsoft-teams.md b/docs/platform/5_Notifications/send-notifications-to-microsoft-teams.md new file mode 100644 index 00000000000..d0180197d53 --- /dev/null +++ b/docs/platform/5_Notifications/send-notifications-to-microsoft-teams.md @@ -0,0 +1,88 @@ +--- +title: Send Notifications to Microsoft Teams +description: This topic explains how to send user group notifications using Microsoft Teams. +# sidebar_position: 2 +helpdocs_topic_id: xcb28vgn82 +helpdocs_category_id: y9pmm3ig37 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness notifies your User Groups of events in Pipelines, and general alerts. + +You can integrate your Harness User Group with Microsoft Teams and receive notifications in Teams channels. + +Setup is a simple process of generating a Webhook in Microsoft Teams and adding it to a Harness User Group's Notification Preferences. Let's get started. + +### Before you begin + +* See [User Group Notification Preferences](../4_Role-Based-Access-Control/4-add-user-groups.md#option-notification-preferences) + +### Review: Requirements + +We assume you have a Microsoft Teams administrator account. + +### Step 1: Create a Connector for Microsoft Teams Channel + +You create a channel connector in Microsoft Teams to generate the Webhook Harness needs for notification. + +In Microsoft Teams, right-click the channel where you want to send notifications, and select **Connectors**. + +![](./static/send-notifications-to-microsoft-teams-10.png) +In **Connectors**, locate **Incoming Webhook**, and click **Configure.** + +![](./static/send-notifications-to-microsoft-teams-11.png) +In **Incoming Webhook**, enter a name, such as **Harness**. 
+
+Right-click and save the Harness icon from here:
+
+![](./static/send-notifications-to-microsoft-teams-12.png)
+Click **Upload Image** and add the Harness icon you downloaded.
+
+Next, you'll create the Webhook URL needed by Harness.
+
+### Step 2: Generate Channel Webhook
+
+In your Microsoft Teams Connector, click **Create**. The Webhook URL is generated.
+
+![](./static/send-notifications-to-microsoft-teams-13.png)
+Click the copy button to copy the Webhook URL, and then click **Done**.
+
+The channel indicates that the Connector was set up.
+
+![](./static/send-notifications-to-microsoft-teams-14.png)
+### Step 3: Add Webhook to Harness User Group Notification Preferences
+
+In your **Account**/**Organization**/**Project**, click **Access Control**.
+
+Click **User Groups**.
+
+Select the User Group to which you want to add notification preferences.
+
+In **Notification Preferences**, select **Microsoft Teams Webhook URL**.
+
+Paste the Webhook into **Microsoft Teams Webhook URL** or add it as an [Encrypted Text](../6_Security/2-add-use-text-secrets.md) in Harness and reference it here.
+
+For example, if you have a text secret with the identifier `teamswebhookURL`, you can reference it like this:
+
+
+```
+<+secrets.getValue("teamswebhookURL")>
+```
+You can reference a secret within the Org scope using an expression with `org`:
+
+
+```
+<+secrets.getValue("org.your-secret-Id")>
+```
+You can reference a secret within the Account scope using an expression with `account`:
+
+
+```
+<+secrets.getValue("account.your-secret-Id")>
+```
+Click **Save**.
+
+![](./static/send-notifications-to-microsoft-teams-15.png)
+Now your Microsoft Teams channel will be used to notify this User Group of alerts.
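+
+For reference, an incoming webhook simply accepts an HTTPS POST with a JSON body. The sketch below builds a minimal payload and a request with Python's standard library; the card format shown is the generic Office 365 connector "MessageCard", not necessarily the exact payload Harness sends, and the webhook URL is a placeholder.

```python
import json
import urllib.request

# Placeholder for the Webhook URL generated in Step 2.
WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/your-webhook-id"

def build_payload(title: str, text: str) -> dict:
    """A minimal Office 365 connector card; Teams renders this in the channel."""
    return {
        "@type": "MessageCard",
        "@context": "http://schema.org/extensions",
        "title": title,
        "text": text,
    }

def post_to_teams(payload: dict) -> None:
    """POST the JSON payload to the incoming webhook."""
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()

payload = build_payload("Pipeline Failed", "Stage deploy failed in project example.")
print(payload["@type"])  # → MessageCard
```

Calling `post_to_teams(payload)` against a real webhook URL posts the card to the channel.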
+ diff --git a/docs/platform/5_Notifications/send-notifications-using-slack.md b/docs/platform/5_Notifications/send-notifications-using-slack.md new file mode 100644 index 00000000000..1a2b3ee956d --- /dev/null +++ b/docs/platform/5_Notifications/send-notifications-using-slack.md @@ -0,0 +1,58 @@ +--- +title: Send Notifications Using Slack +description: This topic explains how to send user group notifications using slack. +# sidebar_position: 2 +helpdocs_topic_id: h5n2oj8y5y +helpdocs_category_id: y9pmm3ig37 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can notify your User Group members using Slack as one of the notification channels. To do this, add a Slack Incoming Webhook into your Harness User Groups' [Notification Preferences](../4_Role-Based-Access-Control/4-add-user-groups.md#option-notification-preferences). + +Then you can add your User Group to a Notification Strategy and receive alert info in Slack. + + +### Before you begin + +* See [User Group Notification Preferences](../4_Role-Based-Access-Control/4-add-user-groups.md#option-notification-preferences) + +### Visual Summary + +Adding a Slack channel to your Harness User Groups **Notification Preferences** is as simple as pasting in a Slack Webhook: + +![](./static/send-notifications-using-slack-16.png) +### Step 1: Create a Slack App and Webhook for Your Channel + +Follow the steps in Slack documentation for creating a Slack app, selecting your channel, and creating a webhook: [Sending messages using Incoming Webhooks](https://api.slack.com/messaging/webhooks). + +When you are done, you'll have a webhook that looks something like this: + +![](./static/send-notifications-using-slack-17.png) +Copy the Webhook. + +### Step 2: Add the Webhook to the User Group Notification Preferences + +1. In your **Account**/**Organization**/**Project** click Access Control. +2. Click **User Groups**. +3. Select the User Group to which you want to add notification preferences. +4. 
In **Notification Preferences**, select **Slack Webhook URL**. +5. Paste the Webhook into **Slack Webhook URL**  or add it as an [Encrypted Text](../6_Security/2-add-use-text-secrets.md) in Harness and reference it here. +For example, if you have a text secret with the identifier slackwebhookURL, you can reference it like this: +``` +<+secrets.getValue("slackwebhookURL")>​​ +``` + +You can reference a secret within the Org scope using an expression with `org`:​ +``` +<+secrets.getValue("org.your-secret-Id")>​ +``` + +You can reference a secret within the Account scope using an expression with `account`:​ +``` +<+secrets.getValue("account.your-secret-Id")>​ +``` +6. Click **Save**.![](./static/send-notifications-using-slack-18.png) + +Now your Slack channel will be used to notify this User Group of alerts. + diff --git a/docs/platform/5_Notifications/static/add-smtp-configuration-00.png b/docs/platform/5_Notifications/static/add-smtp-configuration-00.png new file mode 100644 index 00000000000..a1599fdbba1 Binary files /dev/null and b/docs/platform/5_Notifications/static/add-smtp-configuration-00.png differ diff --git a/docs/platform/5_Notifications/static/add-smtp-configuration-01.png b/docs/platform/5_Notifications/static/add-smtp-configuration-01.png new file mode 100644 index 00000000000..e583d88ead3 Binary files /dev/null and b/docs/platform/5_Notifications/static/add-smtp-configuration-01.png differ diff --git a/docs/platform/5_Notifications/static/add-smtp-configuration-02.png b/docs/platform/5_Notifications/static/add-smtp-configuration-02.png new file mode 100644 index 00000000000..782fe82df9d Binary files /dev/null and b/docs/platform/5_Notifications/static/add-smtp-configuration-02.png differ diff --git a/docs/platform/5_Notifications/static/add-smtp-configuration-03.png b/docs/platform/5_Notifications/static/add-smtp-configuration-03.png new file mode 100644 index 00000000000..76886eff045 Binary files /dev/null and 
b/docs/platform/5_Notifications/static/add-smtp-configuration-03.png differ diff --git a/docs/platform/5_Notifications/static/add-smtp-configuration-04.png b/docs/platform/5_Notifications/static/add-smtp-configuration-04.png new file mode 100644 index 00000000000..19fc8efab9a Binary files /dev/null and b/docs/platform/5_Notifications/static/add-smtp-configuration-04.png differ diff --git a/docs/platform/5_Notifications/static/add-smtp-configuration-05.png b/docs/platform/5_Notifications/static/add-smtp-configuration-05.png new file mode 100644 index 00000000000..fce451ac6fc Binary files /dev/null and b/docs/platform/5_Notifications/static/add-smtp-configuration-05.png differ diff --git a/docs/platform/5_Notifications/static/add-smtp-configuration-06.png b/docs/platform/5_Notifications/static/add-smtp-configuration-06.png new file mode 100644 index 00000000000..6cb60c93beb Binary files /dev/null and b/docs/platform/5_Notifications/static/add-smtp-configuration-06.png differ diff --git a/docs/platform/5_Notifications/static/add-smtp-configuration-07.png b/docs/platform/5_Notifications/static/add-smtp-configuration-07.png new file mode 100644 index 00000000000..c1c9774f9d1 Binary files /dev/null and b/docs/platform/5_Notifications/static/add-smtp-configuration-07.png differ diff --git a/docs/platform/5_Notifications/static/add-smtp-configuration-08.png b/docs/platform/5_Notifications/static/add-smtp-configuration-08.png new file mode 100644 index 00000000000..43079ce8347 Binary files /dev/null and b/docs/platform/5_Notifications/static/add-smtp-configuration-08.png differ diff --git a/docs/platform/5_Notifications/static/add-smtp-configuration-09.png b/docs/platform/5_Notifications/static/add-smtp-configuration-09.png new file mode 100644 index 00000000000..fbf7457f63b Binary files /dev/null and b/docs/platform/5_Notifications/static/add-smtp-configuration-09.png differ diff --git a/docs/platform/5_Notifications/static/send-notifications-to-microsoft-teams-10.png 
b/docs/platform/5_Notifications/static/send-notifications-to-microsoft-teams-10.png new file mode 100644 index 00000000000..b09b9a1a116 Binary files /dev/null and b/docs/platform/5_Notifications/static/send-notifications-to-microsoft-teams-10.png differ diff --git a/docs/platform/5_Notifications/static/send-notifications-to-microsoft-teams-11.png b/docs/platform/5_Notifications/static/send-notifications-to-microsoft-teams-11.png new file mode 100644 index 00000000000..8debeb4c717 Binary files /dev/null and b/docs/platform/5_Notifications/static/send-notifications-to-microsoft-teams-11.png differ diff --git a/docs/platform/5_Notifications/static/send-notifications-to-microsoft-teams-12.png b/docs/platform/5_Notifications/static/send-notifications-to-microsoft-teams-12.png new file mode 100644 index 00000000000..41828e9963e Binary files /dev/null and b/docs/platform/5_Notifications/static/send-notifications-to-microsoft-teams-12.png differ diff --git a/docs/platform/5_Notifications/static/send-notifications-to-microsoft-teams-13.png b/docs/platform/5_Notifications/static/send-notifications-to-microsoft-teams-13.png new file mode 100644 index 00000000000..0847e1273c4 Binary files /dev/null and b/docs/platform/5_Notifications/static/send-notifications-to-microsoft-teams-13.png differ diff --git a/docs/platform/5_Notifications/static/send-notifications-to-microsoft-teams-14.png b/docs/platform/5_Notifications/static/send-notifications-to-microsoft-teams-14.png new file mode 100644 index 00000000000..dd0bf6b7183 Binary files /dev/null and b/docs/platform/5_Notifications/static/send-notifications-to-microsoft-teams-14.png differ diff --git a/docs/platform/5_Notifications/static/send-notifications-to-microsoft-teams-15.png b/docs/platform/5_Notifications/static/send-notifications-to-microsoft-teams-15.png new file mode 100644 index 00000000000..055f0232947 Binary files /dev/null and b/docs/platform/5_Notifications/static/send-notifications-to-microsoft-teams-15.png differ 
diff --git a/docs/platform/5_Notifications/static/send-notifications-using-slack-16.png b/docs/platform/5_Notifications/static/send-notifications-using-slack-16.png new file mode 100644 index 00000000000..a399c5cafce Binary files /dev/null and b/docs/platform/5_Notifications/static/send-notifications-using-slack-16.png differ diff --git a/docs/platform/5_Notifications/static/send-notifications-using-slack-17.png b/docs/platform/5_Notifications/static/send-notifications-using-slack-17.png new file mode 100644 index 00000000000..70856351ad5 Binary files /dev/null and b/docs/platform/5_Notifications/static/send-notifications-using-slack-17.png differ diff --git a/docs/platform/5_Notifications/static/send-notifications-using-slack-18.png b/docs/platform/5_Notifications/static/send-notifications-using-slack-18.png new file mode 100644 index 00000000000..a399c5cafce Binary files /dev/null and b/docs/platform/5_Notifications/static/send-notifications-using-slack-18.png differ diff --git a/docs/platform/6_Security/1-harness-secret-manager-overview.md b/docs/platform/6_Security/1-harness-secret-manager-overview.md new file mode 100644 index 00000000000..c3d37220c2e --- /dev/null +++ b/docs/platform/6_Security/1-harness-secret-manager-overview.md @@ -0,0 +1,68 @@ +--- +title: Harness Secrets Management Overview +description: Harness includes a built-in Secrets Management feature that enables you to store encrypted secrets, such as access keys, and use them in your Harness account. Some key points about Secrets Management… +# sidebar_position: 2 +helpdocs_topic_id: hngrlb7rd6 +helpdocs_category_id: sy6sod35zi +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness includes a built-in Secrets Management feature that enables you to store encrypted secrets, such as access keys, and use them in your Harness account. Some key points about Secrets Management: + +* Secrets are always stored in encrypted form and decrypted when they are needed. 
+* Harness Manager does not have access to your key management system; only the Harness Delegate, which sits in your private network, has access to it. Harness never makes secrets management accessible publicly. This adds an important layer of security.
+
+### Before you begin
+
+* See [Harness Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts)
+
+### Visual Summary
+
+You can choose to use your own secrets management solution, or the built-in Harness Secrets Manager. This diagram shows how Harness handles secrets:
+
+![](./static/harness-secret-manager-overview-44.png)
+### Harness Secrets Management Process Overview
+
+Harness sends only encrypted data to the Secrets Manager, as follows:
+
+1. Your browser sends data over HTTPS to Harness Manager.
+2. Harness Manager relays the encrypted data to the Harness Delegate, also over HTTPS.
+3. The Delegate exchanges a key pair with the Secrets Manager over an encrypted connection.
+4. The Harness Delegate uses the encrypted key and the encrypted secret and then discards them. The keys never leave the Delegate.
+
+Any secrets manager requires a running Harness Delegate to encrypt and decrypt secrets. Any Delegate that references a secret requires direct access to the Secrets Manager.
+
+You can manage your secrets in Harness using either a Key Management Service or a third-party Secrets Manager.
+
+#### Using Key Management Services
+
+Google Cloud Key Management Service is the default Secrets Manager in Harness and is named Harness Secrets Manager Google KMS.
+
+The Key Management Service (Google Cloud KMS or AWS KMS) only stores the key. Harness uses [envelope encryption](https://cloud.google.com/kms/docs/envelope-encryption) to encrypt and decrypt secrets. The encrypted secret and the encrypted Data Encryption Key (used for envelope encryption) are stored in the Harness database.
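+
+Envelope encryption, as described above, means the secret is encrypted with a per-secret data encryption key (DEK), and the DEK itself is "wrapped" by the key held in the KMS; only the ciphertext and the wrapped DEK are stored. The toy sketch below illustrates that flow with a stand-in XOR "cipher" purely to show the shape — real implementations use AES through the KMS provider's SDK, and every name here is illustrative.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for a real cipher (AES in practice); XOR is its own inverse."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_secret(plaintext: bytes, kms_key: bytes) -> tuple:
    """Envelope-encrypt: a fresh DEK encrypts the secret; the KMS key wraps the DEK."""
    dek = secrets.token_bytes(32)            # data encryption key, generated per secret
    ciphertext = xor_cipher(plaintext, dek)  # secret encrypted with the DEK
    wrapped_dek = xor_cipher(dek, kms_key)   # DEK encrypted with the KMS-held key
    return ciphertext, wrapped_dek           # only these two values are stored

def decrypt_secret(ciphertext: bytes, wrapped_dek: bytes, kms_key: bytes) -> bytes:
    dek = xor_cipher(wrapped_dek, kms_key)   # the KMS unwraps the DEK
    return xor_cipher(ciphertext, dek)       # the DEK decrypts the secret

kms_key = secrets.token_bytes(32)            # this key never leaves the KMS
stored = encrypt_secret(b"my-access-key", kms_key)
print(decrypt_secret(*stored, kms_key))      # → b'my-access-key'
```

Note that the plaintext DEK exists only transiently during encryption and decryption, which mirrors the step above where the Delegate uses the keys and then discards them.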
+
+If you are using a KMS, rotation of keys is not supported by Harness, and you might lose access to your secrets if the older version of the key is removed from your KMS.
+
+#### Using Third-Party Secrets Managers
+
+You can also use third-party Secrets Managers, for example, HashiCorp Vault, Azure Key Vault, and AWS Secrets Manager.
+
+These Secrets Managers store the key, perform encryption and decryption, and also store the secrets (encrypted key pair). Neither the keys nor the secrets are stored in the Harness database. Only a reference to the secret is stored in the Harness database.
+
+#### Secrets in Harness Community and Self-Managed Enterprise Edition Accounts
+
+In Community and Self-Managed Enterprise Edition accounts, Harness uses a random-key secrets store as the Harness Secrets Manager.
+
+Once you have installed Self-Managed Enterprise Edition, [Add a Secrets Manager](./5-add-secrets-manager.md). By default, Self-Managed Enterprise Edition installations use the local Harness MongoDB for the default Harness Secrets Manager. This is not recommended.
+
+Harness does not currently support migrating secrets from the random-key secrets store. If you add secrets here, you will need to recreate them in any custom secrets manager you configure later.
+
+All Harness secrets managers require a running Harness Delegate to encrypt and decrypt secrets.
+
+If you created a Harness trial account, a Delegate is typically provisioned by Harness, and the default Harness Secrets Manager performs encryption/decryption.
+
+#### Harness Secrets and Harness Git Experience
+
+When you set up [Harness Git Experience](../10_Git-Experience/git-experience-overview.md), you select the Connectivity Mode for Git syncing. You have two options:
+
+* **Connect Through Manager:** Harness SaaS will connect to your Git repo whenever you make a change and Git and Harness sync.
+* **Connect Through Delegate:** Harness will make all connections using the Harness Delegate.
This option is frequently used for Self-Managed Enterprise Edition, but it is also used for Harness SaaS. See [Harness Self-Managed Enterprise Edition Overview](https://docs.harness.io/article/tb4e039h8x-harness-on-premise-overview).
+
+If you select **Connect Through Manager**, the Harness Manager decrypts the secrets you have set up in the Harness Secrets Manager.
+
+This is different from **Connect Through Delegate**, where only the Harness Delegate, which sits in your private network, has access to your key management system.
+
diff --git a/docs/platform/6_Security/10-add-google-kms-secrets-manager.md b/docs/platform/6_Security/10-add-google-kms-secrets-manager.md
new file mode 100644
index 00000000000..6b9b848c22c
--- /dev/null
+++ b/docs/platform/6_Security/10-add-google-kms-secrets-manager.md
@@ -0,0 +1,110 @@
+---
+title: Add Google KMS as a Harness Secret Manager
+description: This topic explains steps to add Google KMS as a Secret Manager.
+# sidebar_position: 2
+helpdocs_topic_id: cyyym9tbqt
+helpdocs_category_id: 48wnu4u0tj
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+You can use Google [Cloud Key Management Service](https://cloud.google.com/security-key-management) (Cloud KMS) as a Harness Secret Manager. Once Google KMS is added as a Secrets Manager, you can create encrypted secrets in Google KMS and use them in your Harness account.
+
+For details on Harness Secret Managers, see [Harness Secret Manager Overview](../6_Security/1-harness-secret-manager-overview.md).
+
+This topic describes how to add a Google KMS Secret Manager in Harness.
+
+### Before you begin
+
+* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts)
+* [Harness Secret Manager Overview](../6_Security/1-harness-secret-manager-overview.md)
+
+### Add a Secret Manager
+
+This topic assumes you have a Harness Project set up.
If not, see [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md).
+
+You can add a Connector from any module in your Project in Project setup, or in your Organization or Account Resources.
+
+In **Connectors**, click **Connector**.
+
+In **Secret Managers**, click **GCP KMS**.
+
+![](./static/add-google-kms-secrets-manager-63.png)
+The **GCP Key Management Service** settings appear.
+
+![](./static/add-google-kms-secrets-manager-64.png)
+In **Name**, enter a name for your Secret Manager. You will use this name to select this Secret Manager when adding or selecting a secret.
+
+Enter a description for your Secret Manager.
+
+Enter tags for your Secret Manager.
+
+Click **Continue**.
+
+### Obtain Google Cloud Symmetric Key
+
+To obtain the values for the Details page, you'll need a Google Cloud Symmetric Key.
+
+In the [Google Cloud Console](https://console.cloud.google.com/), select your project.
+
+Select **Security** > **Key Management**.
+
+Select or create a key ring, then select or create a key in that key ring.
+
+To create resources in this or the next step, see Google Cloud's [Creating Symmetric Keys](https://cloud.google.com/kms/docs/creating-keys) topic. Open the Actions menu (⋮), then click **Copy Resource Name**.
+
+![](./static/add-google-kms-secrets-manager-65.png)
+
+A reference to the key is now on your clipboard.
+
+Paste the reference into an editor. You can now copy and paste its substrings into each of the Harness Secret Manager's **Details** settings as shown below.
+
+![](./static/add-google-kms-secrets-manager-66.png)
+
+### Attach Service Account Key (Credentials) File
+
+Next, you will export your Google Cloud service account key and attach it to the **Details** page in Harness.
+
+First, you need to grant a Principal the Cloud KMS CryptoKey Encrypter/Decrypter (`cloudkms.cryptoKeyEncrypterDecrypter`) role.
+
+In Google Cloud Console, go to the IAM page.
+
+Locate the Principal you want to use, and click **Edit**.
+
+In **Edit permissions**, add the Cloud KMS CryptoKey Encrypter/Decrypter role and click **Save**.
+
+![](./static/add-google-kms-secrets-manager-67.png)
+
+See Google's [Permissions and roles](https://cloud.google.com/kms/docs/reference/permissions-and-roles) and Using Cloud IAM with KMS topics.
+
+Next, you'll select the Service Account for that Principal and export its Key file.
+
+In the Google Cloud Console, in IAM & Admin, go to Service Accounts.
+
+Scroll to the service account for the Principal you gave the Cloud KMS CryptoKey Encrypter/Decrypter role. If no service account is present, create one.
+
+Open your service account's Actions ⋮ menu, then select **Manage keys**.
+
+Select **ADD KEY** > **Create new key**.
+
+![](./static/add-google-kms-secrets-manager-68.png)
+
+In the resulting Create private key dialog, select JSON, create the key, and download it to your computer.
+
+Return to the Secret Manager's Details page in Harness.
+
+Under GCP KMS Credentials File, click **Create or Select a Secret**. You can create a new [File Secret](./3-add-file-secrets.md) and upload the key file you just exported from Google Cloud.
+
+![](./static/add-google-kms-secrets-manager-69.png)
+
+Click **Save** and then **Continue**.
+
+### Setup Delegates
+
+In **Delegates Setup**, use [**Selectors**](../2_Delegates/delegate-guide/select-delegates-with-selectors.md#option-select-a-delegate-for-a-connector-using-tags) to select any specific **Delegates** that you want this Connector to use.
Click **Save and Continue.**
+
+### Test Connection
+
+In **Connection Test**, click **Finish** after your connection is successful.
+
+![](./static/add-google-kms-secrets-manager-70.png)
\ No newline at end of file
diff --git a/docs/platform/6_Security/11-add-a-google-cloud-secret-manager.md b/docs/platform/6_Security/11-add-a-google-cloud-secret-manager.md
new file mode 100644
index 00000000000..9e51c08dca3
--- /dev/null
+++ b/docs/platform/6_Security/11-add-a-google-cloud-secret-manager.md
@@ -0,0 +1,147 @@
+---
+title: Add a Google Cloud Secret Manager
+description: Topic explaining how to add a Google Cloud Secret Manager.
+# sidebar_position: 2
+helpdocs_topic_id: nzqofaebno
+helpdocs_category_id: 48wnu4u0tj
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Currently, this feature is behind the feature flag `PL_ENABLE_GOOGLE_SECRET_MANAGER_IN_NG`. Contact Harness Support to enable the feature. You can use your [Google Cloud Secret Manager](https://cloud.google.com/secret-manager/docs) as a secret manager in Harness.
+
+You can link your Google Cloud Secret Manager to Harness and use it to store any sensitive data you use in Harness, including secrets.
+
+Harness also supports [Google KMS as a secrets manager](../6_Security/10-add-google-kms-secrets-manager.md). This topic explains how to add a GCP Secrets Manager in Harness.
+
+### Before you begin
+
+* See [Harness Key Concepts](https://docs.harness.io/article/hv2758ro4e)
+* See [Secrets Management Overview](../6_Security/1-harness-secret-manager-overview.md)
+
+### Limitations
+
+* Inline secrets saved to GCP Secrets Manager must follow the naming limitations of Google Cloud Secret Manager. Secret names can only contain letters, numbers, dashes (-), and underscores (\_).
+* The maximum size for encrypted files saved to Google Cloud Secret Manager is 64KiB.
+* Inline secrets saved to Google Cloud Secret Manager have a region assignment by default.
An automatic assignment is the same as not selecting the **Regions** setting when creating a secret in Google Cloud Secret Manager. +* Harness does not support Google Cloud Secret Manager labels at this time. +* **Versions for reference secrets:** + + Any modification to the content of a secret stored by Harness in Google Cloud Secret Manager creates a new version of that secret. + + When you delete a secret present in Google Cloud Secret Manager from Harness, the entire secret is deleted and not just a version. +* You cannot update the name of an inline or referenced secret stored in the Google Cloud Secret Manager using the Harness Secret Manager. +* Harness does not support changing an inline secret to a reference secret or vice versa in Harness. + +### Supported Platforms and Technologies + +See [Supported Platforms and Technologies](https://docs.harness.io/article/1e536z41av5y-supported-platforms). + +### Permissions + +* Make sure you have Create/Edit permissions for Secrets. +* Make sure you have Create/Edit permissions for Connectors. +* The GCP Service Account you use in the **Google Secrets Manager Credentials File** should have the following IAM roles: + + `roles/secretmanager.admin` or `roles/secretmanager.secretAccessor` and `roles/secretmanager.secretVersionManager`. + +See [Managing secrets](https://cloud.google.com/secret-manager/docs/access-control) from Google. + +### Step 1: Add a secret manager + +This topic assumes you have a Harness Project set up. If not, see [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md). + +You can add a connector from any module in your project, in the Project setup, or in your organization or account resources. + +This topic explains the steps to add a Google Cloud Secrets Manager to the account [scope](../4_Role-Based-Access-Control/1-rbac-in-harness.md#rbac-scope). + +1. In your Harness Account, click **Account Settings**. +2. Click **Account Resources**. +3. 
Click **Connectors** and then click **New Connector**. +4. In Secret Managers, click **GCP Secrets Manager**. +The GCP Secrets Manager settings appear.![](./static/add-a-google-cloud-secret-manager-39.png) + +### Step 2: Add overview + +1. In **Name**, enter a name for your secret manager. +2. You can choose to update the **Id** or let it be the same as your secret manager's name. For more information, see [Entity Identifier Reference](../20_References/entity-identifier-reference.md). +3. Enter the **Description** for your secret manager. +4. Enter **Tags** for your secret manager. +5. Click **Continue.** + +### Step 3: Attach a Google Secret Manager credentials file + +You must export your Google Cloud service account key and add it as an [Encrypted File Secret](./3-add-file-secrets.md) in Harness. + +1. In the Google Cloud console, select **IAM & admin** > **Service account**. +2. Scroll to the service account you want to use. If no service account is present, create one. +3. Grant this service account the Google Cloud Secret Manager permissions needed. +To do this, edit the service account and click **Permissions**. Click **Roles**, and then add the roles needed. +See [Managing secrets](https://cloud.google.com/secret-manager/docs/access-control) from Google. +4. Open your service account's Actions ⋮ menu, then select **Create key.** +5. In the resulting **Create private key** dialog, select the **JSON** option, create the key, and download it to your computer. +6. Go back to Harness. +7. In **Google Secrets Manager Credentials File**, select the encrypted file you just added in Harness.![](./static/add-a-google-cloud-secret-manager-40.png) + +You can also create a new [File Secret](./3-add-file-secrets.md) here and add the Google Cloud service account key that you downloaded. +1. Click **Continue**. + +### Step 4: Setup delegates + +1. 
In **Delegates** **Setup**, enter [**Selectors**](../2_Delegates/delegate-guide/select-delegates-with-selectors.md#option-select-a-delegate-for-a-connector-using-tags) for specific delegates that you want to allow to connect to this connector. +2. Click **Save and** **Continue**. + +### Step 5: Test connection + +Once the Test Connection succeeds, click **Finish**. You can now see the connector in **Connectors**. + +### Add an inline secret to the GCP Secrets Manager + +Let us add an inline text secret to the GCP Secrets Manager we just created. + +1. In your Harness account, click **Account Settings**. +2. Click **Account Resources** and then click **Secrets**. +3. Click **New Secret** and then click **Text**. +The **Add new Encrypted Text** settings appear. +4. Select the GCP Secrets Manager you just created. +5. Enter a **Name** for your secret. +6. The default selection is **Inline Secret Value**. +7. Enter the **Secret Value**. +8. Select **Configure Region** to add the region(s) for your secret. +9. Click **Save**.![](./static/add-a-google-cloud-secret-manager-41.png) + +### Add a secret reference to the GCP Secrets Manager + +Let us add a secret reference to the GCP Secrets Manager we just created. + +1. In your Harness account, click **Account Settings**. +2. Click **Account Resources** and then click **Secrets**. +3. Click **New Secret** and then click **Text**. +The **Add new Encrypted Text** settings appear. +4. Select the GCP Secrets Manager you just created. +5. Enter a **Name** for your secret. +6. Select **Reference Secret**. +7. Enter your secret identifier in **Reference Secret Identifier**. +8. In **Version**, enter the version of your secret that you want to reference. +You can either enter a version number like `1`, `2`, or enter `latest` to reference the latest version. +9. 
Click **Save**.![](./static/add-a-google-cloud-secret-manager-42.png) + +### Add an encrypted file secret to the GCP Secrets Manager + +Let us add an encrypted file secret to the GCP Secrets Manager we just created. + +1. In your Harness account, click **Account Settings**. +2. Click **Account Resources** and then click **Secrets**. +3. Click **New Secret** and then click **File**. +The **Add new Encrypted File** settings appear. +4. Select the GCP Secrets Manager you just created. +5. Enter a **Name** for your secret. +6. In **Select File**, browse, and select your file. +7. Select **Configure Region** to add the region(s) for your secret. +8. Click **Save**.![](./static/add-a-google-cloud-secret-manager-43.png) + +### See also + +* [Add Google KMS as a Harness Secret Manager](../6_Security/10-add-google-kms-secrets-manager.md) +* [Add an AWS KMS Secret Manager](../6_Security/7-add-an-aws-kms-secrets-manager.md) +* [Add an AWS Secret Manager](../6_Security/6-add-an-aws-secret-manager.md) +* [Add an Azure Key Vault Secret Manager](../6_Security/8-azure-key-vault.md) +* [Add a HashiCorp Vault Secret Manager](../6_Security/12-add-hashicorp-vault.md) + diff --git a/docs/platform/6_Security/12-add-hashicorp-vault.md b/docs/platform/6_Security/12-add-hashicorp-vault.md new file mode 100644 index 00000000000..da3d7b8fd17 --- /dev/null +++ b/docs/platform/6_Security/12-add-hashicorp-vault.md @@ -0,0 +1,258 @@ +--- +title: Add a HashiCorp Vault Secret Manager +description: This document explains how to store and use encrypted secrets (such as access keys) by adding a HashiCorp Vault Secrets Manager. +# sidebar_position: 2 +helpdocs_topic_id: s65mzbyags +helpdocs_category_id: 48wnu4u0tj +helpdocs_is_private: false +helpdocs_is_published: true +--- + +To store and use encrypted secrets (such as access keys), you can add a HashiCorp Vault Secret Manager. 
+ +### Before you begin + +* [Learn Harness' Key Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Harness Secret Manager Overview](../6_Security/1-harness-secret-manager-overview.md) +* Make sure that the Harness Delegate can connect to the Vault URL. +* Make sure you have View and Create/Edit permissions for secrets.​ + +### Step 1: Add a Secret Manager + +This topic assumes you have a Harness Project set up. If not, see [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md). + +You can add a Connector at Project/Organization/Account scope. To do this, go to Project setup, Organization, or Account Resources. + +In **Connectors**, click **Connector**. + +In **Secret Managers**, click **HashiCorp Vault**. The HashiCorp Vault Secret Manager settings appear. + +![](./static/add-hashicorp-vault-19.png) +### Step 2: Overview + +Enter a **Name** for your secret manager. + +You can choose to update the **ID** or let it be the same as your secret manager's name. For more information, see [Entity Identifier Reference](../20_References/entity-identifier-reference.md). + +Enter **Description** for your secret manager. + +Enter **Tags** for your secret manager. + +Click **Save and Continue.** + +Enter **Vault URL**. + +Enter **Base Secret Path**. The Base Secret Path is used for writing secrets. When Harness reads secrets, it uses the full path. + +For more information, see [Vault documentation](https://www.vaultproject.io/docs/index.html). + +Select the **Authentication** Type. + +![](./static/add-hashicorp-vault-20.png) +### Option: App Role + +The App Role option enables the Harness Vault Secret Manager to authenticate with Vault-defined roles. + +The Vault AppRole method allows you to define multiple roles corresponding to different applications, each with different levels of access. The application's **App Role ID** and **Secret ID** are used for authentication with Vault. 
You need these to log in and fetch a Vault token.
+
+To assign a **Secret ID**, you can create a new [**Secret**](./2-add-use-text-secrets.md) or choose an existing one.
+
+The Secret should not expire and should remain valid until you manually revoke it. You need this Secret to generate new tokens when the older tokens expire.
+
+Harness uses the App Role ID and Secret ID that you supply to fetch a Vault Auth Token dynamically at configured intervals. This interval is set in Renewal Interval.
+
+For more information, see [RoleID](https://www.vaultproject.io/docs/auth/approle.html#roleid) and [Authenticating Applications with HashiCorp Vault AppRole](https://www.hashicorp.com/blog/authenticating-applications-with-vault-approle) from HashiCorp.
+
+If you encounter errors, setting [token\_num\_uses](https://www.vaultproject.io/api-docs/auth/approle#token_num_uses) to `0` can often resolve problems.
+
+#### Permissions
+
+The Vault AppRole ID or the Periodic Token used in either of the authentication options must have an ACL policy attached so that Harness can use it. Typically, you create the policy first, then create the AppRole or Periodic Token and attach the policy.
+
+Note for the policy examples below: if you've created a Read-only Vault Secret Manager, that secret manager needs only read and list permissions on Vault. It does not need, and cannot assume, create, update, or delete permissions.
+
+If the secrets are in the Secret Engine named "secret", the policy must have the following permissions.
+
+
+```
+path "secret/*" {
+  capabilities = ["create", "update", "list", "read", "delete"]
+}
+```
+If the secrets are in a subfolder, such as `secret/harness`, the policy looks like this:
+
+
+```
+path "secret/harness/*" {
+  capabilities = ["create", "list", "read", "update", "delete"]
+}
+path "secret/harness" {
+  capabilities = ["list", "read"]
+}
+```
+These examples apply only to a **v1** secret engine.
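For a **v2** (versioned) secret engine, Vault routes data reads and writes through a `data/` path segment and list operations through `metadata/`, so a roughly equivalent policy would look like this (a sketch based on Vault's kv-v2 path layout; the `secret/harness` engine and folder names are the examples used above, so adjust them to your setup):

```
path "secret/data/harness/*" {
  capabilities = ["create", "update", "read", "delete"]
}
path "secret/metadata/harness/*" {
  capabilities = ["list", "read"]
}
```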
If you are planning to use a secret engine with version 2 (a versioned secret engine), the policies are different, as explained in the [Vault kv-v2 documentation](https://www.vaultproject.io/docs/secrets/kv/kv-v2). Review that page to understand the correct permissions required for your use case.
+
+If the Vault Secret Manager needs to renew tokens, the following permissions are needed:
+
+
+```
+path "auth/token/renew-self" {
+  capabilities = ["read", "update"]
+}
+```
+### Option: Token
+
+For Harness, the **Token** option requires [periodic tokens](https://www.vaultproject.io/docs/concepts/tokens#periodic-tokens) (tokens that have renewal options).
+
+To create a periodic token, make sure to specify a period in the token creation command:
+
+
+```
+vault token create -policy=harness -period=768h
+```
+Next, use the new token with Harness. To do this, perform the following steps:
+
+* Click **Create or Select a Secret**.![](./static/add-hashicorp-vault-21.png)
+* The secret settings page appears. Here you can either **Create a new** [**Secret**](./2-add-use-text-secrets.md) or **Select an existing secret**. If creating a new Secret, enter the token that you created in the **Secret Value** field.![](./static/add-hashicorp-vault-22.png)
+
+For detailed steps on creating a secret, see [Add Text Secrets](./2-add-use-text-secrets.md).
+
+If you have already added a Secret with your token, you can select it as shown below:
+
+![](./static/add-hashicorp-vault-23.png)
+* Click **Apply**.
+
+If you want to verify the renewal manually, use the command:
+
+
+```
+vault token lookup
+```
+### Option: Vault Agent
+
+This option enables the Harness Vault Secret Manager to authenticate with the Auto-Auth functionality of the [Vault Agent](https://www.vaultproject.io/docs/agent/autoauth).
+
+To authenticate with Vault Agent, make sure you have configured it on the required environment, with entries for **method** and **sinks**.
For more information, see [Vault Agent](https://www.vaultproject.io/docs/agent).
+
+In the **Sink Path** field, enter any sink path you have in your Vault Agent Configuration. This is the path of the encrypted file with tokens. The specified Delegate reads this file through the file protocol (file://).
+
+![](./static/add-hashicorp-vault-24.png)
+### Option: AWS Auth
+
+This option provides an automated mechanism to retrieve a Vault token for IAM principals and AWS EC2 instances. With this method, you do not need to manually install or supply security-sensitive credentials such as tokens, usernames, or passwords.
+
+The AWS Auth method supports two authentication types:
+
+* IAM
+* EC2
+
+Harness recommends the IAM type for authentication because it is more versatile and follows standard practice.
+
+To authenticate with AWS Auth, make sure you have configured Vault with entries for **Header**, **Role**, and **Region**. For more information, see [AWS Auth Method](https://www.vaultproject.io/docs/auth/aws#iam-auth-method).
+
+You must add the **Server ID Header** from Vault as a [Harness Encrypted Text Secret](./2-add-use-text-secrets.md) and select it for **Server Id Header** in Harness.
+
+![](./static/add-hashicorp-vault-25.png)
+In **Role**, enter the role you have configured in Vault.
+
+![](./static/add-hashicorp-vault-26.png)
+In **Region**, enter the AWS Region for the Secret Manager.
+
+### Option: Kubernetes Auth
+
+This option uses a Kubernetes Service Account Token to authenticate with Vault. With this method of authentication, you can easily add a Vault token into a Kubernetes Pod.
+
+To authenticate with Kubernetes Auth, make sure you have created a role in Vault under `auth/kubernetes/role`. This role authorizes the "vault-auth" service account in the default namespace and grants it the default policy.
This is also where you'll find the **service account name** and **namespace** that will be used to access the Vault endpoint.
+
+![](./static/add-hashicorp-vault-27.png)
+For more information, see [Kubernetes Auth Method](https://www.vaultproject.io/docs/auth/kubernetes#configuration).
+
+In **Role Name**, enter the role you have configured in Vault.
+
+![](./static/add-hashicorp-vault-28.png)
+In **Service Account Token Path**, enter the JSON Web Token (JWT) path. This is the path where the JWT token is mounted. The default path of this token is `/var/run/secrets/kubernetes.io/serviceaccount/token`.
+
+For more information, see [Service Account Tokens](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens).
+
+### Step 2: Select Secret Engine and Version
+
+Once you have entered the required fields, you can choose to **Fetch Engines** or **Manually Configure Engine**.
+
+#### Fetch Engines
+
+If you want Harness to automatically fetch secret engines, include this read permission for **sys/mounts** in the ACL policy.
+
+
+```
+path "sys/mounts" {
+  capabilities = ["read"]
+}
+```
+Click **Fetch Engines**.
+
+Harness will populate the Secret Engine drop-down with the list of engines and their versions.
+
+Select the engine you want to use.
+
+#### Manually Configure Engine
+
+If you don’t want to or cannot add the ACL policy (with read permission for sys/mounts) in the Secret Manager, perform the following steps:
+
+1. Identify the engine version of the Secret Manager in Vault.
+2. In **Secret Engine Name**, enter the name of the Secret Engine.
+3. In **Secret Engine Version**, enter the engine version.
+
+You cannot change the Secret Engine later. Harness blocks editing this setting because there might be secrets that are created or referenced under this secret engine, and changing the engine might break references to those secrets.
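If you are unsure which engine name and version to enter manually, you can list the mounted secret engines with the Vault CLI (a sketch; this requires a token whose policy can read `sys/mounts`):

```
# The Options column of the detailed listing shows the engine version,
# e.g. map[version:2] for a kv-v2 engine
vault secrets list -detailed
```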
+
+### Step 3: Renewal Interval (minutes)
+
+In **Renew Interval**, you can (optionally) enter how often the Harness Delegate reloads the Vault access token.
+
+![](./static/add-hashicorp-vault-29.png)
+You can expect a delay during the Vault renewal. A periodic job runs to check whether a renewal is needed, resulting in a delay of no more than two minutes.
+
+### Review: Validating Non-Read Only Vault Secret Managers
+
+To validate a non-read-only Vault Secret Manager, Harness creates a dummy secret in the secret engine.
+
+The path of the secret is as follows:
+
+v2 Secret Engine:
+
+`<secret_engine_name>/data/<base_path>/harness_vault_validation#value`
+
+v1 Secret Engine:
+
+`<secret_engine_name>/<base_path>/harness_vault_validation#value`
+
+Creating this secret can fail for various reasons:
+
+1. Authentication with **Vault** using the Token/App Role is not successful.
+2. The following **permission** is not available in any of the policies attached to the Token/App Role. Without it, Harness cannot fetch the list of secret engines from your Vault and shows a single Secret Engine option named **"secret"** with version 2, which might be incorrect for your setup. Make sure to add the permission to a policy attached to the Token/App Role as follows:
+
+
+```
+path "sys/mounts" {
+  capabilities = ["read"]
+}
+```
+3. The policy attached to the Token/AppRole does not provide the **write** permission in the specified path. Make sure you update the policies and permissions.
+
+### Step 4: Read-only Vault
+
+If required by your organization's security practices, select the **Read-only Vault** check box. This selection authorizes Harness to read secrets from Vault, but not to create or manage secrets within Vault.
+
+![](./static/add-hashicorp-vault-30.png)
+Once you have filled out the required fields, click **Finish**.
+
+##### Read-only Limitations
+
+If you select **Read-only Vault**, there are several limitations on the resulting Harness Vault Secret Manager.
+
+A read-only Harness Vault Secret Manager:
+
+* Cannot be used in the **Add Encrypted File** dialog.
+* Cannot create inline secrets in the **Add Encrypted Text** modal.
+* Cannot migrate (deprecate) its secrets to another secret manager.
+* Cannot have secrets migrated to it from another secret manager.
+
+### Step 5: Test Connection
+
+Once the Test Connection succeeds, click **Finish**. You can now see the Connector in **Connectors**.
+
+**Important: Test Connection fails.** Harness tests connections by creating a dummy secret in the Secret Manager or Vault. For the Test Connection to function successfully, make sure you have Create permission for secrets.
+The Test Connection fails if you do not have Create permission. However, Harness still creates the Connector for you. You may use this Connector to read secrets if you have View permissions.
\ No newline at end of file
diff --git a/docs/platform/6_Security/13-disable-harness-secret-manager.md b/docs/platform/6_Security/13-disable-harness-secret-manager.md
new file mode 100644
index 00000000000..41a74f82d42
--- /dev/null
+++ b/docs/platform/6_Security/13-disable-harness-secret-manager.md
@@ -0,0 +1,66 @@
+---
+title: Disable Built-In Secret Manager
+description: Disable Harness built-in Secret Manager.
+# sidebar_position: 2
+helpdocs_topic_id: p8rcsfra01
+helpdocs_category_id: 48wnu4u0tj
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Currently, this feature is behind the Feature Flag `DISABLE_HARNESS_SM`. Contact [Harness Support](mailto:support@harness.io) to enable the feature.
+
+Harness includes a built-in Secret Management feature that enables you to store encrypted secrets, such as access keys, and use them in your Harness Accounts, Organizations, or Projects.
+
+You can choose to disable the Harness built-in Secret Manager at any point and use any other [Secret Manager](./5-add-secrets-manager.md) to store secrets.
+
+This topic explains how to disable the built-in Harness Secret Manager.
+ +### Before you begin + +* [Harness Secret Management Overview](../6_Security/1-harness-secret-manager-overview.md) +* [Add a Secret Manager](../6_Security/5-add-secrets-manager.md) +* Make sure you have Account Admin permissions to disable the built-in Secret Manager. +For more information, see [API Permissions Reference](../4_Role-Based-Access-Control/ref-access-management/api-permissions-reference.md). + +### Limitations + +* When you disable the built-in Secret Manager, Harness does not move your existing secrets to another secret manager. +* Before you disable Harness built-in secret manager, you must have at least one Secret Manager in the Account scope. + +### Review: Harness Built-In Secret Manager + +Harness always stores secrets in encrypted form and decrypts them when they are needed. Harness never makes secrets accessible publicly. + +By default, Harness provides a built-in Secret Manager that you can use to store your secrets or you can create your own Secret Manager and use that instead. Every new organization or project that you create comes with a built-in Secret Manager by default. The default Secret Manager in Harness is Google Cloud Key Management Service, which is called Harness Built-in Secret Manager. + +The Key Management Service only stores the key. Harness uses [envelope encryption](https://cloud.google.com/kms/docs/envelope-encryption) to encrypt and decrypt secrets. The encrypted secret and the encrypted Data Encryption Key (used for envelope encryption) are stored in the Harness database.  + +### Step: Disable Built-In Secret Manager + +In your Harness Account, go to **Account Settings**. + +Click **Connectors**. + +![](./static/disable-harness-secret-manager-37.png) +Select **Disable default Harness Secret Manager** and then click **Apply.** + +![](./static/disable-harness-secret-manager-38.png) +The built-in Secret Manager is no longer available in any of the following: + +* List of Connectors inside Account/Org/Project Resources. 
+* List of Secret Managers populated while creating new secrets.
+* Any new Organization or Project that you create.
+
+You can, however, continue to access the secrets created using this Secret Manager before it was disabled.
+
+If you create a new Organization or Project after disabling the Harness Built-In Secret Manager, you'll need to either create a new Secret Manager or refer to the Secrets generated in the Account before the built-in Secret Manager was disabled. This also means that if you try to set up a new Secret Manager in any scope, the credentials for it must already be stored in the Account scope as secrets.
+
+To disable the Harness Secret Manager, you must have another Secret Manager created at the Account scope with its credentials saved as a secret in the built-in Secret Manager.
+
+You can re-enable the built-in Secret Manager at any time. When you re-enable it, it is available again in the Organizations and Projects created before it was disabled. Organizations and Projects created while it was disabled do not get the built-in Secret Manager when you re-enable it.
+ +### See also + +* [Add Google KMS as a Harness Secret Manager](../6_Security/10-add-google-kms-secrets-manager.md) +* [Add an AWS KMS Secret Manager](../6_Security/7-add-an-aws-kms-secrets-manager.md) +* [Add an AWS Secret Manager](../6_Security/6-add-an-aws-secret-manager.md) +* [Add an Azure Key Vault Secret Manager](../6_Security/8-azure-key-vault.md) +* [Add a HashiCorp Vault Secret Manager](../6_Security/12-add-hashicorp-vault.md) + diff --git a/docs/platform/6_Security/14-reference-existing-secret-manager-secrets.md b/docs/platform/6_Security/14-reference-existing-secret-manager-secrets.md new file mode 100644 index 00000000000..e4a5103f189 --- /dev/null +++ b/docs/platform/6_Security/14-reference-existing-secret-manager-secrets.md @@ -0,0 +1,86 @@ +--- +title: Reference Existing Secret Manager Secrets +description: Topic to explain how to reference existing secret manager secrets. +# sidebar_position: 2 +helpdocs_topic_id: 60lbrjdasw +helpdocs_category_id: 48wnu4u0tj +helpdocs_is_private: false +helpdocs_is_published: true +--- + +If you already have secrets created in a secrets manager such as HashiCorp Vault or AWS Secrets Manager, you do not need to re-create the existing secrets in Harness. + +Harness does not query the secrets manager for existing secrets, but you can create a secret in Harness that references an existing secret in HashiCorp Vault or AWS Secrets Manager. No new secret is created in those providers. If you delete the secret in Harness, it does not delete the secret in the provider. 
+
+### Before you begin
+
+* See [AWS KMS Secret Manager](../6_Security/7-add-an-aws-kms-secrets-manager.md)
+* See [AWS Secrets Manager](../6_Security/6-add-an-aws-secret-manager.md)
+* See [Azure Key Vault Secret Manager](../6_Security/8-azure-key-vault.md)
+* See [HashiCorp Vault Secret Manager](../6_Security/12-add-hashicorp-vault.md)
+
+### Option: Vault secrets
+
+You can create a Harness secret that refers to an existing Vault secret using a path and key, such as `/path/secret_key#my_key`.
+
+![](./static/reference-existing-secret-manager-secrets-60.png)
+In the above example, `/path` is the pre-existing path, `secret_key` is the secret name, and `my_key` is the key used to look up the secret value.
+
+Do not prepend the Vault secrets engine to the path. In the above example, if the secret (`/path/secret_key#my_key`) had been generated by a Vault secrets engine named `harness-engine`, it would reside in the full path `/harness-engine/path/secret_key#my_key`. However, in the **Value** field, you would enter only `/path/secret_key#my_key`.
+
+This Harness secret is simply a reference pointing to an existing Vault secret. Deleting this Harness secret does not delete the Vault secret it refers to.
+
+You can reference pre-existing Vault secrets in the Harness YAML editor.
+
+### Option: HashiCorp Vault Secrets
+
+Currently, this feature is behind the feature flag `PL_ACCESS_SECRET_DYNAMICALLY_BY_PATH`. Contact Harness Support to enable the feature.
+
+For HashiCorp Vault, you can also use expressions to reference pre-existing secrets in Vault using a fully-qualified path, such as `hashicorpvault://LocalVault/foo/bar/mysecret#mykey`.
+
+With this kind of referencing, you don't need to pre-create secrets.
+
+The scheme `hashicorpvault://` is needed to distinguish a Vault secret from other secret references. It is followed by the identifier of the Vault secret manager.
+
+For example, if you have a HashiCorp Vault connector with the identifier `myVault` in the Account scope and a secret with the name `example` present in the Vault path `/harness/testpath` with the following values:
+
+
+```
+{
+ "key1": "value one",
+ "key2": "value two"
+}
+```
+You can reference the value of `key1` for the secret `example` using the following expression:
+
+
+```
+<+secrets.getValue("account.hashicorpvault://myVault/harness/testpath/example#key1")>
+```
+For a HashiCorp Vault connector at the Org scope, use the following expression:
+
+
+```
+<+secrets.getValue("org.hashicorpvault://myVault/harness/testpath/example#key1")>
+```
+For a HashiCorp Vault connector at the Project scope, use the following expression:
+
+
+```
+<+secrets.getValue("hashicorpvault://myVault/harness/testpath/example#key1")>
+```
+To dynamically reference secrets in HashiCorp Vault, make sure you use the expression in the following format:
+`<+secrets.getValue()>`
+
+### Option: AWS Secrets Manager secrets
+
+You can create a Harness secret that refers to an existing secret in AWS Secrets Manager using the name of the secret, and a prefix if needed. For example, `mySecret`.
+
+![](./static/reference-existing-secret-manager-secrets-61.png)
+#### Referencing secret keys
+
+In AWS Secrets Manager, your secrets are specified as key-value pairs, using a JSON collection:
+
+![](./static/reference-existing-secret-manager-secrets-62.png)
+To reference a specific key in your Harness secret, add the key name after the secret name, like `secret_name#key_name`. In the above example, the secret is named **example4docs**. To reference the **example1** key, you would enter `example4docs#example1`.
+
+### Option: Azure Key Vault secrets
+
+You can create a Harness secret that refers to an existing secret in Azure Key Vault, using that secret's name (for example: `azureSecret`). You can also specify the secret's version (for example: `azureSecret/05`).
+
diff --git a/docs/platform/6_Security/2-add-use-text-secrets.md b/docs/platform/6_Security/2-add-use-text-secrets.md
new file mode 100644
index 00000000000..6034398a3f4
--- /dev/null
+++ b/docs/platform/6_Security/2-add-use-text-secrets.md
@@ -0,0 +1,171 @@
+---
+title: Add and Reference Text Secrets
+description: This topic shows how to create a text secret and reference it in Harness Application entities.
+# sidebar_position: 2
+helpdocs_topic_id: osfw70e59c
+helpdocs_category_id: 48wnu4u0tj
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+You can add a text secret to the Secret Manager and use it in your resources like Pipelines and Connectors.
+
+This topic describes how to add a text secret in Harness.
+
+### Step 1: Add Text Secret
+
+This topic assumes you have a Harness Project set up. If not, see [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md).
+
+Secrets can be added inline while setting up a Connector or other setting, and they can also be set up in the Account/Organization/Project resources.
+
+These steps are for setting up a secret in the Account/Organization/Project resources. To do this, go to Project setup, Organization, or Account Resources.
+
+Click **Secrets**.
+
+Click **Secret** and select **Text.**
+
+![](./static/add-use-text-secrets-45.png)
+The **Add new Encrypted Text** settings appear.
+
+![](./static/add-use-text-secrets-46.png)
+Select the **Secret Manager** you will use to encrypt this secret.
+
+In **Secret Name**, enter a name for the encrypted text. This is the name you will use to reference the text elsewhere in your resources.
+
+#### Option: Inline Secret Value
+
+In **Inline Secret Value**, enter a value for the encrypted text.
+
+#### Option: Reference Secret
+
+You can create a Harness secret that refers to an existing secret by selecting **Reference Secret** and using that secret's name.
+
+You can reference existing secrets in the following types of Secret Managers:
+
+* Azure Key Vault
+* HashiCorp Vault
+
+Enter **Description** for your secret.
+
+Enter **Tags** for your secret.
+
+Click **Save.**
+
+### Step 2: Use the Encrypted Text in Connectors
+
+All of the passwords and keys used in Harness Connectors are stored as Encrypted Text secrets in Harness.
+
+You can either create the Encrypted Text secret first and then select it in the Connector, or you can create/select it from the Connector by clicking **Create or Select a Secret**:
+
+![](./static/add-use-text-secrets-47.png)![](./static/add-use-text-secrets-48.png)
+You can also edit it in the Connector.
+
+![](./static/add-use-text-secrets-49.png)
+### Step 3: Reference the Encrypted Text by Identifier
+
+For an Encrypted Text secret that's been scoped to a Project, you reference the secret using its identifier in the expression: `<+secrets.getValue("your_secret_Id")>`.
+
+![](./static/add-use-text-secrets-50.png)
+Always reference a secret in an expression using its identifier. Names will not work.
+
+For example, if you have a text secret with the identifier `doc-secret`, you can reference it in a Shell Script step like this:
+
+
+```
+echo "text secret is: " <+secrets.getValue("doc-secret")>
+```
+You can reference a secret at the Org scope using an expression with `org`:
+
+
+```
+<+secrets.getValue("org.your-secret-Id")>
+```
+You can reference a secret at the Account scope using an expression with `account`:
+
+
+```
+<+secrets.getValue("account.your-secret-Id")>
+```
+Avoid using `$` in your secret value. If your secret value includes `$`, you must use single quotes when you use the expression in a script.
+For example, if your secret in the Project scope has the value `my$secret` and the identifier `doc-secret`, use single quotes when you echo it:
+`echo '<+secrets.getValue("doc-secret")>'`
+
+### Review: Invalid Characters in Secret Names
+
+The following characters aren't allowed in the names of secrets:
+
+
+```
+ ~ ! @ # $ % ^ & * ' " ? / < > , ;
+```
+
+### Review: Secrets in Outputs
+
+When a secret is displayed in an output, Harness substitutes the secret value with asterisks so that the secret value is masked. Harness replaces each character in the value with an asterisk (\*).
+
+For example, here the secret values referenced in a Shell Script step are replaced with `*****`:
+
+![](./static/add-use-text-secrets-51.png)
+If you accidentally use a very common value in your secret, like whitespace, the `*` substitution might appear in multiple places in the output.
+
+If you see an output like this, review your secret and fix the error.
+
+### Review: Secret Scope
+
+When creating secrets, it's important to understand their scope in your Harness account.
+
+A user can only create a secret according to the scope set by their Harness User permissions.
+
+For example, when you create a new project or a new organization, a Harness Secret Manager is automatically scoped to that level.
+
+### Review: Line breaks and Shell-Interpreted Characters
+
+A text secret can be referenced in a script and written to a file as well. For example, here is a secret decoded from [base64](https://linux.die.net/man/1/base64) and written to a file:
+
+`echo <+secrets.getValue("my_secret")> | base64 -d > /path/to/file.txt`
+
+If you have line breaks in your secret value, you can encode the value, add it to a secret, and then decode it when you use it in a Harness step.
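As a concrete sketch of that round trip (the value and file path here are illustrative, not from the doc): encode the multi-line value locally, store the base64 output as the secret's value, then decode it inside your step:

```shell
# Encode a multi-line value; store this output as the Harness secret's value
printf 'line 1\nline 2\n' | base64    # -> bGluZSAxCmxpbmUgMgo=

# In a Harness step, decode the secret back into a file
# (a literal stands in here for the <+secrets.getValue(...)> expression)
echo 'bGluZSAxCmxpbmUgMgo=' | base64 -d > /tmp/decoded.txt
cat /tmp/decoded.txt
```

Because the stored value is a single base64 line, the line breaks survive the trip through the expression substitution.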
+
+The previous example uses base64, but you can also write a secret to a file without it:
+
+`echo '<+secrets.getValue("long_secret")>' > /tmp/secretvalue.txt`
+
+If you do not use base64 and the secret value contains any characters that are interpreted by the shell, it might cause issues.
+
+In this case, you can use a special-purpose code block:
+
+
+```
+cat >/harness/secret_exporter/values.txt << 'EOF'
+MySecret:<+secrets.getValue("test")>
+EOF
+```
+### Sanitization
+
+Sanitization only looks for an exact match of what is stored. So, if you stored a base64-encoded value, only the base64-encoded value is sanitized.
+
+For example, say you have this multiline secret:
+
+
+```
+line 1
+line 2
+line 3
+```
+When it is base64 encoded, it results in `bGluZSAxCmxpbmUgMgpsaW5lIDM=`.
+
+You can add this to a Harness secret named **linebreaks** and then decode the secret like this:
+
+
+```
+echo <+secrets.getValue("linebreaks")> | base64 -d
+```
+The result loses any secret sanitization.
+
+![](./static/add-use-text-secrets-52.png)
+### Nested expressions using string concatenation
+
+You can use the `+` operator or the `concat` method inside the secret reference. For example, each of these expressions uses one of the methods together with another Harness variable expression:
+
+* `<+secrets.getValue("test_secret_" + <+pipeline.variables.envVar>)>`
+* `<+secrets.getValue("test_secret_".concat(<+pipeline.variables.envVar>))>`
+
diff --git a/docs/platform/6_Security/3-add-file-secrets.md b/docs/platform/6_Security/3-add-file-secrets.md
new file mode 100644
index 00000000000..7d9ee58d4c7
--- /dev/null
+++ b/docs/platform/6_Security/3-add-file-secrets.md
@@ -0,0 +1,74 @@
+---
+title: Add and Reference File Secrets
+description: This document explains how to add and reference an encrypted file secret.
+# sidebar_position: 2 +helpdocs_topic_id: 77tfo7vtea +helpdocs_category_id: 48wnu4u0tj +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can upload encrypted files and use them in your resources like Pipelines and Connectors, in the same way as encrypted text. + +This topic describes how to add an encrypted file in Harness. + + +### Step 1: Add Encrypted File + +This topic assumes you have a Harness Project set up. If not, see [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md). + +You can add an encrypted file at Project/Organization/Account scope. To do this, go to Project setup, Organization, or Account Resources. + +Click **Secrets**. + +Click **Secret** and select **File.** + +![](./static/add-file-secrets-55.png) +The **Add new Encrypted File** settings appear. + +![](./static/add-file-secrets-56.png) +Select the **Secrets Manager** you will use to encrypt this secret. + +Enter a name for the encrypted file. You will use this name to reference the file in your resources. + +Click **Browse** to locate and add a file. The default Secrets Manager for your account is used to encrypt the file. + +Enter **Description** for your secret. + +Enter **Tags** for your secret. + +Click **Save.** + +### Step 2: Reference the Encrypted File by Name + +You can reference the encrypted file in any resource that uses files. 
+
+For example, in the following **Configuration and Authentication** dialog, click **Create or Select a Secret** under Select or Create a SSH Key File:
+
+![](./static/add-file-secrets-57.png)
+Click **Select an existing Secret** in the dialog, and the dropdown lets you choose the file you added in **Secret Management:**
+
+![](./static/add-file-secrets-58.png)
+### Step 3: Reference the Encrypted File by Identifier
+
+For an Encrypted File secret at the Project scope, you reference the secret in a Resource using its identifier and the expression:
+
+
+```
+<+secrets.getValue("file-secret-Id")>
+```
+The identifier is immutable and is located in the secret settings:
+
+![](./static/add-file-secrets-59.png)
+Always reference a secret in an expression using its identifier. Names will not work.
+
+You can reference a secret at the Org scope using an expression with `org`:
+
+
+```
+<+secrets.getValue("org.file-secret-Id")>
+```
+If your secret is scoped at the Account level, you can reference it using `account`:
+
+
+```
+<+secrets.getValue("account.platformSecret-Id")>
+```
diff --git a/docs/platform/6_Security/4-add-use-ssh-secrets.md b/docs/platform/6_Security/4-add-use-ssh-secrets.md
new file mode 100644
index 00000000000..3da3a2262be
--- /dev/null
+++ b/docs/platform/6_Security/4-add-use-ssh-secrets.md
@@ -0,0 +1,62 @@
+---
+title: Add SSH Keys
+description: This document explains how to add and use SSH secrets.
+# sidebar_position: 2
+helpdocs_topic_id: xmp9j0dk8b
+helpdocs_category_id: 48wnu4u0tj
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+You can add SSH keys for use in connecting to remote servers, such as an AWS EC2 instance.
+### Add SSH Credential
+
+To add an SSH key that can be referenced in Harness entities, do the following:
+
+1. Select your **Account**/**Organization**/**Project**.
+2. In **ACCOUNT SETUP**/**ORG SETUP**/**PROJECT SETUP**, click **Secrets**.
+3. 
Click **New Secret** and select **SSH Credential.** + + ![](./static/add-use-ssh-secrets-17.png) + + The **SSH Credential** settings appear. + + ![](./static/add-use-ssh-secrets-18.png) + +4. Enter a **Name** for the SSH Credential and click **Continue**. +5. Under **Select an Auth Scheme**, select one of the following: + 1. **SSH Key:** add SSH keys for Harness to use when connecting to remote servers. + 2. **Kerberos:** SSH into a target host via the Kerberos protocol. +6. In **User Name**, provide the username for the user account on the remote server. For example, if you want to SSH into an AWS EC2 instance, the username would be **ec2-user**. +7. In **Select or create a SSH Key**, click **Create or Select a Secret**. +8. You can do one of the following: + 1. Click **Create a new secret**. You can create an [Encrypted File Secret](./3-add-file-secrets.md) or an [Encrypted Text Secret](./2-add-use-text-secrets.md). + 2. Click **Select an existing secret.** You can add an existing [Encrypted File Secret](./3-add-file-secrets.md) or an [Encrypted Text Secret](./2-add-use-text-secrets.md) present at your Project, Account or Organization level. + + +:::note +If you are editing an existing SSH Key File, you will not be able to edit the existing inline key that you have entered earlier. Instead, you should select an existing file or create a new Encrypted SSH key file. + +::: + +9. In **Select Encrypted Passphrase**, add the SSH key [passphrase](https://www.ssh.com/ssh/passphrase) if one is required. It is **not** required by default for AWS or many other platforms. Make sure you use a Harness Encrypted Text secret to save the passphrase and refer to it here. Either select an existing secret from the drop-down list or create a new one by clicking  **Create or Select a Secret**. For more information on creating an Encrypted Text Secret, see [Add Text Secrets](./2-add-use-text-secrets.md). +10. 
In **SSH Port**, leave the default **22** or enter a different port if needed.
+11. Click **Save and Continue**.
+12. In **Host Name**, enter the hostname of the remote server you want to SSH into. For example, if it is an AWS EC2 instance, it will be something like `ec2-76-939-110-125.us-west-1.compute.amazonaws.com`.
+13. Click **Test Connection**. If the test is unsuccessful, you might see an error stating that no Harness Delegate could reach the host, or that a credential is invalid. Make sure that your settings are correct and that a Harness Delegate is able to connect to the server.
+14. When a test is successful, click **Submit**.
+
+### Notes
+
+You can convert your OpenSSH key to PEM format with:
+
+`ssh-keygen -p -m PEM -f your_private_key`
+
+This converts your existing file headers from:
+
+`-----BEGIN OPENSSH PRIVATE KEY-----`
+
+to
+
+`-----BEGIN RSA PRIVATE KEY-----`
+
diff --git a/docs/platform/6_Security/5-add-secrets-manager.md b/docs/platform/6_Security/5-add-secrets-manager.md
new file mode 100644
index 00000000000..9bfe19d3fe2
--- /dev/null
+++ b/docs/platform/6_Security/5-add-secrets-manager.md
@@ -0,0 +1,61 @@
+---
+title: Add a Secret Manager
+description: This document explains how to store and use encrypted secrets (such as access keys) using the built-in Harness Secrets Manager, AWS KMS, Google Cloud KMS, HashiCorp Vault, Azure Key Vault, CyberArk, and SSH via Kerberos.
+# sidebar_position: 2
+helpdocs_topic_id: bo4qbrcggv
+helpdocs_category_id: 48wnu4u0tj
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness includes a built-in Secret Management feature that enables you to store encrypted secrets, such as access keys, and use them in your Harness Connectors and Pipelines.
+
+Looking for specific secret managers? 
See:
+
+* [Add an AWS KMS Secret Manager](../6_Security/7-add-an-aws-kms-secrets-manager.md)
+* [Add a HashiCorp Vault Secret Manager](../6_Security/12-add-hashicorp-vault.md)
+* [Add an Azure Key Vault Secret Manager](../6_Security/8-azure-key-vault.md)
+* [Add Google KMS as a Harness Secret Manager](../6_Security/10-add-google-kms-secrets-manager.md)
+* [Add an AWS Secrets Manager](../6_Security/6-add-an-aws-secret-manager.md)
+
+### Before you begin
+
+* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts)
+* [Harness Secret Management Overview](../6_Security/1-harness-secret-manager-overview.md)
+
+### Step 1: Configure Secret Manager
+
+1. Select your **Account** or **Organization** or **Project**.
+2. Select **Connectors** in **Setup**.
+3. Click **New Connector**. The **Connectors** page appears.
+4. Select a Secret Manager type under **Secret Managers**. See:
+* [Add an AWS KMS Secret Manager](./7-add-an-aws-kms-secrets-manager.md)
+* [Add a HashiCorp Vault Secret Manager](./12-add-hashicorp-vault.md)
+* [Add an Azure Key Vault Secret Manager](./8-azure-key-vault.md)
+* [Add Google KMS as a Harness Secret Manager](./10-add-google-kms-secrets-manager.md)
+* [Add an AWS Secrets Manager](./6-add-an-aws-secret-manager.md)
+5. Provide the account access information for the new secret manager.
+6. If you choose to set this secret manager as the default, select **Use as Default Secret Manager**.
+7. Click **Finish**.
+
+When a new Default Secret Manager is set up, only new Cloud Provider and Connector secret fields are encrypted and stored in the new Default Secret Manager. Cloud Providers and Connectors that were created before the change are unaffected.
+
+### Where is the Secret for the Secret Manager Stored?
+
+Harness stores all your secrets in your Secret Manager.
+
+The secret you use to connect Harness to your Secret Manager (password, etc.) is stored in the Harness Default Secret Manager. 
+ +You can't add secrets to the Org or Project scopes using an Account or Org Scope Secret Manager. + +### Next steps + +* Adding Secret Managers + + [Add an AWS KMS Secret Manager](./7-add-an-aws-kms-secrets-manager.md) + + [Add a HashiCorp Vault Secret Manager](./12-add-hashicorp-vault.md) + + [Add an Azure Key Vault Secret Manager](./8-azure-key-vault.md) + + [Add Google KMS as a Harness Secret Manager](./10-add-google-kms-secrets-manager.md) + + [Add an AWS Secrets Manager](./6-add-an-aws-secret-manager.md) +* Managing Secrets + + [Add Text Secrets](./2-add-use-text-secrets.md) + + [Add File Secrets](./3-add-file-secrets.md) + + [Add SSH Keys](./4-add-use-ssh-secrets.md) + diff --git a/docs/platform/6_Security/6-add-an-aws-secret-manager.md b/docs/platform/6_Security/6-add-an-aws-secret-manager.md new file mode 100644 index 00000000000..aaeb044f015 --- /dev/null +++ b/docs/platform/6_Security/6-add-an-aws-secret-manager.md @@ -0,0 +1,176 @@ +--- +title: Add an AWS Secrets Manager +description: This topic shows how to create an AWS Secret Manager. +# sidebar_position: 2 +helpdocs_topic_id: a73o2cg3pe +helpdocs_category_id: 48wnu4u0tj +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can use AWS Secrets Manager for your Harness secrets. + +Unlike AWS KMS, AWS Secrets Manager stores both secrets and encrypted keys. With AWS KMS, Harness stores the secret in its Harness store and retrieves the encryption keys from KMS. For information on using an AWS KMS Secrets Manager, see [Add an AWS KMS Secrets Manager](./7-add-an-aws-kms-secrets-manager.md). + +This topic describes how to add an AWS Secret Manager in Harness. + +### Before you begin + +* If you are adding an AWS Secrets Manager running on ROSA, you must also add an environment variable `AWS_REGION` with the appropriate region as its value. +For example `AWS_REGION=us-east-1`. 
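
The `AWS_REGION` variable mentioned above must be visible to the Delegate process itself. As a minimal local sketch (the region value is only an example; use your ROSA cluster's region), exporting it in the shell that launches the Delegate looks like this:

```shell
# Example only: the Harness Delegate process must inherit AWS_REGION.
# us-east-1 is a placeholder value.
export AWS_REGION=us-east-1

# Verify the variable is set before starting the Delegate process.
echo "AWS_REGION=${AWS_REGION}"
```

For a container-based Delegate, the equivalent is adding `AWS_REGION` to the container's environment variables in its spec.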
+
+### Permissions: Test AWS Permissions
+
+Harness uses the same minimum IAM policies for AWS Secrets Manager access as the AWS CLI.
+
+The AWS account you use for the AWS Secrets Manager must have the following policies at a minimum:
+
+
+```
+{
+  "Version": "2012-10-17",
+  "Statement": {
+    "Effect": "Allow",
+    "Action": [
+      "secretsmanager:Describe*",
+      "secretsmanager:Get*",
+      "secretsmanager:List*"
+    ],
+    "Resource": "*"
+  }
+}
+```
+These policies let you list secrets, which allows you to add the Secret Manager and refer to secrets, but they do not let you read secret values.
+
+The following policy enables Harness to perform all the secrets operations you might need:
+
+
+```
+{
+  "Version": "2012-10-17",
+  "Statement": {
+    "Effect": "Allow",
+    "Action": [
+      "secretsmanager:CreateSecret",
+      "secretsmanager:DescribeSecret",
+      "secretsmanager:DeleteSecret",
+      "secretsmanager:GetRandomPassword",
+      "secretsmanager:GetSecretValue",
+      "secretsmanager:ListSecretVersionIds",
+      "secretsmanager:ListSecrets",
+      "secretsmanager:PutSecretValue",
+      "secretsmanager:UpdateSecret"
+    ],
+    "Resource": "*"
+  }
+}
+```
+See [Using Identity-based Policies (IAM Policies) for Secret Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_identity-based-policies.html) from AWS.
+
+To test, use the AWS account to run [aws secretsmanager list-secrets](https://docs.aws.amazon.com/cli/latest/reference/secretsmanager/list-secrets.html#examples) on either the Harness Delegate host or another host.
+
+### Step 1: Add a Secret Manager
+
+This topic assumes you have a Harness Project set up. If not, see [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md).
+
+You can add a Connector from any module in your Project in Project Setup, or in your Organization or Account Resources.
+
+In **Connectors**, click **New Connector**.
+
+In **Secret Managers**, click **AWS Secrets Manager**. 
The AWS Secrets Manager settings appear.
+
+
+:::note
+For information on restrictions on names and maximum quotas, see [Quotas for AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/reference_limits.html).
+:::
+
+
+### Step 2: Overview
+
+Enter a **Name** for your secret manager.
+
+You can choose to update the **ID** or let it be the same as your secret manager's name. For more information, see [Entity Identifier Reference](../20_References/entity-identifier-reference.md).
+
+Enter a **Description** for your secret manager.
+
+Enter **Tags** for your secret manager.
+
+Click **Continue**.
+
+### Step 3: Details
+
+You can select the following options in **Credential Type** for authenticating with AWS:
+
+* **AWS Access Key**
+* **Assume IAM Role on Delegate**
+* **Assume Role Using STS on Delegate**
+
+### Option: AWS Access Key
+
+Use your AWS IAM user login credentials.
+
+Gather the **AWS - Access Key ID** and **AWS - Secret Access Key** from the JSON for the **Key Policy**, or in the AWS **IAM** console, under **Encryption keys**.
+
+For more information, see [Finding the Key ID and ARN](https://docs.aws.amazon.com/kms/latest/developerguide/viewing-keys.html#find-cmk-id-arn) from Amazon.
+
+#### AWS - Access Key ID
+
+Click **Create or Select a Secret**.
+
+In the secret settings dialog, you can create or select a [Secret](./2-add-use-text-secrets.md) and enter your AWS Access Key as its value.
+
+The AWS Access Key is the AWS Access Key ID for the IAM user you want to use to connect to Secret Manager.
+
+#### AWS - Secret Access Key
+
+Click **Create or Select a Secret**.
+
+You can either create a new [Secret](./2-add-use-text-secrets.md) with your Access Key ID's secret key as its **Value** or use an existing secret.
+
+#### Secret Name Prefix
+
+Enter a **Secret Name Prefix**. All the secrets under this secret manager will have this prefix. For example, `devops` will result in secrets like `devops/mysecret`. 
The prefix is not a folder name. + +#### Region + +Select the AWS **Region** for the Secret Manager. + +### Option: Assume IAM Role on Delegate + +If you select this option, Harness will authenticate using the IAM role assigned to the AWS host running the Delegate you select. You can select a Delegate using a Delegate Selector. + +Refer to [Secret Name Prefix](./6-add-an-aws-secret-manager.md#secret-name-prefix) and [Region](./6-add-an-aws-secret-manager.md#region) explained above to add these details. + +### Option: Assume Role Using STS on Delegate + +This option uses the [AWS Security Token Service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) (STS) feature. Typically, you use `AssumeRole` within your account or for AWS cross-account access. + +Refer to [Secret Name Prefix](./6-add-an-aws-secret-manager.md#secret-name-prefix) and [Region](./6-add-an-aws-secret-manager.md#region) explained above to add these details. + +#### Role ARN + +Enter the Amazon Resource Name (ARN) of the role that you want to assume. This role is an IAM role in the target deployment AWS account. + +#### External ID + +If the administrator of the account to which the role belongs provided you with an external ID, then enter that value. + +For more information, see [How to Use an External ID When Granting Access to Your AWS Resources to a Third Party](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html) from AWS. + +#### Assume Role Duration + +Enter the AssumeRole Session Duration. See Session Duration in the [AssumeRole AWS docs](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html). + +### Step 4: Setup Delegates + +In **Setup Delegates,** enter [**Selectors**](../2_Delegates/delegate-guide/select-delegates-with-selectors.md#option-select-a-delegate-for-a-connector-using-tags) for specific **Delegates** that you want to allow to connect to this Connector. 
+
+### Step 5: Test Connection
+
+Once the Test Connection succeeds, click **Finish**. You can now see the Connector in **Connectors**.
+
+
+:::note
+**Important: If the Test Connection fails**
+
+Harness tests connections by creating a dummy secret in the Secret Manager or Vault. For the **Test Connection** to function successfully, make sure you have **Create** permission for secrets.
+The Test Connection fails if you do not have Create permission. However, Harness still creates the Connector for you. You may use this Connector to read secrets if you have **View** permissions.
+:::
diff --git a/docs/platform/6_Security/7-add-an-aws-kms-secrets-manager.md b/docs/platform/6_Security/7-add-an-aws-kms-secrets-manager.md
new file mode 100644
index 00000000000..94e8c3a4a9c
--- /dev/null
+++ b/docs/platform/6_Security/7-add-an-aws-kms-secrets-manager.md
@@ -0,0 +1,109 @@
+---
+title: Add an AWS KMS Secret Manager
+description: To store and use encrypted secrets (such as access keys), you can add an AWS KMS Secrets Manager.
+# sidebar_position: 2
+helpdocs_topic_id: pt52h8sb6z
+helpdocs_category_id: 48wnu4u0tj
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+To store and use encrypted secrets (such as access keys) and files, you can add an AWS KMS Secret Manager.
+
+This topic describes how to add an AWS KMS Secret Manager in Harness.
+
+### Before you begin
+
+* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts).
+* [Harness Secret Manager Overview](../6_Security/1-harness-secret-manager-overview.md).
+
+### Step 1: Add a Secret Manager
+
+This topic assumes you have a Harness Project set up. If not, see [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md).
+
+You can add a Connector from any module in your Project in Project setup, or in your Organization or Account Resources.
+
+In **Connectors**, click **New Connector**.
+
+In **Secret Managers**, click **AWS KMS**. 
The **AWS Key Management Service** settings appear.
+
+![](./static/add-an-aws-kms-secrets-manager-53.png)
+### Step 2: Overview
+
+Enter a **Name** for your secret manager.
+
+You can choose to update the **ID** or let it be the same as your secret manager's name. For more information, see [Entity Identifier Reference](../20_References/entity-identifier-reference.md).
+
+Enter a **Description** for your secret manager.
+
+Enter **Tags** for your secret manager.
+
+Click **Continue**.
+
+### Option: Credential Type
+
+You can select the following options for authenticating with AWS:
+
+* **AWS Access Key**
+* **Assume IAM role on delegate**
+* **Assume Role using STS on delegate**
+
+### Option: AWS Access Key
+
+Use your AWS IAM user login credentials.
+
+Either from the JSON for the **Key Policy**, or in the AWS **IAM** console under **Encryption keys**, gather the **AWS Access Key ID**, **AWS Secret Key**, and **Amazon Resource Name (ARN)**.
+
+![](./static/add-an-aws-kms-secrets-manager-54.png)
+For more information, see [Finding the Key ID and ARN](https://docs.aws.amazon.com/kms/latest/developerguide/viewing-keys.html#find-cmk-id-arn) from Amazon.
+
+#### AWS Access Key ID
+
+Click **Create or Select a Secret**.
+
+In the secret settings dialog, you can create or select a [Secret](./2-add-use-text-secrets.md) and enter your AWS Access Key as its value.
+
+The AWS Access Key is the AWS Access Key ID for the IAM user you want to use to connect to Secret Manager.
+
+#### AWS Secret Access Key
+
+Click **Create or Select a Secret**.
+
+You can create a new [Secret](./2-add-use-text-secrets.md) with your Access Key ID's secret key as the **Secret Value**, or use an existing secret.
+
+#### AWS ARN
+
+Click **Create or Select a Secret**.
+
+As explained above, you can create a new [Secret](./2-add-use-text-secrets.md) with your ARN as the **Secret Value**, or use an existing secret. 
+
+### Option: Assume IAM Role on Delegate
+
+If you select **Assume IAM role on delegate**, Harness authenticates using the IAM role assigned to the AWS host running the Delegate. You can select the Delegate using a Delegate Selector.
+
+### Option: Assume Role using STS on Delegate
+
+This option uses the [AWS Security Token Service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) (STS) feature. Typically, you use `AssumeRole` within your account or for AWS cross-account access.
+
+#### Role ARN
+
+Enter the Amazon Resource Name (ARN) of the role that you want to assume. This is an IAM role in the target deployment AWS account.
+
+#### External ID
+
+If the administrator of the account to which the role belongs provided you with an external ID, then enter that value.
+
+For more information, see [How to Use an External ID When Granting Access to Your AWS Resources to a Third Party](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html) from AWS.
+
+#### Assume Role Duration (seconds)
+
+This is the AssumeRole Session Duration. See Session Duration in the [AssumeRole AWS docs](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html).
+
+### Step 3: Setup Delegates
+
+In **Delegates** **Setup**, enter [**Selectors**](../2_Delegates/delegate-guide/select-delegates-with-selectors.md#option-select-a-delegate-for-a-connector-using-tags) for specific **Delegates** that you want to allow to connect to this Connector. Click **Save and Continue**. 
+
+### Step 4: Test Connection
+
+In **Connection** **Test**, click **Finish** after your connection is successful.
+
diff --git a/docs/platform/6_Security/8-azure-key-vault.md b/docs/platform/6_Security/8-azure-key-vault.md
new file mode 100644
index 00000000000..a9c883336ce
--- /dev/null
+++ b/docs/platform/6_Security/8-azure-key-vault.md
@@ -0,0 +1,183 @@
+---
+title: Add an Azure Key Vault Secret Manager
+description: This document explains steps to add and use Azure Key Vault to store and use encrypted secrets, such as access keys.
+# sidebar_position: 2
+helpdocs_topic_id: 53jrd1cv4i
+helpdocs_category_id: 48wnu4u0tj
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+To store and use encrypted secrets (such as access keys) and files, you can add an Azure Key Vault Secret Manager.
+
+### Before you begin
+
+* See Harness [Secret Manager Overview](../6_Security/1-harness-secret-manager-overview.md).
+* See [About Azure Key Vault](https://docs.microsoft.com/en-us/azure/key-vault/general/overview) by Microsoft.
+* See [Azure Key Vault Basic Concepts](https://docs.microsoft.com/en-us/azure/key-vault/general/basic-concepts).
+* Make sure you have set up an Azure account.
+* Make sure you have **View** and **Create/Edit** permissions for secrets.
+
+### Review: Secret Manager Overview
+
+For a full overview of how your secrets are used with the Secrets Managers you configure in Harness, see [Harness Secrets Management Overview](../6_Security/1-harness-secret-manager-overview.md) and [Harness Security FAQs](https://docs.harness.io/article/320domdle1-harness-security-faqs).
+
+Here's a visual summary:
+
+![](./static/azure-key-vault-00.png)
+### Limitations
+
+* Key Vault stores and manages secrets as sequences of octets (8-bit bytes), with a maximum size of 25k bytes each. For more information, see [Azure Key Vault secrets](https://docs.microsoft.com/en-us/azure/key-vault/secrets/about-secrets). 
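
Given the size limit above, it can be worth checking a candidate secret's byte count before uploading it. A quick shell sketch (the file path is illustrative, and the limit is taken as 25,000 bytes):

```shell
# Create a sample 10,000-byte file standing in for a secret value.
FILE=/tmp/candidate-secret.txt
head -c 10000 /dev/zero | tr '\0' 'x' > "$FILE"

# Compare its size in bytes against Azure Key Vault's 25k-byte limit.
SIZE=$(wc -c < "$FILE")
if [ "$SIZE" -le 25000 ]; then
  echo "ok: $SIZE bytes"
else
  echo "too large: $SIZE bytes"
fi
```

Note that the limit applies to the stored octets, so a base64-encoded value counts at its encoded (roughly 4/3) size.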
+
+### Visual Overview
+
+Azure Key Vault safeguards cryptographic keys and secrets, encrypting authentication keys, storage account keys, data encryption keys, .pfx files, and passwords.
+
+![](./static/azure-key-vault-01.png)
+### Step 1: Create Azure Reader Role
+
+To enable Harness to fetch your Azure vaults later (in Step 5 below), you must first set up a **Reader** role in Azure. You can do this in either of two ways:
+
+* Azure Portal
+* PowerShell Command
+
+### Step 2: Create a Reader Role in Azure
+
+To create a **Reader** role in the Azure portal UI:
+
+Navigate to Azure's **Subscriptions** page.
+
+![](./static/azure-key-vault-02.png)
+Under **Subscription name**, select the subscription where your vaults reside.
+
+![](./static/azure-key-vault-03.png)
+
+
+:::tip
+Copy and save the **Subscription ID**. You can paste this value into Harness Manager below at Option: Enter Subscription.
+:::
+
+
+Select your **Subscription’s Access control (IAM)** property.
+
+![](./static/azure-key-vault-04.png)
+On the resulting **Access control (IAM)** page, select **Add a role assignment**.
+
+In the resulting right pane, set the **Role** to **Reader**.
+
+![](./static/azure-key-vault-05.png)
+Accept the default value: **Assign access to**: **Azure AD user**, **group, or service principal**.
+
+In the **Select** drop-down, select the name of your Azure App registration.
+
+![](./static/azure-key-vault-06.png)
+Click **Save**.
+
+On the **Access control (IAM)** page, select the **Role assignments** tab. Make sure your new role now appears under the **Reader** group.
+
+![](./static/azure-key-vault-07.png)
+
+
+:::note
+Microsoft Azure's [Manage subscriptions](https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/add-change-subscription-administrator#to-assign-a-user-as-an-administrator) documentation adds details about the above procedure but focuses on the **Administrator** rather than the **Reader** role. 
+:::
+
+
+#### PowerShell Command
+
+You can also create a **Reader** role programmatically via this PowerShell command, after gathering the required parameters:
+
+
+```
+New-AzRoleAssignment -ObjectId <objectId> -RoleDefinitionName "Reader" -Scope /subscriptions/<subscriptionId>
+```
+For details and examples, see Microsoft Azure's [Add or remove role assignments](https://docs.microsoft.com/en-us/azure/role-based-access-control/role-assignments-powershell#application-at-a-subscription-scope) documentation.
+
+### Step 3: Configure Secret Manager in Harness
+
+Select your **Account** or **Organization** or **Project.**
+
+Select **Connectors** under **ACCOUNT SETUP/ORG SETUP/PROJECT SETUP.**
+
+![](./static/azure-key-vault-08.png)
+Click **New Connector**. The **Connectors** page appears.
+
+Scroll down to **Secret Managers** and click **Azure Key Vault**.
+
+![](./static/azure-key-vault-09.png)
+Enter a **Name** for the secret manager.
+
+You can choose to update the **ID** or let it be the same as your secret manager's name. For more information, see [Entity Identifier Reference](../20_References/entity-identifier-reference.md).
+
+Enter a **Description** and **Tags** for your secret manager.
+
+Click **Continue**.
+
+On the **Details** page, enter the **Client ID** and **Tenant ID** corresponding to the fields highlighted below in the Azure UI:
+
+![](./static/azure-key-vault-10.png)
+To provide these values:
+
+* In Azure, navigate to the **Azure Active Directory** > **App registrations** page, then select your App registration. (For details, see Azure's [Quickstart: Register an application with the Microsoft identity platform](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v1-add-azure-ad-app).)
+* Copy the **Application (client) ID** for the Azure App registration you are using, and paste it into the Harness dialog's **Client ID** field. 
+* Copy the **Directory (tenant) ID** of the Azure Active Directory (AAD) where you created your application, and paste it into the Harness dialog's **Tenant ID** field. (For details, see Microsoft Azure's [Get values for signing in](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#get-values-for-signing-in) topic.)
+* In the **Subscription** field, you can optionally enter your Azure Subscription ID (GUID).
+
+To find this ID, navigate to Azure's **Subscriptions** page, as outlined above in [Step 1: Create Azure Reader Role](../6_Security/8-azure-key-vault.md#step-1-create-azure-reader-role). From the resulting list of subscriptions, copy the **Subscription ID** beside the subscription that contains your vaults.
+
+![](./static/azure-key-vault-11.png)
+
+
+:::note
+If you do not enter a GUID, Harness uses the default subscription for the Client ID you've provided above.
+:::
+
+
+Click **Create or Select a Secret** in the **Key** field. For detailed steps on creating a new secret, see [Add Text Secrets](./2-add-use-text-secrets.md).
+
+![](./static/azure-key-vault-12.png)
+
+The secret that you reference here should have the Azure authentication key as the **Secret Value**. The image below shows the creation of a secret with the Azure authentication key as its value:
+
+![](./static/azure-key-vault-13.png)
+
+To create and exchange the Azure authentication key, perform the following steps:
+
+* Navigate to Azure's **Certificates & secrets** page. (For details, see Microsoft Azure's [Create a new application secret](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal#get-application-id-and-authentication-key) documentation.)
+* In the resulting page’s **Client secrets** section, select **New client secret**.
+
+![](./static/azure-key-vault-14.png)
+
+* Enter a **Description** and expiration option, then click **Add**. 
+
+![](./static/azure-key-vault-15.png)
+
+* Find your new key in the **Client secrets** section, and copy its value to your clipboard.
+
+![](./static/azure-key-vault-16.png)
+
+
+:::note
+This is your only chance to view this key's value in Azure. Store the value somewhere secure, and keep it on your clipboard.
+:::
+
+
+Click **Continue**.
+
+### Step 4: Setup Delegates
+
+In **Delegates** **Setup**, enter [**Selectors**](../2_Delegates/delegate-guide/select-delegates-with-selectors.md#option-select-a-delegate-for-a-connector-using-tags) for specific **Delegates** that you want to allow to connect to this Connector. Click **Continue**.
+
+### Step 5: Setup Vault
+
+Click **Fetch Vault**.
+
+After a slight delay, the **Vault** drop-down list populates with vaults corresponding to your client secret. Select the Vault you want to use.
+
+Click **Save and Continue**.
+
+### Step 6: Test Connection
+
+Once the Test Connection succeeds, click **Finish**. You can now see the Connector in **Connectors**.
+
+
+:::note
+**Important: If the Test Connection fails**
+
+Harness tests connections by generating a fake secret in the Secret Manager or Vault. For the Test Connection to function successfully, make sure you have the Create permission for secrets.
+The Test Connection fails if you do not have the Create permission. However, Harness still creates the Connector for you. You may use this Connector to read secrets if you have View permissions.
+:::
diff --git a/docs/platform/6_Security/9-custom-secret-manager.md b/docs/platform/6_Security/9-custom-secret-manager.md
new file mode 100644
index 00000000000..ee9512d75cb
--- /dev/null
+++ b/docs/platform/6_Security/9-custom-secret-manager.md
@@ -0,0 +1,98 @@
+---
+title: Add a Custom Secret Manager
+description: This topic explains how to create and use a Custom Secret Manager. 
+# sidebar_position: 2
+helpdocs_topic_id: mg09uspsx1
+helpdocs_category_id: 48wnu4u0tj
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+:::note
+Currently, this feature is behind the feature flag `CUSTOM_SECRET_MANAGER_NG`. Contact Harness Support to enable the feature.
+:::
+
+Harness includes a built-in Secrets Management feature that enables you to store encrypted secrets, such as access keys, and use them in your Harness account.
+
+You can also access your encrypted secrets stored in third-party Secret Managers using the Harness Custom Secret Manager.
+
+This topic explains how to add and use a Custom Secret Manager in Harness.
+
+### Before you begin
+
+* [Harness Secret Manager Overview](../6_Security/1-harness-secret-manager-overview.md)
+
+### Permissions
+
+* Create/Edit Secrets
+* Create/Edit Connectors
+
+![](./static/custom-secret-manager-31.png)
+
+### Important
+
+* Harness Custom Secret Manager is a read-only Secret Manager.
+* Harness can read/decrypt secrets, but it cannot write secrets to the Custom Secret Manager.
+
+### Harness Custom Secret Manager Overview
+
+Harness includes a built-in Secrets Management feature that enables you to store encrypted secrets, such as access keys, and use them in your Harness Account.
+
+Harness integrates with the following third-party Secret Managers along with a built-in Secret Manager:
+
+* AWS KMS
+* AWS Secrets Manager
+* Azure Key Vault
+* GCP KMS
+* HashiCorp Vault
+
+You can also use third-party Secret Managers that are not integrated with Harness to store your encrypted secrets. With the Harness Custom Secret Manager, you can integrate Harness with your third-party Secret Manager and read or access your secrets.
+
+Your Custom Secret Manager uses a shell script that you can execute either on a Delegate or on a remote host that is connected to the Delegate. Harness fetches and reads your secrets from the third-party Secret Manager through this shell script. 
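
To make the flow concrete, here is a minimal sketch of the kind of script such a template might contain. It uses a local file as a stand-in for a real third-party secret store; the store path, key name, and `key=value` format are all illustrative, and in a real template the key would come from an Input Variable rather than being hard-coded:

```shell
# Stand-in secret store: one "key=value" entry per line in a local file.
# A real Custom Secret Manager script would call your store's CLI or API.
STORE=/tmp/demo-secret-store.txt
printf 'db_password=s3cret\napi_token=abc123\n' > "$STORE"

# In a real template, the key would be supplied as an Input Variable.
key="db_password"

# Print the secret value on stdout; Harness reads the script's output
# as the resolved secret value.
grep "^${key}=" "$STORE" | cut -d= -f2-
```

Running the sketch prints `s3cret`, the value the script resolved for `db_password`.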
+
+### Step 1: Create a Secret Manager Template
+
+You can create a Secret Manager Template at the Account, Org, or Project scope.
+
+This topic shows you how to create a Secret Manager Template at the Project scope.
+
+1. In your Harness Account, go to your Project.
+2. In Project Setup, click **Templates** and then click **New Template**.
+   ![](./static/custom-secret-manager-32.png)
+3. Click **Secret Manager**. The Secret Manager Template settings appear.
+4. Enter a **Name** for your Secret Manager Template.
+5. Enter a **Version Label**.
+6. Click **Start**.
+   ![](./static/custom-secret-manager-33.png)
+7. Enter your script in **Script**.
+8. Click **Save**.
+
+For detailed steps to create a Secret Manager Template, see [Create a Secret Manager Template](../13_Templates/create-a-secret-manager-template.md).
+
+### Step 2: Add a Custom Secret Manager
+
+You can add a Custom Secret Manager at the Account, Org, or Project scope.
+
+To do this, go to Project Setup, Organization, or Account Resources.
+
+This topic shows you how to add a Custom Secret Manager at the Project scope.
+
+1. In your Harness Account, go to your Project.
+2. In Project Setup, click **Connectors** and then click **New Connector**.
+3. In **Secret Managers**, click **Custom Secret Manager**.
+   ![](./static/custom-secret-manager-34.png)
+   The Custom Secret Manager settings appear.
+4. Enter a **Name** for your Custom Secret Manager. Click **Continue**.
+5. Click **Select Template**. The Template Library appears with all the [Secret Manager Templates](../13_Templates/create-a-secret-manager-template.md) listed.
+6. Select the desired scope and select a Secret Manager Template from the Template Library.
+   ![](./static/custom-secret-manager-35.png)
+   You can also search for a specific Secret Manager Template by entering its name in **Search**.
+7. Once you select the Secret Manager Template, the details are displayed in the Template Studio.
+Click **Use Template**.
+ 1. Enter values for the required Input Variables. 
+ Harness allows you to use [Fixed Values and Runtime Inputs](../20_References/runtime-inputs.md).
+
+ ![](./static/custom-secret-manager-36.png)
+
+ Click **Fixed** to make the variable values fixed. Harness won't ask you for these values when you create secrets.
+8. Click **Continue**.
+9. In **Delegates Setup**, enter [**Selectors**](../2_Delegates/delegate-guide/select-delegates-with-selectors.md#option-select-a-delegate-for-a-connector-using-tags) for the specific **Delegates** that you want to allow to connect to this Connector. Click **Save and Continue**.
+10. In **Connection Test**, click **Finish** after your connection is successful.
+
+### Step 3: Use the Custom Secret Manager
+
+Create an Encrypted Text secret using the Custom Secret Manager you created earlier. Enter the name and values of all the Input Variables defined while creating the Secret Manager Template.
+
+For more information on creating an Encrypted Text secret, see [Add Encrypted Text](./2-add-use-text-secrets.md).
+
+If you want to create a secret on a Target Host Custom Secret Manager, you must also select the Connection Attribute. 
+ diff --git a/docs/platform/6_Security/_category_.json b/docs/platform/6_Security/_category_.json new file mode 100644 index 00000000000..4299521930e --- /dev/null +++ b/docs/platform/6_Security/_category_.json @@ -0,0 +1 @@ +{"label": "Security", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Security"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "48wnu4u0tj"}} \ No newline at end of file diff --git a/docs/platform/6_Security/ref-security/_category_.json b/docs/platform/6_Security/ref-security/_category_.json new file mode 100644 index 00000000000..d7e16ae150e --- /dev/null +++ b/docs/platform/6_Security/ref-security/_category_.json @@ -0,0 +1 @@ +{"label": "Security Reference", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Security Reference"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "59dnj7vtao"}} \ No newline at end of file diff --git a/docs/platform/6_Security/ref-security/secrets-and-log-sanitization.md b/docs/platform/6_Security/ref-security/secrets-and-log-sanitization.md new file mode 100644 index 00000000000..e14f00bbe62 --- /dev/null +++ b/docs/platform/6_Security/ref-security/secrets-and-log-sanitization.md @@ -0,0 +1,145 @@ +--- +title: Secrets and Log Sanitization +description: This topic describes how Harness sanitizes logs and outputs to prevent secrets from being exposed. +# sidebar_position: 2 +helpdocs_topic_id: j07l1tbx5t +helpdocs_category_id: 59dnj7vtao +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness sanitizes deployment logs and any script outputs to mask text secret values. + +First, let's review secrets in Harness, and then look at how Harness sanitizes logs and outputs to prevent secrets from being exposed. 
+
+### Review: Secrets in Harness
+
+You can create secrets in Harness as described in:
+
+* [Add and Reference Text Secrets](../2-add-use-text-secrets.md)
+* [Add and Reference File Secrets](../3-add-file-secrets.md)
+* [Add SSH Secrets](../4-add-use-ssh-secrets.md)
+
+For text and file secrets, the secrets are stored in the Secrets Manager. For steps to add a Secret Manager, see [Add a Secret Manager](../5-add-secrets-manager.md).
+
+Once a secret is added, you can reference it in your Harness entities' settings instead of entering the plaintext value.
+
+You can reference an Encrypted Text secret created in the Org [scope](../../4_Role-Based-Access-Control/1-rbac-in-harness.md#rbac-scope) using the secret identifier in the expression: `<+secrets.getValue("org.your_secret_Id")>`.
+
+You can reference a file secret created in the Org [scope](../../4_Role-Based-Access-Control/1-rbac-in-harness.md#rbac-scope) using the expression `<+secrets.getValue("org.file-secret-Id")>`.
+
+At deployment runtime, the Harness Delegate uses the Secrets Manager to decrypt and read the secret only when it is needed.
+
+Harness sends only encrypted data to the Secrets Manager, as follows:
+
+1. Your browser sends data over HTTPS to Harness Manager.
+2. Harness Manager relays encrypted data to the Harness Delegate, also over HTTPS.
+3. The Delegate exchanges a key pair with the secrets manager, over an encrypted connection.
+4. The Harness Delegate uses the encrypted key and the encrypted secret, and then discards them. The keys never leave the Delegate.
+
+Any secrets manager requires a running Harness Delegate to encrypt and decrypt secrets. Any Delegate that references a secret requires direct access to the secrets manager.
+
+You can manage your secrets in Harness using either a Key Management Service or a third-party Secrets Manager.
+
+### Sanitization
+
+When a text secret is displayed in a deployment log, Harness substitutes the text secret value with asterisks (\*) so that the secret value is never displayed.
+
+For example, suppose you have a Harness text secret with the identifier **doc-secret** containing the value `foo`.
+
+You can reference it in a Shell Script step like this:
+
+
+```
+echo "text secret is: " <+secrets.getValue("doc-secret")>
+```
+When you deploy the Pipeline, the log is sanitized and the output is:
+
+
+```
+Executing command ...
+text secret is: **************
+Command completed with ExitCode (0)
+```
+File secrets are not masked in Harness logs. As noted above, they can be encoded in different formats, but they are not masked from users.
+
+#### Quotes and secrets in a script
+
+By default, secret expressions use quotes for the secret identifier: `<+secrets.getValue("secret_identifier")>`.
+
+If the secret value itself includes quotes, either single or double, anywhere in the value, you must use the opposite quote when you use the expression in a script (echo, etc.). If you do not use the opposite quote, you will expose the secret value.
+
+Single quote example:
+
+Here, the secret value is `'mysecret'` and the identifier is `secret_identifier`. To echo, use double quotes:
+
+`echo "<+secrets.getValue('secret_identifier')>"`
+
+`echo "<+secrets.getValue("secret_identifier")>"`
+
+Double quote example:
+
+Here, the secret value is `"mysecret"` and the identifier is `secret_identifier`. To echo, use single quotes:
+
+`echo '<+secrets.getValue('secret_identifier')>'`
+
+Avoid using `$` in your secret value. If your secret value includes `$`, you must use single quotes when you use the expression in a script.
+For example, if your secret value is `'my$secret'` and the identifier is `secret_identifier`, use single quotes to echo:
+`echo '<+secrets.getValue("secret_identifier")>'`
+
+#### Kubernetes secret objects
+
+When you deploy a [Kubernetes Secret object](https://kubernetes.io/docs/concepts/configuration/secret/) using Harness, Harness substitutes the secret values with asterisks (\*).
+
+Here is a Secret example from the manifest in the Harness Service (using Go templating):
+
+
+```
+{{- if .Values.dockercfg}}
+apiVersion: v1
+kind: Secret
+metadata:
+  name: {{.Values.name}}-dockercfg
+  annotations:
+    harness.io/skip-versioning: true
+data:
+  .dockercfg: {{.Values.dockercfg}}
+type: kubernetes.io/dockercfg
+---
+{{- end}}
+```
+Here is the deployed Secret in the log:
+
+
+```
+apiVersion: v1
+kind: Secret
+metadata:
+  name: harness-example
+stringData:
+  key2: '***'
+```
+### Changing Secrets in Scripts and RBAC
+
+Harness log sanitization detects only exact matches of a secret, or of any line of the secret if it is multi-line.
+
+If an operation within a script changes the value of the secret and Harness cannot match it to the expression, the newly modified string is displayed in the output, exposing the secret value.
+
+If the modification is minor, the secret value can be easily deciphered, which is a security concern. 
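As a plain-shell sketch of this behavior (no Harness expressions involved; the value `foo` mirrors the earlier **doc-secret** example), even a trivial transformation produces output that no longer matches the stored value, so the sanitizer cannot mask it:

```shell
# Stand-in for a resolved text secret; in a real step this value would come
# from an expression such as <+secrets.getValue("doc-secret")>.
SECRET='foo'

echo "$SECRET"                    # exact match: Harness would mask this as ***
echo "$SECRET" | tr 'a-z' 'A-Z'   # prints FOO: no longer an exact match, so a
                                  # transformed value like this would appear in the log
```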
+ +To avoid this issue, use Harness RBAC to control which users can access a secret.​ + +### Log Sanitizer Detects Exact Matches Only + +The log sanitizer detects only exact matches of the secret or any line of the secret if the secret is multiline.​ + +### Secrets 3 Characters Minimum + +The log sanitizer only works on secrets that are three characters or longer.​ + +If the secret value is `ab`, then the log will show:​ + + +``` +Executing command ...​ +text secret is: ab +Command completed with ExitCode (0) +``` diff --git a/docs/platform/6_Security/static/add-a-google-cloud-secret-manager-39.png b/docs/platform/6_Security/static/add-a-google-cloud-secret-manager-39.png new file mode 100644 index 00000000000..679913e96fb Binary files /dev/null and b/docs/platform/6_Security/static/add-a-google-cloud-secret-manager-39.png differ diff --git a/docs/platform/6_Security/static/add-a-google-cloud-secret-manager-40.png b/docs/platform/6_Security/static/add-a-google-cloud-secret-manager-40.png new file mode 100644 index 00000000000..d1c79e4a25c Binary files /dev/null and b/docs/platform/6_Security/static/add-a-google-cloud-secret-manager-40.png differ diff --git a/docs/platform/6_Security/static/add-a-google-cloud-secret-manager-41.png b/docs/platform/6_Security/static/add-a-google-cloud-secret-manager-41.png new file mode 100644 index 00000000000..a87843c4032 Binary files /dev/null and b/docs/platform/6_Security/static/add-a-google-cloud-secret-manager-41.png differ diff --git a/docs/platform/6_Security/static/add-a-google-cloud-secret-manager-42.png b/docs/platform/6_Security/static/add-a-google-cloud-secret-manager-42.png new file mode 100644 index 00000000000..10d995df657 Binary files /dev/null and b/docs/platform/6_Security/static/add-a-google-cloud-secret-manager-42.png differ diff --git a/docs/platform/6_Security/static/add-a-google-cloud-secret-manager-43.png b/docs/platform/6_Security/static/add-a-google-cloud-secret-manager-43.png new file mode 100644 index 
00000000000..8b7769ef7ef Binary files /dev/null and b/docs/platform/6_Security/static/add-a-google-cloud-secret-manager-43.png differ diff --git a/docs/platform/6_Security/static/add-an-aws-kms-secrets-manager-53.png b/docs/platform/6_Security/static/add-an-aws-kms-secrets-manager-53.png new file mode 100644 index 00000000000..7ba3d3ec846 Binary files /dev/null and b/docs/platform/6_Security/static/add-an-aws-kms-secrets-manager-53.png differ diff --git a/docs/platform/6_Security/static/add-an-aws-kms-secrets-manager-54.png b/docs/platform/6_Security/static/add-an-aws-kms-secrets-manager-54.png new file mode 100644 index 00000000000..0735e68e07c Binary files /dev/null and b/docs/platform/6_Security/static/add-an-aws-kms-secrets-manager-54.png differ diff --git a/docs/platform/6_Security/static/add-file-secrets-55.png b/docs/platform/6_Security/static/add-file-secrets-55.png new file mode 100644 index 00000000000..a11f88b7afa Binary files /dev/null and b/docs/platform/6_Security/static/add-file-secrets-55.png differ diff --git a/docs/platform/6_Security/static/add-file-secrets-56.png b/docs/platform/6_Security/static/add-file-secrets-56.png new file mode 100644 index 00000000000..22a3b7d60ed Binary files /dev/null and b/docs/platform/6_Security/static/add-file-secrets-56.png differ diff --git a/docs/platform/6_Security/static/add-file-secrets-57.png b/docs/platform/6_Security/static/add-file-secrets-57.png new file mode 100644 index 00000000000..4e8d4b4c94c Binary files /dev/null and b/docs/platform/6_Security/static/add-file-secrets-57.png differ diff --git a/docs/platform/6_Security/static/add-file-secrets-58.png b/docs/platform/6_Security/static/add-file-secrets-58.png new file mode 100644 index 00000000000..1e8262ef5ba Binary files /dev/null and b/docs/platform/6_Security/static/add-file-secrets-58.png differ diff --git a/docs/platform/6_Security/static/add-file-secrets-59.png b/docs/platform/6_Security/static/add-file-secrets-59.png new file mode 100644 index 
00000000000..93fdfcac476 Binary files /dev/null and b/docs/platform/6_Security/static/add-file-secrets-59.png differ diff --git a/docs/platform/6_Security/static/add-google-kms-secrets-manager-63.png b/docs/platform/6_Security/static/add-google-kms-secrets-manager-63.png new file mode 100644 index 00000000000..0829a0b9318 Binary files /dev/null and b/docs/platform/6_Security/static/add-google-kms-secrets-manager-63.png differ diff --git a/docs/platform/6_Security/static/add-google-kms-secrets-manager-64.png b/docs/platform/6_Security/static/add-google-kms-secrets-manager-64.png new file mode 100644 index 00000000000..9e0f2ece38a Binary files /dev/null and b/docs/platform/6_Security/static/add-google-kms-secrets-manager-64.png differ diff --git a/docs/platform/6_Security/static/add-google-kms-secrets-manager-65.png b/docs/platform/6_Security/static/add-google-kms-secrets-manager-65.png new file mode 100644 index 00000000000..6e9954c93cc Binary files /dev/null and b/docs/platform/6_Security/static/add-google-kms-secrets-manager-65.png differ diff --git a/docs/platform/6_Security/static/add-google-kms-secrets-manager-66.png b/docs/platform/6_Security/static/add-google-kms-secrets-manager-66.png new file mode 100644 index 00000000000..7b4c8c527ed Binary files /dev/null and b/docs/platform/6_Security/static/add-google-kms-secrets-manager-66.png differ diff --git a/docs/platform/6_Security/static/add-google-kms-secrets-manager-67.png b/docs/platform/6_Security/static/add-google-kms-secrets-manager-67.png new file mode 100644 index 00000000000..d02e729c1d5 Binary files /dev/null and b/docs/platform/6_Security/static/add-google-kms-secrets-manager-67.png differ diff --git a/docs/platform/6_Security/static/add-google-kms-secrets-manager-68.png b/docs/platform/6_Security/static/add-google-kms-secrets-manager-68.png new file mode 100644 index 00000000000..610506923c5 Binary files /dev/null and b/docs/platform/6_Security/static/add-google-kms-secrets-manager-68.png differ diff 
--git a/docs/platform/6_Security/static/add-google-kms-secrets-manager-69.png b/docs/platform/6_Security/static/add-google-kms-secrets-manager-69.png new file mode 100644 index 00000000000..f3a8496ea2d Binary files /dev/null and b/docs/platform/6_Security/static/add-google-kms-secrets-manager-69.png differ diff --git a/docs/platform/6_Security/static/add-google-kms-secrets-manager-70.png b/docs/platform/6_Security/static/add-google-kms-secrets-manager-70.png new file mode 100644 index 00000000000..53c3f8ed3fd Binary files /dev/null and b/docs/platform/6_Security/static/add-google-kms-secrets-manager-70.png differ diff --git a/docs/platform/6_Security/static/add-hashicorp-vault-19.png b/docs/platform/6_Security/static/add-hashicorp-vault-19.png new file mode 100644 index 00000000000..72e5c3104d9 Binary files /dev/null and b/docs/platform/6_Security/static/add-hashicorp-vault-19.png differ diff --git a/docs/platform/6_Security/static/add-hashicorp-vault-20.png b/docs/platform/6_Security/static/add-hashicorp-vault-20.png new file mode 100644 index 00000000000..d186740de18 Binary files /dev/null and b/docs/platform/6_Security/static/add-hashicorp-vault-20.png differ diff --git a/docs/platform/6_Security/static/add-hashicorp-vault-21.png b/docs/platform/6_Security/static/add-hashicorp-vault-21.png new file mode 100644 index 00000000000..327d1c66da1 Binary files /dev/null and b/docs/platform/6_Security/static/add-hashicorp-vault-21.png differ diff --git a/docs/platform/6_Security/static/add-hashicorp-vault-22.png b/docs/platform/6_Security/static/add-hashicorp-vault-22.png new file mode 100644 index 00000000000..831ec7a39fd Binary files /dev/null and b/docs/platform/6_Security/static/add-hashicorp-vault-22.png differ diff --git a/docs/platform/6_Security/static/add-hashicorp-vault-23.png b/docs/platform/6_Security/static/add-hashicorp-vault-23.png new file mode 100644 index 00000000000..4dee671b9cf Binary files /dev/null and 
b/docs/platform/6_Security/static/add-hashicorp-vault-23.png differ diff --git a/docs/platform/6_Security/static/add-hashicorp-vault-24.png b/docs/platform/6_Security/static/add-hashicorp-vault-24.png new file mode 100644 index 00000000000..efd60900049 Binary files /dev/null and b/docs/platform/6_Security/static/add-hashicorp-vault-24.png differ diff --git a/docs/platform/6_Security/static/add-hashicorp-vault-25.png b/docs/platform/6_Security/static/add-hashicorp-vault-25.png new file mode 100644 index 00000000000..a545f671b80 Binary files /dev/null and b/docs/platform/6_Security/static/add-hashicorp-vault-25.png differ diff --git a/docs/platform/6_Security/static/add-hashicorp-vault-26.png b/docs/platform/6_Security/static/add-hashicorp-vault-26.png new file mode 100644 index 00000000000..b3b211f8714 Binary files /dev/null and b/docs/platform/6_Security/static/add-hashicorp-vault-26.png differ diff --git a/docs/platform/6_Security/static/add-hashicorp-vault-27.png b/docs/platform/6_Security/static/add-hashicorp-vault-27.png new file mode 100644 index 00000000000..d7e739c95b2 Binary files /dev/null and b/docs/platform/6_Security/static/add-hashicorp-vault-27.png differ diff --git a/docs/platform/6_Security/static/add-hashicorp-vault-28.png b/docs/platform/6_Security/static/add-hashicorp-vault-28.png new file mode 100644 index 00000000000..bba5f2aa1f7 Binary files /dev/null and b/docs/platform/6_Security/static/add-hashicorp-vault-28.png differ diff --git a/docs/platform/6_Security/static/add-hashicorp-vault-29.png b/docs/platform/6_Security/static/add-hashicorp-vault-29.png new file mode 100644 index 00000000000..fe07e1bc7f8 Binary files /dev/null and b/docs/platform/6_Security/static/add-hashicorp-vault-29.png differ diff --git a/docs/platform/6_Security/static/add-hashicorp-vault-30.png b/docs/platform/6_Security/static/add-hashicorp-vault-30.png new file mode 100644 index 00000000000..ad0590fb217 Binary files /dev/null and 
b/docs/platform/6_Security/static/add-hashicorp-vault-30.png differ diff --git a/docs/platform/6_Security/static/add-use-ssh-secrets-17.png b/docs/platform/6_Security/static/add-use-ssh-secrets-17.png new file mode 100644 index 00000000000..266f56b5bb0 Binary files /dev/null and b/docs/platform/6_Security/static/add-use-ssh-secrets-17.png differ diff --git a/docs/platform/6_Security/static/add-use-ssh-secrets-18.png b/docs/platform/6_Security/static/add-use-ssh-secrets-18.png new file mode 100644 index 00000000000..d1ea424d8f0 Binary files /dev/null and b/docs/platform/6_Security/static/add-use-ssh-secrets-18.png differ diff --git a/docs/platform/6_Security/static/add-use-text-secrets-45.png b/docs/platform/6_Security/static/add-use-text-secrets-45.png new file mode 100644 index 00000000000..a11f88b7afa Binary files /dev/null and b/docs/platform/6_Security/static/add-use-text-secrets-45.png differ diff --git a/docs/platform/6_Security/static/add-use-text-secrets-46.png b/docs/platform/6_Security/static/add-use-text-secrets-46.png new file mode 100644 index 00000000000..71f7299a004 Binary files /dev/null and b/docs/platform/6_Security/static/add-use-text-secrets-46.png differ diff --git a/docs/platform/6_Security/static/add-use-text-secrets-47.png b/docs/platform/6_Security/static/add-use-text-secrets-47.png new file mode 100644 index 00000000000..ff9013f2e1b Binary files /dev/null and b/docs/platform/6_Security/static/add-use-text-secrets-47.png differ diff --git a/docs/platform/6_Security/static/add-use-text-secrets-48.png b/docs/platform/6_Security/static/add-use-text-secrets-48.png new file mode 100644 index 00000000000..9bb6cc66782 Binary files /dev/null and b/docs/platform/6_Security/static/add-use-text-secrets-48.png differ diff --git a/docs/platform/6_Security/static/add-use-text-secrets-49.png b/docs/platform/6_Security/static/add-use-text-secrets-49.png new file mode 100644 index 00000000000..38ffb11ba87 Binary files /dev/null and 
b/docs/platform/6_Security/static/add-use-text-secrets-49.png differ diff --git a/docs/platform/6_Security/static/add-use-text-secrets-50.png b/docs/platform/6_Security/static/add-use-text-secrets-50.png new file mode 100644 index 00000000000..5764adb4b2b Binary files /dev/null and b/docs/platform/6_Security/static/add-use-text-secrets-50.png differ diff --git a/docs/platform/6_Security/static/add-use-text-secrets-51.png b/docs/platform/6_Security/static/add-use-text-secrets-51.png new file mode 100644 index 00000000000..3d7dc5c8690 Binary files /dev/null and b/docs/platform/6_Security/static/add-use-text-secrets-51.png differ diff --git a/docs/platform/6_Security/static/add-use-text-secrets-52.png b/docs/platform/6_Security/static/add-use-text-secrets-52.png new file mode 100644 index 00000000000..3ff36bbc1d4 Binary files /dev/null and b/docs/platform/6_Security/static/add-use-text-secrets-52.png differ diff --git a/docs/platform/6_Security/static/azure-key-vault-00.png b/docs/platform/6_Security/static/azure-key-vault-00.png new file mode 100644 index 00000000000..203f08e6086 Binary files /dev/null and b/docs/platform/6_Security/static/azure-key-vault-00.png differ diff --git a/docs/platform/6_Security/static/azure-key-vault-01.png b/docs/platform/6_Security/static/azure-key-vault-01.png new file mode 100644 index 00000000000..cbe0a4c2461 Binary files /dev/null and b/docs/platform/6_Security/static/azure-key-vault-01.png differ diff --git a/docs/platform/6_Security/static/azure-key-vault-02.png b/docs/platform/6_Security/static/azure-key-vault-02.png new file mode 100644 index 00000000000..c548a5ffb89 Binary files /dev/null and b/docs/platform/6_Security/static/azure-key-vault-02.png differ diff --git a/docs/platform/6_Security/static/azure-key-vault-03.png b/docs/platform/6_Security/static/azure-key-vault-03.png new file mode 100644 index 00000000000..6bc529db71a Binary files /dev/null and b/docs/platform/6_Security/static/azure-key-vault-03.png differ diff 
--git a/docs/platform/6_Security/static/azure-key-vault-04.png b/docs/platform/6_Security/static/azure-key-vault-04.png new file mode 100644 index 00000000000..117e9f17fd5 Binary files /dev/null and b/docs/platform/6_Security/static/azure-key-vault-04.png differ diff --git a/docs/platform/6_Security/static/azure-key-vault-05.png b/docs/platform/6_Security/static/azure-key-vault-05.png new file mode 100644 index 00000000000..743a84b024d Binary files /dev/null and b/docs/platform/6_Security/static/azure-key-vault-05.png differ diff --git a/docs/platform/6_Security/static/azure-key-vault-06.png b/docs/platform/6_Security/static/azure-key-vault-06.png new file mode 100644 index 00000000000..9f4b10641d3 Binary files /dev/null and b/docs/platform/6_Security/static/azure-key-vault-06.png differ diff --git a/docs/platform/6_Security/static/azure-key-vault-07.png b/docs/platform/6_Security/static/azure-key-vault-07.png new file mode 100644 index 00000000000..59d168a8fe0 Binary files /dev/null and b/docs/platform/6_Security/static/azure-key-vault-07.png differ diff --git a/docs/platform/6_Security/static/azure-key-vault-08.png b/docs/platform/6_Security/static/azure-key-vault-08.png new file mode 100644 index 00000000000..588234c4636 Binary files /dev/null and b/docs/platform/6_Security/static/azure-key-vault-08.png differ diff --git a/docs/platform/6_Security/static/azure-key-vault-09.png b/docs/platform/6_Security/static/azure-key-vault-09.png new file mode 100644 index 00000000000..6b03d4a6511 Binary files /dev/null and b/docs/platform/6_Security/static/azure-key-vault-09.png differ diff --git a/docs/platform/6_Security/static/azure-key-vault-10.png b/docs/platform/6_Security/static/azure-key-vault-10.png new file mode 100644 index 00000000000..a02df91dc85 Binary files /dev/null and b/docs/platform/6_Security/static/azure-key-vault-10.png differ diff --git a/docs/platform/6_Security/static/azure-key-vault-11.png b/docs/platform/6_Security/static/azure-key-vault-11.png new 
file mode 100644 index 00000000000..6bc529db71a Binary files /dev/null and b/docs/platform/6_Security/static/azure-key-vault-11.png differ diff --git a/docs/platform/6_Security/static/azure-key-vault-12.png b/docs/platform/6_Security/static/azure-key-vault-12.png new file mode 100644 index 00000000000..5d38faf4f53 Binary files /dev/null and b/docs/platform/6_Security/static/azure-key-vault-12.png differ diff --git a/docs/platform/6_Security/static/azure-key-vault-13.png b/docs/platform/6_Security/static/azure-key-vault-13.png new file mode 100644 index 00000000000..62a35039329 Binary files /dev/null and b/docs/platform/6_Security/static/azure-key-vault-13.png differ diff --git a/docs/platform/6_Security/static/azure-key-vault-14.png b/docs/platform/6_Security/static/azure-key-vault-14.png new file mode 100644 index 00000000000..7aaf98b4792 Binary files /dev/null and b/docs/platform/6_Security/static/azure-key-vault-14.png differ diff --git a/docs/platform/6_Security/static/azure-key-vault-15.png b/docs/platform/6_Security/static/azure-key-vault-15.png new file mode 100644 index 00000000000..f94449965df Binary files /dev/null and b/docs/platform/6_Security/static/azure-key-vault-15.png differ diff --git a/docs/platform/6_Security/static/azure-key-vault-16.png b/docs/platform/6_Security/static/azure-key-vault-16.png new file mode 100644 index 00000000000..e89c505b0c0 Binary files /dev/null and b/docs/platform/6_Security/static/azure-key-vault-16.png differ diff --git a/docs/platform/6_Security/static/custom-secret-manager-31.png b/docs/platform/6_Security/static/custom-secret-manager-31.png new file mode 100644 index 00000000000..ce8014f7276 Binary files /dev/null and b/docs/platform/6_Security/static/custom-secret-manager-31.png differ diff --git a/docs/platform/6_Security/static/custom-secret-manager-32.png b/docs/platform/6_Security/static/custom-secret-manager-32.png new file mode 100644 index 00000000000..2af05637d36 Binary files /dev/null and 
b/docs/platform/6_Security/static/custom-secret-manager-32.png differ diff --git a/docs/platform/6_Security/static/custom-secret-manager-33.png b/docs/platform/6_Security/static/custom-secret-manager-33.png new file mode 100644 index 00000000000..b474e064a86 Binary files /dev/null and b/docs/platform/6_Security/static/custom-secret-manager-33.png differ diff --git a/docs/platform/6_Security/static/custom-secret-manager-34.png b/docs/platform/6_Security/static/custom-secret-manager-34.png new file mode 100644 index 00000000000..db93baeeb4b Binary files /dev/null and b/docs/platform/6_Security/static/custom-secret-manager-34.png differ diff --git a/docs/platform/6_Security/static/custom-secret-manager-35.png b/docs/platform/6_Security/static/custom-secret-manager-35.png new file mode 100644 index 00000000000..9cbf5669e50 Binary files /dev/null and b/docs/platform/6_Security/static/custom-secret-manager-35.png differ diff --git a/docs/platform/6_Security/static/custom-secret-manager-36.png b/docs/platform/6_Security/static/custom-secret-manager-36.png new file mode 100644 index 00000000000..de50979c446 Binary files /dev/null and b/docs/platform/6_Security/static/custom-secret-manager-36.png differ diff --git a/docs/platform/6_Security/static/disable-harness-secret-manager-37.png b/docs/platform/6_Security/static/disable-harness-secret-manager-37.png new file mode 100644 index 00000000000..b6c40170892 Binary files /dev/null and b/docs/platform/6_Security/static/disable-harness-secret-manager-37.png differ diff --git a/docs/platform/6_Security/static/disable-harness-secret-manager-38.png b/docs/platform/6_Security/static/disable-harness-secret-manager-38.png new file mode 100644 index 00000000000..2397b53c468 Binary files /dev/null and b/docs/platform/6_Security/static/disable-harness-secret-manager-38.png differ diff --git a/docs/platform/6_Security/static/harness-secret-manager-overview-44.png b/docs/platform/6_Security/static/harness-secret-manager-overview-44.png 
new file mode 100644 index 00000000000..1f48dcfe55e Binary files /dev/null and b/docs/platform/6_Security/static/harness-secret-manager-overview-44.png differ diff --git a/docs/platform/6_Security/static/reference-existing-secret-manager-secrets-60.png b/docs/platform/6_Security/static/reference-existing-secret-manager-secrets-60.png new file mode 100644 index 00000000000..955aa67b812 Binary files /dev/null and b/docs/platform/6_Security/static/reference-existing-secret-manager-secrets-60.png differ diff --git a/docs/platform/6_Security/static/reference-existing-secret-manager-secrets-61.png b/docs/platform/6_Security/static/reference-existing-secret-manager-secrets-61.png new file mode 100644 index 00000000000..79922108273 Binary files /dev/null and b/docs/platform/6_Security/static/reference-existing-secret-manager-secrets-61.png differ diff --git a/docs/platform/6_Security/static/reference-existing-secret-manager-secrets-62.png b/docs/platform/6_Security/static/reference-existing-secret-manager-secrets-62.png new file mode 100644 index 00000000000..e27fa92c12b Binary files /dev/null and b/docs/platform/6_Security/static/reference-existing-secret-manager-secrets-62.png differ diff --git a/docs/platform/7_Connectors/_category_.json b/docs/platform/7_Connectors/_category_.json new file mode 100644 index 00000000000..6a0e618861d --- /dev/null +++ b/docs/platform/7_Connectors/_category_.json @@ -0,0 +1 @@ +{"label": "Connectors", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Connectors"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "o1zhrfo8n5"}} \ No newline at end of file diff --git a/docs/platform/7_Connectors/add-a-git-hub-connector.md b/docs/platform/7_Connectors/add-a-git-hub-connector.md new file mode 100644 index 00000000000..b652dce5c14 --- /dev/null +++ b/docs/platform/7_Connectors/add-a-git-hub-connector.md @@ 
-0,0 +1,102 @@
+---
+title: Add a GitHub Connector
+description: This topic describes how to add a GitHub Code Repo Connector.
+# sidebar_position: 2
+helpdocs_topic_id: jd77qvieuw
+helpdocs_category_id: o1zhrfo8n5
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness Code Repository Connectors connect your Harness account with your Git platform. Connectors are used to pull important files, such as Helm charts, Kubernetes manifests, and Terraform scripts.
+
+### Before you begin
+
+* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts)
+
+### Step 1: Add a GitHub Code Repo Connector
+
+This topic assumes you have a Harness Project set up. If not, see [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md).
+
+You can add a Connector from any module in your Project in Project Setup, or in your Organization, or Account Resources.
+
+This topic shows you how to add a GitHub Connector to your Project.
+
+In **Project Setup**, click **Connectors**.
+
+Click **New Connector**, and then click **GitHub**. The GitHub Connector settings appear.
+
+![](./static/add-a-git-hub-connector-34.png)
+
+Enter a name for this Connector.
+
+You can choose to update the **ID** or let it be the same as your GitHub Connector's name. For more information, see [Entity Identifier Reference](../20_References/entity-identifier-reference.md).
+
+Enter a **Description** and **Tags** for your Connector.
+
+Click **Continue**.
+
+For details on each setting, see [GitHub Connector Settings Reference](ref-source-repo-provider/git-hub-connector-settings-reference.md).
+
+### Step 2: Details
+
+Select **Account** or **Repository** in **URL Type**.
+
+Select **Connection Type** as **HTTP** or **SSH**.
+
+Enter your **GitHub Account URL**.
+
+In **Test Repository**, enter your repository name to test the connection.
+
+![](./static/add-a-git-hub-connector-35.png)
+
+Click **Continue**. 
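For reference, the two connection types expect differently shaped URLs; the organization and repository names below are placeholders:

```
# HTTP, Account URL type
https://github.com/my-org

# SSH, Repository URL type
git@github.com:my-org/my-repo.git
```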
+ +For SSH, ensure that the key is in PEM format, not OpenSSH format. To generate an SSHv2 key, use: `ssh-keygen -t ecdsa -b 256 -m PEM`. The `-m PEM` option ensures that the key is generated in PEM format. Next, follow the prompts to create the PEM key. For more information, see the [ssh-keygen man page](https://linux.die.net/man/1/ssh-keygen) and [Connecting to GitHub with SSH](https://help.github.com/en/github/authenticating-to-github/connecting-to-github-with-ssh). + +### Step 3: Credentials + +In **Credentials**, enter your **Username**. + +You can either create a new [Encrypted Text](../6_Security/2-add-use-text-secrets.md) or use an existing one. + +![](./static/add-a-git-hub-connector-36.png) +In **Personal Access Token**, either create a new [Encrypted Text](../6_Security/2-add-use-text-secrets.md) or use an existing one that has your Git token. Harness requires the token for API access. Generate the token in your account on the Git provider and add it to Harness as a Secret. + +To use a personal access token with a GitHub organization that uses SAML single sign-on (SSO), you must first authorize the token. See [Authorizing a personal access token for use with SAML single sign-on](https://docs.github.com/en/enterprise-cloud@latest/authentication/authenticating-with-saml-single-sign-on/authorizing-a-personal-access-token-for-use-with-saml-single-sign-on) from GitHub. + +* The GitHub user account used to create the Personal Access Token must have admin permissions on the repo. +* GitHub doesn't provide a way of scoping a PAT for read-only access to repos. You must select the following permissions: + +![](./static/add-a-git-hub-connector-37.png) +If you selected **SSH** as the connection protocol, you must add the **SSH Key** to use with the connection as a [Harness Encrypted Text secret](../6_Security/2-add-use-text-secrets.md).
For detailed steps to create an SSH Key, see [Add new SSH Key](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account) from GitHub. + +Make sure the **Username** for your **SSH Credential** is `git` for the Test Connection to be successful. + + +![](./static/add-a-git-hub-connector-38.png) +Harness also supports [GitHub deploy keys](https://docs.github.com/en/developers/overview/managing-deploy-keys#deploy-keys). Deploy keys grant access to a single repo. Using a deploy key ensures that the Connector only works with the specific repo you selected in **URL Type**. + +#### Enable API access + +This option is required for using Git-based triggers, Webhooks management, and updating Git statuses. + +You can use the same token you used in **Personal Access Token**. + +Click **Continue**. + +### Step 4: Connect to the Provider + +In **Select Connectivity Mode**, you have two options: + +* **Connect Through Harness Platform:** Harness SaaS will connect to your Git repo whenever it needs to pull code, pull a file, or sync. +* **Connect Through a Harness Delegate:** Harness will make all connections using the Harness Delegate. This option is most often used for Harness Self-Managed Enterprise Edition, but it can also be used with Harness SaaS. See [Harness Self-Managed Enterprise Edition Overview](https://docs.harness.io/article/tb4e039h8x-harness-on-premise-overview). + +![](./static/add-a-git-hub-connector-39.png) + +**Secrets:** If you select **Connect Through Harness Platform**, the Harness Manager exchanges a key pair with the Secrets Manager configured in Harness using an encrypted connection. Next, the Harness Manager uses the encrypted key and the encrypted secret and then discards them. The keys never leave the Harness Manager. Secrets are always encrypted in transit, in memory, and in the Harness database. + +If you select **Connect Through Harness Platform**, click **Save and Continue**.
+ +If you select **Connect Through a Harness Delegate**, click **Continue** and then select or add the Delegate you want to use in **Delegates Setup**. See [Delegate Installation Overview](https://docs.harness.io/article/re8kk0ex4k-delegate-installation-overview). + +![](./static/add-a-git-hub-connector-40.png) +Click **Save and Continue**. + +Harness tests the connection. Click **Finish** once the verification is successful. + +![](./static/add-a-git-hub-connector-41.png) +The GitHub Connector is listed in Connectors. + diff --git a/docs/platform/7_Connectors/add-a-kubernetes-cluster-connector.md b/docs/platform/7_Connectors/add-a-kubernetes-cluster-connector.md new file mode 100644 index 00000000000..a4425ed7fb4 --- /dev/null +++ b/docs/platform/7_Connectors/add-a-kubernetes-cluster-connector.md @@ -0,0 +1,173 @@ +--- +title: Add a Kubernetes Cluster Connector +description: Connect Harness to a Kubernetes cluster using a Harness Kubernetes Cluster Connector. +# sidebar_position: 2 +helpdocs_topic_id: 1gaud2efd4 +helpdocs_category_id: o1zhrfo8n5 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can connect Harness with your Kubernetes clusters using a Kubernetes Cluster Connector or [Google Cloud Platform (GCP) Connector](connect-to-google-cloud-platform-gcp.md). This topic explains how to set up the Kubernetes Cluster Connector. + +Once connected, you can use Kubernetes and Harness for provisioning infrastructure, running a CI build farm, and deploying microservices and other workloads to clusters. + +**What roles should my Kubernetes account have?** The roles and policies needed by the account used in the Connector depend on what operations you are using Harness for in the cluster. For a list of roles and policies, see [Kubernetes Cluster Connector Settings Reference](ref-cloud-providers/kubernetes-cluster-connector-settings-reference.md).
+ +### Before you begin + +* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Kubernetes CD Quickstart](https://docs.harness.io/article/knunou9j30-kubernetes-cd-quickstart) + +### Visual Summary + +Here's a quick video that shows you how to add a Kubernetes Cluster Connector and install the Kubernetes Delegate in the target cluster at the same time: + +### Review: Roles and Policies for the Connector + +The roles and policies needed by the account used in the Connector depend on what operations you are using with Harness and what operations you want Harness to perform in the cluster. + +You can use different methods for authenticating with the Kubernetes cluster, but all of them use a Kubernetes Role. + +The Role used must have either the `cluster-admin` permission in the target cluster or admin permissions in the target namespace. + +For a detailed list of roles and policies, see [Kubernetes Cluster Connector Settings Reference](ref-cloud-providers/kubernetes-cluster-connector-settings-reference.md). + +In general, the following permissions are required: + +* **Deployments:** A Kubernetes service account with permission to create entities in the target namespace is required. The set of permissions should include `list`, `get`, `create`, `watch` (to fetch the pod events), and `delete` permissions for each of the entity types Harness uses. In general, cluster admin permission or namespace admin permission is sufficient. +* **Builds:** A Kubernetes service account with CRUD permissions on Secret, Service, Pod, and PersistentVolumeClaim (PVC). + +If you don’t want to use `resources: [“*”]` for the Role, you can list out the resources you want to grant. Harness needs `configMap`, `secret`, `event`, `deployment`, and `pod` at a minimum for deployments, as stated above. Beyond that, it depends on the resources you are deploying via Harness.
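As an illustrative sketch only (the Role name is an assumption, not a Harness requirement), a namespace-scoped Role that lists the minimum deployment resources above explicitly, instead of using wildcards, might look like this:

```yaml
# Hypothetical Role granting the minimum deployment permissions Harness
# needs in the "default" namespace, with resources and verbs listed
# explicitly instead of ["*"]. Extend the resource list to cover whatever
# else your manifests deploy.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: harness-deployer-role   # assumed name
  namespace: default
rules:
  - apiGroups: ["", "apps"]
    resources: ["configmaps", "secrets", "events", "deployments", "pods"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
```

Bind the Role to the Connector's service account with a RoleBinding in the same namespace.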
+ +If you don’t want to use `verbs: [“*”]` for the Role, you can list out all of the verbs (`create`, `delete`, `get`, `list`, `patch`, `update`, `watch`). + +The YAML provided for the Harness Delegate defaults to `cluster-admin` because that ensures any manifest can be applied. Any restriction must take into account the actual manifests to be deployed. + +### Review: Kubernetes Cluster Connector for EKS + +If you want to connect Harness to Elastic Kubernetes Service (Amazon EKS), use the platform-agnostic Kubernetes Cluster Connector discussed here. Do not use an [AWS Connector](add-aws-connector.md). + +### Review: AKS Clusters Must Have Local Accounts Enabled + +To use an AKS cluster for deployment, the AKS cluster must have local accounts enabled (AKS property `disableLocalAccounts=false`). + +### Review: Switching IAM Policies + +If the IAM role used by your Connector does not have the policies required, you can modify or switch the role. + +You simply change the role assigned to the cluster or the Harness Delegate your Connector is using. + +When you switch or modify the IAM role, it might take up to 5 minutes to take effect. + +### Supported Platforms and Technologies + +For a list of the platforms and technologies supported by Harness, see [Supported Platforms and Technologies](https://docs.harness.io/article/1e536z41av). + +### Step 1: Add a Kubernetes Cluster Connector + +Open a Harness Project. + +In **Project Setup**, click **Connectors**. + +Click **New Connector**, and click **Kubernetes Cluster**. The Kubernetes Cluster Connector settings appear. + +![](./static/add-a-kubernetes-cluster-connector-06.png) +In **Name**, enter a name for this Connector. + +Harness automatically creates the corresponding Id ([entity identifier](../20_References/entity-identifier-reference.md)). + +Click **Continue**. + +### Step 2: Enter Credentials + +Choose the method for Harness to use when connecting to the cluster.
+ +Select one of the following: + +* **Specify master URL and credentials**: + + You provide the Kubernetes master node URL. The easiest method to obtain the master URL is using kubectl: `kubectl cluster-info`. + + Next, enter the **Service Account Key** or other credentials. +* **Use the credentials of a specific Harness Delegate**: Select this option to have the Connector inherit the credentials used by the Harness Delegate running in the cluster. You can install a Delegate as part of adding this Connector. + +For details on all of the credential settings, see [Kubernetes Cluster Connector Settings Reference](ref-cloud-providers/kubernetes-cluster-connector-settings-reference.md). + +#### Obtaining the Service Account Token using kubectl + +To use a Kubernetes Service Account (SA) and token, you will need to either use an existing SA that has the `cluster-admin` permission (or namespace `admin`) or create a new SA and grant it the `cluster-admin` permission (or namespace `admin`). + +For example, here's a manifest that creates a new SA named `harness-service-account` in the `default` namespace. + + +``` +# harness-service-account.yml +apiVersion: v1 +kind: ServiceAccount +metadata: + name: harness-service-account + namespace: default +``` +Next, apply the SA. + + +``` +kubectl apply -f harness-service-account.yml +``` +Next, grant the SA the `cluster-admin` permission. + + +``` +# harness-clusterrolebinding.yml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: harness-admin +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: cluster-admin +subjects: +- kind: ServiceAccount + name: harness-service-account + namespace: default +``` +Next, apply the ClusterRoleBinding. + + +``` +kubectl apply -f harness-clusterrolebinding.yml +``` +Once you have the SA added, you can get its token using the following commands.
+ + +``` +SERVICE_ACCOUNT_NAME={SA name} + +NAMESPACE={target namespace} + +SECRET_NAME=$(kubectl get sa "${SERVICE_ACCOUNT_NAME}" --namespace "${NAMESPACE}" -o=jsonpath='{.secrets[].name}') + +TOKEN=$(kubectl get secret "${SECRET_NAME}" --namespace "${NAMESPACE}" -o=jsonpath='{.data.token}' | base64 -d) + +echo $TOKEN +``` +The `| base64 -d` piping decodes the token. You can now enter it into the Connector. + +### Step 3: Set Up Delegates + +Regardless of which authentication method you selected, you select Harness Delegates to perform authentication for this Connector. + +If you do not have Harness Delegates, click **Install New Delegate** to add one to the cluster, or any cluster in your environment that can connect to the cluster. + +Harness uses Kubernetes Cluster Connectors at Pipeline runtime to authenticate and perform operations with Kubernetes. Authentications and operations are performed by Harness Delegates. + +You can select **Any Available Harness Delegate** and Harness will select the Delegate. For a description of how Harness picks Delegates, see [Delegates Overview](../2_Delegates/delegates-overview.md). + +You can use Delegate Tags to select one or more Delegates. For details on Delegate Tags, see [Select Delegates with Tags](../2_Delegates/delegate-guide/select-delegates-with-selectors.md). + +If you need to install a Delegate, see [Delegate Installation Overview](https://docs.harness.io/article/re8kk0ex4k-delegate-installation-overview) or the [Visual Summary](#visual_summary) above. + +Click **Save and Continue**. + +Harness tests the credentials you provided using the Delegates you selected. 
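One caveat on the service account token commands above: on Kubernetes 1.24 and later, token Secrets are no longer created automatically for service accounts, so the `SECRET_NAME` lookup can come back empty. In that case you can create a long-lived token Secret yourself; here is a sketch for the example SA above (the Secret name is an assumption):

```yaml
# Hypothetical manually created token Secret for the example SA.
# After this is applied, Kubernetes populates the data.token field,
# which the earlier commands can then read.
apiVersion: v1
kind: Secret
metadata:
  name: harness-service-account-token   # assumed name
  namespace: default
  annotations:
    kubernetes.io/service-account.name: harness-service-account
type: kubernetes.io/service-account-token
```

After `kubectl apply`, you can pass this Secret's name directly as `SECRET_NAME` in the token-retrieval commands above.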
+ diff --git a/docs/platform/7_Connectors/add-a-microsoft-azure-connector.md b/docs/platform/7_Connectors/add-a-microsoft-azure-connector.md new file mode 100644 index 00000000000..b3bd3e2915d --- /dev/null +++ b/docs/platform/7_Connectors/add-a-microsoft-azure-connector.md @@ -0,0 +1,376 @@ +--- +title: Add a Microsoft Azure Cloud Connector +description: Connect Harness to Azure using the Azure Cloud Connector. +# sidebar_position: 2 +helpdocs_topic_id: 9epdx5m9ae +helpdocs_category_id: o1zhrfo8n5 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic explains how to connect Harness to the Microsoft Azure cloud. Using this Connector, you can pull Azure artifacts and deploy your applications to Azure using Harness. + +Using Harness **Cloud Cost Management (CCM)**? See [Set Up Cloud Cost Management for Azure](https://docs.harness.io/article/v682mz6qfd-set-up-cost-visibility-for-azure). + +### Before you begin + +* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [CD Pipeline Basics](https://docs.harness.io/article/cqgeblt4uh-cd-pipeline-basics) + +### Limitations + +* Currently, the Microsoft Azure Cloud Connector is for: ACR, AKS, Web Apps, and Virtual Machines for Traditional (SSH) deployments. Support for other services such as ARM and Blueprint is coming soon. +* To use an AKS cluster for deployment, the AKS cluster must have local accounts enabled (AKS property `disableLocalAccounts=false`). + +### Visual Summary + +The following example shows how to connect Harness to Azure using the Azure Cloud Connector and an Azure App registration. + +![](./static/add-a-microsoft-azure-connector-63.png) +### Review: Permissions + +This section assumes you're familiar with Azure RBAC. For details, see [Assign Azure roles using the Azure portal](https://docs.microsoft.com/en-us/azure/role-based-access-control/role-assignments-portal) from Azure.
+ +This graphic from Azure can be helpful as a reminder of how Azure manages RBAC: + +![](./static/add-a-microsoft-azure-connector-64.png) + +For security reasons, Harness uses an application object and service principal rather than a user identity. The process is described in [How to: Use the portal to create an Azure AD application and service principal that can access resources](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal) from Azure. + +### AKS Cluster Setup Requirements + +* AKS managed AAD (enabled or disabled). +* Kubernetes RBAC enabled. +* Azure RBAC (enabled or disabled). + + See **Deployments (CD)** in [Kubernetes Cluster Connector Settings Reference](ref-cloud-providers/kubernetes-cluster-connector-settings-reference.md). +* AKS property `disableLocalAccounts` (enabled or disabled). + +##### Permissions List + +We cover the roles needed for Azure services in later sections. In this section, we provide the permissions needed in case you want to use them with a custom role. + +The following permissions (actions) are necessary for any user (Service Principal or Managed Identity): + +* Service Principal and/or Managed Identity Azure permissions (these are necessary regardless of whether you are using Kubernetes RBAC or Azure RBAC): + + Microsoft.ContainerRegistry/registries/read + + Microsoft.ContainerRegistry/registries/builds/read + + Microsoft.ContainerRegistry/registries/metadata/read + + Microsoft.ContainerRegistry/registries/pull/read + + Microsoft.ContainerService/managedClusters/read + + Microsoft.ContainerService/managedClusters/listClusterUserCredential/action + + Microsoft.Resources/subscriptions/resourceGroups/read +* For Helm deployments, the version of Helm must be >= 3.2.0 (the Harness `HELM_VERSION_3_8_0` feature flag must be activated). +* Pod Assigned Managed Identity and System Assigned Managed Identity cannot be used for the same cluster.
+ +Here is the JSON for creating a custom role with these permissions (replace `xxxx` with the role name, subscription Id, and resource group Id): + + +``` +{ + "id": "/subscriptions/xxxx/providers/Microsoft.Authorization/roleDefinitions/xxxx", + "properties": { + "roleName": "xxxx", + "description": "", + "assignableScopes": [ + "/subscriptions/xxxx/resourceGroups/xxxx" + ], + "permissions": [ + { + "actions": [], + "notActions": [], + "dataActions": [ + "Microsoft.ContainerService/managedClusters/configmaps/read", + "Microsoft.ContainerService/managedClusters/configmaps/write", + "Microsoft.ContainerService/managedClusters/configmaps/delete", + "Microsoft.ContainerService/managedClusters/secrets/read", + "Microsoft.ContainerService/managedClusters/secrets/write", + "Microsoft.ContainerService/managedClusters/secrets/delete", + "Microsoft.ContainerService/managedClusters/apps/deployments/read", + "Microsoft.ContainerService/managedClusters/apps/deployments/write", + "Microsoft.ContainerService/managedClusters/apps/deployments/delete", + "Microsoft.ContainerService/managedClusters/events/read", + "Microsoft.ContainerService/managedClusters/events/write", + "Microsoft.ContainerService/managedClusters/events/delete", + "Microsoft.ContainerService/managedClusters/namespaces/read", + "Microsoft.ContainerService/managedClusters/nodes/read", + "Microsoft.ContainerService/managedClusters/pods/read", + "Microsoft.ContainerService/managedClusters/pods/write", + "Microsoft.ContainerService/managedClusters/pods/delete", + "Microsoft.ContainerService/managedClusters/services/read", + "Microsoft.ContainerService/managedClusters/services/write", + "Microsoft.ContainerService/managedClusters/services/delete", + "Microsoft.ContainerService/managedClusters/apps/statefulsets/read", + "Microsoft.ContainerService/managedClusters/apps/statefulsets/write", + "Microsoft.ContainerService/managedClusters/apps/statefulsets/delete", + 
"Microsoft.ContainerService/managedClusters/apps/replicasets/read", + "Microsoft.ContainerService/managedClusters/apps/replicasets/write", + "Microsoft.ContainerService/managedClusters/apps/replicasets/delete" + ], + "notDataActions": [] + } + ] + } +} +``` +#### Azure Kubernetes Services (AKS) Roles + +If you use Microsoft Azure Cloud Connector and Service Principal or Managed Identity credentials, you can use a custom role or the Owner role. + +If you do not use a custom role, the **Owner** role must be assigned. + +Here are the options for connecting Harness to your target AKS cluster: + +* Install a [Kubernetes Delegate](../2_Delegates/delegate-guide/install-a-kubernetes-delegate.md) in the target AKS cluster and use it for authentication in a Harness [Kubernetes Cluster Connector](add-a-kubernetes-cluster-connector.md). The Harness Kubernetes Cluster Connector is platform-agnostic. + + You won't need to provide Microsoft Azure Service Principal or Managed Identity credentials. +* Install a [Kubernetes Delegate](../2_Delegates/delegate-guide/install-a-kubernetes-delegate.md) in the target AKS cluster and use it for authentication in a Harness **Microsoft Azure Cloud Connector**, as described in this topic. + + You'll need to provide the Microsoft Azure Environment. + + If you use a User Assigned Managed Identity, you will need to provide the Application (client) Id. + + If you use a System Assigned Managed Identity, you do not need to provide any Ids. +* Use a **Microsoft Azure Cloud Connector** and Service Principal or Managed Identity credentials, as described in this topic. In this option, the **Owner** role must be assigned. 
+ +#### Azure RBAC Example + +Here's an example of Azure RBAC permissions used for System Assigned Managed Identity: + + +``` +{ + "id": "/subscriptions/xxxx/providers/Microsoft.Authorization/roleDefinitions/xxxx", + "properties": { + "roleName": "HarnessSysMSIRole", + "description": "", + "assignableScopes": [ + "/subscriptions/xxxx/resourceGroups/xxxx" + ], + "permissions": [ + { + "actions": [], + "notActions": [], + "dataActions": [ + "Microsoft.ContainerService/managedClusters/configmaps/read", + "Microsoft.ContainerService/managedClusters/configmaps/write", + "Microsoft.ContainerService/managedClusters/configmaps/delete", + "Microsoft.ContainerService/managedClusters/secrets/read", + "Microsoft.ContainerService/managedClusters/secrets/write", + "Microsoft.ContainerService/managedClusters/secrets/delete", + "Microsoft.ContainerService/managedClusters/apps/deployments/read", + "Microsoft.ContainerService/managedClusters/apps/deployments/write", + "Microsoft.ContainerService/managedClusters/apps/deployments/delete", + "Microsoft.ContainerService/managedClusters/events/read", + "Microsoft.ContainerService/managedClusters/events/write", + "Microsoft.ContainerService/managedClusters/events/delete", + "Microsoft.ContainerService/managedClusters/namespaces/read", + "Microsoft.ContainerService/managedClusters/nodes/read", + "Microsoft.ContainerService/managedClusters/pods/read", + "Microsoft.ContainerService/managedClusters/pods/write", + "Microsoft.ContainerService/managedClusters/pods/delete", + "Microsoft.ContainerService/managedClusters/services/read", + "Microsoft.ContainerService/managedClusters/services/write", + "Microsoft.ContainerService/managedClusters/services/delete", + "Microsoft.ContainerService/managedClusters/apps/statefulsets/read", + "Microsoft.ContainerService/managedClusters/apps/statefulsets/write", + "Microsoft.ContainerService/managedClusters/apps/statefulsets/delete", + "Microsoft.ContainerService/managedClusters/apps/replicasets/read", + 
"Microsoft.ContainerService/managedClusters/apps/replicasets/write", + "Microsoft.ContainerService/managedClusters/apps/replicasets/delete" + ], + "notDataActions": [] + } + ] + } +} +``` +#### Kubernetes RBAC Example + +Here's an example of Kubernetes RBAC: + + +``` +kind: Role +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: cdp-qa-deployer-role + namespace: default +rules: + - apiGroups: ["", "apps"] + resources: ["pods", "configmaps", "deployments", "secrets", "events", "services", "replicasets", "deployments/scale", "namespaces", "resourcequotas", "limitranges"] + verbs: ["get", "watch", "list", "create", "update", "patch", "delete"] + +--- +kind: RoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: cdp-qa-deployer-role-binding + namespace: default +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: cdp-qa-deployer-role +subjects: + - kind: Group + namespace: default + name: +``` +### Azure Container Repository (ACR) Roles + +If you do not use a custom role, the **Reader** role must be assigned. **This is the minimum requirement.** + +![](./static/add-a-microsoft-azure-connector-65.png) +You must provide the **Reader** role in the role assignment at the **Subscription** level used by the Application (Client) Id entered in the Connector. The application needs permission to list **all** container registries. + +Some common mistakes: + +* If you put the **Reader** role in a different IAM section of Azure. +* If you provide only the **AcrPull** role instead of **Reader**. It might appear that the AcrPull role gives access to a specific registry, but Harness needs to list **all** registries. + +Harness supports 500 images from an ACR repo. If you don't see some of your images you might have exceeded this limit. This is the result of an Azure API limitation. 
+ +If you connect to an ACR repo via the platform-agnostic [Docker Connector](ref-cloud-providers/docker-registry-connector-settings-reference.md), the limit is 100. + +### Azure Web App Permissions + +If you use the Microsoft Azure Cloud Connector and Service Principal or Managed Identity credentials, you can use a custom role or the **Contributor** role. The Contributor role is the minimum requirement. + +Below are the Azure RBAC permissions a System Assigned Managed Identity needs to perform Azure Web App deployments for container and non-container artifacts. + + +``` +[ + "microsoft.web/sites/slots/deployments/read", + "Microsoft.Web/sites/Read", + "Microsoft.Web/sites/config/Read", + "Microsoft.Web/sites/slots/config/Read", + "microsoft.web/sites/slots/config/appsettings/read", + "Microsoft.Web/sites/slots/*/Read", + "Microsoft.Web/sites/slots/config/list/Action", + "Microsoft.Web/sites/slots/stop/Action", + "Microsoft.Web/sites/slots/start/Action", + "Microsoft.Web/sites/slots/config/Write", + "Microsoft.Web/sites/slots/Write", + "microsoft.web/sites/slots/containerlogs/action", + "Microsoft.Web/sites/config/Write", + "Microsoft.Web/sites/slots/slotsswap/Action", + "Microsoft.Web/sites/config/list/Action", + "Microsoft.Web/sites/start/Action", + "Microsoft.Web/sites/stop/Action", + "Microsoft.Web/sites/Write", + "microsoft.web/sites/containerlogs/action", + "Microsoft.Web/sites/publish/Action", + "Microsoft.Web/sites/slots/publish/Action" +] +``` +### Step 1: Add the Azure Cloud Connector + +You can add the Azure Cloud Connector inline, when adding artifacts or setting up the target infrastructure for a deployment Pipeline stage, or you can add the Connector separately and use it whenever you need it. + +To add the Connector separately, in your Account, Org, or Project **Connectors**, click **New Connector**. + +Click **Azure**. + +Enter a name for the Connector.
Harness automatically creates the Id ([Entity Identifier](../20_References/entity-identifier-reference.md)) for the Connector. You can edit the Id before the Connector is saved. Once it is saved, it is immutable. + +Add a Description and [Tags](../20_References/tags-reference.md) if needed. + +Click **Continue**. + +### Option: Credentials or Inherit from Delegate + +In **Details**, you can select how you'd like Harness to authenticate with Azure. + +#### Delegate + +If you have a Harness Delegate installed in your Azure subscription (preferably in your target AKS cluster), you can select **Use the credentials of a specific Harness Delegate**. + +For steps on installing a Delegate, see [Delegate Installation Overview](https://docs.harness.io/article/re8kk0ex4k-delegate-installation-overview). + +![](./static/add-a-microsoft-azure-connector-66.png) +In **Environment**, select **Azure Global** or **US Government**. + +In **Authentication**, select **System Assigned Managed Identity** or **User Assigned Managed Identity**. + +See [Use managed identities in Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/use-managed-identity) and [How to use managed identities with Azure Container Instances](https://docs.microsoft.com/en-us/azure/container-instances/container-instances-managed-identity) from Azure. + +If you selected **User Assigned Managed Identity**, in **Client Id**, enter the Client Id from your Managed Identity. + +![](./static/add-a-microsoft-azure-connector-67.png) +If you selected **User Assigned Managed Identity**, you can also use a [Pod Assigned Managed identity](https://docs.microsoft.com/en-us/azure/aks/use-azure-ad-pod-identity). + +If you selected **System Assigned Managed Identity**, click **Continue**.
+ +#### System Assigned Managed Identity Notes + +* If you select **System Assigned Managed Identity** in the Harness Azure Connector, the identity used is actually the AKS cluster's predefined [Kubelet Managed Identity](https://docs.microsoft.com/en-us/azure/aks/use-managed-identity#summary-of-managed-identities). +* The Kubelet Managed Identity (which has the name format `-agentpool`) must have the **acrPull** permission on ACR (if used for image storage). +* The [Control plane AKS Managed Identity](https://docs.microsoft.com/en-us/azure/aks/use-managed-identity#summary-of-managed-identities) (which has the name format ``) must have the **Reader** permission on the AKS cluster itself. + +#### Credentials + +Using Azure credentials is covered in the following steps. + +### Step 2: Gather the Required Information + +In Microsoft Azure, you can find the information you need on the App registration **Overview** page: + +![](./static/add-a-microsoft-azure-connector-68.png) +### Step 3: Environment + +In **Environment**, select **Azure Global** or **US Government**. + +### Step 4: Application (Client) Id + +This is the **Application (Client) Id** for the Azure app registration you are using. It is found in the Azure Active Directory (AAD) **App registrations** or **Managed Identity**. For more information, see [Quickstart: Register an app with the Azure Active Directory v1.0 endpoint](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-v1-add-azure-ad-app) from Microsoft. + +To access resources in your Azure subscription, you must assign the Azure App registration using this Application Id to a role in that subscription.
+ +For more information, see [Assign the application to a role](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#assign-the-application-to-a-role) and [Use the portal to create an Azure AD application and service principal that can access resources](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal) from Microsoft. + +### Step 5: Tenant (Directory) Id + +The **Tenant Id** is the ID of the Azure Active Directory (AAD) in which you created your application. This Id is also called the **Directory ID**. For more information, see [Get tenant ID](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal#get-tenant-id) and [Use the portal to create an Azure AD application and service principal that can access resources](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal) from Azure. + +### Step 6: Secret or Certificate + +Harness supports PEM files only. Currently, Harness does not support PFX files. + +In **Authentication**, select **Secret** or **Certificate**. + +This is the authentication key for your application. It is found in **Azure Active Directory**, **App Registrations**. Click the App name. Click **Certificates & secrets**, and then click **New client secret**. + +![](./static/add-a-microsoft-azure-connector-69.png) +You cannot view existing secret values, but you can create a new key. For more information, see [Create a new application secret](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#create-a-new-application-secret) from Azure. + +If you select **Secret**, create or use an existing [Harness Text Secret](../6_Security/2-add-use-text-secrets.md).
+ +If you select **Certificate**, create or use an existing [Harness File Secret](../6_Security/3-add-file-secrets.md). + +### Step 7: Delegates Setup + +Select the Delegate(s) to use with this Connector. + +Click **Save and Continue**. + +In **Connection Test**, the connection is verified. + +If you run into errors, make sure that your Delegate is running and that your credentials are valid. For example, check that the secret has not expired in your App registration. + +### Review: Using ${HARNESS\_KUBE\_CONFIG\_PATH} with Azure + +The Harness `${HARNESS_KUBE_CONFIG_PATH}` expression resolves to the path to a Harness-generated kubeconfig file containing the credentials you provided to Harness. + +The credentials can be used by kubectl commands by exporting its value to the `KUBECONFIG` environment variable. + +For example, you could use a Harness Shell Script step and the expression like this: + + +``` +export KUBECONFIG=${HARNESS_KUBE_CONFIG_PATH} +kubectl get pods -n default +``` +Steps can be executed on any Delegate, or you can select specific Delegates using the step's [Delegate Selector](../2_Delegates/delegate-guide/select-delegates-with-selectors.md) setting. + +For Azure deployments, note the following: + +* If the Azure Connector used in the Stage's **Infrastructure** uses Azure Managed Identity for authentication, then the Shell Script step must use a Delegate Selector for a Delegate running in AKS. +* If the Azure Connector used in the Stage's **Infrastructure** uses Azure Service Principal for authentication, then the Shell Script step can use any Delegate.
+ +### See also + +* [Azure ACR to AKS CD Quickstart](https://docs.harness.io/article/m7nkbph0ac-azure-cd-quickstart) +* [Kubernetes CD Quickstart](https://docs.harness.io/article/knunou9j30-kubernetes-cd-quickstart) + diff --git a/docs/platform/7_Connectors/add-aws-connector.md b/docs/platform/7_Connectors/add-aws-connector.md new file mode 100644 index 00000000000..5ccef85575a --- /dev/null +++ b/docs/platform/7_Connectors/add-aws-connector.md @@ -0,0 +1,112 @@ +--- +title: Add an AWS Connector +description: Add a Harness AWS Connector. +# sidebar_position: 2 +helpdocs_topic_id: 98ezfwox9u +helpdocs_category_id: o1zhrfo8n5 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +AWS is integrated with Harness using a Harness AWS Connector. You can use AWS with Harness for obtaining artifacts, communicating with AWS services, provisioning infrastructure, and deploying microservices and other workloads. + +This topic explains how to set up the AWS Connector. + +**What IAM roles should my AWS account have?** The IAM roles and policies needed by the AWS account used in the Connector depend on which AWS service you are using with Harness and which operations you want Harness to perform in AWS. For a list of roles and policies, see [AWS Connector Settings Reference](ref-cloud-providers/aws-connector-settings-reference.md). The [DescribeRegions](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRegions.html) action is required for all AWS Connectors regardless of what AWS service you are using for your target infrastructure. 
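The `ec2:DescribeRegions` requirement above can be captured in a small standalone policy. The sketch below is illustrative, not a Harness-mandated document: the `Sid` and the file path are arbitrary, and you would attach a policy like this, alongside whatever service-specific policies you need, to the IAM role or user your Connector authenticates as (for example, with `aws iam put-role-policy`).

```shell
# Write a minimal IAM policy document granting only ec2:DescribeRegions.
# The Sid and file path are illustrative; attach the resulting policy to the
# IAM role or user your AWS Connector uses (via the console or the AWS CLI).
cat > /tmp/describe-regions-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDescribeRegions",
      "Effect": "Allow",
      "Action": "ec2:DescribeRegions",
      "Resource": "*"
    }
  ]
}
EOF
grep -c 'ec2:DescribeRegions' /tmp/describe-regions-policy.json
```

Because `DescribeRegions` does not support resource-level restrictions, `"Resource": "*"` is the expected scope for this statement.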
+ +### Before you begin + +* [Learn Harness' Key Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) + +### Review: IAM Roles and Policies for the Connector + +The [DescribeRegions](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRegions.html) action is required for all AWS Connectors regardless of what AWS service you are using for your target infrastructure. The IAM roles and policies needed by the AWS account used in the Connector depend on which AWS service you are using with Harness and which operations you want Harness to perform in AWS. + +For a list of roles and policies, see [AWS Connector Settings Reference](ref-cloud-providers/aws-connector-settings-reference.md). + +The AWS [IAM Policy Simulator](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_testing-policies.html) is a useful tool for evaluating policies and access. + +### Review: Kubernetes Cluster Connector for EKS + +If you want to connect Harness to Amazon Elastic Kubernetes Service (Amazon EKS), use the platform-agnostic [Kubernetes Cluster Connector](connect-to-a-cloud-provider.md). + +### Review: Switching IAM Policies + +If the IAM role used by your AWS Connector does not have the policies required by the AWS service you want to access, you can modify or switch the role. Simply change the role assigned to the AWS account or to the Harness Delegate that your AWS Connector is using. + +When you switch or modify the IAM role, it might take up to 5 minutes for the change to take effect. + +The [DescribeRegions](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRegions.html) action is required for all AWS Connectors regardless of what AWS service you are using for your target infrastructure. + +### Supported Platforms and Technologies + +For a list of the platforms and technologies supported by Harness, see [Supported Platforms and Technologies](https://docs.harness.io/article/1e536z41av). 
+ +### Step 1: Add an AWS Connector + +Open a Harness Project. + +In **Project Setup**, click **Connectors**. + +Click **New Connector**, and click **AWS**. The AWS Connector settings appear. + +![](./static/add-aws-connector-77.png) +In **Name**, enter a name for this connector. + +Harness automatically creates the corresponding Id. + +Click **Continue**. + +### Step 2: Enter Credentials + +The [DescribeRegions](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRegions.html) action is required for all AWS Connectors regardless of what AWS service you are using for your target infrastructure. + +There are three options for authenticating with AWS: + +* **AWS Access Key:** enter the AWS access and secret key for an AWS IAM user. +* **Assume IAM role on Delegate:** add or select a Harness Delegate running in AWS. The AWS IAM role used when installing the Delegate in AWS is used for authentication by the AWS Connector. +For example, you can add or select a Harness Kubernetes Delegate running in Amazon Elastic Kubernetes Service (Amazon EKS). +* **Use IRSA:** have the Harness Kubernetes Delegate in AWS EKS use a specific IAM role when making authenticated requests to resources. This option uses [IRSA (IAM roles for service accounts)](https://docs.aws.amazon.com/emr/latest/EMR-on-EKS-DevelopmentGuide/setting-up-enable-IAM.html). + +All of the settings for these options are described in detail in [AWS Connector Settings Reference](ref-cloud-providers/aws-connector-settings-reference.md). + +### Test Region and AWS GovCloud Support + +By default, Harness uses the **us-east-1** region to test the credentials for this Connector. + +If you want to use an AWS GovCloud account for this Connector, select it in **Test Region**. + +GovCloud is used by organizations that must meet regulatory and compliance requirements, such as government agencies at the federal, state, and local levels, as well as contractors and educational institutions. 
+ +#### Restrictions + +You can access AWS GovCloud with AWS GovCloud credentials (AWS GovCloud account access key and AWS GovCloud IAM user credentials). + +You cannot access AWS GovCloud with standard AWS credentials. Likewise, you cannot access standard AWS regions using AWS GovCloud credentials. + +### Step 3: Set Up Delegates + +Harness uses AWS Connectors at Pipeline runtime to authenticate and perform operations with AWS. Authentications and operations are performed by Harness Delegates. + +You can select **Any Available Harness Delegate** and Harness will select the Delegate. For a description of how Harness picks Delegates, see [Delegates Overview](../2_Delegates/delegates-overview.md). + +You can use Delegate Tags to select one or more Delegates. For details on Delegate Tags, see [Select Delegates with Tags](../2_Delegates/delegate-guide/select-delegates-with-selectors.md). + +If you need to install a Delegate, see [Delegate Installation Overview](https://docs.harness.io/article/re8kk0ex4k-delegate-installation-overview). + +Click **Save and Continue**. + +Harness tests the credentials you provided using the Delegates you selected. + +If the credentials fail, you'll see an error such as `AWS was not able to validate the provided access credentials`. Check your credentials by using them with the AWS CLI or console. + +The AWS [IAM Policy Simulator](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_testing-policies.html) is a useful tool for evaluating policies and access. + +The credentials might work fine for authentication but fail later, when you use the Connector with a Pipeline, because the IAM role the Connector is using does not have the roles and policies needed for the Pipeline's operations. + +For example, the [DescribeRegions](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRegions.html) action is required for all AWS Cloud Providers regardless of what AWS service you are using for your target infrastructure. 
+ +If you run into any error with an AWS Connector, verify that the IAM roles and policies it is using are correct. + +For a list of roles and policies, see [AWS Connector Settings Reference](ref-cloud-providers/aws-connector-settings-reference.md). + +Click **Finish**. + diff --git a/docs/platform/7_Connectors/connect-to-a-azure-repo.md b/docs/platform/7_Connectors/connect-to-a-azure-repo.md new file mode 100644 index 00000000000..660ab83d6d1 --- /dev/null +++ b/docs/platform/7_Connectors/connect-to-a-azure-repo.md @@ -0,0 +1,111 @@ +--- +title: Connect to Azure Repos +description: This topic explains how to connect to Azure Repos. +# sidebar_position: 2 +helpdocs_topic_id: swe06e41w7 +helpdocs_category_id: o1zhrfo8n5 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Azure Repos is a set of version control tools that you can use to manage your code. Azure Repos provides the following kinds of version control: + +* **Git**: distributed version control +* **Team Foundation Version Control** (TFVC): centralized version control + +This topic explains how to connect your Harness Accounts, Organizations, or Projects to your Azure Repos. You can do this by adding an Azure Repos Connector to Harness. + +### Before you begin + +* Make sure you have set up your Azure Project and Repo. +* Make sure you have **Create/Edit** permissions to add an Azure Repos Connector in Harness. + +### Step 1: Add an Azure Repos Connector + +This topic assumes you have a Harness Project set up. If not, see [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md). + +You can add a Connector from any module in your Project in **Project Setup**. + +This topic shows you how to create an Azure Repos Connector from the CD module. To do this, perform the following steps: + +In Harness, click **Deployments** and select your Project. + +Click **Project Setup** and then click **Connectors**. + +Click **New Connector**. 
+ +In **Code Repositories**, click **Azure Repos**. + +The **Azure Repos Connector** settings appear. + +Enter a **Name** for your Azure Repos Connector. + +![](./static/connect-to-a-azure-repo-00.png) +Click **Continue**. + +### Step 2: Details + +Select **Project** or **Repository** in the **URL Type**. + +![](./static/connect-to-a-azure-repo-01.png) +#### Option: Project + +Select **HTTP** or **SSH** in the **Connection Type**. + +Enter your Azure Repos Project URL. For example: `https://dev.azure.com/mycomp/myproject`. + +Be careful when copying the project URL: if you are viewing a repo, the URL will include `_git` in the path. + +If you selected **Project**, enter the **name** of your repository in **Test Repository**, not the full URL. + +![](./static/connect-to-a-azure-repo-02.png) +You can get the project URL from your browser's location field. + +#### Option: Repository + +Enter your **Azure Repos Repository URL**. For example: `https://johnsmith@dev.azure.com/johnsmith/MyProject/_git/myrepo`. + +You can get the repo URL from the Azure repo: + +![](./static/connect-to-a-azure-repo-03.png) +Click **Continue**. + +### Step 3: Credentials + +Enter the username and password for the repo. + +![](./static/connect-to-a-azure-repo-04.png) +#### Enable API Access + +This option is required for using Git-based triggers, Webhooks management, and updating Git statuses. It is a common option for code repos. + +In **Personal Access Token**, either create a new [Encrypted Text](../6_Security/2-add-use-text-secrets.md) or use an existing one that contains your Azure Personal Access Token. Harness requires the token for API access. Generate the token in your Azure account and add it to Harness as a Secret. + +To create a Personal Access Token in Azure, see [Create a PAT](https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=azure-devops&tabs=Windows#create-a-pat). 
+ +![](./static/connect-to-a-azure-repo-05.png) +If you selected **SSH** as the **Connection Type**, you must add the SSH private key to use with the connection as a [Harness Encrypted Text](../6_Security/2-add-use-text-secrets.md). + +To create an SSH key pair, enter the following command in your terminal window: + + +``` + ssh-keygen -t rsa +``` +For more information, see [Create SSH Keys](https://docs.microsoft.com/en-us/azure/devops/repos/git/use-ssh-keys-to-authenticate?view=azure-devops#step-1-create-your-ssh-keys). + +Click **Continue**. + +### Delegates Setup + +Select one of the following: + +* **Use any available Delegate:** to let Harness select a Delegate at runtime. +* **Only use Delegates with all of the following tags:** to use specific Delegates using their Tags. + +Click **Save and Continue**. + +Harness tests the connection. Click **Finish** once the verification is successful. + +The Azure Repos Connector is listed in Connectors. + diff --git a/docs/platform/7_Connectors/connect-to-a-cloud-provider.md b/docs/platform/7_Connectors/connect-to-a-cloud-provider.md new file mode 100644 index 00000000000..1cf2b6ac97a --- /dev/null +++ b/docs/platform/7_Connectors/connect-to-a-cloud-provider.md @@ -0,0 +1,53 @@ +--- +title: Connect to a Cloud Provider +description: Steps explaining how to create a Cloud Provider connector. +# sidebar_position: 2 +helpdocs_topic_id: s9j6cggx1p +helpdocs_category_id: o1zhrfo8n5 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Connect Harness to your AWS and GCP accounts or a platform-agnostic Kubernetes cluster by adding a Cloud Provider Connector. + +Connectors contain the information necessary for Harness to integrate and work with 3rd party tools. + +Harness uses Connectors at Pipeline runtime to authenticate and perform operations with a 3rd party tool. 
+ +### Before you begin + +* [Learn Harness' Key Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) + +### Add a Cloud Provider Connector + +You can add a Cloud Provider Connector at the Account, Org, or Project scope. This topic explains how to add it at the Project scope. The process is the same for Org and Account. You can add AWS, GCP, or a Kubernetes Cluster as a Cloud Provider Connector. + +### Step: Add an AWS Connector + +For steps on setting up an AWS Connector, see [Add an AWS Connector](add-aws-connector.md). + +For details on settings and permissions, see [AWS Connector Settings Reference](ref-cloud-providers/aws-connector-settings-reference.md). + +### Step: Add a GCP Connector + +For steps on setting up a GCP Connector, see [Add a Google Cloud Platform (GCP) Connector](connect-to-google-cloud-platform-gcp.md). + +For details on settings and permissions, see [Google Cloud Platform (GCP) Connector Settings Reference](ref-cloud-providers/gcs-connector-settings-reference.md). + +### Step: Add a Kubernetes Cluster Connector + +For steps on setting up a Kubernetes Cluster Connector, see [Add a Kubernetes Cluster Connector](add-a-kubernetes-cluster-connector.md). + +For details on settings and permissions, see [Kubernetes Cluster Connector Settings Reference](ref-cloud-providers/kubernetes-cluster-connector-settings-reference.md). + +### Step: Add a Microsoft Azure Cloud Connector + +For steps on setting up a Microsoft Azure Cloud Connector, see [Add a Microsoft Azure Cloud Connector](add-a-microsoft-azure-connector.md). 
+ +### See also + +* [Install a Kubernetes Delegate](../2_Delegates/delegate-guide/install-a-kubernetes-delegate.md) +* [Select Delegates with Selectors](../2_Delegates/delegate-guide/select-delegates-with-selectors.md) + diff --git a/docs/platform/7_Connectors/connect-to-an-artifact-repo.md b/docs/platform/7_Connectors/connect-to-an-artifact-repo.md new file mode 100644 index 00000000000..415f13bb07f --- /dev/null +++ b/docs/platform/7_Connectors/connect-to-an-artifact-repo.md @@ -0,0 +1,169 @@ +--- +title: Connect to an Artifact Repo +description: Doc explaining steps to create Artifactory Repository connector. +# sidebar_position: 2 +helpdocs_topic_id: xxvnk67c5x +helpdocs_category_id: o1zhrfo8n5 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You connect Harness to an artifact repo by adding an **Artifact Repositories** Connector. + +You can connect to an artifact repo inline when developing your Pipeline, or separately in your Account/Org/Project **Connectors**. Once you add the Connector, it is available in the Pipelines and Connectors of the same Account/Org/Project. + +### Before you begin + +* [Learn Harness' Key Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) + +### Review: AWS, Azure, and Google Cloud Storage Artifacts + +Connectors for artifacts stored in Google Cloud Storage or Amazon S3 are added as **Cloud Providers** Connectors, not **Artifact Repositories**. + +If you are using Google Cloud Storage or Amazon S3, see [Cloud Platform Connectors](https://docs.harness.io/category/cloud-platform-connectors). + +For Azure ACR, use the **Docker Registry** Connector, described below. + +### Review: Artifact Repository Connectors Scope + +You can add an Artifact Repository Connector at the Account/Org/Project scope. + +This topic explains how to add it at the Project scope. The process is the same for Org and Account. 
+ +Steps on adding the Artifact Repository Connector inline when developing a Pipeline are covered in the relevant How-to and Technical Reference topics. For example, adding a Docker Registry is covered in the [Kubernetes CD Quickstart](https://docs.harness.io/article/knunou9j30-kubernetes-cd-quickstart) and [Docker Connector Settings Reference](ref-cloud-providers/docker-registry-connector-settings-reference.md). + +### Step: Add an Artifactory Repository + +For details on settings and permissions, see [Artifactory Connector Settings Reference](ref-cloud-providers/artifactory-connector-settings-reference.md). + +1. Open a Harness Project. +2. In **Project Setup**, click **Connectors**. +3. Click **New Connector**, and click **Artifactory** in **Artifact Repositories**. The Artifactory Repository settings appear. +4. In **Name**, enter a name for this connector. +5. Click **Continue**. +6. Enter the **Artifactory Repository URL**. +7. In **Authentication**, select one of the following options: + 1. **Username and Password** - Once you choose this option, enter the **Username** and **Password**. For the Password, you can either create a new Secret or use an existing one. + 2. **Anonymous (no credentials required)**. +8. Click **Continue**. +9. In **Delegates Setup**, use any Delegate or enter [Tags](../2_Delegates/delegate-guide/select-delegates-with-selectors.md) for specific Delegates that you want to allow to connect to this Connector. +10. Click **Save and Continue**. +11. Once the Test Connection succeeds, click **Finish**. The Connector is listed in Connectors. + +### Step: Add a Docker Registry + +For details on settings and permissions, see [Docker Connector Settings Reference](ref-cloud-providers/docker-registry-connector-settings-reference.md). + +The Docker Connector is platform-agnostic and can be used to connect to any Docker container registry, but Harness provides first-class support for registries in AWS and GCR. 
See [Add an AWS Connector](add-aws-connector.md) and [Google Cloud Platform (GCP) Connector Settings Reference](connect-to-google-cloud-platform-gcp.md). + +1. Open a Harness Project. +2. In **Project Setup**, click **Connectors**. +3. Click **New Connector**, and click **Docker Registry** in **Artifact Repositories**. The Docker Registry settings appear. +4. In **Name**, enter a name for this connector. +5. Click **Continue**. +6. Enter the **Docker Registry URL**. +7. Select a **Provider Type**. +8. In **Authentication**, select one of the following options: + 1. **Username and Password** - Once you choose this option, enter the **Username** and **Password**. For the Password, you can either create a new Secret or use an existing one. + 2. **Anonymous (no credentials required)**. +9. Click **Continue**. +10. In **Delegates Setup**, use any Delegate or enter [Tags](../2_Delegates/delegate-guide/select-delegates-with-selectors.md) for specific Delegates that you want to allow to connect to this Connector. +11. Click **Save and Continue**. +12. Once the Test Connection succeeds, click **Finish**. The Connector is listed in Connectors. + +### Step: Add an HTTP Helm Repo + +You can add Helm Charts from an HTTP Helm Repo. Once you set up the Connector, you can use it in a Stage to add your Helm Chart. + +Since Harness lets you use the `<+artifact.image>` expression in your Helm Chart Values YAML files, Helm Charts are added to a Stage Service in **Manifests** and not **Artifacts**. If you use the `<+artifact.image>` expression in your Helm Chart Values YAML files, then Harness will pull the image you add to **Artifacts**. See [Deploy Helm Charts](https://docs.harness.io/article/7owpxsaqar-deploy-helm-charts). + +For details on settings and permissions, see [HTTP Helm Repo Connector Settings Reference](ref-source-repo-provider/http-helm-repo-connector-settings-reference.md). + +1. Open a Harness Project. +2. In **Project Setup**, click **Connectors**. +3. 
Click **New Connector**, and click **HTTP Helm** in **Artifact Repositories**. The HTTP Helm Repo settings appear. +4. In **Name**, enter a name for this connector. +5. Click **Continue**. +6. Enter the **Helm Repository URL**. +7. In **Authentication**, select one of the following options: + 1. **Username and Password** - Once you choose this option, enter the **Username** and **Password**. For the Password, you can either create a new Secret or use an existing one. + 2. **Anonymous (no credentials required)**. +8. Click **Continue**. +9. In **Delegates Setup**, use any Delegate or enter [Tags](../2_Delegates/delegate-guide/select-delegates-with-selectors.md) for specific Delegates that you want to allow to connect to this Connector. +10. Click **Save and Continue**. +11. Once the Test Connection succeeds, click **Finish**. The Connector is listed in Connectors. + +### Step: Add a Helm 3 OCI Helm Registry + +You can add Helm Charts from an [OCI Helm Registry](https://helm.sh/docs/topics/registries/). Once you set up the Connector, you can use it in a Stage to add your Helm Chart. + +Since Harness lets you use the `<+artifact.image>` expression in your Helm Chart Values YAML files, Helm Charts are added to a Stage Service in **Manifests** and not **Artifacts**. If you use the `<+artifact.image>` expression in your Helm Chart Values YAML files, then Harness will pull the image you add to **Artifacts**. See [Deploy Helm Charts](https://docs.harness.io/article/7owpxsaqar-deploy-helm-charts). + +1. Open a Harness Project. +2. In **Project Setup**, click **Connectors**. +3. Click **New Connector**, and click **OCI Helm Registry** in **Artifact Repositories**. The OCI Helm Registry settings appear. +4. In **Name**, enter a name for this connector. +5. Click **Continue**. +6. Enter the **Helm Repository URL**. +You don't need to include the `oci://` scheme in **Helm Repository URL**. Harness will preface the domain name you enter with `oci://`. +7. 
In **Authentication**, in **Username and Password**, enter the **Username** and **Password**. For the Password, you can either create a new Secret or use an existing one. +8. Click **Continue**. +9. In **Delegates Setup**, use any Delegate or enter [Tags](../2_Delegates/delegate-guide/select-delegates-with-selectors.md) for specific Delegates that you want to allow to connect to this Connector. +10. Click **Save and Continue**. +11. Once the Test Connection succeeds, click **Finish**. The Connector is listed in Connectors. + +#### OCI Registry Notes + +* Helm supports OCI registries officially in Helm version 3.8 and above. Experimental support is available in versions below 3.8. +* You cannot use OCI Helm Registries with [Helm Chart Triggers](../11_Triggers/trigger-pipelines-on-new-helm-chart.md). +* Harness OCI support is cloud-agnostic. You can use OCI registries in ACR, GCR, and ECR. + +#### Google GCR Authentication Supported + +For GCR as an OCI registry, Harness supports authentication using the following: + +* Access token +* A JSON key file, where the username is `_json_key_base64` and the password is the base64-encoded JSON key file content. + +Harness does not support a username of `_json_key` with the unencoded JSON key file content as the password. + +#### AWS ECR authentication supported + +For **Helm Repository URL**, enter the URL for the repo in the format `https://<account-id>.dkr.ecr.<region>.amazonaws.com`. + +For example, something like `https://0838475738302113.dkr.ecr.us-west-2.amazonaws.com`. + +For **Username**, enter `AWS`. + +For **Password**, create a new Harness text secret. + +Use the following command to retrieve the password from AWS ECR: + +`aws ecr get-login-password --region <region>` + +For example: `aws ecr get-login-password --region us-west-2` + +Copy the password and paste it into a Harness text secret. + +The AWS ECR authorization token is only valid for 12 hours. 
[This is an AWS limitation](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ecr/get-login-password.html#description). + +For information on ECR authentication, go to [Private registry authentication](https://docs.aws.amazon.com/AmazonECR/latest/userguide/registry_auth.html) from AWS. + +### Step: Add a Nexus Repository + +For details on settings and permissions, see [Nexus Connector Settings Reference](../8_Pipelines/w_pipeline-steps-reference/nexus-connector-settings-reference.md). + +1. Open a Harness Project. +2. In **Project Setup**, click **Connectors**. +3. Click **New Connector**, and click **Nexus** in **Artifact Repositories**. The Nexus Repository settings appear. +4. In **Name**, enter a name for this connector. +5. Click **Continue**. +6. Enter the **Nexus Repository URL**. +7. Select a **Version**. +8. In **Authentication**, select one of the following options: + 1. **Username and Password** - Once you choose this option, enter the **Username** and **Password**. For the Password, you can either create a new Secret or use an existing one. + 2. **Anonymous (no credentials required)**. +9. Click **Continue**. +10. In **Delegates Setup**, use any Delegate or enter [Tags](../2_Delegates/delegate-guide/select-delegates-with-selectors.md) for specific Delegates that you want to allow to connect to this Connector. +11. Click **Save and Continue**. +12. Once the Test Connection succeeds, click **Finish**. The Connector is listed in Connectors. 
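One detail from the GCR OCI authentication earlier in this topic is worth a concrete sketch: the Connector password for the `_json_key_base64` username is the base64-encoded content of the service account's JSON key file, encoded as a single line. The key file below is a placeholder, not a real key:

```shell
# Placeholder standing in for a real GCP service account JSON key file.
printf '{"type":"service_account","project_id":"demo"}' > /tmp/key.json

# The username is the literal string _json_key_base64; the password is the
# base64-encoded key file content, stripped of newlines so it is one line.
password="$(base64 < /tmp/key.json | tr -d '\n')"
echo "$password"

# Sanity check: decoding the password yields the original key file content.
echo "$password" | base64 -d
```

Paste the single-line encoded value into a Harness text secret and use that secret as the Connector's password.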
+ +### See also + +* [Select Delegates with Selectors](../2_Delegates/delegate-guide/select-delegates-with-selectors.md) +* [Add a Secrets Manager](../6_Security/5-add-secrets-manager.md) + diff --git a/docs/platform/7_Connectors/connect-to-code-repo.md b/docs/platform/7_Connectors/connect-to-code-repo.md new file mode 100644 index 00000000000..f0c4c15b420 --- /dev/null +++ b/docs/platform/7_Connectors/connect-to-code-repo.md @@ -0,0 +1,165 @@ +--- +title: Connect to a Git Repo +description: An overview of Code Repo Connectors. +# sidebar_position: 2 +helpdocs_topic_id: zbhehjzsnv +helpdocs_category_id: o1zhrfo8n5 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness Code Repository Connectors connect your Harness account with your Git platform. Connectors are used to pull important files, such as Helm charts, Kubernetes manifests, and Terraform scripts. + +### Connect to Your Git Repositories + +You can add a Code Repo Connector at the Account, Org, or Project scope. This topic explains how to add it at the Project scope. The process is the same for Org and Account. + +### Limitations + +* Harness performs a `git clone` to fetch files. If the fetch times out, it can be because the repo is too large for the network connection to fetch it before timing out. To fetch very large repos, enable the feature flag `OPTIMIZED_GIT_FETCH_FILES`; when it is enabled, Harness uses provider-specific APIs to improve performance. Currently, this feature is behind the feature flag. Contact [Harness Support](mailto:support@harness.io) to enable it. + +### Permissions + +In general, the Git provider user account you use to set up the Connector needs the same permissions it would need if you were working from Git. + +So, if you are using the Harness Connector to pull manifests from a repo, the user account you use in the Connector must have `read repo` permission for your Git provider. 
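For HTTP connections, the username and password/token pair behaves exactly as it would for `git` on the command line. A minimal sketch of the equivalent authenticated clone URL (the user, token, host, and repo below are placeholders):

```shell
# Placeholder credentials and repo; substitute your own values.
GIT_USER="my-git-user"
GIT_TOKEN="dummy-token"
HOST_PATH="github.com/my-org/my-repo.git"

# This is the shape of URL an HTTP git client authenticates with.
AUTH_URL="https://${GIT_USER}:${GIT_TOKEN}@${HOST_PATH}"
echo "$AUTH_URL"

# git clone "$AUTH_URL"   # would require network access and a valid token
```

If a `git clone` with such a URL fails outside Harness, the Connector will fail with the same credentials, which makes this a quick way to isolate credential problems from Harness configuration problems.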
+ +For Harness Git Experience, see [Source Code Manager Settings](ref-source-repo-provider/source-code-manager-settings.md). + +A public Git repo does not require a username and password/token. Harness does not validate public Git repo credentials. + +### Add a Git Repo + +In most cases, you'll want to add a Connector for a popular Git provider like GitHub, described in [Add a GitHub Connector](add-a-git-hub-connector.md). You can also add a platform-agnostic connection to a Git provider using **Git Repo**. + +For more details on the settings to create this connector, see [Git Connector Settings Reference](ref-source-repo-provider/git-connector-settings-reference.md). + +1. In your **Project**, select a module such as CD. +2. In **Project Setup**, click **Connectors**. +3. Click **New Connector**, and click **Git** in **Code Repositories**. The Git settings appear. + + ![](./static/connect-to-code-repo-08.png) + +4. In **Name**, enter a name for this connector. +5. Select **Account** (which is an Organization) or **Repository** in **URL Type**. +6. Select **Connection Type** as **HTTP** or **SSH**. For more information, see [Connection Type](ref-source-repo-provider/git-hub-connector-settings-reference.md#connection-type). +7. Enter the Git Account (org) or repo URL. +8. If you selected **Account**, in **Test Repository**, enter a repository name to test the connection. +9. Click **Continue**. +10. In **Credentials**, enter your **Username**. +11. In **Secret Key**, you can either create a new [Encrypted Text](../6_Security/2-add-use-text-secrets.md) or use an existing one. +12. Click **Continue**. +13. In **Setup Delegates**, you can choose **Connect via any delegate** or **Connect only via delegates which has all of the following tags**. +14. Click **Save and Continue**. +15. Once the Test Connection succeeds, click **Finish**. The Connector is listed in Connectors. + +### Add GitHub Repo + +See [Add a GitHub Connector](add-a-git-hub-connector.md). 
+ +### Add AWS CodeCommit Repo + +For details on settings and permissions, see [AWS CodeCommit Connector Settings Reference](https://docs.harness.io/article/jed9he2i45-aws-code-commit-connector-settings-reference). + + +:::note +For steps on setting up the IAM user for CodeCommit connections, go to [Setup for HTTPS users using Git credentials from AWS](https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-gc.html). + +::: + +1. Open a Harness Project. +2. In **Project Setup**, click **Connectors**. +3. Click **New Connector**, and click **AWS CodeCommit** in **Code Repositories**. The AWS CodeCommit settings appear. +4. In **Name**, enter a name for this connector. +5. Select **Region** or **Repository**. + * **Region:** Connect to an entire AWS region. This enables you to use one Connector for all repos in the region. If you select this, you must provide a repository name to test the connection. + * **Repository:** Connect to one repo. + +6. Enter the repository URL in **AWS CodeCommit Repository URL**. For example, `https://git-codecommit.us-west-2.amazonaws.com/v1/repos/doc-text`. + You can get this URL from your CodeCommit repo by using its **Clone URL** menu and selecting **Clone HTTPS**. +7. Click **Save and Continue**. +8. Enter the IAM user's access key in **Access Key**. +9. Enter the corresponding secret key in **Secret Key**. You can either create a new [Encrypted Text](../6_Security/2-add-use-text-secrets.md) or use an existing one. +10. Click **Save and Continue**. +11. Once the Test Connection succeeds, click **Finish**. The Connector is listed in Connectors. + +#### Required Credentials + +The IAM account you use to connect to CodeCommit must have the following: + +* The **AWSCodeCommitPowerUser** policy +* The **DescribeRegions** action + +The [DescribeRegions](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRegions.html) action is required for all AWS connections regardless of what AWS resource you are using with Harness. 
+
+For more details, go to [Setup for HTTPS users using Git credentials from AWS](https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-gc.html) and [AWS managed policies for CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/security-iam-awsmanpol.html) from AWS.
+
+#### Connect to CodeCommit using the platform-agnostic Git Connector
+
+You can also connect to CodeCommit using the Harness platform-agnostic **Git Connector** as opposed to the **AWS CodeCommit Connector**.
+
+With the Git Connector, you use the IAM user's User Name and Password and not the Access Key and Secret Key.
+
+In the IAM User you want to use, click **Security credentials**, and then generate credentials in **HTTPS Git credentials for AWS CodeCommit**.
+
+In the Harness Git Connector, in **Username**, enter the **User name** from the IAM user credentials you generated.
+
+In **Password**, use a [Harness Encrypted Text secret](../6_Security/2-add-use-text-secrets.md) for the **Password** of the IAM user account.
+
+![](./static/connect-to-code-repo-09.png)
+
+### Add a Bitbucket Repo
+
+
+:::note
+For more details on the settings to create this connector, see [Bitbucket Connector Settings Reference](ref-source-repo-provider/bitbucket-connector-settings-reference.md).
+
+:::
+
+
+:::note
+Harness supports both Cloud and Data Center (On-Prem) versions of Bitbucket.
+
+:::
+
+1. Open a Harness Project.
+2. In **Project Setup**, click **Connectors**.
+3. Click **New Connector**, and click **Bitbucket** in **Code Repositories**. The Bitbucket settings appear.
+4. In **Name**, enter a name for this connector.
+5. Select **Account** or **Repository** in **URL Type**.
+6. Select **Connection Type** as **HTTP** or **SSH**. For more information, see [Connection Type](ref-source-repo-provider/bitbucket-connector-settings-reference.md#connection-type).
+7. Enter your **Bitbucket Account URL**.
+For **HTTP**, the format for the URL should be `https://bitbucket.org/<userName>/<repoName>.git`.
+8. In **Test Repository**, enter your repository name to test the connection.
+9. Click **Save and Continue**.
+10. Enter your **Username**.
+11. In **Secret Key**, you can either create a new [Encrypted Text](../6_Security/2-add-use-text-secrets.md) or use an existing one.
+12. Click **Continue**.
+13. In **Setup Delegates**, you can choose **Connect via any delegate** or **Connect only via delegates which has all of the following tags**.
+14. Click **Save and Continue**.
+15. Once the Test Connection succeeds, click **Finish**. The Connector is listed in Connectors.
+
+### Add GitLab Repo
+
+
+:::note
+For more details on the settings to create this connector, see [GitLab Connector Settings Reference](ref-source-repo-provider/git-lab-connector-settings-reference.md).
+
+:::
+
+1. Open a Harness Project.
+2. In **Project Setup**, click **Connectors**.
+3. Click **New Connector**, and click **GitLab** in **Code Repositories**. The GitLab Connector settings appear.
+4. In **Name**, enter a name for this connector.
+5. Select **Account** or **Repository** in **URL Type**.
+6. Select **Connection Type** as **HTTP** or **SSH**. For more information, see [Connection Type](ref-source-repo-provider/git-lab-connector-settings-reference.md#connection-type).
+7. Enter your **GitLab Account URL**.
+8. In **Test Repository**, enter your repository name to test the connection.
+9. Click **Continue**.
+10. In **Credentials**, enter your **Username**.
+11. In **Secret Key**, you can either create a new [Encrypted Text](../6_Security/2-add-use-text-secrets.md) or use an existing one.
+12. Click **Continue**.
+13. In **Setup Delegates**, you can choose **Connect via any delegate** or **Connect only via delegates which has all of the following tags**.
+14. Click **Save and Continue**.
+15. Once the Test Connection succeeds, click **Finish**. The Connector is listed in Connectors.
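For any of the HTTP-type Git connectors above, you can sanity-check the repo URL and credentials from a Delegate host before saving the Connector. This is a sketch: the URL is a placeholder, and git prompts for the same username and token/password you enter in the Connector's credentials.

```shell
# Lists the remote HEAD ref; this succeeds only if the URL and credentials are valid.
git ls-remote https://gitlab.com/my-org/my-repo.git HEAD
```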
+
diff --git a/docs/platform/7_Connectors/connect-to-google-cloud-platform-gcp.md b/docs/platform/7_Connectors/connect-to-google-cloud-platform-gcp.md
new file mode 100644
index 00000000000..d14d08584da
--- /dev/null
+++ b/docs/platform/7_Connectors/connect-to-google-cloud-platform-gcp.md
@@ -0,0 +1,105 @@
+---
+title: Add a Google Cloud Platform (GCP) Connector
+description: Add a Harness GCP Connector.
+# sidebar_position: 2
+helpdocs_topic_id: cii3t8ra3v
+helpdocs_category_id: o1zhrfo8n5
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Google Cloud Platform (GCP) is integrated with Harness using a Harness GCP Connector. You can use GCP with Harness for obtaining artifacts, communicating with GCP services, provisioning infrastructure, and deploying microservices and other workloads.
+
+This topic explains how to set up the GCP Connector.
+
+**What IAM roles should my GCP account have?** The IAM roles and policies needed by the GCP account used in the Connector depend on what GCP service you are using with Harness and what operations you want Harness to perform in GCP. For a list of roles and policies, see [Google Cloud Platform (GCP) Connector Settings Reference](ref-cloud-providers/gcs-connector-settings-reference.md).
+
+### Before you begin
+
+* [Learn Harness' Key Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts)
+
+### Limitations
+
+Harness supports GKE 1.19 and later.
+
+If you use a version prior to GKE 1.19, please enable Basic Authentication. If Basic authentication is inadequate for your security requirements, use the [Kubernetes Cluster Connector](add-a-kubernetes-cluster-connector.md).
+
+### Supported Platforms and Technologies
+
+For a list of the platforms and technologies supported by Harness, see [Supported Platforms and Technologies](https://docs.harness.io/article/1e536z41av-supported-platforms-and-technologies).
+ +### Review: Connecting to Kubernetes Clusters + +You can connect to a Kubernetes cluster running in GCP using a GCP Connector or the platform-agnostic Kubernetes Cluster Connector. + +See [Kubernetes Cluster Connector Settings Reference](ref-cloud-providers/kubernetes-cluster-connector-settings-reference.md). + +### Review: IAM Roles and Policies for the Connector + +The IAM roles and policies needed by the GCP account used in the Connector depend on what GCP service you are using with Harness and what operations you want Harness to perform in GCP. + +For a list of roles and policies, see [Google Cloud Platform (GCP) Connector Settings Reference](ref-cloud-providers/gcs-connector-settings-reference.md). + +The [GCP Policy Simulator](https://cloud.google.com/iam/docs/simulating-access) is a useful tool for evaluating policies and access. + +### Review: Switching IAM Policies + +If the IAM role used by your GCP Connector does not have the policies required by the GCP service you want to access, you can modify or switch the role. + +You simply change the role assigned to the GCP account or the Harness Delegate your GCP Connector is using. + +When you switch or modify the IAM role, it might take up to 5 minutes to take effect. + +### Review: GCP Workload Identity + +If you installed the Harness [Kubernetes Delegate](https://docs.harness.io/article/4ax6q6dxa4-install-an-immutable-kubernetes-delegate) in a Kubernetes cluster (in GKE) that has [GCP Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity?hl=tr#enable_on_cluster) enabled and uses the same service account and node pool annotation, the Google Cloud Platform (GCP) Connector will inherit these credentials if it uses that Delegate. + +### Step 1: Add a GCP Connector + +Open a Harness Project. + +In **Project Setup**, click **Connectors**. + +Click **New Connector**, and click **GCP**. The GCP Connector settings appear. 
+ +![](./static/connect-to-google-cloud-platform-gcp-07.png) +In **Name**, enter a name for this connector. + +Harness automatically creates the corresponding Id ([entity identifier](../20_References/entity-identifier-reference.md)). + +Click **Continue**. + +### Step 2: Enter Credentials + +There are two options for authenticating with GCP: + +* **Specify credentials here:** use a GCP Service Account Key. +* **Use the credentials of a specific Harness Delegate:** have the Connector inherit the credentials used by the Harness Delegate running in GCP. The GCP IAM role used when installing the Delegate in GCP is used for authentication by the GCP Connector. +For example, you can add or select a Harness Kubernetes Delegate running in Google Kubernetes Engine (GKE). + +All of the settings for these options are described in detail in [Google Cloud Platform (GCP) Connector Settings Reference](ref-cloud-providers/gcs-connector-settings-reference.md). + +### Step 3: Set Up Delegates + +Harness uses GCP Connectors at Pipeline runtime to authenticate and perform operations with GCP. Authentications and operations are performed by Harness Delegates. + +You can select **Any Available Harness Delegate** and Harness will select the Delegate. For a description of how Harness picks Delegates, see [Delegates Overview](../2_Delegates/delegates-overview.md). + +You can use Delegate Tags to select one or more Delegates. For details on Delegate Tags, see [Select Delegates with Tags](../2_Delegates/delegate-guide/select-delegates-with-selectors.md). + +If you need to install a Delegate, see [Delegate Installation Overview](https://docs.harness.io/article/re8kk0ex4k-delegate-installation-overview). + +Click **Save and Continue**. + +Harness tests the credentials you provided using the Delegates you selected. + +If the credentials fail, you'll see an error. Check your credentials by using them with the GCP CLI or console. 
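One way to check the credentials outside Harness is with the gcloud CLI. This is a sketch: `key.json` stands in for the Service Account Key file whose contents you pasted into the Connector.

```shell
# Authenticate as the service account used by the Connector.
gcloud auth activate-service-account --key-file=key.json

# Confirm the account can list the projects it should have access to.
gcloud projects list --format='value(projectId)'
```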
+
+The [GCP Policy Simulator](https://cloud.google.com/iam/docs/simulating-access) is a useful tool for evaluating policies and access.
+
+The credentials might work fine for authentication, but might fail later when you use the Connector with a Pipeline because the IAM role the Connector is using does not have the roles and policies needed for the Pipeline's operations.
+
+If you run into any error with a GCP Connector, verify that the IAM roles and policies it is using are correct.
+
+For a list of roles and policies, see [Google Cloud Platform (GCP) Connector Settings Reference](ref-cloud-providers/gcs-connector-settings-reference.md).
+
+Click **Finish**.
+
diff --git a/docs/platform/7_Connectors/connect-to-harness-container-image-registry-using-docker-connector.md b/docs/platform/7_Connectors/connect-to-harness-container-image-registry-using-docker-connector.md
new file mode 100644
index 00000000000..5ed34a5afdc
--- /dev/null
+++ b/docs/platform/7_Connectors/connect-to-harness-container-image-registry-using-docker-connector.md
@@ -0,0 +1,127 @@
+---
+title: Connect to Harness Container Image Registry Using Docker Connector
+description: This topic explains how to set up the account-level Docker Connector to connect to the Harness Container Image Registry.
+# sidebar_position: 2
+helpdocs_topic_id: my8n93rxnw
+helpdocs_category_id: o1zhrfo8n5
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+By default, at CIE Pipeline runtime, Harness pulls certain images from public Docker Hub repos. These images are only used for backend processes. At runtime, the Harness Delegate makes an outbound connection to the public repo and pulls the images.
+
+The Harness Container Image Registry is dedicated exclusively to Harness-supported images. You might want to override the default behavior and download your build images from this repo instead. To view the list of images in this registry, enter the following command.
+
+
+```
+curl -X GET https://app.harness.io/registry/_catalog
+```
+You can override the default behavior at the Account level and the Stage level:
+
+* **Account-level override:** If you do not want the Harness Delegate to pull images from a public repo for security reasons, you can add a special Harness Connector to your Harness account, and the Delegate will pull these images from the Harness Container Image Registry only.
+* **Stage-level override:** You can configure a Build Stage to override the default Delegate and use a dedicated Connector that downloads build images from the Harness Container Image Registry. This is useful when the Delegate cannot access the public repo (for example, if the build infrastructure is running in a private cloud).
+
+Since you and the Harness Delegate are already connected to Harness securely, there are no additional connections to worry about.
+
+
+:::note
+If you override the default behavior with the `harnessImage` Connector, you may also avoid triggering any rate limiting or throttling.
+
+:::
+
+This topic explains how to set up the Docker Connector to connect to the Harness Container Image Registry.
+
+### Before you begin
+
+* [CI Enterprise Concept](../../continuous-integration/ci-quickstarts/ci-concepts.md)
+* [Harness Delegate Overview](../2_Delegates/delegates-overview.md)
+* [Docker Connector Settings Reference](ref-cloud-providers/docker-registry-connector-settings-reference.md)
+
+### Review: Allowlist app.harness.io
+
+Since you and the Harness Delegate are already connected to Harness securely, app.harness.io should already be allowlisted and there are no additional connections to worry about.
+
+If app.harness.io is not allowlisted, allowlist it before proceeding.
+
+
+:::note
+In general, and as a Best Practice, you should allowlist Harness Domains and IPs.
See **Allowlist Harness Domains and IPs** in [Delegate Requirements and Limitations](../2_Delegates/delegate-reference/delegate-requirements-and-limitations.md).
+
+:::
+
+### Step 1: Create a Docker Connector in Harness
+
+
+:::note
+You must create the Harness Docker Connector at the Account level. Make sure that you have the **Account** > **Connectors** > **Create/Edit/View** permission for Harness Platform. See [Permission Reference](../4_Role-Based-Access-Control/ref-access-management/permissions-reference.md) for details on the list of permissions.
+
+:::
+
+1. In **Account Settings**, **Account Resources**, click **Connectors**.
+
+   ![](./static/connect-to-harness-container-image-registry-using-docker-connector-45.png)
+
+2. Click **New Connector**, and under **Artifact Repositories** click **Docker Registry**.
+
+   ![](./static/connect-to-harness-container-image-registry-using-docker-connector-46.png)
+
+   The Docker Registry Connector settings appear.
+
+   ![](./static/connect-to-harness-container-image-registry-using-docker-connector-47.png)
+
+3. In **Name**, enter a name for this connector.
+   Harness automatically generates the corresponding ID ([entity identifier](../20_References/entity-identifier-reference.md)).
+   If you want to override the Account-level Connector, modify the ID and set it to `harnessImage`.
+   Harness gives precedence to the Connector with the `harnessImage` identifier, and uses it to pull from the Harness Container Image Registry, as opposed to pulling from DockerHub directly.
+4. Click **Continue**.
+
+### Step 2: Enter Credentials
+
+Select or enter the following options:
+
+
+
+| | |
+| --- | --- |
+| **Docker Registry URL** | Enter `https://app.harness.io/registry` |
+| **Provider Type** | Select **Other (Docker V2 compliant)** |
+| **Authentication** | Select **Anonymous (no credentials required)** |
+
+Click **Continue**.
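Because authentication is anonymous, you can verify the registry URL from any Delegate host. This is a sketch: `_catalog` is the endpoint shown earlier in this topic, while the `tags/list` path and the image name are assumptions based on the Docker Registry V2 API layout.

```shell
# List the repositories the registry serves (no credentials required).
curl -s -X GET https://app.harness.io/registry/_catalog

# List the available tags for a single image (image name is illustrative).
curl -s -X GET https://app.harness.io/registry/harness/ci-addon/tags/list
```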
+
+### Step 3: Set Up Delegates
+
+Harness uses Docker Registry Connectors at Pipeline runtime to pull images and perform operations. You can select **Any Available Harness Delegate** and Harness will select the best Delegate at runtime. For a description of how Harness picks Delegates, see [Delegates Overview](../2_Delegates/delegates-overview.md).
+
+You can use Delegate Tags to select one or more Delegates. For details on Delegate Tags, see [Select Delegates with Tags](../2_Delegates/delegate-guide/select-delegates-with-selectors.md).
+
+If you need to install a Delegate, see [Delegate Installation Overview](https://ngdocs.harness.io/article/re8kk0ex4k-delegate-installation-overview).
+
+Click **Save and Continue**.
+
+### Step 4: Verify Test Connection
+
+Harness tests the credentials you provided using the Delegates you selected.
+
+![](./static/connect-to-harness-container-image-registry-using-docker-connector-48.png)
+If the credentials fail, you'll see an error. Click **Edit Credentials** to modify your credentials.
+
+Click **Finish**.
+
+### Step 5: Override the Connector in the Build Stage (*Optional*)
+
+This step is only applicable when you want to override the default Delegate and download build images using the Connector you just created.
+
+In the Build Stage, go to the Infrastructure tab and specify your build-image Connector in the **Override Image Connector** field. The Delegate will use this Connector to download images from the Harness repository.
+
+![](./static/connect-to-harness-container-image-registry-using-docker-connector-49.png)
+### Step 6: Run the Pipeline
+
+You can now run your Pipeline. Harness will now pull images from the Harness Registry at Pipeline runtime using the configured Connector.
+
+If a connector with the `harnessImage` identifier already exists on your **Account**, you need to update the connector instead of creating a new connector.
+
+### See also
+
+* [Permission Reference](../4_Role-Based-Access-Control/ref-access-management/permissions-reference.md)
+* [Harness CI Image List](../../continuous-integration/ci-technical-reference/harness-ci.md)
+* [CI Build Image Updates](../../continuous-integration/ci-technical-reference/ci-build-image-updates.md)
+
diff --git a/docs/platform/7_Connectors/connect-to-jenkins.md b/docs/platform/7_Connectors/connect-to-jenkins.md
new file mode 100644
index 00000000000..3a482b3d953
--- /dev/null
+++ b/docs/platform/7_Connectors/connect-to-jenkins.md
@@ -0,0 +1,80 @@
+---
+title: Connect to Jenkins
+description: Connect Harness to Jenkins using a Harness Jenkins Connector.
+# sidebar_position: 2
+helpdocs_topic_id: 7frr40zml5
+helpdocs_category_id: o1zhrfo8n5
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Continuous Integration (CI) can be performed in Harness using the CI module and [CI Pipelines](../../continuous-integration/ci-quickstarts/ci-pipeline-basics.md).
+
+If you are using Harness Continuous Delivery (CD) but not Harness Continuous Integration (CI), you can still perform CI using the **Jenkins** step in your CD Stage.
+
+You can connect Harness to Jenkins using a Harness Jenkins Connector. This Connector allows you to run Jenkins jobs in [Jenkins steps](https://docs.harness.io/article/as4dtppasg-run-jenkins-jobs-in-cd-pipelines).
+
+This topic shows you how to add a Jenkins Connector to Harness.
+
+### Limitations
+
+* Harness does not support SAML authentication for Jenkins connections.
+
+### Review: Jenkins Permissions
+
+Make sure the user account for this connection has the following required permissions in the Jenkins Server.
+
+* Overall: Read.
+* Job: Build.
+
+For token-based authentication, go to **http://Jenkins-IP-address/jobs/me/configure** to check and change your API access token. The token is added as part of the HTTP header.
+
+See [Jenkins Matrix-based security](https://wiki.jenkins.io/display/JENKINS/Matrix-based+security).
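The API token travels in a standard HTTP basic-auth header. The sketch below builds that header by hand so you can see exactly what the Connector sends; the username, token, and host are placeholders.

```shell
# Build the basic-auth value Jenkins expects for token-based authentication.
AUTH=$(printf 'jenkins-user:API_TOKEN' | base64)
echo "Authorization: Basic $AUTH"

# Equivalent end-to-end check against a live server:
# curl -s -u jenkins-user:API_TOKEN https://jenkins.example.com/api/json
```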
+
+#### Okta or Two-Factor Authentication
+
+If you use Okta or 2FA for connections to Jenkins, use **API Token** for **Authentication** in the Harness Jenkins Connector.
+
+### Step 1: Add a Jenkins Connector
+
+You can add a Jenkins Connector at the Project, Org, or Account level. We'll cover Projects here. The process is the same for Org and Account.
+
+You can also add the Jenkins Connector when setting up the Jenkins step. We'll cover adding it to the Project's Connectors here.
+
+Open a Harness Project.
+
+In **Project Setup**, click **Connectors**.
+
+Click **New Connector**, and then click **Jenkins**. The Jenkins Connector settings appear.
+
+In **Name**, enter a name for this connection. You will use this name to select this connection in Jenkins steps.
+
+Click **Continue**.
+
+### Step 2: Enter the Jenkins URL
+
+Enter the URL of the Jenkins server.
+
+If you are using the Jenkins SaaS (cloud) edition, the URL is in your browser's location field.
+
+If you are using the standalone edition of Jenkins, the URL is located in **Manage Jenkins**, **Jenkins Location**:
+
+![](./static/connect-to-jenkins-10.png)
+### Step 3: Authentication
+
+If you use Okta or 2FA for connections to Jenkins, use **API Token** for **Authentication** in the Harness Jenkins Connector.
+
+Enter the credentials to authenticate with the server.
+
+* **Username:** enter the user account username.
+* **Password/API Token:** select/create a Harness Encrypted Text secret using the Jenkins API token or password.
+For token-based authentication, go to `http://Jenkins-IP-address/jobs/me/configure` to check and change your API access token. The token is added as part of the HTTP header.
+* **Bearer Token (HTTP Header):** select/create a Harness Encrypted Text secret using the OpenShift OAuth Access Token in **Bearer Token (HTTP Header)**.
+The **Bearer Token (HTTP Header)** option is only for Jenkins servers hosted/embedded in an OpenShift cluster and using this authentication method.
For more information, see [Authentication](https://docs.openshift.com/container-platform/3.7/architecture/additional_concepts/authentication.html) from OpenShift. + +Click **Submit**. + +The Jenkins Connector is added. + +### See also + +* [Run Jenkins Jobs in CD Pipelines](https://docs.harness.io/article/as4dtppasg-run-jenkins-jobs-in-cd-pipelines) + diff --git a/docs/platform/7_Connectors/connect-to-jira.md b/docs/platform/7_Connectors/connect-to-jira.md new file mode 100644 index 00000000000..03140ee6a2a --- /dev/null +++ b/docs/platform/7_Connectors/connect-to-jira.md @@ -0,0 +1,66 @@ +--- +title: Connect to Jira +description: Connect Harness to Jira as a Harness Jira Connector. +# sidebar_position: 2 +helpdocs_topic_id: e6s32ec7i7 +helpdocs_category_id: o1zhrfo8n5 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can connect Harness to Jira using a Harness Jira Connector. This Connector allows you to create and update Jira issues, and to use Jira issues in Approval steps. + +Looking for How-tos? See [Create Jira Issues in CD Stages](https://docs.harness.io/article/yu40zr6cvm-create-jira-issues-in-cd-stages), [Update Jira Issues in CD Stages](https://docs.harness.io/article/urdkli9e74-update-jira-issues-in-cd-stages), and [Adding Jira Approval Stages and Steps](../9_Approvals/adding-jira-approval-stages.md). + +### Before you begin + +* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) + +### Limitations + +* Your Jira REST API account must have permissions to create and edit issues in the relevant Jira projects. The **Administer Jira** permission includes all relevant permissions (as does the **Administrator** or **Member** permission on [Jira next-gen](https://confluence.atlassian.com/jirasoftwarecloud/overview-of-permissions-in-next-gen-projects-959283605.html)). 
+For details, see Atlassian's documentation on [Operation Permissions](https://developer.atlassian.com/cloud/jira/platform/rest/v3/?utm_source=%2Fcloud%2Fjira%2Fplatform%2Frest%2F&utm_medium=302#permissions), [Issues](https://developer.atlassian.com/cloud/jira/platform/rest/v3/?utm_source=%2Fcloud%2Fjira%2Fplatform%2Frest%2F&utm_medium=302#api-group-Issues), and [Managing Project Permissions](https://confluence.atlassian.com/adminjiracloud/managing-project-permissions-776636362.html#Managingprojectpermissions-Projectpermissionsoverview). +* When you set up the Jira Connector, **Username** requires the **full email address** you use to log into Jira. + +### Step: Add Jira Connector + +You can add a Jira Connector at the Project, Org, or Account level. We'll cover Projects here. The process is the same for Org and Account. + +You can also add the Jira Connector when setting up the Jira Create, Jira Approval, or Jira Update steps. We'll cover adding it to the Project's Connectors here. + +Open a Harness Project. + +In **Project Setup**, click **Connectors**. + +Click **New Connector**, and then click **Jira**. The Jira Connector settings appear. + +In **Name**, enter a name for this connection. You will use this name to select this connection in Jira steps. + +Click **Continue**. + +In **URL**, enter the base URL by which your users access your Jira applications. For example: `https://mycompany.atlassian.net`. + +In Jira, the base URL is set to the same URL that Web browsers use to view your Jira instance. 
For details, see [Configuring the Base URL](https://confluence.atlassian.com/adminjiraserver071/configuring-the-base-url-802593107.html) from Atlassian.
+
+If you are using an on-premises Jira server with HTTPS redirects enabled, use the HTTPS URL to ensure the [Jira client follows redirects](https://confluence.atlassian.com/adminjiraserver/running-jira-applications-over-ssl-or-https-938847764.html#:~:text=If%20you%20want%20to%20only,to%20the%20corresponding%20HTTPS%20URLs.).
+
+Enter your credentials. For username, use the **full email address** you use to log into Jira.
+
+For **API Key**, use a Harness [Text Secret](../6_Security/2-add-use-text-secrets.md). See [Manage API tokens for your Atlassian account](https://support.atlassian.com/atlassian-account/docs/manage-api-tokens-for-your-atlassian-account/) from Atlassian.
+
+Click **Continue**.
+
+Select the Harness Delegate(s) to use when making a connection to Jira using this Connector.
+
+Click **Save and Continue**.
+
+Harness tests the connection.
+
+![](./static/connect-to-jira-42.png)
+Click **Finish**.
+
+The Jira Connector is listed in Connectors.
+
+### See also
+
+* [Create Jira Issues in CD Stages](https://docs.harness.io/article/yu40zr6cvm-create-jira-issues-in-cd-stages)
+* [Update Jira Issues in CD Stages](https://docs.harness.io/article/urdkli9e74-update-jira-issues-in-cd-stages)
+* [Adding Jira Approval Stages and Steps](../9_Approvals/adding-jira-approval-stages.md)
+
diff --git a/docs/platform/7_Connectors/connect-to-monitoring-and-logging-systems.md b/docs/platform/7_Connectors/connect-to-monitoring-and-logging-systems.md
new file mode 100644
index 00000000000..251d45df9f2
--- /dev/null
+++ b/docs/platform/7_Connectors/connect-to-monitoring-and-logging-systems.md
@@ -0,0 +1,266 @@
+---
+title: Connect to Monitoring and Logging Systems
+description: You can connect Harness to Monitoring and Logging Systems.
+# sidebar_position: 2
+helpdocs_topic_id: g21fb5kfkg
+helpdocs_category_id: o1zhrfo8n5
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Connect Harness to monitoring and logging systems by adding a verification provider Connector.
+
+You can add a verification provider Connector inline when developing your pipeline, or separately in your Account/Org/Project's resources. Once you add the Connector, it is available in Pipelines of the same Account/Org/Project.
+
+
+### Monitoring and Logging Systems Scope
+
+You can add a verification provider Connector at the Account/Org/Project scope.
+
+This topic will explain how to add it at the Project scope. The process is the same for Org and Account.
+
+### Step: Add AppDynamics
+
+1. Open a Harness Project.
+2. In **Project Setup**, click **Connectors**.
+3. Click **+** **Connector**, and click **AppDynamics** in **Monitoring and Logging Systems**. The AppDynamics connector settings appear.
+
+   ![](./static/connect-to-monitoring-and-logging-systems-11.png)
+
+4. In **Name**, enter a name for this connector. You will use this name when selecting the Verification Provider in Harness Environments and Workflows. If you plan to use multiple providers of the same type, ensure that you give each provider a different name.
+5. Click **Continue**.
+6. In the **Controller URL** field, enter the URL of the AppDynamics controller in the format `http://<Controller-Host>:<port>/controller`.
+   For example:
+   **https://xxxx.saas.appdynamics.com/controller**
+
+   ![](./static/connect-to-monitoring-and-logging-systems-12.png)
+
+7. In **Account Name**, enter the name of the AppDynamics account you want to use.
+
+
+:::note
+For Harness On-Prem, enter **customer1**.
+:::
+
+8. In **Authentication**, you can choose one of the following options:
+	* **Username and Password**: In **User Name** and **Password**, enter the credentials to authenticate with the AppDynamics server.
In **Password**, you can choose [Create or Select a secret](../6_Security/2-add-use-text-secrets.md).
+	* **API Client**: In **Client Id** and **Client Secret** fields, enter a valid Id and secret string that the application uses to prove its identity when requesting a token. In **Client Secret**, you can choose [Create or Select a secret](../6_Security/2-add-use-text-secrets.md).
+
+9. Click **Continue**. The Setup Delegates settings appear.
+10. You can choose **Connect via any available delegate** or **Connect only via delegates which has all of the following tags.** If you select a Delegate, Harness will always use that Delegate for this Connector.
+11. Click **Save and Continue**.
+12. Once the Test Connection succeeds, click **Finish**. AppDynamics is listed under the list of Connectors.
+
+### Step: Add Prometheus
+
+1. Open a Harness Project.
+2. In **Project Setup**, click **Connectors**.
+3. Click **+** **Connector**, and click **Prometheus** in **Monitoring and Logging Systems**. The Prometheus connector settings appear.
+
+   ![](./static/connect-to-monitoring-and-logging-systems-13.png)

+4. In **Name**, enter a name for this connector. If you are going to use multiple providers of the same type, ensure that you give each provider a different name.
+5. Click **Continue**.
+6. In the **URL** field, enter the URL of your Prometheus account.
+
+   ![](./static/connect-to-monitoring-and-logging-systems-14.png)
+
+
+:::note
+You cannot use a Grafana URL.
+:::
+
+
+7. Click **Next**. The Setup Delegates settings appear.
+8. You can choose **Connect via any available delegate** or **Connect only via delegates which has all of the following tags.** If you select a Delegate, Harness will always use that Delegate for this Connector.
+9. Click **Save and Continue**.
+10. Once the Test Connection succeeds, click **Finish**. Prometheus is listed under the list of Connectors.
+
+### Step: Add New Relic
+
+1. Open a Harness Project.
+2. 
In **Project Setup**, click **Connectors**.
+3. Click **+** **Connector**, and click **New Relic** in **Monitoring and Logging Systems**. The New Relic connector settings appear.
+
+   ![](./static/connect-to-monitoring-and-logging-systems-15.png)
+
+4. In **Name**, enter a name for this connector. If you are going to use multiple providers of the same type, ensure you give each provider a different name.
+5. Click **Continue**.
+6. In the **New Relic** **URL** field, enter the URL of your New Relic account.
+
+   ![](./static/connect-to-monitoring-and-logging-systems-16.png)
+
+7. To get the **New Relic Account ID** for your New Relic account, copy the number after the **/accounts/** portion of the URL in the New Relic Dashboard.
+8. In **Encrypted API Key**, you can choose **Create or Select a secret**.
+
+For secrets and other sensitive settings, select or create a new [Text Secret](../6_Security/2-add-use-text-secrets.md).
+
+Enter the API key needed to connect with the server.
+
+For steps on generating the New Relic API key, follow this doc from New Relic: [Insights query API](https://docs.newrelic.com/docs/apis/insights-apis/query-insights-event-data-api/).
+
+If you have trouble finding the steps for generating the **Insights query key**, look for the API key types help in the New Relic help panel:
+
+![](./static/connect-to-monitoring-and-logging-systems-17.png)
+
+9. Click **Continue**. The Setup Delegates settings appear.
+10. You can choose **Connect via any available delegate** or **Connect only via delegates which has all of the following tags.** If you select a Delegate, Harness will always use that Delegate for this Connector.
+11. Click **Save and Continue**.
+12. Once the Test Connection succeeds, click **Finish**. New Relic is listed under the list of Connectors.
+
+Usage scope is inherited from the secrets used in the settings. A Pro or higher subscription level is needed.
For more information, see [Introduction to New Relic's REST API Explorer](https://docs.newrelic.com/docs/apis/rest-api-v2/api-explorer-v2/introduction-new-relics-rest-api-explorer) from New Relic.
+
+### Step: Add Splunk
+
+1. Open a Harness Project.
+2. In **Project Setup**, click **Connectors**.
+3. Click **+** **Connector**, and click **Splunk** in **Monitoring and Logging Systems**. The Splunk connector settings appear.
+
+   ![](./static/connect-to-monitoring-and-logging-systems-18.png)
+
+4. In **Name**, enter a name for this connector. If you are going to use multiple providers of the same type, ensure you give each provider a different name.
+5. Click **Continue**.
+6. In the **URL** field, enter the URL for accessing the REST API on the Splunk server. Include the port number in the format **https://.cloud.splunk.com:8089**. The default port number is 8089, which is also required for hosted Splunk. For example: **https://mycompany.splunkcloud.com:8089**.
+
+   ![](./static/connect-to-monitoring-and-logging-systems-20.png)
+
+
+Splunk APIs require that you authenticate with a non-SAML account. To access your Splunk Cloud deployment using the Splunk REST API and SDKs, submit a support case requesting access on the Support Portal. For managed deployments, Splunk Support opens port 8089 for REST access. You can specify a range of IP addresses to control who can access the REST API. For self-service deployments, Splunk Support defines a dedicated user and sends you credentials that enable that user to access the REST API. For information, see [Using the REST API with Splunk Cloud](http://docs.splunk.com/Documentation/Splunk/7.2.0/RESTTUT/RESTandCloud).
+
+Ensure that the Splunk user account used to authenticate Harness with Splunk is assigned to a role that contains the following REST-related capabilities:
+
+* Search.
+* Access to the indexes you want to search. 
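+
+For reference, a Splunk Connector is stored by Harness as YAML along the following lines. This is only a sketch: the `spec` field names and all values here are illustrative assumptions, so verify them against the YAML view of a Connector you created in the UI:
+
+```
+connector:
+  name: ExampleSplunkConnector
+  identifier: ExampleSplunkConnectorId
+  orgIdentifier: Doc
+  projectIdentifier: Example
+  type: Splunk
+  spec:
+    splunkUrl: https://mycompany.splunkcloud.com:8089
+    username: harness_user
+    passwordRef: splunkpassword
+```
+
+As with other Connectors, `passwordRef` points to the `identifier` of a Harness Text Secret rather than a plaintext password.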
+
+In the following example, we've created a new Splunk role named **Harness User** and assigned it search capability:
+
+![](./static/connect-to-monitoring-and-logging-systems-22.png)
+
+We've given this role access to **All non-internal indexes**. However, we could restrict the access to only the relevant indexes:
+
+![](./static/connect-to-monitoring-and-logging-systems-23.png)
+
+7. In the **Username** field, enter the username of your Splunk account.
+8. In the **Password** field, you can choose **Create or Select a secret**.
+
+
+:::note
+For secrets and other sensitive settings, select or create a new [Text Secret](../6_Security/2-add-use-text-secrets.md).
+
+:::
+
+9. Click **Connect and Save**. The Setup Delegates settings appear.
+10. You can choose **Connect via any available delegate** or **Connect only via delegates which has all of the following tags.** If you select a Delegate, Harness will always use that Delegate for this Connector.
+11. Click **Save and Continue**.
+12. Once the Test Connection succeeds, click **Finish**. Splunk is listed in the list of Connectors.
+
+### Step: Add Google Cloud Operations (formerly Stackdriver)
+
+
+:::note
+For details on settings and permissions, see [Google Cloud Platform (GCP) Connector Settings Reference](ref-cloud-providers/gcs-connector-settings-reference.md).
+
+:::
+
+Google Cloud Metrics and Google Cloud Logs are supported with the GCP connector. See [Add a GCP Connector](connect-to-a-cloud-provider.md#step-add-a-gcp-connector).
+
+The following roles must be attached to the account used to connect Harness and Google Cloud Operations as a Google Cloud Provider:
+
+* **Stackdriver Logs** - The minimum role requirement is **logging.viewer**.
+* **Stackdriver Metrics** - The minimum role requirements are **compute.networkViewer** and **monitoring.viewer**.
+
+See [Access control](https://cloud.google.com/monitoring/access-control) from Google.
+
+### Step: Add Datadog
+
+1. Open a Harness Project.
+2. 
In **Project Setup**, click **Connectors**.
+3. Click **+** **Connector**, and click **Datadog** in **Monitoring and Logging Systems**. The Datadog connector settings appear.
+
+   ![](./static/connect-to-monitoring-and-logging-systems-24.png)
+
+4. In **Name**, enter a name for this connector. If you are going to use multiple providers of the same type, ensure you give each provider a different name.
+5. Click **Continue**.
+6. In **URL**, enter the URL of the Datadog server. Take the URL from the Datadog dashboard, such as https://app.datadoghq.com/, and append the API path: **https://app.datadoghq.com/api/**.
+
+
+:::note
+ The trailing forward slash after `api` (`api/`) is mandatory. Also, if your URL has `v1` at the end of it, remove `v1`.
+
+:::
+
+   ![](./static/connect-to-monitoring-and-logging-systems-25.png)
+
+7. In **Encrypted APP Key**, enter the application key.
+
+	To create an application key in Datadog, do the following:
+	1. In **Datadog**, hover over **Integrations**, and then click **APIs**. The **APIs** page appears.
+
+	   ![](./static/connect-to-monitoring-and-logging-systems-26.png)
+
+	2. In **Application Keys**, in **New application key**, enter a name for the application key, such as **Harness**, and click **Create Application Key**.
+	3. Copy the application key and, in **Harness**, paste it into the **Application Key** field.
+8. In **Encrypted API Key**, enter the API key for API calls.
+
+	To create an API key in Datadog, do the following:
+	1. In **Datadog**, hover over **Integrations**, and then click **APIs**. The **APIs** page appears.
+
+	   ![](./static/connect-to-monitoring-and-logging-systems-27.png)
+
+	2. In **API Keys**, in **New API key**, enter the name for the new API key, such as **Harness**, and then click **Create API key**.
+	3. Copy the API key and, in **Harness**, paste it into the **API Key** field.
+9. Click **Next**. The Setup Delegates settings appear.
+10. 
You can choose **Connect via any available delegate** or **Connect only via delegates which has all of the following tags.** If you select a Delegate, Harness will always use that Delegate for this Connector.
+11. Click **Save and Continue**.
+12. Once the Test Connection succeeds, click **Finish**. Datadog is listed in the list of Connectors.
+
+### Step: Add Dynatrace
+
+1. Open a Harness Project.
+2. In **Project Setup**, click **Connectors**.
+3. Click **+** **Connector**, and click **Dynatrace** in **Monitoring and Logging Systems**. The Dynatrace connector settings appear.
+
+   ![](./static/connect-to-monitoring-and-logging-systems-28.png)
+
+4. In **Name**, enter a name for this connector. If you are going to use multiple providers of the same type, ensure you give each provider a different name.
+5. Click **Continue**.
+6. In **URL**, enter the URL of your Dynatrace account. The URL has the following syntax: **https://*your\_environment\_ID*.live.dynatrace.com**. HTTPS is mandatory for Dynatrace connections.
+7. In **API Token**, enter the API token generated in Dynatrace. To generate a Dynatrace access token, perform the following steps:
+	1. Log into your Dynatrace environment.
+	2. In the navigation menu, click **Settings**, and then click **Integration**.
+	3. Select **Dynatrace API**. The Dynatrace API page appears.
+
+	   ![](./static/connect-to-monitoring-and-logging-systems-29.png)
+
+	4. Enter a token name in the text field. The default Dynatrace API token switches are sufficient for Harness.
+	5. Click **Generate**. The token appears in the token list.
+	6. Click **Edit**. The token details appear.
+
+	   ![](./static/connect-to-monitoring-and-logging-systems-31.png)
+
+	7. Click **Copy**. You will use this token when connecting Harness to Dynatrace.
+8. Click **Next**. The Setup Delegates settings appear.
+9. 
You can choose **Connect via any available Delegate** or **Connect only via Delegates which has all of the following tags.** If you select a Delegate, Harness will always use that Delegate for this Connector.
+10. Click **Save and Continue**.
+11. Once the Test Connection succeeds, click **Finish**. Dynatrace is listed in the list of Connectors.
+
+### Step: Add Custom Health
+
+1. Open a Harness Project.
+2. In **Project Setup**, click **Connectors**.
+3. Click **+** **Connector**, and click **Custom Health** in **Monitoring and Logging Systems**. The Custom Health connector settings appear.
+
+   ![](./static/connect-to-monitoring-and-logging-systems-33.png)
+
+4. In **Name**, enter a name for this connector. If you are going to use multiple providers of the same type, ensure you give each provider a different name.
+5. Click **Continue**.
+6. In **URL**, enter the URL of the metrics data provider. For example, **https://mycompany.appd.com**.
+7. In **Headers**, enter the query headers required by your metrics data provider. In **Key**, enter a valid query key. In **Value**, you can create or select a key by clicking [**Create or Select a Secret**](../6_Security/2-add-use-text-secrets.md). You can also enter a **Plaintext** value.
+8. Click **Next**. The **Parameters** setting appears.
+9. In **Parameters**, enter the request parameters. In **Key**, enter a valid query key. In **Value**, you can create or select a secret by clicking [**Create or Select a Secret**](../6_Security/2-add-use-text-secrets.md) or enter a **Plaintext** value.
+10. Click **Next**. The **Validation Path** settings appear.
+11. In **Request Method**, select **GET** or **POST**.
+12. In **Validation Path**, enter the query string from your metric provider.
+13. Click **Next**. The Setup Delegates settings appear.
+14. 
You can choose **Connect via any available Delegate** or **Connect only via Delegates which has all of the following tags.** If you select a Delegate, Harness will always use that Delegate for this Connector.
+15. Click **Save and Continue**.
+16. Once the Test Connection succeeds, click **Finish**. Custom Health is listed in the list of Connectors.
+
diff --git a/docs/platform/7_Connectors/connect-to-service-now.md b/docs/platform/7_Connectors/connect-to-service-now.md
new file mode 100644
index 00000000000..bb406aef1f8
--- /dev/null
+++ b/docs/platform/7_Connectors/connect-to-service-now.md
@@ -0,0 +1,62 @@
+---
+title: Connect to ServiceNow
+description: Connect Harness to ServiceNow as a Harness ServiceNow Connector.
+# sidebar_position: 2
+helpdocs_topic_id: illz8off8q
+helpdocs_category_id: o1zhrfo8n5
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+You can connect Harness to ServiceNow using a Harness ServiceNow Connector. This Connector allows you to approve and reject Pipeline steps.
+
+### Before you begin
+
+* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts)
+
+### Limitations
+
+* Your ServiceNow account should ideally have the `admin` role. If this is not possible, it should have at least the `itil_admin` or `itil` role to create and modify tickets.
+* Your account should also have the `import_admin` or `import_transformer` role to manage import set transform maps. For details, see ServiceNow's [Base System Roles](https://docs.servicenow.com/bundle/newyork-platform-administration/page/administer/roles/reference/r_BaseSystemRoles.html) documentation.
+* Your ServiceNow REST API account must have permission to view tickets.
+
+### Step: Add ServiceNow Connector
+
+This topic assumes you have a Harness Project set up. If not, see [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md). 
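+
+For reference, the Connector you create in the steps below is stored by Harness as YAML. The following sketch shows roughly what it looks like; the `spec` field names and all values are illustrative assumptions, so compare them against the YAML view of a Connector you created in the UI:
+
+```
+connector:
+  name: ExampleServiceNowConnector
+  identifier: ExampleServiceNowConnectorId
+  orgIdentifier: Doc
+  projectIdentifier: Example
+  type: ServiceNow
+  spec:
+    serviceNowUrl: https://example.service-now.com
+    username: john.doe
+    passwordRef: servicenowapikey
+```
+
+Here, `servicenowapikey` stands for the `identifier` of the Harness Text Secret holding your API key, following the same `passwordRef` convention as other Connectors.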
+ +You can add a Connector from any module in your Project in Project setup, or in your Organization, or Account Resources. + +This topic shows you how to add a ServiceNow Connector to your Project. + +In **Project Setup**, click **Connectors**. + +Click **New Connector**, and then click **ServiceNow**. The ServiceNow Connector settings appear. + +![](./static/connect-to-service-now-43.png) +Enter **Name** for this Connector. + +You can choose to update the **Id** or let it be the same as your ServiceNow Connector's name. For more information, see [Entity Identifier Reference](../20_References/entity-identifier-reference.md). + +Enter **Description** and **Tags** for your Connector. + +Click **Continue**. + +Enter your **Username**. + +In **URL**, enter the base URL by which your users will access ServiceNow. For example: `https://example.service-now.com`**.** + +Enter your credentials. For **API Key**, use a Harness [Text Secret](../6_Security/2-add-use-text-secrets.md).  + +Click **Continue**. + +Select the Harness Delegate(s) to use when making a connection to ServiceNow using this Connector. + +Click **Save and Continue**. + +Harness tests the connection. + +![](./static/connect-to-service-now-44.png) +Click **Finish**. + +The ServiceNow Connector is listed in Connectors. + diff --git a/docs/platform/7_Connectors/create-a-connector-using-yaml.md b/docs/platform/7_Connectors/create-a-connector-using-yaml.md new file mode 100644 index 00000000000..b5ee72adf91 --- /dev/null +++ b/docs/platform/7_Connectors/create-a-connector-using-yaml.md @@ -0,0 +1,111 @@ +--- +title: Create a Connector using YAML +description: To solve [problem], [general description of How-to solution]. In this topic -- Before you begin. Visual Summary. Step 1 -- Title. Step 2 -- Title. Next steps. Before you begin. 
Your target environment must… +# sidebar_position: 2 +helpdocs_topic_id: m0awmzipdp +helpdocs_category_id: o1zhrfo8n5 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness [Connectors](https://docs.harness.io/category/connectors) integrate Harness with your cloud platforms, codebase and artifact repos, and collaboration and monitoring tools. + +You can add Connectors using the Harness GUI or via YAML using the Harness YAML Builder. + +This topic shows you how to add a Connector using the YAML Builder. + + +### Before you begin + +* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) + +### Step 1: Create Secrets or Keys + +Typically, Connectors use passwords or SSH keys to authenticate with platforms, tools, etc. + +In Harness, you create a Harness secret for the password or SSH key, and then reference that secret's ID when you create your Connector. + +You can create a secret at the Project, Org, or account level. In this example, we'll use Projects. + +In **Resources** for a Project, Org, or account, click **Secrets**. + +Click **Create via YAML Builder**. + +Use a snippet to create the secret. + +Here's an example of a Harness inline text secret in YAML: + + +``` +secret: + type: SecretText + name: docs-dockerhub-password + identifier: docsdockerhubpassword + orgIdentifier: Doc + tags: {} + spec: + secretManagerIdentifier: harnessSecretManager + valueType: Inline +``` +The `identifier` value (in this example, `docsdockerhubpassword`) is what you'll reference when you add your Connector in YAML. + +For steps on other types of secrets, see [Secrets and Secret Management](https://docs.harness.io/category/security). + +### Step 2: Create the Connector + +In **Resources**, click **Connectors**. + +Click **Create via YAML Builder**. + +Copy and paste the snippet for the Connector you want to create. 
+
+For example, here is the snippet for a DockerHub Connector:
+
+
+```
+connector:
+  name: SampleDockerConnector
+  identifier: SampleDockerConnectorId
+  description: Sample Docker Connector
+  orgIdentifier: Doc
+  projectIdentifier: Example
+  type: DockerRegistry
+  spec:
+    dockerRegistryUrl: somedockerregistryurl
+    providerType: DockerHub
+    auth:
+      type: UsernamePassword
+      spec:
+        username: someuser
+        passwordRef: somepasswordref
+```
+Replace the values for the `name` and `identifier` keys.
+
+Replace any URL values.
+
+Provide values for the credentials keys.
+
+For any password/key labels, paste the secret/key's `identifier` value.
+
+For example, using the `identifier` from the secret created earlier (`docsdockerhubpassword`), the DockerHub Connector would now be:
+
+
+```
+connector:
+  name: ExampleDockerConnector
+  identifier: ExampleDockerConnectorId
+  description: Example Docker Connector
+  orgIdentifier: Doc
+  projectIdentifier: Example
+  type: DockerRegistry
+  spec:
+    dockerRegistryUrl: https://registry.hub.docker.com/v2/
+    providerType: DockerHub
+    auth:
+      type: UsernamePassword
+      spec:
+        username: john.doe@example.com
+        passwordRef: docsdockerhubpassword
+```
+Click **Save**. The Connector is added and can be selected in Pipeline stages.
+
diff --git a/docs/platform/7_Connectors/git-hub-app-support.md b/docs/platform/7_Connectors/git-hub-app-support.md
new file mode 100644
index 00000000000..7bef9be0af4
--- /dev/null
+++ b/docs/platform/7_Connectors/git-hub-app-support.md
@@ -0,0 +1,179 @@
+---
+title: Use a GitHub App in a GitHub Connector
+description: Harness supports API access to GitHub using a GitHub App. GitHub recommends using GitHub Apps when integrating with GitHub. 
GitHub Apps offer more granular permissions to access data than typical aut… +# sidebar_position: 2 +helpdocs_topic_id: nze5evmqu1 +helpdocs_category_id: o1zhrfo8n5 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness supports API access to GitHub using a GitHub App. + +GitHub recommends using GitHub Apps when integrating with GitHub. GitHub Apps offer more granular permissions to access data than typical authentication methods. + +Harness supports GitHub Apps in its Harness GitHub Connector. + +For more information, see [GitHub Connector Settings Reference](ref-source-repo-provider/git-hub-connector-settings-reference.md). Also, if you're new to GitHub apps, see [About apps](https://docs.github.com/en/developers/apps/about-apps) and [Installing GitHub Apps](https://docs.github.com/en/developers/apps/installing-github-apps) from GitHub + +### Before you begin + +* [GitHub Connector Settings Reference](ref-source-repo-provider/git-hub-connector-settings-reference.md) +* [Quickstarts](https://docs.harness.io/article/u8lgzsi7b3-quickstarts) + +### Review: Requirements + +One or more of the following GitHub permissions are required: + +* You have the GitHub permissions required to install GitHub Apps on your personal account or under any organization where you have administrative access. +* If you have admin permissions in a GitHub repo in a GitHub organization, you can install GitHub Apps in that repo. +* If a GitHub App is installed in a repository and requires an organization's permission, the organization owner must approve the application. + +### Step 1: Create a GitHub app + +You can create and register a GitHub App under your personal account or under any organization where you have administrative access. You create your GitHub App in your personal account, and then register it where you have the required GitHub permissions. + +See [Creating a GitHub App](https://docs.github.com/en/developers/apps/creating-a-github-app) from GitHub. 
+ +In your GitHub personal account, click **Settings**. + +![](./static/git-hub-app-support-50.png) +Click **Developer settings**. + +![](./static/git-hub-app-support-51.png) +Click **New GitHub App**. + +Enter the following settings, and then click **Create GitHub App**. + +* **GitHub App name:** enter the name for your app. +* **Homepage URL:** enter **https://harness.io/**. +* **Webhook:** uncheck **Active**. +* **Repository permissions:** + + **Administration:** select **Access: Read & write**. + + **Commit statuses**: select **Access: Read & write**. + + **Contents:** select **Access: Read & write**. + + **Metadata:** select **Access: Read-only**. + + **Pull requests**: select **Access: Read & write**. This permission is required for the **Issue Comment** event trigger for Github. + + **Webhooks:** select **Read & write**. +* **Where can this GitHub App be installed?** Select **Any account**. + +![](./static/git-hub-app-support-52.png) +The app is created. + +By default the application you created is **Public**. + +If your application is **Private**, make sure to make it **Public**. + +To do this, open the app by clicking **Edit.** + +Select **Advanced**. + +In **Make this GitHub App public**, click **Make public**, and click **OK**. + +![](./static/git-hub-app-support-53.png) +Now you can install the app. + +### Step 2: Install the GitHub App + +In the same GitHub App, click **Install App**. + +In **Repository access**, select **Only select repositories**, and then select the same repo you are connecting with Harness. + +In **Permissions**, set the following permissions: + +* **Read access to metadata** +* **Read and write access to code, commit statuses, and pull requests** + +Install the new app. + +Once the app is installed, you'll need to record the following information to use in the Harness Connector: + +* **Installation ID:** the Installation ID is located in the URL of the installed app. 
+ +![](./static/git-hub-app-support-54.png) +* **App ID:** the App ID is located in the GitHub app's **General** tab. + +![](./static/git-hub-app-support-55.png) +### Step 3: Generate and Download Key + +Now we'll create the private key for the GitHub app that you will use in your Harness Connector. + +Open the GitHub app you created. + +In **Private keys**, click **Generate a private key**. + +![](./static/git-hub-app-support-56.png) +Download the private key to your local machine. + +Open a terminal and navigate to the folder containing the key. + +Run the following command, replacing `.pem` with the name of your PEM file: + + +``` +openssl pkcs8 -topk8 -inform PEM -outform PEM -in .pem -out converted-github-app.pem -nocrypt +``` +In the next step, you'll add the file as a new Harness file secret. + +### Step 4: Create a Harness Secret with the Key Value + +In Harness, click the account, org, or project where you want to store your secret. + +Click **Project Setup**, and then click **Secrets**. + +Click **New Secret**, and then click **File**. + +In **Secrets Manager**, select a Secrets Manager. See [Harness Secrets Manager Overview](../6_Security/1-harness-secret-manager-overview.md). + +In **Secret Name**, enter a name for the secret. You'll use this name to select the secret in Harness Connectors and other settings. + +In **Secret File**, upload the PEM file. + +Click **Save**. + +Now we can add the GitHub app to the Harness GitHub Connector. + +### Step 5: Use GitHub App and Secret in Harness GitHub Connector + +Create or open the GitHub Connector used in your Pipeline codebase. For steps on creating the Connector, see [GitHub Connector Settings Reference](ref-source-repo-provider/git-hub-connector-settings-reference.md). + +You can open a Connector from **Resources** in an account, org, or project, or from the stage's settings. + +For example, in a CI stage, click **Codebase**. The Connector for the codebase is displayed. 
+
+![](./static/git-hub-app-support-57.png)
+Click the Connector, and then click the edit button. The GitHub Connector is displayed.
+
+In the Connector **Credentials**, enter a username and Personal Access Token (PAT), and then select **Enable API access**.
+
+In **API Authentication**, select **GitHub App**.
+
+Enter the following settings:
+
+* **GitHub Installation ID:** enter the Installation ID located in the URL of the installed GitHub App.![](./static/git-hub-app-support-58.png)
+* **GitHub Application ID:** enter the GitHub **App ID** from the GitHub App **General** tab.![](./static/git-hub-app-support-59.png)
+* **GitHub Private Key:** select the Harness secret you created for the PEM file key.
+
+When you're done, the settings will look something like this:
+
+![](./static/git-hub-app-support-60.png)
+Click **Save and Continue**. The connection and authentication are verified.
+
+![](./static/git-hub-app-support-61.png)
+Click **Finish**.
+
+Now you can run a Pipeline and verify that the GitHub app credentials are working.
+
+### Step 6: Test GitHub Connector
+
+Run a Pipeline that uses the GitHub Connector configured with the GitHub app credentials.
+
+For PR events, use a Git Webhook Trigger to execute the Pipeline.
+
+Make sure the Webhook definition in GitHub sends events for **Pull Request** in its **Events** settings.
+
+If you haven't set up a Git Webhook Trigger, see [Trigger Pipelines using Git Events](../11_Triggers/triggering-pipelines.md). The Git Webhook Trigger should use the same repo as the GitHub App used in your Connector.
+
+You can see the status of the build stages in the GitHub PR view. 
+
+![](./static/git-hub-app-support-62.png)
\ No newline at end of file
diff --git a/docs/platform/7_Connectors/ref-cloud-providers/_category_.json b/docs/platform/7_Connectors/ref-cloud-providers/_category_.json
new file mode 100644
index 00000000000..b6d622b9280
--- /dev/null
+++ b/docs/platform/7_Connectors/ref-cloud-providers/_category_.json
@@ -0,0 +1 @@
+{"label": "Cloud Platform Connectors", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Cloud Platform Connectors"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "1ehb4tcksy"}}
\ No newline at end of file
diff --git a/docs/platform/7_Connectors/ref-cloud-providers/artifactory-connector-settings-reference.md b/docs/platform/7_Connectors/ref-cloud-providers/artifactory-connector-settings-reference.md
new file mode 100644
index 00000000000..52aae15acc3
--- /dev/null
+++ b/docs/platform/7_Connectors/ref-cloud-providers/artifactory-connector-settings-reference.md
@@ -0,0 +1,118 @@
+---
+title: Artifactory Connector Settings Reference
+description: Harness supports both cloud and on-prem versions of Artifactory. This topic provides settings and permissions for the Artifactory Connector. Artifactory Permissions. Make sure the following permissio…
+# sidebar_position: 2
+helpdocs_topic_id: euueiiai4m
+helpdocs_category_id: 1ehb4tcksy
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness supports both cloud and on-prem versions of Artifactory.
+
+This topic provides settings and permissions for the Artifactory Connector.
+
+### Artifactory Permissions
+
+Make sure the following permissions are granted to the user:
+
+* A Privileged User is required to access the API, whether Anonymous or a specific username (username and password are not mandatory).
+* Read permission to all Repositories. 
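+
+A user with these permissions is the one you reference from the Connector's credentials. For reference, an Artifactory Connector is stored by Harness as YAML along the following lines; this is only a sketch, and the `spec` field names and all values are illustrative assumptions, so verify them against the YAML view of a Connector created in the UI:
+
+```
+connector:
+  name: ExampleArtifactoryConnector
+  identifier: ExampleArtifactoryConnectorId
+  orgIdentifier: Doc
+  projectIdentifier: Example
+  type: Artifactory
+  spec:
+    artifactoryServerUrl: https://mycompany.jfrog.io/artifactory
+    auth:
+      type: UsernamePassword
+      spec:
+        username: someuser
+        passwordRef: artifactorypassword
+```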
+
+If used as a Docker Repo, the user needs:
+
+* List images and tags
+* Pull images
+
+See [Managing Permissions: JFrog Artifactory User Guide](https://www.jfrog.com/confluence/display/RTF/Managing+Permissions).
+
+### Artifact and File Type Support
+
+Legend:
+
+* **M/F** - Metadata or file. This includes Docker image and registry information. For AMI, this means AMI ID-only.
+* **Blank** - Coming soon.
+
+
+
+| **Sources** | **Docker Image** (Kubernetes) | **Terraform** | **Helm Chart** | **AWS AMI** | **AWS CodeDeploy** | **AWS Lambda** | **JAR** | **RPM** | **TAR** | **WAR** | **ZIP** | **Tanzu (PCF)** | **IIS** |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Artifactory | M | | F | | | | | | | | **F** | | |
+
+If you are new to using Artifactory as a Docker repo, see [Getting Started with Artifactory as a Docker Registry](https://www.jfrog.com/confluence/display/RTF6X/Getting+Started+with+Artifactory+as+a+Docker+Registry) from JFrog.
+
+### Artifactory Artifact Server
+
+The Harness Artifactory Artifact server connects your Harness account to your Artifactory artifact resources. It has the following settings.
+
+#### Name
+
+The unique name for this Connector.
+
+#### ID
+
+See [Entity Identifier Reference](../../20_References/entity-identifier-reference.md).
+
+#### Description
+
+Text string.
+
+#### Tags
+
+See [Tags Reference](../../20_References/tags-reference.md).
+
+#### Artifactory Repository URL
+
+Enter your base URL followed by your module name.
+
+For most artifacts, use **https://mycompany.jfrog.io/artifactory**.
+
+In some cases, you can use **https://*server\_name*/artifactory**.
+
+The URL depends on how you have set up Artifactory, and whether it is local, virtual, remote, or behind a proxy.
+
+To ensure you use the correct URL, copy it from your Artifactory settings. 
+ +![](./static/artifactory-connector-settings-reference-08.png) +See [Repository Management](https://www.jfrog.com/confluence/display/JFROG/Repository+Management) from JFrog. + +#### Username + +Username for the Artifactory account user. + +#### Password + +Select or create a new [Harness Encrypted Text secret](../../6_Security/2-add-use-text-secrets.md). + +### Artifact Details + +#### Repository URL + +This applies to the JFrog Artifactory default configuration. This URL may change if your infrastructure is customized. + +Select your repository via the JFrog site. Select **Set Me Up**. The **Set Me Up** settings appear. + +Copy the name of the server from the `docker login` command and enter it in **Repository URL**. + +![](./static/artifactory-connector-settings-reference-09.png) + +See [Configuring Docker Repositories](https://www.jfrog.com/confluence/display/RTF/Docker+Registry#DockerRegistry-ConfiguringDockerRepositories) from JFrog for more information. It describes the URLs for local, remote, and virtual repositories. + +#### Repository + +Enter the name of the repository where the artifact source is located. + +Harness supports only the Docker repository format as the artifact source. + +#### Artifact Path + +Enter the name of the artifact you want to deploy. + +The repository and artifact path must not begin or end with `/`. + +#### Tag + +Select a Tag from the list. + +![](./static/artifactory-connector-settings-reference-11.png) \ No newline at end of file diff --git a/docs/platform/7_Connectors/ref-cloud-providers/aws-connector-settings-reference.md b/docs/platform/7_Connectors/ref-cloud-providers/aws-connector-settings-reference.md new file mode 100644 index 00000000000..ec9f83a743f --- /dev/null +++ b/docs/platform/7_Connectors/ref-cloud-providers/aws-connector-settings-reference.md @@ -0,0 +1,818 @@ +--- +title: AWS Connector Settings Reference +description: This topic provides settings and permissions for the AWS Connector. 
+# sidebar_position: 2 +helpdocs_topic_id: m5vkql35ca +helpdocs_category_id: 1ehb4tcksy +helpdocs_is_private: false +helpdocs_is_published: true +--- + +AWS is used as a Harness Connector for activities such as obtaining artifacts, building and deploying services, and verifying deployments. + +This topic provides settings and permissions for the AWS Connector. + + +:::warning +The [DescribeRegions](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRegions.html) action is required for all AWS Connectors regardless of what AWS service you are using for your target or build infrastructures. + +::: + +### AWS Permissions + + +:::note +The [DescribeRegions](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRegions.html) action is required for all AWS Connectors regardless of what AWS service you are using for your target or build infrastructure. + +::: + +The AWS role policy requirements depend on what AWS services you are using for your artifacts and target infrastructure. + +Here are the user and access type requirements that you need to consider. + +**User:** Harness requires the IAM user be able to make API requests to AWS. For more information, see [Creating an IAM User in Your AWS Account](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) from AWS. + +**User Access Type:** **Programmatic access**. This enables an access key ID and secret access key for the AWS API, CLI, SDK, and other development tools. + +As described below, `DescribeRegions` is required for all AWS Cloud Provider connections. + +### All AWS Connectors: DescribeRegions Required + + +:::warning +The [DescribeRegions](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRegions.html) action is required for all AWS Connectors regardless of what AWS service you are using for your target or build infrastructure. 
+ +::: + +Harness needs a policy with the `DescribeRegions` action so that it can list the available regions for you when you define your target architecture. + +Create a [Customer Managed Policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#customer-managed-policies), add the `DescribeRegions` action to list those regions, and add that to any role used by the Connector. + + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "VisualEditor0", + "Effect": "Allow", + "Action": "ec2:DescribeRegions", + "Resource": "*" + } + ] +} +``` +### AWS Policies Required + + +:::warning +The [DescribeRegions](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRegions.html) action is required for all AWS Connectors regardless of what AWS service you are using for your target or build infrastructure. + +::: + +### AWS S3 + +#### Reading from AWS S3 + +There are two policies required: + +* The Managed Policy **AmazonS3ReadOnlyAccess**. +* The [Customer Managed Policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#customer-managed-policies) you create using `ec2:DescribeRegions`. + + +:::warning +The AWS [IAM Policy Simulator](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_testing-policies.html) is a useful tool for evaluating policies and access. + +::: + +**Policy Name**: `AmazonS3ReadOnlyAccess`. + +**Policy ARN:** `arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess`. + +**Description:** Provides read-only access to all buckets via the AWS Management Console. + +**Policy JSON:** + + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "s3:Get*", + "s3:List*" + ], + "Resource": "*" + } + ] +} +``` +**Policy Name:** `HarnessS3`. + +**Description:** Harness S3 policy that uses EC2 permissions. This is a customer-managed policy you must create. In this example we have named it `HarnessS3`. 
+ +**Policy JSON:** + + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "VisualEditor0", + "Effect": "Allow", + "Action": "ec2:DescribeRegions", + "Resource": "*" + } + ] +} +``` + +:::note +If you want to use an S3 bucket that is in a separate account than the account used to set up the AWS Cloud Provider, you can grant cross-account bucket access. For more information, see [Bucket Owner Granting Cross-Account Bucket Permissions](https://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html) from AWS. + +::: + +#### Writing to AWS S3 + +There are two policies required: + +* The [Customer Managed Policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#customer-managed-policies) you create, for example **HarnessS3Write**. +* The Customer Managed Policy you create using `ec2:DescribeRegions`. + +**Policy Name**:`HarnessS3Write`. + +**Description:** Custom policy for pushing to S3. + +**Policy JSON:** + + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "AllObjectActions", + "Effect": "Allow", + "Action": "s3:*Object", + "Resource": ["arn:aws:s3:::bucket-name/*"] + } + ] +} +``` +**Policy Name:** `HarnessS3`. + +**Description:** Harness S3 policy that uses EC2 permissions. This is a customer-managed policy you must create. In this example we have named it `HarnessS3`. + +**Policy JSON:** + + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "VisualEditor0", + "Effect": "Allow", + "Action": "ec2:DescribeRegions", + "Resource": "*" + } + ] +} +``` + +:::note +If you want to use an S3 bucket that is in a separate account than the account used to set up the AWS Cloud Provider, you can grant cross-account bucket access. For more information, see [Bucket Owner Granting Cross-Account Bucket Permissions](https://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html) from AWS. 
+ +::: + +#### Read and Write to AWS S3 + +You can have a single policy that reads and writes with an S3 bucket. + +See [Allows read and write access to objects in an S3 Bucket](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_rw-bucket.html) and [Allows read and write access to objects in an S3 Bucket, programmatically and in the console](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_rw-bucket-console.html) from AWS. + +Here is an example that includes AWS console access: + + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "ConsoleAccess", + "Effect": "Allow", + "Action": [ + "s3:GetAccountPublicAccessBlock", + "s3:GetBucketAcl", + "s3:GetBucketLocation", + "s3:GetBucketPolicyStatus", + "s3:GetBucketPublicAccessBlock", + "s3:ListAllMyBuckets" + ], + "Resource": "*" + }, + { + "Sid": "ListObjectsInBucket", + "Effect": "Allow", + "Action": "s3:ListBucket", + "Resource": ["arn:aws:s3:::bucket-name"] + }, + { + "Sid": "AllObjectActions", + "Effect": "Allow", + "Action": "s3:*Object", + "Resource": ["arn:aws:s3:::bucket-name/*"] + } + ] +} +``` +### AWS Elastic Container Registry (ECR) + +#### Pulling from ECR + +**Policy Name**:`AmazonEC2ContainerRegistryReadOnly`. + +**Policy ARN:** `arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly`. + +**Description:** Provides read-only access to Amazon EC2 Container Registry repositories. + +**Policy JSON:** + + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "ecr:GetAuthorizationToken", + "ecr:BatchCheckLayerAvailability", + "ecr:GetDownloadUrlForLayer", + "ecr:GetRepositoryPolicy", + "ecr:DescribeRepositories", + "ecr:ListImages", + "ecr:DescribeImages", + "ecr:BatchGetImage" + ], + "Resource": "*" + } + ] +} +``` +#### Pushing to ECR + +**Policy Name**: `AmazonEC2ContainerRegistryFullAccess`. + +**Policy ARN:** `arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess`. 
See [AWS managed policies for Amazon Elastic Container Registry](https://docs.aws.amazon.com/AmazonECR/latest/userguide/security-iam-awsmanpol.html) from AWS. + +**Policy JSON Example:** + + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "ecr:*", + "cloudtrail:LookupEvents" + ], + "Resource": "*" + }, + { + "Effect": "Allow", + "Action": [ + "iam:CreateServiceLinkedRole" + ], + "Resource": "*", + "Condition": { + "StringEquals": { + "iam:AWSServiceName": [ + "replication.ecr.amazonaws.com" + ] + } + } + } + ] +} +``` +### AWS CloudFormation + +The credentials required for provisioning depend on what you are provisioning. + +For example, if you wanted to give full access to create and manage EKS clusters, you could use a policy like this: + + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "autoscaling:*", + "cloudformation:*", + "ec2:*", + "eks:*", + "iam:*", + "ssm:*" + ], + "Resource": "*" + } + ] + } +``` +If you wanted to provide limited permissions for EKS clusters, you might use a policy like this: + + +``` + { + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "autoscaling:CreateAutoScalingGroup", + "autoscaling:DescribeAutoScalingGroups", + "autoscaling:DescribeScalingActivities", + "autoscaling:UpdateAutoScalingGroup", + "autoscaling:CreateLaunchConfiguration", + "autoscaling:DescribeLaunchConfigurations", + "cloudformation:CreateStack", + "cloudformation:DescribeStacks", + "ec2:AuthorizeSecurityGroupEgress", + "ec2:AuthorizeSecurityGroupIngress", + "ec2:RevokeSecurityGroupEgress", + "ec2:RevokeSecurityGroupIngress", + "ec2:CreateSecurityGroup", + "ec2:createTags", + "ec2:DescribeImages", + "ec2:DescribeKeyPairs", + "ec2:DescribeRegions", + "ec2:DescribeSecurityGroups", + "ec2:DescribeSubnets", + "ec2:DescribeVpcs", + "eks:CreateCluster", + "eks:DescribeCluster", + "iam:AddRoleToInstanceProfile", + "iam:AttachRolePolicy", + 
"iam:CreateRole", + "iam:CreateInstanceProfile", + "iam:CreateServiceLinkedRole", + "iam:GetRole", + "iam:ListRoles", + "iam:PassRole", + "ssm:GetParameters" + ], + "Resource": "*" + } + ] + } +``` +### Use Kubernetes Cluster Connector for EKS + +If you want to connect Harness to Elastic Kubernetes Service (Amazon EKS), use the platform-agnostic [Kubernetes Cluster Connector](kubernetes-cluster-connector-settings-reference.md). + +### AWS Serverless Lambda + +There are three authentication options for the AWS Connector when used for AWS ECS images for AWS Serverless Lambda deployments: + +* [AWS Access Key](#aws-access-key) +* [Assume IAM Role on Delegate](#assume-iam-role-on-delegate) +* [Use IRSA](#use-irsa-iam-roles-for-service-accounts) +* [Enable cross-account access (STS Role)](#enable-cross-account-access-sts-role) + + Requires that the AWS CLI is installed on the Delegate. See [Serverless and ​Enable cross-account access (STS Role)](#serverless-and-​enable-cross-account-access-sts-role). + +For steps on Serverless Lambda deployments, see [Serverless Lambda CD Quickstart](https://docs.harness.io/article/5fnx4hgwsa-serverless-lambda-cd-quickstart). + + +:::warning +The [DescribeRegions](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRegions.html) action is required for all AWS Connectors regardless of what AWS service you are using for your target or build infrastructure. + +::: + +#### Permissions + +All authentication methods for Serverless deployments require an AWS User with specific AWS permissions, as described in [AWS Credentials](https://www.serverless.com/framework/docs/providers/aws/guide/credentials) from Serverless. To create the AWS User, do the following: + +* Log into your AWS account and go to the Identity & Access Management (IAM) page. +* Click **Users**, and then **Add user**. Enter a name. Enable **Programmatic access** by clicking the checkbox. Click **Next** to go to the **Permissions** page. 
Do one of the following: + + **Full Admin Access:** click on **Attach existing policies directly**. Search for and select **AdministratorAccess** then click **Next: Review**. Check to make sure everything looks good and click **Create user**. + + **Limited Access:** click on **Create policy**. Select the **JSON** tab, and add the JSON using the following code from the [Serverless gist](https://gist.github.com/ServerlessBot/7618156b8671840a539f405dea2704c8): + +IAMCredentials.json +``` +{ + "Statement": [ + { + "Action": [ + "apigateway:*", + "cloudformation:CancelUpdateStack", + "cloudformation:ContinueUpdateRollback", + "cloudformation:CreateChangeSet", + "cloudformation:CreateStack", + "cloudformation:CreateUploadBucket", + "cloudformation:DeleteStack", + "cloudformation:Describe*", + "cloudformation:EstimateTemplateCost", + "cloudformation:ExecuteChangeSet", + "cloudformation:Get*", + "cloudformation:List*", + "cloudformation:UpdateStack", + "cloudformation:UpdateTerminationProtection", + "cloudformation:ValidateTemplate", + "dynamodb:CreateTable", + "dynamodb:DeleteTable", + "dynamodb:DescribeTable", + "dynamodb:DescribeTimeToLive", + "dynamodb:UpdateTimeToLive", + "ec2:AttachInternetGateway", + "ec2:AuthorizeSecurityGroupIngress", + "ec2:CreateInternetGateway", + "ec2:CreateNetworkAcl", + "ec2:CreateNetworkAclEntry", + "ec2:CreateRouteTable", + "ec2:CreateSecurityGroup", + "ec2:CreateSubnet", + "ec2:CreateTags", + "ec2:CreateVpc", + "ec2:DeleteInternetGateway", + "ec2:DeleteNetworkAcl", + "ec2:DeleteNetworkAclEntry", + "ec2:DeleteRouteTable", + "ec2:DeleteSecurityGroup", + "ec2:DeleteSubnet", + "ec2:DeleteVpc", + "ec2:Describe*", + "ec2:DetachInternetGateway", + "ec2:ModifyVpcAttribute", + "events:DeleteRule", + "events:DescribeRule", + "events:ListRuleNamesByTarget", + "events:ListRules", + "events:ListTargetsByRule", + "events:PutRule", + "events:PutTargets", + "events:RemoveTargets", + "iam:AttachRolePolicy", + "iam:CreateRole", + "iam:DeleteRole", + 
"iam:DeleteRolePolicy", + "iam:DetachRolePolicy", + "iam:GetRole", + "iam:PassRole", + "iam:PutRolePolicy", + "iot:CreateTopicRule", + "iot:DeleteTopicRule", + "iot:DisableTopicRule", + "iot:EnableTopicRule", + "iot:ReplaceTopicRule", + "kinesis:CreateStream", + "kinesis:DeleteStream", + "kinesis:DescribeStream", + "lambda:*", + "logs:CreateLogGroup", + "logs:DeleteLogGroup", + "logs:DescribeLogGroups", + "logs:DescribeLogStreams", + "logs:FilterLogEvents", + "logs:GetLogEvents", + "logs:PutSubscriptionFilter", + "s3:CreateBucket", + "s3:DeleteBucket", + "s3:DeleteBucketPolicy", + "s3:DeleteObject", + "s3:DeleteObjectVersion", + "s3:GetObject", + "s3:GetObjectVersion", + "s3:ListAllMyBuckets", + "s3:ListBucket", + "s3:PutBucketNotification", + "s3:PutBucketPolicy", + "s3:PutBucketTagging", + "s3:PutBucketWebsite", + "s3:PutEncryptionConfiguration", + "s3:PutObject", + "sns:CreateTopic", + "sns:DeleteTopic", + "sns:GetSubscriptionAttributes", + "sns:GetTopicAttributes", + "sns:ListSubscriptions", + "sns:ListSubscriptionsByTopic", + "sns:ListTopics", + "sns:SetSubscriptionAttributes", + "sns:SetTopicAttributes", + "sns:Subscribe", + "sns:Unsubscribe", + "states:CreateStateMachine", + "states:DeleteStateMachine" + ], + "Effect": "Allow", + "Resource": "*" + } + ], + "Version": "2012-10-17" +} +``` +* View and copy the API Key and Secret to a temporary place. You'll need them when setting up the Harness AWS Connector later in this quickstart. + +#### Installing Serverless on the Delegate + +The Delegate(s) used by the AWS Connector must have Serverless installed. + +To install Serverless on a Kubernetes Delegate, edit the Delegate YAML to install Serverless when the Delegate pods are created. + +Open the Delegate YAML in a text editor. + +Locate the Environment variable `INIT_SCRIPT` in the `StatefulSet`. + + +``` +... + - name: INIT_SCRIPT + value: "" +... +``` +Replace the value with the follow Serverless installation script. + + +``` +... 
  - name: INIT_SCRIPT
    value: |-
      #!/bin/bash
      echo "Start"
      export DEBIAN_FRONTEND=noninteractive
      apt-get update
      echo "apt-get update complete"
      apt install -yq npm
      echo "npm installed"
      npm install -g serverless@v2.50.0
      echo "Done"
...
```

:::note
In rare cases when the Delegate OS does not support `apt` (such as Red Hat Linux), you can edit this script to install `npm` with a different package manager. The rest of the code should remain the same.
:::

Save the YAML file as **harness-delegate.yml**.

You can now apply the Delegate YAML: `kubectl apply -f harness-delegate.yml`.

#### Serverless and Enable cross-account access (STS Role)

If you use the **Enable cross-account access (STS Role)** option in the AWS Connector for a Serverless Lambda deployment, the Delegate that is used by the Connector must have the AWS CLI installed.

The AWS CLI is not required for the other authentication methods.

For steps on installing software with the Delegate, see [Run Initialization Scripts on Delegates](../../2_Delegates/delegate-guide/run-scripts-on-delegates.md).

### Switching Policies

If the IAM role used by your AWS Connector does not have the policies required by the AWS service you want to access, you can modify or switch the role.

This entails changing the role assigned to the AWS account or Harness Delegate your AWS Connector is using.

When you switch or modify the IAM role used by the Connector, it might take up to 5 minutes to take effect.

### AWS Connector Settings

The AWS Connector settings are described below.

#### Name

The unique name for this Connector.

#### ID

See [Entity Identifier Reference](https://newdocs.helpdocs.io/article/li0my8tcz3-entity-identifier-reference).

#### Description

Text string.

#### Tags

See [Tags Reference](https://newdocs.helpdocs.io/article/i8t053o0sq-tags-reference).
#### Credentials

:::note
Ensure that the AWS IAM roles applied to the credentials you use (the Harness Delegate or the access key) include the policies needed by Harness to deploy to the target AWS service.
:::

:::warning
The [DescribeRegions](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRegions.html) action is required for all AWS Connectors regardless of what AWS service you are using for your target infrastructure.
:::

Credentials that enable Harness to connect to your AWS account.

There are three options:

* Assume IAM Role on Delegate
* Enter AWS Access Keys manually
* Use IRSA

The settings for each option are described below.

### Assume IAM Role on Delegate

:::warning
The [DescribeRegions](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRegions.html) action is required for all AWS Connectors regardless of what AWS service you are using for your target infrastructure.
:::

This is often the simplest method for connecting Harness to your AWS account and services.

Once you select this option, you can select a Delegate in the next step of the AWS Connector.

Typically, the Delegate(s) run in the target infrastructure.

### AWS Access Key

The access key and secret key of the IAM user that Harness should use for the AWS account.

You can use Harness secrets for both. See [Add Text Secrets](../../6_Security/2-add-use-text-secrets.md).

#### Access and Secret Keys

See [Access Keys (Access Key ID and Secret Access Key)](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) from AWS.

### Use IRSA (IAM roles for service accounts)

Select **Use IRSA** if you want to have the Harness Kubernetes Delegate in AWS EKS use a specific IAM role when making authenticated requests to resources.

By default, the Harness Kubernetes Delegate uses a ClusterRoleBinding to the **default** service account.
Instead, you can use AWS IAM roles for service accounts (IRSA) to associate a specific IAM role with the service account used by the Harness Kubernetes Delegate.

:::note
See [IAM roles for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) from AWS.
:::

Setting up this feature requires a few more steps than other methods, but it is a simple process.

:::note
The following steps are for a new Delegate installation and new AWS Connector. If you are updating an existing Delegate and AWS Connector, you can simply edit the Delegate YAML for your existing Delegate as described below, and select the **Use IRSA** option in your AWS Connector.
:::

Create the IAM role with the policies you want the Delegate to use. The policies you select will depend on what AWS resources you are deploying via the Delegate. See the different [AWS Policies Required](#aws-policies-required) sections in this document.

In the cluster where the Delegate will be installed, create a service account and attach the IAM role to it.

Here is an example of how to create a new service account in the cluster where you will install the Delegate and attach the IAM policy to it:

```
eksctl create iamserviceaccount \
  --name=cdp-admin \
  --namespace=default \
  --cluster=test-eks \
  --attach-policy-arn=<policy-arn> \
  --approve \
  --override-existing-serviceaccounts --region=us-east-1
```
In Harness, download the Harness Kubernetes Delegate YAML file. See [Install a Kubernetes Delegate](../../2_Delegates/delegate-guide/install-a-kubernetes-delegate.md).

Open the Delegate YAML file in a text editor.

Add the service account with access to the IAM role to the Delegate YAML.

There are two sections in the Delegate YAML that you must update.

First, update the `ClusterRoleBinding` by replacing the subject name `default` with the name of the service account that has the attached IAM role.
Old `ClusterRoleBinding`:

```
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: harness-delegate-cluster-admin
subjects:
  - kind: ServiceAccount
    name: default
    namespace: harness-delegate-ng
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```

New `ClusterRoleBinding` (for example, using the name `iamserviceaccount`):

```
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: harness-delegate-cluster-admin
subjects:
  - kind: ServiceAccount
    name: iamserviceaccount
    namespace: harness-delegate-ng
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```
Next, update the StatefulSet spec with the new `serviceAccountName`.

Old StatefulSet spec `serviceAccountName`:

```
...
  spec:
    containers:
    - image: harness/delegate:latest
      imagePullPolicy: Always
      name: harness-delegate-instance
      ports:
      - containerPort: 8080
...
```

New StatefulSet spec `serviceAccountName` (for example, using the name `iamserviceaccount`):

```
...
  spec:
    serviceAccountName: iamserviceaccount
    containers:
    - image: harness/delegate:latest
      imagePullPolicy: Always
      name: harness-delegate-instance
      ports:
      - containerPort: 8080
...
```

Save the Delegate YAML file.

Install the Delegate in your EKS cluster and register the Delegate with Harness. See [Install a Kubernetes Delegate](../../2_Delegates/delegate-guide/install-a-kubernetes-delegate.md).

:::note
When you install the Delegate in the cluster, the serviceAccount you added is used and the environment variables `AWS_ROLE_ARN` and `AWS_WEB_IDENTITY_TOKEN_FILE` are added automatically by EKS.
:::

Create a new AWS Connector.

In **Credentials**, select **Use IRSA**.

In **Set up Delegates**, select the Delegate you used.

Click **Save and Continue** to verify the Delegate credentials.

### Enable cross-account access (STS Role)

:::note
Assume STS Role is supported for EC2 and ECS. It is supported for EKS if you use the IRSA option, described above.
:::

:::warning
The [DescribeRegions](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRegions.html) action is required for all AWS Connectors regardless of what AWS service you are using for your target infrastructure.
:::

If you want to use one AWS account for the connection, but deploy to a different AWS account, use the **Assume STS Role** option.

This option uses the [AWS Security Token Service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) (STS) feature.
In this scenario, the AWS account used for AWS access in **Credentials** will assume the IAM role you specify in the **Role ARN** setting.

:::note
The Harness Delegate(s) always runs in the account you specify in **Credentials** via **Access/Secret Key** or **Assume IAM Role on Delegate**.
:::

To assume the role in **Role ARN**, the AWS account in **Credentials** must be trusted by the role. The trust relationship is defined in the **Role ARN** role's trust policy when the role is created. That trust policy states which accounts are allowed to delegate that access to users in the account.

:::note
You can use **Assume STS Role** to establish trust between roles in the same account, but cross-account trust is more common.
:::

#### Role ARN

The Amazon Resource Name (ARN) of the role that you want to assume. This is an IAM role in the target deployment AWS account.

The assumed role in **Role ARN** must have all the IAM policies required to perform your Harness deployment, such as Amazon S3, ECS (Existing Cluster), and AWS EC2 policies. For more information, see [Assuming an IAM Role in the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html) from AWS.

#### External ID

If the administrator of the account to which the role belongs provided you with an external ID, enter that value here.

For more information, see [How to Use an External ID When Granting Access to Your AWS Resources to a Third Party](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html) from AWS.

:::note
The AWS [IAM Policy Simulator](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_testing-policies.html) is a useful tool for evaluating policies and access.
:::

### Test Region and AWS GovCloud Support

By default, Harness uses the **us-east-1** region to test the credentials for this Connector.
If you want to use an AWS GovCloud account for this Connector, select it in **Test Region**.

GovCloud is used by organizations such as government agencies at the federal, state, and local level, as well as contractors and educational institutions. It is also used by these organizations for regulatory compliance.

#### Restrictions

You can access AWS GovCloud with AWS GovCloud credentials (AWS GovCloud account access key and AWS GovCloud IAM user credentials).

You cannot access AWS GovCloud with standard AWS credentials. Likewise, you cannot access standard AWS regions using AWS GovCloud credentials.

### Troubleshooting

See [Troubleshooting Harness](https://docs.harness.io/article/jzklic4y2j-troubleshooting).

:::warning
The [DescribeRegions](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRegions.html) action is required for all AWS Connectors regardless of what AWS service you are using for your target infrastructure.
:::

### See also

* [Google Cloud Platform (GCP) Connector Settings Reference](gcs-connector-settings-reference.md)
* [Kubernetes Cluster Connector Settings Reference](kubernetes-cluster-connector-settings-reference.md)

diff --git a/docs/platform/7_Connectors/ref-cloud-providers/docker-registry-connector-settings-reference.md b/docs/platform/7_Connectors/ref-cloud-providers/docker-registry-connector-settings-reference.md
new file mode 100644
index 00000000000..3e4b0e267de
--- /dev/null
+++ b/docs/platform/7_Connectors/ref-cloud-providers/docker-registry-connector-settings-reference.md
@@ -0,0 +1,78 @@
---
title: Docker Connector Settings Reference
description: This topic provides settings and permissions for the Docker Connector. Docker Registries in Cloud Platforms.
The Docker Connector is platform-agnostic and can be used to connect to any Docker contain…
# sidebar_position: 2
helpdocs_topic_id: u9bsd77g5a
helpdocs_category_id: 1ehb4tcksy
helpdocs_is_private: false
helpdocs_is_published: true
---

This topic provides settings and permissions for the Docker Connector.

### Docker Registries in Cloud Platforms

The Docker Connector is platform-agnostic and can be used to connect to any Docker container registry, but Harness provides first-class support for registries in AWS and GCR.

See:

* [Add an AWS Connector](../add-aws-connector.md)
* [Google Cloud Platform (GCP) Connector Settings Reference](../connect-to-google-cloud-platform-gcp.md)

### Docker Registry Permissions Required

Make sure the connected user account has the following permissions.

* Read permission for all repositories.

The user needs access and permissions to the following:

* List images and tags
* Pull images

See [Docker Permissions](https://docs.docker.com/datacenter/dtr/2.0/user-management/permission-levels/).

If you are using anonymous access to a Docker registry for a Kubernetes deployment, then `imagePullSecrets` should be removed from the container specification. This is standard Kubernetes behavior and not related to Harness specifically.

### Name

The unique name for this Connector.

### ID

See [Entity Identifier Reference](../../20_References/entity-identifier-reference.md).

### Description

Text string.

### Tags

See [Tags Reference](../../20_References/tags-reference.md).

### Docker Registry URL

The URL of the Docker Registry. This is usually the URL used for your [docker login](https://docs.docker.com/engine/reference/commandline/login/) credentials.

To connect to a public Docker registry like Docker Hub, use `https://registry.hub.docker.com/v2/` or `https://index.docker.io/v2/`.

To connect to a private Docker registry, use that registry's own URL.
### Provider Type

The Docker registry platform that you want to connect to. Some examples:

* DockerHub
* Harbor
* Quay

### Authentication

You can authenticate using a username and password, or select anonymous access.

### Credentials

The username and password for the Docker registry account.

The password uses a [Harness Encrypted Text secret](../../6_Security/2-add-use-text-secrets.md).

diff --git a/docs/platform/7_Connectors/ref-cloud-providers/gcs-connector-settings-reference.md b/docs/platform/7_Connectors/ref-cloud-providers/gcs-connector-settings-reference.md
new file mode 100644
index 00000000000..cddc7d88b3a
--- /dev/null
+++ b/docs/platform/7_Connectors/ref-cloud-providers/gcs-connector-settings-reference.md
@@ -0,0 +1,95 @@
---
title: Google Cloud Platform (GCP) Connector Settings Reference
description: This topic provides settings and permissions for the GCP Connector.
# sidebar_position: 2
helpdocs_topic_id: yykfduond6
helpdocs_category_id: 1ehb4tcksy
helpdocs_is_private: false
helpdocs_is_published: true
---

The Harness Google Cloud Platform (GCP) Connector connects your Harness account to a GCP account.

You add Connectors to your Harness Account and then reference them when defining resources and environments.

### Limitations

Harness supports GKE 1.19 and later.

If you use a version prior to GKE 1.19, please enable Basic Authentication. If Basic authentication is inadequate for your security requirements, use the [Kubernetes Cluster Connector](../add-a-kubernetes-cluster-connector.md).

### Kubernetes Role Requirements

If you are using the GCP Connector to connect to GKE, the GCP service account used for any credentials requires the **Kubernetes Engine Developer** and **Storage Object Viewer** permissions.

If you use a version prior to GKE 1.19, please enable Basic Authentication.
If Basic authentication is inadequate for your security requirements, use the [Kubernetes Cluster Connector](../add-a-kubernetes-cluster-connector.md).

* For steps to add roles to your service account, see [Granting Roles to Service Accounts](https://cloud.google.com/iam/docs/granting-roles-to-service-accounts) from Google. For more information, see [Understanding Roles](https://cloud.google.com/iam/docs/understanding-roles?_ga=2.123080387.-954998919.1531518087#curated_roles) from GCP.

Another option is to use a service account that has only the Storage Object Viewer permission needed to query GCR, and then use either an in-cluster Kubernetes Delegate or a direct [Kubernetes Cluster Connector](kubernetes-cluster-connector-settings-reference.md) with the Kubernetes service account token for performing deployment.

### GCS and GCR Role Requirements

For Google Cloud Storage (GCS) and Google Container Registry (GCR), the following roles are required:

* Storage Object Viewer (roles/storage.objectViewer)
* Storage Object Admin (roles/storage.objectAdmin)

See [Cloud IAM roles for Cloud Storage](https://cloud.google.com/storage/docs/access-control/iam-roles) from GCP.

Ensure the Harness Delegate you have installed can reach the GCR registry host name you are using in **Registry Host Name** (for example, gcr.io) and storage.cloud.google.com.

### Google Cloud Operations Suite (Stackdriver) Requirements

Most APM and logging tools are added to Harness as Verification Providers. For Google Cloud's operations suite (formerly Stackdriver), you use the GCP Connector.

#### Roles and Permissions

* **Stackdriver Logs** - The minimum role requirement is **Logs Viewer** (logging.viewer).
* **Stackdriver Metrics** - The minimum role requirements are **Compute Network Viewer** (compute.networkViewer) and **Monitoring Viewer** (monitoring.viewer).

See [Access control](https://cloud.google.com/monitoring/access-control) from Google.
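Granting the viewer roles listed above to the Connector's service account amounts to IAM policy bindings of roughly the following shape. This is an illustrative sketch only; `my-project` and the service account email are placeholder values, not part of the Harness setup.

```json
{
  "bindings": [
    {
      "role": "roles/logging.viewer",
      "members": ["serviceAccount:harness-sa@my-project.iam.gserviceaccount.com"]
    },
    {
      "role": "roles/monitoring.viewer",
      "members": ["serviceAccount:harness-sa@my-project.iam.gserviceaccount.com"]
    },
    {
      "role": "roles/compute.networkViewer",
      "members": ["serviceAccount:harness-sa@my-project.iam.gserviceaccount.com"]
    }
  ]
}
```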
### Proxies and GCP with Harness

If you are using a proxy server in your GCP account, but want to use GCP services with Harness, you need to configure the following so they do not use your proxy:

* `googleapis.com`. See [Proxy servers](https://cloud.google.com/storage/docs/troubleshooting#proxy-server) from Google.
* The `token_uri` value from your Google Service Account. See [Creating service account keys](https://cloud.google.com/iam/docs/creating-managing-service-account-keys#creating_service_account_keys) from Google.

### GCP Connector Settings

#### Name

The unique name for this Connector.

#### ID

See [Entity Identifier Reference](../../20_References/entity-identifier-reference.md).

#### Description

Text string.

#### Tags

See [Tags Reference](../../20_References/tags-reference.md).

#### Service Account Key

Select or create a new [Harness Encrypted Text secret](../../6_Security/2-add-use-text-secrets.md) that contains your Google Cloud service account key file.

To obtain the service account key file, see [Creating and managing service account keys](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) from Google (JSON is recommended).

![](./static/gcs-connector-settings-reference-00.png)

Once you have the key file from Google, open it, copy it, and paste it into the Harness Encrypted Text secret.

Next, use that Harness Encrypted Text secret in **Service Account Key**.
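For orientation, a Google Cloud service account key file in JSON format has roughly the following shape; every value below is a placeholder. Paste the entire file contents, including the private key, into the Encrypted Text secret.

```json
{
  "type": "service_account",
  "project_id": "my-project",
  "private_key_id": "0123456789abcdef0123456789abcdef01234567",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "harness-sa@my-project.iam.gserviceaccount.com",
  "client_id": "123456789012345678901",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```

Note that `token_uri` here is the same value referenced in the proxy configuration section above.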
+ +### See also + +* [AWS Connector Settings Reference](aws-connector-settings-reference.md) +* [Kubernetes Cluster Connector Settings Reference](kubernetes-cluster-connector-settings-reference.md) + diff --git a/docs/platform/7_Connectors/ref-cloud-providers/kubernetes-cluster-connector-settings-reference.md b/docs/platform/7_Connectors/ref-cloud-providers/kubernetes-cluster-connector-settings-reference.md new file mode 100644 index 00000000000..58b906d01a7 --- /dev/null +++ b/docs/platform/7_Connectors/ref-cloud-providers/kubernetes-cluster-connector-settings-reference.md @@ -0,0 +1,385 @@ +--- +title: Kubernetes Cluster Connector Settings Reference +description: This topic provides settings and permissions for the Kubernetes Cluster Connector. The Kubernetes Cluster Connector is a platform-agnostic connection to a Kubernetes cluster located anywhere. For clo… +# sidebar_position: 2 +helpdocs_topic_id: sjjik49xww +helpdocs_category_id: 1ehb4tcksy +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic provides settings and permissions for the Kubernetes Cluster Connector. + +The Kubernetes Cluster Connector is a platform-agnostic connection to a Kubernetes cluster located anywhere. + +For cloud platform-specific connections, see platform [Cloud Connectors](https://docs.harness.io/category/cloud-platform-connectors). + +Looking for the How-to? See [Add a Kubernetes Cluster Connector](../add-a-kubernetes-cluster-connector.md). + +### Video Summary + +Here's a 10min video that walks you through adding a Harness Kubernetes Cluster Connector and Harness Kubernetes Delegate. The Delegate is added to the target cluster and then the Kubernetes Cluster Connector uses the Delegate to connect to the cluster: + +### Kubernetes Cluster Connector vs Platform Connectors + +The Kubernetes Cluster Connector is platform-agnostic. Use it to access a cluster on any platform. + +It cannot also access platform-specific services and resources. 
For those, use a platform Connector like Google Cloud Platform or Amazon Web Services.
+
+See [Add a Google Cloud Platform (GCP) Connector](../connect-to-google-cloud-platform-gcp.md), [Add an AWS Connector](../add-aws-connector.md).
+
+For example, let's say you have a GKE Kubernetes cluster hosted in Google Cloud Platform (GCP). You can use the Kubernetes Cluster Connector to connect Harness to the cluster in GCP, but the Kubernetes Cluster Connector cannot also access Google Container Registry (GCR).
+
+In this case, you have two options:
+
+1. Use a Google Cloud Platform Connector to access the GKE cluster and all other GCP resources you need.
+2. Set up a Kubernetes Cluster Connector for the GKE cluster. Next, set up a Google Cloud Platform Connector for all other GCP services and resources.
+
+When you set up a deployment in Harness, you specify the Connectors to use for the artifact and the target cluster. If you use option 2 above, you select a Google Cloud Platform Connector for the GCR container and a Kubernetes Cluster Connector for the target cluster.
+
+Which option you choose will depend on how your teams use Harness.
+
+### Permissions Required
+
+The IAM roles and policies needed by the account used in the Connector depend on what operations you are using with Harness and what operations you want Harness to perform in the cluster.
+
+You can use different methods for authenticating with the Kubernetes cluster, but all of them use a Kubernetes Role.
+
+The Role used must have either the `cluster-admin` permission in the target cluster or admin permissions in the target namespace.
+
+For a detailed list of roles and policies, see [Harness Role-Based Access Control Overview](../../4_Role-Based-Access-Control/1-rbac-in-harness.md#role).
+
+#### Harness CI Permission Requirements
+
+If you are only using the Kubernetes Cluster Connector for Harness Continuous Integration (CI), you can use a reduced set of permissions.
+
+For Harness CI, the Delegate requires CRUD permissions on Secret and Pod.
+
+Here is a sample ServiceAccount, Role, and RoleBinding that grants the minimum permissions:
+
+
+```
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: cie-test
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: cie-test-sa
+  namespace: cie-test
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: sa-role
+  namespace: cie-test
+rules:
+  - apiGroups: [""]
+    resources: ["pods", "secrets"]
+    verbs: ["get", "list", "watch", "create", "update", "delete"]
+  - apiGroups: [""]
+    resources: ["events"]
+    verbs: ["list", "watch"]
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: sa-role-binding
+  namespace: cie-test
+subjects:
+  - kind: ServiceAccount
+    name: cie-test-sa
+    namespace: cie-test
+roleRef:
+  kind: Role
+  name: sa-role
+  apiGroup: rbac.authorization.k8s.io
+```
+#### Builds (CI)
+
+A Kubernetes service account with CRUD permissions on Secret, Service, Pod, and PersistentVolumeClaim (PVC).
+
+For more information, see [User-Facing Roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) from Kubernetes.
+
+#### Deployments (CD)
+
+A Kubernetes service account with permission to create entities in the target namespace is required. The set of permissions should include `list`, `get`, `create`, `watch` (to fetch the pod events), and `delete` permissions for each of the entity types Harness uses. In general, cluster admin permission or namespace admin permission is sufficient.
+
+If you don't want to use `resources: ["*"]` for the Role, you can list out the resources you want to grant. Harness needs `configMap`, `secret`, `event`, `deployment`, and `pod` at a minimum for deployments, as stated above. Beyond that, it depends on the resources you are deploying via Harness.
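As a sketch, a namespace-scoped Role that lists those resources explicitly might look like the following. The Role name and namespace are placeholders, and you should extend `resources` to cover whatever else your manifests deploy:

```
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: harness-deploy-role
  namespace: my-namespace
rules:
  # Core API group resources Harness needs at a minimum.
  - apiGroups: [""]
    resources: ["configmaps", "secrets", "events", "pods"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  # Deployments live in the apps API group.
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```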
+
+If you don't want to use `verbs: ["*"]` for the Role, you can list out all of the verbs (`create`, `delete`, `get`, `list`, `patch`, `update`, `watch`).
+
+The YAML provided for the Harness Delegate defaults to `cluster-admin` because that ensures anything can be applied. Any restriction must take into account the actual manifests to be deployed.
+
+### Harness CI Cluster Requirements
+
+For Harness **Continuous Integration**, the resources required for the Kubernetes cluster depend on the number of builds running in parallel, as well as the resources required for each build.
+
+Below is a rough estimate of the resources required, based on the number of daily builds:
+
+| **PRs/Day** | **Nodes with 4 CPU, 8GB RAM, 100GB disk** | **Nodes with 8 CPU, 16GB RAM, 200GB disk** |
+| --- | --- | --- |
+| 100 | 19 - 26 | 11 - 15 |
+| 500 | 87 - 121 | 45 - 62 |
+| 1000 | 172 - 239 | 89 - 123 |
+
+### Credential Validation
+
+When you click **Submit**, Harness uses the provided credentials to list controllers in the **default** namespace in order to validate the credentials. If validation fails, Harness does not save the Connector and the **Submit** fails.
+
+If your cluster does not have a **default** namespace, or your credentials do not have permission in the **default** namespace, you can check **Skip default namespace validation** to skip this check and save your Connector settings.
+
+You do not need to come back and uncheck **Skip default namespace validation**.
+
+Later, when you define a target Infrastructure using this Connector, you will also specify a namespace. During deployment, Harness uses this namespace rather than the **default** namespace.
+
+When Harness saves the Infrastructure, it performs validation even if **Skip default namespace validation** was checked.
+
+### Name
+
+The unique name for this Connector.
+
+### ID
+
+See [Entity Identifier Reference](../../20_References/entity-identifier-reference.md).
+
+### Description
+
+Text string.
+
+### Tags
+
+See [Tags Reference](../../20_References/tags-reference.md).
+
+### Cluster Details
+
+#### Manual or Use a Delegate
+
+**Recommended:** Install and run the Harness Kubernetes Delegate in the target Kubernetes cluster, and then use the Kubernetes Cluster Connector to connect to that cluster using the Harness Kubernetes Delegate you installed. This is the easiest method to connect to a Kubernetes cluster. You can either enter the authentication details of the target cluster or use the role associated with a Harness Delegate.
+
+When you select a Delegate, the Harness Delegate will inherit the Kubernetes service account associated with the Delegate pod.
+
+The service account associated with the Delegate pod must have the Kubernetes `cluster-admin` role.
+
+See [Install a Kubernetes Delegate](../../2_Delegates/delegate-guide/install-a-kubernetes-delegate.md).
+
+#### Master URL
+
+The Kubernetes master node URL. The easiest method to obtain the master URL is using kubectl:
+
+`kubectl cluster-info`
+
+#### Authentication
+
+Select an authentication method.
+
+Basic (Username and Password) authentication is not recommended. Basic authentication has been removed in GKE 1.19 and later.
+
+### Username and Password
+
+Username and password for the Kubernetes cluster. For example, **admin** or **john@example.com**, and a Basic authentication password.
+
+You can use an inline username or a Harness [Encrypted Text secret](https://docs.harness.io/article/ygyvp998mu-use-encrypted-text-secrets).
+
+For the password, select or create a new Harness Encrypted Text secret.
+
+Typically, this method is not used. Some Connectors have Basic authentication disabled by default. The cluster would need Basic authentication enabled and a specific username and password configured for authentication. For OpenShift or any other platform, this is not the username/password for the platform. It is the username/password for the cluster.
+
+### Service Account
+
+Add the service account token for the service account. The token must be pasted into the Encrypted Text secret you create/select in decoded form.
+
+To get a list of the service accounts, run `kubectl get serviceAccounts`.
+
+For example, here's a manifest that creates a new SA named `harness-service-account` in the `default` namespace.
+
+
+```
+# harness-service-account.yml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: harness-service-account
+  namespace: default
+```
+Next, you apply the SA.
+
+
+```
+kubectl apply -f harness-service-account.yml
+```
+Next, grant the SA the `cluster-admin` permission (see **Permissions Required** above).
+
+
+```
+# harness-clusterrolebinding.yml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: harness-admin
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: cluster-admin
+subjects:
+- kind: ServiceAccount
+  name: harness-service-account
+  namespace: default
+```
+Next, apply the ClusterRoleBinding.
+
+
+```
+kubectl apply -f harness-clusterrolebinding.yml
+```
+Once you have the SA added, you can get its token using the following commands.
+
+
+```
+SERVICE_ACCOUNT_NAME={SA name}
+
+NAMESPACE={target namespace}
+
+SECRET_NAME=$(kubectl get sa "${SERVICE_ACCOUNT_NAME}" --namespace "${NAMESPACE}" -o=jsonpath='{.secrets[].name}')
+
+TOKEN=$(kubectl get secret "${SECRET_NAME}" --namespace "${NAMESPACE}" -o=jsonpath='{.data.token}' | base64 -d)
+
+echo $TOKEN
+```
+The `| base64 -d` piping decodes the token. You can now enter it into the Connector. On Kubernetes 1.24 and later, token Secrets are no longer created automatically for service accounts, so you may need to create a token with `kubectl create token <service-account-name>` instead.
+
+### OpenID Connect
+
+Some of these settings come from the OIDC provider authorization server you have set up, and others come from the provider app you are using to log in.
+
+First let's look at the authorization server-related settings:
+
+#### Master URL
+
+The issuer URI for the provider authentication server.
+
+For example, in Okta, this is the Issuer URL for the [Authorization Server](https://developer.okta.com/docs/concepts/auth-servers/):
+
+![](./static/kubernetes-cluster-connector-settings-reference-02.png)
+
+Providers use different API versions. If you want to identify the version also, you can obtain it from the token endpoint.
+
+In Okta, in the authentication server **Settings**, click the **Metadata URI**. Locate the **token\_endpoint**. Use the **token\_endpoint** URL except for the **/token** part. For example, you would use `https://dev-00000.okta.com/oauth2/default/v1` from the following endpoint:
+
+
+```
+"token_endpoint":"https://dev-00000.okta.com/oauth2/default/v1/token"
+```
+#### OIDC Username and Password
+
+Login credentials for a user assigned to the provider app.
+
+#### OIDC Client ID
+
+Public identifier for the client that is required for all OAuth flows. In Okta, this is located in the **Client Credentials** for the app:
+
+![](./static/kubernetes-cluster-connector-settings-reference-04.png)
+
+#### OIDC Secret
+
+The client secret for the app. For Okta, you can see this in the above picture.
+
+#### OIDC Scopes
+
+OIDC scopes are used by an application during authentication to authorize access to a user's details, like name and picture. In Okta, you can find them in the Authorization Server **Scopes** tab:
+
+![](./static/kubernetes-cluster-connector-settings-reference-06.png)
+
+If you enter multiple scopes, separate them using spaces.
+
+The remaining OIDC Token settings are part of the provider app you are using to log in.
+
+### Client Key Certificate
+
+#### Client Key
+
+Create or select a Harness secret to add the client key for the client certificate. The key can be pasted into the secret either Base64 encoded or decoded.
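If you are generating a client key pair yourself for testing (production keys normally come from your cluster's CA workflow), a minimal `openssl` sketch follows; the file names and CN are placeholders:

```shell
# Generate a 2048-bit RSA client key (RSA matches the default
# Client Key Algorithm setting).
openssl genrsa -out client.key 2048

# Create a certificate signing request to submit to your cluster CA.
openssl req -new -key client.key -subj "/CN=harness-user" -out client.csr

# Sanity-check that the generated key is valid.
openssl rsa -in client.key -check -noout
```

The signed certificate returned by the CA would go in **Client Certificate**, and `client.key` in **Client Key**.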
+
+#### Client Key passphrase
+
+Create or select a Harness secret to add the client key passphrase. The passphrase can be pasted in either Base64 encoded or decoded.
+
+#### Client Certificate
+
+Create or select a Harness secret to add the client certificate for the cluster.
+
+The public client certificate is generated along with the private client key used to authenticate. The certificate can be pasted in either Base64 encoded or decoded.
+
+#### Client Key Algorithm (optional)
+
+Specify the encryption algorithm used when the certificate was created. Typically, RSA.
+
+#### CA Certificate (optional)
+
+Create or select a Harness secret to add the certificate authority root certificate used to validate client certificates presented to the API server. For more information, see [Authenticating](https://kubernetes.io/docs/reference/access-authn-authz/authentication/) from Kubernetes.
+
+### Amazon AWS EKS Support
+
+AWS EKS is supported using the Inherit Delegate Credentials option in the Kubernetes Cluster Connector settings. You can use your [EKS service account](#service_account) token as well.
+
+To install a delegate in your AWS infrastructure, do the following:
+
+* Install a Harness Kubernetes Delegate in your EKS cluster. You must be logged in as an admin user when you run the `kubectl apply -f harness-delegate.yaml` command.
+* Give it a name that you can recognize as an EKS cluster Delegate. For information on installing a Kubernetes Delegate, see [Install a Kubernetes Delegate](../../2_Delegates/delegate-guide/install-a-kubernetes-delegate.md).
+* In the Kubernetes Cluster Connector settings, select the Delegate.
+* When setting up the EKS cluster as the target Infrastructure, select the Kubernetes Cluster Connector.
+
+### OpenShift Support
+
+This section describes how to support OpenShift using a Delegate running externally to the Kubernetes cluster.
Harness does support running Delegates internally for OpenShift 3.11 or greater, but the cluster must be configured to allow images to run as root inside the container in order to write to the filesystem.
+
+Typically, OpenShift is supported through an external Delegate installation (shell script installation of the Delegate outside of the Kubernetes cluster) and a service account token, entered in the **Service Account** setting.
+
+You only need to use the **Master URL** and **Service Account Token** settings in the **Kubernetes Cluster Connector** settings.
+
+The following shell script is a quick method for obtaining the service account token. Run this script wherever you run kubectl to access the cluster.
+
+Set the `SERVICE_ACCOUNT_NAME` and `NAMESPACE` values to the values in your infrastructure.
+
+
+```
+SERVICE_ACCOUNT_NAME=default
+NAMESPACE=mynamespace
+SECRET_NAME=$(kubectl get sa "${SERVICE_ACCOUNT_NAME}" --namespace "${NAMESPACE}" -o json | jq -r '.secrets[].name')
+# --decode works with both GNU and BSD base64
+TOKEN=$(kubectl get secret "${SECRET_NAME}" --namespace "${NAMESPACE}" -o json | jq -r '.data["token"]' | base64 --decode)
+echo $TOKEN
+```
+Once configured, OpenShift is used by Harness as a typical Kubernetes cluster.
+
+#### OpenShift Notes
+
+* If you decide to use a username/password for credentials in the Harness Kubernetes Cluster Connector, do not use the username/password for the OpenShift platform. Use the username/password for the **cluster**.
+* Harness supports [DeploymentConfig](https://docs.openshift.com/container-platform/4.1/applications/deployments/what-deployments-are.html), [Route](https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/routes.html), and [ImageStream](https://docs.openshift.com/enterprise/3.2/architecture/core_concepts/builds_and_image_streams.html#image-streams) across Canary, Blue Green, and Rolling deployment strategies. Please use `apiVersion: apps.openshift.io/v1` and not `apiVersion: v1`.
+* The token does not need to have global read permissions. The token can be scoped to the namespace. +* The Kubernetes containers must be OpenShift-compatible containers. If you are already using OpenShift, then this is already configured. But be aware that OpenShift cannot simply deploy any Kubernetes container. You can get OpenShift images from the following public repos:  and . +* Useful articles for setting up a local OpenShift cluster for testing: [How To Setup Local OpenShift Origin (OKD) Cluster on CentOS 7](https://computingforgeeks.com/setup-openshift-origin-local-cluster-on-centos/), [OpenShift Console redirects to 127.0.0.1](https://chrisphillips-cminion.github.io/kubernetes/2019/07/08/OpenShift-Redirect.html). + +### YAML Example + + +``` +connector: + name: Doc Kubernetes Cluster + identifier: Doc_Kubernetes_Cluster + description: "" + orgIdentifier: "" + projectIdentifier: "" + tags: {} + type: K8sCluster + spec: + credential: + type: ManualConfig + spec: + masterUrl: https://00.00.00.000 + auth: + type: UsernamePassword + spec: + username: john.doe@example.io + passwordRef: account.gcpexample +``` diff --git a/docs/platform/7_Connectors/ref-cloud-providers/static/artifactory-connector-settings-reference-08.png b/docs/platform/7_Connectors/ref-cloud-providers/static/artifactory-connector-settings-reference-08.png new file mode 100644 index 00000000000..349f1f0ddf1 Binary files /dev/null and b/docs/platform/7_Connectors/ref-cloud-providers/static/artifactory-connector-settings-reference-08.png differ diff --git a/docs/platform/7_Connectors/ref-cloud-providers/static/artifactory-connector-settings-reference-09.png b/docs/platform/7_Connectors/ref-cloud-providers/static/artifactory-connector-settings-reference-09.png new file mode 100644 index 00000000000..40ea99861b4 Binary files /dev/null and b/docs/platform/7_Connectors/ref-cloud-providers/static/artifactory-connector-settings-reference-09.png differ diff --git 
a/docs/platform/7_Connectors/ref-cloud-providers/static/artifactory-connector-settings-reference-10.png b/docs/platform/7_Connectors/ref-cloud-providers/static/artifactory-connector-settings-reference-10.png new file mode 100644 index 00000000000..40ea99861b4 Binary files /dev/null and b/docs/platform/7_Connectors/ref-cloud-providers/static/artifactory-connector-settings-reference-10.png differ diff --git a/docs/platform/7_Connectors/ref-cloud-providers/static/artifactory-connector-settings-reference-11.png b/docs/platform/7_Connectors/ref-cloud-providers/static/artifactory-connector-settings-reference-11.png new file mode 100644 index 00000000000..a85ac7bc955 Binary files /dev/null and b/docs/platform/7_Connectors/ref-cloud-providers/static/artifactory-connector-settings-reference-11.png differ diff --git a/docs/platform/7_Connectors/ref-cloud-providers/static/gcs-connector-settings-reference-00.png b/docs/platform/7_Connectors/ref-cloud-providers/static/gcs-connector-settings-reference-00.png new file mode 100644 index 00000000000..45dd8d11d91 Binary files /dev/null and b/docs/platform/7_Connectors/ref-cloud-providers/static/gcs-connector-settings-reference-00.png differ diff --git a/docs/platform/7_Connectors/ref-cloud-providers/static/gcs-connector-settings-reference-01.png b/docs/platform/7_Connectors/ref-cloud-providers/static/gcs-connector-settings-reference-01.png new file mode 100644 index 00000000000..45dd8d11d91 Binary files /dev/null and b/docs/platform/7_Connectors/ref-cloud-providers/static/gcs-connector-settings-reference-01.png differ diff --git a/docs/platform/7_Connectors/ref-cloud-providers/static/kubernetes-cluster-connector-settings-reference-02.png b/docs/platform/7_Connectors/ref-cloud-providers/static/kubernetes-cluster-connector-settings-reference-02.png new file mode 100644 index 00000000000..2ddde320bdc Binary files /dev/null and b/docs/platform/7_Connectors/ref-cloud-providers/static/kubernetes-cluster-connector-settings-reference-02.png 
differ diff --git a/docs/platform/7_Connectors/ref-cloud-providers/static/kubernetes-cluster-connector-settings-reference-03.png b/docs/platform/7_Connectors/ref-cloud-providers/static/kubernetes-cluster-connector-settings-reference-03.png new file mode 100644 index 00000000000..2ddde320bdc Binary files /dev/null and b/docs/platform/7_Connectors/ref-cloud-providers/static/kubernetes-cluster-connector-settings-reference-03.png differ diff --git a/docs/platform/7_Connectors/ref-cloud-providers/static/kubernetes-cluster-connector-settings-reference-04.png b/docs/platform/7_Connectors/ref-cloud-providers/static/kubernetes-cluster-connector-settings-reference-04.png new file mode 100644 index 00000000000..a0377783b05 Binary files /dev/null and b/docs/platform/7_Connectors/ref-cloud-providers/static/kubernetes-cluster-connector-settings-reference-04.png differ diff --git a/docs/platform/7_Connectors/ref-cloud-providers/static/kubernetes-cluster-connector-settings-reference-05.png b/docs/platform/7_Connectors/ref-cloud-providers/static/kubernetes-cluster-connector-settings-reference-05.png new file mode 100644 index 00000000000..a0377783b05 Binary files /dev/null and b/docs/platform/7_Connectors/ref-cloud-providers/static/kubernetes-cluster-connector-settings-reference-05.png differ diff --git a/docs/platform/7_Connectors/ref-cloud-providers/static/kubernetes-cluster-connector-settings-reference-06.png b/docs/platform/7_Connectors/ref-cloud-providers/static/kubernetes-cluster-connector-settings-reference-06.png new file mode 100644 index 00000000000..e662f0c64a5 Binary files /dev/null and b/docs/platform/7_Connectors/ref-cloud-providers/static/kubernetes-cluster-connector-settings-reference-06.png differ diff --git a/docs/platform/7_Connectors/ref-cloud-providers/static/kubernetes-cluster-connector-settings-reference-07.png b/docs/platform/7_Connectors/ref-cloud-providers/static/kubernetes-cluster-connector-settings-reference-07.png new file mode 100644 index 
00000000000..e662f0c64a5 Binary files /dev/null and b/docs/platform/7_Connectors/ref-cloud-providers/static/kubernetes-cluster-connector-settings-reference-07.png differ diff --git a/docs/platform/7_Connectors/ref-source-repo-provider/_category_.json b/docs/platform/7_Connectors/ref-source-repo-provider/_category_.json new file mode 100644 index 00000000000..9b6a5736c48 --- /dev/null +++ b/docs/platform/7_Connectors/ref-source-repo-provider/_category_.json @@ -0,0 +1 @@ +{"label": "Code Repo Connectors", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Code Repo Connectors"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "xyexvcc206"}} \ No newline at end of file diff --git a/docs/platform/7_Connectors/ref-source-repo-provider/bitbucket-connector-settings-reference.md b/docs/platform/7_Connectors/ref-source-repo-provider/bitbucket-connector-settings-reference.md new file mode 100644 index 00000000000..f67a844f4bd --- /dev/null +++ b/docs/platform/7_Connectors/ref-source-repo-provider/bitbucket-connector-settings-reference.md @@ -0,0 +1,98 @@ +--- +title: Bitbucket Connector Settings Reference +description: This topic provides settings and permissions for the Bitbucket Connector. +# sidebar_position: 2 +helpdocs_topic_id: iz5tucdwyu +helpdocs_category_id: xyexvcc206 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic provides settings and permissions for the Bitbucket Connector. + +### Limitations + +* Before Harness syncs with your Git repo, it verifies all the connection settings in Harness. If Harness cannot establish a connection, it won't sync with your Git repo. + +Harness supports both Cloud and Data Center (On-Prem) versions of Bitbucket. The following settings are applicable for both versions. + +### Name + +The unique name for this Connector. 
+
+### ID
+
+See [Entity Identifier Reference](../../20_References/entity-identifier-reference.md).
+
+### Description
+
+Text string.
+
+### Tags
+
+See [Tags Reference](../../20_References/tags-reference.md).
+
+### URL Type
+
+Select one type:
+
+* **Account:** Connect to your entire Git account. This enables you to use one Connector for all repos in the account. If you select this, you must provide a repository name to test the connection.
+* **Repository:** Connect to one repo in the account.
+
+### Connection Type
+
+The protocol to use for cloning and authentication. Select one type:
+
+* **HTTPS:** Requires a personal access token.
+* **SSH:** You must use a key in PEM format, not OpenSSH.
+To generate an SSHv2 key, use: `ssh-keygen -t rsa -m PEM`. The `-t rsa` and `-m PEM` options ensure that the key uses the RSA algorithm and is in PEM format. Next, follow the prompts to create the PEM key. For more information, see the [ssh-keygen man page](https://linux.die.net/man/1/ssh-keygen).
+
+### Bitbucket Account URL
+
+The URL for your Git repo. Make sure that it matches the Connection Type option you selected.
+
+If the URL Type is **Repository**, enter the full URL for the repo.
+
+If the URL Type is **Account**, enter the URL without the repo name. You will provide a repo name when you use the Connector.
+
+If the Connection Type is **HTTPS**, enter the URL in the format `https://bitbucket.org//.git`.
+
+### Authentication
+
+Bitbucket repos with read-only access also require a username and password. You can use a password for HTTPS credentials.
+
+If you selected **SSH** as the connection protocol, you must add the **SSH Key** for use with the connection.
+
+If you log into Bitbucket using a Google account, you can create an application password in Bitbucket to use with Harness. For steps on this, see [App passwords](https://confluence.atlassian.com/bitbucket/app-passwords-828781300.html) from Atlassian.
+
+#### Username
+
+The username for the account.
+ +#### Password + +A [Harness Encrypted Text secret](../../6_Security/2-add-use-text-secrets.md) for the credentials of your Bitbucket user account. + +If you have set up Two-Factor Authentication in your Bitbucket account, you need to generate a personal access token in your repo and enter that token in the **Password/Token** field. + +#### SSH Key + +If you selected **SSH** as the connection protocol, you must add the **SSH Key** for use with the connection as a [Harness Encrypted File secret](../../6_Security/3-add-file-secrets.md). For steps to create an SSH Key, see [Add new SSH Key](https://support.atlassian.com/bitbucket-cloud/docs/set-up-an-ssh-key/). + +#### Enable API access + +This option is required for using Git-based triggers, Webhook management, and updating Git statuses. If you are using Harness Git Experience, you will need to use this setting. + +### API Authentication + +#### UserName + +The username for the account. + +You must enter a plain-text username or a username secret for *both* Authentication and API Authentication. You cannot use a plain-text password for one field and a secret for the other. + +#### Personal Access Token + +A [Harness Encrypted Text secret](../../6_Security/2-add-use-text-secrets.md) for the App password of your Bitbucket user account. + +![](./static/bitbucket-connector-settings-reference-05.png) \ No newline at end of file diff --git a/docs/platform/7_Connectors/ref-source-repo-provider/git-connector-settings-reference.md b/docs/platform/7_Connectors/ref-source-repo-provider/git-connector-settings-reference.md new file mode 100644 index 00000000000..f29c5dde635 --- /dev/null +++ b/docs/platform/7_Connectors/ref-source-repo-provider/git-connector-settings-reference.md @@ -0,0 +1,86 @@ +--- +title: Git Connector Settings Reference +description: This topic provides settings and permissions for the Git Connector. 
+# sidebar_position: 2
+helpdocs_topic_id: tbm2hw6pr6
+helpdocs_category_id: xyexvcc206
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This topic provides settings and permissions for the platform-agnostic Git Connector. For Connectors to popular Git platforms like GitHub, see [Code Repo Connectors](https://docs.harness.io/category/code-repo-connectors).
+
+### Limitations
+
+* Before Harness syncs with your Git repo, it verifies that all of the connection settings in Harness are valid. If Harness cannot establish a connection, it won't sync with your Git repo.
+
+### Name
+
+The unique name for this Connector.
+
+### ID
+
+See [Entity Identifier Reference](../../20_References/entity-identifier-reference.md).
+
+### Description
+
+A description of this Connector.
+
+### Tags
+
+See [Tags Reference](../../20_References/tags-reference.md).
+
+### URL Type
+
+You can select **Account** or **Repository**.
+
+You can add a connection to your entire Git account or just a repo in the account. Selecting a Git account enables you to use one Connector for all of your subordinate repositories.
+
+Later, when you test this connection, you will use a repo in the account.
+
+In either case, when you use the Connector later in Harness, you will specify which repo to use.
+
+### Connection Type
+
+You can select **HTTPS** or **SSH** for the connection.
+
+You will need to provide the protocol-relevant URL.
+
+If you use Two-Factor Authentication for your Git repo, you connect over **HTTPS** or **SSH**. **HTTPS** requires a personal access token.
+
+For SSH, ensure that the key is in PEM format, not OpenSSH format. To generate an SSHv2 key, use: `ssh-keygen -t rsa -m PEM`. The `-t rsa` and `-m PEM` options ensure that the key uses the RSA algorithm and is in PEM format. Next, follow the prompts to create the PEM key. For more information, see the [ssh-keygen man page](https://linux.die.net/man/1/ssh-keygen).
+
+#### GitHub deprecated RSA
+
+Starting March 15, 2022, GitHub is fully deprecating RSA with SHA-1.
GitHub will allow ECDSA and Ed25519 to be used. RSA keys uploaded after this date will work with SHA-2 signatures only (RSA keys uploaded before this date will continue to work with SHA-1). See [Improving Git protocol security on GitHub](https://github.blog/2021-09-01-improving-git-protocol-security-github/#when-are-these-changes-effective) from GitHub. + +Generating an SSH key in ECDSA looks like this: + +`ssh-keygen -t ecdsa -b 256 -f /home/user/Documents/ECDSA/key -m pem` + +### Git Account or Repository URL + +The URL for your Git repo. Ensure that it matches the Connection Type option you selected. + +If you selected **Git Repository** in **URL** **Type**, enter the full URL for the repo. + +If you selected **Git Account** in **URL** **Type**, enter the URL without the repo name. When you use this Connector in a Harness setting you will be prompted to provide a repo name. + +### Username + +The username for the account. + +### Password + +A [Harness Encrypted Text](../../6_Security/2-add-use-text-secrets.md) secret for the credentials of your Git user account. + +### SSH Key + +If you selected **SSH** as the connection protocol, you must add the **Username** as `git` and an **SSH Key** for use with the connection as a [Harness Encrypted Text secret](../../6_Security/2-add-use-text-secrets.md). + +### Setup Delegates + +You can select **Connect via any available delegate** or **Connect only via delegates which has all of the following tags**. + +You need to enter **Selectors** to connect via specific delegates. For more information see [Select Delegates with Tags](../../2_Delegates/delegate-guide/select-delegates-with-selectors.md). 
+ diff --git a/docs/platform/7_Connectors/ref-source-repo-provider/git-hub-connector-settings-reference.md b/docs/platform/7_Connectors/ref-source-repo-provider/git-hub-connector-settings-reference.md new file mode 100644 index 00000000000..14654cc01c6 --- /dev/null +++ b/docs/platform/7_Connectors/ref-source-repo-provider/git-hub-connector-settings-reference.md @@ -0,0 +1,115 @@ +--- +title: GitHub Connector Settings Reference +description: This topic provides settings and permissions for the GitHub Connector. You can also use a GitHub App for authentication in a Harness GitHub Connector. See Use a GitHub App in a GitHub Connector. Name… +# sidebar_position: 2 +helpdocs_topic_id: v9sigwjlgo +helpdocs_category_id: xyexvcc206 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic provides settings and permissions for the GitHub Connector. + +You can also use a GitHub App for authentication in a Harness GitHub Connector. See [Use a GitHub App in a GitHub Connector](../git-hub-app-support.md). + +### Name + +The unique name for this Connector. + +### ID + +See [Entity Identifier Reference](../../20_References/entity-identifier-reference.md). + +### Description + +Text string. + +### Tags + +See [Tags Reference](../../20_References/tags-reference.md). + +### URL Type + +You can select Git Account (which is a GitHub **organization**) or Git Repository. + +You can add a connection to your entire Git org or just a repo in the org. Selecting a Git org enables you to use one Connector for all of your subordinate repos. + +Later, when you test this connection, you'll use a repo in the org. + +In either case, when you use the Connector later in Harness, you'll specify which repo to use. + +### Connection Type + +You can select **HTTPS** or **SSH** for the connection. + +You will need to provide the protocol-relevant URL in **URL**. + +If you use Two-Factor Authentication for your Git repo, you connect over **HTTPS** or **SSH**. 
HTTPS connections require a personal access token. + +For SSH, make sure that the key is in PEM format and not OpenSSH format. To generate an SSHv2 key, use:  +`ssh-keygen -t rsa -m PEM`  +The `rsa` argument selects the algorithm, and `-m PEM` ensures that the key is generated in PEM format. +Next, follow the prompts to create the PEM key. See the [ssh-keygen man page](https://linux.die.net/man/1/ssh-keygen) and [Connecting to GitHub with SSH](https://help.github.com/en/github/authenticating-to-github/connecting-to-github-with-ssh). + +Starting March 15, 2022, GitHub is fully deprecating RSA with SHA-1. GitHub will allow ECDSA and Ed25519 to be used. RSA keys uploaded after this date will work with SHA-2 signatures only (RSA keys uploaded before this date will continue to work with SHA-1). See [Improving Git protocol security on GitHub](https://github.blog/2021-09-01-improving-git-protocol-security-github/#when-are-these-changes-effective) from GitHub. + +Generating an SSH key in ECDSA looks like this: + +`ssh-keygen -t ecdsa -b 256 -f /home/user/Documents/ECDSA/key -m pem` + +### GitHub Repository URL + +The URL for a Git org or repo. The URL format must match the [Connection Type](#connection_type) you selected, for example: + +* HTTPS: `https://github.com/wings-software/harness-docs.git`. +* SSH: `git@github.com:wings-software/harness-docs.git`. + +You can get the URL from GitHub using its Code feature: + +![](./static/git-hub-connector-settings-reference-00.png) +If you selected **Git Repository** in [URL Type](#url_type), enter the full URL for the repo with the format `https://github.com/[org-name]/[repo-name]`. + +If you selected **Git Account** in [URL Type](#url_type), enter the URL without the repo name, like `https://github.com/[org-name]`. You will need to provide a repo name before you can use the Connector in Harness. + +### Authentication + +You can use a password/token for HTTPS credentials. Read-only GitHub repos also require a username and password/token. 
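Before saving a token as a Harness secret, you can check which scopes it actually carries. For classic tokens, GitHub reports the granted scopes in the `X-OAuth-Scopes` response header of any authenticated API call. The sketch below parses a sample captured response rather than calling the live API; the header values shown are illustrative, not real credentials.

```shell
# GitHub returns a classic PAT's granted scopes in the X-OAuth-Scopes header.
# Capture it with an authenticated call, for example:
#   curl -sI -u "<username>:<token>" https://api.github.com/user > headers.txt
# Below we parse a sample response instead of hitting the live API.
headers='x-oauth-scopes: repo, user
x-ratelimit-remaining: 59'

# Extract everything after the header name.
scopes=$(printf '%s\n' "$headers" | grep -i '^x-oauth-scopes:' | cut -d' ' -f2-)
echo "granted scopes: $scopes"
```

If the output does not include the scopes Harness needs, regenerate the token before creating the Connector.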
+ +If you selected **SSH** as the connection protocol, you must add the **SSH Key** to use with the connection.  + +### Username + +Your personal GitHub account username. + +### Personal Access Token + +A [Harness Encrypted Text secret](../../6_Security/2-add-use-text-secrets.md) for the credentials of your GitHub user account. + +A Personal Access Token (PAT) is required if your GitHub authentication uses 2FA. + +Typically, you can validate your token from the command line before using it in Harness. For example: + +`curl -i https://api.github.com -u <username>:<token>` + +If you have Two-Factor Authentication set up in your Git repo, then you need to generate a personal access token in your repo and enter that token in the **Personal Access Token** field. In GitHub, you can set up the personal access token in your GitHub settings. + +#### PAT Permissions + +To use a personal access token with a GitHub organization that uses SAML single sign-on (SSO), you must first authorize the token. See [Authorizing a personal access token for use with SAML single sign-on](https://docs.github.com/en/enterprise-cloud@latest/authentication/authenticating-with-saml-single-sign-on/authorizing-a-personal-access-token-for-use-with-saml-single-sign-on) from GitHub. + +* The GitHub user account used to create the Personal Access Token must have admin permissions on the repo. +* GitHub doesn't provide a way of scoping a PAT for read-only access to repos. You must select the following permissions: + +![](./static/git-hub-connector-settings-reference-01.png) +### SSH Key + +If you selected **SSH** as the connection protocol, you must add the **SSH Key** to use with the connection as a [Harness Encrypted Text secret](../../6_Security/2-add-use-text-secrets.md). For detailed steps to create an SSH Key, see [Add new SSH Key](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account). 
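Because the connection fails with OpenSSH-native keys, it can save time to confirm the key's format before storing it as a secret. The first line of the private key file identifies the format. This is a minimal self-contained sketch: it writes a sample header to a temp file, whereas a real check would point `KEYFILE` at your actual key path.

```shell
# PEM RSA keys start with '-----BEGIN RSA PRIVATE KEY-----';
# OpenSSH-native keys start with '-----BEGIN OPENSSH PRIVATE KEY-----'.
# A real check would set KEYFILE to your key (e.g. ~/.ssh/id_rsa); here we
# write a sample header to a temp file so the sketch is self-contained.
KEYFILE=$(mktemp)
printf -- '-----BEGIN RSA PRIVATE KEY-----\n' > "$KEYFILE"

first_line=$(head -n 1 "$KEYFILE")
if [ "$first_line" = '-----BEGIN OPENSSH PRIVATE KEY-----' ]; then
  echo 'OpenSSH format: regenerate with ssh-keygen -t rsa -m PEM'
else
  echo "key header: $first_line"
fi
```

A key in OpenSSH format must be regenerated (or exported) in PEM before it will work in the Connector.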
+ +Harness also supports [GitHub deploy keys](https://docs.github.com/en/developers/overview/managing-deploy-keys#deploy-keys). Deploy keys grant access to a single repo. Using a deploy key ensures that the Connector only works with the specific repo you selected in **URL Type**. + +### Enable API access + +This option is required for using Git-based triggers, Webhooks management, and updating Git statuses. + +You can use the same token you used in **Personal Access Token**. + +#### API Authentication + +You should use the same [Personal Access Token](#password_personal_access_token) for both Authentication and API Authentication. + diff --git a/docs/platform/7_Connectors/ref-source-repo-provider/git-lab-connector-settings-reference.md b/docs/platform/7_Connectors/ref-source-repo-provider/git-lab-connector-settings-reference.md new file mode 100644 index 00000000000..47b11663170 --- /dev/null +++ b/docs/platform/7_Connectors/ref-source-repo-provider/git-lab-connector-settings-reference.md @@ -0,0 +1,98 @@ +--- +title: GitLab Connector Settings Reference +description: This topic provides settings and permissions for the GitLab Connector. Limitations. Before Harness syncs with your Git repo, it will confirm that all Harness' settings are in a valid state. If a conn… +# sidebar_position: 2 +helpdocs_topic_id: 5abnoghjgo +helpdocs_category_id: xyexvcc206 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic provides settings and permissions for the GitLab Connector. + +### Limitations + +* Before Harness syncs with your Git repo, it will confirm that all Harness' settings are in a valid state. If a connection is not working, Harness will not sync with your Git repo. + +### Name + +The unique name for this Connector. + +### ID + +See [Entity Identifier Reference](../../20_References/entity-identifier-reference.md). + +### Description + +Text string. + +### Tags + +See [Tags Reference](../../20_References/tags-reference.md). 
+ +### URL Type + +You can select Git Account or Git Repository. + +You can add a connection to your entire Git account or just a repo in the account. Selecting a Git account enables you to use one Connector for all of your subordinate repos. + +Later, when you test this connection, you will use a repo in the account. + +In either case, when you use the Connector later in Harness, you will specify which repo to use. + +### Connection Type + +You can select **HTTPS** or **SSH** for the connection. + +You will need to provide the protocol-relevant URL in **GitLab Account URL**. + +If you use Two-Factor Authentication for your Git repo, you connect over **HTTPS** or **SSH**. HTTPS requires a personal access token. + +For SSH, ensure that the key is in PEM format, not OpenSSH format. To generate an SSHv2 key, use: `ssh-keygen -t rsa -m PEM`. The `rsa` argument selects the algorithm, and `-m PEM` ensures that the key is generated in PEM format. Next, follow the prompts to create the PEM key. For more information, see the [ssh-keygen man page](https://linux.die.net/man/1/ssh-keygen). + +To sync with GitLab, you will need to generate an SSH key pair and add the SSH key to your GitLab account. For more information, see [Generating a new SSH key pair](https://gitlab.com/help/ssh/README#generating-a-new-ssh-key-pair) from GitLab. + +### GitLab Account or Repo URL + +The URL for your Git repo. Ensure that it matches the Connection Type option you selected. + +If you selected **Repository** in **Type**, enter the full URL for the repo. For example: `https://gitlab.com/John_User/harness.git`. + +You can get the URL from the **Clone** button in your repo. + +![](./static/git-lab-connector-settings-reference-03.png) +If you selected **Account** in **Type**, enter the URL without the repo name. When you use this Connector in a Harness setting, you will be prompted to provide a repo name. + +### Authentication + +You can use a password/token for HTTPS credentials. Typically, a token is used. 
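Since the Connector needs a token with specific scopes (`api`, `read_repository`, and `write_repository`), a quick check before saving the secret avoids a failed connection test. The sketch below validates a scope list offline; the `granted` value is sample data standing in for your token's real scopes (which you would look up in GitLab under your personal access tokens, or confirm the token authenticates at all with, for example, `curl --header "PRIVATE-TOKEN: <token>" https://gitlab.com/api/v4/user`).

```shell
# Scopes the Harness GitLab Connector requires.
required="api read_repository write_repository"
# Sample data: replace with the scopes your token actually has.
granted="api read_repository write_repository"

missing=""
for s in $required; do
  case " $granted " in
    *" $s "*) ;;                      # scope present
    *) missing="$missing $s" ;;       # scope absent
  esac
done

if [ -z "$missing" ]; then
  echo "all required scopes present"
else
  echo "missing:$missing"
fi
```

If any scope is missing, create a new token with all three scopes rather than editing the Connector later.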
+ +If you selected **SSH** as the connection protocol, you must add the **SSH Key** for use with the connection.  + +### Username + +Enter the username **git**. Do not enter any other value. + +**git** is the only value you should use in **Username**. + +### Password/Personal Access Token + +Enter a [Harness Encrypted Text secret](../../6_Security/2-add-use-text-secrets.md) for the credentials of your GitLab user account. + +Typically, a Personal Access Token is used. See [Personal Access Tokens](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html) from GitLab. + +The Personal Access Token requirements for Harness are: `api`, `read_repository`, `write_repository`. + +![](./static/git-lab-connector-settings-reference-04.png) +### SSH Key + +If you selected **SSH** as the connection protocol, you must add the **SSH Key** for use with the connection as a [Harness Encrypted Text secret](../../6_Security/2-add-use-text-secrets.md). + +See [Use SSH keys to communicate with GitLab](https://docs.gitlab.com/ee/user/ssh.html) from GitLab. + +### Enable API access + +This option is required for using Git-based triggers, Git Sync, and updating Git statuses. + +You'll need this setting if you use [Harness Git Experience](https://harness.helpdocs.io/article/grfeel98am). + +Simply use the same Personal Access Token you created earlier. + diff --git a/docs/platform/7_Connectors/ref-source-repo-provider/http-helm-repo-connector-settings-reference.md b/docs/platform/7_Connectors/ref-source-repo-provider/http-helm-repo-connector-settings-reference.md new file mode 100644 index 00000000000..e3af089b1e9 --- /dev/null +++ b/docs/platform/7_Connectors/ref-source-repo-provider/http-helm-repo-connector-settings-reference.md @@ -0,0 +1,65 @@ +--- +title: HTTP Helm Repo Connector Settings Reference +description: This topic provides settings and permissions for the HTTP Helm Repo Connector. 
+# sidebar_position: 2 +helpdocs_topic_id: a0jotsvsi7 +helpdocs_category_id: xyexvcc206 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic provides settings and permissions for the HTTP Helm Repo Connector. + +You can add a Helm Chart Repository as an Artifact Server and then use it in Harness Kubernetes and Helm deployments. See [Helm CD Quickstart](https://docs.harness.io/article/cifa2yb19a-helm-cd-quickstart). + +A Helm chart repository is an HTTP server that houses an **index.yaml** file and, if needed, packaged charts. For details, see [The Chart Repository Guide](https://helm.sh/docs/topics/chart_repository/) from Helm. + +For instructions on how to use this Connector to perform specific tasks, see [Helm CD Quickstart](https://docs.harness.io/article/cifa2yb19a-helm-cd-quickstart). + + +### Limitations + +For Helm charts stored in repos such as **Amazon S3** or **GCS** (Google Cloud Storage), you will need a Cloud Provider for that account. For more information, see [Cloud Platform Connectors](https://docs.harness.io/category/cloud-platform-connectors). + +### Name + +The unique name for this Connector. + +### ID + +See [Entity Identifier Reference](../../20_References/entity-identifier-reference.md). + +### Description + +Text string. + +### Tags + +See [Tags Reference](../../20_References/tags-reference.md). + +### Helm Repository URL + +The URL of the chart repo. + +Helm Hub at `https://hub.helm.sh` is not a Helm repo. It is a website for discovery and documentation. While it does list charts for deployments such as cluster-autoscaler, the actual Helm repo for this and most charts is `https://kubernetes-charts.storage.googleapis.com`. + +If you're having trouble connecting, try adding a trailing slash (`/`) to the URL, like `https://nexus3.dev.example.io/repository/test-helm/`. + +Some chart servers, like Nexus, require a trailing slash. 
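The trailing slash matters because clients resolve `index.yaml` relative to the repo URL. A small sketch of normalizing the URL before use (the Nexus URL is the example value from above, not a live server):

```shell
# Ensure the chart repo URL ends with '/' so index.yaml resolves correctly
# on servers (like Nexus) that require the trailing slash.
REPO_URL="https://nexus3.dev.example.io/repository/test-helm"
case "$REPO_URL" in
  */) ;;                        # already has a trailing slash
  *)  REPO_URL="$REPO_URL/" ;;  # append one
esac

INDEX_URL="${REPO_URL}index.yaml"
echo "$INDEX_URL"
# To verify the repo actually serves an index (against a real server):
#   curl -fsS "$INDEX_URL" | head -n 3
```

If the `curl` check fails with the normalized URL, the server is not a valid chart repository at that path.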
+ +![](./static/http-helm-repo-connector-settings-reference-02.png) +### Username and Password + +From Helm: + + +> Note: For Helm 2.0.0, chart repositories do not have any intrinsic authentication. There is an issue tracking progress in GitHub. + + +> Because a chart repository can be any HTTP server that can serve YAML and tar files and can answer GET requests, you have a plethora of options when it comes down to hosting your own chart repository. For example, you can use a Google Cloud Storage (GCS) bucket, Amazon S3 bucket, Github Pages, or even create your own web server. + +If the charts are backed by HTTP basic authentication, you can also supply the username and password. See [Share your charts with others](https://helm.sh/docs/topics/chart_repository/#share-your-charts-with-others) from Helm. + +### See also + +* [AWS Connector Settings Reference](../ref-cloud-providers/aws-connector-settings-reference.md) +* [Google Cloud Platform (GCP) Connector Settings Reference](../ref-cloud-providers/gcs-connector-settings-reference.md) + diff --git a/docs/platform/7_Connectors/ref-source-repo-provider/source-code-manager-settings.md b/docs/platform/7_Connectors/ref-source-repo-provider/source-code-manager-settings.md new file mode 100644 index 00000000000..a812ddbc1d8 --- /dev/null +++ b/docs/platform/7_Connectors/ref-source-repo-provider/source-code-manager-settings.md @@ -0,0 +1,66 @@ +--- +title: Source Code Manager Settings +description: Currently, this feature is in Beta and behind a Feature Flag. Contact Harness Support to enable the feature. Feature Flags can only be removed for Harness Professional and Essentials editions. Once t… +# sidebar_position: 2 +helpdocs_topic_id: kqik8km5eb +helpdocs_category_id: xyexvcc206 +helpdocs_is_private: false +helpdocs_is_published: true +--- + + +:::note +Currently, this feature is in Beta and behind a Feature Flag. 
Contact [Harness Support](https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=support@harness.io) to enable the feature. Feature Flags can only be removed for Harness Professional and Essentials editions. Once the feature is released to a general audience, it is available for Trial and Community Editions. + +::: + +This topic describes the settings in **My Source Code Manager**. It's a reference you can use when you are trying to find the requirements and options for a specific **My Source Code Manager** setting. + +For instructions on setting up and using My Source Code Manager, see [Add Source Code Managers](https://docs.harness.io/article/p92awqts2x-add-source-code-managers). + +**My Source Code Manager** is required for Harness Git Experience. For details on Harness Git Experience, see [Harness Git Experience Overview](../../10_Git-Experience/git-experience-overview.md). + + +### Source Code Manager Overview + +In Harness Git Experience, a Harness Project is synced with a Git provider and has multiple Harness Users making commits to multiple branches. + +It can be difficult to audit all the Users making commits in the same Project without some way of identifying them in both Harness and your Git provider. Without that, all commits look like they came from the same person. + +A **Source Code Manager** (SCM) uses your personal Git account information to identify the commits you make. A Source Code Manager is useful for auditing who is making changes to a Project, Pipeline, Connector, etc. + +**A Source Code Manager is mandatory for Harness Git Experience.** If you don’t have an SCM when you try to enable Harness Git Experience, Harness will warn you and require you to set one up. + +### GitHub Authentication + +* **Supported Methods:** username and Personal Access Token (PAT). 
For information on creating a PAT in GitHub, see [Creating a personal access token](https://docs.github.com/en/github/authenticating-to-github/keeping-your-account-and-data-secure/creating-a-personal-access-token). +* **Scopes:** select all the **repo** and **user** options.![](./static/source-code-manager-settings-06.png) + +Your GitHub Personal Access Token is stored as a Harness secret that is private to you; only you have access to it. This secret cannot be accessed or referenced by any other user. + +### Bitbucket Authentication + +* **Supported Methods:** + + Username and Password. This is the Bitbucket username and App password in your Bitbucket account settings.![](./static/source-code-manager-settings-07.png) + + SSH Key. This is the private key. The corresponding public key is added to your Bitbucket account settings.![](./static/source-code-manager-settings-08.png) +* **See also:** [Set up an SSH key](https://support.atlassian.com/bitbucket-cloud/docs/set-up-an-ssh-key/) from Bitbucket. + +### GitLab Authentication + +* **Supported Methods:** + + Username and Password. + + Username and Personal Access Token (PAT). + + Kerberos. + + SSH Key. This is the private key. The corresponding public key is added to your GitLab account settings.![](./static/source-code-manager-settings-09.png) +* **Scopes:** select **api**.![](./static/source-code-manager-settings-10.png) +* **See also:** [Set up your organization](https://docs.gitlab.com/ee/topics/set_up_organization.html) from GitLab. + +### Azure DevOps Authentication + +* **Supported Methods:** + + Username and password. + + Username and Personal Access Token (PAT). + + SSH key. +* **Scopes:** for Personal Access Tokens, **Code: Full**.![](./static/source-code-manager-settings-11.png) +* **See also:** [View permissions for yourself or others](https://docs.microsoft.com/en-us/azure/devops/organizations/security/view-permissions) from Azure. 
+ diff --git a/docs/platform/7_Connectors/ref-source-repo-provider/static/bitbucket-connector-settings-reference-05.png b/docs/platform/7_Connectors/ref-source-repo-provider/static/bitbucket-connector-settings-reference-05.png new file mode 100644 index 00000000000..181470e78ca Binary files /dev/null and b/docs/platform/7_Connectors/ref-source-repo-provider/static/bitbucket-connector-settings-reference-05.png differ diff --git a/docs/platform/7_Connectors/ref-source-repo-provider/static/git-hub-connector-settings-reference-00.png b/docs/platform/7_Connectors/ref-source-repo-provider/static/git-hub-connector-settings-reference-00.png new file mode 100644 index 00000000000..9bfff871127 Binary files /dev/null and b/docs/platform/7_Connectors/ref-source-repo-provider/static/git-hub-connector-settings-reference-00.png differ diff --git a/docs/platform/7_Connectors/ref-source-repo-provider/static/git-hub-connector-settings-reference-01.png b/docs/platform/7_Connectors/ref-source-repo-provider/static/git-hub-connector-settings-reference-01.png new file mode 100644 index 00000000000..356c6d4a9ac Binary files /dev/null and b/docs/platform/7_Connectors/ref-source-repo-provider/static/git-hub-connector-settings-reference-01.png differ diff --git a/docs/platform/7_Connectors/ref-source-repo-provider/static/git-lab-connector-settings-reference-03.png b/docs/platform/7_Connectors/ref-source-repo-provider/static/git-lab-connector-settings-reference-03.png new file mode 100644 index 00000000000..a2403ef2e4e Binary files /dev/null and b/docs/platform/7_Connectors/ref-source-repo-provider/static/git-lab-connector-settings-reference-03.png differ diff --git a/docs/platform/7_Connectors/ref-source-repo-provider/static/git-lab-connector-settings-reference-04.png b/docs/platform/7_Connectors/ref-source-repo-provider/static/git-lab-connector-settings-reference-04.png new file mode 100644 index 00000000000..d9af520f899 Binary files /dev/null and 
b/docs/platform/7_Connectors/ref-source-repo-provider/static/git-lab-connector-settings-reference-04.png differ diff --git a/docs/platform/7_Connectors/ref-source-repo-provider/static/http-helm-repo-connector-settings-reference-02.png b/docs/platform/7_Connectors/ref-source-repo-provider/static/http-helm-repo-connector-settings-reference-02.png new file mode 100644 index 00000000000..f33a4a59dd4 Binary files /dev/null and b/docs/platform/7_Connectors/ref-source-repo-provider/static/http-helm-repo-connector-settings-reference-02.png differ diff --git a/docs/platform/7_Connectors/ref-source-repo-provider/static/source-code-manager-settings-06.png b/docs/platform/7_Connectors/ref-source-repo-provider/static/source-code-manager-settings-06.png new file mode 100644 index 00000000000..3cfdbf5c77b Binary files /dev/null and b/docs/platform/7_Connectors/ref-source-repo-provider/static/source-code-manager-settings-06.png differ diff --git a/docs/platform/7_Connectors/ref-source-repo-provider/static/source-code-manager-settings-07.png b/docs/platform/7_Connectors/ref-source-repo-provider/static/source-code-manager-settings-07.png new file mode 100644 index 00000000000..181470e78ca Binary files /dev/null and b/docs/platform/7_Connectors/ref-source-repo-provider/static/source-code-manager-settings-07.png differ diff --git a/docs/platform/7_Connectors/ref-source-repo-provider/static/source-code-manager-settings-08.png b/docs/platform/7_Connectors/ref-source-repo-provider/static/source-code-manager-settings-08.png new file mode 100644 index 00000000000..cee6c05c05f Binary files /dev/null and b/docs/platform/7_Connectors/ref-source-repo-provider/static/source-code-manager-settings-08.png differ diff --git a/docs/platform/7_Connectors/ref-source-repo-provider/static/source-code-manager-settings-09.png b/docs/platform/7_Connectors/ref-source-repo-provider/static/source-code-manager-settings-09.png new file mode 100644 index 00000000000..3c5685ff3a9 Binary files /dev/null and 
b/docs/platform/7_Connectors/ref-source-repo-provider/static/source-code-manager-settings-09.png differ diff --git a/docs/platform/7_Connectors/ref-source-repo-provider/static/source-code-manager-settings-10.png b/docs/platform/7_Connectors/ref-source-repo-provider/static/source-code-manager-settings-10.png new file mode 100644 index 00000000000..58c5846af45 Binary files /dev/null and b/docs/platform/7_Connectors/ref-source-repo-provider/static/source-code-manager-settings-10.png differ diff --git a/docs/platform/7_Connectors/ref-source-repo-provider/static/source-code-manager-settings-11.png b/docs/platform/7_Connectors/ref-source-repo-provider/static/source-code-manager-settings-11.png new file mode 100644 index 00000000000..d1e5a128acd Binary files /dev/null and b/docs/platform/7_Connectors/ref-source-repo-provider/static/source-code-manager-settings-11.png differ diff --git a/docs/platform/7_Connectors/static/add-a-git-hub-connector-34.png b/docs/platform/7_Connectors/static/add-a-git-hub-connector-34.png new file mode 100644 index 00000000000..c7a03bfd041 Binary files /dev/null and b/docs/platform/7_Connectors/static/add-a-git-hub-connector-34.png differ diff --git a/docs/platform/7_Connectors/static/add-a-git-hub-connector-35.png b/docs/platform/7_Connectors/static/add-a-git-hub-connector-35.png new file mode 100644 index 00000000000..bc7aad4707c Binary files /dev/null and b/docs/platform/7_Connectors/static/add-a-git-hub-connector-35.png differ diff --git a/docs/platform/7_Connectors/static/add-a-git-hub-connector-36.png b/docs/platform/7_Connectors/static/add-a-git-hub-connector-36.png new file mode 100644 index 00000000000..2a618611230 Binary files /dev/null and b/docs/platform/7_Connectors/static/add-a-git-hub-connector-36.png differ diff --git a/docs/platform/7_Connectors/static/add-a-git-hub-connector-37.png b/docs/platform/7_Connectors/static/add-a-git-hub-connector-37.png new file mode 100644 index 00000000000..a456c1cb305 Binary files /dev/null and 
b/docs/platform/7_Connectors/static/add-a-git-hub-connector-37.png differ diff --git a/docs/platform/7_Connectors/static/add-a-git-hub-connector-38.png b/docs/platform/7_Connectors/static/add-a-git-hub-connector-38.png new file mode 100644 index 00000000000..1c436bd2020 Binary files /dev/null and b/docs/platform/7_Connectors/static/add-a-git-hub-connector-38.png differ diff --git a/docs/platform/7_Connectors/static/add-a-git-hub-connector-39.png b/docs/platform/7_Connectors/static/add-a-git-hub-connector-39.png new file mode 100644 index 00000000000..c60eb731e00 Binary files /dev/null and b/docs/platform/7_Connectors/static/add-a-git-hub-connector-39.png differ diff --git a/docs/platform/7_Connectors/static/add-a-git-hub-connector-40.png b/docs/platform/7_Connectors/static/add-a-git-hub-connector-40.png new file mode 100644 index 00000000000..30ef3d3734a Binary files /dev/null and b/docs/platform/7_Connectors/static/add-a-git-hub-connector-40.png differ diff --git a/docs/platform/7_Connectors/static/add-a-git-hub-connector-41.png b/docs/platform/7_Connectors/static/add-a-git-hub-connector-41.png new file mode 100644 index 00000000000..55252dd6848 Binary files /dev/null and b/docs/platform/7_Connectors/static/add-a-git-hub-connector-41.png differ diff --git a/docs/platform/7_Connectors/static/add-a-kubernetes-cluster-connector-06.png b/docs/platform/7_Connectors/static/add-a-kubernetes-cluster-connector-06.png new file mode 100644 index 00000000000..2723cb14355 Binary files /dev/null and b/docs/platform/7_Connectors/static/add-a-kubernetes-cluster-connector-06.png differ diff --git a/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-63.png b/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-63.png new file mode 100644 index 00000000000..3016a32af82 Binary files /dev/null and b/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-63.png differ diff --git 
a/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-64.png b/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-64.png new file mode 100644 index 00000000000..d282f0b332c Binary files /dev/null and b/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-64.png differ diff --git a/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-65.png b/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-65.png new file mode 100644 index 00000000000..6ff6489eb60 Binary files /dev/null and b/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-65.png differ diff --git a/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-66.png b/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-66.png new file mode 100644 index 00000000000..1e066b4fc30 Binary files /dev/null and b/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-66.png differ diff --git a/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-67.png b/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-67.png new file mode 100644 index 00000000000..d48c158e074 Binary files /dev/null and b/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-67.png differ diff --git a/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-68.png b/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-68.png new file mode 100644 index 00000000000..4eaefe897fa Binary files /dev/null and b/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-68.png differ diff --git a/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-69.png b/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-69.png new file mode 100644 index 00000000000..33af7c79f30 Binary files /dev/null and b/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-69.png differ diff --git 
a/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-70.png b/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-70.png new file mode 100644 index 00000000000..33af7c79f30 Binary files /dev/null and b/docs/platform/7_Connectors/static/add-a-microsoft-azure-connector-70.png differ diff --git a/docs/platform/7_Connectors/static/add-aws-connector-77.png b/docs/platform/7_Connectors/static/add-aws-connector-77.png new file mode 100644 index 00000000000..85f95066c61 Binary files /dev/null and b/docs/platform/7_Connectors/static/add-aws-connector-77.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-a-azure-repo-00.png b/docs/platform/7_Connectors/static/connect-to-a-azure-repo-00.png new file mode 100644 index 00000000000..1e73e98d1e8 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-a-azure-repo-00.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-a-azure-repo-01.png b/docs/platform/7_Connectors/static/connect-to-a-azure-repo-01.png new file mode 100644 index 00000000000..171b39e9702 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-a-azure-repo-01.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-a-azure-repo-02.png b/docs/platform/7_Connectors/static/connect-to-a-azure-repo-02.png new file mode 100644 index 00000000000..a94670d10a3 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-a-azure-repo-02.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-a-azure-repo-03.png b/docs/platform/7_Connectors/static/connect-to-a-azure-repo-03.png new file mode 100644 index 00000000000..dcf98f8f24a Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-a-azure-repo-03.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-a-azure-repo-04.png b/docs/platform/7_Connectors/static/connect-to-a-azure-repo-04.png new file mode 100644 index 00000000000..dcf98f8f24a Binary files 
/dev/null and b/docs/platform/7_Connectors/static/connect-to-a-azure-repo-04.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-a-azure-repo-05.png b/docs/platform/7_Connectors/static/connect-to-a-azure-repo-05.png new file mode 100644 index 00000000000..a1e58e7ed85 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-a-azure-repo-05.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-code-repo-08.png b/docs/platform/7_Connectors/static/connect-to-code-repo-08.png new file mode 100644 index 00000000000..c794b355bcf Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-code-repo-08.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-code-repo-09.png b/docs/platform/7_Connectors/static/connect-to-code-repo-09.png new file mode 100644 index 00000000000..50a2f427b89 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-code-repo-09.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-google-cloud-platform-gcp-07.png b/docs/platform/7_Connectors/static/connect-to-google-cloud-platform-gcp-07.png new file mode 100644 index 00000000000..c3e0aa1d729 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-google-cloud-platform-gcp-07.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-harness-container-image-registry-using-docker-connector-45.png b/docs/platform/7_Connectors/static/connect-to-harness-container-image-registry-using-docker-connector-45.png new file mode 100644 index 00000000000..6d25fefc0de Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-harness-container-image-registry-using-docker-connector-45.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-harness-container-image-registry-using-docker-connector-46.png b/docs/platform/7_Connectors/static/connect-to-harness-container-image-registry-using-docker-connector-46.png new file mode 100644 index 
00000000000..265f4001865 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-harness-container-image-registry-using-docker-connector-46.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-harness-container-image-registry-using-docker-connector-47.png b/docs/platform/7_Connectors/static/connect-to-harness-container-image-registry-using-docker-connector-47.png new file mode 100644 index 00000000000..81c371b28fd Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-harness-container-image-registry-using-docker-connector-47.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-harness-container-image-registry-using-docker-connector-48.png b/docs/platform/7_Connectors/static/connect-to-harness-container-image-registry-using-docker-connector-48.png new file mode 100644 index 00000000000..c477cc7c900 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-harness-container-image-registry-using-docker-connector-48.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-harness-container-image-registry-using-docker-connector-49.png b/docs/platform/7_Connectors/static/connect-to-harness-container-image-registry-using-docker-connector-49.png new file mode 100644 index 00000000000..b28e3ffe96c Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-harness-container-image-registry-using-docker-connector-49.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-jenkins-10.png b/docs/platform/7_Connectors/static/connect-to-jenkins-10.png new file mode 100644 index 00000000000..c2ddc1c0a49 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-jenkins-10.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-jira-42.png b/docs/platform/7_Connectors/static/connect-to-jira-42.png new file mode 100644 index 00000000000..d91bdede0f9 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-jira-42.png 
differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-11.png b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-11.png new file mode 100644 index 00000000000..22a9450b250 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-11.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-12.png b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-12.png new file mode 100644 index 00000000000..6037a745900 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-12.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-13.png b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-13.png new file mode 100644 index 00000000000..c21cab48769 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-13.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-14.png b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-14.png new file mode 100644 index 00000000000..ab9cf301cb9 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-14.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-15.png b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-15.png new file mode 100644 index 00000000000..f75ce49fa94 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-15.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-16.png b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-16.png new file mode 100644 index 00000000000..987d144f01c 
Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-16.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-17.png b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-17.png new file mode 100644 index 00000000000..09d5177d9f6 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-17.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-18.png b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-18.png new file mode 100644 index 00000000000..fb90d998ee4 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-18.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-19.png b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-19.png new file mode 100644 index 00000000000..fb90d998ee4 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-19.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-20.png b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-20.png new file mode 100644 index 00000000000..082dc702051 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-20.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-21.png b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-21.png new file mode 100644 index 00000000000..082dc702051 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-21.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-22.png 
b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-22.png new file mode 100644 index 00000000000..14ba7488c9e Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-22.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-23.png b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-23.png new file mode 100644 index 00000000000..34834d1817a Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-23.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-24.png b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-24.png new file mode 100644 index 00000000000..bd4075f1151 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-24.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-25.png b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-25.png new file mode 100644 index 00000000000..9a58fb0178f Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-25.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-26.png b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-26.png new file mode 100644 index 00000000000..1a3ba606da9 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-26.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-27.png b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-27.png new file mode 100644 index 00000000000..420ff61f477 Binary files /dev/null and 
b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-27.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-28.png b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-28.png new file mode 100644 index 00000000000..09bfaad7d15 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-28.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-29.png b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-29.png new file mode 100644 index 00000000000..d8f4c7421b8 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-29.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-30.png b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-30.png new file mode 100644 index 00000000000..d8f4c7421b8 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-30.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-31.png b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-31.png new file mode 100644 index 00000000000..3b821d43d31 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-31.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-32.png b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-32.png new file mode 100644 index 00000000000..3b821d43d31 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-32.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-33.png 
b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-33.png new file mode 100644 index 00000000000..a144cdb848e Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-monitoring-and-logging-systems-33.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-service-now-43.png b/docs/platform/7_Connectors/static/connect-to-service-now-43.png new file mode 100644 index 00000000000..78a8935c261 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-service-now-43.png differ diff --git a/docs/platform/7_Connectors/static/connect-to-service-now-44.png b/docs/platform/7_Connectors/static/connect-to-service-now-44.png new file mode 100644 index 00000000000..329d706ee01 Binary files /dev/null and b/docs/platform/7_Connectors/static/connect-to-service-now-44.png differ diff --git a/docs/platform/7_Connectors/static/git-hub-app-support-50.png b/docs/platform/7_Connectors/static/git-hub-app-support-50.png new file mode 100644 index 00000000000..7828b7c6444 Binary files /dev/null and b/docs/platform/7_Connectors/static/git-hub-app-support-50.png differ diff --git a/docs/platform/7_Connectors/static/git-hub-app-support-51.png b/docs/platform/7_Connectors/static/git-hub-app-support-51.png new file mode 100644 index 00000000000..1c214fe3537 Binary files /dev/null and b/docs/platform/7_Connectors/static/git-hub-app-support-51.png differ diff --git a/docs/platform/7_Connectors/static/git-hub-app-support-52.png b/docs/platform/7_Connectors/static/git-hub-app-support-52.png new file mode 100644 index 00000000000..2ede2b51baa Binary files /dev/null and b/docs/platform/7_Connectors/static/git-hub-app-support-52.png differ diff --git a/docs/platform/7_Connectors/static/git-hub-app-support-53.png b/docs/platform/7_Connectors/static/git-hub-app-support-53.png new file mode 100644 index 00000000000..2c09302b6ba Binary files /dev/null and b/docs/platform/7_Connectors/static/git-hub-app-support-53.png differ diff 
--git a/docs/platform/7_Connectors/static/git-hub-app-support-54.png b/docs/platform/7_Connectors/static/git-hub-app-support-54.png new file mode 100644 index 00000000000..df0e34d5377 Binary files /dev/null and b/docs/platform/7_Connectors/static/git-hub-app-support-54.png differ diff --git a/docs/platform/7_Connectors/static/git-hub-app-support-55.png b/docs/platform/7_Connectors/static/git-hub-app-support-55.png new file mode 100644 index 00000000000..db579190bd5 Binary files /dev/null and b/docs/platform/7_Connectors/static/git-hub-app-support-55.png differ diff --git a/docs/platform/7_Connectors/static/git-hub-app-support-56.png b/docs/platform/7_Connectors/static/git-hub-app-support-56.png new file mode 100644 index 00000000000..b85879b2540 Binary files /dev/null and b/docs/platform/7_Connectors/static/git-hub-app-support-56.png differ diff --git a/docs/platform/7_Connectors/static/git-hub-app-support-57.png b/docs/platform/7_Connectors/static/git-hub-app-support-57.png new file mode 100644 index 00000000000..91834b1dd42 Binary files /dev/null and b/docs/platform/7_Connectors/static/git-hub-app-support-57.png differ diff --git a/docs/platform/7_Connectors/static/git-hub-app-support-58.png b/docs/platform/7_Connectors/static/git-hub-app-support-58.png new file mode 100644 index 00000000000..df0e34d5377 Binary files /dev/null and b/docs/platform/7_Connectors/static/git-hub-app-support-58.png differ diff --git a/docs/platform/7_Connectors/static/git-hub-app-support-59.png b/docs/platform/7_Connectors/static/git-hub-app-support-59.png new file mode 100644 index 00000000000..db579190bd5 Binary files /dev/null and b/docs/platform/7_Connectors/static/git-hub-app-support-59.png differ diff --git a/docs/platform/7_Connectors/static/git-hub-app-support-60.png b/docs/platform/7_Connectors/static/git-hub-app-support-60.png new file mode 100644 index 00000000000..c2851571712 Binary files /dev/null and b/docs/platform/7_Connectors/static/git-hub-app-support-60.png differ 
diff --git a/docs/platform/7_Connectors/static/git-hub-app-support-61.png b/docs/platform/7_Connectors/static/git-hub-app-support-61.png new file mode 100644 index 00000000000..6cfd8faeded Binary files /dev/null and b/docs/platform/7_Connectors/static/git-hub-app-support-61.png differ diff --git a/docs/platform/7_Connectors/static/git-hub-app-support-62.png b/docs/platform/7_Connectors/static/git-hub-app-support-62.png new file mode 100644 index 00000000000..7eecd74e289 Binary files /dev/null and b/docs/platform/7_Connectors/static/git-hub-app-support-62.png differ diff --git a/docs/platform/7_Connectors/static/using-ibm-registry-to-create-a-docker-connector-71.png b/docs/platform/7_Connectors/static/using-ibm-registry-to-create-a-docker-connector-71.png new file mode 100644 index 00000000000..45049639114 Binary files /dev/null and b/docs/platform/7_Connectors/static/using-ibm-registry-to-create-a-docker-connector-71.png differ diff --git a/docs/platform/7_Connectors/static/using-ibm-registry-to-create-a-docker-connector-72.png b/docs/platform/7_Connectors/static/using-ibm-registry-to-create-a-docker-connector-72.png new file mode 100644 index 00000000000..265f4001865 Binary files /dev/null and b/docs/platform/7_Connectors/static/using-ibm-registry-to-create-a-docker-connector-72.png differ diff --git a/docs/platform/7_Connectors/static/using-ibm-registry-to-create-a-docker-connector-73.png b/docs/platform/7_Connectors/static/using-ibm-registry-to-create-a-docker-connector-73.png new file mode 100644 index 00000000000..eccaadac0ec Binary files /dev/null and b/docs/platform/7_Connectors/static/using-ibm-registry-to-create-a-docker-connector-73.png differ diff --git a/docs/platform/7_Connectors/static/using-ibm-registry-to-create-a-docker-connector-74.png b/docs/platform/7_Connectors/static/using-ibm-registry-to-create-a-docker-connector-74.png new file mode 100644 index 00000000000..688d044d630 Binary files /dev/null and 
b/docs/platform/7_Connectors/static/using-ibm-registry-to-create-a-docker-connector-74.png differ diff --git a/docs/platform/7_Connectors/static/using-ibm-registry-to-create-a-docker-connector-75.png b/docs/platform/7_Connectors/static/using-ibm-registry-to-create-a-docker-connector-75.png new file mode 100644 index 00000000000..b608c75d92a Binary files /dev/null and b/docs/platform/7_Connectors/static/using-ibm-registry-to-create-a-docker-connector-75.png differ diff --git a/docs/platform/7_Connectors/static/using-ibm-registry-to-create-a-docker-connector-76.png b/docs/platform/7_Connectors/static/using-ibm-registry-to-create-a-docker-connector-76.png new file mode 100644 index 00000000000..d344b7e7328 Binary files /dev/null and b/docs/platform/7_Connectors/static/using-ibm-registry-to-create-a-docker-connector-76.png differ diff --git a/docs/platform/7_Connectors/using-ibm-registry-to-create-a-docker-connector.md b/docs/platform/7_Connectors/using-ibm-registry-to-create-a-docker-connector.md new file mode 100644 index 00000000000..6005ef43216 --- /dev/null +++ b/docs/platform/7_Connectors/using-ibm-registry-to-create-a-docker-connector.md @@ -0,0 +1,102 @@ +--- +title: Connect to IBM Cloud Container Registry +description: This topic explains how to set up the Docker Connector that uses IBM Registry. +# sidebar_position: 2 +helpdocs_topic_id: fjwm9xs5qv +helpdocs_category_id: o1zhrfo8n5 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can connect Harness to IBM Cloud Container Registry using a Harness Docker Registry Connector. The Connector uses your IBM Cloud Container Registry credentials to push and pull images. + +This topic explains how to use the Harness Docker Registry Connector to connect Harness to the IBM Cloud Container Registry. 
+ + +### Before you begin + +* [CI Enterprise Concepts](../../continuous-integration/ci-quickstarts/ci-concepts.md) +* [Harness Delegate Overview](../2_Delegates/delegates-overview.md) + +### Review: Managing IAM Policies in IBM Cloud + +If the IBM Cloud IAM role used by your Docker Registry Connector does not have the policies required by the IBM service you want to access, you can modify or switch the role. + +To set up and manage IAM policies, see [Defining access role policies](https://cloud.ibm.com/docs/Registry?topic=Registry-user#user). + +When you switch or modify the IAM role, it might take up to 5 minutes to take effect. + +### Supported Platforms and Technologies + +For a list of the platforms and technologies supported by Harness, see [Supported Platforms and Technologies](https://ngdocs.harness.io/article/1e536z41av-supported-platforms-and-technologies). + +### Step 1: Generate an API Key in IBM Cloud Console + +Follow the instructions outlined in [Creating an API Key](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui#create_user_key) from IBM. + +![](./static/using-ibm-registry-to-create-a-docker-connector-71.png) +Once the API key is successfully generated, click **Copy** or **Download the API key**. + +### Step 2: Create a Docker Registry Connector in Harness + +You can create the Docker Registry Connector at the Harness **Account**, **Organization**, or **Project** level. In this example, we'll use a **Project**. + +Open a Harness **Project**. + +In **Project Setup**, click **Connectors**. + +Click **New Connector**, and under **Artifact Repositories** click **Docker Registry**. + +![](./static/using-ibm-registry-to-create-a-docker-connector-72.png) +The Docker Registry Connector settings appear. + +![](./static/using-ibm-registry-to-create-a-docker-connector-73.png) +In **Name**, enter a name for this connector. 
+ +Harness automatically creates the corresponding Id ([entity identifier](../20_References/entity-identifier-reference.md)). + +Click **Continue**. + +### Step 3: Enter Credentials + +Here is where you'll use the API key you generated in IBM Cloud. + +![](./static/using-ibm-registry-to-create-a-docker-connector-74.png) +Select or enter the following options: + + + +| | | +| --- | --- | +| **Docker Registry URL** | Enter the IBM Cloud Container Registry API endpoint URL. For example, `https://us.icr.io`. See [IBM Cloud Container Registry](https://cloud.ibm.com/apidocs/container-registry#endpoint-url) from IBM. | +| **Provider Type** | Select **Other (Docker V2 compliant)**. | +| **Authentication** | Select **Username and Password**. | +| **Username** | Enter `iamapikey`. See [Authentication](https://cloud.ibm.com/docs/Registry?topic=Registry-registry_access&mhsrc=ibmsearch_a&mhq=iamapikey#registry_access_apikey_auth) from IBM. | +| **Password** | In **Password**, click **Create** or **Select a Secret**. In the new Secret, in **Secret Value**, enter the API key generated in [Step 1](using-ibm-registry-to-create-a-docker-connector.md#step-1-generate-an-api-key-in-ibm-cloud-console). | + +![](./static/using-ibm-registry-to-create-a-docker-connector-75.png) +Click **Save**, and **Continue**. + +### Step 4: Set Up Delegates + +Harness uses Docker Registry Connectors at Pipeline runtime to authenticate and perform operations with IBM Cloud Registry. Authentications and operations are performed by Harness Delegates. + +You can select **Any Available Harness Delegate** and Harness will select the Delegate. For a description of how Harness picks Delegates, see [Delegates Overview](../2_Delegates/delegates-overview.md). + +You can use Delegate Tags to select one or more Delegates. For details on Delegate Tags, see [Select Delegates with Tags](../2_Delegates/delegate-guide/select-delegates-with-selectors.md). 
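Putting the settings from these steps together, the saved Connector looks roughly like the YAML sketch below. The name, identifier, Secret reference, and Delegate Tag are illustrative placeholders, not values prescribed by this topic:

```yaml
connector:
  name: ibm-container-registry          # placeholder name
  identifier: ibm_container_registry    # placeholder Id
  type: DockerRegistry
  spec:
    dockerRegistryUrl: https://us.icr.io
    providerType: Other                 # Other (Docker V2 compliant)
    auth:
      type: UsernamePassword
      spec:
        username: iamapikey
        passwordRef: ibm_cloud_api_key  # placeholder Secret holding the API key
    delegateSelectors:
      - docker-delegate                 # placeholder Delegate Tag
```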
+ +If you need to install a Delegate, see [Delegate Installation Overview](https://ngdocs.harness.io/article/re8kk0ex4k-delegate-installation-overview). + +The Delegate(s) you use must have networking connectivity to the IBM Cloud Container Registry. + +Click **Save** and **Continue**. + +### Step 5: Verify Test Connection + +Harness tests the credentials you provided using the Delegates you selected. + +![](./static/using-ibm-registry-to-create-a-docker-connector-76.png) +If the credentials fail, you'll see an error. Click **Edit Credentials** to modify your credentials. + +Click **Finish**. + diff --git a/docs/platform/8_Pipelines/_category_.json b/docs/platform/8_Pipelines/_category_.json new file mode 100644 index 00000000000..6ce43b308f9 --- /dev/null +++ b/docs/platform/8_Pipelines/_category_.json @@ -0,0 +1 @@ +{"label": "Pipelines", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Pipelines"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "kncngmy17o"}} \ No newline at end of file diff --git a/docs/platform/8_Pipelines/add-a-custom-stage.md b/docs/platform/8_Pipelines/add-a-custom-stage.md new file mode 100644 index 00000000000..cf4c7a4663c --- /dev/null +++ b/docs/platform/8_Pipelines/add-a-custom-stage.md @@ -0,0 +1,101 @@ +--- +title: Add a Custom Stage +description: The Custom stage provides flexibility to support any use case that doesn't require the pre-defined settings of CI, CD, or Approvals. +# sidebar_position: 2 +helpdocs_topic_id: o60eizonnn +helpdocs_category_id: kncngmy17o +helpdocs_is_private: false +helpdocs_is_published: true +--- + + +:::note +Currently, this feature is behind the feature flag `NG_CUSTOM_STAGE`. Contact [Harness Support](mailto:support@harness.io) to enable the feature. This topic describes how to set up a Custom stage. 
+ +::: + +Harness has pre-defined stages for the most common release operations, such as Build (CI), Deploy (CD), and Approval stages; however, there are times when you need to add a stage to your Pipeline that performs other operations and doesn't require the pre-defined settings of CI, CD, or Approvals. + +For example, ad hoc provisioning or jobs that need to run before a deployment stage. This is when a Custom stage is useful. + +Unlike the standard Build, Deploy, or Approval stages, a Custom stage has no pre-defined functionality or requirements. The Custom stage provides flexibility to support any use case outside of the standard stages and doesn't require the pre-defined settings of CI, CD, or Approvals. + +The steps available in a Custom stage are also available in standard stages. + +### Before you begin + +* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Add a Stage](add-a-stage.md) + +### Limitations + +* The Custom stage is available in all modules that use Pipelines (CI, CD, Feature Flags). +* Steps available in the Custom stage are also available in Build, Deploy, or Approval stages, but CI and CD-specific steps, like a Rolling Deployment, are not available in the Custom stage. +* Licensing is applied to the steps in a Custom stage. For example, CD steps such as the HTTP step are available inside a Custom stage only if you have a CD license. +* There is no Rollback functionality in a Custom stage. + + Rollbacks can be achieved via conditional execution. For example, run a step only if a previous step failed or succeeded. +* You can select which Delegate to use for each step in a Custom stage using the step's **Delegate Selector** setting. If this setting is not used, then Harness will select a Delegate using its standard selection process. See [Delegates Overview](../2_Delegates/delegates-overview.md). +* A Custom stage can be used as a template like other stage types. 
Step templates can be used inside a Custom stage, and the Pipeline containing the Custom stage can also be used as a Template. + +### Visual Summary + +The following video provides a quick overview of the Custom stage. + +### Step 1: Add a Custom Stage + +In your Pipeline, click **Add Stage**, and then click **Custom Stage**. + +![](./static/add-a-custom-stage-58.png) +Enter a name for the stage. Harness automatically adds an Id ([Entity Identifier](../20_References/entity-identifier-reference.md)) for the stage. + +Click **Set Up Stage**. + +The new stage is created and the Execution section is displayed. + +Let's look at adding stage variables to the **Overview** section first. + +### Option: Add Stage Variables + +Once you've created a stage, its settings are in the **Overview** tab. + +In **Advanced**, you can add **Stage Variables**. + +Stage variables are custom variables you can add and reference in your stage and Pipeline. They're available across the Pipeline. You can override their values in later stages. + +You can even reference stage variables in the files fetched at runtime. + +You reference stage variables **within their stage** using the expression `<+stage.variables.[variable name]>`. + +You reference stage variables **outside their stage** using the expression `<+pipeline.stages.[stage Id].variables.[variable name]>`. + +See [Built-in and Custom Harness Variables Reference](../12_Variables-and-Expressions/harness-variables.md). + +### Step 2: Add Execution Steps + +In **Execution**, click **Add Step**, and add whatever steps you need. + +These steps are also available in CI, CD, and Approval stages. 
+ +For details on the different steps, see: + +* [General CD](https://ngdocs.harness.io/category/y6gyszr0kl) +* [Using Shell Scripts in CD Stages](https://docs.harness.io/article/k5lu0u6i1i-using-shell-scripts) +* [Create an HTTP Step Template](../13_Templates/harness-template-library.md) +* [Approvals](https://ngdocs.harness.io/category/bz4zh3b75p) +* [Synchronize Deployments using Barriers](https://docs.harness.io/article/dmlf8w2aeh-synchronize-deployments-using-barriers) +* [Add a Policy Engine Step to a Pipeline](../14_Policy-as-code/add-a-governance-policy-step-to-a-pipeline.md) +* [Terraform How-tos](https://docs.harness.io/article/w6i5f7cpc9-terraform-how-tos) + +CI and CD-specific steps, like a Rolling Deployment, are not available in the Custom stage. + +### Option: Configure Advanced Settings + +In **Advanced**, you can use the following options: + +* [Stage Conditional Execution Settings](w_pipeline-steps-reference/step-skip-condition-settings.md) +* [Step Failure Strategy Settings](w_pipeline-steps-reference/step-failure-strategy-settings.md) + +### See also + +* [Create a Stage Template](../13_Templates/add-a-stage-template.md) + diff --git a/docs/platform/8_Pipelines/add-a-stage.md b/docs/platform/8_Pipelines/add-a-stage.md new file mode 100644 index 00000000000..5995bfba91e --- /dev/null +++ b/docs/platform/8_Pipelines/add-a-stage.md @@ -0,0 +1,97 @@ +--- +title: Add a Stage +description: This functionality is limited temporarily to the platforms and settings you can see. More functionality for this feature is coming soon. A Stage is a subset of a Pipeline that contains the logic to p… +# sidebar_position: 2 +helpdocs_topic_id: 2chyf1acil +helpdocs_category_id: kncngmy17o +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This functionality is limited temporarily to the platforms and settings you can see. 
More functionality for this feature is coming soon. + +A Stage is a subset of a Pipeline that contains the logic to perform one major segment of the Pipeline process. Stages are based on the different milestones of your Pipeline, such as building, approving, and delivering. + +Adding a stage to your Pipeline is the same across all Harness modules (CD, CI, etc.). When you add a stage, you select the module you want to use. + +The module you select determines the stage settings. + +### Before you begin + +* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts) +* [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md) + +### Step 1: Create a Pipeline + +This topic assumes you have a Harness Project set up. If not, see [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md). + +You can create a Pipeline from any module in your Project, and then add stages for any module. + +This topic shows you how to create a Pipeline from the CI module. To do this, perform the following steps: + +In Harness, click **Builds** and then click **Pipelines**. + +Click **Pipeline**. The new Pipeline settings appear. + +Enter **Name**, **Description**, **Tags**, and **Timeout** for your Pipeline. + +![](./static/add-a-stage-55.png) +Click **Start**. Now you're ready to add a stage. + +### Step 2: Add a Stage + +Click **Add Stage**. The stage options appear. + +Select a stage type and follow its steps. + +The steps you see depend on the type of stage you selected. + +Don't see the module you want? You can enable modules in your Project Overview. See [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md). + +Enter a name for the stage. + +You can add a name when you create the stage, but you can edit the name in the **Overview** section of the stage anytime. + +Changing the stage name doesn't change the stage identifier (Id). 
The stage identifier is created when you first name the stage and it cannot be changed. See [Entity Identifier Reference](../20_References/entity-identifier-reference.md). + +For CD stages, you can select a deployment type. A Stage can deploy Services and other workloads. Select the type of deployment this Stage performs. + +### Option: Stage Variables + +Once you've created a stage, its settings are in the **Overview** tab. For example, here's the **Overview** tab for a Deploy stage: + +![](./static/add-a-stage-56.png) +In **Advanced**, you can add **Stage Variables**. + +Stage variables are custom variables you can add and reference in your stage and Pipeline. They're available across the Pipeline. You can override their values in later stages. + +You can even reference stage variables in the files fetched at runtime. + +For example, you could create a stage variable `name` and then reference it in the Kubernetes values.yaml file used by this stage: `name: <+stage.variables.name>`: + + +``` +name: <+stage.variables.name> +replicas: 2 + +image: <+artifact.image> +... +``` +When you run this Pipeline, the value for `name` is used for the values.yaml file. The value can be a Fixed Value, Expression, or Runtime Input. + +You reference stage variables **within their stage** using the expression `<+stage.variables.[variable name]>`. + +You reference stage variables **outside their stage** using the expression `<+pipeline.stages.[stage Id].variables.[variable name]>`. 
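As a sketch, the step below uses both expressions in a Shell Script step. The stage Id `build` and the variable `name` follow the example above; the step name and identifier are illustrative:

```yaml
- step:
    type: ShellScript
    name: Echo Stage Variables        # illustrative step
    identifier: echo_stage_variables
    spec:
      shell: Bash
      onDelegate: true
      source:
        type: Inline
        spec:
          script: |
            echo "Within the stage: <+stage.variables.name>"
            echo "From another stage: <+pipeline.stages.build.variables.name>"
```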
+ +### Option: Advanced Settings + +In **Advanced**, you can use the following options: + +* [Stage Conditional Execution Settings](w_pipeline-steps-reference/step-skip-condition-settings.md) +* [Step Failure Strategy Settings](w_pipeline-steps-reference/step-failure-strategy-settings.md) + +### Option: Running Stages in Parallel + +You can drag stages on top of each other to run them in parallel: + +![](./static/add-a-stage-57.png) +### See also + +* [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md) + diff --git a/docs/platform/8_Pipelines/best-practices-for-looping-strategies.md b/docs/platform/8_Pipelines/best-practices-for-looping-strategies.md new file mode 100644 index 00000000000..47fc0aa7521 --- /dev/null +++ b/docs/platform/8_Pipelines/best-practices-for-looping-strategies.md @@ -0,0 +1,62 @@ +--- +title: Best Practices for Looping Strategies +description: Review this topic before you implement a Matrix and Parallelism strategy in your pipeline. +# sidebar_position: 2 +helpdocs_topic_id: q7i0saqgw4 +helpdocs_category_id: kncngmy17o +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness recommends the following best practices for implementing Matrix and Parallelism strategies in your Pipelines. + +### Complex Looping Scenarios Require Careful Planning + +Harness supports complex looping strategies such as: + +* A Matrix strategy with multiple dimensions. +* Multiple looping strategies in the same Stage. For example, you can define a Stage Matrix with three dimensions and then a Step Matrix with four dimensions within the same Stage. + +Before you implement a complex looping scenario, you need to consider carefully the resource consumption of your Pipeline containers and the overall capacity of your build/deploy infrastructure. Too many Stages or Steps running concurrently can cause the Pipeline to fail, to time out, to consume too many resources, or to run successfully but incorrectly. 
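For example, a Stage matrix with just two dimensions already fans out into six Stage copies; the sketch below is illustrative, and the `os` and `arch` dimension names are assumptions:

```
# 3 x 2 = 6 Stage copies; nesting a Step matrix inside each Stage
# multiplies the concurrent executions again.
matrix:
  os: [ linux, macos, windows ]
  arch: [ amd64, arm64 ]
  maxConcurrency: 6
```

Adding a third dimension, or a Step matrix inside each Stage copy, multiplies the combinations again, which is why the resource math below matters.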
+ +A good general rule to follow is: **Your looping scenario is too complex if you cannot visualize how it will run and calculate the memory and CPU required for the Pipeline to run successfully.** + +### How Pipelines Reserve Resources + +When a Pipeline requests resources for a Step, it calculates the *maximum CPU and memory required at any point in the Stage*. Consider the following scenario: + +* Your Build Stage has three Steps: the first builds an artifact for a web app; the second runs the artifact in a browser to confirm that it runs; the third pushes it to a registry. +* Each Step consumes up to 500Mi of memory and 400m of CPU. Because the Steps run serially, not concurrently, the Pipeline reserves 500Mi memory and 400m CPU for the entire Stage.![](./static/best-practices-for-looping-strategies-06.png) +* Suppose you want to test the app on both Chrome and Firefox. You create a simple Matrix strategy for the Step: +``` +matrix: + browser: [ chrome, firefox ] + maxConcurrency: 2 +``` +* The Pipeline creates two copies of the Run Step and runs them concurrently. This doubles the resource consumption for the overall Stage. When the Pipeline runs, it reserves double the resources (1000Mi memory, 800m CPU) for the overall Stage.![](./static/best-practices-for-looping-strategies-07.png) +* So far, so good. The Pipeline executes with no problem. But suppose you add another dimension to your matrix and increase the `maxConcurrency` to run all the combinations at once? +``` +matrix: + os: [ macos, linux, android ] + browser: [ chrome, firefox, opera ] + maxConcurrency: 9 +``` +* In this case, the Stage requires 9 times the original resources to run. The Pipeline fails because the build infrastructure cannot reserve the resources to run all of these combinations concurrently. + +### How to Determine the Right `maxConcurrency` + +Always consider the value you want to specify for the `maxConcurrency`.
Your goal is to define a `maxConcurrency` that speeds up your Pipeline builds while staying within the capacity limits of your build infrastructure. + +Harness recommends that you determine the `maxConcurrency` for a specific Stage or Step using an iterative workflow: + +1. Start with a low `maxConcurrency` value of 2 or 3. +2. Run the Pipeline and monitor the resource consumption for the overall Pipeline. +3. Gradually increase the `maxConcurrency` based on each successive run until you reach a "happy medium" between your run times and resource consumption. + +### See also + +* [Optimizing CI Build Times](https://harness.helpdocs.io/article/g3m7pjq79y) +* [Run a Stage or Step Multiple Times using a Matrix](run-a-stage-or-step-multiple-times-using-a-matrix.md) +* [Looping Strategies Overview: Matrix, Repeat, and Parallelism](looping-strategies-matrix-repeat-and-parallelism.md) +* [Speed Up CI Test Pipelines Using Parallelism](https://harness.helpdocs.io/article/kce8mgionj) + diff --git a/docs/platform/8_Pipelines/define-a-failure-strategy-on-stages-and-steps.md b/docs/platform/8_Pipelines/define-a-failure-strategy-on-stages-and-steps.md new file mode 100644 index 00000000000..a2e5ef256da --- /dev/null +++ b/docs/platform/8_Pipelines/define-a-failure-strategy-on-stages-and-steps.md @@ -0,0 +1,99 @@ +--- +title: Define a Failure Strategy on Stages and Steps +description: Currently, only the All Errors Failure Type is supported. A failure strategy defines how your stages and steps handle different failure conditions. The failure strategy contains error conditions that… +# sidebar_position: 2 +helpdocs_topic_id: 0zvnn5s1ph +helpdocs_category_id: kncngmy17o +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, only the **All Errors** Failure Type is supported. + +A failure strategy defines how your stages and steps handle different failure conditions.
+ +The failure strategy contains error conditions that must occur for the strategy to apply, and actions to take when the conditions occur. + +Failure strategies are a critical pipeline design component that determines what fails a step or stage and what to do when the failure occurs. + +You can apply a failure strategy to the following: + +* Step +* Step Group +* Stage + +For details on strategy options and how strategies work, see [Step and Stage Failure Strategy Settings](w_pipeline-steps-reference/step-failure-strategy-settings.md). + +### Before you begin + +* [Add a Stage](add-a-stage.md) + +### Visual Summary + +Here's a quick video of how to set up failure strategies: + +Here is what a Manual Intervention action looks like when a failure occurs: + +![](./static/define-a-failure-strategy-on-stages-and-steps-11.png) +You can select an option manually or, if the Manual Intervention exceeds its Timeout setting, the Post Timeout Action happens automatically. + +### Review: Failure Strategy takes Precedence over Conditional Execution + +Harness Pipeline stages and steps both include **Conditional Execution** and **Failure Strategy** settings: + +![](./static/define-a-failure-strategy-on-stages-and-steps-12.png) +Using these settings together in multiple stages requires some consideration. + +Let's say you have a Pipeline with two stages: **stage 1** followed by **stage 2**. + +Stage 2's **Conditional Execution** is set to **Execute this step only if prior stage or step failed**. Stage 1's **Failure Strategy** is set to **Rollback Stage on All Errors**. + +If stage 1 has any error, it is rolled back, and so it is not considered a failure. Hence, stage 2's **Conditional Execution** is not triggered. + +To get stage 2 to execute, set stage 1's **Failure Strategy** to **Ignore Failure**. Rollback does not occur, and stage 2's **Conditional Execution** is triggered.
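In YAML, that stage 1 strategy can be sketched roughly as follows. This is a hedged sketch: `Ignore` is shown here in place of the default rollback action, and the exact field layout follows the examples in the failure strategy reference:

```
failureStrategies:
  - onFailure:
      errors:
        - AllErrors
      # Ignore the failure instead of rolling the stage back,
      # so a later stage's "prior stage failed" condition can fire.
      action:
        type: Ignore
```

With `Ignore`, the stage is marked as failed without rolling back, which is what allows stage 2's conditional execution to evaluate as expected.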
+ +In general, if you want to run particular steps on a stage failure, you should add them to the stage's **Rollback** section. + +### Step: Add a Stage Failure Strategy + +The stage failure strategy applies to all steps in the stage that do not have their own failure strategy configured. + +In a stage, click **Advanced**. + +In **Failure Strategy**, you can see the default stage strategy: + +**On all errors other than those specified in failure strategies defined here, perform action** + +This default cannot be removed, but it can be edited. You can choose a new Action, Timeout, and Post timeout action. + +To add an additional stage failure strategy, click **Add**. + +Select the following: + +* **On failure of type:** select one or more of the error types. See [Step and Stage Failure Strategy Settings](w_pipeline-steps-reference/step-failure-strategy-settings.md). Currently, only **All Errors** is supported. +* **Action:** select one of the available actions. See [Step and Stage Failure Strategy Settings](w_pipeline-steps-reference/step-failure-strategy-settings.md). +* **Timeout** and **Post timeout action:** these are available if you selected **Manual Intervention** in Action. Enter the timeout for the failure strategy and the subsequent action to perform. +* **Retry Count** and **Retry Intervals:** these are available if you selected **Retry** in Action. Enter the number of times to retry the step, and the retry intervals. + +### Step: Add a Step Failure Strategy + +By default, steps do not have a failure strategy. Steps follow the stage failure strategy. + +When you add a step failure strategy, you are overriding the stage failure strategy. + +In a step, click **Advanced**. + +Click **Failure Strategy** and click **Add**. + +Select the following: + +* **On failure of type:** select one or more of the error types. See [Step and Stage Failure Strategy Settings](w_pipeline-steps-reference/step-failure-strategy-settings.md). Currently, only **All Errors** is supported. +* **Action:** select one of the available actions. See [Step and Stage Failure Strategy Settings](w_pipeline-steps-reference/step-failure-strategy-settings.md). +* **Timeout** and **Post timeout action:** these are available if you selected **Manual Intervention** in Action. Enter the timeout for the failure strategy and the subsequent action to perform. +* **Retry Count** and **Retry Intervals:** these are available if you selected **Retry** in Action. Enter the number of times to retry the step, and the retry intervals. + +### See also + +* [Step and Stage Failure Strategy Settings](w_pipeline-steps-reference/step-failure-strategy-settings.md) +* [Stage and Step Execution Condition Settings](w_pipeline-steps-reference/step-skip-condition-settings.md) + diff --git a/docs/platform/8_Pipelines/harness-yaml-quickstart.md b/docs/platform/8_Pipelines/harness-yaml-quickstart.md new file mode 100644 index 00000000000..639968d84d2 --- /dev/null +++ b/docs/platform/8_Pipelines/harness-yaml-quickstart.md @@ -0,0 +1,420 @@ +--- +title: Harness YAML Quickstart +description: This quickstart shows you how to create a Harness Pipeline using YAML. It's not an exhaustive reference for all of the YAML entries, but a quick procedure to get you started with Harness Pipeline YA… +# sidebar_position: 2 +helpdocs_topic_id: 1eishcolt3 +helpdocs_category_id: w6r9f17pk3 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This quickstart shows you how to create a Harness Pipeline using YAML. It's not an exhaustive reference for all of the YAML entries, but a quick procedure to get you started with Harness Pipeline YAML. + +Harness includes visual and YAML editors for creating and editing Pipelines, Triggers, Connectors, and other entities. Everything you can do in the visual editor you can also do in YAML. + +A quick run-through on how to use the YAML builder will get you up and coding Harness Pipelines in minutes.
Let's get started. + + +:::note +For details of the YAML schema, see [YAML Reference: Pipelines](w_pipeline-steps-reference/yaml-reference-cd-pipeline.md). + +::: + +### Objectives + +You'll learn how to: + +1. View a Pipeline in the YAML builder. +2. Review the Pipeline YAML schema structure. +3. Use keyboard shortcuts for adding entries and selecting commands. +4. Find and replace YAML. +5. Verify and resolve YAML errors with the YAML builder's assistance. + +### Before you begin + +* Review [Harness Key Concepts](https://docs.harness.io/article/4o7oqwih6h-harness-key-concepts) to establish a general understanding of Harness. +* The best way to get started with YAML is to do a CI or CD quickstart and then view the YAML in Pipeline Studio. See [CD Quickstarts](https://docs.harness.io/category/c9j6jejsws) and [CI Quickstarts](https://docs.harness.io/category/onboard-with-ci). + +### Visual Summary + +Here's a very quick video showing you how to build a Pipeline using YAML: + +### Step 1: Start a New Pipeline + +In your Harness Project, click **New Pipeline**. + +Enter the name **YAML Example** for the Pipeline and click **Start**. + +The Pipeline is created. + +Click **YAML** to view the YAML editor. + +![](./static/harness-yaml-quickstart-21.png) + +You can see the Pipeline YAML. Here's an example: + + +``` +pipeline: + name: YAML Example + identifier: YAML_Example + projectIdentifier: CD_Examples + orgIdentifier: default + tags: {} +``` +Place your cursor after `tags: {}` and hit Enter. + +Press **Ctrl + Space**. The major Pipeline sections are displayed. + +![](./static/harness-yaml-quickstart-22.png) + +Completing the Pipeline in YAML is simply the process of filling out these sections. + +Let's look at the YAML structure of a Pipeline. + +### Review: Basic Pipeline Structure + +The following outline shows the basic hierarchy of the Pipeline YAML.
+ +This outline is shown to help you gain a general idea of the structure of a Pipeline and help you know which entry to make next as you develop your Pipeline in YAML. + +Here are the Pipeline settings and major sections: + + +``` +pipeline: + name: YAML Example + identifier: YAML_Example + projectIdentifier: CD_Examples + orgIdentifier: default + tags: {} + description: + stages: + - + notificationRules: + - + flowControl: + + properties: + + timeout: + variables: + - +``` +The following steps walk through adding the YAML for the `stages` section of the Pipeline. + +For details of the YAML schema, see [YAML Reference: Pipelines](w_pipeline-steps-reference/yaml-reference-cd-pipeline.md). + +### Step 2: Add a Stage + +The basic Stage YAML looks like this: + + +``` + stages: + - stage: + identifier: + name: + type: + description: + tags: + spec: + serviceConfig: + + infrastructure: + + execution: + steps: + - + variables: + - + when: + pipelineStatus: + failureStrategies: + - onFailure: + errors: + - null +``` +In `identifier`, enter a unique Id for the stage, such as **mystage**. In the Visual editor the Id is generated automatically. In YAML, you have to enter an Id. + +In `name`, enter a name for the stage, such as **mystage**. + +In `type`, you select the type of Stage you want to add. This is the same as clicking Add Stage in the Visual editor. + +![](./static/harness-yaml-quickstart-23.png) + +For details on each type, see: + +* **Approval:** [Using Manual Harness Approval Stages](../9_Approvals/adding-harness-approval-stages.md), [Adding Jira Approval Stages and Steps](../9_Approvals/adding-jira-approval-stages.md) +* **CI:** [CI Pipeline Quickstart](../../continuous-integration/ci-quickstarts/ci-pipeline-quickstart.md) +* **Deployment:** [CD Quickstarts](https://docs.harness.io/category/c9j6jejsws) + +For this quickstart, we're going to use the **Deployment** type. + +In `type`, select **Deployment**. 
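Putting these first entries together, the top of the stage might look like the following sketch, using the example name **mystage** from the steps above:

```
  stages:
    - stage:
        identifier: mystage
        name: mystage
        type: Deployment
        spec:
          # serviceConfig, infrastructure, and execution are added next
```

The `spec` block is where the rest of the stage configuration goes, as described in the sections that follow.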
+ +Next, we'll add a `spec` that defines the Service, Environment, and Execution for the stage. + +#### Stage Spec + +The Stage spec contains the three major sections of the Stage: + + +``` + spec: + serviceConfig: + ... + infrastructure: + ... + execution: + steps: + - ... +``` +These correspond to the Stage sections in the Visual editor: + +![](./static/harness-yaml-quickstart-24.png) + +#### Stage Service + +In `serviceConfig`, press `Enter`, and then `Ctrl + Space`. + +You can see the `serviceConfig` options: + +![](./static/harness-yaml-quickstart-25.png) + +For this quickstart, we'll just use `service` and `serviceDefinition`. We're just looking at the structure so we'll use [Runtime Inputs](../20_References/runtime-inputs.md) wherever we can: + + +``` + spec: + serviceConfig: + service: + identifier: myservice + name: myservice + serviceDefinition: + type: Kubernetes + spec: + manifests: + - manifest: + identifier: <+input> + type: K8sManifest + spec: + store: + type: Github + spec: + connectorRef: <+input> + gitFetchType: Branch + branch: <+input> + folderPath: <+input> + repoName: <+input> +``` +This stage simply adds a Service named `myservice` and a Service Definition using Kubernetes manifests. + +For details on adding manifests to a Service Definition, see [Add Kubernetes Manifests](https://docs.harness.io/article/ssbq0xh0hx-define-kubernetes-manifests). + +The `connectorRef` setting is for the Harness Connector that connects to the Git repo where the manifests are located. In the Visual editor you can create/select Connectors inline, but in the YAML editor, you must use the name of an existing Connector. In this example, we simply use a Runtime Input (`connectorRef: <+input>`) and we can add the Connector later. + +#### Stage Infrastructure + +In `infrastructure`, press `Enter`, and then `Ctrl + Space`. 
+ +You can see the `infrastructure` options: + +![](./static/harness-yaml-quickstart-26.png) + +For this quickstart, we'll just use `environment` and `infrastructureDefinition`. We're just looking at the structure so we'll use [Runtime Inputs](../20_References/runtime-inputs.md) wherever we can: + + +``` + infrastructure: + environment: + identifier: myinfra + name: myinfra + type: PreProduction + infrastructureDefinition: + type: KubernetesDirect + spec: + connectorRef: <+input> + namespace: <+input> + releaseName: <+input> +``` +In `infrastructureDefinition`, you can see that we are using a [Kubernetes Cluster Connector](../7_Connectors/add-a-kubernetes-cluster-connector.md) for a platform-agnostic direct connection to the target cluster. + +#### Stage Execution + +In `execution`, in `steps`, press **Enter**, type `-`, and then press **Space**. + +You can see the `steps` options: + +![](./static/harness-yaml-quickstart-27.png) + +Click `step`. The step settings appear. + +In `type`, press **Ctrl + Space** to see the steps you can add. + +![](./static/harness-yaml-quickstart-28.png) + +For this quickstart, we'll just use `ShellScript`. Here's an example: + + +``` + - step: + identifier: ssh + name: ssh + type: ShellScript + description: <+input> + spec: + shell: Bash + source: + type: Inline + spec: + script: echo "hello world" + timeout: <+input> + onDelegate: false + failureStrategies: + - onFailure: + errors: + - AllErrors + action: + type: Abort + when: + stageStatus: Success +``` +The step Failure Strategy is set using `failureStrategies` and the Conditional Execution is set using `when`. + +#### Stage Conditional Execution + +You can add the Stage Conditional Execution settings before or after `stage`. To add them after, make sure you indent to the same depth as the stage `identifier`, and then press **Ctrl + Space**. + +You can see the remaining options: + +![](./static/harness-yaml-quickstart-29.png) + +Click `when`. The `pipelineStatus` setting appears.
+ +Press **Ctrl + Space** and select `Success`. + +#### Stage Failure Strategy + +Create a new line under `pipelineStatus: Success`, indent to the same level as `when`, and the remaining options appear. + +![](./static/harness-yaml-quickstart-30.png) + +Click `failureStrategies`. The Failure Strategy settings appear. + +Here's an example: + + +``` + failureStrategies: + - onFailure: + errors: + - AllErrors + action: + type: Abort +``` +#### Stage Variables + +Create a new line, indent to the same level as `failureStrategies`, and the remaining options appear. + +![](./static/harness-yaml-quickstart-31.png) + +Click `variables`. On the new line, press **Ctrl + Space**. The `variables` options appear. + +Here's an example: + + +``` + variables: + - name: myvar + type: String + value: myval +``` +The Pipeline stage is now complete. + +Click **Save**. + +For details of the YAML schema, see [YAML Reference: Pipelines](w_pipeline-steps-reference/yaml-reference-cd-pipeline.md). + +### Review: Autocomplete + +The YAML editor has an autocomplete feature that makes it very easy to see what entries are available. + +The keyboard command for autocomplete is `Ctrl + Space`. + +![](./static/harness-yaml-quickstart-32.png) + +If an entry already has a value, the autocomplete will not show you other options. You need to delete the value and then press `Ctrl + Space`. + +Let's look at an example. + +Navigate to `rollbackSteps: []` at the bottom of the stage. + +Delete the `[]`, press `Enter`, type `-`, and press `Space`. + +You can see the available options: + +![](./static/harness-yaml-quickstart-33.png) + +Click `step`. The default step settings appear: + + +``` + rollbackSteps: + - step: + identifier: + name: + type: +``` +In `type`, press `Ctrl + Space` to see the options. You can see all of the available steps: + +![](./static/harness-yaml-quickstart-34.png) + +This shows you how easy it is to view settings with autocomplete.
+ +### Review: Find and Replace YAML + +Press **Cmd/Ctrl + F** to see the Find and Replace settings. + +![](./static/harness-yaml-quickstart-35.png) + +You can quickly find and replace anything in the YAML. + +### Review: Command Palette + +The command palette keyboard command is `F1`. + +![](./static/harness-yaml-quickstart-36.png) + +The command palette displays all of the commands and the keyboard shortcuts for most commands. + +### Review: Validating YAML + +As you edit your YAML, you will see **Invalid** to highlight that your YAML is incomplete. + +Hover over **Invalid** to see where the errors are: + +![](./static/harness-yaml-quickstart-37.png) + +You can also hover over any incomplete entry to see what is expected: + +![](./static/harness-yaml-quickstart-38.png) + +Click **Peek Problem** to see suggestions and valid values: + +![](./static/harness-yaml-quickstart-39.png) + +### Next steps + +In this tutorial, you learned how to: + +1. View a Pipeline in the YAML builder. +2. Review the Pipeline YAML schema structure. +3. Use keyboard shortcuts for adding entries and selecting commands. +4. Find and replace YAML. +5. Verify and resolve YAML errors with the YAML builder's assistance. + +### See also + +* [YAML Reference: Pipelines](w_pipeline-steps-reference/yaml-reference-cd-pipeline.md) + diff --git a/docs/platform/8_Pipelines/input-sets.md b/docs/platform/8_Pipelines/input-sets.md new file mode 100644 index 00000000000..3e20e392df0 --- /dev/null +++ b/docs/platform/8_Pipelines/input-sets.md @@ -0,0 +1,79 @@ +--- +title: Input Sets and Overlays +description: Input Sets are collections of runtime variables and values. Overlays are groups of Input Sets. +# sidebar_position: 2 +helpdocs_topic_id: 3fqwa8et3d +helpdocs_category_id: sy6sod35zi +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness Input Sets are collections of runtime inputs for a Pipeline provided before execution.
+ +All Pipeline settings can be set as runtime inputs in Pipeline Studio **Visual** and **YAML** editors: + +| ![](./static/InputsetsOverlays1.png) | ![](./static/InputsetsOverlays2.png) | +| ------------------------------------ | ------------------------------------ | + + +An Input Set includes all the runtime inputs that are not permanent in the Pipeline. Runtime inputs contain the values that you would be prompted to provide when you executed the Pipeline. + +Overlays are groups of Input Sets. Overlays enable you to provide several Input Sets when executing a Pipeline. + +With Input Sets and Overlays, you can make a single Pipeline template that can be used for multiple scenarios. Each scenario can be defined in an Input Set or Overlay and simply selected at runtime. + +Looking for the How-to? See [Run Pipelines using Input Sets and Overlays](run-pipelines-using-input-sets-and-overlays.md). + +### Input Sets Overview + +Nearly every setting in a Pipeline can be configured as a runtime input. You can then create an Input Set from those inputs. + +![](./static/input-sets-05.png) +Here are some Input Set examples: + +* Values of fields and variables +* Artifacts +* Connectors +* Environments +* Infrastructures +* Services +* Secrets +* Stage variables +* Step settings + +Input Sets group the values of these entities and make it easy to provide the correct set of values for a single Pipeline execution, and to reuse the same values for the executions of multiple Pipelines. + +### Overlays Overview + +You can add several Input Sets as an Overlay. Overlays are used when: + +* The Pipeline is used for multiple Services. +* The Services have some configurations in common but some have differences. For example: + + Same configuration but using different runtime variables. + + Same artifact stream.
+ +In this use case, you can then create different Input Sets: + +* 1 Input Set for the common configuration: this set is used for every Pipeline execution regardless of the Service selected in the Pipeline. +* 1 Input Set for each Service with a specific configuration. +* 1 Input Set for a unique execution. For example, if it contains a specific build number. + +For a specific execution, you provide multiple Input Sets. All together, these Input Sets provide a complete list of values needed for Pipeline execution. + +#### Input Set Order in Overlays + +You can order the Input Sets you add to an Overlay to give priority to certain Input Sets. + +Each Input Set in an Overlay can overwrite the settings of previous Input Sets in the order. + +### Using Input Sets for Pipeline Execution + +Before running a Pipeline, you can select one or more Input Sets and apply them to the Pipeline. As a result, Harness does the following: + +* Applies the Input Set(s) to the Pipeline. If you are using an Overlay, the Input Sets are applied in the order defined in the Overlay to ensure the correct values are used. +* Indicates whether the Pipeline can start running, meaning all required values are provided. + + If the Pipeline cannot start running, Harness indicates which values are missing. +* Harness shows the following: + + The values that were resolved. + + The values that were not resolved. In this case, Harness provides a clear indication that the Pipeline cannot run without values for all variables.
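To make this concrete, an Input Set is itself stored as YAML. The sketch below is hypothetical: the pipeline Id, variable name, and values are assumptions for illustration, not taken from this topic:

```
# Hypothetical Input Set covering the "common configuration" case above.
inputSet:
  name: common-config
  identifier: common_config
  orgIdentifier: default
  projectIdentifier: myproject
  pipeline:
    identifier: My_Pipeline
    variables:
      # Supplies a value for a <+input> runtime input in the Pipeline.
      - name: imageTag
        type: String
        value: latest
```

Per-Service Input Sets would follow the same shape but supply only the Service-specific values, and an Overlay would stack them in order.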
+ + diff --git a/docs/platform/8_Pipelines/looping-strategies-matrix-repeat-and-parallelism.md b/docs/platform/8_Pipelines/looping-strategies-matrix-repeat-and-parallelism.md new file mode 100644 index 00000000000..18109140513 --- /dev/null +++ b/docs/platform/8_Pipelines/looping-strategies-matrix-repeat-and-parallelism.md @@ -0,0 +1,100 @@ +--- +title: Looping Strategies Overview -- Matrix, Repeat, and Parallelism +description: Looping strategies enable you to run a Stage or Step multiple times with different inputs. Looping speeds up your pipelines and makes them easier to read and maintain. +# sidebar_position: 2 +helpdocs_topic_id: eh4azj73m4 +helpdocs_category_id: kncngmy17o +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Currently, this feature is behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. + +Looping strategies enable you to run a Stage or Step multiple times with different inputs. This eliminates the need to copy the same Stage or Step for each variation you need. It also makes the Pipeline more readable, clean, and easy to maintain. Looping strategies enable use cases such as: + +* You want to test a UI feature in multiple browsers and platforms. You can define a matrix that specifies the browsers and platforms to test. +* You want to build artifacts for multiple JDK versions in the same Build Stage. +* You have a Build Pipeline with 20 unit tests. To speed up execution, you want to run the tests in parallel across 4 jobs that run 5 tests each. + +### Looping Strategy Types + +Harness supports the following strategies. + +#### Matrix + +Matrix strategies are highly flexible and applicable for both CD and CI Pipelines. + +First you define a matrix of configurations that you want the Stage or Step to run. Each axis has a user-defined tag — `env`, `service`, `platform`, `browser`, `jdk`, etc. — and a list of values.
You can use variables such as `<+matrix.jdk>` in a Build and Push Step or `<+matrix.env>` and `<+matrix.service>` in a Deploy Stage. + +When a Pipeline runs, it creates multiple copies of the Stage or Step and runs them in parallel. You can use the `exclude` keyword to filter out some combinations. You can also use the `maxConcurrency` keyword to limit the number of parallel runs. + + +``` +matrix: + service: [svc1, svc2, svc3] + env: [env1, env2] + exclude: # don't run [svc1, env1] or [svc3, env2] + - service: svc1 + env: env1 + - service: svc3 + env: env2 + maxConcurrency: 2 # run up to 2 jobs in parallel based on your resources +``` +#### Parallelism + +Parallelism strategies are useful for CI Build Stages that include a lot of tests. Suppose your Stage includes over 100 tests. You can specify the following to split your tests into 10 groups and test 5 groups at a time. + + +``` +parallelism: 10 +maxConcurrency: 5 +# example run: +# testgroup0 -> testgroup5 +# testgroup1 -> testgroup6 +# testgroup2 -> testgroup7 +# testgroup3 -> testgroup8 +# testgroup4 -> testgroup9 +``` +#### Repeat + +Repeat strategies are an alternative way to define Parallelism or one-dimensional Matrix strategies. + +For example, you can define a Parallelism strategy as follows: + + +``` +repeat: + times: 6 + maxConcurrency: 3 + +# this is functionally equivalent to +# parallelism: 6 +# maxConcurrency: 3 +``` +You can iterate through a list of values with the keyword `items`. You can then use the variable `<+repeat.item>` to access each value in the list.
+ + +``` +repeat: + items: [ "18", "17", "16", "15", "14", "13", "12", "11", "10", "9" ] + maxConcurrency: 5 +``` +##### Running steps on multiple target hosts + +To run steps on multiple target hosts, such as in a CD stage that performs a Deployment Template or SSH/WinRM deployment, you must use the `<+stage.output.hosts>` expression to reference all of the hosts/pods/instances: + + +``` +repeat: + items: <+stage.output.hosts> +``` +For more information, go to [Run a step on multiple target instances](https://docs.harness.io/article/c5mcm36cp8-run-a-script-on-multiple-target-instances). + +### See also + +* [Best Practices for Looping Strategies](best-practices-for-looping-strategies.md) +* [Run a Stage or Step Multiple Times using a Matrix](run-a-stage-or-step-multiple-times-using-a-matrix.md) +* [Speed Up CI Test Pipelines Using Parallelism](https://harness.helpdocs.io/article/kce8mgionj) +* [Optimizing CI Build Times](https://harness.helpdocs.io/article/g3m7pjq79y) + diff --git a/docs/platform/8_Pipelines/resume-pipeline-deployments.md b/docs/platform/8_Pipelines/resume-pipeline-deployments.md new file mode 100644 index 00000000000..7fb12d29c86 --- /dev/null +++ b/docs/platform/8_Pipelines/resume-pipeline-deployments.md @@ -0,0 +1,69 @@ +--- +title: Retry Failed Executions from any Stage +description: Describes how to resume Pipeline deployments that fail during execution. +# sidebar_position: 2 +helpdocs_topic_id: z5n5llv35m +helpdocs_category_id: kncngmy17o +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Pipeline execution might fail for many reasons, such as infrastructure changes or changes to resource access. In such cases, rerunning an entire Pipeline can be costly and time-consuming. + +Harness provides an option to resume Pipeline executions from any executed Stage or from the failed Stage. These options enable you to quickly rerun stages after you identify the cause of the failure.  
+
+Retrying a Pipeline or Stage is different from rerunning a Pipeline or Stage. When you rerun, you can select new values for Runtime Inputs. When you retry a Pipeline or Stage, you run the Pipeline or Stage exactly as it was run before. See [Run Specific Stages in Pipeline](run-specific-stage-in-pipeline.md).
+
+### Before you begin
+
+* [Learn Harness' Key Concepts](https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts)
+* [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md)
+* [Add a Stage](../8_Pipelines/add-a-stage.md)
+* Make sure you have **Execute** permissions for Pipeline to run a specific Stage of the Pipeline. For example, the [Pipeline Executor](../4_Role-Based-Access-Control/ref-access-management/permissions-reference.md) default role in the Project where your Pipeline is located.
+
+### Limitations
+
+* You can retry Pipelines when they are in **Failed**, **Aborted**, **Expired**, or **Rejected** status.
+* You cannot retry a Stage that ran successfully. To rerun a successful Stage in a Pipeline, click **Rerun** in the execution; this starts a new run of the Stage. For more information, see [Rerun Stage](run-specific-stage-in-pipeline.md#rerun-stage).
+* You cannot change mandatory settings, parameters, or conditions when you retry.
+
+### Review: Serial and Parallel Stages
+
+Stages can be added to Pipelines serially and in parallel. Here is an example that shows both:
+
+![](./static/resume-pipeline-deployments-00.png)
+How you run and retry Stages is different depending on whether you are retrying a serial or a parallel Stage.
+
+### Option: Retry Serial Stages
+
+This topic assumes you have a Harness Project set up. 
If not, see [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md).
+
+You can [create a Pipeline](add-a-stage.md#step-1-create-a-pipeline) from the CI or CD module in your Project, and then [add Stages](add-a-stage.md#add-a-stage) for any module.
+
+Click the failed deployment that you want to retry and click **Retry Failed Pipeline**.
+
+![](./static/resume-pipeline-deployments-01.png)
+The Retry Pipeline settings appear.
+
+![](./static/resume-pipeline-deployments-02.png)
+Choose the failed Stage or any previous Stage to retry the Pipeline. The [Runtime Inputs](../20_References/runtime-inputs.md) for the selected and later stages are automatically filled from the previous execution. You can modify these Runtime Inputs while retrying the Pipeline.
+
+### Option: Retry Parallel Stages
+
+When there are parallel Stages in your Pipeline and one of the parallel Stages fails, you can execute either the failed Stage only or all the parallel Stages.
+
+Let's take an example of a Pipeline that has parallel Stages.
+
+![](./static/resume-pipeline-deployments-03.png)
+If stage1 fails, you can either execute all the parallel Stages or only the failed Stage.
+
+![](./static/resume-pipeline-deployments-04.png)
+All the previous values are populated for the stages. You can keep them or modify them as needed.
+
+You cannot retry any execution that is more than 30 days old. 
+ +### See also + +* [Run Pipelines using Input Sets and Overlays](run-pipelines-using-input-sets-and-overlays.md) +* [Run Specific Stages in Pipeline](run-specific-stage-in-pipeline.md) + diff --git a/docs/platform/8_Pipelines/run-a-stage-or-step-multiple-times-using-a-matrix.md b/docs/platform/8_Pipelines/run-a-stage-or-step-multiple-times-using-a-matrix.md new file mode 100644 index 00000000000..8d8e0a88ee4 --- /dev/null +++ b/docs/platform/8_Pipelines/run-a-stage-or-step-multiple-times-using-a-matrix.md @@ -0,0 +1,222 @@ +--- +title: Run a Stage or Step Multiple Times using a Matrix +description: A matrix enables you to run the same Stage or Step multiple times with different parameters. Matrix strategies also make your Pipelines more readable, clean, and easy to maintain. +# sidebar_position: 2 +helpdocs_topic_id: kay7z1bi01 +helpdocs_category_id: kncngmy17o +helpdocs_is_private: false +helpdocs_is_published: true +--- + +A matrix enables you to run the same Stage or Step multiple times with different parameters. Matrix strategies eliminate the need to copy the same stage or step with different inputs for each variation. Matrix strategies also make your Pipelines more readable, clean, and easy to maintain. You can easily define matrix strategies to support workflows such as: + +* A Run Step that load-tests a UI feature in 4 different browsers and 3 different platforms. +* A Build Stage that builds artifacts for 10 different JDK versions. +* A Deploy Stage that deploys 3 different services to 4 different environments. + +### Before you begin + +You can apply matrix strategies to both CI and CD workflows. 
This topic assumes that you are familiar with the following:
+
+* [CD Pipeline Basics](https://docs.harness.io/article/cqgeblt4uh-cd-pipeline-basics) and [CI Pipeline Basics](../../continuous-integration/ci-quickstarts/ci-pipeline-basics.md)
+* [Looping Strategies Overview](looping-strategies-matrix-repeat-and-parallelism.md)
+* [Best Practices for Looping Strategies](best-practices-for-looping-strategies.md)
+
+### Important Notes
+
+* There is no limit on the number of dimensions you can include in a matrix or the number of looping strategies you define in a Pipeline.
+* You should avoid complex looping scenarios unless you clearly understand the resources that your scenario will require. See [Best Practices for Looping Strategies](best-practices-for-looping-strategies.md).
+
+### Add a Matrix Strategy to a Stage or Step
+
+1. In the Pipeline Studio, go to the **Advanced** tab of the Stage or Step where you want to apply the Looping strategy.
+2. Under Looping Strategies, select **Matrix**. You can also use a Loop strategy to iterate through a simple list. See [Looping Strategies Overview: Matrix, Repeat, and Parallelism](looping-strategies-matrix-repeat-and-parallelism.md).
+3. Enter the YAML definition for your strategy as illustrated in the following examples.
+
+### CI Example: Run an App in `[browser]` on `[os]`
+
+Suppose you have a Pipeline that builds an app in Go. You want to test the app on three different platforms and three different browsers. In the Stage where you test the app, you can define a matrix like this:
+
+
+```
+matrix:
+  browser: [ chrome, safari, firefox ]
+  os: [ macos, windows, linux ]
+maxConcurrency: 3
+```
+
+In this example, `os` and `browser` are user-defined tags. You can specify any tag in a matrix: `jdk`, `platform`, `node-version`, and so on.
+
+You can use the `maxConcurrency` keyword to run multiple jobs concurrently. In this case, the effective matrix has 9 combinations. 
With `maxConcurrency` set to 3, the Pipeline runs 3 Build Stages concurrently and load-balances the combinations between them.
+
+### Excluding Combinations from a Matrix
+
+You can use the `exclude` keyword to exclude certain combinations from being run. Suppose you don’t want to run the app in Safari on Windows. In this case, you can exclude this combination from the run matrix:
+
+
+```
+matrix:
+  browser: [ chrome, safari, firefox ]
+  os: [ macos, windows, linux ]
+  exclude:
+    - browser: safari
+      os: windows
+maxConcurrency: 3 # test the app across 3 Stages running concurrently
+```
+You can also exclude all combinations based on just one value. If you want to exclude all combinations with macOS, for example, you can do the following:
+
+
+```
+matrix:
+  browser: [ chrome, safari, firefox ]
+  os: [ macos, windows, linux ]
+  exclude:
+    - os: macos
+maxConcurrency: 4 # test the app across 4 Stages running concurrently
+```
+### CD Matrix Example: Deploy `[service]` to `[environment]`
+
+You can easily set up a Deploy Stage to deploy multiple services to multiple environments by defining a matrix like this:
+
+
+```
+matrix:
+  service: [ svc1, svc2, svc3 ]
+  environment: [ env1, env2 ]
+  exclude:
+    - service: svc1
+      environment: env1
+maxConcurrency: 2
+```
+
+### Simple List Example: Build App `[items]`
+
+You can also use the `repeat` and `items` keywords to iterate through a simple list. Suppose you want to build a Java app for multiple JDKs. Under Looping Strategy, select **For Loop** and enter the following:
+
+
+```
+repeat:
+  items: [ "18", "17", "16", "15", "14", "13", "12", "11", "10", "9" ]
+maxConcurrency: 2
+```
+Note that this example is simply an alternative way of defining a one-dimensional matrix with the `items` keyword as the key. 
You can define the same basic strategy like this:
+
+
+```
+matrix:
+  jdk: [ "18", "17", "16", "15", "14", "13", "12", "11", "10", "9" ]
+maxConcurrency: 2
+```
+### Using Matrix Variables in Your Pipeline
+
+You can reference matrix values in your Stages and Steps using `<+matrix.`*`tag`*`>`. Here are some examples.
+
+Given the CI example above, you can enter the following in a Run Step to output the current run:
+
+
+```
+echo "Testing app in <+matrix.browser> on <+matrix.os>"
+```
+
+Suppose you have a matrix in a Stage and another in a member Step, and both matrices use the tags `browser` and `os`. You can reference both sets of tags from within the Step like this:
+
+
+```
+echo "stage values (parent):"
+echo "Current browser for stage: <+stage.matrix.browser>"
+echo "Current os for stage: <+stage.matrix.os>"
+echo "step values (local):"
+echo "Current browser for step: <+matrix.browser>"
+echo "Current os for step: <+matrix.os>"
+```
+Given the CD example above, you can go to the Service tab of the Deploy Stage and specify the service using `<+matrix.service>`.
+
+![](./static/run-a-stage-or-step-multiple-times-using-a-matrix-40.png)
+The following variables are also supported:
+
+* `<+strategy.iteration>` — The current iteration.
+* `<+strategy.iterations>` — The total number of iterations.
+* `<+repeat.item>` — The value of the item when iterating through a list using the `repeat` and `items` keywords.
+
+### YAML Pipeline Example
+
+The following example illustrates how you can define matrix strategies in a pipeline. 
+ +matrix-pipeline-example.yml +``` + pipeline: + name: matrix-example-2 + identifier: matrixexample2 + projectIdentifier: myproject + orgIdentifier: myorg + tags: {} + stages: + - stage: + name: echoMatrixSettings + identifier: echoMatrixSettings + description: "" + type: Custom + spec: + execution: + steps: + - step: + type: ShellScript + name: echo + identifier: echo + spec: + shell: Bash + onDelegate: true + source: + type: Inline + spec: + script: |- + echo "iteration index = <+strategy.iteration>" + echo "total iterations = <+strategy.iterations>" + echo "stage values (parent):" + echo "Current version for stage: <+stage.matrix.service>" + echo "Current environment for stage: <+stage.matrix.environment>" + echo "step values (local):" + echo "Current item (version): <+repeat.item>" + environmentVariables: [] + outputVariables: [] + executionTarget: {} + timeout: 10m + failureStrategies: [] + strategy: + repeat: + items: + - "18" + - "17" + - "16" + - "15" + - "14" + - "13" + - "12" + - "11" + - "10" + - "9" + maxConcurrency: 2 + tags: {} + strategy: + matrix: + service: + - svc1 + - svc2 + - svc3 + environment: + - env1 + - env2 + exclude: + - service: svc1 + environment: env1 + maxConcurrency: 2 +``` +### See also + +* [Best Practices for Looping Strategies](best-practices-for-looping-strategies.md) +* [Looping Strategies Overview: Matrix, Repeat, and Parallelism](looping-strategies-matrix-repeat-and-parallelism.md) +* [Speed Up CI Test Pipelines Using Parallelism](speed-up-ci-test-pipelines-using-parallelism.md) + diff --git a/docs/platform/8_Pipelines/run-pipelines-using-input-sets-and-overlays.md b/docs/platform/8_Pipelines/run-pipelines-using-input-sets-and-overlays.md new file mode 100644 index 00000000000..e3895ba8838 --- /dev/null +++ b/docs/platform/8_Pipelines/run-pipelines-using-input-sets-and-overlays.md @@ -0,0 +1,123 @@ +--- +title: Run Pipelines using Input Sets and Overlays +description: Create a Pipeline template that can use different runtime 
variable values for different services, codebases, target environments, and goals. +# sidebar_position: 2 +helpdocs_topic_id: gfk52g74xt +helpdocs_category_id: kncngmy17o +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness Input Sets are collections of runtime variables and values that can be provided to Pipelines before execution. + +An Input Set includes all the runtime inputs that are not permanent in the Pipeline. Runtime inputs are the settings that you would be prompted to provide when you executed the Pipeline manually. + +Overlays are groups of Input Sets. Overlays enable you to provide several input sets when executing a Pipeline. + +Input Sets and Overlays allow you to create a Pipeline template that can use different runtime input values for different services, codebases, target environments, and goals. + + +### Before you begin + +* [CI Pipeline Quickstart](../../continuous-integration/ci-quickstarts/ci-pipeline-quickstart.md) +* [Kubernetes CD Quickstart](https://docs.harness.io/article/knunou9j30-kubernetes-cd-quickstart) +* [Input Sets and Overlays](input-sets.md) +* [Runtime Inputs](../20_References/runtime-inputs.md) + +### Step 1: Create the Input Sets + +You can create an Input Set in two ways: + +* From the **Run Pipeline** page: +1. Configure your Pipeline and click **Run**. +2. Enter values for the required runtime inputs. +3. Click **Save as Input Set**. The Input Set setup appears. + + ![](./static/run-pipelines-using-input-sets-and-overlays-08.png) + +4. Enter a name, description, and tags for the new Input Set, and then click **Save**. +* By simply creating an Input Set: +1. In **Pipeline Studio**, click **Input Sets**. +2. Click **New Input Set** and select **Input Set**. +3. Enter values for the required runtime inputs and click **Save**. 
+
+#### YAML Example
+
+YAML Example
+```
+inputSet:
+  name: service
+  tags: {}
+  identifier: service
+  pipeline:
+    identifier: BG_example
+    stages:
+      - stage:
+          identifier: nginx
+          type: Deployment
+          spec:
+            serviceConfig:
+              serviceDefinition:
+                type: Kubernetes
+                spec:
+                  manifests:
+                    - manifest:
+                        identifier: manifests
+                        type: K8sManifest
+                        spec:
+                          store:
+                            type: Git
+                            spec:
+                              branch: main
+                  variables:
+                    - name: foo
+                      type: String
+                      value: bar
+              serviceRef: nginx
+            infrastructure:
+              environmentRef: quickstart
+          variables:
+            - name: stagevar
+              type: String
+              value: ""
+```
+### Step 2: Create an Overlay
+
+Once you have multiple Input Sets set up, you can combine them into an Overlay.
+
+In an Overlay, you select the order in which to apply several Input Sets.
+
+When you run a Pipeline using an Overlay, the Input Sets are applied in the order specified in the Overlay. The first Input Set is applied first, and then subsequent Input Sets override any previously specified or empty values.
+
+### Step 3: Run the Pipeline using Input Set or Overlay
+
+When you have created your Input Sets and Overlays, you can run the Pipeline using them.
+
+You can select Input Sets and Overlays in two ways:
+
+* From the **Run Pipeline** page:
+1. In **Pipeline Studio**, click **Run**.
+2. In the **Run Pipeline** page, click the Input Sets option.
+
+   ![](./static/run-pipelines-using-input-sets-and-overlays-09.png)
+
+3. Click the Input Set(s) or Overlay(s) to apply their settings.
+4. Click **Run Pipeline**.
+* From the **Input Sets** list:
+1. In **Pipeline Studio**, click **Input Sets**.
+2. In the Input Set or Overlay you want to use, click **Run Pipeline**.
+You can also use the Input Sets option here.
+3. Change any settings you want and click **Run Pipeline**.
+
+   The Pipeline is run with the Input Set(s) or Overlay(s) settings.
+
+### Limitations
+
+Only runtime inputs are available in Input Sets. Most, but not all, Pipeline and Stage settings can be defined as runtime inputs. 
+
+You can use any setting that offers the **Runtime input** option:
+
+![](./static/run-pipelines-using-input-sets-and-overlays-10.png)
+
diff --git a/docs/platform/8_Pipelines/run-specific-stage-in-pipeline.md b/docs/platform/8_Pipelines/run-specific-stage-in-pipeline.md
new file mode 100644
index 00000000000..3b9c2c72008
--- /dev/null
+++ b/docs/platform/8_Pipelines/run-specific-stage-in-pipeline.md
@@ -0,0 +1,76 @@
+---
+title: Run Specific Stages in Pipeline
+description: You can choose to run a specific stage instead of running the whole Pipeline in Harness. The ability to run a specific stage helps in situations when only a few stages fail in a Pipeline This topic e…
+# sidebar_position: 2
+helpdocs_topic_id: 95q2sp1hpr
+helpdocs_category_id: kncngmy17o
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+A Pipeline is an end-to-end process that delivers a new version of your software. Each Pipeline has stages that contain the logic to perform one major segment of the Pipeline process.
+
+While executing a Pipeline, you might encounter situations where most of the stages succeed, but a few of them fail. Or you might want to run only specific Stages. In such situations, Harness lets you select specific stages to run instead of executing the entire Pipeline again.
+
+This topic explains how to run specific stages in a Pipeline.
+
+
+### Before you begin
+
+* [Learn Harness' Key Concepts](https://ngdocs.harness.io/article/hv2758ro4e-learn-harness-key-concepts)
+* [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md)
+* [Add a Stage](../8_Pipelines/add-a-stage.md)
+* Make sure you have **Execute** permissions for Pipeline to run a specific Stage of the Pipeline. For example, the [Pipeline Executor](../4_Role-Based-Access-Control/ref-access-management/permissions-reference.md) default role in the Project where your Pipeline is located. 
+ +### Review: Dependent and Independent Stages + +The Services and Environments in a Pipeline stage can be propagated to subsequent Stages. Also, the settings of one stage such as its variables and step inputs and outputs can be referenced in other Stages as expressions. + +See [Fixed Values, Runtime Inputs, and Expressions](../20_References/runtime-inputs.md). + +If a Stage uses the settings of another Stage, it is a dependent Stage. + +If a Stage does not use the settings of any other Stage, it is an independent Stage. + +How you run and rerun Stages is different depending on whether the Stage is dependent or independent. + +Let's look at the different options. + +### Step 1: Select Stage Execution Settings + +To run specific stages in your Pipeline, you must allow selective stage(s) execution. + +To do this, in your Pipeline click **Advanced Options**. + +In **Stage Execution Settings**, set **Allow selective stage(s) executions?** to **Yes**. + +![](./static/run-specific-stage-in-pipeline-44.png) +### Option: Run Specific Independent Stages + +This topic assumes you have a Harness Project set up. If not, see [Create Organizations and Projects](../1_Organizations-and-Projects/2-create-an-organization.md). + +You can [create a Pipeline](add-a-stage.md#step-1-create-a-pipeline) from any module in your Project, and then [add Stages](../8_Pipelines/add-a-stage.md) for any module. + +In your Pipeline, click Run. The Run Pipeline settings appear. + +In Stages, select one or more stages in your Pipeline which are independent of other stages. + +![](./static/run-specific-stage-in-pipeline-45.png) +If the selected stage requires any [Runtime Inputs](../20_References/runtime-inputs.md#runtime-inputs), you can provide the inputs only for that Stage manually or by selecting an input set. + +![](./static/run-specific-stage-in-pipeline-46.png) +You can also view the execution details in the Pipeline execution history. 
+ +![](./static/run-specific-stage-in-pipeline-47.png) +### Option: Run Specific Dependent Stages + +If you want to run Stages that propagate settings or need inputs from previous Stages as [expressions](../20_References/runtime-inputs.md#expressions), you can provide the inputs manually while executing this Stage independently. + +The below example shows a Pipeline with 3 stages. Stage2 uses the value of timeout in stage1 using an expression. When you run stage2 without executing stage1, this expression is evaluated as a runtime input. You can input the value during execution and run this Stage independently. + +![](./static/run-specific-stage-in-pipeline-48.png) +### Option: Rerun Stage + +You can rerun an executed Stage by clicking the Rerun Stage button and providing any Runtime inputs. + +![](./static/run-specific-stage-in-pipeline-49.png) \ No newline at end of file diff --git a/docs/platform/8_Pipelines/searching-the-console-view.md b/docs/platform/8_Pipelines/searching-the-console-view.md new file mode 100644 index 00000000000..a8de268fa16 --- /dev/null +++ b/docs/platform/8_Pipelines/searching-the-console-view.md @@ -0,0 +1,43 @@ +--- +title: Searching the Console View +description: Pipeline executions can be viewed in Console View and you can quickly search the logs for each step. +# sidebar_position: 2 +helpdocs_topic_id: gnht939ijo +helpdocs_category_id: kncngmy17o +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Pipeline executions can be viewed in Console View and you can quickly search the logs for each step. + + +### Before you begin + +Before you can search execution logs, you need to run a Pipeline. See [Quickstarts](https://docs.harness.io/article/u8lgzsi7b3-quickstarts) to set up and run a Pipeline in minutes. + +### Step: Search the Execution Step Logs + +In the Pipeline execution, click **Console View**. + +![](./static/searching-the-console-view-41.png) +In the Console View of an execution, click the step you want to search. 
+
+Enter `Cmd + f` (Mac) or `Ctrl + f` (Windows).
+
+You can also click the search icon.
+
+![](./static/searching-the-console-view-42.png)
+The search appears.
+
+Type in your search query and the results are highlighted immediately.
+
+![](./static/searching-the-console-view-43.png)
+### Option: Console Keyboard Shortcuts
+
+Use the following shortcuts to search the log (these are Mac examples; substitute `Ctrl` for `Cmd` on Windows):
+
+* `Up` to go up or go to the next search result.
+* `Down` to go down or go to the previous search result.
+* `Enter` to go to the next search result.
+* `Esc` to cancel the search.
+
diff --git a/docs/platform/8_Pipelines/speed-up-ci-test-pipelines-using-parallelism.md b/docs/platform/8_Pipelines/speed-up-ci-test-pipelines-using-parallelism.md
new file mode 100644
index 00000000000..e0baeb2b2fa
--- /dev/null
+++ b/docs/platform/8_Pipelines/speed-up-ci-test-pipelines-using-parallelism.md
@@ -0,0 +1,317 @@
+---
+title: Speed Up CI Test Pipelines Using Parallelism
+description: Use parallelism to run your build tests in parallel. This is one of the looping strategies available in Harness pipelines. Parallelism is useful whenever there is a need to run a step or a stage multiple times in parallel.
+# sidebar_position: 2
+helpdocs_topic_id: kce8mgionj
+helpdocs_category_id: kncngmy17o
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Currently, this feature is behind a Feature Flag. Contact [Harness Support](mailto:support@harness.io) to enable the feature. The more tests you run, the longer it takes for them to complete if run sequentially. To reduce test cycle time, you can split your tests and run them across multiple groups at the same time.
+
+*Parallelism* is one of the [looping strategies](looping-strategies-matrix-repeat-and-parallelism.md) available in Harness pipelines. Parallelism is useful whenever you can split a step or stage into multiple groups and run them at the same time. 
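To make "splitting tests into groups" concrete, the following standalone shell sketch round-robins a file list across parallel nodes. This is a hypothetical stand-in for Harness's `split_tests` — the `pick_files` function and its arguments are illustrative only, and the real command additionally balances groups by file size or historical timing:

```shell
# pick_files INDEX TOTAL FILE...
# Print the files that belong to node INDEX out of TOTAL parallel nodes,
# assigning files round-robin by position (a simplified split strategy).
pick_files() {
  node_index=$1
  node_total=$2
  shift 2
  i=0
  for f in "$@"; do
    if [ $((i % node_total)) -eq "$node_index" ]; then
      printf '%s\n' "$f"
    fi
    i=$((i + 1))
  done
}

# Node 0 of 2 parallel nodes gets the 1st and 3rd files:
pick_files 0 2 test_a.py test_b.py test_c.py test_d.py
```

Each node runs the same command with a different index, so together the nodes cover every file exactly once.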
+ +Parallelism is one of the [available methods](../../continuous-integration/troubleshoot/optimizing-ci-build-times.md) you can use to speed up your CI builds. + +### Key concepts: parallelism and test splitting + +Many pipelines are set up to run a set of tests with every new commit. When you [set up parallelism](#set-up-parallelism-in-a-pipeline) in your pipeline, you specify the following: + +1. How many copies of the stage or step to run ([`parallelism`](#define-the-parallelism-strategy) field). +2. How to split your tests into groups ([`split_tests`](#define-test-splitting) command). This command splits the tests as evenly as possible to ensure the fastest overall test time. You can split by file size or by file timing. + +The following figure illustrates how parallelism can speed up your CI pipelines. The first time you run with parallelism, the pipeline splits the tests by file size and collects timing data for all tests. You can then split your tests by time and speed up your pipeline even further. Every build optimizes the splitting based on the most recent timing data. + +**Figure 1: Parallelism and Test Times** + +![](./static/speed-up-ci-test-pipelines-using-parallelism-50.png) + +### YAML stage with parallelism + +Parallelism can be set on both steps and stages.  + +The following snippet shows a YAML definition of a Run step that uses [pytest](https://docs.pytest.org/) to split tests into four test groups running in parallel. 
+
+
+```
+# Use "run" step type
+- step:
+    type: Run
+    name: Run Pytests
+    identifier: Run_Pytests
+# Enable parallelism strategy
+    strategy:
+      parallelism: 4   # Number of parallel runs
+      maxConcurrency: 2 # (optional) Limit the number of parallel runs
+    spec:
+      connectorRef: $dockerhub_connector
+      image: python:latest
+      shell: Sh
+# Store the current index and total runs in environment variables
+      envVariables:
+        HARNESS_NODE_INDEX: <+strategy.iteration> # index of current run
+        HARNESS_NODE_TOTAL: <+strategy.iterations> # total runs
+      command: |-
+        pip install -r requirements.txt
+# Define splitting strategy and generate a list of test groups
+        FILES=`/addon/bin/split_tests --glob "**/test_*.py" \
+          --split-by file_timing \
+          --split-index ${HARNESS_NODE_INDEX} \
+          --split-total ${HARNESS_NODE_TOTAL}`
+        echo $FILES
+# Run tests with the test-groups string as input
+        pytest -v --junitxml="result_<+strategy.iteration>.xml" $FILES
+# Publish JUnit test reports to Harness
+      reports:
+        type: JUnit
+        spec:
+          paths: # Generate unique report for each iteration
+            - "**/result_<+strategy.iteration>.xml"
+      failureStrategies: []
+```
+### Important notes
+
+* Consider any resource constraints in your build infrastructure when using parallelism. To learn more, go to [Best Practices for Looping Strategies](best-practices-for-looping-strategies.md).
+* You can implement a parallelism strategy for an entire stage or for individual steps within a stage.
+* If you are implementing parallelism in a step rather than a stage, you need to make sure that each test-group step generates a report with a unique filename to avoid conflicts.
+You can do this using the `<+strategy.iteration>` variable, which is the index of the current test group run. This index is in the range of `0` to `parallelism - 1`.
+* If you want to publish your test results, you must ensure that your output files are in [JUnit](https://junit.org/junit5/) XML format. 
How you publish your test results depends on the specific language, test runner, and formatter used in your repo.
+For more information, go to [Publish test reports](#define-the-test-reports).
+
+### Set up parallelism in a pipeline
+
+The following steps describe the high-level workflow for setting up parallelism in a pipeline.
+
+1. Enable parallelism and specify the number of jobs you want to run in parallel. Go to [Define the parallelism strategy](#define-the-parallelism-strategy).
+2. Define the following environment variables in the stage where you run your parallelism strategy:
+	* `HARNESS_NODE_TOTAL` = `<+strategy.iterations>` — The total number of iterations in the current Stage or Step.
+	* `HARNESS_NODE_INDEX` = `<+strategy.iteration>` — The index of the current test run. This index is in the range of `0` to `parallelism - 1`. This snippet shows how you can define and use these variables in the YAML editor:
+```
+- step:
+    ....
+    envVariables:
+      HARNESS_NODE_INDEX: <+strategy.iteration>
+      HARNESS_NODE_TOTAL: <+strategy.iterations>
+    command: |-
+      pip install -r requirements.txt
+      FILES=`/addon/bin/split_tests --glob "**/test_*.py" \
+        --split-by file_size \
+        --split-index ${HARNESS_NODE_INDEX} \
+        --split-total=${HARNESS_NODE_TOTAL}`
+      pytest -v --junitxml="result_${HARNESS_NODE_INDEX}.xml" $FILES
+      echo "$HARNESS_NODE_TOTAL runs using file list $FILES"
+```
+To define these attributes in the Pipeline Studio, go to the step that implements the parallelism strategy. Then go to **Optional Configuration** > **Environment Variables**.
+
+3. Set up the split\_tests command with the splitting criteria based on file size (`--split-by file_size`). Go to [Define test splitting](#define-test-splitting).
+4. Define your test reports. Your reports must be in JUnit format. Go to [Publish test reports](#define-the-test-reports).
+5. Run your Pipeline to make sure all your Steps complete successfully. 
You can see the parallel copies of your Step running in the Build UI.
+**Figure 2: Parallel steps in a build**
+
+   ![](./static/speed-up-ci-test-pipelines-using-parallelism-51.png)
+
+6. When the build finishes, go to the Tests tab and view your results. You can view results for each parallel run using the pull-down.
+   **Figure 3: View results for individual runs**
+
+   ![](./static/speed-up-ci-test-pipelines-using-parallelism-52.png)
+
+7. Now that Harness has collected timing data, you can split your tests by time and reduce your build times further. Go to [Define test splitting](#define-test-splitting).
+
+### Define the parallelism strategy
+
+The `parallelism` value defines how many steps you want to run in parallel. In general, a higher value means a faster completion time for all tests. The primary constraint is the resource availability in your build infrastructure. The YAML definition looks like this:
+
+
+```
+- step:
+    ...
+    strategy:
+      parallelism: 4
+```
+#### Defining parallelism in the Pipeline UI
+
+You can configure parallelism in the Pipeline Studio as well:
+
+1. In the Pipeline Studio, open the Step or Stage where you run your tests and click the **Advanced** tab.
+2. Under **Looping Strategies**, select **Parallelism** and define your strategy.
+**Figure 4: Define parallelism in a Run step**
+
+![](./static/speed-up-ci-test-pipelines-using-parallelism-53.png)
+
+[Parallelism Workflow](#set-up-parallelism-in-a-pipeline)
+
+### Define test splitting
+
+You use the `split_tests` CLI command to define the set of tests you want to run. In the **Command** field of the step where you run your tests, you need to do the following:
+
+1. Configure the `split_tests` command to define how you want to split your tests. This command outputs a string of your test groups.
+2. Run the test command with your test-groups string as input.
+
+
+```
+# Generate a new set of grouped test files and output the file list to a string... 
+FILES=`/addon/bin/split_tests --glob "**/test_*.py" \
+  --split-by file_timing \
+  --split-index ${HARNESS_NODE_INDEX} \
+  --split-total=${HARNESS_NODE_TOTAL}`
+echo $FILES
+# example output: test_api_2.py test_api_4.py test_api_6.py
+
+# Then use the $FILES list as input to the test command, in this case pytest:
+pytest -v --junitxml="result_${HARNESS_NODE_INDEX}.xml" $FILES
+```
+The `split_tests` command creates a new set of test files that is ordered based on your splitting criteria. This command takes the following as inputs:
+
+* The set of all the tests you want to run (`--glob` argument).
+* The algorithm used to split the tests into groups (`--split-by` argument).
+* The run index and total number of runs. You should set these to the environment attributes you defined previously (`--split-index ${HARNESS_NODE_INDEX}` and `--split-total ${HARNESS_NODE_TOTAL}`).
+
+#### Test splitting strategies
+
+The `split_tests` command allows you to define the criteria for splitting tests. The pipeline uses [Test Intelligence](../../continuous-integration/ci-quickstarts/test-intelligence-concepts.md) to eliminate tests that don’t need to be rerun; then it splits the remaining tests based on the splitting strategy.
+
+Harness supports the following strategies:
+
+* `--split-by file_size` — Split files into groups based on the size of individual files.
+The pipeline needs timing data from the previous run to split tests by time. If timing data isn't available, the pipeline splits tests using this option.
+* `--split-by file_timing` — Split files into groups based on the test times of individual files. This is the default setting: `split_tests` uses the most recent timing data to ensure that all parallel test runs finish at approximately the same time.
+* `--split-by test_count` — Split tests into groups based on the overall number of tests.
+* `--split-by class_timing` — Split tests into groups based on the timing data for individual classes. 
+* `--split-by testcase_timing` — Split tests into groups based on the timing data for individual test cases. +* `--split-by testsuite_timing` — Split tests into groups based on the timing data for individual test suites. + +##### Specifying the Tests to Split + +To split tests by time, you need to provide a list of the classes, test cases, or test suites to include. In the following example code, included in a Run Tests step, the `split_tests` command parses all matching test files (`--glob` option) and splits them into separate lists based on `--split-by file_timing`. The number of lists is based on the parallelism setting. If `parallelism` = 2, for example, the command creates two separate lists of files, evenly divided by testing time. The pipeline then creates two parallel steps that run tests for the files in each list. + + +``` +pip install -r requirements.txt + +# Split by timing data +FILES=`/addon/bin/split_tests --glob "**/test_*.py" --split-by file_timing` +echo $FILES +pytest -v --junitxml="result_${HARNESS_NODE_INDEX}.xml" $FILES +``` +When the pipeline finishes a build, the `echo $FILES` output shows the files that got tested in each step. For example, one step log shows... + + +``` + ++ FILES=test_file_1.py test_file_2.py test_file_6.py test_file_9.py test_file_10.py test_file_12.py test_file_13.py +``` +...while the other log shows: + + +``` + ++ FILES=test_file_3.py test_file_4.py test_file_5.py test_file_8.py test_file_11.py test_file_14.py + +``` +Note that this example applies to the `--split-by file_timing` option. In this case, you can use a glob expression to specify the set of elements that need to be split and tested. For class, test-case, or test-suite timing, you must provide a text file of the elements to split.
If you want to split by Java-class timing, for example, you could specify the set of classes to split and test in a newline-delimited string like this: + + +``` +printf 'io.harness.jhttp.server.PathResolverTest\nio.harness.jhttp.processor.DirectoryIndexTest\nio.harness.jhttp.functional.HttpClientTest\nio.harness.jhttp.processor.ResourceNotFoundTest\n' > classnames.txt +CLASSES=`/addon/bin/split_tests --split-by class_timing --file-path classnames.txt` +``` +[Parallelism Workflow](#set-up-parallelism-in-a-pipeline) + +### Define the test reports + +The `report` section in the Pipeline YAML defines how to publish your test reports. Here's an example: + + +``` +reports: + type: JUnit + spec: + paths: + - "**/result_${HARNESS_NODE_INDEX}.xml" +``` +You need to do the following: + +* Set up your test runner and formatter to publish your test reports in [JUnit](https://junit.org/junit5/) XML format and to include filenames in the XML output. If you are using [pytest](https://docs.pytest.org/), for example, you can configure the report format by setting `junit_family` in the `pytest.ini` file in your code repo: +`junit_family=xunit1` +Reporting setup and configurations depend on the specific test runner. Go to the external documentation for your specific runner to determine how to publish in the correct format. +* If you are implementing parallelism in a step rather than a stage, you need to make sure that each test-group step generates a report with a unique filename. +You can do this using the `<+strategy.iteration>` variable, which is the index of the current test run. This index is in the range of `0` to `parallelism - 1`. + +You can configure your test reporting options in the pipeline YAML, as shown above, or in the Pipeline Studio. Go to the Run or Run Tests Step and configure the **Report Paths** field under Optional Configuration.
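The points above can be combined in a single parallelized Run step. The following sketch is illustrative (the step name and pytest command are assumptions, not taken from a specific pipeline); it uses `<+strategy.iteration>` directly so that each parallel copy writes a uniquely named JUnit report:


```
- step:
    type: Run
    name: Run Pytests
    identifier: Run_Pytests
    strategy:
      parallelism: 4
    spec:
      shell: Sh
      command: pytest -v --junitxml="result_<+strategy.iteration>.xml"
      reports:
        type: JUnit
        spec:
          paths:
            - "**/result_<+strategy.iteration>.xml"
```
Because the report path includes the iteration index, the four parallel copies never overwrite each other's results.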
+ +**Figure 6: Define Report Paths in a Run Step** + +![](./static/speed-up-ci-test-pipelines-using-parallelism-54.png) + +[Parallelism Workflow](#set-up-parallelism-in-a-pipeline) + +### YAML pipeline example with parallelism + +The following example shows a full end-to-end pipeline with parallelism enabled. + +parallelism-pipeline-example.yml +``` +pipeline: + name: parallelism-for-docs-v6 + identifier: parallelismfordocsv6 + projectIdentifier: myproject + orgIdentifier: myorg + tags: {} + properties: + ci: + codebase: + connectorRef: $GITHUB_CONNECTOR + repoName: testing-flask-with-pytest + build: <+input> + stages: + - stage: + name: Build and Test + identifier: Build_and_Test + type: CI + spec: + cloneCodebase: true + infrastructure: + type: KubernetesDirect + spec: + connectorRef: $HARNESS_K8S_DELEGATE_CONNECTOR + namespace: harness-delegate-ng + automountServiceAccountToken: true + nodeSelector: {} + os: Linux + execution: + steps: + - step: + type: Run + name: Run Pytests + identifier: Run_Pytests + strategy: + parallelism: 4 + spec: + connectorRef: $DOCKERHUB_CONNECTOR + image: python:latest + shell: Sh + envVariables: + HARNESS_NODE_INDEX: <+strategy.iteration> + HARNESS_NODE_TOTAL: <+strategy.iterations> + command: |- + pip install -r requirements.txt + FILES=`/addon/bin/split_tests --glob "**/test_*.py" \ + --split-by file_timing \ + --split-index ${HARNESS_NODE_INDEX} \ + --split-total=${HARNESS_NODE_TOTAL}` + echo $FILES + pytest -v --junitxml="result_${HARNESS_NODE_INDEX}.xml" $FILES + reports: + type: JUnit + spec: + paths: + - "**/result_${HARNESS_NODE_INDEX}.xml" + failureStrategies: [] +``` +### See also + +* [Optimizing CI Build Times](https://harness.helpdocs.io/article/g3m7pjq79y) +* [Looping Strategies Overview: Matrix, For Loop, and Parallelism](https://harness.helpdocs.io/article/eh4azj73m4) +* [Best Practices for Looping Strategies](best-practices-for-looping-strategies.md) +* [Run a Stage or Step Multiple Times using a 
Matrix](run-a-stage-or-step-multiple-times-using-a-matrix.md) + diff --git a/docs/platform/8_Pipelines/static/InputsetsOverlays1.png b/docs/platform/8_Pipelines/static/InputsetsOverlays1.png new file mode 100644 index 00000000000..c27b9f63867 Binary files /dev/null and b/docs/platform/8_Pipelines/static/InputsetsOverlays1.png differ diff --git a/docs/platform/8_Pipelines/static/InputsetsOverlays2.png b/docs/platform/8_Pipelines/static/InputsetsOverlays2.png new file mode 100644 index 00000000000..ef4fd53050a Binary files /dev/null and b/docs/platform/8_Pipelines/static/InputsetsOverlays2.png differ diff --git a/docs/platform/8_Pipelines/static/add-a-custom-stage-58.png b/docs/platform/8_Pipelines/static/add-a-custom-stage-58.png new file mode 100644 index 00000000000..8eaa68134ea Binary files /dev/null and b/docs/platform/8_Pipelines/static/add-a-custom-stage-58.png differ diff --git a/docs/platform/8_Pipelines/static/add-a-stage-55.png b/docs/platform/8_Pipelines/static/add-a-stage-55.png new file mode 100644 index 00000000000..61cdb7b8aea Binary files /dev/null and b/docs/platform/8_Pipelines/static/add-a-stage-55.png differ diff --git a/docs/platform/8_Pipelines/static/add-a-stage-56.png b/docs/platform/8_Pipelines/static/add-a-stage-56.png new file mode 100644 index 00000000000..613f6117a6b Binary files /dev/null and b/docs/platform/8_Pipelines/static/add-a-stage-56.png differ diff --git a/docs/platform/8_Pipelines/static/add-a-stage-57.png b/docs/platform/8_Pipelines/static/add-a-stage-57.png new file mode 100644 index 00000000000..0e98d86c943 Binary files /dev/null and b/docs/platform/8_Pipelines/static/add-a-stage-57.png differ diff --git a/docs/platform/8_Pipelines/static/best-practices-for-looping-strategies-06.png b/docs/platform/8_Pipelines/static/best-practices-for-looping-strategies-06.png new file mode 100644 index 00000000000..f0996cd052e Binary files /dev/null and b/docs/platform/8_Pipelines/static/best-practices-for-looping-strategies-06.png differ 
diff --git a/docs/platform/8_Pipelines/static/best-practices-for-looping-strategies-07.png b/docs/platform/8_Pipelines/static/best-practices-for-looping-strategies-07.png new file mode 100644 index 00000000000..93a787ddda5 Binary files /dev/null and b/docs/platform/8_Pipelines/static/best-practices-for-looping-strategies-07.png differ diff --git a/docs/platform/8_Pipelines/static/define-a-failure-strategy-on-stages-and-steps-11.png b/docs/platform/8_Pipelines/static/define-a-failure-strategy-on-stages-and-steps-11.png new file mode 100644 index 00000000000..38df552e380 Binary files /dev/null and b/docs/platform/8_Pipelines/static/define-a-failure-strategy-on-stages-and-steps-11.png differ diff --git a/docs/platform/8_Pipelines/static/define-a-failure-strategy-on-stages-and-steps-12.png b/docs/platform/8_Pipelines/static/define-a-failure-strategy-on-stages-and-steps-12.png new file mode 100644 index 00000000000..5a1a496ada1 Binary files /dev/null and b/docs/platform/8_Pipelines/static/define-a-failure-strategy-on-stages-and-steps-12.png differ diff --git a/docs/platform/8_Pipelines/static/harness-yaml-quickstart-21.png b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-21.png new file mode 100644 index 00000000000..afc72d76c35 Binary files /dev/null and b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-21.png differ diff --git a/docs/platform/8_Pipelines/static/harness-yaml-quickstart-22.png b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-22.png new file mode 100644 index 00000000000..43a7ba0350f Binary files /dev/null and b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-22.png differ diff --git a/docs/platform/8_Pipelines/static/harness-yaml-quickstart-23.png b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-23.png new file mode 100644 index 00000000000..2775ae72b0f Binary files /dev/null and b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-23.png differ diff --git 
a/docs/platform/8_Pipelines/static/harness-yaml-quickstart-24.png b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-24.png new file mode 100644 index 00000000000..5d272983cae Binary files /dev/null and b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-24.png differ diff --git a/docs/platform/8_Pipelines/static/harness-yaml-quickstart-25.png b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-25.png new file mode 100644 index 00000000000..f0b71d11455 Binary files /dev/null and b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-25.png differ diff --git a/docs/platform/8_Pipelines/static/harness-yaml-quickstart-26.png b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-26.png new file mode 100644 index 00000000000..94ea921b231 Binary files /dev/null and b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-26.png differ diff --git a/docs/platform/8_Pipelines/static/harness-yaml-quickstart-27.png b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-27.png new file mode 100644 index 00000000000..b934210a5cb Binary files /dev/null and b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-27.png differ diff --git a/docs/platform/8_Pipelines/static/harness-yaml-quickstart-28.png b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-28.png new file mode 100644 index 00000000000..e545c828119 Binary files /dev/null and b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-28.png differ diff --git a/docs/platform/8_Pipelines/static/harness-yaml-quickstart-29.png b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-29.png new file mode 100644 index 00000000000..e758113559c Binary files /dev/null and b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-29.png differ diff --git a/docs/platform/8_Pipelines/static/harness-yaml-quickstart-30.png b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-30.png new file mode 100644 index 00000000000..a8944620670 Binary files /dev/null and 
b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-30.png differ diff --git a/docs/platform/8_Pipelines/static/harness-yaml-quickstart-31.png b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-31.png new file mode 100644 index 00000000000..7e9d8f72a2c Binary files /dev/null and b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-31.png differ diff --git a/docs/platform/8_Pipelines/static/harness-yaml-quickstart-32.png b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-32.png new file mode 100644 index 00000000000..0dd9cb31e68 Binary files /dev/null and b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-32.png differ diff --git a/docs/platform/8_Pipelines/static/harness-yaml-quickstart-33.png b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-33.png new file mode 100644 index 00000000000..a1b65bc3848 Binary files /dev/null and b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-33.png differ diff --git a/docs/platform/8_Pipelines/static/harness-yaml-quickstart-34.png b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-34.png new file mode 100644 index 00000000000..3d56be1073e Binary files /dev/null and b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-34.png differ diff --git a/docs/platform/8_Pipelines/static/harness-yaml-quickstart-35.png b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-35.png new file mode 100644 index 00000000000..1437ec677d1 Binary files /dev/null and b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-35.png differ diff --git a/docs/platform/8_Pipelines/static/harness-yaml-quickstart-36.png b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-36.png new file mode 100644 index 00000000000..990302d3f29 Binary files /dev/null and b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-36.png differ diff --git a/docs/platform/8_Pipelines/static/harness-yaml-quickstart-37.png b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-37.png new file mode 
100644 index 00000000000..fbbe065d4ca Binary files /dev/null and b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-37.png differ diff --git a/docs/platform/8_Pipelines/static/harness-yaml-quickstart-38.png b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-38.png new file mode 100644 index 00000000000..03d540f8d84 Binary files /dev/null and b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-38.png differ diff --git a/docs/platform/8_Pipelines/static/harness-yaml-quickstart-39.png b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-39.png new file mode 100644 index 00000000000..ce8421cdc4e Binary files /dev/null and b/docs/platform/8_Pipelines/static/harness-yaml-quickstart-39.png differ diff --git a/docs/platform/8_Pipelines/static/input-sets-05.png b/docs/platform/8_Pipelines/static/input-sets-05.png new file mode 100644 index 00000000000..f6740bf435f Binary files /dev/null and b/docs/platform/8_Pipelines/static/input-sets-05.png differ diff --git a/docs/platform/8_Pipelines/static/resume-pipeline-deployments-00.png b/docs/platform/8_Pipelines/static/resume-pipeline-deployments-00.png new file mode 100644 index 00000000000..b4cec31b632 Binary files /dev/null and b/docs/platform/8_Pipelines/static/resume-pipeline-deployments-00.png differ diff --git a/docs/platform/8_Pipelines/static/resume-pipeline-deployments-01.png b/docs/platform/8_Pipelines/static/resume-pipeline-deployments-01.png new file mode 100644 index 00000000000..b1e1140b3f7 Binary files /dev/null and b/docs/platform/8_Pipelines/static/resume-pipeline-deployments-01.png differ diff --git a/docs/platform/8_Pipelines/static/resume-pipeline-deployments-02.png b/docs/platform/8_Pipelines/static/resume-pipeline-deployments-02.png new file mode 100644 index 00000000000..a5c584035ac Binary files /dev/null and b/docs/platform/8_Pipelines/static/resume-pipeline-deployments-02.png differ diff --git a/docs/platform/8_Pipelines/static/resume-pipeline-deployments-03.png 
b/docs/platform/8_Pipelines/static/resume-pipeline-deployments-03.png new file mode 100644 index 00000000000..b4cec31b632 Binary files /dev/null and b/docs/platform/8_Pipelines/static/resume-pipeline-deployments-03.png differ diff --git a/docs/platform/8_Pipelines/static/resume-pipeline-deployments-04.png b/docs/platform/8_Pipelines/static/resume-pipeline-deployments-04.png new file mode 100644 index 00000000000..c9973557e7b Binary files /dev/null and b/docs/platform/8_Pipelines/static/resume-pipeline-deployments-04.png differ diff --git a/docs/platform/8_Pipelines/static/run-a-stage-or-step-multiple-times-using-a-matrix-40.png b/docs/platform/8_Pipelines/static/run-a-stage-or-step-multiple-times-using-a-matrix-40.png new file mode 100644 index 00000000000..8ec5d768f83 Binary files /dev/null and b/docs/platform/8_Pipelines/static/run-a-stage-or-step-multiple-times-using-a-matrix-40.png differ diff --git a/docs/platform/8_Pipelines/static/run-pipelines-using-input-sets-and-overlays-08.png b/docs/platform/8_Pipelines/static/run-pipelines-using-input-sets-and-overlays-08.png new file mode 100644 index 00000000000..e0ff35ff0ea Binary files /dev/null and b/docs/platform/8_Pipelines/static/run-pipelines-using-input-sets-and-overlays-08.png differ diff --git a/docs/platform/8_Pipelines/static/run-pipelines-using-input-sets-and-overlays-09.png b/docs/platform/8_Pipelines/static/run-pipelines-using-input-sets-and-overlays-09.png new file mode 100644 index 00000000000..243fedfd15c Binary files /dev/null and b/docs/platform/8_Pipelines/static/run-pipelines-using-input-sets-and-overlays-09.png differ diff --git a/docs/platform/8_Pipelines/static/run-pipelines-using-input-sets-and-overlays-10.png b/docs/platform/8_Pipelines/static/run-pipelines-using-input-sets-and-overlays-10.png new file mode 100644 index 00000000000..56aa17af890 Binary files /dev/null and b/docs/platform/8_Pipelines/static/run-pipelines-using-input-sets-and-overlays-10.png differ diff --git 
a/docs/platform/8_Pipelines/static/run-specific-stage-in-pipeline-44.png b/docs/platform/8_Pipelines/static/run-specific-stage-in-pipeline-44.png new file mode 100644 index 00000000000..cf1a366081d Binary files /dev/null and b/docs/platform/8_Pipelines/static/run-specific-stage-in-pipeline-44.png differ diff --git a/docs/platform/8_Pipelines/static/run-specific-stage-in-pipeline-45.png b/docs/platform/8_Pipelines/static/run-specific-stage-in-pipeline-45.png new file mode 100644 index 00000000000..33dac99d1de Binary files /dev/null and b/docs/platform/8_Pipelines/static/run-specific-stage-in-pipeline-45.png differ diff --git a/docs/platform/8_Pipelines/static/run-specific-stage-in-pipeline-46.png b/docs/platform/8_Pipelines/static/run-specific-stage-in-pipeline-46.png new file mode 100644 index 00000000000..d7660e80c6a Binary files /dev/null and b/docs/platform/8_Pipelines/static/run-specific-stage-in-pipeline-46.png differ diff --git a/docs/platform/8_Pipelines/static/run-specific-stage-in-pipeline-47.png b/docs/platform/8_Pipelines/static/run-specific-stage-in-pipeline-47.png new file mode 100644 index 00000000000..77616bed13d Binary files /dev/null and b/docs/platform/8_Pipelines/static/run-specific-stage-in-pipeline-47.png differ diff --git a/docs/platform/8_Pipelines/static/run-specific-stage-in-pipeline-48.png b/docs/platform/8_Pipelines/static/run-specific-stage-in-pipeline-48.png new file mode 100644 index 00000000000..4937a18db78 Binary files /dev/null and b/docs/platform/8_Pipelines/static/run-specific-stage-in-pipeline-48.png differ diff --git a/docs/platform/8_Pipelines/static/run-specific-stage-in-pipeline-49.png b/docs/platform/8_Pipelines/static/run-specific-stage-in-pipeline-49.png new file mode 100644 index 00000000000..f1b70a4db9c Binary files /dev/null and b/docs/platform/8_Pipelines/static/run-specific-stage-in-pipeline-49.png differ diff --git a/docs/platform/8_Pipelines/static/searching-the-console-view-41.png 
b/docs/platform/8_Pipelines/static/searching-the-console-view-41.png new file mode 100644 index 00000000000..1d64f77bc65 Binary files /dev/null and b/docs/platform/8_Pipelines/static/searching-the-console-view-41.png differ diff --git a/docs/platform/8_Pipelines/static/searching-the-console-view-42.png b/docs/platform/8_Pipelines/static/searching-the-console-view-42.png new file mode 100644 index 00000000000..2619c3b2cbe Binary files /dev/null and b/docs/platform/8_Pipelines/static/searching-the-console-view-42.png differ diff --git a/docs/platform/8_Pipelines/static/searching-the-console-view-43.png b/docs/platform/8_Pipelines/static/searching-the-console-view-43.png new file mode 100644 index 00000000000..210880754dd Binary files /dev/null and b/docs/platform/8_Pipelines/static/searching-the-console-view-43.png differ diff --git a/docs/platform/8_Pipelines/static/speed-up-ci-test-pipelines-using-parallelism-50.png b/docs/platform/8_Pipelines/static/speed-up-ci-test-pipelines-using-parallelism-50.png new file mode 100644 index 00000000000..a2e65ce562a Binary files /dev/null and b/docs/platform/8_Pipelines/static/speed-up-ci-test-pipelines-using-parallelism-50.png differ diff --git a/docs/platform/8_Pipelines/static/speed-up-ci-test-pipelines-using-parallelism-51.png b/docs/platform/8_Pipelines/static/speed-up-ci-test-pipelines-using-parallelism-51.png new file mode 100644 index 00000000000..7d82767db9a Binary files /dev/null and b/docs/platform/8_Pipelines/static/speed-up-ci-test-pipelines-using-parallelism-51.png differ diff --git a/docs/platform/8_Pipelines/static/speed-up-ci-test-pipelines-using-parallelism-52.png b/docs/platform/8_Pipelines/static/speed-up-ci-test-pipelines-using-parallelism-52.png new file mode 100644 index 00000000000..8ed070133f7 Binary files /dev/null and b/docs/platform/8_Pipelines/static/speed-up-ci-test-pipelines-using-parallelism-52.png differ diff --git 
a/docs/platform/8_Pipelines/static/speed-up-ci-test-pipelines-using-parallelism-53.png b/docs/platform/8_Pipelines/static/speed-up-ci-test-pipelines-using-parallelism-53.png new file mode 100644 index 00000000000..1eeb972f78a Binary files /dev/null and b/docs/platform/8_Pipelines/static/speed-up-ci-test-pipelines-using-parallelism-53.png differ diff --git a/docs/platform/8_Pipelines/static/speed-up-ci-test-pipelines-using-parallelism-54.png b/docs/platform/8_Pipelines/static/speed-up-ci-test-pipelines-using-parallelism-54.png new file mode 100644 index 00000000000..8eb1dc654e1 Binary files /dev/null and b/docs/platform/8_Pipelines/static/speed-up-ci-test-pipelines-using-parallelism-54.png differ diff --git a/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-13.png b/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-13.png new file mode 100644 index 00000000000..aa80ebda494 Binary files /dev/null and b/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-13.png differ diff --git a/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-14.png b/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-14.png new file mode 100644 index 00000000000..b65e7fd2445 Binary files /dev/null and b/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-14.png differ diff --git a/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-15.png b/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-15.png new file mode 100644 index 00000000000..326986ed107 Binary files /dev/null and b/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-15.png differ diff --git a/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-16.png b/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-16.png new file mode 100644 index 00000000000..4cafca0a9f9 Binary files /dev/null and 
b/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-16.png differ diff --git a/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-17.png b/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-17.png new file mode 100644 index 00000000000..89da4e3386a Binary files /dev/null and b/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-17.png differ diff --git a/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-18.png b/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-18.png new file mode 100644 index 00000000000..326986ed107 Binary files /dev/null and b/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-18.png differ diff --git a/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-19.png b/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-19.png new file mode 100644 index 00000000000..aa80ebda494 Binary files /dev/null and b/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-19.png differ diff --git a/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-20.png b/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-20.png new file mode 100644 index 00000000000..b65e7fd2445 Binary files /dev/null and b/docs/platform/8_Pipelines/static/view-and-compare-pipeline-executions-20.png differ diff --git a/docs/platform/8_Pipelines/view-and-compare-pipeline-executions.md b/docs/platform/8_Pipelines/view-and-compare-pipeline-executions.md new file mode 100644 index 00000000000..6e9ff963930 --- /dev/null +++ b/docs/platform/8_Pipelines/view-and-compare-pipeline-executions.md @@ -0,0 +1,63 @@ +--- +title: View and Compare Pipeline Executions +description: view and compare the Harness Pipeline YAML used for each Pipeline execution +# sidebar_position: 2 +helpdocs_topic_id: n39cwsfvmj +helpdocs_category_id: kncngmy17o +helpdocs_is_private: false 
+helpdocs_is_published: true +--- + +You can view and compare the compiled Harness Pipeline YAML used for each Pipeline execution. + +Comparing Pipeline YAML helps you see what changes took place between executions. This can help with troubleshooting execution failures. + +### Before you begin + +* [CD Pipeline Basics](https://docs.harness.io/article/cqgeblt4uh-cd-pipeline-basics) +* [CI Pipeline Basics](../../continuous-integration/ci-quickstarts/ci-pipeline-basics.md) + +### Limitations + +* You can only compare YAML from two executions at a time. + +### Visual Summary + +You can compare Pipeline executions by selecting **Compare YAML**, selecting executions, and clicking **Compare**. + +![](./static/view-and-compare-pipeline-executions-13.png) +A diff of the Pipeline YAML for each execution is displayed: + +![](./static/view-and-compare-pipeline-executions-14.png) +### Option: View Compiled Execution YAML + +Compiled execution YAML is the Pipeline YAML used in the execution, including all resolved [Runtime Inputs, Expressions](../20_References/runtime-inputs.md), and [variables](../12_Variables-and-Expressions/harness-variables.md). + +In a Pipeline, click **Execution History**. + +![](./static/view-and-compare-pipeline-executions-15.png) +Pick an execution, click more options (⋮), and then click **View Compiled YAML**. + +![](./static/view-and-compare-pipeline-executions-16.png) +The YAML for the Pipeline used in that execution is displayed. + +![](./static/view-and-compare-pipeline-executions-17.png) +### Option: Compare Execution YAML + +You can compare the compiled execution YAML of two executions. This comparison can help you see what changed between executions. + +In a Pipeline, click **Execution History**. + +![](./static/view-and-compare-pipeline-executions-18.png) +Select **Compare YAML**, select two executions, and click **Compare**. 
+ +![](./static/view-and-compare-pipeline-executions-19.png) +A diff of the Pipeline YAML for the two executions is displayed: + +![](./static/view-and-compare-pipeline-executions-20.png) +The diff can help you quickly see changes and troubleshoot a failed execution. + +### See also + +* [Pipelines and Stages How-tos](https://docs.harness.io/category/pipelines) + diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/_category_.json b/docs/platform/8_Pipelines/w_pipeline-steps-reference/_category_.json new file mode 100644 index 00000000000..9e886e90097 --- /dev/null +++ b/docs/platform/8_Pipelines/w_pipeline-steps-reference/_category_.json @@ -0,0 +1 @@ +{"label": "Pipeline and Steps Reference", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Pipeline and Steps Reference"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "lussbhnyjt"}} \ No newline at end of file diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/nexus-connector-settings-reference.md b/docs/platform/8_Pipelines/w_pipeline-steps-reference/nexus-connector-settings-reference.md new file mode 100644 index 00000000000..5c7863c4f75 --- /dev/null +++ b/docs/platform/8_Pipelines/w_pipeline-steps-reference/nexus-connector-settings-reference.md @@ -0,0 +1,106 @@ +--- +title: Nexus Connector Settings Reference +description: This topic provides settings and permissions for the Nexus Connector. In this topic -- Nexus Permissions Required. Artifact Type Support. Docker Support. Nexus Artifact Server Name. ID. Description. Ta… +# sidebar_position: 2 +helpdocs_topic_id: faor0dc98d +helpdocs_category_id: lussbhnyjt +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic provides settings and permissions for the Nexus Connector. 
+
+### Nexus Permissions Required + +Ensure the connected user account has the following permissions in the Nexus Server. + +* Repo: All repositories (Read) +* Nexus UI: Repository Browser + +![](./static/nexus-connector-settings-reference-05.png) +If used as a Docker Repo, the user needs: + +* List images and tags +* Pull images + +See [Nexus Managing Security](https://help.sonatype.com/display/NXRM2/Managing+Security). + +### Artifact Type Support + +Legend: + +* **M** - Metadata. This includes Docker image and registry information. For AMI, this means AMI ID-only. +* **Blank** - Not supported. + + + +| **Docker Image** (Kubernetes/Helm) | **AWS AMI** | **AWS CodeDeploy** | **AWS Lambda** | **JAR** | **RPM** | **TAR** | **WAR** | **ZIP** | **PCF** | **IIS** | +| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | +| M | | | | | | | | | | M | + +### Docker Support + +Nexus 3 Artifact Servers only. + +### Nexus Artifact Server + +The Harness Nexus Artifact server connects your Harness account to your Nexus artifact resources. It has the following settings. + +#### Name + +The unique name for this Connector. + +#### ID + +See [Entity Identifier Reference](../../20_References/entity-identifier-reference.md). + +#### Description + +Text string. + +#### Tags + +See [Tags Reference](../../20_References/tags-reference.md). + +#### Nexus Repository URL + +The URL that you use to connect to your Nexus server. For example, `https://nexus3.dev.mycompany.io`. + +#### Version + +The **Version** field in the dialog lists the supported Nexus version, 3.x. + +For Nexus 3.x, Harness supports only the Docker repository format as the artifact source. + +#### Credentials + +The username and password for the Nexus account. + +The password uses a [Harness Encrypted Text secret](../../6_Security/2-add-use-text-secrets.md). + +### Nexus Artifact Details + +#### Repository URL + +The URL you would use in the Docker login to fetch the artifact.
This is the same as the domain name and port you use for `docker login hostname:port`. + +#### Repository Port + +The port you use for `docker login hostname:port`. As a best practice, include the scheme and port. For example, `https://your-repo:443`. If you cannot locate the scheme, you may omit it, for example, `your-repo:18080`. + +For more information, see [Docker Repository Configuration and Client Connection](https://support.sonatype.com/hc/en-us/articles/115013153887-Docker-Repository-Configuration-and-Client-Connection) and [Using Nexus 3 as Your Repository – Part 3: Docker Images](https://blog.sonatype.com/using-nexus-3-as-your-repository-part-3-docker-images) from Sonatype. + +#### Repository + +Name of the repository where the artifact is located. + +#### Artifact Path + +The name of the artifact you want to deploy. For example, `nginx`, `private/nginx`, `public/org/nginx`. + +The repository and artifact path must not begin or end with `/`. + +![](./static/nexus-connector-settings-reference-06.png) + +#### Tag + +Select a Tag from the list.
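To see how these settings fit together, here is a small shell sketch that assembles the full image reference a Docker client would pull. The host, port, path, and tag are hypothetical values, not defaults:

```shell
# Hypothetical connector values -- shows how the fields combine into one image reference.
REPO_HOST="nexus3.dev.mycompany.io"   # Repository URL (host portion)
REPO_PORT="18080"                     # Repository Port
ARTIFACT_PATH="private/nginx"         # Artifact Path (no leading or trailing /)
TAG="latest"                          # Tag
IMAGE="${REPO_HOST}:${REPO_PORT}/${ARTIFACT_PATH}:${TAG}"
echo "$IMAGE"   # nexus3.dev.mycompany.io:18080/private/nginx:latest
```
A `docker pull` of this string fetches the tagged artifact from the Nexus Docker repository.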
+ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/nexus-connector-settings-reference-05.png b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/nexus-connector-settings-reference-05.png new file mode 100644 index 00000000000..88f3621dfb2 Binary files /dev/null and b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/nexus-connector-settings-reference-05.png differ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/nexus-connector-settings-reference-06.png b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/nexus-connector-settings-reference-06.png new file mode 100644 index 00000000000..6a79eebfeac Binary files /dev/null and b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/nexus-connector-settings-reference-06.png differ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/step-failure-strategy-settings-07.png b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/step-failure-strategy-settings-07.png new file mode 100644 index 00000000000..5a1a496ada1 Binary files /dev/null and b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/step-failure-strategy-settings-07.png differ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/step-failure-strategy-settings-08.png b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/step-failure-strategy-settings-08.png new file mode 100644 index 00000000000..5f66820c031 Binary files /dev/null and b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/step-failure-strategy-settings-08.png differ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/step-skip-condition-settings-09.png b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/step-skip-condition-settings-09.png new file mode 100644 index 00000000000..5a1a496ada1 Binary files /dev/null and b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/step-skip-condition-settings-09.png 
differ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/step-skip-condition-settings-10.png b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/step-skip-condition-settings-10.png new file mode 100644 index 00000000000..392940cc709 Binary files /dev/null and b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/step-skip-condition-settings-10.png differ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/triggers-reference-11.png b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/triggers-reference-11.png new file mode 100644 index 00000000000..e89b3def090 Binary files /dev/null and b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/triggers-reference-11.png differ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/triggers-reference-12.png b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/triggers-reference-12.png new file mode 100644 index 00000000000..299026d300a Binary files /dev/null and b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/triggers-reference-12.png differ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/triggers-reference-13.png b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/triggers-reference-13.png new file mode 100644 index 00000000000..acd28d1e4bc Binary files /dev/null and b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/triggers-reference-13.png differ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/triggers-reference-14.png b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/triggers-reference-14.png new file mode 100644 index 00000000000..acd28d1e4bc Binary files /dev/null and b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/triggers-reference-14.png differ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/triggers-reference-15.png 
b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/triggers-reference-15.png new file mode 100644 index 00000000000..5864cbe78fc Binary files /dev/null and b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/triggers-reference-15.png differ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/triggers-reference-16.png b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/triggers-reference-16.png new file mode 100644 index 00000000000..ef59498fce7 Binary files /dev/null and b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/triggers-reference-16.png differ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/triggers-reference-17.png b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/triggers-reference-17.png new file mode 100644 index 00000000000..e6e462570e1 Binary files /dev/null and b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/triggers-reference-17.png differ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/yaml-reference-cd-pipeline-00.png b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/yaml-reference-cd-pipeline-00.png new file mode 100644 index 00000000000..9c7c22a9ab2 Binary files /dev/null and b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/yaml-reference-cd-pipeline-00.png differ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/yaml-reference-cd-pipeline-01.png b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/yaml-reference-cd-pipeline-01.png new file mode 100644 index 00000000000..0dd9cb31e68 Binary files /dev/null and b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/yaml-reference-cd-pipeline-01.png differ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/yaml-reference-cd-pipeline-02.png b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/yaml-reference-cd-pipeline-02.png new file mode 100644 index 
00000000000..990302d3f29 Binary files /dev/null and b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/yaml-reference-cd-pipeline-02.png differ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/yaml-reference-cd-pipeline-03.png b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/yaml-reference-cd-pipeline-03.png new file mode 100644 index 00000000000..f67cf3bcdf0 Binary files /dev/null and b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/yaml-reference-cd-pipeline-03.png differ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/yaml-reference-cd-pipeline-04.png b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/yaml-reference-cd-pipeline-04.png new file mode 100644 index 00000000000..cd18c542a18 Binary files /dev/null and b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/yaml-reference-cd-pipeline-04.png differ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/yamlrefcdpipeline1.png b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/yamlrefcdpipeline1.png new file mode 100644 index 00000000000..fac65c17b6d Binary files /dev/null and b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/yamlrefcdpipeline1.png differ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/yamlrefcdpipeline2.png b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/yamlrefcdpipeline2.png new file mode 100644 index 00000000000..7460b3c78ea Binary files /dev/null and b/docs/platform/8_Pipelines/w_pipeline-steps-reference/static/yamlrefcdpipeline2.png differ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/step-failure-strategy-settings.md b/docs/platform/8_Pipelines/w_pipeline-steps-reference/step-failure-strategy-settings.md new file mode 100644 index 00000000000..f20df130469 --- /dev/null +++ b/docs/platform/8_Pipelines/w_pipeline-steps-reference/step-failure-strategy-settings.md @@ -0,0 +1,131 @@ +--- 
+title: Step and Stage Failure Strategy References +description: This topic describes the failure strategy settings for Pipeline steps and stages, including where failure strategies can be applied, the supported error types, and how step-level and stage-level strategies interact. +# sidebar_position: 2 +helpdocs_topic_id: htrur23poj +helpdocs_category_id: lussbhnyjt +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes the failure strategy settings for Pipeline steps and stages. + + +### Where Can I Add Failure Strategies? + +You can apply a failure strategy to the following: + +* **Step:** this failure strategy overrides (or enhances) the stage failure strategy. +* **Step Group:** you can set up a failure strategy for all steps in the group. Individual steps in the group will not have a failure strategy. +* **Stage:** the failure strategy for all steps and step groups in the stage. It is overridden by step and step group failure strategies, if present. + +See [Define a Failure Strategy on Stages and Steps](../define-a-failure-strategy-on-stages-and-steps.md). + +### Error Types + +The following types of errors can be selected in a failure strategy. + +Currently, only **All Errors** is supported. + +| | | +| --- | --- | +| **Error Type** | **Description** | +| Authentication | Credentials provided in a Connector are not valid. Typically, the Harness secret used for one of the credentials is incorrect. If Harness cannot determine whether an error is an Authentication or an Authorization error, it is treated as an Authentication error. | +| Authorization | The credentials are valid, but the user permissions needed to access the resource are not sufficient. If Harness cannot determine whether an error is an Authentication or an Authorization error, it is treated as an Authentication error. | +| Connectivity | A Harness Delegate cannot connect to a specific resource. For example, the Delegate cannot connect to a repo, a VM, or a Secrets Manager.
| +| Timeout | A Harness Delegate failed to complete a task within the timeout setting in the stage or step. For example, the Kubernetes workload you are deploying fails to reach steady state within the step timeout. | +| Delegate Provisioning | No available Delegate can accomplish the task, or the task is invalid. For example, an HTTP step attempts to connect to a URL but there is no available Delegate to perform the task. 
  • Mark as Success
  • Ignore Failure
  • Retry
  • Abort
  • Rollback Stage
Harness pauses the pipeline execution while waiting for manual intervention. The state of the Pipeline execution is displayed as **Paused**. | Same as step. | Same as step, but applies to all steps. | +| **Mark as Success** | The step is marked as **Successful** and the stage execution continues. | Same as step. | The step that failed is marked as **Successful** and the Pipeline execution continues. | +| **Ignore Failure** | The stage execution continues. The step is marked as **Failed**, but no rollback is triggered. | Same as step. | Same as step. | +| **Retry** | Harness will retry the execution of the failed step automatically. You can set **Retry Count** and **Retry Intervals**. | Same as step. | Same as step. | +| **Abort** | Pipeline execution is aborted. If you select this option, no Timeout is needed. | Same as step. | Same as step. | +| **Rollback Stage** | The stage is rolled back to the state prior to stage execution. How the stage rolls back depends on the type of build or deployment it was performing. | Same as step. | Same as step. | +| **Rollback Step Group** | N/A | The step group is rolled back to the state prior to step group execution. How the step group rolls back depends on the type of build or deployment it was performing. | N/A | + +### Review: Failure Strategy takes Precedence over Conditional Execution + +Harness Pipeline stages and steps both include **Conditional Execution** and **Failure Strategy** settings: + +![](./static/step-failure-strategy-settings-07.png) +Using these settings together in multiple stages requires some consideration. + +Let's say you have a Pipeline with two stages: **stage 1** followed by **stage 2**. + +Stage 2's **Conditional Execution** is set to **Execute this step only if prior stage or step failed**. Stage 1's **Failure Strategy** is set to **Rollback Stage on All Errors**. + +If stage 1 has any error, it is rolled back, and so it is not considered a failure. 
Hence, stage 2's **Conditional Execution** is not executed. + +In order to get stage 2 to execute, you can set the stage 1 **Failure Strategy** to **Ignore Failure**. Rollback will not occur, and stage 2's **Conditional Execution** is executed. + +In general, if you want to run particular steps on a stage failure, you should add them to the stage's **Rollback** section. + +### Review: Stage and Step Priority + +The stage failure strategy applies to all steps that do not have their own failure strategy. A step's failure strategy overrides (or extends) its stage's failure strategy. + +Step failure strategies are evaluated before their stage's failure strategy. The order of the steps determines which failure strategy is evaluated first. + +If the first step in the Execution does not have a failure strategy, the stage's failure strategy is used. If the second step has its own failure strategy, it is used. And so on. + +### Review: Multiple Failure Strategies in a Stage + +A stage can have multiple failure strategies. + +![](./static/step-failure-strategy-settings-08.png) +When using multiple failure strategies in a stage, consider the following: + +* Failure strategies that do not overlap (different types of failures selected) behave as expected. +* Two failures cannot occur at the same time. So, whichever error occurs first, that failure strategy is used. + +### Review: Failure Strategy Conflicts + +Conflicts might arise between failure strategies on the same level or different levels. By level, we mean the step level or the stage level. + +#### Same level + +If there is a conflict between multiple failure strategies on the same level, the first applicable strategy is used, and the remaining strategies are ignored. + +For example, consider these two strategies: + +1. Abort on Verification Failure or Authentication Failure. +2. Ignore on Verification Failure or Connectivity Error. 
+ +Here's what will happen: + +* On a verification failure, the stage is aborted. +* On an authentication failure, the stage is aborted. +* On a connectivity error, the error is ignored. + +#### Different levels + +If there is a clash between selected errors in strategies on different levels, the step-level strategy is used and the stage-level strategy is ignored. + +### Related Reference Material + +* [Stage and Step Execution Condition Settings](step-skip-condition-settings.md) + diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/step-skip-condition-settings.md b/docs/platform/8_Pipelines/w_pipeline-steps-reference/step-skip-condition-settings.md new file mode 100644 index 00000000000..c4eb4b0dbaf --- /dev/null +++ b/docs/platform/8_Pipelines/w_pipeline-steps-reference/step-skip-condition-settings.md @@ -0,0 +1,106 @@ +--- +title: Stage and Step Conditional Execution Settings +description: This topic describes Pipeline stage and step Conditional Execution settings, including stage and step conditions and how failure strategies take precedence over conditional execution. +# sidebar_position: 2 +helpdocs_topic_id: i36ibenkq2 +helpdocs_category_id: lussbhnyjt +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes Pipeline stage and step **Conditional Execution** settings. + +### Review: Failure Strategy takes Precedence over Conditional Execution + +Harness Pipeline stages and steps both include **Conditional Execution** and **Failure Strategy** settings: + +![](./static/step-skip-condition-settings-09.png) +Using these settings together in multiple stages requires some consideration. + +Let's say you have a Pipeline with two stages: **stage 1** followed by **stage 2**. + +Stage 2's **Conditional Execution** is set to **Execute this step only if prior stage or step failed**. Stage 1's **Failure Strategy** is set to **Rollback Stage on All Errors**. 
+ +If stage 1 has any error, it is rolled back, and so it is not considered a failure. Hence, stage 2's **Conditional Execution** is not executed. + +In order to get stage 2 to execute, you can set the stage 1 **Failure Strategy** to **Ignore Failure**. Rollback will not occur, and stage 2's **Conditional Execution** is executed. + +In general, if you want to run particular steps on a stage failure, you should add them to the stage's **Rollback** section. + +### Review: Stage and Step Priority + +The stage Conditional Execution applies to all steps that do not have their own Conditional Execution. A step's Conditional Execution overrides its stage's Conditional Execution. + +### Stage Conditions + +#### Execute This Stage if Pipeline Execution is Successful so Far + +Select this option if you only want this stage to run when all previous stages were successful. + +This is the default setting and is used most of the time. + +#### Always execute this stage + +Select this option if you always want this stage to run regardless of the success or failure of previous stages. + +#### Execute This Stage Only if Prior Pipeline or Stage Failed + +Select this option if you want this stage to run only if the prior Pipeline or stage failed. + +#### And execute this stage only if the following JEXL Condition evaluates to True + +This stage is executed only if a [JEXL expression](http://commons.apache.org/proper/commons-jexl/reference/examples.html) is met (evaluates to **true**). + +In the JEXL expression, you could use any of the Pipeline variables, including the output of any previous steps. + +Examples: + +* `<+pipeline.stages.cond.spec.execution.steps.echo.status> == "SUCCEEDED"` +* `<+environment.name> != "QA"` + +See [Built-in Harness Variables Reference](../../12_Variables-and-Expressions/harness-variables.md). 
+ +### Step Conditions + +#### Execute this step if the stage execution is successful thus far + +Select this option if you only want this step to run when all previous steps were successful. + +This is the default setting and is used most of the time. + +#### Always execute this step + +Select this option if you always want this step to run regardless of the success or failure of previous steps. + +#### Execute this step only if prior stage or step failed + +Select this option if you want this step to run only if the prior stage or step failed. + +#### And execute this step only if the following JEXL Condition evaluates to True + +This step is executed only if a [JEXL expression](http://commons.apache.org/proper/commons-jexl/reference/examples.html) is met (evaluates to **true**). + +In the JEXL expression, you could use any of the Pipeline variables, including the output of any previous steps. + +Examples: + +* `<+pipeline.stages.cond.spec.execution.steps.echo.status> == "SUCCEEDED"` +* `<+environment.name> != "QA"` + +For more information on variable expressions, go to [Built-in and Custom Harness Variables Reference](../../12_Variables-and-Expressions/harness-variables.md). + +### Variable Expressions in Conditional Execution Settings + +Stages and Steps support variable expressions in the JEXL conditions of their **Conditional Execution** settings. + +You can use only variable expressions that can be resolved before the stage runs. + +Since **Conditional Execution** settings are used to determine whether the stage should run, you cannot use variable expressions that can't be resolved until the stage is run. + +### Deployment Status + +Deployment status values are a Java enum. The list of values can be seen in the Deployments **Status** filter: + +![](./static/step-skip-condition-settings-10.png) +You can use any status value in a JEXL condition. For example, `<+pipeline.stages.cond.spec.execution.steps.echo.status> == "FAILED"`. 
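As a rough mental model of how such a JEXL condition is evaluated (expression resolution followed by comparison), here is a minimal Python sketch. This is not Harness's implementation; the resolved values in the `resolved` map and the supported operators are illustrative assumptions.

```python
import re

# Hypothetical resolved values, standing in for what Harness would substitute
# for <+...> expressions at runtime. These names and values are illustrative.
resolved = {
    "pipeline.stages.cond.spec.execution.steps.echo.status": "SUCCEEDED",
    "environment.name": "prod",
}

def resolve(expr: str) -> str:
    """Replace each <+path> token with its resolved value as a quoted literal."""
    return re.sub(r"<\+([^>]+)>", lambda m: '"%s"' % resolved[m.group(1)], expr)

def evaluate(condition: str) -> bool:
    """Evaluate a simple <left> == <right> or <left> != <right> condition."""
    left, op, right = re.split(r"\s*(==|!=)\s*", resolve(condition))
    return (left == right) if op == "==" else (left != right)

print(evaluate('<+pipeline.stages.cond.spec.execution.steps.echo.status> == "SUCCEEDED"'))  # True
print(evaluate('<+environment.name> != "QA"'))  # True
```

Real JEXL supports many more operators and method calls; the point of the sketch is only that the `<+...>` token is resolved first, and the comparison runs on the resolved value.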
+ diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/triggers-reference.md b/docs/platform/8_Pipelines/w_pipeline-steps-reference/triggers-reference.md new file mode 100644 index 00000000000..e6350f810f5 --- /dev/null +++ b/docs/platform/8_Pipelines/w_pipeline-steps-reference/triggers-reference.md @@ -0,0 +1,399 @@ +--- +title: Webhook Triggers Reference +description: This topic provides settings information for Webhook Triggers, which are used to initiate the execution of Pipelines. +# sidebar_position: 2 +helpdocs_topic_id: rset0jry8q +helpdocs_category_id: lussbhnyjt +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic provides settings information for Triggers. Triggers are used to initiate the execution of Pipelines. + +For steps on setting up different types of Triggers, see [Triggers Howtos](https://docs.harness.io/category/triggers). + +### Name + +The unique name for the Trigger. + +### ID + +See [Entity Identifier Reference](../../20_References/entity-identifier-reference.md). + +### Description + +Text string. + +### Tags + +See [Tags Reference](../../20_References/tags-reference.md). + +### Payload Type + +Git providers, such as GitHub, Bitbucket, and GitLab. + +### Custom Payload Type + +To use a custom payload type, copy the secure token and add it to your custom Git provider. + +Whenever you regenerate a secure token, any preceding tokens become invalid. Update your Git provider with the new token. + +### Connector + +Select the Code Repo Connector that connects to your Git provider account. + +See [Code Repo Connectors Tech Ref](https://docs.harness.io/category/code-repo-connectors). + +### Repository Name + +Enter the name of the repo in the account. + +### Event and Actions + +Select the Git events and actions that will initiate the Trigger. 
+ + + +| | | | +| --- | --- | --- | +| **Payload Type** | **Event** | **Action** | +| **GitHub** | Pull Request | Closed | +|   |   | Edited | +|   |   | Labeled | +|   |   | Opened | +|   |   | Reopened | +|   |   | Synchronized | +|   |   | Unassigned | +|   |   | Unlabeled | +|   | Push | n/a | +|   | Issue Comment (only comments on a pull request are supported) | Created | +|   |   | Deleted | +|   |   | Edited | +| **GitLab** | Push | N/A | +|   | Merge Request | Sync | +| | | Open | +| | | Close | +| | | Reopen | +| | | Merge | +| | | Update | +| **Bitbucket** | On Pull Request | Pull Request Created | +|   |   | Pull Request Merged | +|   |   | Pull Request Declined | +|   | Push | | + +Harness uses your Harness account Id to map incoming events. Harness takes the incoming event and compares it to ALL triggers in the account. + +You can see the event Id that Harness mapped to a Trigger in the Webhook's event response body `data`: + + +``` +{"status":"SUCCESS","data":"60da52882dc492490c30649e","metaData":null,... +``` +Harness maps the success status, execution Id, and other information to this event Id. + +### Conditions + +Optional conditions to specify in addition to events and actions. These help to form the overall set of criteria to trigger a Pipeline based on changes in a given source. + +For example: + +* Trigger when a specific value is passed in the source payload. +* Trigger when there's a change in a specific file or a pull request. +* Trigger based on a specific artifact tag convention. + +#### Conditions are ANDs + +You can think of each Trigger as a complex filter in which all Conditions are `AND`-ed together. To execute a Trigger, the event payload must match all Conditions in the Trigger. + +![](./static/triggers-reference-11.png) +In this example, an event must match all conditions under **Source Branch**, **Target Branch**, **Header Conditions**, **Payload Conditions**, and **JEXL Conditions** for the Trigger to execute. 
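The AND-ing of conditions can be sketched as follows. This is an illustrative model, not Harness code; the `event` fields, the `(attribute, operator, value)` shape, and the operator spellings are assumptions for the sketch.

```python
# A hypothetical event, standing in for the fields Harness extracts from a
# webhook payload. Field names and values are illustrative.
event = {
    "sourceBranch": "new-feature",
    "targetBranch": "main",
}

# Conditions modeled as (attribute, operator, value) triples.
conditions = [
    ("sourceBranch", "starts with", "new-"),
    ("targetBranch", "equals", "main"),
]

def matches(actual: str, op: str, expected: str) -> bool:
    if op == "equals":
        return actual == expected
    if op == "starts with":
        return actual.startswith(expected)
    if op == "in":  # comma-separated list of allowed values
        return actual in [v.strip() for v in expected.split(",")]
    raise ValueError(f"unsupported operator: {op}")

# All conditions are AND-ed: every one must match for the Trigger to fire.
fire = all(matches(event[attr], op, val) for attr, op, val in conditions)
print(fire)  # True
```

If any single condition fails, `all()` short-circuits to `False` and the Trigger does not fire, which is the behavior the AND-ing rule describes.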
+ +To use `OR`, `NOT`, or other operators across the payload, use a **JEXL Condition** and leave the rest empty. + +The JEXL `in` operator is not supported in **JEXL Condition**. + +#### Source and Target Branch + +The source and target branches of the Git merge that must be matched. + +These are available depending on the type of event selected. Any event that belongs to a merge will have Source Branch and Target Branch conditions. + +For example: + +* Source Branch starts with `new-` +* Target Branch equals `main` + +![](./static/triggers-reference-12.png) +#### Built-in Git Trigger and Payload Expressions + +Harness includes built-in expressions for referencing trigger details such as a PR number. + +##### Main Expressions + +* `<+trigger.type>` + + Webhook. +* `<+trigger.sourceRepo>` + + Github, Gitlab, Bitbucket, Custom +* `<+trigger.event>` + + PR, PUSH, etc. + +##### PR and Issue Comment Expressions + +* `<+trigger.targetBranch>` +* `<+trigger.sourceBranch>` +* `<+trigger.prNumber>` +* `<+trigger.prTitle>` +* `<+trigger.gitUser>` +* `<+trigger.repoUrl>` +* `<+trigger.commitSha>` +* `<+trigger.baseCommitSha>` +* `<+trigger.event>` + + PR, PUSH, etc. + +##### Push Expressions + +* `<+trigger.targetBranch>` +* `<+trigger.gitUser>` +* `<+trigger.repoUrl>` +* `<+trigger.commitSha>` +* `<+trigger.event>` + + PR, PUSH, etc. + +#### Header Conditions + +Valid JSON cannot contain a dash (-), but headers are not JSON strings and often contain dashes. For example, X-GitHub-Event and content-type: + + +``` +Request URL: https://app.harness.io: +Request method: POST +Accept: */* +content-type: application/json +User-Agent: GitHub-Hookshot/0601016 +X-GitHub-Delivery: be493900-000-11eb-000-000 +X-GitHub-Event: create +X-GitHub-Hook-ID: 281868907 +X-GitHub-Hook-Installation-Target-ID: 250384642 +X-GitHub-Hook-Installation-Target-Type: repository +``` +The header expression format is `<+trigger.header['key-name']>`. For example, `<+trigger.header['X-GitHub-Event']>`. 
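Header-condition matching (a case-insensitive key comparison against a comma-separated **Matches Value** that may contain wildcards) can be sketched as follows. This is a hypothetical illustration of the described semantics, not Harness's implementation; the `headers` dict contents are made up.

```python
import fnmatch

# Hypothetical incoming request headers; the values are illustrative.
headers = {
    "content-type": "application/json",
    "X-GitHub-Event": "create",
}

def header_matches(key: str, matches_value: str) -> bool:
    # The header key comparison is case insensitive.
    actual = next((v for k, v in headers.items() if k.lower() == key.lower()), None)
    if actual is None:
        return False
    # Matches Value may hold several comma-separated entries, with wildcards.
    return any(fnmatch.fnmatch(actual, p.strip()) for p in matches_value.split(","))

print(header_matches("x-github-event", "create, delete"))  # True
print(header_matches("Content-Type", "application/*"))     # True
```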
+ +![](./static/triggers-reference-13.png) +If the header key doesn't contain a dash (`-`), then the format `<+trigger.header.['key name']>` also works. + +When Harness evaluates the header key you enter, the comparison is case insensitive. + +In **Matches Value**, you can enter multiple values separated by commas and use wildcards. + +#### Payload Conditions + +Conditions based on the values of the JSON payload. Harness treats the JSON payload as a data model, parses the payload, and listens for events on a JSON payload key. + +To reference payload values, you use `<+eventPayload.` followed by the path to the key name. + +For example, a payload will have a repository owner: + + +``` +... +"repository" : { +  "id": 1296269, +  "full_name": "octocat/Hello-World", +  "owner": { +    "login": "octocat", +    "id": 1, +    ... +  }, +... +``` +To reference the repository owner, you would use `<+eventPayload.repository.owner>`. Here's an example using `name`: + +![](./static/triggers-reference-14.png) +Next, you enter an operator and the value to match. For example: + +![](./static/triggers-reference-15.png) +#### Referencing Payload Fields + +You can reference any payload field using the expression `<+trigger.payload.pathInJson>`, where `pathInJson` is the path to the field in the JSON payload. + +For example: `<+trigger.payload.pull_request.user.login>` + +How you reference the path depends on a few things: + +* There are different payloads for different events. +* Different Git providers send JSON payloads formatted differently, even for the same event. For example, a GitHub push payload might be formatted differently than a Bitbucket push payload. Always make sure the path you use works with the provider's payload format. + +#### JEXL Expressions + +You can refer to payload data and headers using [JEXL expressions](https://commons.apache.org/proper/commons-jexl/reference/syntax.html). 
That includes all constants, methods, and operators in [JexlOperator](https://commons.apache.org/proper/commons-jexl/apidocs/org/apache/commons/jexl3/JexlOperator.html). + +Be careful when you combine Harness variables and JEXL expressions. + +* **Invalid expression:** `<+pipeline.variables.MAGIC.toLowerCase()>` +This expression is ambiguous. It could be evaluated as a Harness variable (return the value of variable `pipeline.variables.MAGIC.toLowerCase()`) or as a JEXL operation (return the lowercase of literal string `pipeline.variables.MAGIC`). +* **Valid expression:** `<+<+pipeline.variables.MAGIC>.toLowerCase()>` First it gets the value of variable `pipeline.variables.MAGIC`. Then it returns the value converted to all lowercase. + +Here are some examples: + +* `<+trigger.payload.pull_request.diff_url>.contains("triggerNgDemo")` +* `<+trigger.payload.pull_request.diff_url>.contains("triggerNgDemo") || <+trigger.payload.repository.owner.name> == "wings-software"` +* `<+trigger.payload.pull_request.diff_url>.contains("triggerNgDemo") && (<+trigger.payload.repository.owner.name> == "wings-software" || <+trigger.payload.repository.owner.name> == "harness")` + +#### Operators + +Some operators work on single values and some work on multiple values: + +**Single values:** `equals`, `not equals`, `starts with`, `ends with`, `regex`. + +**Multiple values:** `in`, `not in`. + +The **IN** and **NOT IN** operators don't support Regex. + +### Pipeline Input + +Runtime Inputs for the Trigger to use, such as Harness Service and artifact. + +You can use the [Built-in Git Payload Expressions](#built_in_git_trigger_and_payload_expressions) and JEXL expressions in this setting. + +See [Run Pipelines using Input Sets and Overlays](../run-pipelines-using-input-sets-and-overlays.md). + +### Webhook + +For all Git providers supported by Harness, the Webhook is created in the repo automatically. You don't need to copy it and add it to your repo webhooks. 
+ +#### Git Events Automatically Registered with Webhooks + +The following Git events are automatically added to the webhooks Harness registers. + +##### GitHub + +[GitHub events](https://docs.github.com/en/developers/webhooks-and-events/webhooks/webhook-events-and-payloads): + +* `create` +* `push` +* `delete` +* `deployment` +* `pull_request` +* `pull_request_review` + +##### GitLab + +[GitLab events](https://docs.gitlab.com/ee/user/project/integrations/webhooks.html): + +* Comment events +* Issue events +* Merge request events +* Push events + +##### Bitbucket Cloud + +[Bitbucket Cloud events](https://support.atlassian.com/bitbucket-cloud/docs/event-payloads/): + +* `issue` +* `pull request` + +##### Bitbucket Server + +[Bitbucket Server events](https://confluence.atlassian.com/bitbucketserver/event-payload-938025882.html): + +* Pull requests +* Branch push tag + +#### Manually Applying the Webhook + +Although Webhooks are applied automatically by Harness, here's a quick summary of the manual process. This is provided in case the automatic Webhook registration doesn't work. + +You obtain the Webhook to use in your repo by clicking the **Webhook** icon. + +![](./static/triggers-reference-16.png) +Log into your repo in the Git provider and navigate to its Webhook settings. For example, here's the **Webhooks** section of GitHub. + +![](./static/triggers-reference-17.png) +Add a Webhook. + +In the Webhook settings, paste the Webhook URL you copied from Harness into the payload URL setting in the repo. + +Make sure that you select JSON for the content type. For example, in GitHub, you select **application/json** in **Content type**. + +#### Custom Webhook + +You enter the webhook in your custom Git provider. 
+
+Make sure that the payload content type is JSON (application/json). The format for the custom Webhook is:
+
+
+```
+https://app.harness.io/pipeline/api/webhook/custom?accountIdentifier=123456789&orgIdentifier=default&projectIdentifier=myProject&pipelineIdentifier=newpipelinetest&triggerIdentifier=myTrigger
+```
+The `pipelineIdentifier` and `triggerIdentifier` target the Webhook at the specific Pipeline and Trigger.
+
+In some cases, you will not want to target the Webhook at a specific Pipeline and Trigger. For example, there are events in GitHub that are not covered by Harness, and you might want to set up a Trigger for those events that applies to all Pipelines and their Triggers in the Project.
+
+To instruct Harness to evaluate the custom Trigger against all Pipelines (until it finds a **Conditions** match), remove `pipelineIdentifier` and `triggerIdentifier` from the URL before adding it to your repo.
+
+The `orgIdentifier` and `projectIdentifier` are mandatory.
+
+### Last Execution Details
+
+In a Trigger's details, you can see when the Trigger was executed:
+
+Activation means the Trigger could request Pipeline execution. It doesn't mean that the Webhook didn't work.
+
+If you see **Failed**, the Pipeline probably has a configuration issue that prevented the Trigger from initiating an Execution.
+
+### YAML Example
+
+You can also edit your Trigger in YAML. Click the Trigger, and then click YAML.
+
+
+```
+trigger:
+  name: GitlabNewTrigger
+  identifier: GitlabNewTrigger
+  enabled: true
+  description: ""
+  tags: {}
+  orgIdentifier: default
+  projectIdentifier: NewProject
+  pipelineIdentifier: testpp
+  source:
+    type: Webhook
+    spec:
+      type: Gitlab
+      spec:
+        type: MergeRequest
+        spec:
+          connectorRef: gitlab
+          autoAbortPreviousExecutions: true
+          payloadConditions:
+            - key: <+trigger.payload.user.username>
+              operator: In
+              value: john, doe.john
+          headerConditions:
+            - key: <+trigger.header['X-Gitlab-Event']>
+              operator: Equals
+              value: Merge Request Hook
+          jexlCondition: (<+trigger.payload.user.username> == "doe" || <+trigger.payload.user.username> == "doe.john") && <+trigger.header['X-Gitlab-Event']> == "Merge Request Hook"
+          actions: []
+  inputYaml: |
+    pipeline:
+      identifier: testpp
+      properties:
+        ci:
+          codebase:
+            build:
+              type: branch
+              spec:
+                branch: <+trigger.branch>
+      variables:
+        - name: testVar
+          type: String
+          value: alpine
+```
+### Notes
+
+* For details on each provider's events, see:
+	+ [Events that trigger workflows](https://docs.github.com/en/actions/reference/events-that-trigger-workflows) from GitHub.
+	+ [Events](https://docs.gitlab.com/ee/api/events.html) from GitLab.
+	+ [Repository events](https://support.atlassian.com/bitbucket-cloud/docs/event-payloads/#Repository-events) from Bitbucket.
+
diff --git a/docs/platform/8_Pipelines/w_pipeline-steps-reference/yaml-reference-cd-pipeline.md b/docs/platform/8_Pipelines/w_pipeline-steps-reference/yaml-reference-cd-pipeline.md
new file mode 100644
index 00000000000..947ab68b3f7
--- /dev/null
+++ b/docs/platform/8_Pipelines/w_pipeline-steps-reference/yaml-reference-cd-pipeline.md
@@ -0,0 +1,919 @@
+---
+title: YAML Reference -- Pipelines
+description: This topic describes the YAML schema for a CD Pipeline.
+# sidebar_position: 2 +helpdocs_topic_id: xs2dfgq7s2 +helpdocs_category_id: lussbhnyjt +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes the YAML schema for a Pipeline. + +It includes common examples of the major schema entries. + +New to Harness YAML? See [Harness YAML Quickstart](../harness-yaml-quickstart.md). + +### Visual Summary + +Here's a very quick video showing you how to build a Pipeline using YAML: + +### Viewing the YAML Schema + +The Harness YAML schema is over 20k lines long and we are investigating how to expose it in a way that makes it easy to navigate. + +In the meantime, you can use [Chrome DevTools](https://developer.chrome.com/docs/devtools/) to view the schema: + +![](./static/yaml-reference-cd-pipeline-00.png) +### Pipeline Studio YAML Editor + +The Pipeline Studio includes visual and YAML editors. + +The best way to get started with YAML is to do a CI or CD quickstart and then view the YAML in Pipeline Studio. + +See ​[CD Quickstarts](https://docs.harness.io/category/c9j6jejsws) and [CI Quickstarts](https://docs.harness.io/category/onboard-with-ci). + +The YAML editor validates YAML before allowing you to save it. + +To learn how to use the YAML editor, see [Harness YAML Quickstart for CD](../harness-yaml-quickstart.md). + +### Autocomplete and Command Palette + +The YAML editor has an autocomplete feature that makes it very easy to see what entries are available. + +The keyboard command for autocomplete is `Ctrl + Space`. + +![](./static/yaml-reference-cd-pipeline-01.png) +If an entry already has a value, the autocomplete will not show you other options. You need to delete the value and then enter `Ctrl + Space`. + +The command palette keyboard command is `F1`. + +![](./static/yaml-reference-cd-pipeline-02.png) +The command palette displays the keyboard shortcuts for all commands. + +### Limitations + +The visual editor lets you create Connectors inline but the YAML editor does not. 
For the YAML editor, you need the entity Id of the Connector first. Typically, the Id is entered as the value for the `connectorRef` key.
+
+This is a minor limitation: once you have entered the Id, you can configure the rest of the settings in YAML using autocomplete.
+
+For example, here is a Connector with the name `GCP Example` and Id `GCP_Example`. You can see the Id used in the YAML as `connectorRef: GCP_Example`:
+
+
+| **Connector** | **YAML** |
+| --------------------------------------------- | ------------------------------------ |
+| ![](./static/yamlrefcdpipeline1.png) | ![](./static/yamlrefcdpipeline2.png) |
+
+```
+...
+type: Gcr
+spec:
+  connectorRef: GCP_Example
+  imagePath: library/bar
+  registryHostname: gcr.io
+  tag: <+input>
+identifier: foo
+...
+```
+
+Once you have entered the Id in `connectorRef`, you can use autocomplete to view the remaining settings:
+
+![](./static/yaml-reference-cd-pipeline-03.png)
+### Schema Overview
+
+Harness Pipeline YAML lets you model your release process declaratively. Each Pipeline entity, component, and setting has a YAML entry.
+
+#### Entries
+
+Entries are standard YAML associative arrays using `key: value`.
+
+Settings are not quoted.
+
+#### Indentation
+
+Whitespace indentation is 4 spaces.
+
+### Conventions
+
+Pipeline syntax uses the following conventions:
+
+* Quotes indicate an empty string. Values do not need quotes. Once you enter a value and save, the quotes are removed.
+* Entries are listed as `key: keyword`. The key is a data type that corresponds to a setting such as `skipResourceVersioning`. The keyword is a literal definition for the setting, like `false` or `K8sManifest`.
+* Brackets indicate an inline series branch (an array of the data type). For example `variables: []`. To edit, you delete the brackets, enter a new line, and enter a dash `-`. Now you can use autocomplete to see the available entries.
+* Curly braces indicate a map of `key: value` pairs. For example, `tags: {}`. To enter the entries for this type, delete the curly braces, enter a new line, and then enter the `key: value` pairs. For example:
+```
+...
+tags:
+  docs: "CD"
+  yaml example: ""
+...
+```
+* The block style indicator `|` preserves each newline within the string and adds a single newline at the end. Appending `-` (as in `|-`) strips the trailing newline. For example:
+```
+...
+script: |-
+  echo "hello"
+
+  echo <+pipeline.name>
+...
+```
+
+### Pipeline
+
+A Pipeline can have an unlimited number of configurations. This section simply covers the basic YAML schema.
+
+#### Schema
+
+This is the basic schema for a Pipeline:
+
+
+```
+pipeline:
+  name: ""
+  identifier: ""
+  projectIdentifier: ""
+  orgIdentifier: ""
+  tags: {}
+  stages:
+    - stage:
+  flowControl:
+  notificationRules:
+```
+The `projectIdentifier` and `orgIdentifier` must match existing Project and Org Ids.
+
+From here you can add stages, Service, Infrastructure, and Execution. Harness will not allow you to save your Pipeline until these three components are set up.
+ +#### Example + +Here is a very basic Pipeline that meets the minimum requirements and uses a Shell Script step: + + +``` +pipeline: + name: YAML + identifier: YAML + projectIdentifier: CD_Examples + orgIdentifier: default + tags: {} + stages: + - stage: + name: Deploy + identifier: Deploy + description: "" + type: Deployment + spec: + serviceConfig: + serviceRef: nginx + serviceDefinition: + type: Kubernetes + spec: + variables: [] + infrastructure: + environmentRef: helmchart + infrastructureDefinition: + type: KubernetesDirect + spec: + connectorRef: <+input> + namespace: <+input> + releaseName: release-<+INFRA_KEY> + allowSimultaneousDeployments: false + execution: + steps: + - step: + type: ShellScript + name: Echo + identifier: Echo + spec: + shell: Bash + onDelegate: true + source: + type: Inline + spec: + script: echo "hello" + environmentVariables: [] + outputVariables: [] + executionTarget: {} + timeout: 10m + rollbackSteps: [] + tags: {} + failureStrategies: + - onFailure: + errors: + - AllErrors + action: + type: StageRollback +``` +See also: + +* [Using Shell Scripts in CD Stages](https://docs.harness.io/article/k5lu0u6i1i-using-shell-scripts) + +### Stage + +A Stage is a subset of a Pipeline that contains the logic to perform one major segment of the Pipeline process. Stages are based on the different milestones of your Pipeline, such as building, approving, and delivering. 
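+
+Stage `type` values correspond to these milestones. For example, alongside the `Deployment` stages used throughout this topic, an approval milestone can be modeled as a stage of type `Approval` (a sketch; the name, identifier, and empty `steps` list are placeholders):
+
+
+```
+- stage:
+    name: Approve Release
+    identifier: Approve_Release
+    type: Approval
+    spec:
+      execution:
+        steps: []
+```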
+ +#### Schema + + +``` +stages: + - stage: + name: "" + identifier: "" + description: "" + type: Deployment + spec: + serviceConfig: + serviceRef: "" + serviceDefinition: + type: "" + spec: + variables: [] + infrastructure: + environmentRef: "" + infrastructureDefinition: + type: "" + spec: + connectorRef: "" + namespace: "" + releaseName: release-"" + allowSimultaneousDeployments: "" + execution: + steps: + - step: + type: "" + name: "" + identifier: "" + spec: + rollbackSteps: [] + tags: {} + failureStrategies: + - onFailure: + errors: + - AllErrors + action: + type: StageRollback +``` +Rollback steps, failure strategies, and execution conditions are set at the end of the stage YAML. + + +``` +... + rollbackSteps: [] + tags: {} + failureStrategies: + - onFailure: + errors: + - AllErrors + action: + type: StageRollback +... +``` +#### Example + + +``` +stages: + - stage: + name: Deploy + identifier: Deploy + description: "" + type: Deployment + spec: + serviceConfig: + serviceRef: nginx + serviceDefinition: + type: Kubernetes + spec: + variables: [] + infrastructure: + environmentRef: helmchart + infrastructureDefinition: + type: KubernetesDirect + spec: + connectorRef: <+input> + namespace: <+input> + releaseName: release-<+INFRA_KEY> + allowSimultaneousDeployments: false + execution: + steps: + - step: + type: ShellScript + name: Echo + identifier: Echo + spec: + shell: Bash + onDelegate: true + source: + type: Inline + spec: + script: echo "hello" + environmentVariables: [] + outputVariables: [] + executionTarget: {} + timeout: 10m + rollbackSteps: [] + tags: {} + failureStrategies: + - onFailure: + errors: + - AllErrors + action: + type: StageRollback +``` +See also: + +* [Add a Stage](../add-a-stage.md) +* [Define a Failure Strategy on Stages and Steps](../define-a-failure-strategy-on-stages-and-steps.md) +* [Set Execution Conditions on Stages and Steps](https://docs.harness.io/article/f5y37ke7ko-set-execution-conditions-on-stages-and-steps) + +### Service + 
+A Service represents your microservices and other workloads logically.
+
+A Service is a logical entity to be deployed, monitored, or changed independently.
+
+When a Service is added to the stage in a Pipeline, you define its Service Definition. Service Definitions represent the real artifacts, manifests, and variables of a Service. They are the actual files and variable values.
+
+#### Schema
+
+
+```
+spec:
+  serviceConfig:
+    serviceRef: ""
+    serviceDefinition:
+      type: ""
+      spec:
+        variables: []
+```
+If you propagate a Service from a previous stage, the YAML indicates it this way:
+
+
+```
+spec:
+  serviceConfig:
+    useFromStage:
+      stage: ""
+```
+#### Example
+
+
+```
+spec:
+  serviceConfig:
+    serviceRef: nginx
+    serviceDefinition:
+      type: Kubernetes
+      spec:
+        variables: []
+```
+See also:
+
+* [Propagate and Override CD Services](https://docs.harness.io/article/t57uzu1i41-propagate-and-override-cd-services)
+
+### Infrastructure
+
+Infrastructure is defined under Environments. Environments represent your deployment targets logically (QA, Prod, etc). You can add the same Environment to as many Stages as you need.
+
+#### Schema
+
+
+```
+infrastructure:
+  environmentRef: ""
+  infrastructureDefinition:
+    type: ""
+    spec:
+      connectorRef: ""
+      namespace: ""
+      releaseName: release-""
+      allowSimultaneousDeployments: true|false
+```
+#### Example
+
+
+```
+infrastructure:
+  environmentRef: <+input>
+  infrastructureDefinition:
+    type: KubernetesDirect
+    spec:
+      connectorRef: <+input>
+      namespace: <+input>
+      releaseName: release-<+INFRA_KEY>
+      allowSimultaneousDeployments: false
+```
+That example is for the platform-agnostic Kubernetes infrastructure. 
For a different infrastructure, such as GCP, it would look slightly different: + + +``` + infrastructure: + environmentRef: <+input> + infrastructureDefinition: + type: KubernetesGcp + spec: + connectorRef: <+input> + cluster: <+input> + namespace: <+input> + releaseName: release-<+INFRA_KEY> + allowSimultaneousDeployments: false +``` +See also: + +* [Define Your Kubernetes Target Infrastructure](https://docs.harness.io/article/0ud2ut4vt2-define-your-kubernetes-target-infrastructure) +* [Define Kubernetes Cluster Build Infrastructure](https://docs.harness.io/article/x7aedul8qs-kubernetes-cluster-build-infrastructure-setup) + +### Execution + +The stage Execution contains the steps for the stage. + +#### Schema + + +``` +execution: + steps: + - step: + identifier: "" + name: "" + type: "" + rollbackSteps: [] +``` +#### Example + +Here is an example using the Shell Script step. + + +``` +execution: + steps: + - step: + type: ShellScript + name: Step + identifier: Step + spec: + shell: Bash + onDelegate: true + source: + type: Inline + spec: + script: echo "hello" + environmentVariables: [] + outputVariables: [] + executionTarget: {} + timeout: 10m + rollbackSteps: [] +``` +See also: + +* [Viewing Execution Status](https://docs.harness.io/article/aiuwxmwfe9-viewing-execution-status) + +### Steps and Step Groups + +A step is an individual operation in a stage. + +Steps can be run in sequential and parallel order. + +A Step Group is a collection of steps that share the same logic such as the same rollback strategy. + +#### Schema + +Step: + + +``` +- step: + identifier: + name: + type: +``` +Step Group: + + +``` +- stepGroup: + name: + identifier: + steps: + - step: + identifier: + name: + type: +``` +#### Example + +Each step has different entries. + +Here is an example of a Canary Deployment step. 
+
+
+```
+- step:
+    name: Canary Deployment
+    identifier: canaryDeployment
+    type: K8sCanaryDeploy
+    timeout: 10m
+    spec:
+      instanceSelection:
+        type: Count
+        spec:
+          count: 1
+      skipDryRun: false
+```
+See also:
+
+* [CD How-tos](https://docs.harness.io/category/21a052rbi0)
+
+### Fixed Value, Runtime Input, and Expression
+
+You can use Fixed Value, Runtime Input, and Expressions for most settings.
+
+* **Fixed value:** Enter the value directly as the value of the `key: value` entry.
+* **Runtime input:** Enter `<+input>` as the value of the `key: value` entry.
+* **Expression:** Enter `<+` as the value of the `key: value` entry, and a list of available expressions appears. Select the expression to use.
+
+![](./static/yaml-reference-cd-pipeline-04.png)
+See also:
+
+* [Fixed Values, Runtime Inputs, and Expressions](../../20_References/runtime-inputs.md)
+
+### Flow Control
+
+Barriers allow you to synchronize different stages in your Pipeline, and control the flow of your deployment systematically.
+
+The Flow Control YAML is at the end of the Pipeline YAML, but it is not mandatory.
+
+#### Schema
+
+
+```
+pipeline:
+  name: ""
+  identifier: ""
+  projectIdentifier: ""
+  orgIdentifier: ""
+  tags: {}
+  stages:
+...
+  flowControl:
+    barriers:
+      - name: ""
+        identifier: ""
+```
+#### Example
+
+
+```
+flowControl:
+  barriers:
+    - name: mybarrier
+      identifier: mybarrier
+```
+See also:
+
+* [Synchronize Deployments using Barriers](https://docs.harness.io/article/dmlf8w2aeh-synchronize-deployments-using-barriers)
+
+### Notification Rules
+
+You can send Pipeline event notifications using email and popular communication and incident management platforms.
+
+Event notifications are set up using Notification Rules in your Pipeline. You select the types of events to send, and then select how you want to send notifications. When those events occur, Harness sends event information to those channels and recipients. 
+
+The Notification Rules YAML is at the end of the Pipeline YAML, but it is not mandatory.
+
+#### Schema
+
+
+```
+pipeline:
+  name: ""
+  identifier: ""
+  projectIdentifier: ""
+  orgIdentifier: ""
+  tags: {}
+  stages:
+...
+  notificationRules:
+    - name: ""
+      pipelineEvents:
+        - type: ""
+      notificationMethod:
+        type: ""
+        spec:
+          userGroups: []
+          recipients:
+            - ""
+      enabled: true|false
+```
+#### Example
+
+
+```
+notificationRules:
+  - name: mynotification
+    pipelineEvents:
+      - type: AllEvents
+    notificationMethod:
+      type: Email
+      spec:
+        userGroups: []
+        recipients:
+          - john.doe@harness.io
+    enabled: true
+```
+See also:
+
+* [Add a Pipeline Notification Strategy](https://docs.harness.io/article/4bor7kyimj-notify-users-of-pipeline-events)
+
+### Triggers
+
+Triggers automate the execution of Pipelines based on events, such as a new artifact or manifest, a schedule, or an external webhook.
+
+Trigger YAML is not part of the main Pipeline YAML, but Triggers can be configured using YAML in the **Triggers** tab of the Pipeline. 
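+
+For example, a minimal schedule-based Trigger looks like this (a sketch; the identifiers are placeholders, and the cron expression fires every day at 04:00):
+
+
+```
+trigger:
+  name: nightly
+  identifier: nightly
+  enabled: true
+  tags: {}
+  orgIdentifier: default
+  projectIdentifier: CD_Examples
+  pipelineIdentifier: YAML
+  source:
+    type: Scheduled
+    spec:
+      type: Cron
+      spec:
+        expression: 0 4 * * *
+```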
+ +#### Schema + +Webhook: + + +``` +trigger: + name: "" + identifier: "" + enabled: true|false + description: "" + tags: {} + orgIdentifier: "" + projectIdentifier: "" + pipelineIdentifier: "" + source: + type: Webhook + spec: + type: Github + spec: + type: "" + spec: + connectorRef: "" + autoAbortPreviousExecutions: true|false + payloadConditions: + - key: "" + operator: "" + value: "" + - key: "" + operator: "" + value: "" + - key: "" + operator: "" + value: "" + headerConditions: [] + actions: [] +``` +Schedule (Cron): + + +``` +trigger: + name: "" + identifier: "" + enabled: true|false + tags: {} + orgIdentifier: "" + projectIdentifier: "" + pipelineIdentifier: "" + source: + type: Scheduled + spec: + type: Cron + spec: + expression: "" +``` +Custom: + + +``` +trigger: + name: "" + identifier: "" + enabled: true|false + description: "" + tags: {} + orgIdentifier: "" + projectIdentifier: "" + pipelineIdentifier: "" + source: + type: Webhook + spec: + type: Custom + spec: + payloadConditions: [] + headerConditions: [] +``` +#### Example + +Here's a Webhook Trigger for a Pipeline with Runtime Input settings. Runtime Input settings in a Pipeline result in the `inputYaml` section of the Trigger. 
+
+
+```
+trigger:
+  name: mytrigger
+  identifier: mytrigger
+  enabled: true
+  description: ""
+  tags: {}
+  orgIdentifier: default
+  projectIdentifier: CD_Examples
+  pipelineIdentifier: YAML
+  source:
+    type: Webhook
+    spec:
+      type: Github
+      spec:
+        type: PullRequest
+        spec:
+          connectorRef: quickstart
+          autoAbortPreviousExecutions: false
+          payloadConditions:
+            - key: changedFiles
+              operator: Equals
+              value: filename
+            - key: sourceBranch
+              operator: Equals
+              value: foo
+            - key: targetBranch
+              operator: Equals
+              value: bar
+          headerConditions: []
+          actions: []
+  inputYaml: |
+    pipeline:
+      identifier: YAML
+      stages:
+        - stage:
+            identifier: Deploy
+            type: Deployment
+            spec:
+              infrastructure:
+                infrastructureDefinition:
+                  type: KubernetesDirect
+                  spec:
+                    connectorRef: Kubernetes_Quickstart
+                    namespace: default
+        - stage:
+            identifier: Canary
+            type: Deployment
+            spec:
+              infrastructure:
+                environmentRef: helmchart
+                infrastructureDefinition:
+                  type: KubernetesDirect
+                  spec:
+                    connectorRef: Kubernetes_Quickstart
+                    namespace: default
+```
+### Input Sets and Overlays
+
+Harness Input Sets are collections of runtime inputs for a Pipeline provided before execution. Runtime inputs contain the values that you would be prompted to provide when you executed the Pipeline.
+
+Overlays are groups of Input Sets. Overlays enable you to provide several Input Sets when executing a Pipeline.
+
+#### Schema
+
+The Input Set YAML depends on the settings in your Pipeline that use Runtime Inputs. 
+
+Input Set:
+
+
+```
+inputSet:
+  name: ""
+  tags: {}
+  identifier: ""
+  orgIdentifier: ""
+  projectIdentifier: ""
+  pipeline:
+    identifier: ""
+    stages:
+      - stage:
+          identifier: ""
+          type: ""
+          spec:
+            infrastructure:
+              infrastructureDefinition:
+                type: ""
+                spec:
+                  connectorRef: ""
+                  namespace: ""
+```
+Overlay:
+
+
+```
+overlayInputSet:
+  name: ""
+  identifier: ""
+  orgIdentifier: ""
+  projectIdentifier: ""
+  pipelineIdentifier: ""
+  inputSetReferences:
+    - ""
+    - ""
+  tags: {}
+```
+#### Example
+
+Input Set:
+
+
+```
+inputSet:
+  name: My Input Set 1
+  tags: {}
+  identifier: My_Input_Set
+  orgIdentifier: default
+  projectIdentifier: CD_Examples
+  pipeline:
+    identifier: YAML
+    stages:
+      - stage:
+          identifier: Deploy
+          type: Deployment
+          spec:
+            infrastructure:
+              infrastructureDefinition:
+                type: KubernetesDirect
+                spec:
+                  connectorRef: Kubernetes_Quickstart
+                  namespace: default
+      - stage:
+          identifier: Canary
+          type: Deployment
+          spec:
+            infrastructure:
+              environmentRef: helmchart
+              infrastructureDefinition:
+                type: KubernetesDirect
+                spec:
+                  connectorRef: Kubernetes_Quickstart
+                  namespace: default
+```
+Overlay:
+
+
+```
+overlayInputSet:
+  name: My Overlay Set
+  identifier: My_Overlay_Set
+  orgIdentifier: default
+  projectIdentifier: CD_Examples
+  pipelineIdentifier: YAML
+  inputSetReferences:
+    - My_Input_Set
+    - My_Input_Set_2
+  tags: {}
+```
+See also:
+
+* [Input Sets and Overlays](../input-sets.md)
+
+### Connectors
+
+Connectors contain the information necessary to integrate and work with 3rd party tools.
+
+Harness uses Connectors at Pipeline runtime to authenticate and perform operations with a 3rd party tool.
+
+In the visual editor, Connectors can be added inline as you build your Pipeline.
+
+In the YAML editor, Connectors are not configured inline. You can only reference existing Connectors.
+
+When you create a Connector, you can use YAML. 
+ +Here's what the YAML for a Connector looks like: + + +``` +connector: + name: cd-doc + identifier: cddoc + description: "" + orgIdentifier: default + projectIdentifier: CD_Examples + type: K8sCluster + spec: + credential: + type: InheritFromDelegate + delegateSelectors: + - example +``` +You reference a Connector in your Pipeline by using its Id in `connectorRef`: + + +``` +... +infrastructureDefinition: + type: KubernetesDirect + spec: + connectorRef: cddoc + namespace: default + releaseName: release-<+INFRA_KEY> +... +``` + diff --git a/docs/platform/9_Approvals/_category_.json b/docs/platform/9_Approvals/_category_.json new file mode 100644 index 00000000000..99b6b7fd004 --- /dev/null +++ b/docs/platform/9_Approvals/_category_.json @@ -0,0 +1 @@ +{"label": "Approvals", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Approvals"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "2d7y1cr09y"}} \ No newline at end of file diff --git a/docs/platform/9_Approvals/adding-harness-approval-stages.md b/docs/platform/9_Approvals/adding-harness-approval-stages.md new file mode 100644 index 00000000000..171a58da475 --- /dev/null +++ b/docs/platform/9_Approvals/adding-harness-approval-stages.md @@ -0,0 +1,139 @@ +--- +title: Using Manual Harness Approval Stages +description: Approve or reject a Pipeline at any point in its execution using Manual Approval Stages. +# sidebar_position: 2 +helpdocs_topic_id: fkvso46bok +helpdocs_category_id: 2d7y1cr09y +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can specify Harness User Group(s) to approve or reject a Pipeline at any point in its execution. During deployment, the User Group members use the Harness Manager to approve or reject the Pipeline deployment manually. 
+
+Approvals are added in between Stages to prevent the Pipeline execution from proceeding without an approval.
+
+For example, in a [Build Pipeline](../../continuous-integration/ci-quickstarts/ci-pipeline-quickstart.md), you might want to add an approval stage between a Build Stage and an Integration Test Stage.
+
+Other approval methods are:
+
+* [Manual Harness Approval Steps in CD Stages](https://docs.harness.io/article/43pzzhrcbv-using-harness-approval-steps-in-cd-stages): add Approval steps to a stage for manual intervention.
+* [Adding Jira Approval Stages and Steps](adding-jira-approval-stages.md): add Jira Approval stages and steps.
+
+### Before you begin
+
+* [Add a Stage](../8_Pipelines/add-a-stage.md)
+
+### Visual Summary
+
+Here's a Manual Approval Stage step during the execution of a Pipeline:
+
+![](./static/adding-harness-approval-stages-15.png)
+An approver can approve or reject the stage; rejecting it stops the Pipeline. The approver can also add comments and define variables for use by subsequent approvers and steps.
+
+Here's a quick video that walks you through setting up and running the step:
+
+Here's what a Manual Approval Stage and step looks like in YAML:
+
+YAML Example
+```
+- stage:
+    name: Manual Stage
+    identifier: Manual_Stage
+    description: ""
+    type: Approval
+    spec:
+      execution:
+        steps:
+          - step:
+              name: Approval
+              identifier: approval
+              type: HarnessApproval
+              timeout: 1d
+              spec:
+                approvalMessage: |-
+                  Please review the following information
+                  and approve the pipeline progression
+                includePipelineExecutionHistory: true
+                approvers:
+                  minimumCount: 1
+                  disallowPipelineExecutor: false
+                  userGroups:
+                    - docs
+                approverInputs:
+                  - name: myvar
+                    defaultValue: myvalue
+    failureStrategies: []
+```
+### Step 1: Add Approval Stage
+
+In a CD Pipeline, click **Add Stage**.
+
+Click **Approval**.
+
+Enter a name and then click **Harness Approval**. The **Harness Approval** stage appears, containing a new **Approval** step. 
+
+Click the **Approval** step.
+
+### Step 2: Set Timeout
+
+Set a default for the step timeout. Leave enough time for the Users in **Approvers** to see and respond to the waiting step.
+
+The default timeout for an Approval step is **1d** (24 hours). You can use **w** for weeks, **d** for days, **h** for hours, **m** for minutes, **s** for seconds, and **ms** for milliseconds. For example, 1d for one day.
+
+The maximum timeout duration is 24 days. The timeout countdown appears when the step is executed.
+
+![](./static/adding-harness-approval-stages-16.png)
+### Option: Add Message
+
+In **Approval Message**, add the message for the Users in **Approvers**.
+
+### Option: Include Pipeline Execution History in Approval Details
+
+Enable this option to provide approvers with the execution history for this Pipeline. This can help approvers make their decision.
+
+### Step 3: Select Approvers
+
+In **Approvers**, in **User Groups**, select the Harness User Groups, at the Project, Org, or Account scope, that will approve the step.
+
+![](./static/adding-harness-approval-stages-17.png)
+In **Number of approvers**, enter how many of the Users in the User Groups must approve the step.
+
+### Option: Prevent Approval by Pipeline Executor
+
+If you don't want the User that initiated the Pipeline execution to approve this step, select the **Disallow the executor from approving the pipeline** option.
+
+### Option: Approver Inputs
+
+You can enter variables, and when the approver views the step, they can provide new values for the variables.
+
+If there are multiple approvers, the first approver sees the variables as you entered them in the step. If the first approver enters new values, the next approver sees the values the first approver entered.
+
+A third approver will see all of the variables the first and second approver provided.
+
+The variable values entered by the final approver populate the expressions created from the inputs. 
+
+For example, if there were three approvers and you added a Shell Script step that referenced the input variables with an expression, the expression would render the variable values entered by the final, third approver.
+
+You can reference input variables using the `approverInputs` expression:
+
+`<+pipeline.stages.[stage_name].spec.execution.steps.[step_name].output.approverInputs.[variable_name]>`
+
+These variables can serve as inputs to later stages of the same Pipeline, where they support conditional execution or user overrides.
+
+For example, in a subsequent step's **Conditional Execution** settings, you could use an expression that only runs the step if it evaluates to true:
+
+`<+pipeline.stages.Shell_Script.spec.execution.steps.Harness_Approval_Step.output.approverInputs.foo> == 1`
+
+### Option: Advanced Settings
+
+See:
+
+* [Step Skip Condition Settings](../8_Pipelines/w_pipeline-steps-reference/step-skip-condition-settings.md)
+* [Step Failure Strategy Settings](../8_Pipelines/w_pipeline-steps-reference/step-failure-strategy-settings.md)
+* [Select Delegates with Selectors](../2_Delegates/delegate-guide/select-delegates-with-selectors.md)
+
+### See also
+
+* [Using Manual Harness Approval Steps in CD Stages](https://docs.harness.io/article/43pzzhrcbv-using-harness-approval-steps-in-cd-stages)
+* [Update Jira Issues in CD Stages](https://docs.harness.io/article/urdkli9e74-update-jira-issues-in-cd-stages)
+
diff --git a/docs/platform/9_Approvals/adding-jira-approval-stages.md b/docs/platform/9_Approvals/adding-jira-approval-stages.md
new file mode 100644
index 00000000000..d69a99514d2
--- /dev/null
+++ b/docs/platform/9_Approvals/adding-jira-approval-stages.md
@@ -0,0 +1,161 @@
+---
+title: Adding Jira Approval Stages and Steps
+description: Use Jira issues to approve or reject a Pipeline or stage at any point in its execution. 
+# sidebar_position: 2 +helpdocs_topic_id: 2lhfk506r8 +helpdocs_category_id: 2d7y1cr09y +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can use Jira issues to approve or reject a Pipeline or stage at any point in its execution. + +During deployment, a Jira issue's fields are evaluated according to criteria you define and its approval/rejection determines if the Pipeline or stage may proceed. + +The **Jira Approval** step can be added in Jira Approval stages or in CD stages. The Jira Approval step prevents the stage execution from proceeding without an approval. + +For example, in a [Kubernetes Blue Green Deployment](https://ngdocs.harness.io/article/mog5tnk5pi-create-a-kubernetes-blue-green-deployment), you might want to add an approval step between the Stage Deployment step, where the new app version is deployed to the staging environment, and the Swap Primary with Stage step, where production traffic is routed to the pods for the new version. + +Looking to create or update Jira issues? See [Create Jira Issues in CD Stages](https://ngdocs.harness.io/article/yu40zr6cvm-create-jira-issues-in-cd-stages), [Update Jira Issues in CD Stages](https://ngdocs.harness.io/article/urdkli9e74-update-jira-issues-in-cd-stages). + +### Before you begin + +* [Connect to Jira](../7_Connectors/connect-to-jira.md) +* [Create Jira Issues in CD Stages](https://docs.harness.io/article/yu40zr6cvm-create-jira-issues-in-cd-stages) +* [Update Jira Issues in CD Stages](https://docs.harness.io/article/urdkli9e74-update-jira-issues-in-cd-stages) + +### Visual Summary + +The following video shows you how to use the Jira Create, Jira Update, and Jira Approval steps: + +### Limitations + +* Harness supports only Jira fields of type `Option`, `Array`, `Any`, `Number`, `Date`, and `String`. Harness does not integrate with Jira fields that manage users, issue links, or attachments. 
This means that Jira fields like Assignee and Sprint are not accessible in Harness' Jira integration. + +### Review: Jira Approval Stages vs Steps + +You can use Jira Approvals in two ways: + +* **Jira Approval step:** you can add a Jira Approval step to any CD or Approval stage. +* **Jira Approval stage:** the Jira Approval stage includes Jira Create, Jira Approval, and Jira Update steps: + +![](./static/adding-jira-approval-stages-08.png) +You do not need to use the Jira Create and Jira Update steps with the Jira Approval step, but they are included in the Jira Approval stage because many users want to create a Jira issue, approve/reject based on its settings, and then update the Jira issue all in one stage. + +You can also achieve this simply by using the Jira Create, Jira Approval, and Jira Update steps within a non-Approval stage. + +The Jira Create and Jira Update steps are described in other topics. This topic describes the Jira Approval step only. + +See: + +* [Create Jira Issues in CD Stages](https://docs.harness.io/article/yu40zr6cvm-create-jira-issues-in-cd-stages) +* [Update Jira Issues in CD Stages](https://docs.harness.io/article/urdkli9e74-update-jira-issues-in-cd-stages) + +### Step 1: Add a Jira Approval Step + +In a CD or Approval stage, click **Add Step**, and then click **Jira Approval**. + +When you add a Jira Approval stage, Harness automatically adds Jira Create, Jira Approval, and Jira Update steps. We'll only cover the Jira Approval step here. + +In **Name**, enter a name that describes the step. + +In **Timeout**, enter how long you want Harness to try to complete the step before failing (and initiating the stage or step [Failure Strategy](../8_Pipelines/define-a-failure-strategy-on-stages-and-steps.md)). + +You can use `w` for weeks, `d` for days, `h` for hours, `m` for minutes, `s` for seconds, and `ms` for milliseconds. For example, `1d` for one day. + +Jira communication can take a few minutes. 
Do not use a brief timeout. + +The maximum is 3w 3d 20h 30m. In **Jira Connector**, create or select the [Jira Connector](../7_Connectors/connect-to-jira.md) to use. + +In **Project**, select the Jira project that contains the issue you want to evaluate. + +In **Issue Key**, enter the Jira issue key of the issue you want to evaluate. + +### Option: Use an Expression in Issue Key + +In **Issue Key**, you can use an expression to reference the Key ID from another Jira Create or Jira Update step. + +The Jira Create or Jira Update step you want to reference must be before the Jira Approval step that references it in the Pipeline and stage. + +First, identify the step where you want to get the ID from. + +You'll have to close the Jira Approval step to get the ID from the previous step. An ID is required, so you can enter any number for now and click **Save**. In the Pipeline, click **Execution History**. + +Select a successful execution, and click the Jira Create/Update step in the execution. + +Click the **Output** tab, locate the **Key** setting, and click the copy button. + +![](./static/adding-jira-approval-stages-09.png) +The expression will look something like this: + +`<+pipeline.stages.Jira_Stage.spec.execution.steps.jiraCreate.issue.key>` + +Now you have the expression that references the key ID from this step. + +Go back to your Jira Approval step. You can simply select **Edit Pipeline**. + +In **Issue Key**, select **Expression**. + +![](./static/adding-jira-approval-stages-10.png) +In **Issue Key**, paste in the expression you copied from the previous Jira Create/Update step. + +Now this Jira Approval step will use the issue created by the Jira Create/Update step. + +Remember that a Jira Create step creates a new, independent Jira issue every time it runs. If your Jira Approval step references that step's issue key, it evaluates a new issue on every run. 
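To make the path structure concrete, here is a minimal Python sketch (illustrative only — the real resolution happens inside the Harness expression engine, and the data shown is hypothetical) of how a dotted expression such as the one above selects a value from nested execution output:

```python
# Minimal sketch of dotted-path resolution, similar in spirit to how a
# Harness expression like
#   <+pipeline.stages.Jira_Stage.spec.execution.steps.jiraCreate.issue.key>
# selects a value from execution output. The data below is hypothetical.

def resolve(path: str, data: dict):
    """Walk a dotted path through nested dictionaries."""
    node = data
    for part in path.split("."):
        node = node[part]  # raises KeyError if the step or field is missing
    return node

execution_output = {
    "pipeline": {"stages": {"Jira_Stage": {"spec": {"execution": {"steps": {
        "jiraCreate": {"issue": {"key": "TJI-47890"}}
    }}}}}}
}

key = resolve(
    "pipeline.stages.Jira_Stage.spec.execution.steps.jiraCreate.issue.key",
    execution_output,
)
print(key)  # → TJI-47890
```

The same walk applies to any `<+...>` path: each dot selects one level of the execution output.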
+ +### Step 2: Set Approval Criteria + +The **Approval Criteria** in the step determines if the Pipeline or stage is approved or rejected. + +![](./static/adding-jira-approval-stages-11.png) +Whether the Pipeline/stage stops executing depends on the stage or step [Failure Strategy](../8_Pipelines/define-a-failure-strategy-on-stages-and-steps.md). You can specify criteria using **Conditions** and/or **JEXL Expression**. If you use them in combination, they both must evaluate to `True` for the step to be successful. + +In **Conditions**, you simply use the Jira Field, Operator, and Value to define approval criteria. + +In **JEXL Expression**, you can use [JEXL expressions](https://commons.apache.org/proper/commons-jexl/reference/syntax.html). You can use a JEXL expression if the field is set to **Fixed value**, **Runtime input**, or **Expression**. + +### Option: Set Rejection Criteria + +In **Optional Configuration**, in **Rejection Criteria**, you can define criteria for rejecting the approval. + +If you add rejection criteria, it is used in addition to the settings in **Approval Criteria**. + +### Option: Advanced Settings + +In **Advanced**, you can use the following options: + +* [Step Skip Condition Settings](../8_Pipelines/w_pipeline-steps-reference/step-skip-condition-settings.md) +* [Step Failure Strategy Settings](../8_Pipelines/w_pipeline-steps-reference/step-failure-strategy-settings.md) + +### Step 3: Apply and Test + +Click **Apply Changes**. The Jira Approval step is added to the stage. + +Run the Pipeline. + +When the Jira Approval step is reached, you can see its approval and rejection criteria: + +![](./static/adding-jira-approval-stages-12.png) +You can also click the **JIRA Ticket Pending Approval** link to open the ticket. + +The step can take a few minutes to receive information from Jira. + +### Review: Issue Expressions + +You can use `<+issue>` to refer to the value in the **Issue Key** setting. 
+ +For example, `<+issue.Status> == "Done"` in the Approval Criteria **JEXL Expression** checks whether the status of the issue in **Issue Key** is **Done**: + +![](./static/adding-jira-approval-stages-13.png) +`Status` is an issue field. You can use any issue field. + +### Notes + +* To add comments, use the **Comment** key. Use `\\` for line breaks. + +![](./static/adding-jira-approval-stages-14.png) +### See also + +* [Using Manual Harness Approval Stages](adding-harness-approval-stages.md) +* [Using Manual Harness Approval Steps in CD Stages](https://docs.harness.io/article/43pzzhrcbv-using-harness-approval-steps-in-cd-stages) + diff --git a/docs/platform/9_Approvals/service-now-approvals.md b/docs/platform/9_Approvals/service-now-approvals.md new file mode 100644 index 00000000000..f67425002d5 --- /dev/null +++ b/docs/platform/9_Approvals/service-now-approvals.md @@ -0,0 +1,126 @@ +--- +title: Adding ServiceNow Approval Steps and Stages +description: Describes how to add ServiceNow-based approvals for a Pipeline. +# sidebar_position: 2 +helpdocs_topic_id: h1so82u9ub +helpdocs_category_id: 2d7y1cr09y +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can use ServiceNow tickets to approve or reject a Pipeline or stage at any point in its execution. + +During deployment, a ServiceNow ticket's fields are evaluated according to the criteria you define, and its approval/rejection determines if the Pipeline or stage may proceed. + +Approvals can be added as stages or between stage steps to prevent stage execution from proceeding without approval. + +For example, in a [Kubernetes Blue Green Deployment](https://docs.harness.io/article/mog5tnk5pi-create-a-kubernetes-blue-green-deployment), you might want to add an approval step between the Stage Deployment step, where the new app version is deployed to the staging environment, and the Swap Primary with Stage step, where production traffic is routed to the pods for the new version. 
+ +### Before you begin + +* [Connect to ServiceNow](../7_Connectors/connect-to-service-now.md) + +### Review: ServiceNow Approval Stages vs Steps + +You can use ServiceNow Approvals in two ways: + +* **ServiceNow Approval step:** you can add a ServiceNow Approval step to any Pipeline or Approval stage. +* **ServiceNow Approval stage:** a dedicated Approval stage that contains the ServiceNow Approval step. + + ![](./static/service-now-approvals-00.png) + +### UTC Timezone Only + +The ServiceNow API only accepts datetime and time values in the UTC timezone. Consequently, input for any datetime/time fields in Harness ServiceNow steps must be provided in UTC format, irrespective of the time zone settings in your ServiceNow account. + +The time zone settings govern the display values of these fields, not their actual values. + +The display values in the Harness UI depend on your ServiceNow time zone settings. + +### Step: Add an Approval Step + +In your Pipeline, click **Add Stage**. + +![](./static/service-now-approvals-01.png) + +Click **Approval**. The Stage settings appear. + +![](./static/service-now-approvals-02.png) + +In **Name**, enter a name for your Stage and select **ServiceNow** as the approval type. Click **Setup Stage**. The pipeline appears. + +In the pipeline, click **ServiceNow Approval**. The **ServiceNow Approval** settings appear. + +![](./static/service-now-approvals-03.png) + +In **Timeout**, enter how long you want Harness to try to complete the step before failing (and initiating the stage or step [Failure Strategy](../8_Pipelines/define-a-failure-strategy-on-stages-and-steps.md)). + +You can use `w` for weeks, `d` for days, `h` for hours, `m` for minutes, `s` for seconds, and `ms` for milliseconds. For example, `1d` for one day. + +ServiceNow communication can take a few minutes. Do not use a brief timeout. + +The maximum is 3w 3d 20h 30m. In **ServiceNow Connector**, create or select the [ServiceNow Connector](../7_Connectors/connect-to-service-now.md) you want to use. 
+ +Select the ServiceNow **Ticket Type**. Use the same type as the ticket you want to approve or reject. + +Enter the ServiceNow **Ticket Number**. + +### Step 2: Set Approval Criteria + +The **Approval Criteria** in the step determines if the Pipeline or stage is approved or rejected. Define the approval criteria using the ServiceNow status items. + +![](./static/service-now-approvals-04.png) + +Whether the Pipeline/stage stops executing depends on the stage or step [Failure Strategy](../8_Pipelines/define-a-failure-strategy-on-stages-and-steps.md). You can specify criteria using **Conditions** and/or **JEXL Expression**. If you use them in combination, they both must evaluate to `True` for the step to be successful. + +In **Conditions**, you can use the ServiceNow ticket-related fields to define approval criteria. + +In **JEXL Expression**, you can use [JEXL expressions](https://commons.apache.org/proper/commons-jexl/reference/syntax.html). You can use a JEXL expression if the field is set to **Fixed value** or **Expression**. + +### Option: Set Rejection Criteria + +In **Optional Configuration**, in **Rejection Criteria**, you can define criteria for rejecting the approval. Define the rejection criteria using the ServiceNow status items. + +If you add rejection criteria, it is used in addition to the settings in **Approval Criteria**. + +### Option: Approval Change Window + +In **Approval Change Window**, use the **Window Start** and **Window End** values to specify the window in which Harness will proceed with the deployment. Once this step is approved, Harness proceeds with deployment if the current time is within this window. The values that appear depend on the type selected in **Ticket Type**. + +![](./static/service-now-approvals-05.png) + +The start and end times use the time zone set in the ServiceNow account selected in the ServiceNow Connector. 
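As an illustration of the change-window check described above, the sketch below uses plain Python (not Harness code) with hypothetical boundary values normalized to UTC, per the UTC-only input rule:

```python
from datetime import datetime, timezone

def in_change_window(now: datetime, window_start: datetime, window_end: datetime) -> bool:
    """Return True if 'now' falls within [window_start, window_end]."""
    return window_start <= now <= window_end

# Hypothetical maintenance window, expressed in UTC as the ServiceNow API requires.
start = datetime(2022, 11, 1, 22, 0, tzinfo=timezone.utc)
end = datetime(2022, 11, 2, 2, 0, tzinfo=timezone.utc)

# Inside the window: deployment proceeds after approval.
print(in_change_window(datetime(2022, 11, 1, 23, 30, tzinfo=timezone.utc), start, end))  # True
# Outside the window: deployment waits.
print(in_change_window(datetime(2022, 11, 2, 9, 0, tzinfo=timezone.utc), start, end))    # False
```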
+ +### Option: Advanced Settings + +In **Advanced**, you can use the following options: + +* [Delegate Selector](../2_Delegates/delegate-guide/select-delegates-with-selectors.md#option-select-a-delegate-for-a-step-using-tags) +* [Step Skip Condition Settings](../8_Pipelines/w_pipeline-steps-reference/step-skip-condition-settings.md) +* [Step Failure Strategy Settings](../8_Pipelines/w_pipeline-steps-reference/step-failure-strategy-settings.md) + +### Step 3: Apply and Test + +Click **Apply Changes**. The ServiceNow Approval step is added to the stage. + +Run the Pipeline. + +When the ServiceNow Approval step is reached, you can see its approval and rejection criteria: + +![](./static/service-now-approvals-06.png) + +### Review: Issue Expressions + +You can use `<+ticket>` to refer to the ticket in the **Ticket Number** setting. + +For example, `<+ticket.state.displayValue> == "New"` in the Approval Criteria **JEXL Expression** checks whether the state of the ticket is **New**: + +![](./static/service-now-approvals-07.png) + +`state` is a ticket field. You can use any ticket field. 
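The approval logic above — every Condition row and the JEXL Expression must all pass — can be sketched as follows, with plain Python standing in for JEXL and a hypothetical ticket payload:

```python
# Hypothetical ticket payload, shaped like the fields ServiceNow returns.
ticket = {
    "state": {"displayValue": "New"},
    "priority": {"displayValue": "1 - Critical"},
}

# Conditions tab: every (field, operator, value) row must hold.
conditions = [
    ticket["state"]["displayValue"] == "New",
    ticket["priority"]["displayValue"] == "1 - Critical",
]

# JEXL Expression, e.g. <+ticket.state.displayValue> == "New"
jexl_ok = ticket["state"]["displayValue"] == "New"

# When both Conditions and a JEXL Expression are set, both must evaluate to True.
approved = all(conditions) and jexl_ok
print(approved)  # → True
```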
+ +### See also + +* [Using Manual Harness Approval Stages](adding-harness-approval-stages.md) +* [Using Manual Harness Approval Steps in CD Stages](https://docs.harness.io/article/43pzzhrcbv-using-harness-approval-steps-in-cd-stages) + diff --git a/docs/platform/9_Approvals/static/adding-harness-approval-stages-15.png b/docs/platform/9_Approvals/static/adding-harness-approval-stages-15.png new file mode 100644 index 00000000000..fd51925d9a5 Binary files /dev/null and b/docs/platform/9_Approvals/static/adding-harness-approval-stages-15.png differ diff --git a/docs/platform/9_Approvals/static/adding-harness-approval-stages-16.png b/docs/platform/9_Approvals/static/adding-harness-approval-stages-16.png new file mode 100644 index 00000000000..9e41e216800 Binary files /dev/null and b/docs/platform/9_Approvals/static/adding-harness-approval-stages-16.png differ diff --git a/docs/platform/9_Approvals/static/adding-harness-approval-stages-17.png b/docs/platform/9_Approvals/static/adding-harness-approval-stages-17.png new file mode 100644 index 00000000000..04bb3e0ff92 Binary files /dev/null and b/docs/platform/9_Approvals/static/adding-harness-approval-stages-17.png differ diff --git a/docs/platform/9_Approvals/static/adding-jira-approval-stages-08.png b/docs/platform/9_Approvals/static/adding-jira-approval-stages-08.png new file mode 100644 index 00000000000..91f30f09cfb Binary files /dev/null and b/docs/platform/9_Approvals/static/adding-jira-approval-stages-08.png differ diff --git a/docs/platform/9_Approvals/static/adding-jira-approval-stages-09.png b/docs/platform/9_Approvals/static/adding-jira-approval-stages-09.png new file mode 100644 index 00000000000..5fdab13b7cd Binary files /dev/null and b/docs/platform/9_Approvals/static/adding-jira-approval-stages-09.png differ diff --git a/docs/platform/9_Approvals/static/adding-jira-approval-stages-10.png b/docs/platform/9_Approvals/static/adding-jira-approval-stages-10.png new file mode 100644 index 
00000000000..4ba0064c7ee Binary files /dev/null and b/docs/platform/9_Approvals/static/adding-jira-approval-stages-10.png differ diff --git a/docs/platform/9_Approvals/static/adding-jira-approval-stages-11.png b/docs/platform/9_Approvals/static/adding-jira-approval-stages-11.png new file mode 100644 index 00000000000..2f2a3237060 Binary files /dev/null and b/docs/platform/9_Approvals/static/adding-jira-approval-stages-11.png differ diff --git a/docs/platform/9_Approvals/static/adding-jira-approval-stages-12.png b/docs/platform/9_Approvals/static/adding-jira-approval-stages-12.png new file mode 100644 index 00000000000..b84ed9c6389 Binary files /dev/null and b/docs/platform/9_Approvals/static/adding-jira-approval-stages-12.png differ diff --git a/docs/platform/9_Approvals/static/adding-jira-approval-stages-13.png b/docs/platform/9_Approvals/static/adding-jira-approval-stages-13.png new file mode 100644 index 00000000000..8565230f43d Binary files /dev/null and b/docs/platform/9_Approvals/static/adding-jira-approval-stages-13.png differ diff --git a/docs/platform/9_Approvals/static/adding-jira-approval-stages-14.png b/docs/platform/9_Approvals/static/adding-jira-approval-stages-14.png new file mode 100644 index 00000000000..373ffcdcbb7 Binary files /dev/null and b/docs/platform/9_Approvals/static/adding-jira-approval-stages-14.png differ diff --git a/docs/platform/9_Approvals/static/service-now-approvals-00.png b/docs/platform/9_Approvals/static/service-now-approvals-00.png new file mode 100644 index 00000000000..96531cff06a Binary files /dev/null and b/docs/platform/9_Approvals/static/service-now-approvals-00.png differ diff --git a/docs/platform/9_Approvals/static/service-now-approvals-01.png b/docs/platform/9_Approvals/static/service-now-approvals-01.png new file mode 100644 index 00000000000..5717142fce8 Binary files /dev/null and b/docs/platform/9_Approvals/static/service-now-approvals-01.png differ diff --git 
a/docs/platform/9_Approvals/static/service-now-approvals-02.png b/docs/platform/9_Approvals/static/service-now-approvals-02.png new file mode 100644 index 00000000000..f9183ab6962 Binary files /dev/null and b/docs/platform/9_Approvals/static/service-now-approvals-02.png differ diff --git a/docs/platform/9_Approvals/static/service-now-approvals-03.png b/docs/platform/9_Approvals/static/service-now-approvals-03.png new file mode 100644 index 00000000000..1f1445e5230 Binary files /dev/null and b/docs/platform/9_Approvals/static/service-now-approvals-03.png differ diff --git a/docs/platform/9_Approvals/static/service-now-approvals-04.png b/docs/platform/9_Approvals/static/service-now-approvals-04.png new file mode 100644 index 00000000000..2d09c1c4cca Binary files /dev/null and b/docs/platform/9_Approvals/static/service-now-approvals-04.png differ diff --git a/docs/platform/9_Approvals/static/service-now-approvals-05.png b/docs/platform/9_Approvals/static/service-now-approvals-05.png new file mode 100644 index 00000000000..2f8963947fb Binary files /dev/null and b/docs/platform/9_Approvals/static/service-now-approvals-05.png differ diff --git a/docs/platform/9_Approvals/static/service-now-approvals-06.png b/docs/platform/9_Approvals/static/service-now-approvals-06.png new file mode 100644 index 00000000000..dbe092b3b76 Binary files /dev/null and b/docs/platform/9_Approvals/static/service-now-approvals-06.png differ diff --git a/docs/platform/9_Approvals/static/service-now-approvals-07.png b/docs/platform/9_Approvals/static/service-now-approvals-07.png new file mode 100644 index 00000000000..5b2c15755d6 Binary files /dev/null and b/docs/platform/9_Approvals/static/service-now-approvals-07.png differ diff --git a/docs/platform/9_Approvals/w_approval-ref/_category_.json b/docs/platform/9_Approvals/w_approval-ref/_category_.json new file mode 100644 index 00000000000..7a02190f422 --- /dev/null +++ b/docs/platform/9_Approvals/w_approval-ref/_category_.json @@ -0,0 +1 @@ 
+{"label": "Approval Connectors Reference", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Approval Connectors Reference"}, "customProps": {"position": "To position the category, enter a number and move this to the root level.", "helpdocs_category_id": "r8mi3j15it"}} \ No newline at end of file diff --git a/docs/platform/9_Approvals/w_approval-ref/jira-connector-settings-reference.md b/docs/platform/9_Approvals/w_approval-ref/jira-connector-settings-reference.md new file mode 100644 index 00000000000..a9fc707d2b2 --- /dev/null +++ b/docs/platform/9_Approvals/w_approval-ref/jira-connector-settings-reference.md @@ -0,0 +1,45 @@ +--- +title: Jira Connector Settings Reference +description: This topic describes the settings and permissions for the Jira Connector. +# sidebar_position: 2 +helpdocs_topic_id: ud8rysntnz +helpdocs_category_id: r8mi3j15it +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic describes the settings and permissions for the Jira Connector. + +You can connect Harness to Jira using a Harness Jira Connector. This Connector allows you to create and update Jira issues, and to use Jira issues in Approval steps. + +For instructions on how to set up this Connector, see [Connect to Jira](../../7_Connectors/connect-to-jira.md). + +Looking for How-tos? See [Create Jira Issues in CD Stages](https://docs.harness.io/article/yu40zr6cvm-create-jira-issues-in-cd-stages), [Update Jira Issues in CD Stages](https://newdocs.helpdocs.io/article/urdkli9e74-update-jira-issues-in-cd-stages), and [Adding Jira Approval Stages and Steps](../adding-jira-approval-stages.md). + +### Limitations + +Your Jira REST API account must have permissions to create and edit issues in the relevant Jira projects. 
The **Administer Jira** permission includes all relevant permissions (as does the **Administrator** or **Member** permission on [Jira next-gen](https://confluence.atlassian.com/jirasoftwarecloud/overview-of-permissions-in-next-gen-projects-959283605.html)). + +For details, see Atlassian's documentation on [Operation Permissions](https://developer.atlassian.com/cloud/jira/platform/rest/v3/?utm_source=%2Fcloud%2Fjira%2Fplatform%2Frest%2F&utm_medium=302#permissions), [Issues](https://developer.atlassian.com/cloud/jira/platform/rest/v3/?utm_source=%2Fcloud%2Fjira%2Fplatform%2Frest%2F&utm_medium=302#api-group-Issues), and [Managing Project Permissions](https://confluence.atlassian.com/adminjiracloud/managing-project-permissions-776636362.html#Managingprojectpermissions-Projectpermissionsoverview). + +### Name + +Enter a name for this Connector. You will use this name to select the Connector in Pipeline steps and settings. + +### URL + +Enter the base URL by which your users access your Jira applications. For example: `https://mycompany.atlassian.net`. + +In Jira, the base URL is set to the same URL that web browsers use to view your Jira instance. For details, see [Configuring the Base URL](https://confluence.atlassian.com/adminjiraserver071/configuring-the-base-url-802593107.html) from Atlassian. If you are using an on-premises Jira server with HTTPS redirects enabled, use the HTTPS URL to ensure the [Jira client follows redirects](https://confluence.atlassian.com/adminjiraserver/running-jira-applications-over-ssl-or-https-938847764.html#:~:text=If%20you%20want%20to%20only,to%20the%20corresponding%20HTTPS%20URLs.). + +### Credentials + +Enter your credentials. For **API Key**, use a Harness [Text Secret](../../6_Security/2-add-use-text-secrets.md). See [Manage API tokens for your Atlassian account](https://support.atlassian.com/atlassian-account/docs/manage-api-tokens-for-your-atlassian-account/) from Atlassian. 
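For background, Jira Cloud REST API tokens are sent with HTTP Basic authentication as `email:token`. The following sketch (hypothetical email and token) shows how that header is constructed; Harness builds it for you from the Connector's username and API Key secret:

```python
import base64

def basic_auth_header(email: str, api_token: str) -> str:
    """Build the HTTP Basic auth header Jira Cloud expects for email:api-token."""
    raw = f"{email}:{api_token}".encode("utf-8")
    return "Basic " + base64.b64encode(raw).decode("ascii")

# Hypothetical credentials for illustration only.
header = basic_auth_header("jira.user@mycompany.com", "example-api-token")
print(header.startswith("Basic "))  # True
```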
+ +### See also + +* [Create Jira Issues in CD Stages](https://docs.harness.io/article/yu40zr6cvm-create-jira-issues-in-cd-stages) +* [Update Jira Issues in CD Stages](https://docs.harness.io/article/urdkli9e74-update-jira-issues-in-cd-stages) +* [Using Jira Approval Steps in CD Stages](https://newdocs.helpdocs.io/article/urdkli9e74-update-jira-issues-in-cd-stages) +* [Adding Jira Approval Stages](../adding-jira-approval-stages.md) + diff --git a/docs/platform/_category_.json b/docs/platform/_category_.json new file mode 100644 index 00000000000..4e8e88932b6 --- /dev/null +++ b/docs/platform/_category_.json @@ -0,0 +1 @@ +{"label": "Platform", "collapsible": "true", "collapsed": "true", "className": "red", "link": {"type": "generated-index", "title": "Platform"}, "customProps": {"position": 90, "helpdocs_category_id": "uepsjmurpb", "helpdocs_parent_category_id": "3fso53aw1u"}} \ No newline at end of file diff --git a/docs/security-testing-orchestration/onboard-sto/30-tutorial-1-standalone-workflows.md b/docs/security-testing-orchestration/onboard-sto/30-tutorial-1-standalone-workflows.md index 92e3fb72797..70492331859 100644 --- a/docs/security-testing-orchestration/onboard-sto/30-tutorial-1-standalone-workflows.md +++ b/docs/security-testing-orchestration/onboard-sto/30-tutorial-1-standalone-workflows.md @@ -45,7 +45,7 @@ This workflow is supported for scanners that provide methods for transferring da ### Review: Scanner Coverage -See **Security Testing Orchestration** in [Supported Platforms and Technologies](https://docs.harness.io/article/1e536z41av-supported-platforms-and-technologies). +See **Security Testing Orchestration** in [Supported Platforms and Technologies](../../getting-started/supported-platforms-and-technologies.md). 
### Stand-Alone STO Workflows diff --git a/docs/self-managed-enterprise-edition/_category_.json b/docs/self-managed-enterprise-edition/_category_.json new file mode 100644 index 00000000000..68c7ceb9ff5 --- /dev/null +++ b/docs/self-managed-enterprise-edition/_category_.json @@ -0,0 +1,15 @@ +{ + "label":"Self-Managed Enterprise Edition", + "collapsible":"true", + "collapsed":"true", + "className":"red", + "link":{ + "type":"generated-index", + "title":"Self-Managed Enterprise Edition" + }, + "customProps":{ + "position":"To position the category, enter a number and move this to the root level.", + "helpdocs_category_id":"mm97945oxz", + "helpdocs_parent_category_id":"" + } + } \ No newline at end of file diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/_category_.json b/docs/self-managed-enterprise-edition/deploy-with-kots/_category_.json new file mode 100644 index 00000000000..2487cf2f77a --- /dev/null +++ b/docs/self-managed-enterprise-edition/deploy-with-kots/_category_.json @@ -0,0 +1,14 @@ +{ + "label":"Install with KOTS", + "position": 30, + "collapsible":"true", + "collapsed":"true", + "className":"red", + "link":{ + "type":"generated-index", + "title":"Install with KOTS" + }, + "customProps":{ + "helpdocs_category_id":"vu99714ib1" + } + } \ No newline at end of file diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/installing-self-managed-enterprise-edition-using-kots.md b/docs/self-managed-enterprise-edition/deploy-with-kots/installing-self-managed-enterprise-edition-using-kots.md new file mode 100644 index 00000000000..de469dca9c1 --- /dev/null +++ b/docs/self-managed-enterprise-edition/deploy-with-kots/installing-self-managed-enterprise-edition-using-kots.md @@ -0,0 +1,18 @@ +--- +title: Install Self-Managed Enterprise Edition using KOTS +description: The following topics explain how to install Harness Self-Managed Enterprise Edition using KOTS Admin Console. 
+# sidebar_position: 2 +helpdocs_topic_id: jy75oy83vd +helpdocs_category_id: vu99714ib1 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +The following topics provide information about using KOTS Admin Console to install Harness Self-Managed Enterprise Edition into a Kubernetes cluster: + +* [Infrastructure](kubernetes-cluster-on-prem-infrastructure-requirements.md) +* [Installation](kubernetes-cluster-on-prem-kubernetes-cluster-setup.md) +* [Add Ingress Controller Annotations](kubernetes-cluster-self-managed-add-ingress-controller-service-annotations.md) + +For instructions on using Helm to install Self-Managed Enterprise Edition, see [Install Harness Self-Managed Enterprise Edition Using Helm](https://docs.harness.io/article/gqoqinkhck). + diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/kubernetes-cluster-on-prem-infrastructure-requirements.md b/docs/self-managed-enterprise-edition/deploy-with-kots/kubernetes-cluster-on-prem-infrastructure-requirements.md new file mode 100644 index 00000000000..ed1b7133bb9 --- /dev/null +++ b/docs/self-managed-enterprise-edition/deploy-with-kots/kubernetes-cluster-on-prem-infrastructure-requirements.md @@ -0,0 +1,190 @@ +--- +title: Infrastructure requirements for KOTS +description: This document lists the infrastructure requirements for installing Harness Self-Managed Enterprise Edition +# sidebar_position: 2 +helpdocs_topic_id: d5lptkp5ow +helpdocs_category_id: vu99714ib1 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Installation of Harness Self-Managed Enterprise Edition in an existing Kubernetes cluster requires the following infrastructure. + +## Production environment + +Self-Managed Enterprise Edition NextGen is installed as an application on an existing Self-Managed Enterprise Edition FirstGen installation. + +The following tables list the resource requirements for the installation of Self-Managed Enterprise Edition in the production environment. 
+ +### Self-Managed Enterprise Edition FirstGen + +| **Microservice** | **Pods** | **CPU / Pod** | **Memory / Pod** | **Total CPU** | **Total Memory** | +| :-- | :-: | :-: | :-: | :-: | :-: | +| Manager | 2 | 2 | 4 | 4 | 8 | +| Verification | 2 | 1 | 3 | 2 | 6 | +| Machine Learning Engine | 1 | 8 | 2 | 8 | 2 | +| UI | 2 | 0.25 | 0.25 | 0.5 | 0.5 | +| MongoDB | 3 | 4 | 8 | 12 | 24 | +| Proxy | 1 | 0.5 | 0.5 | 0.5 | 0.5 | +| Ingress | 2 | 0.25 | 0.25 | 0.5 | 0.5 | +| TimescaleDB | 3 | 2 | 8 | 6 | 24 | +| KOTS Admin Pods |   |   |   | 4 | 8 | +| **Total** | | | | **37.5** | **73.5** | + +The compute resources listed for the KOTS admin pods support a full stack. In an existing cluster, the requirements for KOTS are usually lower. + +### Self-Managed Enterprise Edition NextGen + +| **Microservice** | **Pods** | **CPU / Pod** | **Memory / Pod** | **Total CPU** | **Total Memory** | +| :-- | :-: | :-: | :-: | :-: | :-: | +| Log Minio | 1 | 1 | 4Gi | 1 | 4Gi | +| Log service | 1 | 1 | 3Gi | 1 | 3Gi | +| SCM | 1 | 0.1 | 0.5Gi | 0.1 | 0.5Gi | +| Gateway | 2 | 0.5 | 3Gi | 1 | 6Gi | +| NextGen UI | 2 | 0.2 | 0.2Gi | 0.4 | 0.4Gi | +| Platform service | 2 | 1 | 3Gi | 2 | 6Gi | +| Test Intelligence | 2 | 1 | 3Gi | 2 | 6Gi | +| Access Control | 2 | 1 | 3Gi | 2 | 6Gi | +| CI Manager | 2 | 1 | 3Gi | 2 | 6Gi | +| NextGen Manager | 2 | 2 | 6Gi | 4 | 12Gi | +| Pipeline | 2 | 1 | 6Gi | 2 | 12Gi | +| **Total** | **19** | | | **17.5** | **61.9Gi** | + +## Development environment + +The following table lists the requirements for the installation of Self-Managed Enterprise Edition in the development environment. 
+ +| **Microservice** | **Pods** | **CPU / Pod** | **Memory / Pod** | **Total CPU** | **Total Memory** | +| :-- | :-: | :-: | :-: | :-: | :-: | +| Manager | 1 | 2 | 4 | 2 | 4 | +| Verification | 1 | 1 | 3 | 1 | 3 | +| Machine Learning Engine | 1 | 3 | 2 | 3 | 2 | +| UI | 1 | 0.25 | 0.25 | 0.25 | 0.25 | +| MongoDB | 3 | 2 | 4 | 6 | 12 | +| Proxy | 1 | 0.5 | 0.5 | 0.5 | 0.5 | +| Ingress | 1 | 0.25 | 0.25 | 0.25 | 0.25 | +| TimescaleDB | 1 | 2 | 8 | 2 | 8 | +| Kots Admin Pods |   |   |   | 4 | 8 | +| **Total** | | | | **19** | **38** | + +## Recommended node specifications + +Harness recommends the following minimum requirements for nodes. + +* 8 cores vCPU +* 12 GB memory + +## Storage requirements + +Your Kubernetes cluster must attach a Kubernetes [StorageClass](https://kubernetes.io/docs/reference/kubernetes-api/config-and-storage-resources/storage-class-v1/) resource. You provide the name of the [StorageClass](https://kubernetes.io/docs/reference/kubernetes-api/config-and-storage-resources/storage-class-v1/) during the installation process. + +A typical installation of Self-Managed Enterprise Edition uses a total of 1000 GB of storage in the following distribution: + +| **Component** | **Pods** | **Storage per pod** | **Total** | +| :-- | :-: | :-: | :-: | +| **MongoDB** | 3 | 200 GB | 600 GB | +| **Timescale DB** | 3 | 120 GB | 360 GB | +| **Redis** | n/a | n/a | 40 GB | + +A Proof of Concept (PoC) installation of Self-Managed Enterprise Edition requires 200 GB of storage in the following distribution: + +| **Component** | **Pods** | **Storage per pod** | **Total** | +| :-- | :-: | :-: | :-: | +| **MongoDB** | 3 | 50 GB | 150 GB | +| **Timescale DB** | 1 | 20 GB | 20 GB | +| **Redis** | n/a | n/a | 30 GB | + +## Allow list and outbound access requirements + +Add the following URLs to your allow list: + +| **URL** | **Usage** | +| :-- | :-- | +| **kots.io** | KOTS pulls the latest versions of the `kubectl` plugin and KOTS admin console (`kotsadm`). 
| +| **app.replicated.com** | KOTS admin console connects to check for the releases that your license allows. | +| **proxy.replicated.com** | Allows you to proxy your registry to pull your private images. | + +Provide outbound access to the following URLs: + +* proxy.replicated.com​ +* replicated.app +* k8s.kurl.sh​ +* app.replicated.com + +Outbound access is required for **connected install only**. Outbound access is not required to install in [Airgap mode](https://kots.io/kotsadm/installing/airgap-packages/). If your cluster does not have direct outbound connectivity and requires a proxy for outbound connections, see the following for information on how to create a proxy on the node machines: [https://docs.docker.com/network/proxy](https://docs.docker.com/network/proxy/). + +## Cluster and network architecture + +The following diagram describes the cluster and network architecture for a Self-Managed Enterprise Edition Kubernetes Cluster installation. + +![](./static/kubernetes-cluster-on-prem-infrastructure-requirements-04.png) + +## Namespace requirements + +The examples in this documentation use the `harness` namespace. + +If your installation will operate in a different namespace, you must update the Harness `spec` samples you use to apply the namespace you specified. + +## Load balancer + +The installation of Harness Self-Managed Enterprise Edition requires a load balancer. You enter the URL of the load balancer into the KOTS admin console when Self-Managed Enterprise Edition is installed. + +After Harness Self-Managed Enterprise Edition is installed, the load balancer is used to access the Harness Manager UI with a web browser. + +For information on how to create the load balancer, see [Self-Managed Enterprise Edition - Kubernetes Cluster: Setup Guide](kubernetes-cluster-on-prem-kubernetes-cluster-setup.md). + +### gRPC and load balancer settings + +The configuration of gRPC depends on load balancer support for HTTP2. 
+ +#### Load balancer support for HTTP2 over port 443 + +If your load balancer supports HTTP2 over port 443, you configure gRPC when you install Self-Managed Enterprise Edition NextGen. gRPC is configured in the **GRPC Target** and **GRPC Authority** fields. + +![](./static/kubernetes-cluster-on-prem-infrastructure-requirements-05.png) + +The following table describes the **GRPC Target** and **GRPC Authority** fields. + + +| **Value** | **Description** | +| :-- | :-- | +| **GRPC Target** | The hostname of the load balancer. This is the URL of the load balancer. | +| **GRPC Authority** | Append the hostname to the following string: `manager-grpc-`. For example, `manager-grpc-35.202.197.230`. | + +#### No load balancer support for HTTP2 over port 443 + +If your load balancer does not support HTTP2 over port 443, use one of the following configuration options: + +* **Load balancer supports multiple SSL ports.** Add port 9879 in the application load balancer and target port 9879 or node port 32510 on the Ingress controller. + + | **Value** | **Description** | + | :-- | :-- | + | **GRPC Target** | The hostname of the load balancer. | + | **GRPC Authority** | The hostname of the load balancer. | + +* **Load balancer does not support multiple SSL ports.** Create a new load balancer and target port 9879 or node port 32510 on the Ingress controller: + + | **Value** | **Description** | + | :-- | :-- | + | **GRPC Target** | The hostname of the new load balancer. | + | **GRPC Authority** | The hostname of the new load balancer. | + +## Trusted certificate requirement for Harness Self-Managed Enterprise Edition + +You can use secure or unencrypted connections to Harness Manager. This option depends on the URL scheme you apply during installation, when you configure the **Load Balancer URL** field. You can use `https://` or `http://`. 
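Before choosing the `https://` scheme, you can verify that the certificate presented by the load balancer validates against a public trust store (self-signed certificates fail this check). This is a sketch under the assumption that `openssl` is available on your workstation; the hostname and helper name are placeholders.

```shell
# Sketch only: the hostname is a placeholder for your Load Balancer URL host.

# Succeeds when the certificate chain validates against the system
# trust store; self-signed certificates fail this check.
cert_publicly_trusted() {
  echo | openssl s_client -connect "$1:443" -servername "$1" 2>/dev/null \
    | grep -qi 'Verify return code: 0'
}

# cert_publicly_trusted "harness.example.com" && echo "use https://"
```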
+ +![](./static/kubernetes-cluster-on-prem-infrastructure-requirements-06.png) + +For secure connections from your integrations to Harness Manager, you must use a publicly trusted certificate. This includes your integration with Harness Delegate as well as with GitHub webhooks and other integrations. Harness does not support self-signed certificates for connections to Harness Manager. + +For connections from Harness Manager outbound to an integration, you can use a self-signed certificate. In this case, you must import the self-signed certificate into Harness Delegate's JRE keystore manually or by using a Harness Delegate Profile. + +### Terminate at Harness + +You have the option to terminate TLS at the Harness ingress instead of at the load balancer. If you configured the Harness ingress controller, you can add a TLS secret to the `harness` namespace. + +The following command adds a TLS secret named `harness-cert`, based on a public certificate: + +``` +kubectl create secret tls harness-cert --cert=path/to/cert/file --key=path/to/key/file +``` diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/kubernetes-cluster-on-prem-kubernetes-cluster-setup.md b/docs/self-managed-enterprise-edition/deploy-with-kots/kubernetes-cluster-on-prem-kubernetes-cluster-setup.md new file mode 100644 index 00000000000..83b3c64c833 --- /dev/null +++ b/docs/self-managed-enterprise-edition/deploy-with-kots/kubernetes-cluster-on-prem-kubernetes-cluster-setup.md @@ -0,0 +1,796 @@ +--- +title: Install Self-Managed Enterprise Edition with KOTS +description: This topic covers installing Harness Self-Managed Enterprise Edition - Kubernetes Cluster NextGen. +# sidebar_position: 2 +helpdocs_topic_id: 95mwydgm6w +helpdocs_category_id: vu99714ib1 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This topic covers installing Harness Self-Managed Enterprise Edition - Kubernetes Cluster **NextGen** in an existing Kubernetes cluster. 
Harness Self-Managed Enterprise Edition - Kubernetes Cluster **NextGen** uses the [KOTS kubectl plugin](https://kots.io/kots-cli/getting-started/) for installation. + +To install Harness Self-Managed Enterprise Edition - Kubernetes Cluster **NextGen**, first you install Harness Self-Managed Enterprise Edition - Kubernetes Cluster **FirstGen**, and then you install NextGen as an application. + +We assume that you are familiar with Kubernetes and with managing configurations using [Kustomize](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/) overlays. + +Installing Harness Self-Managed Enterprise Edition into an existing Kubernetes cluster is a simple process: you prepare your existing cluster and network, then use the KOTS admin tool and Kustomize to complete the installation and deploy Harness. + +## Cluster requirements + +Do not perform the steps in this topic until you have set up the requirements in the [Self-Managed Enterprise Edition - Kubernetes Cluster: Infrastructure Requirements](kubernetes-cluster-on-prem-infrastructure-requirements.md) topic. + +## Summary + +Installing Harness Self-Managed Enterprise Edition in an existing cluster is performed as a [KOTS Existing Cluster Online Install](https://kots.io/kotsadm/installing/installing-a-kots-app/#existing-cluster-or-embedded-kubernetes). + +This means that you are using an existing Kubernetes cluster, as opposed to bare metal or VMs, and that your cluster can make outbound internet requests for an online installation. 
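Before starting a connected (online) install, you can confirm that the cluster nodes reach the endpoints listed in the infrastructure requirements topic. A minimal sketch, run from a node or a debug pod; `check_outbound` is an illustrative helper, not a Harness or KOTS command.

```shell
# Endpoints required for a connected (online) install, per the
# infrastructure requirements topic.
ENDPOINTS="proxy.replicated.com replicated.app k8s.kurl.sh app.replicated.com"

# Reports which of the required endpoints are reachable over HTTPS.
check_outbound() {
  for host in $ENDPOINTS; do
    if curl -sI --max-time 10 "https://${host}" >/dev/null; then
      echo "${host}: reachable"
    else
      echo "${host}: NOT reachable"
    fi
  done
}

# Run from a node (or a debug pod) in the cluster:
# check_outbound
```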
+ +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-07.png) + +## Harness Self-Managed Enterprise Edition NextGen installation options + +How you install Harness Self-Managed Enterprise Edition NextGen will follow one of the use cases below: + +### NextGen on existing FirstGen cluster + +In this scenario, you have an existing Harness Self-Managed Enterprise Edition FirstGen cluster running and you want to add Harness Self-Managed Enterprise Edition NextGen to it. + +You simply add Harness Self-Managed Enterprise Edition NextGen as a new application in your existing FirstGen installation. + +1. Open the FirstGen KOTS admin tool. +2. Install NextGen as a new application on existing FirstGen cluster. +3. Upload the NextGen license file. +4. Use the exact same FirstGen configuration values for the NextGen configuration. + +If you are using this option, skip to [Install NextGen on Existing FirstGen Cluster](kubernetes-cluster-on-prem-kubernetes-cluster-setup.md#install-next-gen-on-existing-first-gen-cluster). + +### NextGen on new FirstGen cluster + +In this scenario, you want to install FirstGen and NextGen on a new cluster. + +1. Create a new Kubernetes cluster following the steps in the [Self-Managed Enterprise Edition - Kubernetes Cluster: Infrastructure Requirements](kubernetes-cluster-on-prem-infrastructure-requirements.md). +2. Install FirstGen. +3. Install NextGen as a new application on existing FirstGen cluster. +4. Upload the NextGen license file. +5. Use the exact same FirstGen configuration values for the NextGen configuration. + +If you are using this option, do the following: + +1. Follow all of the FirstGen installation instructions beginning with [Step 1: Set up Cluster Requirements](kubernetes-cluster-on-prem-kubernetes-cluster-setup.md#step-1-set-up-cluster-requirements). +2. 
Follow the NextGen installation instructions in [Install NextGen on Existing FirstGen Cluster](kubernetes-cluster-on-prem-kubernetes-cluster-setup.md#install-next-gen-on-existing-first-gen-cluster). + +### Legacy FirstGen not using KOTS + +In this scenario, you have a legacy FirstGen installation that is not a KOTS-based installation. + +This process will involve migrating your legacy FirstGen data to a new KOTS-based FirstGen and then installing NextGen. + +1. Create a new Kubernetes cluster following the steps in the [Self-Managed Enterprise Edition - Kubernetes Cluster: Infrastructure Requirements](kubernetes-cluster-on-prem-infrastructure-requirements.md). +2. Install FirstGen. +3. Migrate data to new FirstGen using a script from Harness Support. +4. Install NextGen as a new application on the new FirstGen cluster. +5. Upload the NextGen license file. +6. Use the exact same FirstGen configuration values for the NextGen configuration. + +If you are using this option, do the following: + +1. Follow all of the FirstGen installation instructions beginning with [Step 1: Set up Cluster Requirements](kubernetes-cluster-on-prem-kubernetes-cluster-setup.md#step-1-set-up-cluster-requirements). +2. Migrate data to new FirstGen using a script from Harness Support. +3. Follow the NextGen installation instructions in [Install NextGen on Existing FirstGen Cluster](kubernetes-cluster-on-prem-kubernetes-cluster-setup.md#install-next-gen-on-existing-first-gen-cluster). + +## Step 1: Set up cluster requirements + +As stated earlier, follow the steps in the [Self-Managed Enterprise Edition - Kubernetes Cluster: Infrastructure Requirements](kubernetes-cluster-on-prem-infrastructure-requirements.md) topic to ensure you have your cluster set up correctly. + +These requirements also include RBAC settings that might require your IT administrator to assist you unless your user account is bound to the `cluster-admin` Cluster Role. 
+ +Specifically, you need to create a KOTS admin Role and bind it to the user that will install Harness. You also need to create a Harness ClusterRole. + +## Step 2: Set up networking requirements + +Perform the following steps to ensure that you have the load balancer set up for Harness Self-Managed Enterprise Edition. + +Later, when you set up the kustomization for Harness Self-Managed Enterprise Edition, you will provide an IP address for the cluster load balancer settings. + +Finally, when you configure the Harness Self-Managed Enterprise Edition application, you will provide the Load Balancer URL. This URL is what Harness Self-Managed Enterprise Edition users will use. + +### Using NodePort? + +If you are creating the load balancer's Service type using NodePort, create a load balancer that points to any port in range 30000-32767 on the node pool on which the Kubernetes cluster is running. + +If you are using NodePort, you can skip to [Step 3: Configure Harness](#step-3-configure-harness). + +### Set up a static, external IP address + +You should have a static IP address reserved to expose Harness outside of the Kubernetes cluster. + +For example, in the GCP console, click **VPC network**, and then click **External IP Addresses**. + +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-08.png) + + +For more information, see [Reserving a static external IP address](https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address). + +For GCP, the External IP address must be [Premium Tier](https://cloud.google.com/network-tiers/docs/overview#premium-tier). + +### Set up DNS + +Set up DNS to resolve the domain name you want to use for Harness Self-Managed Enterprise Edition to the static IP address you reserved in the previous step. 
+ +For example, the domain name **harness.abc.com** would resolve to the static IP: + + +``` +host harness.abc.com +harness.abc.com has address 192.0.2.0 +``` + +The above DNS setup can be tested by running `host `. + +## Review: OpenShift clusters + +If you will be using OpenShift Clusters, run the following commands after installing the KOTS plugin, but before installing Harness: + + +``` +oc adm policy add-scc-to-user anyuid -z harness-serviceaccount -n harness +``` + +``` +oc adm policy add-scc-to-user anyuid -z harness-default -n harness +``` + +``` +oc adm policy add-scc-to-user anyuid -z default -n harness +``` + +Once you've installed Harness and you want to install a Harness Kubernetes Delegate, see [Delegates and OpenShift](#delegates-and-open-shift) below. + +## Option 1: Disconnected installation (air gap) + +The following steps will install KOTS from your private repository and the Harness Self-Managed Enterprise Edition license and air-gap file you obtain from Harness. + +1. Download the latest KOTS (kotsadm.tar.gz) release from . +2. Push KOTS images to your private registry: + + ``` + kubectl kots admin-console push-images ./kotsadm.tar.gz /harness \ + --registry-username \ + --registry-password + ``` + +3. Obtain the Harness license file from your Harness Customer Success contact or email [support@harness.io](https://mail.google.com/mail/?view=cm&fs=1&tf=1&to=support@harness.io). +4. Obtain the Harness air-gap file from Harness. +5. Log into your cluster. +6. 
Install KOTS and Harness using the following command: + + ``` + kubectl kots install harness + --namespace harness + --shared-password + --license-file + --config-values + --airgap-bundle .airgap> + --kotsadm-registry /harness + --kotsadm-namespace harness-kots + --registry-username + --registry-password + ``` + + +##### NOTE +* The `--namespace` parameter uses the namespace you created in [Self-Managed Enterprise Edition - Kubernetes Cluster: Infrastructure Requirements](kubernetes-cluster-on-prem-infrastructure-requirements.md). In this documentation, we use the namespace **harness**. +* For the `--shared-password` parameter, enter a password for the KOTS admin console. Use this password to log into the KOTS admin tool. +* The `--config-values` parameter is required if you use `config-values` files, as described in [Config Values](https://kots.io/kotsadm/installing/automating/#config-values) from KOTS. + +In the terminal, it looks like this: + + +``` + • Deploying Admin Console + • Creating namespace ✓ + • Waiting for datastore to be ready ✓ +``` + +The KOTS admin tool URL is provided: + +``` + • Waiting for Admin Console to be ready ✓ + + • Press Ctrl+C to exit + • Go to http://localhost:8800 to access the Admin Console +``` +Use the URL provided in the output to open the KOTS admin console in a browser. + +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-09.png) + +Enter the password you provided earlier, and click **Log In**. + +You might be prompted to allow a port-forwarding connection into the cluster. + +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-10.png) + +With KOTS and Harness installed, you can continue with the necessary configuration. + +## Option 2: Connected installation + +The following steps will install KOTS and Harness Self-Managed Enterprise Edition online. 
There is also an option to use a Harness Self-Managed Enterprise Edition air-gap installation file instead of downloading Harness Self-Managed Enterprise Edition. + +### Install KOTS plugin + +1. Log into your cluster. +2. Install the KOTS `kubectl` plugin using the following command: + + ``` + curl https://kots.io/install | bash + ``` + + The output of the command is similar to this: + + ``` + Installing replicatedhq/kots v1.16.1 + (https://github.com/replicatedhq/kots/releases/download/v1.16.1/kots_darwin_amd64.tar.gz)... + ############################################# 100.0%#=#=-# # + ############################################# 100.0% + Installed at /usr/local/bin/kubectl-kots + ``` + + To test the installation, run this command: + + ``` + kubectl kots --help + ``` + + If the installation was successful, KOTS Help is displayed. You can continue with the installation of Harness Self-Managed Enterprise Edition into your cluster. + +### Install KOTS + +To install the KOTS admin tool, enter the following command: + +``` +kubectl kots install harness +``` + +You are prompted to enter the namespace for the Harness installation. This is the namespace you created in [Self-Managed Enterprise Edition - Kubernetes Cluster: Infrastructure Requirements](kubernetes-cluster-on-prem-infrastructure-requirements.md). + +In this documentation, we use the namespace `harness`. + +In the terminal, it looks like this: + +``` +Enter the namespace to deploy to: harness + • Deploying Admin Console + • Creating namespace ✓ + • Waiting for datastore to be ready ✓ +``` +Enter a password for the KOTS admin console. You'll use this password to login to the KOTS admin tool. 
+ +The KOTS admin tool URL is provided: + +``` +Enter a new password to be used for the Admin Console: •••••••• + • Waiting for Admin Console to be ready ✓ + + • Press Ctrl+C to exit + • Go to http://localhost:8800 to access the Admin Console +``` +Use the URL provided in the output to open the KOTS admin console in a browser. + +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-11.png) + +Enter the password you provided earlier, and click **Log In**. + +You might be prompted to allow a port-forward connection into the cluster. + +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-12.png) + +### Upload your Harness license + +After you log in to the KOTS admin console, you can upload your Harness license. + +Obtain the Harness license file from your Harness Customer Success contact or email [support@harness.io](mailto:support@harness.io). + +Drag your license YAML file into the KOTS admin tool: + +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-13.png) + +Next, upload the license file: + +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-14.png) + +Click **Upload license**. + +Now that the license file is uploaded, you can install Harness. + +### Download Harness over the internet + +If you are installing Harness over the Internet, click the **download Harness from the Internet** link. + +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-15.png) + +KOTS begins to install Harness into your cluster. + +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-16.png) + +Next, you will configure Harness. + +## Step 3: Configure Harness + +Now that you have added your license, you can configure the networking for the Harness installation. + +If the KOTS admin tool is not running, point `kubectl` to the cluster where Harness is deployed and run the following command: + +``` +kubectl kots admin-console --namespace harness +``` + +In the KOTS admin tool, the **Configure Harness** settings appear. 
+ +Harness Self-Managed Enterprise Edition - Kubernetes Cluster requires that you provide a NodePort and Application URL. + +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-17.png) + +### Mode + +* Select **Demo** to run Harness Self-Managed Enterprise Edition in demo mode and experiment with it. If you're a new user, choose **Demo** and upgrade to Production HA later. +* Select **Production HA** to run a production version of Harness Self-Managed Enterprise Edition. + +### Ingress service type + +By default, nginx is used for ingress automatically. If you are deploying nginx separately, do the following: + +1. Click **Advanced Configurations**. +2. Disable the **Install Nginx Ingress Controller** option. + +### NodePort + +Enter any port in the range of 30000 to 32767 on the node pool on which the Kubernetes cluster is running. + +If you do not enter a port, Harness uses 32500 by default. + +### External Loadbalancer + +Enter the IP address of the load balancer. + +### Application URL + +Enter the URL users will enter to access Harness. This is the DNS domain name mapped to the load balancer IP address. + +When you are done, click **Continue**. + +### Storage class + +You can also add a storage class. The name of the storage class depends on the provider that hosts your Kubernetes cluster. See [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/#parameters) from Kubernetes. + +If you don't provide a name, Harness uses `default`. + +When your installation of Harness Self-Managed Enterprise Edition is complete, run the following command to list the storage classes that are available in the namespace, for example, the **harness** namespace: + +``` +kubectl get storageclass -n harness +``` + +Type the name of the storage class. + +### Option: Advanced configurations + +In the **Advanced Configurations** section, there are a number of advanced settings you can configure. 
If this is the first time you are setting up Harness Self-Managed Enterprise Edition, there's no reason to fine-tune the installation with these settings. + +You can change the settings later in the KOTS admin console **Config** tab: + +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-18.png) + +#### gRPC and load balancer settings + +In **Scheme**, if you select HTTPS, the GRPC settings appear. + +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-19.png) + +**If your load balancer does support HTTP2 over port 443**, enter the following: + +* **GRPC Target:** enter the load balancer hostname (the hostname from the load balancer URL) +* **GRPC Authority:** append the hostname to `manager-grpc-`. For example: `manager-grpc-35.202.197.230`. + +**If your load balancer does not support HTTP2 over port 443**, you have two options: + +* If your load balancer supports multiple ports for SSL, add port 9879 in the application load balancer and target port 9879 or node port 32510 on the Ingress controller. + + **GRPC Target:** enter the load balancer hostname + + **GRPC Authority:** enter the load balancer hostname +* If your load balancer does not support multiple ports for SSL, create a new load balancer and target port 9879 or node port 32510 on the Ingress controller: + + **GRPC Target:** enter the new load balancer hostname + + **GRPC Authority:** enter the new load balancer hostname + +#### Log service backend + +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-20.png) + +There are two options for **Log Service Backend**: + +**Minio:** If you want to use the built-in [Minio](https://docs.min.io/docs/minio-quickstart-guide.html) log service, your load balancer needs to reach the Ingress controller on port 9000. Create a new load balancer and target port 9000 or node port 32507. + +**Amazon S3 Bucket:** Enter the S3 bucket settings to use. 
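The ports mentioned in this section (9879 or node port 32510 for gRPC, 9000 or node port 32507 for Minio) can be sanity-checked before you continue. A sketch, assuming `nc` is available; the hostname and helper names are placeholders, and `valid_nodeport` simply enforces the Kubernetes NodePort range (30000-32767) used throughout this guide.

```shell
# Sketch only: hostname and helper names are illustrative placeholders.

# True when the port is inside the Kubernetes NodePort range.
valid_nodeport() {
  [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]
}

# Checks that a TCP port is reachable through the load balancer,
# e.g. 9000 for the built-in Minio log service or 9879 for gRPC.
port_open() {
  nc -z -w 5 "$1" "$2"
}

valid_nodeport 32507 && echo "32507 is a valid node port"
# port_open "harness.example.com" 9000 && echo "Minio port reachable"
```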
+ +## Step 4: Perform preflight checks + +Preflight checks run automatically and verify that your setup meets the minimum requirements. + +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-21.png) + +You can skip these checks, but we recommend you let them run. + +Fix any issues in the preflight steps. A common example is the message: + +``` +Your cluster meets the minimum version of Kubernetes, but we recommend you update to 1.15.0 or later. +``` + +You can update your cluster's version of Kubernetes if you like. + +## Step 5: Deploy Harness + +After you complete the preflight checks, click **Deploy and Continue**. + +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-22.png) + +Harness is deployed in a few minutes. + +In a new browser tab, go to the following URL, replacing `` with the URL you entered in the **Application URL** setting in the KOTS admin console: + +``` +/auth/#/signup +``` + +For example: + +``` +http://harness.mycompany.com/auth/#/signup +``` + +The Harness sign-up page appears. + +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-23.png) + + +Sign up with a new account and then sign in. + +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-24.png) + +Your new account will be added to the Harness Account Administrators User Group. + +See [Add and Manage User Groups](https://docs.harness.io/article/dfwuvmy33m-add-user-groups). + +### Future versions + +To set up future versions of Harness Self-Managed Enterprise Edition, in the KOTS admin console, in the **Version history** tab, click **Deploy**. The new version is displayed in Deployed version. + +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-25.png) + +### Important next steps + +**Important:** You cannot invite other users to Harness until a Harness Delegate is installed and a Harness SMTP Collaboration Provider is configured. + +1. 
Install Harness Delegate: [Delegate Installation Overview](https://docs.harness.io/article/igftn7rrtg). + +2. Set up an SMTP Collaboration Provider in Harness for email notifications from the Harness Manager. +Ensure you open the correct port for your SMTP provider, such as [Office 365](https://support.office.com/en-us/article/server-settings-you-ll-need-from-your-email-provider-c82de912-adcc-4787-8283-45a1161f3cc3). + +3. [Add a Secrets Manager](https://docs.harness.io/article/bo4qbrcggv-add-secrets-manager). By default, Harness Self-Managed Enterprise Edition installations use the local Harness MongoDB as the default Harness Secrets Manager. This is not recommended. + +After Harness Self-Managed Enterprise Edition installation, configure a new Secrets Manager (Vault, AWS, etc.). You will need to open your network for the Secrets Manager connection. + +### Delegates and OpenShift + +If you are deploying the Harness Kubernetes Delegate into an OpenShift cluster, you need to edit the Harness Kubernetes Delegate YAML before installing the Delegate. + +You only need to point to the OpenShift image. + +Here's the default YAML with `harness/delegate:latest`: + +``` +... +apiVersion: apps/v1 +kind: StatefulSet +... + spec: + containers: + - image: harness/delegate:latest +``` + +Change the `image` entry to `harness/delegate:non-root-openshift`: + +``` +... +apiVersion: apps/v1 +kind: StatefulSet +... + spec: + containers: + - image: harness/delegate:non-root-openshift +``` + +## Updating Harness FirstGen + +**Do not upgrade Harness past 4 major releases.** Instead, upgrade through each interim release until you reach the latest release. A best practice is to upgrade Harness once a month. Follow these steps to update your Harness Self-Managed Enterprise Edition installation. + +The steps are very similar to how you installed Harness initially. 
+ +For more information on updating KOTS and applications, see [Using CLI](https://kots.io/kotsadm/updating/updating-kots-apps/#using-cli) and [Updating the Admin Console](https://kots.io/kotsadm/updating/updating-admin-console/) from KOTS. + +### Disconnected (air gap) + +The following steps require a private registry, just like the initial installation of Harness. + +#### Upgrade Harness + +1. Download the latest release from Harness. + +2. Run the following command on the cluster hosting Harness, replacing the placeholders: + + ``` + kubectl kots upstream upgrade harness \ + --airgap-bundle .airgap> \ + --kotsadm-namespace harness-kots \ + --kotsadm-registry /harness \ + --registry-username \ + --registry-password \ + --deploy \ + -n harness + ``` + +#### Upgrade KOTS admin tool + +To upgrade the KOTS admin tool, first you will push images to your private Docker registry. + +1. Run the following command to push the images, replacing the placeholders: + + ``` + kubectl kots admin-console push-images ./.tar.gz \ + /harness \ + --registry-username rw-username \ + --registry-password rw-password + ``` + +2. Next, run the following command on the cluster hosting Harness, replacing the placeholders: + + ``` + kubectl kots admin-console upgrade \ + --kotsadm-registry /harness \ + --registry-username rw-username \ + --registry-password rw-password \ + -n harness + ``` + +### Connected + +The following steps require a secure connection to the Internet, just like the initial installation of Harness. + +#### Upgrade Harness + +* Run the following command on the cluster hosting Harness: + + + ``` + kubectl kots upstream upgrade harness --deploy -n harness + ``` +#### Upgrade KOTS admin tool + +* Run the following command on the cluster hosting Harness: + + ``` + kubectl kots admin-console upgrade -n harness + ``` + +## Monitoring Harness + +Harness monitoring is performed using the built in monitoring tools. 
+ +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-26.png) + +For steps on using the monitoring tools, see [Prometheus](https://kots.io/kotsadm/monitoring/prometheus/) from KOTS. + +## License expired + +If your license has expired, you will see something like the following: + +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-27.png) + +Contact your Harness Customer Success representative or [support@harness.io](mailto:support@harness.io). + +## Bring down Harness cluster for planned downtime + +If you need to bring down the Harness cluster for any reason, scale down the Harness Manager and Verification Service deployments to zero replicas. That is sufficient to stop background tasks and remove connections to the database. + +Optionally, you can scale everything else down as well, but it is not necessary. + +To bring Harness back up, first ensure the Harness MongoDB is scaled up to 3 instances and that Redis is scaled up. Next, scale up the Harness Manager and Verification Service. + +## Logging + +For Harness Self-Managed Enterprise Edition - Kubernetes Cluster, logs are available as standard output. + +Use `kubectl logs` on any pod to see the logs. + +## Notes + +Harness Self-Managed Enterprise Edition installations do not currently support the Harness Helm Delegate. + +### Note: Remove previous kustomization for ingress controller + +**This option is only needed if you have installed Harness Self-Managed Enterprise Edition previously.** If this is a fresh install, you can go directly to [Step 3: Configure Harness](#step-3-configure-harness). + +If you have installed Harness Self-Managed Enterprise Edition previously, you updated Harness manifests using kustomize for the ingress controller. This is no longer required. + +Remove the kustomization as follows: + +1. If you are using a single terminal, close the KOTS admin tool (Ctrl+C). + +2. Ensure `kubectl` is pointing to the cluster. 
+ +3. Run the following command: + + ``` + kubectl kots download --namespace harness --slug harness + ``` + + This example assumes we are installing Harness in a namespace named **harness**. Please change the namespace according to your configuration. This command downloads a folder named **harness** in your current directory. + +4. In the **harness** folder, open the file **kustomization.yaml**: + + ``` + vi harness/overlays/midstream/kustomization.yaml + ``` + +5. In `patchesStrategicMerge` **remove** `nginx-service.yaml`. + +6. Save the file. + +7. Remove the nginx-service.yaml file: + + ``` + rm -rf harness/overlays/midstream/nginx-service.yaml + ``` + +8. Upload Harness: + + ``` + kubectl kots upload --namespace harness --slug harness ./harness + ``` + +9. Open the KOTS admin tool, and then deploy the uploaded version of Harness. + +### Install NextGen on existing FirstGen cluster + +This section assumes you have a Harness Self-Managed Enterprise Edition FirstGen installation set up and running following the step earlier in this guide (beginning with [Step 1: Set up Cluster Requirements](kubernetes-cluster-on-prem-kubernetes-cluster-setup.md#step-1-set-up-cluster-requirements)). + +Now you can add Harness Self-Managed Enterprise Edition NextGen as a new application to your Harness Self-Managed Enterprise Edition FirstGen installation. + +1. Log into your Harness Self-Managed Enterprise Edition FirstGen KOTS admin tool. + +2. Click **Config**. + +3. Record all of the FirstGen settings. You will need to use these exact same settings when setting up Harness Self-Managed Enterprise Edition NextGen. + + If you want to change settings, change them and then record them so you can use them during the Harness Self-Managed Enterprise Edition NextGen installation. + +4. Click **Add a new application**. + +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-28.png) + +5. 
Add the Harness Self-Managed Enterprise Edition NextGen license file you received from Harness Support, and then click **Upload license**. + +![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-29.png) + +6. Depending on whether your Harness Self-Managed Enterprise Edition FirstGen installation is disconnected or connected, follow the installation steps described here: + + * [Option 1: Disconnected Installation (Airgap)](kubernetes-cluster-on-prem-kubernetes-cluster-setup.md#option-1-disconnected-installation-airgap) + * [Option 2: Connected Installation](kubernetes-cluster-on-prem-kubernetes-cluster-setup.md#option-2-connected-installation) + + When you are done, you'll be on the **Configure HarnessNG** page. This is the standard configuration page you followed when you set up Harness Self-Managed Enterprise Edition FirstGen in [Step 3: Configure Harness](kubernetes-cluster-on-prem-kubernetes-cluster-setup.md#step-3-configure-harness). + +7. Enter the exact same configuration options as your Harness Self-Managed Enterprise Edition FirstGen installation. + + Make sure you include your **Advanced Configuration**, including any **Ingress Controller Configurations** settings. + + Make sure you use the exact same **Scheme** you used in Harness Self-Managed Enterprise Edition FirstGen (HTTP or HTTPS). + + The **Load Balancer IP Address** setting does not appear because Harness Self-Managed Enterprise Edition NextGen is simply a new application added onto FirstGen. Harness Self-Managed Enterprise Edition NextGen will use the exact same **Load Balancer IP Address** setting by default. + +8. Click **Continue** at the bottom of the page. + + Harness will perform pre-flight checks. + + ![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-30.png) + +9. Click **Continue**. + + Harness is deployed in a few minutes. If autoscaling is required, then it can take more time. 
+ + When Harness Self-Managed Enterprise Edition NextGen is ready, you will see it listed as **Ready**. + + ![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-31.png) + +10. In a new browser tab, go to the following URL, replacing `<Harness URL>` with the URL you entered in the **Application URL** setting in the KOTS admin console: + + `<Harness URL>/auth/#/signup` + + For example: + + `http://harness.mycompany.com/auth/#/signup` + + The Harness sign-up page appears. + + ![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-32.png) + + Sign up with a new account and then sign in. + + ![](./static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-33.png) + + If you are familiar with Harness, you can skip [Learn Harness' Key Concepts](../../getting-started/learn-harness-key-concepts.md). + + Try the [Quickstarts](../../getting-started/quickstarts.md). + +## Updating Harness NextGen + +**Do not upgrade Harness past 4 major releases.** Instead, upgrade through each interim release until you reach the latest release. A best practice is to upgrade Harness once a month. Use the following steps to update your Harness Self-Managed Enterprise Edition installation. + +The steps are very similar to how you installed Harness initially. + +For more information on updating KOTS and applications, see [Using CLI](https://kots.io/kotsadm/updating/updating-kots-apps/#using-cli) and [Updating the Admin Console](https://kots.io/kotsadm/updating/updating-admin-console/) from KOTS. + +### Disconnected (air gap) + +The following steps require a private registry, just like the initial installation of Harness. + +#### Upgrade Harness NextGen + +1. Download the latest release from Harness. + +2. 
Run the following command on the cluster hosting Harness, replacing the placeholders: + + ``` + kubectl kots upstream upgrade harness-ng \ + --airgap-bundle <bundle file>.airgap \ + --kotsadm-namespace harness-kots \ + --kotsadm-registry <registry url>/harness \ + --registry-username <rw-username> \ + --registry-password <rw-password> \ + --deploy \ + -n harness + ``` + +#### Upgrade KOTS admin tool + +To upgrade the KOTS admin tool, first you will push images to your private Docker registry. + +1. Run the following command to push the images, replacing the placeholders: + + ``` + kubectl kots admin-console push-images ./<bundle file>.tar.gz \ + <registry url>/harness \ + --registry-username <rw-username> \ + --registry-password <rw-password> + ``` + +2. Next, run the following command on the cluster hosting Harness, replacing the placeholders: + + ``` + kubectl kots admin-console upgrade \ + --kotsadm-registry <registry url>/harness \ + --registry-username <rw-username> \ + --registry-password <rw-password> \ + -n harness + ``` + +### Connected + +The following steps require a secure connection to the Internet, just like the initial installation of Harness. 
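The connected upgrade that follows boils down to two CLI calls; if you want to run them as one scripted step, here is a minimal sketch. The `harness` namespace and the `harness-ng` app slug are assumptions carried over from this guide's examples, and `KOTS` defaults to a dry-run echo that only prints the commands; set `KOTS="kubectl kots"` to execute them for real.

```shell
#!/bin/sh
# Sketch: run both connected-upgrade steps in order, stopping on the first failure.
# NAMESPACE and SLUG are assumptions based on the examples in this guide.
set -e
NAMESPACE=harness
SLUG=harness-ng
KOTS="${KOTS:-echo kubectl kots}"   # dry-run by default; set KOTS="kubectl kots" to execute

# Upgrade the Harness application, deploying the new version immediately.
$KOTS upstream upgrade "$SLUG" --deploy -n "$NAMESPACE"

# Then upgrade the KOTS admin console itself.
$KOTS admin-console upgrade -n "$NAMESPACE"
```

In dry-run mode this prints the two upgrade commands, which is a quick way to sanity-check the namespace and slug before an upgrade window.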
+ +#### Upgrade Harness + +* Run the following command on the cluster hosting Harness: + + ``` + kubectl kots upstream upgrade harness-ng --deploy -n harness + ``` + +#### Upgrade KOTS admin tool + +* Run the following command on the cluster hosting Harness: + + ``` + kubectl kots admin-console upgrade -n harness + ``` diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/kubernetes-cluster-self-managed-add-ingress-controller-service-annotations.md b/docs/self-managed-enterprise-edition/deploy-with-kots/kubernetes-cluster-self-managed-add-ingress-controller-service-annotations.md new file mode 100644 index 00000000000..f7898d053a8 --- /dev/null +++ b/docs/self-managed-enterprise-edition/deploy-with-kots/kubernetes-cluster-self-managed-add-ingress-controller-service-annotations.md @@ -0,0 +1,42 @@ +--- +title: Add ingress controller annotations +description: In Harness Self-Managed Enterprise Edition Kubernetes Cluster, you can annotate the Ingress controller to customize its behavior. +# sidebar_position: 2 +helpdocs_topic_id: zbqas64zn8 +helpdocs_category_id: vu99714ib1 +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can customize the behavior of the Nginx ingress controller with annotations. This topic explains how to apply those annotations using the KOTS admin tool. + +### Step 1: Open Advanced Configurations + +In the KOTS admin tool, click **Config**. + +![](./static/kubernetes-cluster-self-managed-add-ingress-controller-service-annotations-00.png) + +Click **Advanced Configurations**. + +![](./static/kubernetes-cluster-self-managed-add-ingress-controller-service-annotations-01.png) + +Click **Advanced Configurations**. + +![](./static/kubernetes-cluster-self-managed-add-ingress-controller-service-annotations-02.png) + +Scroll down to **Nginx Ingress Controller Service Annotations**. 
+ +![](./static/kubernetes-cluster-self-managed-add-ingress-controller-service-annotations-03.png) + +### Step 2: Annotate the ingress controller + +Locate the **Nginx Ingress Controller Service Annotations** section. Type your annotations into the text area. For more information, see [NGINX Ingress Controller Annotations](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/) and [Ingress Controllers](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) in Kubernetes.io. + +Click **Save Config**. + +### Step 3: Deploy + +Click **Version History** in the top nav. + +Click **Deploy** to update the ingress controller. + diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-infrastructure-requirements-04.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-infrastructure-requirements-04.png new file mode 100644 index 00000000000..39a576c5c73 Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-infrastructure-requirements-04.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-infrastructure-requirements-05.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-infrastructure-requirements-05.png new file mode 100644 index 00000000000..cf6fd07cd4a Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-infrastructure-requirements-05.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-infrastructure-requirements-06.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-infrastructure-requirements-06.png new file mode 100644 index 00000000000..b2c0d3992a7 Binary files /dev/null and 
b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-infrastructure-requirements-06.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-07.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-07.png new file mode 100644 index 00000000000..11ac9079a09 Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-07.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-08.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-08.png new file mode 100644 index 00000000000..fcb3d77a17c Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-08.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-09.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-09.png new file mode 100644 index 00000000000..db6b3af9979 Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-09.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-10.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-10.png new file mode 100644 index 00000000000..d1a005be0e4 Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-10.png differ diff --git 
a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-11.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-11.png new file mode 100644 index 00000000000..db6b3af9979 Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-11.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-12.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-12.png new file mode 100644 index 00000000000..d1a005be0e4 Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-12.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-13.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-13.png new file mode 100644 index 00000000000..f000207d839 Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-13.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-14.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-14.png new file mode 100644 index 00000000000..1fb4174e312 Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-14.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-15.png 
b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-15.png new file mode 100644 index 00000000000..81364ddf3d6 Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-15.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-16.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-16.png new file mode 100644 index 00000000000..68c0e5390bb Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-16.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-17.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-17.png new file mode 100644 index 00000000000..88e7eff3d5c Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-17.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-18.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-18.png new file mode 100644 index 00000000000..d199aec58af Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-18.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-19.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-19.png new file mode 100644 
index 00000000000..cf6fd07cd4a Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-19.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-20.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-20.png new file mode 100644 index 00000000000..30c5a12c36f Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-20.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-21.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-21.png new file mode 100644 index 00000000000..91dffcffcfa Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-21.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-22.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-22.png new file mode 100644 index 00000000000..b4b3c042ef2 Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-22.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-23.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-23.png new file mode 100644 index 00000000000..c0cfe4d0b8b Binary files /dev/null and 
b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-23.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-24.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-24.png new file mode 100644 index 00000000000..06c009b2365 Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-24.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-25.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-25.png new file mode 100644 index 00000000000..4564248890d Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-25.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-26.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-26.png new file mode 100644 index 00000000000..3b24355426f Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-26.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-27.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-27.png new file mode 100644 index 00000000000..86c03cde575 Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-27.png differ diff --git 
a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-28.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-28.png new file mode 100644 index 00000000000..c31ada46c09 Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-28.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-29.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-29.png new file mode 100644 index 00000000000..cb14604ae27 Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-29.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-30.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-30.png new file mode 100644 index 00000000000..774fcb5411b Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-30.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-31.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-31.png new file mode 100644 index 00000000000..8e9d9c07f60 Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-31.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-32.png 
b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-32.png new file mode 100644 index 00000000000..c0cfe4d0b8b Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-32.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-33.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-33.png new file mode 100644 index 00000000000..06c009b2365 Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-on-prem-kubernetes-cluster-setup-33.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-self-managed-add-ingress-controller-service-annotations-00.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-self-managed-add-ingress-controller-service-annotations-00.png new file mode 100644 index 00000000000..ccab1513327 Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-self-managed-add-ingress-controller-service-annotations-00.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-self-managed-add-ingress-controller-service-annotations-01.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-self-managed-add-ingress-controller-service-annotations-01.png new file mode 100644 index 00000000000..7e32414815c Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-self-managed-add-ingress-controller-service-annotations-01.png differ diff --git 
a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-self-managed-add-ingress-controller-service-annotations-02.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-self-managed-add-ingress-controller-service-annotations-02.png new file mode 100644 index 00000000000..49b32324435 Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-self-managed-add-ingress-controller-service-annotations-02.png differ diff --git a/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-self-managed-add-ingress-controller-service-annotations-03.png b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-self-managed-add-ingress-controller-service-annotations-03.png new file mode 100644 index 00000000000..17d6c7048e1 Binary files /dev/null and b/docs/self-managed-enterprise-edition/deploy-with-kots/static/kubernetes-cluster-self-managed-add-ingress-controller-service-annotations-03.png differ diff --git a/docs/self-managed-enterprise-edition/introduction/_category_.json b/docs/self-managed-enterprise-edition/introduction/_category_.json new file mode 100644 index 00000000000..c5d0b9db5a6 --- /dev/null +++ b/docs/self-managed-enterprise-edition/introduction/_category_.json @@ -0,0 +1,14 @@ +{ + "label":"Get Started", + "position": 10, + "collapsible":"true", + "collapsed":"true", + "className":"red", + "link":{ + "type":"generated-index", + "title":"Get Started" + }, + "customProps":{ + "helpdocs_category_id":"tvlmjozubh" + } + } \ No newline at end of file diff --git a/docs/self-managed-enterprise-edition/introduction/getting-started-with-self-managed-enterprise-edition.md b/docs/self-managed-enterprise-edition/introduction/getting-started-with-self-managed-enterprise-edition.md new file mode 100644 index 00000000000..4666b9d68fc --- /dev/null +++ 
b/docs/self-managed-enterprise-edition/introduction/getting-started-with-self-managed-enterprise-edition.md @@ -0,0 +1,58 @@ +--- +title: Getting started with Self-Managed Enterprise Edition +description: This document provides the basics on how to create a Harness account and first project. These are the first tasks that come after installing Self-Managed Enterprise Edition. For links to information… +# sidebar_position: 2 +helpdocs_topic_id: 09gjhl0tcw +helpdocs_category_id: tvlmjozubh +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This document provides the basics on how to create a Harness account and first project. These are the first tasks that come after installing Self-Managed Enterprise Edition. + +For links to information about using CI and CD pipelines, as well as the basics on Security Testing Orchestration (STO), see the last section of this documentation. + +### Create your Harness account + +You create your Harness account the first time you use Harness Self-Managed Enterprise Edition. You are automatically assigned the Harness user role, **Account Admin**. + +![](./static/getting-started-with-self-managed-enterprise-edition-01.png) + +**To create your Harness account** + +1. On the Harness **Sign up** page, enter your email address and a secure password. + +2. Click **Sign up**. + + After your account is created, you arrive at the **Continuous Delivery** start page. + + ![](./static/getting-started-with-self-managed-enterprise-edition-02.png) + +### Create your first project + +It takes time to learn how to create complex CI/CD pipelines. You can start by opening a project and inviting collaborators. + +**To create a project** + +1. From the **Continuous Delivery** start page, click **Create Project**. + + ![](./static/getting-started-with-self-managed-enterprise-edition-03.png) + +2. In **Invite Collaborators**, type the names of the collaborators you want to invite. 
+ + ![](./static/getting-started-with-self-managed-enterprise-edition-04.png) + +3. Enter the project name, and optionally enter a description and tags for your project. + + ![](./static/getting-started-with-self-managed-enterprise-edition-05.png) + +4. Click **Setup Later**. + +### Next steps + +To get started with creating pipelines and Helm-based installs in Harness Self-Managed Enterprise Edition, see [Harness Docs](https://docs.harness.io/): + +* For Harness CI, see [CI Pipeline Quickstart](../../continuous-integration/ci-quickstarts/ci-pipeline-quickstart.md). +* For Harness CD, see [CD Pipeline Basics](https://docs.harness.io/article/cqgeblt4uh-cd-pipeline-basics). +* For Harness STO, see [Security Testing Orchestration Basics (Public Preview)](../../security-testing-orchestration/onboard-sto/10-security-testing-orchestration-basics.md). + diff --git a/docs/self-managed-enterprise-edition/introduction/harness-self-managed-enterprise-edition-overview.md b/docs/self-managed-enterprise-edition/introduction/harness-self-managed-enterprise-edition-overview.md new file mode 100644 index 00000000000..a00cc416047 --- /dev/null +++ b/docs/self-managed-enterprise-edition/introduction/harness-self-managed-enterprise-edition-overview.md @@ -0,0 +1,72 @@ +--- +title: Harness Self-Managed Enterprise Edition overview +description: Harness offers the on-premises Harness Self-Managed Enterprise Edition. +# sidebar_position: 2 +helpdocs_topic_id: tb4e039h8x +helpdocs_category_id: tvlmjozubh +helpdocs_is_private: false +helpdocs_is_published: true +--- + +Harness Self-Managed Enterprise Edition is an end-to-end solution for continuous, self-managed delivery. You can install and update Harness Self-Managed Enterprise Edition using online or offline (air-gapped) methods. This topic provides a summary comparison of Harness SaaS and self-managed offerings and describes the options for self-managed delivery. 
+ +### Compare Harness SaaS with self-managed + +The following tables provide a summary of key differences between Harness SaaS and self-managed products. + +**Table 1. Impacts** + + + +| | **Harness SaaS** | **Harness Self-Managed Enterprise Edition** | +| --- | --- | --- | +| **Platform Management** | Harness | Customer | +| **Hardware Cost** | — | ~$25,000 | +| **Hardware Maintenance** | — | Required | +| **Continuous Updates** | Daily | Weekly | +| **Security** | TLS/SSL Outbound | TLS/SSL Outbound | +| **Data Governance** | No Corporate Data Leaves Firewall | No Corporate Data Leaves Firewall | +| **Avg. Onboarding Time** | Days | Weeks | +| **Avg. Site Readiness** | Days | Weeks | +| **Avg. Support Res. Time** | Days | Weeks | + + + +**Table 2. Modules and Features** + +| **Module** | **Helm Install** | **KOTS Install** | **Notes** | +| --- | :-: | :-: | --- | +| Continuous Delivery | **✓** | **✓** | GitOps is not included. | +| Security Testing Orchestration | **✓** | X | | +| Service Reliability Management | **✓** | **✓** | Error Tracking is not included. | +| Continuous Integration | **✓** | **✓** | | +| Feature Flags | X | **✓** | | +| Cloud Costs Management | X | X | | +| Harness Chaos Engineering | X | X | | +| Harness Platform | **✓** | **✓** | Policy as Code (Harness Policy Engine) and Custom Dashboards are not included. | + +### Install on Kubernetes + +Harness Self-Managed Enterprise Edition is installed in a Kubernetes cluster in the following configuration. + +![](./static/harness-self-managed-enterprise-edition-overview-00.png) + +To install Harness Self-Managed Enterprise Edition in a Kubernetes cluster, use the following instructions: + +1. For Self-Managed Enterprise Edition with Helm, see [Install Harness Self-Managed Enterprise Edition Using Helm](../self-managed-helm-based-install/install-harness-self-managed-enterprise-edition-using-helm-ga.md). +2. 
For Self-Managed Enterprise Edition with KOTS, see [Install Self-Managed Enterprise Edition Using KOTS](../deploy-with-kots/installing-self-managed-enterprise-edition-using-kots.md). + +### Install on virtual machine + +Harness Self-Managed Enterprise Edition is installed on virtual machines (VMs) in the following configuration. + + + +| | | +| :-: | :-: | +| **GCP Architecture** | **AWS Architecture** | +| ![](./static/gcp_architecture_smpOverview.png) | ![](./static/aws_architecture_smpOverview.png) | + +To install Harness Self-Managed Enterprise Edition on a virtual machine, see the following topics: + +- [Self-Managed Enterprise Edition > Virtual Machine > Infrastructure](../vm-self-managed-category/virtual-machine-on-prem-infrastructure-requirements.md) +- [Self-Managed Enterprise Edition > Virtual Machine > Installation](../vm-self-managed-category/virtual-machine-on-prem-installation-guide.md) + diff --git a/docs/self-managed-enterprise-edition/introduction/static/aws_architecture_smpOverview.png b/docs/self-managed-enterprise-edition/introduction/static/aws_architecture_smpOverview.png new file mode 100644 index 00000000000..b39a4cf9fcf Binary files /dev/null and b/docs/self-managed-enterprise-edition/introduction/static/aws_architecture_smpOverview.png differ diff --git a/docs/self-managed-enterprise-edition/introduction/static/gcp_architecture_smpOverview.png b/docs/self-managed-enterprise-edition/introduction/static/gcp_architecture_smpOverview.png new file mode 100644 index 00000000000..b0c721e7569 Binary files /dev/null and b/docs/self-managed-enterprise-edition/introduction/static/gcp_architecture_smpOverview.png differ diff --git a/docs/self-managed-enterprise-edition/introduction/static/getting-started-with-self-managed-enterprise-edition-01.png b/docs/self-managed-enterprise-edition/introduction/static/getting-started-with-self-managed-enterprise-edition-01.png new file mode 100644 index 00000000000..5049b7387af Binary files /dev/null and 
b/docs/self-managed-enterprise-edition/introduction/static/getting-started-with-self-managed-enterprise-edition-01.png differ diff --git a/docs/self-managed-enterprise-edition/introduction/static/getting-started-with-self-managed-enterprise-edition-02.png b/docs/self-managed-enterprise-edition/introduction/static/getting-started-with-self-managed-enterprise-edition-02.png new file mode 100644 index 00000000000..67da8072df2 Binary files /dev/null and b/docs/self-managed-enterprise-edition/introduction/static/getting-started-with-self-managed-enterprise-edition-02.png differ diff --git a/docs/self-managed-enterprise-edition/introduction/static/getting-started-with-self-managed-enterprise-edition-03.png b/docs/self-managed-enterprise-edition/introduction/static/getting-started-with-self-managed-enterprise-edition-03.png new file mode 100644 index 00000000000..67da8072df2 Binary files /dev/null and b/docs/self-managed-enterprise-edition/introduction/static/getting-started-with-self-managed-enterprise-edition-03.png differ diff --git a/docs/self-managed-enterprise-edition/introduction/static/getting-started-with-self-managed-enterprise-edition-04.png b/docs/self-managed-enterprise-edition/introduction/static/getting-started-with-self-managed-enterprise-edition-04.png new file mode 100644 index 00000000000..51a8f04909f Binary files /dev/null and b/docs/self-managed-enterprise-edition/introduction/static/getting-started-with-self-managed-enterprise-edition-04.png differ diff --git a/docs/self-managed-enterprise-edition/introduction/static/getting-started-with-self-managed-enterprise-edition-05.png b/docs/self-managed-enterprise-edition/introduction/static/getting-started-with-self-managed-enterprise-edition-05.png new file mode 100644 index 00000000000..b90ec79a1ab Binary files /dev/null and b/docs/self-managed-enterprise-edition/introduction/static/getting-started-with-self-managed-enterprise-edition-05.png differ diff --git 
a/docs/self-managed-enterprise-edition/introduction/static/harness-self-managed-enterprise-edition-overview-00.png b/docs/self-managed-enterprise-edition/introduction/static/harness-self-managed-enterprise-edition-overview-00.png new file mode 100644 index 00000000000..39a576c5c73 Binary files /dev/null and b/docs/self-managed-enterprise-edition/introduction/static/harness-self-managed-enterprise-edition-overview-00.png differ diff --git a/docs/self-managed-enterprise-edition/sample.md b/docs/self-managed-enterprise-edition/sample.md index 02fbc76d4a7..df7f22d2a17 100644 --- a/docs/self-managed-enterprise-edition/sample.md +++ b/docs/self-managed-enterprise-edition/sample.md @@ -2,4 +2,4 @@ Self-Managed Enterprise Edition docs will be available here soon. -You can find existing Self-Managed Enterprise Edition docs at: [https://docs.harness.io/article/09gjhl0tcw-getting-started-with-self-managed-enterprise-edition](https://docs.harness.io/article/09gjhl0tcw-getting-started-with-self-managed-enterprise-edition). +You can find existing Self-Managed Enterprise Edition docs at: [https://docs.harness.io/article/09gjhl0tcw-getting-started-with-self-managed-enterprise-edition](introduction/getting-started-with-self-managed-enterprise-edition.md). 
diff --git a/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/_category_.json b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/_category_.json
new file mode 100644
index 00000000000..01e48197f3d
--- /dev/null
+++ b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/_category_.json
@@ -0,0 +1,14 @@
+{
+  "label":"Self-Managed Enterprise Edition Guide",
+  "position": 50,
+  "collapsible":"true",
+  "collapsed":"true",
+  "className":"red",
+  "link":{
+    "type":"generated-index",
+    "title":"Self-Managed Enterprise Edition Guide"
+  },
+  "customProps":{
+    "helpdocs_category_id":"75ydek1suj"
+  }
+ }
\ No newline at end of file
diff --git a/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/harness-self-managed-support-policy-for-kubernetes.md b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/harness-self-managed-support-policy-for-kubernetes.md
new file mode 100644
index 00000000000..cc2fc02b94c
--- /dev/null
+++ b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/harness-self-managed-support-policy-for-kubernetes.md
@@ -0,0 +1,27 @@
+---
+title: Support for Kubernetes
+description: This topic describes which Kubernetes versions Harness supports for Harness Self-Managed Enterprise Edition.
+# sidebar_position: 2
+helpdocs_topic_id: l5ai0zfw68
+helpdocs_category_id: 75ydek1suj
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This topic describes Harness support for Kubernetes in Harness Self-Managed Enterprise Edition.
+
+### Supported Kubernetes versions
+
+* Self-Managed Enterprise Edition supports Kubernetes v1.23, and additionally supports versions 1.22, 1.21, and 1.20.
+* Effective October 7, 2022, with the release of version 76918, Self-Managed Enterprise Edition no longer supports Kubernetes open-source versions 1.18 and earlier.
+* Self-Managed Enterprise Edition supports other Kubernetes versions you use on a best-effort basis.
+* Harness commits to supporting new minor versions of Kubernetes within three months of the first stable release. For example, if the stable release of 1.24.0 occurs on August 31, Harness adds support for 1.24 by November 30.
+
+### Terms of support
+
+Self-Managed Enterprise Edition does not introduce changes that break compatibility with supported versions of Kubernetes. For example, Self-Managed Enterprise Edition does not use features from Kubernetes version n that do not work in Kubernetes version n-2.
+
+Installation and upgrade preflight checks warn you when you use Kubernetes versions that are not supported.
+
+If you encounter a problem that is related to an incompatibility, you must upgrade your cluster. Harness does not issue patches to accommodate the use of unsupported Kubernetes versions.
+
diff --git a/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/how-to-use-self-signed-certificates-with-self-managed.md b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/how-to-use-self-signed-certificates-with-self-managed.md
new file mode 100644
index 00000000000..c3fa8941073
--- /dev/null
+++ b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/how-to-use-self-signed-certificates-with-self-managed.md
@@ -0,0 +1,351 @@
+---
+title: How to use self-signed certificates with Self-Managed Enterprise Edition
+description: Self-Managed Enterprise Edition supports authorization by self-signed certificate. This document explains how to modify the delegate truststore to use self-signed certificates.
+# sidebar_position: 2
+helpdocs_topic_id: h0yo0jwuo9
+helpdocs_category_id: 75ydek1suj
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Applies to Helm-based installation
+
+Harness Self-Managed Enterprise Edition supports authorization by self-signed certificate. This document explains how to modify the delegate truststore for the use of self-signed certificates in the self-managed environment.
+
+Harness Delegate makes outbound connections to the resources you specify—for example, artifact servers and verification providers. These services typically use public certificates that are included in the operating system or the JRE. The self-signed certificates that you use, however, must be added to the delegate. The process that this document describes is supported for use with the legacy delegate in combination with the Harness CD, CI, and STO modules.
+
+**IMPORTANT**
+
+* For Golang 1.15 and later, the self-signed certificate must include a Subject Alternative Name (SAN). For more information, see the JFrog [knowledge base](https://jfrog.com/knowledge-base/general-what-should-i-do-if-i-get-an-x509-certificate-relies-on-legacy-common-name-field-error/).
+* For truststores used with Istio, the size of the RSA key must not exceed 2048 bits.
+
+### Create the truststore
+
+1. Generate a self-signed certificate and save it to a file named DigiCertGlobalRootCA.pem.
+2. Add the DigiCertGlobalRootCA.pem trusted certificate to the trustStore.jks truststore:
+
+   ```
+   keytool -import -file DigiCertGlobalRootCA.pem -alias DigiCertRootCA -keystore trustStore.jks
+   ```
+
+   Repeat this command for each certificate you want to include in the truststore.
+
+3. Create a Kubernetes secret from the truststore:
+
+   ```
+   kubectl create secret -n harness-delegate-ng generic mysecret --from-file harness_trustStore.jks=trustStore.jks
+   ```
+
+### Create the secret
+
+1. Copy the following YAML to your editor.
+
+   ```
+   apiVersion: v1
+   kind: Secret
+   metadata:
+     name: addcerts
+     namespace: harness-delegate-ng
+   type: Opaque
+   stringData:
+     ca.bundle: |
+       -----BEGIN CERTIFICATE-----
+       XXXXXXXXXXXXXXXXXXXXXXXXXXX
+       -----END CERTIFICATE-----
+       -----BEGIN CERTIFICATE-----
+       XXXXXXXXXXXXXXXXXXXXXXXXXXX
+       -----END CERTIFICATE-----
+   ```
+
+2. Add your certificates to the `ca.bundle` field.
+
+The `XXXXXXXXXXXXXXXXXXXXXXXXXXX` placeholder indicates the position for the certificate body. Enclose each certificate in `BEGIN CERTIFICATE` and `END CERTIFICATE` markers.
+
+3. Save the file as addcerts.yaml. Apply the manifest to your cluster.
+
+   ```
+   kubectl apply -f addcerts.yaml
+   ```
+
+### Modify the delegate YAML
+
+1. Open the harness-delegate.yml file in your editor.
+2. In the `template.spec` section, add the following security context:
+
+   ```
+   securityContext:
+     fsGroup: 1001
+   ```
+
+3. Locate the `JAVA_OPTS` environment variable. Set `value` as follows.
+
+   ```
+   value: "-Xms64M -Djavax.net.ssl.trustStore=/cacerts/harness_trustStore.jks -Djavax.net.ssl.trustStorePassword=*password*"
+   ```
+
+4. Replace *password* with the password you created for the truststore.
+
+   **Skip step 5 if your delegates do not run with Harness CI or STO**
+
+5. CI builds require the addition of the following environment variables to the `env` field:
+
+   ```
+   - name: CI_MOUNT_VOLUMES
+     value: /tmp/ca.bundle:/tmp/ca.bundle,/tmp/ca.bundle:/some/other/path/a.crt,/tmp/ca.bundle:/other/path/b.crt,/tmp/ca.bundle:/path/to/ca.bundle
+   - name: ADDITIONAL_CERTS_PATH
+     value: /tmp/ca.bundle
+   ```
+
+6. Locate the template container `spec`. Add the following volume mounts to the `spec.containers` field.
+
+   ```
+   volumeMounts:
+   - mountPath: /cacerts
+     name: custom-truststore
+     readOnly: true
+   - name: certvol
+     mountPath: /tmp/ca.bundle
+     subPath: ca.bundle
+   ```
+
+7. 
Locate the template `spec` and add the following volumes:
+
+   ```
+   volumes:
+   - name: custom-truststore
+     secret:
+       secretName: mysecret
+       defaultMode: 400
+   - name: certvol
+     secret:
+       secretName: addcerts
+       items:
+       - key: ca.bundle
+         path: ca.bundle
+   ```
+
+**Skip step 8 if your delegates do not run with Istio service mesh**
+
+8. In the `env` list of environment variables, locate the `POLL_FOR_TASKS` variable and set its value to `true`:
+
+   ```
+   - name: POLL_FOR_TASKS
+     value: "true"
+   ```
+
+   This value enables polling for tasks.
+
+9. Save and apply the modified manifest:
+
+   ```
+   kubectl apply -f harness-delegate.yml
+   ```
+
+### Example: Modified harness-delegate.yml with truststore
+
+The following Kubernetes manifest provides an example of a delegate manifest modified for the use of self-signed certificates:
+
+Example harness-delegate.yml
+
+```
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: harness-delegate-ng
+
+---
+
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: harness-delegate-ng-cluster-admin
+subjects:
+  - kind: ServiceAccount
+    name: default
+    namespace: harness-delegate-ng
+roleRef:
+  kind: ClusterRole
+  name: cluster-admin
+  apiGroup: rbac.authorization.k8s.io
+
+---
+
+apiVersion: v1
+kind: Secret
+metadata:
+  name: delegatenew-proxy
+  namespace: harness-delegate-ng
+type: Opaque
+data:
+  # Enter base64 encoded username and password, if needed
+  PROXY_USER: ""
+  PROXY_PASSWORD: ""
+
+---
+
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  labels:
+    harness.io/name: delegatenew
+  name: delegatenew
+  namespace: harness-delegate-ng
+spec:
+  replicas: 4
+  podManagementPolicy: Parallel
+  selector:
+    matchLabels:
+      harness.io/name: delegatenew
+  serviceName: ""
+  template:
+    metadata:
+      labels:
+        harness.io/name: delegatenew
+    spec:
+      securityContext:
+        fsGroup: 1001
+      containers:
+      - image: docker.io/harness/delegate:latest
+        imagePullPolicy: Always
+        name: harness-delegate-instance
+        ports:
+ - containerPort: 8080 + resources: + limits: + cpu: "2" + memory: "2048Mi" + requests: + cpu: "2" + memory: "2048Mi" + readinessProbe: + exec: + command: + - test + - -s + - delegate.log + initialDelaySeconds: 20 + periodSeconds: 10 + livenessProbe: + exec: + command: + - bash + - -c + - '[[ -e /opt/harness-delegate/msg/data/watcher-data && $(($(date +%s000) - $(grep heartbeat /opt/harness-delegate/msg/data/watcher-data | cut -d ":" -f 2 | cut -d "," -f 1))) -lt 300000 ]]' + initialDelaySeconds: 240 + periodSeconds: 10 + failureThreshold: 2 + env: + - name: JAVA_OPTS + value: "-Xms64M -Djavax.net.ssl.trustStore=/cacerts/harness_trustStore.jks -Djavax.net.ssl.trustStorePassword=changeit" + - name: ACCOUNT_ID + value: Sfeh1T94QsyLWatE8unScg + - name: MANAGER_HOST_AND_PORT + value: https://smp1.qa.harness.io + - name: DEPLOY_MODE + value: KUBERNETES_ONPREM + - name: DELEGATE_NAME + value: delegatenew + - name: DELEGATE_TYPE + value: "KUBERNETES" + - name: DELEGATE_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: CI_MOUNT_VOLUMES + value: /tmp/ca.bundle:/tmp/ca.bundle,/tmp/ca.bundle:/some/other/path/a.crt,/tmp/ca.bundle:/other/path/b.crt,/tmp/ca.bundle:/path/to/ca.bundle + - name: ADDITIONAL_CERTS_PATH + value: /tmp/ca.bundle + - name: INIT_SCRIPT + value: "" + - name: DELEGATE_DESCRIPTION + value: "" + - name: DELEGATE_TAGS + value: "" + - name: NEXT_GEN + value: "true" + - name: DELEGATE_TOKEN + value: ceb6ce7258713af4be089fdcbc2e2248 + - name: WATCHER_STORAGE_URL + value: https://smp1.qa.harness.io/storage/wingswatchers + - name: WATCHER_CHECK_LOCATION + value: watcherprod.txt + - name: DELEGATE_STORAGE_URL + value: https://smp1.qa.harness.io/storage/wingsdelegates + - name: DELEGATE_CHECK_LOCATION + value: delegateprod.txt + - name: HELM_DESIRED_VERSION + value: "" + - name: JRE_VERSION + value: 11.0.14 + - name: HELM3_PATH + value: "" + - name: HELM_PATH + value: "" + - name: KUSTOMIZE_PATH + value: "" + - name: KUBECTL_PATH + value: 
"" + - name: POLL_FOR_TASKS + value: "false" + - name: ENABLE_CE + value: "false" + - name: PROXY_HOST + value: "" + - name: PROXY_PORT + value: "" + - name: PROXY_SCHEME + value: "" + - name: NO_PROXY + value: "" + - name: PROXY_MANAGER + value: "true" + - name: PROXY_USER + valueFrom: + secretKeyRef: + name: delegatenew-proxy + key: PROXY_USER + - name: PROXY_PASSWORD + valueFrom: + secretKeyRef: + name: delegatenew-proxy + key: PROXY_PASSWORD + - name: GRPC_SERVICE_ENABLED + value: "true" + - name: GRPC_SERVICE_CONNECTOR_PORT + value: "8080" + volumeMounts: + - mountPath: /cacerts + name: custom-truststore + readOnly: true + - name: certvol + mountPath: /tmp/ca.bundle + subPath: ca.bundle + restartPolicy: Always + volumes: + - name: custom-truststore + secret: + secretName: mysecret + defaultMode: 400 + - name: certvol + secret: + secretName: addcerts + items: + - key: ca.bundle + path: ca.bundle +​ +--- +​ +apiVersion: v1 +kind: Service +metadata: + name: delegate-service + namespace: harness-delegate-ng +spec: + type: ClusterIP + selector: + harness.io/name: delegatenew + ports: + - port: 8080 + +``` diff --git a/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/monitor-harness-on-prem.md b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/monitor-harness-on-prem.md new file mode 100644 index 00000000000..43ed2871684 --- /dev/null +++ b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/monitor-harness-on-prem.md @@ -0,0 +1,133 @@ +--- +title: Monitoring on-premise installations +description: Monitor your Harness On-Prem installation. +# sidebar_position: 2 +helpdocs_topic_id: ho0c1at9nv +helpdocs_category_id: 75ydek1suj +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can monitor your Harness Self-Managed Enterprise Edition installation and receive alerts on metrics such as CPU, memory, and disk usage. 
+
+### Monitoring overview
+
+The Harness Self-Managed Enterprise Edition monitoring options available depend on whether you are running Harness Self-Managed Enterprise Edition - Virtual Machine or Harness Self-Managed Enterprise Edition - Kubernetes Cluster.
+
+Harness Self-Managed Enterprise Edition - Virtual Machine comes with built-in monitoring using Prometheus, Grafana, and Alertmanager, but Harness Self-Managed Enterprise Edition - Kubernetes Cluster requires that you set up monitoring on your own.
+
+### Monitoring Harness Self-Managed Enterprise Edition - Virtual Machine
+
+Monitoring is included in Harness Self-Managed Enterprise Edition - Virtual Machine by default.
+
+The KOTS admin tool for a running version of Harness Self-Managed Enterprise Edition - Virtual Machine displays Prometheus monitoring:
+
+![](./static/monitor-harness-on-prem-07.png)
+
+When you installed Harness Self-Managed Enterprise Edition - Virtual Machine, you were provided with Prometheus, Grafana, and Alertmanager ports and passwords in the output of the installer. For example:
+
+
+```
+The UIs of Prometheus, Grafana and Alertmanager have been exposed on NodePorts 30900, 30902 and 30903 respectively.
+To access Grafana use the generated user: xxxxx password of admin: xxxxx.
+```
+To view these addresses, log into the VM running Harness, and then view the Kubernetes services running in the `monitoring` namespace:
+
+
+```
+kubectl get svc -n monitoring
+```
+The output will be something like this:
+
+
+```
+NAME                    TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
+alertmanager-main       NodePort    10.96.2.240                 9093:30903/TCP               282d
+alertmanager-operated   ClusterIP   None                        9093/TCP,9094/TCP,9094/UDP   282d
+grafana                 NodePort    10.96.2.252                 3000:30902/TCP               282d
+kube-state-metrics      ClusterIP   None                        8443/TCP,9443/TCP            282d
+node-exporter           ClusterIP   None                        9100/TCP                     282d
+prometheus-adapter      ClusterIP   10.96.1.45                  443/TCP                      282d
+prometheus-k8s          NodePort    10.96.2.94                  9090:30900/TCP               282d
+prometheus-operated     ClusterIP   None                        9090/TCP                     282d
+prometheus-operator     ClusterIP   None                        8080/TCP                     282d
+```
+#### Prometheus
+
+The Prometheus port number is taken from the `prometheus-k8s` service (in this example `30900`).
+
+Combine that port number with the public IP for Harness Self-Managed Enterprise Edition and you have the Prometheus endpoint.
+
+If you have a load balancer configured, configure it to support the `prometheus-k8s` port number.
+
+In the KOTS admin tool, in **Application**, click **Configure Prometheus Address**.
+
+In **Configure graphs**, enter the URL using the public IP and the Prometheus port number.
+
+![](./static/monitor-harness-on-prem-08.png)
+
+Click **Save**. The graphs appear.
+
+#### Grafana
+
+The Grafana port is listed by running `kubectl get svc -n monitoring`:
+
+
+```
+grafana   NodePort   10.96.2.252   3000:30902/TCP
+```
+Combine that port number with the public IP for Harness Self-Managed Enterprise Edition and you have the Grafana endpoint. For example `http://35.233.239.15:30902`.
+
+Log into Grafana using the generated username and password you received when you installed Harness Self-Managed Enterprise Edition:
+
+
+```
+To access Grafana use the generated user: xxxxx password of admin: xxxxx.
+```
+If you do not have the username and password, log into the VM hosting Harness Self-Managed Enterprise Edition and run the following:
+
+
+```
+kubectl get secrets grafana-admin -n monitoring -o yaml
+```
+Once you are logged in, go to Dashboards and click a default dashboard or create a new one.
+
+![](./static/monitor-harness-on-prem-09.png)
+
+For example, open the **Kubernetes / Pods** dashboard.
+
+![](./static/monitor-harness-on-prem-10.png)
+
+See the [Grafana docs](https://grafana.com/docs/) for information on creating dashboards.
+
+For information on querying Prometheus, see [Querying Prometheus](https://prometheus.io/docs/prometheus/latest/querying/basics/).
+
+#### Alertmanager
+
+The Alertmanager port is listed by running `kubectl get svc -n monitoring`:
+
+
+```
+alertmanager-main   NodePort   10.96.2.240   9093:30903/TCP
+```
+Combine that port number with the public IP for Harness Self-Managed Enterprise Edition and you have the Alertmanager endpoint. For example `http://35.233.239.15:30903`.
+
+See [Alerting Rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) from Prometheus for details on setting up alerts.
+
+### Monitoring Harness Self-Managed Enterprise Edition - Kubernetes Cluster
+
+Harness does not provide default monitoring for Harness Self-Managed Enterprise Edition - Kubernetes Cluster.
+
+You can deploy a Prometheus server and Grafana to monitor Harness Self-Managed Enterprise Edition. For steps on setting up monitoring using Prometheus, Grafana, and Alertmanager, see [Prometheus](https://kots.io/kotsadm/monitoring/prometheus/) in the KOTS docs.
+
+If you have an existing Prometheus setup, in the KOTS admin tool you can click **Configure Prometheus Address** and then enter the Prometheus URL endpoint.
+
+### Availability monitoring
+
+The following table shows the two available URL endpoints. These are microservices with external endpoints (they have Ingress configured by default).
+
+In these examples, `<vanity URL>` represents your vanity URL (for example, `mycompany.harness.io`). If your load balancer directs internal traffic for `app.harness.io`, then the URLs can use that address.
+
+
+
+| **Service Name** | **Endpoint** | **Response** |
+| --- | --- | --- |
+| Verification | `https://<vanity URL>/verification/health` | `{"metaData":{},"resource":"healthy","responseMessages":[]}` |
+| NextGen Manager | `https://<vanity URL>/ng/api/health` | `{"status":"SUCCESS","data":"healthy","metaData":null,"correlationId":"a38c51ac-07ec-4596-b40b-4cc9487f8506"}` |
+
+The following methods can be used for monitoring other Harness Self-Managed Enterprise Edition microservices:
+
+* MongoDB: There are many ways to monitor MongoDB instances. For example, you can monitor your MongoDB database with Grafana and Prometheus. See the article [MongoDB Monitoring with Grafana & Prometheus](https://devconnected.com/mongodb-monitoring-with-grafana-prometheus/) for a summary.
+* Disk/Memory: Use the [Kubernetes pod dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/).
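A quick availability probe can be scripted against the two endpoints in the table above. The sketch below checks a health response for the `"healthy"` marker; the JSON responses are inlined here for illustration — in practice you would fetch them with `curl -s` against your vanity URL:

```shell
# check_health: succeed if a health-endpoint JSON response reports "healthy".
# Both response formats in the table contain the literal string "healthy"
# (as "resource" for Verification, as "data" for NextGen Manager).
check_health() {
  printf '%s' "$1" | grep -q '"healthy"'
}

# Inlined sample responses for illustration; in practice:
#   resp=$(curl -s "https://mycompany.harness.io/ng/api/health")
verification_resp='{"metaData":{},"resource":"healthy","responseMessages":[]}'
ng_resp='{"status":"SUCCESS","data":"healthy","metaData":null,"correlationId":"a38c51ac"}'

check_health "$verification_resp" && echo "Verification: healthy"
check_health "$ng_resp" && echo "NextGen Manager: healthy"
```

A non-zero exit status from `check_health` can feed whatever alerting you already have (for example, an Alertmanager webhook or a cron job that pages on failure).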
+ diff --git a/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/monitor-harness-on-prem-07.png b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/monitor-harness-on-prem-07.png new file mode 100644 index 00000000000..2f3ec944035 Binary files /dev/null and b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/monitor-harness-on-prem-07.png differ diff --git a/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/monitor-harness-on-prem-08.png b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/monitor-harness-on-prem-08.png new file mode 100644 index 00000000000..edf55e37237 Binary files /dev/null and b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/monitor-harness-on-prem-08.png differ diff --git a/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/monitor-harness-on-prem-09.png b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/monitor-harness-on-prem-09.png new file mode 100644 index 00000000000..58e18509bae Binary files /dev/null and b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/monitor-harness-on-prem-09.png differ diff --git a/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/monitor-harness-on-prem-10.png b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/monitor-harness-on-prem-10.png new file mode 100644 index 00000000000..8078d68eafa Binary files /dev/null and b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/monitor-harness-on-prem-10.png differ diff --git a/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/virtual-machine-on-prem-backup-and-recovery-00.png 
b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/virtual-machine-on-prem-backup-and-recovery-00.png new file mode 100644 index 00000000000..ef3274a915c Binary files /dev/null and b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/virtual-machine-on-prem-backup-and-recovery-00.png differ diff --git a/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/virtual-machine-on-prem-backup-and-recovery-01.png b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/virtual-machine-on-prem-backup-and-recovery-01.png new file mode 100644 index 00000000000..9d6d3455825 Binary files /dev/null and b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/virtual-machine-on-prem-backup-and-recovery-01.png differ diff --git a/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/virtual-machine-on-prem-backup-and-recovery-02.png b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/virtual-machine-on-prem-backup-and-recovery-02.png new file mode 100644 index 00000000000..1bf4355e892 Binary files /dev/null and b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/virtual-machine-on-prem-backup-and-recovery-02.png differ diff --git a/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/virtual-machine-on-prem-backup-and-recovery-03.png b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/virtual-machine-on-prem-backup-and-recovery-03.png new file mode 100644 index 00000000000..d813d57d6b5 Binary files /dev/null and b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/virtual-machine-on-prem-backup-and-recovery-03.png differ diff --git 
a/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/virtual-machine-on-prem-backup-and-recovery-04.png b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/virtual-machine-on-prem-backup-and-recovery-04.png new file mode 100644 index 00000000000..a93c85156e4 Binary files /dev/null and b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/virtual-machine-on-prem-backup-and-recovery-04.png differ diff --git a/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/virtual-machine-on-prem-backup-and-recovery-05.png b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/virtual-machine-on-prem-backup-and-recovery-05.png new file mode 100644 index 00000000000..0df3d4e8195 Binary files /dev/null and b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/virtual-machine-on-prem-backup-and-recovery-05.png differ diff --git a/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/virtual-machine-on-prem-backup-and-recovery-06.png b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/virtual-machine-on-prem-backup-and-recovery-06.png new file mode 100644 index 00000000000..a93c85156e4 Binary files /dev/null and b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/static/virtual-machine-on-prem-backup-and-recovery-06.png differ diff --git a/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/virtual-machine-on-prem-backup-and-recovery.md b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/virtual-machine-on-prem-backup-and-recovery.md new file mode 100644 index 00000000000..ce294cb03f5 --- /dev/null +++ b/docs/self-managed-enterprise-edition/self-managed-enterprise-edition-guide/virtual-machine-on-prem-backup-and-recovery.md @@ -0,0 +1,128 @@ +--- +title: Backup and recover 
virtual machine installations with KOTS +description: You can back up and recover Self-Managed Enterprise Edition - Virtual Machine using KOTS snapshots. KOTS has documented snapshots extensively in Snapshots. This topic provides a summary of the steps… +# sidebar_position: 2 +helpdocs_topic_id: 1jqycx6omm +helpdocs_category_id: 75ydek1suj +helpdocs_is_private: false +helpdocs_is_published: true +--- + +You can back up and recover Self-Managed Enterprise Edition - Virtual Machine using KOTS snapshots. + +KOTS has documented snapshots extensively in [Snapshots](https://kots.io/kotsadm/snapshots/overview/). + +This topic provides a summary of the steps involved in backup and recovery, and provides details on storage and Self-Managed Enterprise Edition - Virtual Machine version requirements. + +### Overview + +You can back up and recover Self-Managed Enterprise Edition - Virtual Machine using [KOTS snapshots](https://kots.io/kotsadm/snapshots/overview/). + +There are two types of snapshots: + +* **Full snapshots:** a snapshot of all objects including setup data and applications. + + Full snapshots are useful in disaster recovery and rolling back to previous versions of Self-Managed Enterprise Edition - Virtual Machine. + + Full snapshots can restore fully or partially. +* **Partial snapshots:** a snapshot of application data only. + + Partial snapshots are useful for rolling back to previously installed versions of Self-Managed Enterprise Edition - Virtual Machine. + +You can automate both types of snapshots. + +### Requirements + +Ensure you meet the Self-Managed Enterprise Edition - Virtual Machine version and other requirements below. + +#### Self-Managed Enterprise Edition version required + +Automated backup and recovery is supported in the following Self-Managed Enterprise Edition - Virtual Machine versions: + +* 670XX and later. 
+
+#### Storage required
+
+Snapshots are stored in an AWS S3 bucket or AWS S3-compatible storage (Harness recommends either), or in Internal Storage.
+
+![](./static/virtual-machine-on-prem-backup-and-recovery-00.png)
+
+You select and set up the storage option before creating a snapshot. This setup is described below.
+
+#### Velero is already installed
+
+Velero is installed and configured automatically in Self-Managed Enterprise Edition - Virtual Machine installations. You do not have to install it.
+
+### Step 1: Choose storage destination
+
+You have the following options for snapshot storage:
+
+* AWS S3 bucket
+* AWS S3-compatible storage
+* Internal Storage
+
+Harness recommends using an S3 bucket or S3-compatible storage to accommodate large files.
+
+A Self-Managed Enterprise Edition - Virtual Machine **Production** installation has a minimum disk space requirement of 400 GB, so there should be enough space for Internal Storage. Still, Harness recommends an S3 bucket or S3-compatible storage to avoid any issues.
+
+Storage destinations are described in the [KOTS documentation](https://kots.io/kotsadm/snapshots/storage-destinations/).
+
+1. Log into the Self-Managed Enterprise Edition - Virtual Machine KOTS admin tool.
+2. Click **Snapshots**.
+3. Click **Settings & Schedule**.
+4. In **Storage**, select the storage method to use.
+
+   ![](./static/virtual-machine-on-prem-backup-and-recovery-01.png)
+
+5. For **Amazon S3** and **Other S3-Compatible Storage**, enter the location and credentials.
+For details on these settings, see [Compatible Backend Stores](https://kots.io/kotsadm/snapshots/storage-destinations/) from KOTS.
+6. Click **Update storage settings**. The settings are updated.
+
+Now that you have storage for your snapshots, you can create the snapshots.
+
+### Step 2: Create full or partial snapshots
+
+1. In the KOTS admin tool, click **Full Snapshots (Instance)** or **Partial Snapshots (Application)**.
+2. Click **Start a snapshot**. 
The snapshot begins. You can see its progress:
+
+   ![](./static/virtual-machine-on-prem-backup-and-recovery-02.png)
+
+3. Click the **more options** (**︙**) button. The details of the snapshot appear.
+
+![](./static/virtual-machine-on-prem-backup-and-recovery-03.png)
+
+That's it! You now have a snapshot you can use for recovery.
+
+### Option: Automating snapshots
+
+Scheduling snapshots is covered in the KOTS [Schedules](https://kots.io/kotsadm/snapshots/schedule/) documentation. Here is a summary of the steps:
+
+1. To automate snapshots, click **Settings & Schedule**.
+2. Select **Full snapshots** or **Partial snapshots**.
+3. Select **Enable automatic scheduled snapshots**.
+4. In **Schedule**, select the schedule for the snapshot.
+5. In **Retention policy**, define how long to keep snapshots.
+
+The retention policy is described by KOTS:
+
+
+> The default retention period for snapshots is 1 month. Setting the retention only affects snapshots created after the time of the change. For example, if an existing snapshot had a retention of 1 year and is already 6 months old, and a user then uses the UI to change the retention to 1 month, the existing snapshot will still be around for another 6 months.
+
+6. Click **Update schedule**. The schedule is updated. To disable it, deselect **Enable automatic scheduled snapshots**.
+
+### Option: Restore from a full snapshot
+
+You can perform a full or partial restore from a Full Snapshot. This is why KOTS recommends Full Snapshots.
+
+1. Click **Full Snapshots (Instance)**.
+2. Click the restore button.
+
+   ![](./static/virtual-machine-on-prem-backup-and-recovery-04.png)
+
+   **Restore from backup** appears.
+
+   ![](./static/virtual-machine-on-prem-backup-and-recovery-05.png)
+
+3. Select **Full restore** or **Partial restore**.
+4. For **Full restore**, do the following:
+   a. Copy the provided command and run it on any master node. You might need to log into the admin tool again after the restore.
+   b. Click **Ok, got it**.
+5. 
For **Partial restore**, do the following:
+   a. Enter the slug **harness**.
+   b. Click **Confirm and restore**. You might need to log into the admin tool again after the restore.
+
+### Option: Restore from a partial snapshot
+
+You can perform a partial restore using a Full or Partial Snapshot. The Full Snapshot steps are described above.
+
+1. Click **Partial snapshots**.
+2. Click the restore button.
+
+   ![](./static/virtual-machine-on-prem-backup-and-recovery-06.png)
+
+   **Restore from Partial backup (Application)** appears.
+3. Enter the slug **harness**.
+4. Click **Confirm and restore**. You might need to log into the admin tool again after the restore.
+
+### Notes
+
+See the [KOTS documentation](https://kots.io/kotsadm/snapshots/overview/) for more details on backup and recovery settings.
+
diff --git a/docs/self-managed-enterprise-edition/self-managed-helm-based-install/_category_.json b/docs/self-managed-enterprise-edition/self-managed-helm-based-install/_category_.json
new file mode 100644
index 00000000000..cdd2b1e518b
--- /dev/null
+++ b/docs/self-managed-enterprise-edition/self-managed-helm-based-install/_category_.json
@@ -0,0 +1,14 @@
+{
+  "label":"Install with Helm",
+  "position": 20,
+  "collapsible":"true",
+  "collapsed":"true",
+  "className":"red",
+  "link":{
+    "type":"generated-index",
+    "title":"Install with Helm"
+  },
+  "customProps":{
+    "helpdocs_category_id":"66qbyn7ugu"
+  }
+ }
\ No newline at end of file
diff --git a/docs/self-managed-enterprise-edition/self-managed-helm-based-install/harness-helm-chart.md b/docs/self-managed-enterprise-edition/self-managed-helm-based-install/harness-helm-chart.md
new file mode 100644
index 00000000000..b3947f53a9f
--- /dev/null
+++ b/docs/self-managed-enterprise-edition/self-managed-helm-based-install/harness-helm-chart.md
@@ -0,0 +1,114 @@
+---
+title: Harness Helm chart for Self-Managed Enterprise Edition
+description: The core modules and components, as well as the optional dependencies and additions, that are 
included in the Harness Helm chart.
+# sidebar_position: 2
+helpdocs_topic_id: nsx1d4z86l
+helpdocs_category_id: 66qbyn7ugu
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+Harness recommends the Helm package manager for installing and deploying Self-Managed Enterprise Edition. Helm offers benefits including:
+
+* Declarative and dynamic application management
+* Built-in scalability
+* Chart reuse across environments
+* Repeatable results you can automate
+
+The Harness Helm chart packages the core modules and components that are required to operate Harness at scale.
+
+For instructions on installing Self-Managed Enterprise Edition using Helm, see Install Harness Self-Managed Enterprise Edition Using Helm.
+
+## Supported modules
+
+Self-Managed Enterprise Edition supports the installation of the following Harness modules by Helm chart.
+
+| **Module** | **Helm Install** | **Notes** |
+| :-- | :-: | :-- |
+| Continuous Delivery | ✓ | GitOps is not included. |
+| Security Testing Orchestration | ✓ | |
+| Service Reliability Management | ✓ | Error Tracking is not included. |
+| Continuous Integration | ✓ | |
+| Feature Flags | X | |
+| Cloud Cost Management | X | |
+| Harness Chaos Engineering | X | |
+| Harness Platform | ✓ | Policy as Code (Harness Policy Engine) and Custom Dashboards are not included. |
+
+## Requirements
+
+The following infrastructure is required to install Self-Managed Enterprise Edition using Helm.
+
+Each node requires 8 vCPUs and a minimum of 12 GB of memory.
+
+### Production environment
+
+The production environment requires the following resources.
+ +| **Modules** | **Pods** | **CPU** | **Memory (GB)** | **Storage (GB)** | +| --- | --- | --- | --- | --- | +| CD (including Platform) | 38 | 49.3 | 123.2 | 1070 | +| CD and CI | 40 | 51.3 | 135.2 | 1070 | +| CD and STO | 42 | 52.3 | 130.2 | 1070 | +| CD, CI and STO | 44 | 54.3 | 142.2 | 1070 | + +### Development environment + +The development environment requires the following resources. + +| **Modules** | **Pods** | **CPU** | **Memory (GB)** | **Storage (GB)** | +| --- | --- | --- | --- | --- | +| CD (including Platform) | 20 | 22.8 | 58.4 | 530 | +| CD and CI | 21 | 23.8 | 64.4 | 530 | +| CD and STO | 22 | 24.3 | 61.9 | 530 | +| CD, CI and STO | 23 | 25.3 | 67.9 | 530 | + +## Included components + +Harness Helm chart includes the following components. + +**Table 1. Platform components for Continuous Delivery** + +| **Component** | **Description** | +| :-- | :-- | +| **Access control** | Provides pipelines with access controls including Kubernetes Role-Based Access Control (RBAC). | +| **Data capture** | Responsible for the capture of data related to the operation of Harness Pipelines, including but not limited to events, tasks, metrics, and logs. | +| **CV Nextgen** | Provides continuous verification (CV) services to Pipeline components including deployments, services, and logs. Aggregates data from multiple providers, including performance metrics, from monitoring activities for dashboard presentation. | +| **Gateway** | Manages application gateway services across Harness Pipelines. | +| **Harness Manager** | Responsible for the analysis and presentation of actionable data from the end-to-end Harness Pipeline in an administrative user interface. | +| **Harness Storage Proxy** | Supplies proxy services for storage. | +| **LE Nextgen** | Supplies Harness Learning Engine (LE), a machine-learning component used to fine-tune Pipelines and identify and flag anomalies. | +| **Log service** | Provides frontend logging services to Harness Pipelines. 
|
+| **MinIO** | A distributed object storage system providing Kubernetes-based, S3-compatible storage. You can use MinIO to store runtime logs (build logs) for Harness Pipelines. |
+| **MongoDB** | A NoSQL database offering high-volume storage of data represented as key-value pairs contained in documents and collections of documents. |
+| **NextGen UI** | Provides the user interface for Harness NextGen. |
+| **NgAuth UI** | User interface component for the AngularJS `ng-auth` authentication service. |
+| **NgManager** | Provides the NextGen Harness Manager. |
+| **Pipeline service** | Supports creating a pipeline. |
+| **Platform service** | Represents the Harness Platform service. |
+| **Redis** | Provides services for Redis, an in-memory data structure store. |
+| **Scm service** | Provides source code management services. |
+| **Template service** | Provides Harness templates to enable the design of reusable content, logic, and parameters in Pipelines. |
+| **Test Intelligence Service** | Provides the Test Intelligence service. |
+| **TimescaleDB** | Provides the TimescaleDB time-series SQL database. |
+
+**Table 2. Module components**
+
+The following components are included in addition to the Platform components for Continuous Delivery listed in Table 1.
+
+| **Component** | **Description** |
+| :-- | :-- |
+| **Ci-manager** | Provides the manager service for Continuous Integration. |
+| **Sto-core** | Enables the creation and management of Harness Security Testing Orchestration. |
+| **Sto-manager** | Provides core services for Harness Security Testing Orchestration. |
+
+**Table 3. Optional dependencies**
+
+| **Dependency** | **Description** |
+| :-- | :-- |
+| **Ingress Controller** | Supported by default. |
+| **Istio** | [Istio](https://istio.io/latest/about/service-mesh/) is an open-source service mesh that supports the Kubernetes Ingress Controller.
| + diff --git a/docs/self-managed-enterprise-edition/self-managed-helm-based-install/install-harness-self-managed-enterprise-edition-using-helm-ga.md b/docs/self-managed-enterprise-edition/self-managed-helm-based-install/install-harness-self-managed-enterprise-edition-using-helm-ga.md new file mode 100644 index 00000000000..38f64c7e600 --- /dev/null +++ b/docs/self-managed-enterprise-edition/self-managed-helm-based-install/install-harness-self-managed-enterprise-edition-using-helm-ga.md @@ -0,0 +1,153 @@ +--- +title: Install Harness Self-Managed Enterprise Edition using Helm +description: This document explains how to use Helm to install, upgrade or uninstall Harness Self-Managed Enterprise Edition. This document describes an installation on Google Kubernetes Engine (GKE). The same in… +# sidebar_position: 2 +helpdocs_topic_id: 6tblwmh830 +helpdocs_category_id: 66qbyn7ugu +helpdocs_is_private: false +helpdocs_is_published: true +--- + +This document explains how to use Helm to install, upgrade or uninstall Harness Self-Managed Enterprise Edition. This document describes an installation on Google Kubernetes Engine (GKE). The same installation process, however, applies to installations on Kubernetes versions 1.*x* and later. + +Helm package manager provides a declarative approach to Kubernetes application management in which software packages are specified as “charts.” For more information, see the [Helm documentation](https://helm.sh/docs/). + +## Download Harness Helm chart + +To download Harness Helm chart for the installation of Self-Managed Enterprise Edition, see . + +Harness Helm chart is available for demonstration and production environments. + +## Update the override.yaml file + +Depending on your target environment, you'll need to update the override.yaml file to specify a load balancer or to specify the Harness modules to be deployed. + +### Add a load balancer + +Use the following procedure to add a load balancer. 
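Before walking through the steps, it may help to see the finished result. A completed load-balancer section of override.yaml could look like the following sketch; the URL and IP address are placeholder values for illustration, not defaults shipped with the chart:

```yaml
global:
  # -- Harness Application URL (placeholder; use your load balancer's URL)
  loadbalancerURL: http://harness.example.com
  # -- IP address of the load balancer (placeholder)
  host_name: "203.0.113.10"
```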
+
+**To add the URL for a load balancer**
+
+1. In the override.yaml file, set the `global.loadbalancerURL` field to the URL of your load balancer. This is the URL you use for Harness.
+
+    ```
+    global:
+      # -- Harness Application URL
+      loadbalancerURL: http://
+      host_name: ""
+    ```
+
+2. Set the `host_name` field to the IP address of the load balancer.
+
+3. Save the file.
+
+### Deploy Harness modules
+
+Harness Helm chart includes Harness Platform components. You can add modules by editing the override.yaml file.
+
+The following components are enabled by default:
+
+* Harness CD - Next Generation
+* Harness CI
+* Harness Security Testing Orchestration (STO)
+
+You can conditionally disable or enable the CI and STO modules by specifying a boolean value in the `enabled` field of the YAML:
+
+#### CI module
+
+```
+ci:
+  # -- Enable to deploy CI to your cluster
+  enabled: true
+```
+
+#### STO module
+
+```
+sto:
+  # -- Enable to deploy STO to your cluster
+  enabled: true
+```
+
+## Install the Helm chart
+
+To use the charts, you must install Helm. To get started with Helm, see the [Helm documentation](https://helm.sh/docs/). After you install Helm, follow the instructions below.
+
+**To install the Helm chart**
+
+1. Add the repository.
+
+    ```
+    $ helm repo add harness https://harness.github.io/helm-charts
+    ```
+
+2. Create a namespace for your installation.
+
+    ```
+    $ kubectl create namespace 
+    ```
+
+3. Modify the override.yaml file with your environment settings.
+
+4. Install the Helm chart.
+
+    ```
+    $ helm install my-release harness/harness-prod -n -f override.yaml
+    ```
+
+## Verify installation
+
+After the installation completes, the services that were installed are enumerated with their status.
+
+![](./static/install-harness-self-managed-enterprise-edition-using-helm-ga-00.png)
+
+The services that appear depend on the modules that were installed.
+
+**To verify installation**
+
+1. Review the list of services.
+2.
In your browser, type the following instruction: + + ``` + http://localhost/auth/#/signup + ``` + + If the installation was successful, the Harness **Sign up** page appears. + +## Upgrade the Helm chart + +Use the following instructions to upgrade the chart to a new release.  + +**To upgrade the chart** + +1. Use the following command to obtain the release name for the earlier release. + + ``` + $ helm ls -n + ``` + +2. Retrieve the values for the earlier release. + ``` + $ helm get values my-release > old_values.yaml + ``` + +3. Change the values of the old\_values.yaml file as required. + +4. Use the `helm upgrade` command to update the chart. + + ``` + $ helm upgrade my-release harness/harness-demo -n -f old_values.yaml + ``` + +## Uninstall the Helm chart + +To remove the Kubernetes components associated with the chart and delete the release, uninstall the chart. + +**To uninstall the chart** + +* Uninstall and delete the `my-release` deployment: + + ``` + $ helm uninstall my-release -n + ``` + diff --git a/docs/self-managed-enterprise-edition/self-managed-helm-based-install/static/install-harness-self-managed-enterprise-edition-using-helm-ga-00.png b/docs/self-managed-enterprise-edition/self-managed-helm-based-install/static/install-harness-self-managed-enterprise-edition-using-helm-ga-00.png new file mode 100644 index 00000000000..d7ad9d27cae Binary files /dev/null and b/docs/self-managed-enterprise-edition/self-managed-helm-based-install/static/install-harness-self-managed-enterprise-edition-using-helm-ga-00.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/_category_.json b/docs/self-managed-enterprise-edition/vm-self-managed-category/_category_.json new file mode 100644 index 00000000000..9452cd3d85f --- /dev/null +++ b/docs/self-managed-enterprise-edition/vm-self-managed-category/_category_.json @@ -0,0 +1,14 @@ +{ + "label":"Install on Virtual Machine", + "position": 40, + "collapsible":"true", + "collapsed":"true", + 
"className":"red", + "link":{ + "type":"generated-index", + "title":"Install on Virtual Machine" + }, + "customProps":{ + "helpdocs_category_id":"ubhcaw8n0l" + } + } \ No newline at end of file diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-31.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-31.png new file mode 100644 index 00000000000..8334ded2ca5 Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-31.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-32.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-32.png new file mode 100644 index 00000000000..b0c721e7569 Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-32.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-33.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-33.png new file mode 100644 index 00000000000..b39a4cf9fcf Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-33.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-34.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-34.png new file mode 100644 index 00000000000..3d7994b240e Binary files 
/dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-34.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-35.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-35.png new file mode 100644 index 00000000000..40d900f672b Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-35.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-36.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-36.png new file mode 100644 index 00000000000..cf6fd07cd4a Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-36.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-37.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-37.png new file mode 100644 index 00000000000..04833669a41 Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-37.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-38.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-38.png new file mode 100644 index 00000000000..b2c0d3992a7 Binary files /dev/null and 
b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-infrastructure-requirements-38.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-00.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-00.png new file mode 100644 index 00000000000..04833669a41 Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-00.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-01.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-01.png new file mode 100644 index 00000000000..bad414548d2 Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-01.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-02.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-02.png new file mode 100644 index 00000000000..d02db0d7fac Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-02.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-03.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-03.png new file mode 100644 index 00000000000..42bdf381880 Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-03.png differ diff --git 
a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-04.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-04.png new file mode 100644 index 00000000000..f000207d839 Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-04.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-05.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-05.png new file mode 100644 index 00000000000..1fb4174e312 Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-05.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-06.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-06.png new file mode 100644 index 00000000000..04833669a41 Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-06.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-07.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-07.png new file mode 100644 index 00000000000..bad414548d2 Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-07.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-08.png 
b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-08.png new file mode 100644 index 00000000000..d02db0d7fac Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-08.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-09.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-09.png new file mode 100644 index 00000000000..42bdf381880 Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-09.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-10.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-10.png new file mode 100644 index 00000000000..f000207d839 Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-10.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-11.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-11.png new file mode 100644 index 00000000000..1fb4174e312 Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-11.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-12.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-12.png new file mode 100644 index 
00000000000..81364ddf3d6 Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-12.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-13.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-13.png new file mode 100644 index 00000000000..68c0e5390bb Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-13.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-14.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-14.png new file mode 100644 index 00000000000..3814c47f26c Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-14.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-15.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-15.png new file mode 100644 index 00000000000..f3ed98780fb Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-15.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-16.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-16.png new file mode 100644 index 00000000000..d199aec58af Binary files /dev/null and 
b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-16.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-17.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-17.png new file mode 100644 index 00000000000..cf6fd07cd4a Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-17.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-18.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-18.png new file mode 100644 index 00000000000..30c5a12c36f Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-18.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-19.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-19.png new file mode 100644 index 00000000000..91dffcffcfa Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-19.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-20.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-20.png new file mode 100644 index 00000000000..b4b3c042ef2 Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-20.png differ diff --git 
a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-21.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-21.png new file mode 100644 index 00000000000..c0cfe4d0b8b Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-21.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-22.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-22.png new file mode 100644 index 00000000000..06c009b2365 Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-22.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-23.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-23.png new file mode 100644 index 00000000000..4564248890d Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-23.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-24.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-24.png new file mode 100644 index 00000000000..3b24355426f Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-24.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-25.png 
b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-25.png new file mode 100644 index 00000000000..86c03cde575 Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-25.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-26.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-26.png new file mode 100644 index 00000000000..c31ada46c09 Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-26.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-27.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-27.png new file mode 100644 index 00000000000..cb14604ae27 Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-27.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-28.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-28.png new file mode 100644 index 00000000000..8e9d9c07f60 Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-28.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-29.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-29.png new file mode 100644 index 
00000000000..c0cfe4d0b8b Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-29.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-30.png b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-30.png new file mode 100644 index 00000000000..06c009b2365 Binary files /dev/null and b/docs/self-managed-enterprise-edition/vm-self-managed-category/static/virtual-machine-on-prem-installation-guide-30.png differ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/virtual-machine-on-prem-infrastructure-requirements.md b/docs/self-managed-enterprise-edition/vm-self-managed-category/virtual-machine-on-prem-infrastructure-requirements.md new file mode 100644 index 00000000000..912f3eae58e --- /dev/null +++ b/docs/self-managed-enterprise-edition/vm-self-managed-category/virtual-machine-on-prem-infrastructure-requirements.md @@ -0,0 +1,238 @@
+---
+title: Infrastructure requirements for virtual machine
+description: This document lists the infrastructure requirements for a Self-Managed Enterprise Edition - Virtual Machine installation. The Virtual Machine option refers to installing the Self-Managed Enterprise E…
+# sidebar_position: 2
+helpdocs_topic_id: emup5gv8f4
+helpdocs_category_id: ubhcaw8n0l
+helpdocs_is_private: false
+helpdocs_is_published: true
+---
+
+This document lists the infrastructure requirements for a **Self-Managed Enterprise Edition - Virtual Machine** installation.
+
+The Virtual Machine option refers to installing the Self-Managed Enterprise Edition Kubernetes cluster on VMs.
+
+First, you use the requirements below to bootstrap a Kubernetes cluster on your target VMs.
+ +After you stand up the Kubernetes cluster, you use it to install Self-Managed Enterprise Edition - Virtual Machine on the configured cluster. + +### Supported operating systems + +* Ubuntu 18.04 (recommended) +* CentOS 7.4, 7.5, 7.6, 7.7 +* RHEL 7.4, 7.5, 7.6, 7.7 + +### VM specifications + +There are different VM specifications for production and development installations. + +#### Number of VMs + +The number of VMs depends on the configuration mode you select during installation: + +![](./static/virtual-machine-on-prem-infrastructure-requirements-31.png) + +* **Demo:** 1 VM. +* **Single node production:** 1 VM. +* **HA production mode:** 3 VMs. + +You'll be able to add more nodes to the cluster later, if needed. + +#### Production installation + +Self-Managed Enterprise Edition - **NextGen** is installed as an application on an existing Self-Managed Enterprise Edition - **FirstGen** installation. + +Below are the requirements for each microservice in each Self-Managed Enterprise Edition installation. + +##### Self-Managed Enterprise Edition - FirstGen + +VM Specifications: 15 cores, 30 GB memory, 400 GB disk space. + +Here are the requirements for each microservice. 
+ + + +| **Microservice** | **Pods** | **CPU / Pod** | **Memory / Pod** | **Total CPU** | **Total Memory** | +| :-- | :-: | :-: | :-: | :-: | :-: | +| Manager | 2 | 2 | 4 | 4 | 8 | +| Verification | 2 | 1 | 3 | 2 | 6 | +| Machine Learning Engine | 1 | 8 | 2 | 8 | 2 | +| UI | 2 | 0.25 | 0.25 | 0.5 | 0.5 | +| MongoDB | 3 | 4 | 8 | 12 | 24 | +| Proxy | 1 | 0.5 | 0.5 | 0.5 | 0.5 | +| Ingress | 2 | 0.25 | 0.25 | 0.5 | 0.5 | +| TimescaleDB | 3 | 2 | 8 | 6 | 24 | +| KOTS Admin and Kubernetes Installations |   |   |   | 10 | 18 | +| **Total** | | | | **43.5** | **83.5** | + +##### Self-Managed Enterprise Edition - NextGen + + + +| **Microservice** | **Pods** | **CPU / Pod** | **Memory / Pod** | **Total CPU** | **Total Memory** | +| :-- | :-: | :-: | :-: | :-: | :-: | +| Log Minio | 1 | 1 | 4Gi | 1 | 4Gi | +| Log service | 1 | 1 | 3Gi | 1 | 3Gi | +| SCM | 1 | 0.1 | 0.5Gi | 0.1 | 0.5Gi | +| Gateway | 2 | 0.5 | 3Gi | 1 | 6Gi | +| NextGen UI | 2 | 0.2 | 0.2Gi | 0.4 | 0.4Gi | +| Platform service | 2 | 1 | 3Gi | 2 | 6Gi | +| Test Intelligence | 2 | 1 | 3Gi | 2 | 6Gi | +| Access Control | 2 | 1 | 3Gi | 2 | 6Gi | +| CI Manager | 2 | 1 | 3Gi | 2 | 6Gi | +| NextGen Manager | 2 | 2 | 6Gi | 4 | 12Gi | +| Pipeline | 2 | 1 | 6Gi | 2 | 12Gi | +| **Total** | **19** | | | **17.5** | **61.9Gi** | + +#### Dev installation + +VM Specifications: 10 cores, 16 GB memory, 100 GB disk space. + +Here are the requirements for each microservice. 
+
+| **Microservice** | **Pods** | **CPU / Pod** | **Memory / Pod** | **Total CPU** | **Total Memory** |
+| :-- | :-: | :-: | :-: | :-: | :-: |
+| Manager | 1 | 2 | 4 | 2 | 4 |
+| Verification | 1 | 1 | 3 | 1 | 3 |
+| Machine Learning Engine | 1 | 3 | 2 | 3 | 2 |
+| UI | 1 | 0.25 | 0.25 | 0.25 | 0.25 |
+| MongoDB | 3 | 2 | 4 | 6 | 12 |
+| Proxy | 1 | 0.5 | 0.5 | 0.5 | 0.5 |
+| Ingress | 1 | 0.25 | 0.25 | 0.25 | 0.25 |
+| TimescaleDB | 1 | 2 | 8 | 2 | 8 |
+| KOTS Admin Pods |   |   |   | 10 | 17.75 |
+| **Total** | | | | **25** | **47.75** |
+
+### Networking architecture
+
+The following example diagrams illustrate the simple networking architecture for Self-Managed Enterprise Edition - Virtual Machine.
+
+GCP example:
+
+![](./static/virtual-machine-on-prem-infrastructure-requirements-32.png)
+
+AWS example:
+
+![](./static/virtual-machine-on-prem-infrastructure-requirements-33.png)
+
+The following sections go into greater detail.
+
+### Open ports for 3 VMs
+
+* TCP ports 6443-6783
+* UDP ports 6783 and 6784
+* TCP port 80 for exposing Harness. Port 80/443 is used as the backend of the load balancer routing traffic to the VMs.
+* TCP ports 30900-30905 for monitoring (Grafana dashboards, Prometheus).
+* TCP ports 9000 and 9879. These are required for Harness Self-Managed Enterprise Edition NextGen.
+* TCP port 8800 for the KOTS admin tool.
+
+For example, here is a GCP firewall rule that includes the required ports (80 is already open):
+
+![](./static/virtual-machine-on-prem-infrastructure-requirements-34.png)
+
+### Load balancer
+
+A Harness Self-Managed Enterprise Edition Virtual Machine installation requires two load balancers.
+
+#### Load balancer to Harness application
+
+This load balancer routes all incoming traffic to the port where Harness is exposed on all of the VMs. Once you install Harness, this port is used for accessing Harness Self-Managed Enterprise Edition.
+
+* The load balancer can be either L4 or L7.
+* The load balancer should forward unencrypted traffic to the nodes. + +Different cloud platforms have different methods for setting up load balancers and traffic routing. + +For example, in GCP, you create an HTTP Load Balancer with a frontend listening on port 80 and a backend sending traffic to the Instance group containing your VMs on port 80/443. + +Later, when you configure Harness Self-Managed Enterprise Edition, you will enter the frontend IP address in **Load Balancer URL** and the backend port 80/443 in the **NodePort** setting: + +![](./static/virtual-machine-on-prem-infrastructure-requirements-35.png) + +Typically, you will also set up DNS to resolve a domain to the frontend IP, and then use the domain name in **Load Balancer URL**. + +##### gRPC and Load Balancer Settings + +**If your load balancer does support HTTP2 over port 443**, when you install Harness Self-Managed Enterprise Edition NextGen you will set up gRPC settings: + +![](./static/virtual-machine-on-prem-infrastructure-requirements-36.png) + +Enter the following: + +* **GRPC Target:** enter the load balancer hostname (hostname from the load balancer URL) + + **GRPC Authority:** enter `manager-grpc-`. For example: `manager-grpc-35.202.197.230`. + +**If your load balancer does not support HTTP2 over port 443** you have two options: + +* If your load balancer supports multiple ports for SSL then add port 9879 in the application load balancer and target port 9879 or node port 32510 on the Ingress controller. 
+ + **GRPC Target:** enter the load balancer hostname + + **GRPC Authority:** enter the load balancer hostname +* If your load balancer does not support multiple ports for SSL then create a new load balancer and target port 9879 or node port 32510 on the Ingress controller: + + **GRPC Target:** enter the new load balancer hostname + + **GRPC Authority:** enter the new load balancer hostname + +#### In-cluster load balancer for high availability + +A TCP forwarding load balancer (L4) distributing the traffic on port 6443. This will be used for Kubernetes cluster HA. The health check should be on port 6443, also. + +The TCP load balancer you created will be selected when you install Harness using the KOTS plugin via the `-s ha` parameter: + +``` +$ curl -sSL https://k8s.kurl.sh/harness | sudo bash -s ha +The installer will use network interface 'ens4' (with IP address '10.128.0.25') +Please enter a load balancer address to route external and internal traffic to the API servers. +In the absence of a load balancer address, all traffic will be routed to the first master. +Load balancer address: +``` +You will enter the IP address of your TCP load balancer. + +For example, here is a GCP TCP load balancer with its frontend forwarding rule using port 6443: + +![](./static/virtual-machine-on-prem-infrastructure-requirements-37.png) + +When the kurl installation prompts you for the load balancer IP address, you will enter the load balancer IP and port 6443. For example `10.128.0.50:6443`. + +See [HA Installations](https://kots.io/kotsadm/installing/installing-embedded-cluster/#ha-installations) from KOTS. + +### User access requirements + +For initial setup: sudo/root access is required. + +### Network requirements + +Add the following URLs to your allow list: + +* kots.io — Kots pulls the latest versions of the kubectl plugin and Kots admin console. 
+* app.replicated.com — Kots admin console connects to check for the availability of releases according to your license +* proxy.replicated.com — Proxy your registry to pull your private images. + +Outbound access to the following URLs: + +* proxy.replicated.com​ +* replicated.app +* k8s.kurl.sh​ +* app.replicated.com + +The outbound access is required for a **connected install only**. If you have opted for [Airgap mode](https://kots.io/kotsadm/installing/airgap-packages/), this is not required. + +If your cluster does not have direct outbound connectivity and needs a proxy for outbound connections, use these instructions: [https://docs.docker.com/network/proxy](https://docs.docker.com/network/proxy/) to set up a proxy on the node machines. + +### Trusted certificate requirement for Harness Self-Managed Enterprise Edition + +All connections to the Harness Manager can be secure or unencrypted according to the URL scheme you use when you configure the Load Balancer URL during installation (`https://` or `http://`): + +![](./static/virtual-machine-on-prem-infrastructure-requirements-38.png) + +For secure connections from any integration into the Harness Manager (Github Webhooks, etc), including the **Harness Delegate**, you must use a publicly trusted certificate. + +Harness does not support self-signed certificates for connections to the Harness Manager. + +For connections from the Harness Manager outbound to an integration, you can use a self-signed certificate. In this case, you must import the self-signed certificate into Harness Delegate's JRE keystore manually or using a Harness Delegate Profile. + +### Install Harness Self-Managed Enterprise Edition + +Now that you have set up the requirements, proceed with installation in [Self-Managed Enterprise Edition - Virtual Machine: Setup Guide](virtual-machine-on-prem-installation-guide.md). 
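The sizing tables earlier in this topic can be sanity-checked arithmetically: each total is pods times the per-pod figure, plus the KOTS admin overhead. A quick illustrative sketch using the FirstGen production numbers (not part of the installer):

```shell
# Cross-check the FirstGen production table: sum(pods * per-pod) + KOTS admin.
# Terms follow the table order: Manager, Verification, ML Engine, UI,
# MongoDB, Proxy, Ingress, TimescaleDB, then the KOTS admin overhead.
awk 'BEGIN {
  cpu = 2*2 + 2*1 + 1*8 + 2*0.25 + 3*4 + 1*0.5 + 2*0.25 + 3*2 + 10
  mem = 2*4 + 2*3 + 1*2 + 2*0.25 + 3*8 + 1*0.5 + 2*0.25 + 3*8 + 18
  printf "total: %.1f cores, %.1f GB\n", cpu, mem   # total: 43.5 cores, 83.5 GB
}'
```

The printed totals match the table's bottom row, which is a quick way to re-validate the table if you adjust pod counts for your own sizing.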
+ diff --git a/docs/self-managed-enterprise-edition/vm-self-managed-category/virtual-machine-on-prem-installation-guide.md b/docs/self-managed-enterprise-edition/vm-self-managed-category/virtual-machine-on-prem-installation-guide.md new file mode 100644 index 00000000000..312ebb45293 --- /dev/null +++ b/docs/self-managed-enterprise-edition/vm-self-managed-category/virtual-machine-on-prem-installation-guide.md @@ -0,0 +1,659 @@ +--- +title: Install Self-Managed Enterprise Edition on virtual machine +description: This topic covers installing Harness Self-Managed Enterprise Edition - Virtual Machine NextGen as a Kubernetes cluster embedded on your target VMs. To install Harness Self-Managed Enterprise Edition… +# sidebar_position: 2 +helpdocs_topic_id: tjvawty1do +helpdocs_category_id: ubhcaw8n0l +helpdocs_is_private: false +helpdocs_is_published: true +--- + + + +This topic covers installing Harness Self-Managed Enterprise Edition - Virtual Machine **NextGen** as a Kubernetes cluster embedded on your target VMs. + +To install Harness Self-Managed Enterprise Edition Virtual Machine **NextGen**, you install Harness Self-Managed Enterprise Edition Virtual Machine **FirstGen**, and then you install NextGen as an application. Installing Harness Self-Managed Enterprise Edition into an embedded Kubernetes cluster is a simple process where you prepare your VMs and network, and use the Kubernetes installer kURL and the KOTS plugin to complete the installation and deploy Harness. + +After you set up Harness on a VM, you can add additional worker nodes by simply running a command. Harness Self-Managed Enterprise Edition uses the open source Kubernetes installer kURL and the KOTS plugin for installation. See [Install with kURL](https://kurl.sh/docs/install-with-kurl/) from kURL and [Installing an Embedded Cluster](https://kots.io/kotsadm/installing/installing-embedded-cluster/) from KOTS. 
+ +### Harness Self-Managed Enterprise Edition NextGen installation options + +How you install Harness Self-Managed Enterprise Edition NextGen will follow one of the use cases below: + +#### NextGen on existing FirstGen VMs + +In this scenario, you have an existing Harness Self-Managed Enterprise Edition FirstGen running and you want to add Harness NextGen to it. + +You simply add Harness Self-Managed Enterprise Edition NextGen as a new application in your existing Harness Self-Managed Enterprise Edition FirstGen installation. + +1. Open the FirstGen KOTS admin tool. +2. Install NextGen as a new application on existing FirstGen. +3. Upload the NextGen license file. +4. Use the exact same FirstGen configuration values for the NextGen configuration. + +If you are using this option, skip to [Install NextGen on Existing FirstGen](#install-next-gen-on-existing-first-gen). + +#### NextGen on new FirstGen VMs + +In this scenario, you want to install FirstGen and NextGen on new VMs. + +1. Set up your VMs according to the requirements specified in [Self-Managed Enterprise Edition - Virtual Machine: Infrastructure Requirements](virtual-machine-on-prem-infrastructure-requirements.md). +2. Install FirstGen. +3. Install NextGen as a new application on existing FirstGen. +4. Upload the NextGen license file. +5. Use the exact same FirstGen configuration values for the NextGen configuration. + +If you are using this option, do the following: + +1. Follow all of the FirstGen installation instructions beginning with [Step 1: Set up VM Requirements](#step-1-set-up-vm-requirements). +2. Follow the NextGen installation instructions in [Install NextGen on Existing FirstGen](#install-next-gen-on-existing-first-gen). + +#### Legacy FirstGen not using KOTS + +In this scenario, you have a legacy FirstGen installation that is not a KOTS-based installation. + +This process will involve migrating your legacy FirstGen data to a new KOTS-based FirstGen and then installing NextGen. + +1. 
Set up your VMs according to the requirements specified in [Self-Managed Enterprise Edition - Virtual Machine: Infrastructure Requirements](virtual-machine-on-prem-infrastructure-requirements.md). +2. Install FirstGen. +3. Migrate data to new FirstGen using a script from Harness Support. +4. Install NextGen as a new application on the new FirstGen. +5. Upload the NextGen license file. +6. Use the exact same FirstGen configuration values for the NextGen configuration. + +If you are using this option, do the following: + +1. Follow all of the FirstGen installation instructions beginning with [Step 1: Set up VM Requirements](#step-1-set-up-vm-requirements). +2. Migrate data to new FirstGen using a script from Harness Support. +3. Follow the NextGen installation instructions in [Install NextGen on Existing FirstGen](#install-next-gen-on-existing-first-gen). + +### Step 1: Set up VM requirements + +Ensure that your VMs meet the requirements specified in [Self-Managed Enterprise Edition - Virtual Machine: Infrastructure Requirements](virtual-machine-on-prem-infrastructure-requirements.md). + +Different cloud platforms use different methods for grouping VMs (GCP instance groups, AWS target groups, etc). Set up your 3 VMs using the platform method that works best with the platform's networking processes. + +### Step 2: Set up load balancer and networking requirements + +Ensure that your networking meets the requirements specified in [Self-Managed Enterprise Edition - Virtual Machine: Infrastructure Requirements](virtual-machine-on-prem-infrastructure-requirements.md). + +You will need to have two load balancers, as described in the [Self-Managed Enterprise Edition - Virtual Machine: Infrastructure Requirements](virtual-machine-on-prem-infrastructure-requirements.md). + +One for routing traffic to the VMs and one for the in-cluster load balancer. + +During installation, you are asked for the IP address of the in-cluster TCP load balancer first. 
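Before starting the installer, it can save a retry to confirm that the in-cluster TCP load balancer already answers on port 6443. A hedged sketch (the address below is a placeholder for your load balancer frontend; `/dev/tcp` redirection requires bash):

```shell
# probe <host> <port>: report whether a TCP listener answers at that address.
probe() {
  timeout 2 bash -c "</dev/tcp/$1/$2" 2>/dev/null && echo open || echo closed
}

# Replace 10.128.0.50 with your in-cluster load balancer's frontend IP.
probe 10.128.0.50 6443
```

Until the first master is up, `closed` is expected; the point is to verify the load balancer address and port are the ones you will type at the installer prompt.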
+
+When you configure the Harness Self-Managed Enterprise Edition application in the KOTS admin console, you are asked for the HTTP load balancer URL.
+
+### Option 1: Disconnected installation
+
+Disconnected installation involves downloading the Self-Managed Enterprise Edition - Virtual Machine archive file onto a jump box, and then copying the file to each host VM you want to use.
+
+On each VM, you extract and install Harness.
+
+On your jump box, run the following command to obtain the Self-Managed Enterprise Edition - Virtual Machine file:
+
+```
+curl -LO https://kurl.sh/bundle/harness.tar.gz
+```
+Copy the file to a Harness host and extract it (`tar xvf harness.tar.gz`).
+
+On the VM, install Harness:
+
+```
+cat install.sh | sudo bash -s airgap ha
+```
+
+This will install the entire Self-Managed Enterprise Edition Kubernetes cluster and all related microservices.
+
+The `ha` parameter is used to set up high availability. If you are not using high availability, you can omit the parameter.
+
+#### Provide load balancer settings
+
+First, you are prompted to provide the IP address of the TCP load balancer for the cluster HA:
+
+```
+The installer will use network interface 'ens4' (with IP address '10.128.0.25')
+Please enter a load balancer address to route external and internal traffic to the API servers.
+In the absence of a load balancer address, all traffic will be routed to the first master.
+Load balancer address:
+```
+This is the TCP load balancer you created in [Self-Managed Enterprise Edition - Virtual Machine: Infrastructure Requirements](virtual-machine-on-prem-infrastructure-requirements.md).
+
+For example, here is a GCP TCP load balancer with its frontend forwarding rule using port 6443:
+
+![](./static/virtual-machine-on-prem-installation-guide-00.png)
+
+Enter the IP address and port of your TCP load balancer (for example, `10.128.0.50:6443`), and press Enter.
The installation process begins like this: + +``` +... +Fetching weave-2.5.2.tar.gz +Fetching rook-1.0.4.tar.gz +Fetching contour-1.0.1.tar.gz +Fetching registry-2.7.1.tar.gz +Fetching prometheus-0.33.0.tar.gz +Fetching kotsadm-1.16.0.tar.gz +Fetching velero-1.2.0.tar.gz +Found pod network: 10.32.0.0/22 +Found service network: 10.96.0.0/22 +... +``` + +#### Review configuration settings + +Once the installation process is complete, KOTS provides you with several configuration settings and commands. Save these settings and commands. + +* KOTS admin console and password: + + ``` + Kotsadm: http://00.000.000.000:8800 + Login with password (will not be shown again): D1rgBIu21 + + ``` +If you need to reset your password, enter `kubectl kots reset-password -n default`. You will be prompted for a new password. + +* Prometheus, Grafana, and Alertmanager ports and passwords: + + ``` + The UIs of Prometheus, Grafana and Alertmanager have been exposed on NodePorts 30900, 30902 and 30903 respectively. + To access Grafana use the generated user:password of admin:RF1KuqreN . + ``` + +* kubectl access to your cluster: + + ``` + To access the cluster with kubectl, reload your shell: + bash -l + ``` + +* The command to add worker nodes to the installation: + + ``` + To add worker nodes to this installation, run the following script on your other nodes: + + curl -sSL https://kurl.sh/harness/join.sh | sudo bash -s kubernetes-master-address=10.128.0.24:6443 kubeadm-token=xxxxx kubeadm-token-ca-hash=shaxxxxxx kubernetes-version=1.15.3 docker-registry-ip=10.96.3.130 + ``` + +We will use this command later. 
+ +* Add master nodes: + + ``` + To add MASTER nodes to this installation, run the following script on your other nodes + curl -sSL https://kurl.sh/harness/join.sh | sudo bash -s kubernetes-master-address=34.71.32.244:6443 kubeadm-to + ken=c2yack.q7lt3z6yuevqlmtf kubeadm-token-ca-hash=sha256:9db504ecdee08ff6dfa3b299ce95302fe53dd632a2e9356c55e9272db7 + 2d60d1 kubernetes-version=1.15.3 cert-key=f0373e812e0657b4f727e90a7286c5b65539dfe7ee5dc535df0a1bcf74ad5c57 control- + plane docker-registry-ip=10.96.2.100 + ``` + +#### Log into the admin tool + +In a browser, enter the Kotsadm link. + +The browser displays a TLS warning. + +![](./static/virtual-machine-on-prem-installation-guide-01.png)Click **Continue to Setup**. + +In the warning page, click **Advanced**, then click **Proceed** to continue to the admin console. + +As KOTS uses a self-signed certification, but you can upload your own. + +![](./static/virtual-machine-on-prem-installation-guide-02.png)Upload your certificate or click **Skip and continue**. + +Log into the console using the password provided in the installation output. + +![](./static/virtual-machine-on-prem-installation-guide-03.png) + +#### Upload your Harness license + +Once you are logged into the KOTS admin console, you can upload your Harness license. + +Obtain the Harness license file from your Harness Customer Success contact or email [support@harness.io](mailto:support@harness.io). + +Drag your license YAML file into the KOTS admin tool: + +![](./static/virtual-machine-on-prem-installation-guide-04.png) + +Next, upload the license file: + +![](./static/virtual-machine-on-prem-installation-guide-05.png) + +Now that license file is uploaded, you can install Harness. + +Go to [Step 3: Configure Harness](#step-3-configure-harness). + +### Option 2: Connected installation + +Once you have your VMs and networking requirements set up, you can install Harness. 
+
+Log into one of your VMs, and then run the following command:
+
+```
+curl -sSL https://k8s.kurl.sh/harness | sudo bash -s ha
+```
+This will install the entire Self-Managed Enterprise Edition Kubernetes cluster and all related microservices.
+
+The `-s ha` parameter is used to set up high availability.
+
+#### Provide load balancer settings
+
+First, you are prompted to provide the IP address of the TCP load balancer for the cluster HA:
+
+```
+The installer will use network interface 'ens4' (with IP address '10.128.0.25')
+Please enter a load balancer address to route external and internal traffic to the API servers.
+In the absence of a load balancer address, all traffic will be routed to the first master.
+Load balancer address:
+```
+This is the TCP load balancer you created in [Self-Managed Enterprise Edition - Virtual Machine: Infrastructure Requirements](virtual-machine-on-prem-infrastructure-requirements.md).
+
+For example, here is a GCP TCP load balancer with its frontend forwarding rule using port 6443:
+
+![](./static/virtual-machine-on-prem-installation-guide-06.png)
+
+Enter the IP address and port of your TCP load balancer (for example, `10.128.0.50:6443`), and press Enter. The installation process begins like this:
+
+```
+...
+Fetching weave-2.5.2.tar.gz
+Fetching rook-1.0.4.tar.gz
+Fetching contour-1.0.1.tar.gz
+Fetching registry-2.7.1.tar.gz
+Fetching prometheus-0.33.0.tar.gz
+Fetching kotsadm-1.16.0.tar.gz
+Fetching velero-1.2.0.tar.gz
+Found pod network: 10.32.0.0/22
+Found service network: 10.96.0.0/22
+...
+```
+
+#### Review configuration settings
+
+Once the installation process is complete, KOTS provides you with several configuration settings and commands. Save these settings and commands.
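Because the admin password is shown only once, one way to keep these values is to capture the installer output (for example, by appending `2>&1 | tee kurl-install.log` to the install command) and pull them back out later. An illustrative sketch, where `kurl-install.log` stands in for output you captured:

```shell
# Sketch: extract the one-time values from a captured installer log.
# This heredoc fakes a saved log so the grep below has something to match;
# in practice the file would come from tee-ing the real installer output.
cat > kurl-install.log <<'EOF'
Kotsadm: http://10.128.0.25:8800
Login with password (will not be shown again): D1rgBIu21
EOF

grep -E 'Kotsadm:|Login with password' kurl-install.log
```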
+
+* KOTS admin console and password:
+
+```
+Kotsadm: http://00.000.000.000:8800
+Login with password (will not be shown again): D1rgBIu21
+```
+
+If you need to reset your password, enter `kubectl kots reset-password -n default`. You will be prompted for a new password.
+
+* Prometheus, Grafana, and Alertmanager ports and passwords:
+
+  ```
+  The UIs of Prometheus, Grafana and Alertmanager have been exposed on NodePorts 30900, 30902 and 30903 respectively.
+  To access Grafana use the generated user:password of admin:RF1KuqreN .
+  ```
+
+* kubectl access to your cluster:
+
+  ```
+  To access the cluster with kubectl, reload your shell:
+  bash -l
+  ```
+
+* The command to add worker nodes to the installation:
+
+  ```
+  To add worker nodes to this installation, run the following script on your other nodes:
+
+  curl -sSL https://kurl.sh/harness/join.sh | sudo bash -s kubernetes-master-address=10.128.0.24:6443 kubeadm-token=xxxxx kubeadm-token-ca-hash=shaxxxxxx kubernetes-version=1.15.3 docker-registry-ip=10.96.3.130
+  ```
+
+We will use this command later.
+
+* Add master nodes:
+
+  ```
+  To add MASTER nodes to this installation, run the following script on your other nodes
+  curl -sSL https://kurl.sh/harness/join.sh | sudo bash -s kubernetes-master-address=34.71.32.244:6443 kubeadm-token=c2yack.q7lt3z6yuevqlmtf kubeadm-token-ca-hash=sha256:9db504ecdee08ff6dfa3b299ce95302fe53dd632a2e9356c55e9272db72d60d1 kubernetes-version=1.15.3 cert-key=f0373e812e0657b4f727e90a7286c5b65539dfe7ee5dc535df0a1bcf74ad5c57 control-plane docker-registry-ip=10.96.2.100
+  ```
+
+#### Log into the admin tool
+
+In a browser, enter the Kotsadm link.
+
+The browser displays a TLS warning.
+
+![](./static/virtual-machine-on-prem-installation-guide-07.png)
+
+Click **Continue to Setup**.
+
+In the warning page, click **Advanced**, then click **Proceed** to continue to the admin console.
+
+KOTS uses a self-signed certificate, but you can upload your own.
+ +![](./static/virtual-machine-on-prem-installation-guide-08.png) + +Upload your certificate or click **Skip and continue**. + +Log into the console using the password provided in the installation output. + +![](./static/virtual-machine-on-prem-installation-guide-09.png) + +#### Upload your Harness license + +Once you are logged into the KOTS admin console, you can upload your Harness license. + +Obtain the Harness license file from your Harness Customer Success contact or email [support@harness.io](mailto:support@harness.io). + +Drag your license YAML file into the KOTS admin tool: + +![](./static/virtual-machine-on-prem-installation-guide-10.png) + +Next, upload the license file: + +![](./static/virtual-machine-on-prem-installation-guide-11.png) + +Now that license file is uploaded, you can install Harness. + +#### Download Harness over the internet + +If you are installing Harness over the Internet, click the **download Harness from the Internet** link. + +![](./static/virtual-machine-on-prem-installation-guide-12.png) + +KOTS begins installing Harness into your cluster. + +![](./static/virtual-machine-on-prem-installation-guide-13.png) + +Next, you will provide KOTS with the Harness configuration information (Load Balancer URL and NodePort). + +### Step 3: Configure Harness + +Now that you have added your license you can configure the networking for the Harness installation. + +![](./static/virtual-machine-on-prem-installation-guide-14.png) + +#### Mode + +* Select **Demo** to run a Self-Managed Enterprise Edition in demo mode and experiment with it. +* Select **Production - Single Node** to run this on one node. You can convert to Production - High Availability later. +* Select **Production** - **High Availability** to run a production version of Self-Managed Enterprise Edition. + +If you use **Production - Single Node**, you can convert to **Production - High Availability** later by doing the following: + +1. 
In the KOTS admin console, go to **Cluster Management**.
+2. Click **Add a node**. This will generate scripts for joining additional worker and master nodes.
+
+For Disconnected (Airgap) installations, the bundle must also be downloaded and extracted on the remote node prior to running the join script.
+
+#### NodePort and application URL
+
+Self-Managed Enterprise Edition - Virtual Machine requires that you provide a NodePort and Application URL.
+
+1. In **Application URL**, enter the **full URL** for the HTTP load balancer you set up for routing external traffic to your VMs.
+
+   Include the scheme and hostname/IP. For example, `https://app.example.com`.
+
+   Typically, this is the frontend IP address for the load balancer. For example, here is an HTTP load balancer in GCP and how you enter its information into **Harness Configuration**.
+
+   If you have set up DNS to resolve a domain name to the load balancer IP, enter that domain name in **Application URL**.
+
+2. In **NodePort**, enter the port number you set up for the load balancer backend: **80**.
+
+   ![](./static/virtual-machine-on-prem-installation-guide-15.png)
+
+3. When you are done, click **Continue**.
+
+#### Option: Advanced configurations
+
+In the **Advanced Configurations** section, there are a number of advanced settings you can configure. If this is the first time you are setting up Self-Managed Enterprise Edition, there's no reason to fine-tune the installation with these settings.
+
+You can change the settings later in the KOTS admin console's **Config** tab:
+
+![](./static/virtual-machine-on-prem-installation-guide-16.png)
+
+##### Ingress service type
+
+By default, nginx is used for Ingress automatically. If you are deploying nginx separately, do the following:
+
+1. Click **Advanced Configurations**.
+2. Disable the **Install Nginx Ingress Controller** option.
+
+##### gRPC and load balancer settings
+
+In **Scheme**, if you select HTTPS, the GRPC settings appear.
+ +![](./static/virtual-machine-on-prem-installation-guide-17.png) + +**If your load balancer does support HTTP2 over port 443**, enter the following: + +* **GRPC Target:** enter the load balancer hostname (hostname from the load balancer URL) +* **GRPC Authority:** enter `manager-grpc-`. For example: `manager-grpc-35.202.197.230`. + +**If your load balancer does not support HTTP2 over port 443** you have two options: + +* If your load balancer supports multiple ports for SSL then add port 9879 in the application load balancer and target port 9879 or node port 32510 on the Ingress controller. + + **GRPC Target:** enter the load balancer hostname + + **GRPC Authority:** enter the load balancer hostname +* If your load balancer does not support multiple ports for SSL then create a new load balancer and target port 9879 or node port 32510 on the Ingress controller: + + **GRPC Target:** enter the new load balancer hostname + + **GRPC Authority:** enter the new load balancer hostname + +##### Log Service Backend + +![](./static/virtual-machine-on-prem-installation-guide-18.png)There are two options for **Log Service Backend**: + +**Minio:** If you want to use the builtin [Minio](https://docs.min.io/docs/minio-quickstart-guide.html) log service then your load balancer needs to reach the Ingress controller on port 9000. Create a new load balancer and target port 9000 or node port 32507. + +**Amazon S3 Bucket:** Enter the S3 bucket settings to use. + +### Step 4: Perform preflight checks + +Preflight checks run automatically and verify that your setup meets the minimum requirements. + +![](./static/virtual-machine-on-prem-installation-guide-19.png) + +You can skip these checks, but we recommend you let them run. + +Fix any issues in the preflight steps. + +### Step 5: Deploy Harness + +When you are finished pre-flight checks, click **Deploy and** **Continue**. + +![](./static/virtual-machine-on-prem-installation-guide-20.png)Harness is deployed in a few minutes. 
+
+It can take up to 30 minutes when installing the demo version on a system with the minimum recommended specs.
+
+In a new browser tab, go to the following URL, replacing `` with the URL you entered in the **Application URL** setting in the KOTS admin console:
+
+`/auth/#/signup`
+
+For example:
+
+`http://harness.mycompany.com/auth/#/signup`
+
+The Harness sign up page appears.
+
+![](./static/virtual-machine-on-prem-installation-guide-21.png)
+
+Sign up with a new account and then sign in.
+
+![](./static/virtual-machine-on-prem-installation-guide-22.png)
+
+Your new account will be added to the Harness Account Administrators User Group.
+
+See [Add and Manage User Groups](https://docs.harness.io/article/dfwuvmy33m-add-user-groups).
+
+#### Future versions
+
+To set up future versions of Self-Managed Enterprise Edition, in the KOTS admin console, in the **Version history** tab, click **Deploy**. The new version is displayed in **Deployed version**.
+
+![](./static/virtual-machine-on-prem-installation-guide-23.png)
+
+### Step 6: Add worker nodes
+
+Now that Self-Managed Enterprise Edition is installed on one VM, you can install it on other VMs using the command provided when you installed Harness:
+
+```
+To add worker nodes to this installation, run the following script on your other nodes
+  curl -sSL https://kurl.sh/harness/join.sh | sudo bash -s kubernetes-master-address=10.128.0.24:6443 kubeadm-token=xxxxx kubeadm-token-ca-hash=shaxxxxxx kubernetes-version=1.15.3 docker-registry-ip=10.96.3.130
+```
+
+Run this on each VM in your group. The installation begins something like this:
+
+```
+...
+Docker already exists on this machine so no docker install will be performed
+Container already exists on this machine so no container install will be performed
+The installer will use network interface 'ens4' (with IP address '10.128.0.44')
+Loaded image: replicated/kurl-util:v2020.07.15-0
+Loaded image: weaveworks/weave-kube:2.5.2
+Loaded image: weaveworks/weave-npc:2.5.2
+Loaded image: weaveworks/weaveexec:2.5.2
+...
+```
+
+When installation is complete, you will see the worker join the cluster and preflight checks are performed:
+
+```
+⚙ Join Kubernetes node
++ kubeadm join --config /opt/replicated/kubeadm.conf --ignore-preflight-errors=all
+[preflight] Running pre-flight checks
+validated versions: 19.03.4. Latest validated version: 18.09
+```
+
+The worker is now joined.
+
+### Important next steps
+
+**Important:** You cannot invite other users to Harness until a Harness Delegate is installed and a Harness SMTP Collaboration Provider is configured.
+
+1. Install the Harness Delegate.
+
+2. Set up an SMTP Collaboration Provider in Harness for email notifications from the Harness Manager.
+   Ensure you open the correct port for your SMTP provider, such as [Office 365](https://support.office.com/en-us/article/server-settings-you-ll-need-from-your-email-provider-c82de912-adcc-4787-8283-45a1161f3cc3).
+
+3. [Add a Secrets Manager](https://docs.harness.io/article/bo4qbrcggv-add-secrets-manager). By default, Self-Managed Enterprise Edition installations use the local Harness MongoDB for the default Harness Secrets Manager. This is not recommended.
+
+   After Self-Managed Enterprise Edition installation, configure a new Secrets Manager (Vault, AWS, etc.). You will need to open your network for the Secrets Manager connection.
+
+### Updating Harness
+
+**Do not upgrade Harness past 4 major releases.** Instead, upgrade through each interim release until you reach the latest release.
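The "no more than 4 major releases per upgrade" rule can be sketched as a small planning helper. This is illustrative only; the release numbers are hypothetical and not a Harness versioning scheme:

```shell
# Plan interim upgrade targets so no single hop exceeds 4 major releases.
upgrade_hops() {
  local current=$1 target=$2 jump=${3:-4} hops=""
  while [ "$current" -lt "$target" ]; do
    current=$(( current + jump > target ? target : current + jump ))
    hops="$hops $current"
  done
  echo "${hops# }"
}

# Going from (hypothetical) release 70 to 79 takes three upgrades:
upgrade_hops 70 79   # prints: 74 78 79
```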
A best practice is to upgrade Harness once a month. Follow these steps to update your Self-Managed Enterprise Edition installation. + +The steps are very similar to how you installed Harness initially. + +For more information, see [Updating an Embedded Cluster](https://kots.io/kotsadm/updating/updating-embedded-cluster/) from KOTS. + +#### Disconnected (air gap) + +The following steps require a private registry, just like the initial installation of Self-Managed Enterprise Edition. + +##### Upgrade Harness + +1. Download the latest release from Harness. +2. Run the following command on the VM(s) hosting Harness, replacing the placeholders: + + ``` + kubectl kots upstream upgrade harness \ + --airgap-bundle .airgap> \ + --kotsadm-namespace harness-kots \ + -n default + ``` + +##### Upgrade embedded Kubernetes cluster and KOTS + +1. Download the latest version of Self-Managed Enterprise Edition: + + ``` + curl -SL -o harnesskurl.tar.gz https://kurl.sh/bundle/harness.tar.gz + ``` + +2. Move the tar.gz file to the disconnected VMs. +3. On each VM, run the following command to update Harness Self-Managed Enterprise Edition: + + ``` + tar xzvf harnesskurl.tar.gz + cat install.sh | sudo bash -s airgap + ``` + +#### Connected + +The following steps require a secure connection to the Internet, just like the initial installation of Harness Self-Managed Enterprise Edition. + +##### Upgrade Harness + +* Run the following command on the VMs hosting Harness Self-Managed Enterprise Edition: + + ``` + kubectl kots upstream upgrade harness -n harness + ``` + +##### Upgrade embedded Kubernetes cluster and KOTS + +* Run the following command on the VMs hosting Harness Self-Managed Enterprise Edition: + + ``` + curl -sSL https://kurl.sh/harness | sudo bash + ``` + +### Monitoring Harness + +Harness monitoring is performed using the built-in monitoring tools.
+ +![](./static/virtual-machine-on-prem-installation-guide-24.png) + +When you installed Harness, you were provided with connection information for the Prometheus, Grafana, and Alertmanager ports and passwords: + +``` +The UIs of Prometheus, Grafana and Alertmanager have been exposed on NodePorts 30900, 30902 and 30903 respectively. +To access Grafana use the generated user:password of admin:RF1KuqreN . +``` + +For steps on using the monitoring tools, see [Prometheus](https://kots.io/kotsadm/monitoring/prometheus/) from KOTS. + +### License expired + +If your license has expired, you will see something like the following: + +![](./static/virtual-machine-on-prem-installation-guide-25.png) + +Contact your Harness Customer Success representative or [support@harness.io](mailto:support@harness.io). + +### Install NextGen on existing FirstGen + +This section assumes you have a Self-Managed Enterprise Edition FirstGen installation set up and running following the steps earlier in this guide (beginning with [Step 1: Set up VM Requirements](#step-1-set-up-vm-requirements)). + +Now you can add Self-Managed Enterprise Edition NextGen as a new application to your FirstGen installation. + +1. Log into your Self-Managed Enterprise Edition FirstGen KOTS admin tool. +2. Click **Config**. +3. Record all of the FirstGen settings. You will need to use these exact same settings when setting up Self-Managed Enterprise Edition NextGen. +If you want to change settings, change them and then record them so you can use them during the NextGen installation. +4. Click **Add a new application**. + +![](./static/virtual-machine-on-prem-installation-guide-26.png) + +5. Add the Self-Managed Enterprise Edition NextGen license file you received from Harness Support, and then click **Upload license**. + +![](./static/virtual-machine-on-prem-installation-guide-27.png) + +6.
Depending on whether your Self-Managed Enterprise Edition FirstGen installation is Disconnected or Connected, follow the installation steps described here: + * [Option 1: Disconnected Installation](#option-1-disconnected-installation) + * [Option 2: Connected Installation](#option-2-connected-installation) + + When you are done, you'll be on the **Configure HarnessNG** page. This is the standard configuration page you followed when you set up Self-Managed Enterprise Edition FirstGen in [Step 3: Configure Harness](#step-3-configure-harness). +7. Enter the exact same configuration options as your FirstGen installation. + Ensure you include your **Advanced Configuration** settings. + Ensure you use the exact same **Scheme** you used in FirstGen (HTTP or HTTPS). + The **Load Balancer IP Address** setting does not appear because Self-Managed Enterprise Edition NextGen is simply a new application added onto Self-Managed Enterprise Edition FirstGen. NextGen will use the exact same **Load Balancer IP Address** setting by default. +8. Click **Continue** at the bottom of the page. + Harness will perform pre-flight checks. +9. Click **Continue**. + Harness is deployed in a few minutes. + When Self-Managed Enterprise Edition NextGen is ready, you will see it listed as **Ready**: + + ![](./static/virtual-machine-on-prem-installation-guide-28.png) +10. In a new browser tab, go to the following URL, replacing `` with the URL you entered in the **Application URL** setting in the KOTS admin console: + + `/auth/#/signup` + +For example: + +`http://harness.mycompany.com/auth/#/signup` + +The Harness sign-up page appears. + +![](./static/virtual-machine-on-prem-installation-guide-29.png) + +Sign up with a new account and then sign in. + +![](./static/virtual-machine-on-prem-installation-guide-30.png) + +If you are familiar with Harness, you can skip [Learn Harness' Key Concepts](../../getting-started/learn-harness-key-concepts.md).
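The sign-up URL pattern used in step 10 can also be produced programmatically, for example in a provisioning script. A minimal sketch (the `signupUrl` helper and the hostname are illustrative, not part of Harness):

```javascript
// Build the Harness sign-up URL from the Application URL configured in KOTS.
// Trailing slashes are stripped so the result never contains "//auth".
function signupUrl(applicationUrl) {
  return `${applicationUrl.replace(/\/+$/, "")}/auth/#/signup`;
}

console.log(signupUrl("http://harness.mycompany.com/"));
// http://harness.mycompany.com/auth/#/signup
```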
+ +Try the [NextGen Quickstarts](../../getting-started/quickstarts.md). + +### Notes + +Self-Managed Enterprise Edition installations do not currently support the Harness Helm Delegate. + diff --git a/docusaurus.config.js b/docusaurus.config.js index af8c1fe338e..3c4a9819c58 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -1,7 +1,7 @@ // @ts-check // Note: type annotations allow type checking and IDEs autocompletion -const lightCodeTheme = require("prism-react-renderer/themes/github"); +// const lightCodeTheme = require("prism-react-renderer/themes/github"); const darkCodeTheme = require("prism-react-renderer/themes/dracula"); const path = require("path"); @@ -35,16 +35,32 @@ const config = { /** @type {import('@docusaurus/preset-classic').Options} */ ({ docs: { - path: ".", + path: "docs", sidebarPath: require.resolve("./sidebars.js"), editUrl: "https://github.com/harness/developer-hub/tree/main", // /tree/main/packages/create-docusaurus/templates/shared/ - include: ["tutorials/**/*.{md, mdx}", "docs/**/*.{md, mdx}"], + // include: ["tutorials/**/*.{md, mdx}", "docs/**/*.{md, mdx}"], exclude: ["**/shared/**", "**/static/**"], - routeBasePath: "/", //CHANGE HERE + routeBasePath: "docs", //CHANGE HERE }, // blog: { - // showReadingTime: true, - // editUrl: "https://github.com/harness/developer-hub/tree/main", // /tree/main/packages/create-docusaurus/templates/shared/ + // // showReadingTime: true, + // editUrl: "https://github.com/harness/developer-hub/tree/main", + // blogTitle: "Release Notes", + // blogDescription: "Harness Platform Release Notes", + // postsPerPage: "ALL", + // blogSidebarTitle: "All Release Notes", + // blogSidebarCount: "ALL", + // feedOptions: { + // type: "all", + // copyright: `Copyright © ${new Date().getFullYear()} Harness, Inc.`, + // }, + // // URL route for the blog section of your site. + // // *DO NOT* include a trailing slash. 
+ // routeBasePath: "release-notes", + // // Path to data on filesystem relative to site dir. + // path: "release-notes", + // include: ["**/*.{md,mdx}"], + // exclude: ["**/shared/**", "**/static/**"], // }, theme: { customCss: require.resolve("./src/css/custom.css"), @@ -75,16 +91,19 @@ const config = { href: "#", }, { - type: "search", + // type: "search", + // position: "right", + // className: "searchBar", + // use customized coveo search on sidebar + type: "custom-coveo-search", position: "right", - className: "searchBar", }, { position: "right", type: "dropdown", label: "Tutorials", items: [ - { + { // type: "doc", label: "All Tutorials", to: "tutorials/get-started", @@ -141,7 +160,7 @@ items: [ { label: "Get Started", - to: "tutorials/get-started", + to: "docs/getting-started", }, { label: "Continuous Integration", @@ -197,6 +216,58 @@ }, ], }, + { + // to: "release-notes", + label: "Release Notes", + position: "right", + type: "dropdown", + items: [ + { + label: "What's New", + to: "release-notes/whats-new", + }, + { + label: "Early Access", + to: "release-notes/early-access", + }, + { + label: "Continuous Integration", + to: "release-notes/continuous-integration", + }, + { + label: "Continuous Delivery", + to: "release-notes/continuous-delivery", + }, + { + label: "Feature Flags", + to: "release-notes/feature-flags", + }, + { + label: "Cloud Cost Management", + to: "release-notes/cloud-cost-management", + }, + { + label: "Service Reliability Management", + to: "release-notes/service-reliability-management", + }, + { + label: "Security Testing Orchestration", + to: "release-notes/security-testing-orchestration", + }, + { + label: "Chaos Engineering", + to: "release-notes/chaos-engineering", + }, + { + label: "Harness Platform", + to: "release-notes/platform", + }, + { + label: "Harness FirstGen", + to: "release-notes/first-gen", + }, + ], + }, { position: "right", href:
"https://join.slack.com/t/harnesscommunity/shared_invite/zt-y4hdqh7p-RVuEQyIl5Hcx4Ck8VCvzBw", @@ -318,6 +389,7 @@ const config = { theme: darkCodeTheme, // lightCodeTheme, darkTheme: darkCodeTheme, }, + /* algolia: { // The application ID provided by Algolia appId: "HPP2NHSWS8", @@ -341,6 +413,7 @@ const config = { //... other Algolia params }, + */ colorMode: { defaultMode: "light", disableSwitch: true, @@ -360,8 +433,67 @@ const config = { oneTrust: { dataDomainScript: "59633b83-e34c-443c-a807-63232ce145e5", }, + rss: { + rssPath: "release-notes/rss.xml", + rssTitle: "Harness Release Notes", + copyright: "Harness Inc.", + rssDescription: "Harness Release Notes", + }, }), plugins: [ + [ + "@docusaurus/plugin-client-redirects", + { + // fromExtensions: ['html', 'htm'], // /myPage.html -> /myPage + // toExtensions: ['exe', 'zip'], // /myAsset -> /myAsset.zip (if latter exists) + redirects: [ + { + from: "/release-notes", + to: "/release-notes/whats-new", + }, + /* // Redirect from multiple old paths to the new path + { + to: '/docs/newDoc2', + from: ['/docs/oldDocFrom2019', '/docs/legacyDocFrom2016'], + }, */ + ], + /* + createRedirects(existingPath) { + if (existingPath.includes('/community')) { + // Redirect from /docs/team/X to /community/X and /docs/support/X to /community/X + return [ + existingPath.replace('/community', '/docs/team'), + existingPath.replace('/community', '/docs/support'), + ]; + } + return undefined; // Return a falsy value: no redirect created + }, + */ + }, + ], + [ + "@docusaurus/plugin-content-docs", + { + id: "tutorials", + path: "tutorials", + routeBasePath: "tutorials", + exclude: ["**/shared/**", "**/static/**"], + sidebarPath: require.resolve("./sidebars-tutorials.js"), + editUrl: "https://github.com/harness/developer-hub/tree/main", + // ... 
other options + }, + ], + [ + path.resolve(__dirname, "./plugins/docs-rss-plugin"), + { + id: "release-notes", + path: "release-notes", + routeBasePath: "release-notes", + exclude: ["**/shared/**", "**/static/**"], + sidebarPath: require.resolve("./sidebars-release-notes.js"), + editUrl: "https://github.com/harness/developer-hub/tree/main", + }, + ], "docusaurus-plugin-sass", path.join(__dirname, "/plugins/hotjar-plugin"), path.join(__dirname, "/plugins/onetrust-plugin"), diff --git a/package.json b/package.json index 06b503b0766..ee54251afe1 100644 --- a/package.json +++ b/package.json @@ -23,6 +23,7 @@ "dependencies": { "@docusaurus/core": "^2.2.0", "@docusaurus/cssnano-preset": "2.2.0", + "@docusaurus/plugin-client-redirects": "^2.2.0", "@docusaurus/plugin-debug": "2.2.0", "@docusaurus/plugin-google-analytics": "2.2.0", "@docusaurus/plugin-google-gtag": "2.2.0", @@ -33,6 +34,7 @@ "@mdx-js/react": "^1.6.22", "clsx": "^1.2.1", "docusaurus-plugin-sass": "^0.2.2", + "fs-extra": "^11.1.0", "prism-react-renderer": "^1.3.5", "rc-tooltip": "^5.2.2", "react": "^17.0.2", @@ -43,12 +45,12 @@ "@docusaurus/module-type-aliases": "2.2.0", "@tsconfig/docusaurus": "^1.0.5", "dictionary-en": "^3.2.0", + "eslint": "^8.25.0", + "eslint-plugin-react": "^7.31.10", "textlint": "^12.2.2", "textlint-rule-no-todo": "^2.0.1", "textlint-rule-spelling": "^0.3.0", "textlint-rule-write-good": "^2.0.0", - "eslint": "^8.25.0", - "eslint-plugin-react": "^7.31.10", "typescript": "^4.7.4" }, "browserslist": { @@ -66,4 +68,4 @@ "engines": { "node": ">=16.14" } -} \ No newline at end of file +} diff --git a/plugins/docs-rss-plugin/index.js b/plugins/docs-rss-plugin/index.js new file mode 100644 index 00000000000..212ff53ec2f --- /dev/null +++ b/plugins/docs-rss-plugin/index.js @@ -0,0 +1,141 @@ +const fs = require("fs-extra"); +const path = require("path"); +// import { LoadContext, Plugin } from "@docusaurus/types"; +const docsPluginExports = require("@docusaurus/plugin-content-docs"); + +const { 
load: cheerioLoad } = require("cheerio"); +const { normalizeUrl, readOutputHTMLFile } = require("@docusaurus/utils"); + +const { Feed } = require("feed"); + +const docsPlugin = docsPluginExports.default; + +async function docsPluginEnhanced(context, options) { + const docsPluginInstance = await docsPlugin(context, options); + + const { siteConfig } = context; + const { themeConfig, url: siteUrl, baseUrl, title, favicon } = siteConfig; + const { rss } = themeConfig || {}; + + if (!rss) { + throw new Error( + `You need to specify 'rss' object in 'themeConfig' with 'rssPath' field in it` + ); + } + + const { rssPath, rssTitle, copyright, rssDescription } = rss; + + if (!rssPath) { + throw new Error( + "You specified the `rss` object in `themeConfig` but the `rssPath` field was missing." + ); + } + + return { + ...docsPluginInstance, + + /* + async contentLoaded({ content, actions }) { + // Create default plugin pages + await docsPluginInstance.contentLoaded({ content, actions }); + + // Create your additional pages + console.log("...contentLoaded...", content); + // const {blogPosts, blogTags} = content; + }, + */ + + async postBuild(params) { + const { outDir, content, siteConfig } = params; + + if ( + !content || + !content.loadedVersions || + content.loadedVersions.length < 1 || + content.loadedVersions[0].docs.length < 1 + ) { + return null; + } + const { routeBasePath } = options; + const docsBaseUrl = normalizeUrl([siteUrl, baseUrl, routeBasePath]); + + const docs = content.loadedVersions[0].docs; + + const feed = new Feed({ + id: docsBaseUrl, + title: rssTitle ?? `${title} Release Notes`, + // updated, + // language: feedOptions.language ?? locale, + link: docsBaseUrl, + description: rssDescription ?? `${siteConfig.title} Release Notes`, + favicon: favicon + ? 
normalizeUrl([siteUrl, baseUrl, favicon]) + : undefined, + copyright: copyright, + }); + + function toFeedAuthor(author) { + return { name: author.name, link: author.url, email: author.email }; + } + + await Promise.all( + docs.map(async (post) => { + const { + id, + // metadata: { + title: metadataTitle, + permalink, + frontMatter: { date, authors = "Harness", tags }, + description, + // }, + } = post; + + const content = await readOutputHTMLFile( + permalink.replace(siteConfig.baseUrl, ""), + outDir, + siteConfig.trailingSlash + ); + const $ = cheerioLoad(content); + + const feedItem = { + title: metadataTitle, + id, + link: normalizeUrl([siteUrl, permalink]), + date: new Date(date), + description, + // Atom feed demands the "term", while other feeds use "name" + category: tags + ? (Array.isArray(tags) ? tags : []).map((tag) => + tag + ? { + name: tag, + term: tag, + } + : null + ) + : null, + content: $(".theme-doc-markdown").html() || null, + }; + + // json1() method takes the first item of authors array + // it causes an error when authors array is empty + const feedItemAuthors = ( + Array.isArray(authors) ? authors : [authors] + ).map(toFeedAuthor); + if (feedItemAuthors.length > 0) { + feedItem.author = feedItemAuthors; + } + + return feedItem; + }) + ).then((items) => items.forEach(feed.addItem)); + + fs.outputFile(path.join(outDir, rssPath), feed.rss2()); + }, + }; +} + +module.exports = { + ...docsPluginExports, + default: docsPluginEnhanced, +}; diff --git a/release-notes/chaos-engineering/2020-10-26-version-77110.md b/release-notes/chaos-engineering/2020-10-26-version-77110.md new file mode 100644 index 00000000000..89309b46ab2 --- /dev/null +++ b/release-notes/chaos-engineering/2020-10-26-version-77110.md @@ -0,0 +1,13 @@ +--- +# title: October 26, 2020, version 77110 +title: Chaos Engineering Release Notes is coming soon +date: 2020-10-18T10:00 +tags: [nextGen, "chaos engineering"] +slug: coming-soon +--- + + + +## Coming soon... 
+ +Release Notes for Chaos Engineering will be available here soon. diff --git a/release-notes/cloud-cost-management/2020-10-27-version-77109.md b/release-notes/cloud-cost-management/2020-10-27-version-77109.md new file mode 100644 index 00000000000..9174078bc28 --- /dev/null +++ b/release-notes/cloud-cost-management/2020-10-27-version-77109.md @@ -0,0 +1,13 @@ +--- +# title: October 27, 2020, version 77109 +title: Cloud Cost Management Release Notes is coming soon +date: 2020-10-18T10:00 +tags: [FirstGen, "cloud cost management"] +slug: coming-soon +--- + + + +## Coming soon... + +Release Notes for Cloud Cost Management will be available here soon. diff --git a/release-notes/continuous-delivery/2020-10-21-version-77221.md b/release-notes/continuous-delivery/2020-10-21-version-77221.md new file mode 100644 index 00000000000..85cc221588e --- /dev/null +++ b/release-notes/continuous-delivery/2020-10-21-version-77221.md @@ -0,0 +1,13 @@ +--- +# title: October 21, 2022, version 77221 +title: Continuous Delivery Release Notes is coming soon +date: 2020-10-21T10:00 +tags: [FirstGen, "continuous delivery"] +slug: coming-soon +--- + + + +## Coming soon... + +Release Notes for Continuous Delivery will be available here soon. diff --git a/release-notes/continuous-integration/2020-10-18-version-77116.md b/release-notes/continuous-integration/2020-10-18-version-77116.md new file mode 100644 index 00000000000..3756ccb3229 --- /dev/null +++ b/release-notes/continuous-integration/2020-10-18-version-77116.md @@ -0,0 +1,13 @@ +--- +# title: October 18, 2020, version 77116 +title: Continuous Integration Release Notes is coming soon +date: 2020-10-18T10:00 +tags: [FirstGen, "continuous integration"] +slug: coming-soon +--- + + + +## Coming soon... + +Release Notes for Continuous Integration will be available here soon. 
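The `tags` arrays in these frontmatter blocks are what the docs-rss-plugin shown earlier maps into feed categories. A simplified standalone sketch of that mapping (mirroring the plugin's logic, not the plugin itself):

```javascript
// Simplified version of the plugin's tag-to-category mapping.
// Atom feeds expect "term" while RSS/JSON feeds use "name", so both are set.
function toCategories(tags) {
  if (!tags) return null;
  return (Array.isArray(tags) ? tags : []).map((tag) =>
    tag ? { name: tag, term: tag } : null
  );
}

console.log(toCategories(["NextGen", "feature flags"]));
```

A non-array `tags` value yields an empty category list rather than an error, which is why the plugin tolerates malformed frontmatter.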
diff --git a/release-notes/early-access.md b/release-notes/early-access.md new file mode 100644 index 00000000000..2693af97e37 --- /dev/null +++ b/release-notes/early-access.md @@ -0,0 +1,10 @@ +--- +title: Early Access +sidebar_position: 2 +--- + +# Early Access + +## Coming soon... + +Release Notes will be available here soon. diff --git a/release-notes/feature-flags/2020-10-19-version-77115.md b/release-notes/feature-flags/2020-10-19-version-77115.md new file mode 100644 index 00000000000..9188adb2024 --- /dev/null +++ b/release-notes/feature-flags/2020-10-19-version-77115.md @@ -0,0 +1,13 @@ +--- +# title: October 19, 2020, version 77115 +title: Feature Flags Release Notes is coming soon +date: 2020-10-18T10:00 +tags: [NextGen, "feature flags"] +slug: coming-soon +--- + + + +## Coming soon... + +Release Notes for Feature Flags will be available here soon. diff --git a/release-notes/first-gen/2020-10-20-version-77114.md b/release-notes/first-gen/2020-10-20-version-77114.md new file mode 100644 index 00000000000..9012841a6dc --- /dev/null +++ b/release-notes/first-gen/2020-10-20-version-77114.md @@ -0,0 +1,13 @@ +--- +# title: October 20, 2020, version 77114 +title: Harness FirstGen Release Notes is coming soon +date: 2020-10-18T10:00 +tags: [FirstGen] +slug: coming-soon +--- + + + +## Coming soon... + +Release Notes for Harness FirstGen will be available here soon. diff --git a/release-notes/platform/2020-10-22-version-77113.md b/release-notes/platform/2020-10-22-version-77113.md new file mode 100644 index 00000000000..584529a3709 --- /dev/null +++ b/release-notes/platform/2020-10-22-version-77113.md @@ -0,0 +1,13 @@ +--- +# title: October 22, 2020, version 77113 +title: Harness Platform Release Notes is coming soon +date: 2020-10-18T10:00 +tags: [NextGen, "platform"] +slug: coming-soon +--- + + + +## Coming soon... + +Release Notes for Harness Platform will be available here soon. 
diff --git a/release-notes/security-testing-orchestration/2020-10-23-version-77112.md b/release-notes/security-testing-orchestration/2020-10-23-version-77112.md new file mode 100644 index 00000000000..abed022f882 --- /dev/null +++ b/release-notes/security-testing-orchestration/2020-10-23-version-77112.md @@ -0,0 +1,13 @@ +--- +# title: October 23, 2020, version 77112 +title: Security Testing Orchestration Release Notes is coming soon +date: 2020-10-18T10:00 +tags: [NextGen, "security testing orchestration"] +slug: coming-soon +--- + + + +## Coming soon... + +Release Notes for Security Testing Orchestration will be available here soon. diff --git a/release-notes/service-reliability-management/2020-10-24-version-77111.md b/release-notes/service-reliability-management/2020-10-24-version-77111.md new file mode 100644 index 00000000000..9dae7ba6c4b --- /dev/null +++ b/release-notes/service-reliability-management/2020-10-24-version-77111.md @@ -0,0 +1,13 @@ +--- +# title: October 24, 2020, version 77111 +title: Service Reliability Management Release Notes is coming soon +date: 2020-10-18T10:00 +tags: [NextGen, "service reliability management"] +slug: coming-soon +--- + + + +## Coming soon... + +Release Notes for Service Reliability Management will be available here soon. diff --git a/release-notes/whats-new.md b/release-notes/whats-new.md new file mode 100644 index 00000000000..4a01dfbc990 --- /dev/null +++ b/release-notes/whats-new.md @@ -0,0 +1,47 @@ +--- +title: What's new +sidebar_position: 1 +# slug: / +--- + +# What's new + +Learn about the new features that are generally available in all Harness modules. + +Release Notes will be available here soon. + +In the meantime, for information about new features prior to July 2022, refer to [**Harness SaaS Release Notes**](https://docs.harness.io/article/7zkchy5lhj-harness-saa-s-release-notes-2022). 
+ +## November 30, 2022 + +### ![](../static/img/icon_ci_m.svg) Continuous Integration + +- Release Notes for Continuous Integration will be available here soon. + +### ![](../static/img/icon_cd_m.svg) Continuous Delivery + +- Release Notes for Continuous Delivery will be available here soon. + +### ![](../static/img/icon_ff_m.svg) Feature Flags + +- Release Notes for Feature Flags will be available here soon. + +### ![](../static/img/icon_ccm_m.svg) Cloud Cost Management + +- Release Notes for Cloud Cost Management will be available here soon. + +### ![](../static/img/icon_srm_m.svg) Service Reliability Management + +- Release Notes for Service Reliability Management will be available here soon. + +### ![](../static/img/icon_sto_m.svg) Security Testing Orchestration + +- Release Notes for Security Testing Orchestration will be available here soon. + +### ![](../static/img/icon_ce_m.svg) Chaos Engineering + +- Release Notes for Chaos Engineering will be available here soon. + +### ![](../static/img/icon_harness_m.svg) Harness Platform + +- Release Notes for Harness Platform will be available here soon. 
diff --git a/sidebars-release-notes.js b/sidebars-release-notes.js new file mode 100644 index 00000000000..ad9b74b222e --- /dev/null +++ b/sidebars-release-notes.js @@ -0,0 +1,192 @@ +// @ts-check + +/** @type {import('@docusaurus/plugin-content-docs').SidebarsConfig} */ + +const sidebars = { + releaseNotes: [ + // Release Notes Parent + { + type: "category", + label: "Release Notes", + link: { + // type: "generated-index", + type: "doc", + id: "whats-new", + }, + collapsed: true, + items: [ + { + type: "doc", + label: "What's New", + id: "whats-new", + // link: { + // type: "doc", + // id: "whats-new", + // }, + // collapsed: true, + // items: [], + }, + { + type: "doc", + label: "Early Access", + id: "early-access", + // link: { + // type: "doc", + // id: "whats-new", + // }, + // collapsed: true, + // items: [], + }, + { + type: "category", + label: "Continuous Integration", + link: { + type: "generated-index", + slug: "continuous-integration", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "continuous-integration", + }, + ], + }, + { + type: "category", + label: "Continuous Delivery", + link: { + type: "generated-index", + slug: "continuous-delivery", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "continuous-delivery", + }, + ], + }, + + { + type: "category", + label: "Feature Flags", + link: { + type: "generated-index", + slug: "feature-flags", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "feature-flags", + }, + ], + }, + { + type: "category", + label: "Cloud Cost Management", + link: { + type: "generated-index", + slug: "cloud-cost-management", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "cloud-cost-management", + }, + ], + }, + { + type: "category", + label: "Service Reliability Management", + link: { + type: "generated-index", + slug: "service-reliability-management", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: 
"service-reliability-management", + }, + ], + }, + { + type: "category", + label: "Security Testing Orchestration", + link: { + type: "generated-index", + slug: "security-testing-orchestration", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "security-testing-orchestration", + }, + ], + }, + { + type: "category", + label: "Chaos Engineering", + link: { + type: "generated-index", + slug: "chaos-engineering", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "chaos-engineering", + }, + ], + }, + { + type: "category", + label: "Harness Platform", + link: { + type: "generated-index", + slug: "platform", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "platform", + }, + ], + }, + { + type: "category", + label: "Harness FirstGen", + link: { + type: "generated-index", + slug: "first-gen", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "first-gen", + }, + ], + }, + ], + }, + { + type: "link", + label: "Subscribe RSS feed", + href: "pathname:///release-notes/rss.xml", + className: "sidebar-item-rss", + customProps: { + target: "_blank", + }, + }, + + //Additional Items in this parent can go here. 
+ ], +}; + +module.exports = sidebars; diff --git a/sidebars-tutorials.js b/sidebars-tutorials.js new file mode 100644 index 00000000000..2fbb95b7a28 --- /dev/null +++ b/sidebars-tutorials.js @@ -0,0 +1,411 @@ +// @ts-check + +/** @type {import('@docusaurus/plugin-content-docs').SidebarsConfig} */ + +const sidebars = { + allcontent: [ + //Tutorial Parent + { + type: "category", + label: "Tutorials", + link: { + type: "doc", + id: "get-started", + }, + collapsed: false, + items: [ + // Build and Test Code + { + type: "category", + label: "Build & Test Code", + link: { + type: "doc", + id: "build-code", + }, + collapsed: true, + items: [{ type: "autogenerated", dirName: "build-code" }], + }, + + // Deploy Services + { + type: "category", + label: "Deploy Services", + link: { + type: "doc", + id: "deploy-services", + }, + collapsed: true, + items: [{ type: "autogenerated", dirName: "deploy-services" }], + }, + + // Manage Feature Flags - feature-flags + { + type: "category", + label: "Manage Feature Flags", + link: { + type: "doc", + id: "manage-feature-flags", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "manage-feature-flags", + }, + ], + }, + + // Manage Cloud Costs - cloud-cost-management + { + type: "category", + label: "Manage Cloud Costs", + link: { + type: "doc", + id: "manage-cloud-costs", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "manage-cloud-costs", + }, + ], + }, + + // Manage Service Reliability - service-reliability-management + { + type: "category", + label: "Manage Service Reliability", + link: { + type: "doc", + id: "manage-service-reliability", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "manage-service-reliability", + }, + ], + }, + + // Orchestrate Security Tests - security-testing-orchestration + { + type: "category", + label: "Orchestrate Security Tests", + link: { + type: "doc", // "generated-index", + id: "orchestrate-security-tests", + }, + 
collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "orchestrate-security-tests", + }, + ], + }, + + // Run Chaos Experiments - chaos-engineering + { + type: "category", + label: "Run Chaos Experiments", + link: { + type: "doc", + id: "run-chaos-experiments", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "run-chaos-experiments", + }, + ], + }, + + // Platform - platform + { + type: "category", + label: "Administer Harness Platform", + link: { + type: "doc", + id: "platform", + }, + collapsed: true, + items: [{ type: "autogenerated", dirName: "platform" }], + }, + + //Additional Items in this parent can go here. + ], + }, + // Documentation Parent + /* + { + type: "category", + label: "Documentation", + link: { + type: "generated-index", + }, + collapsed: true, + items: [ + { + // type: "doc", + // label: "Continuous Integration", + // id: "docs/continuous-integration", + type: "category", + label: "Continuous Integration", + link: { + type: "generated-index", + slug: "/docs/continuous-integration", + // // Uncomment this block while we have a landing page for module docs + // type: "doc", + // id: "docs/continuous-integration", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "docs/continuous-integration", + }, + ], + }, + { + // type: "doc", + // label: "Continuous Delivery", + // id: "docs/continuous-delivery", + type: "category", + label: "Continuous Delivery & GitOps", + link: { + type: "generated-index", + slug: "/docs/continuous-delivery", + // // Uncomment this block while we have a landing page for module docs + // type: "doc", + // id: "docs/continuous-delivery", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "docs/continuous-delivery", + }, + ], + }, + { + // type: "doc", + // label: "Feature Flags", + // id: "docs/feature-flags", + type: "category", + label: "Feature Flags", + link: { + type: "generated-index", + slug: "/docs/feature-flags", + // // 
Uncomment this block while we have a landing page for module docs + // type: "doc", + // id: "docs/feature-flags", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "docs/feature-flags", + }, + ], + }, + { + // type: "doc", + // label: "Cloud Cost Management", + // id: "docs/cloud-cost-management", + type: "category", + label: "Cloud Cost Management", + link: { + type: "generated-index", + slug: "/docs/cloud-cost-management", + // // Uncomment this block while we have a landing page for module docs + // type: "doc", + // id: "docs/cloud-cost-management", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "docs/cloud-cost-management", + }, + ], + }, + { + // type: "doc", + // label: "Service Reliability Management", + // id: "docs/service-reliability-management", + type: "category", + label: "Service Reliability Management", + link: { + type: "generated-index", + slug: "/docs/service-reliability-management", + // // Uncomment this block while we have a landing page for module docs + // type: "doc", + // id: "docs/security-testing-orchestration", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "docs/service-reliability-management", + }, + ], + }, + { + // type: "doc", + // label: "Security Testing Orchestration", + // id: "docs/security-testing-orchestration", + type: "category", + label: "Security Testing Orchestration", + link: { + type: "generated-index", + slug: "/docs/security-testing-orchestration", + // // Uncomment this block while we have a landing page for module docs + // type: "doc", + // id: "docs/security-testing-orchestration", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "docs/security-testing-orchestration", + }, + ], + }, + { + // type: "doc", + // label: "Chaos Engineering", + // id: "docs/chaos-engineering", + type: "category", + label: "Chaos Engineering", + link: { + type: "generated-index", + slug: "/docs/chaos-engineering", + // // 
Uncomment this block while we have a landing page for module docs + // type: "doc", + // id: "docs/chaos-engineering", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "docs/chaos-engineering", + }, + ], + }, + { + // type: "doc", + // label: "Harness Platform", + // id: "docs/platform", + type: "category", + label: "Harness Platform", + link: { + type: "generated-index", + slug: "/docs/platform", + // // Uncomment this block while we have a landing page for module docs + // type: "doc", + // id: "docs/platform", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "docs/platform", + }, + ], + }, + { + // type: "doc", + // label: "Self-Managed Enterprise Edition", + // id: "docs/self-managed-enterprise-edition", + type: "category", + label: "Self-Managed Enterprise Edition", + link: { + type: "generated-index", + slug: "/docs/self-managed-enterprise-edition", + // // Uncomment this block while we have a landing page for module docs + // type: "doc", + // id: "docs/self-managed-enterprise-edition", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "docs/self-managed-enterprise-edition", + }, + ], + }, + { + // type: "doc", + // label: "FirstGen docs", + // id: "docs/first-gen", + type: "category", + label: "Harness FirstGen", + link: { + type: "generated-index", + slug: "/docs/first-gen", + // // Uncomment this block while we have a landing page for module docs + // type: "doc", + // id: "docs/first-gen", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "docs/first-gen", + }, + ], + }, + { + // type: "doc", + // label: "FAQs", + // id: "docs/frequently-asked-questions", + type: "category", + label: "FAQs", + link: { + type: "generated-index", + slug: "/docs/frequently-asked-questions", + // // Uncomment this block while we have a landing page for module docs + // type: "doc", + // id: "docs/frequently-asked-questions", + }, + collapsed: true, + items: [ + { + type: "
"autogenerated", + dirName: "docs/frequently-asked-questions", + }, + ], + }, + { + // type: "doc", + // label: "Troubleshooting", + // id: "docs/troubleshooting", + type: "category", + label: "Troubleshooting", + link: { + type: "generated-index", + slug: "/docs/troubleshooting", + // // Uncomment this block while we have a landing page for module docs + // type: "doc", + // id: "docs/troubleshooting", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "docs/troubleshooting", + }, + ], + }, + ], + }, + */ + + //Additional Items in this parent can go here. + ], +}; + +module.exports = sidebars; diff --git a/sidebars.js b/sidebars.js index e639c339186..397a17b8297 100644 --- a/sidebars.js +++ b/sidebars.js @@ -5,6 +5,7 @@ const sidebars = { allcontent: [ //Tutorial Parent + /* { type: "category", label: "Tutorials", @@ -140,6 +141,7 @@ const sidebars = { //Additional Items in this parent can go here. ], }, + */ // Documentation Parent { type: "category", @@ -149,27 +151,42 @@ const sidebars = { }, collapsed: true, items: [ + { + type: "category", + label: "Getting Started", + link: { + type: "generated-index", + slug: "getting-started", + }, + collapsed: true, + items: [ + { + type: "autogenerated", + dirName: "getting-started", + }, + ], + }, { /* type: "doc", label: "Continuous Integration", - id: "docs/continuous-integration", + id: "continuous-integration", */ type: "category", label: "Continuous Integration", link: { type: "generated-index", - slug: "/docs/continuous-integration", + slug: "continuous-integration", /* Uncomment this block while we have a landing page for module docs type: "doc", - id: "docs/continuous-integration", + id: "continuous-integration", */ }, collapsed: true, items: [ { type: "autogenerated", - dirName: "docs/continuous-integration", + dirName: "continuous-integration", }, ], }, @@ -177,47 +194,47 @@ const sidebars = { /* type: "doc", label: "Continuous Delivery", - id: "docs/continuous-delivery", + id: 
"continuous-delivery", */ type: "category", label: "Continuous Delivery & GitOps", link: { type: "generated-index", - slug: "/docs/continuous-delivery", + slug: "/continuous-delivery", /* Uncomment this block while we have a landing page for module docs type: "doc", - id: "docs/continuous-delivery", + id: "continuous-delivery", */ }, collapsed: true, items: [ { type: "autogenerated", - dirName: "docs/continuous-delivery", + dirName: "continuous-delivery", }, ], }, { - /* + /* type: "doc", label: "Feature Flags", - id: "docs/feature-flags", + id: "feature-flags", */ type: "category", label: "Feature Flags", link: { type: "generated-index", - slug: "/docs/feature-flags", + slug: "/feature-flags", /* Uncomment this block while we have a landing page for module docs type: "doc", - id: "docs/feature-flags", + id: "feature-flags", */ }, collapsed: true, items: [ { type: "autogenerated", - dirName: "docs/feature-flags", + dirName: "feature-flags", }, ], }, @@ -225,23 +242,23 @@ const sidebars = { /* type: "doc", label: "Cloud Cost Management", - id: "docs/cloud-cost-management", + id: "cloud-cost-management", */ type: "category", label: "Cloud Cost Management", link: { type: "generated-index", - slug: "/docs/cloud-cost-management", + slug: "/cloud-cost-management", /* Uncomment this block while we have a landing page for module docs type: "doc", - id: "docs/cloud-cost-management", + id: "cloud-cost-management", */ }, collapsed: true, items: [ { type: "autogenerated", - dirName: "docs/cloud-cost-management", + dirName: "cloud-cost-management", }, ], }, @@ -249,23 +266,23 @@ const sidebars = { /* type: "doc", label: "Service Reliability Management", - id: "docs/service-reliability-management", + id: "service-reliability-management", */ type: "category", label: "Service Reliability Management", link: { type: "generated-index", - slug: "/docs/service-reliability-management", + slug: "/service-reliability-management", /* Uncomment this block while we have a landing page for 
module docs type: "doc", - id: "docs/security-testing-orchestration", + id: "security-testing-orchestration", */ }, collapsed: true, items: [ { type: "autogenerated", - dirName: "docs/service-reliability-management", + dirName: "service-reliability-management", }, ], }, @@ -273,23 +290,23 @@ const sidebars = { /* type: "doc", label: "Security Testing Orchestration", - id: "docs/security-testing-orchestration", + id: "security-testing-orchestration", */ type: "category", label: "Security Testing Orchestration", link: { type: "generated-index", - slug: "/docs/security-testing-orchestration", + slug: "/security-testing-orchestration", /* Uncomment this block while we have a landing page for module docs type: "doc", - id: "docs/security-testing-orchestration", + id: "security-testing-orchestration", */ }, collapsed: true, items: [ { type: "autogenerated", - dirName: "docs/security-testing-orchestration", + dirName: "security-testing-orchestration", }, ], }, @@ -297,23 +314,23 @@ const sidebars = { /* type: "doc", label: "Chaos Engineering", - id: "docs/chaos-engineering", + id: "chaos-engineering", */ type: "category", label: "Chaos Engineering", link: { type: "generated-index", - slug: "/docs/chaos-engineering", + slug: "/chaos-engineering", /* Uncomment this block while we have a landing page for module docs type: "doc", - id: "docs/chaos-engineering", + id: "chaos-engineering", */ }, collapsed: true, items: [ { type: "autogenerated", - dirName: "docs/chaos-engineering", + dirName: "chaos-engineering", }, ], }, @@ -321,23 +338,23 @@ const sidebars = { /* type: "doc", label: "Harness Platform", - id: "docs/platform", + id: "platform", */ type: "category", label: "Harness Platform", link: { type: "generated-index", - slug: "/docs/platform", + slug: "/platform", /* Uncomment this block while we have a landing page for module docs type: "doc", - id: "docs/platform", + id: "platform", */ }, collapsed: true, items: [ { type: "autogenerated", - dirName: "docs/platform", + 
dirName: "platform", }, ], }, @@ -345,23 +362,23 @@ const sidebars = { /* type: "doc", label: "Self-Managed Enterprise Edition", - id: "docs/self-managed-enterprise-edition", + id: "self-managed-enterprise-edition", */ type: "category", label: "Self-Managed Enterprise Edition", link: { type: "generated-index", - slug: "/docs/self-managed-enterprise-edition", + slug: "/self-managed-enterprise-edition", /* Uncomment this block while we have a landing page for module docs type: "doc", - id: "docs/self-managed-enterprise-edition", + id: "self-managed-enterprise-edition", */ }, collapsed: true, items: [ { type: "autogenerated", - dirName: "docs/self-managed-enterprise-edition", + dirName: "self-managed-enterprise-edition", }, ], }, @@ -369,23 +386,23 @@ const sidebars = { /* type: "doc", label: "FirstGen docs", - id: "docs/first-gen", + id: "first-gen", */ type: "category", label: "Harness FirstGen", link: { type: "generated-index", - slug: "/docs/first-gen", + slug: "/first-gen", /* Uncomment this block while we have a landing page for module docs type: "doc", - id: "docs/first-gen", + id: "first-gen", */ }, collapsed: true, items: [ { type: "autogenerated", - dirName: "docs/first-gen", + dirName: "first-gen", }, ], }, @@ -393,23 +410,23 @@ const sidebars = { /* type: "doc", label: "FAQs", - id: "docs/frequently-asked-questions", + id: "frequently-asked-questions", */ type: "category", label: "FAQs", link: { type: "generated-index", - slug: "/docs/frequently-asked-questions", + slug: "/frequently-asked-questions", /* Uncomment this block while we have a landing page for module docs type: "doc", - id: "docs/frequently-asked-questions", + id: "frequently-asked-questions", */ }, collapsed: true, items: [ { type: "autogenerated", - dirName: "docs/frequently-asked-questions", + dirName: "frequently-asked-questions", }, ], }, @@ -417,23 +434,23 @@ const sidebars = { /* type: "doc", label: "Troubleshooting", - id: "docs/troubleshooting", + id: "troubleshooting", */ type: 
"category", label: "Troubleshooting", link: { type: "generated-index", - slug: "/docs/troubleshooting", + slug: "/troubleshooting", /* Uncomment this block while we have a landing page for module docs type: "doc", - id: "docs/troubleshooting", + id: "troubleshooting", */ }, collapsed: true, items: [ { type: "autogenerated", - dirName: "docs/troubleshooting", + dirName: "troubleshooting", }, ], }, diff --git a/src/components/Feedback/index.module.scss b/src/components/Feedback/index.module.scss index aa3abb6d14c..f1a5f56ee21 100644 --- a/src/components/Feedback/index.module.scss +++ b/src/components/Feedback/index.module.scss @@ -1,6 +1,6 @@ .feedback { position: fixed; - z-index: 99; + z-index: 96; right: 0; bottom: 100px; cursor: pointer; diff --git a/src/components/LandingPage/AllTutorials.tsx b/src/components/LandingPage/AllTutorials.tsx index 7ac4ed97a45..3a52a4a61e8 100755 --- a/src/components/LandingPage/AllTutorials.tsx +++ b/src/components/LandingPage/AllTutorials.tsx @@ -112,7 +112,22 @@ const CIList: FeatureItem[] = [ ribbon: true, type: [docType.Documentation], time: "15 min", - link: "/tutorials/build-code/ci-localstack-service-dependency", + link: "/tutorials/build-code/ci-localstack-background-step", + }, + { + title: "Build and publish a Java HTTP Server", + module: "ci", + Svg: "/img/icon_ci.svg", + description: ( + <> + Build, test, and publish a Docker image for a Java HTTP server application + + + ), + ribbon: true, + type: [docType.Documentation], + time: "20 min", + link: "/tutorials/build-code/ci-java-http-server", }, ]; @@ -312,6 +327,18 @@ const CEList: FeatureItem[] = [ time: "5min", link: "/tutorials/run-chaos-experiments/chaos-experiment-from-blank-canvas", }, + { + title: 'Integration with Harness CD', + module: 'ce', + Svg: "/img/icon_ce.svg", + description: ( + <>Execute a chaos experiment as part of a Harness CD pipeline for continuous resilience. 
+ ), + ribbon: false, + type: [docType.Documentation], + time: '15min', + link: '/tutorials/run-chaos-experiments/integration-with-harness-cd' + }, ]; const PlatformList: FeatureItem[] = [ diff --git a/src/components/LandingPage/ChaosEngineering.tsx b/src/components/LandingPage/ChaosEngineering.tsx index 90a38a9d7b8..475acf596e3 100755 --- a/src/components/LandingPage/ChaosEngineering.tsx +++ b/src/components/LandingPage/ChaosEngineering.tsx @@ -63,6 +63,20 @@ const CEList: FeatureItem[] = [{ type: [docType.Documentation], time: '5min', link: '/tutorials/run-chaos-experiments/chaos-experiment-from-blank-canvas' +}, +{ + title: 'Integration with Harness CD', + module: 'ce', + Svg: "/img/icon_ce.svg", + description: ( + <> + Execute a chaos experiment as part of a Harness CD pipeline for continuous resilience. + + ), + ribbon: false, + type: [docType.Documentation], + time: '15min', + link: '/tutorials/run-chaos-experiments/integration-with-harness-cd' }]; export default function CE() { @@ -122,4 +136,4 @@ export default function CE() { // ); -} \ No newline at end of file +} diff --git a/src/components/LandingPage/ContinuousIntegration.tsx b/src/components/LandingPage/ContinuousIntegration.tsx index a1f6d69f9d2..7a3a7474c2d 100755 --- a/src/components/LandingPage/ContinuousIntegration.tsx +++ b/src/components/LandingPage/ContinuousIntegration.tsx @@ -78,7 +78,22 @@ const CIList: FeatureItem[] = [{ ribbon: true, type: [docType.Documentation], time: '15 min', - link: '/tutorials/build-code/ci-localstack-service-dependency', + link: '/tutorials/build-code/ci-localstack-background-step', +}, +{ + title: "Build and publish a Java HTTP Server", + module: "ci", + Svg: "/img/icon_ci.svg", + description: ( + <> + Build, test, and publish a Docker image for a Java HTTP server application + + + ), + ribbon: true, + type: [docType.Documentation], + time: "20 min", + link: "/tutorials/build-code/ci-java-http-server", }, ]; diff --git a/src/components/NavbarItems/CoveoSearch.js 
b/src/components/NavbarItems/CoveoSearch.js
new file mode 100644
index 00000000000..173db32aad1
--- /dev/null
+++ b/src/components/NavbarItems/CoveoSearch.js
@@ -0,0 +1,217 @@
+/* eslint-disable no-undef */
+import React, { useEffect, useRef, useState } from "react";
+import Head from "@docusaurus/Head";
+import "./CoveoSearch.scss";
+
+// Space => keyCode: 32
+
+const CoveoSearch = () => {
+  const searchBoxEl = useRef(null);
+  const searchResultsEl = useRef(null);
+  const [isCoveoLoaded, setIsCoveoLoaded] = useState(false);
+  const checkCoveo = () => {
+    const coveoJustLoaded = !isCoveoLoaded && !!window.Coveo;
+    if (coveoJustLoaded) {
+      setIsCoveoLoaded(true);
+    } else {
+      setTimeout(checkCoveo, 200);
+    }
+  };
+
+  useEffect(() => {
+    checkCoveo();
+  }, []);
+
+  useEffect(() => {
+    const elemSearchResultContainer = searchResultsEl.current;
+    if (
+      window.Coveo &&
+      elemSearchResultContainer.getElementsByClassName("coveo-search-results")
+        .length < 1
+    ) {
+      // setTimeout(() => {
+      // document.addEventListener("DOMContentLoaded", () => {
+      (async () => {
+        let searchboxRoot = searchBoxEl.current; // document.getElementById("instant-search");
+        let searchRoot = document.createElement("div");
+        searchRoot.setAttribute("class", "coveo-search-results");
+        searchRoot.setAttribute("style", "display: none;");
+        // const elemSearchResultContainer = searchResultsEl.current;
+
+        if (elemSearchResultContainer) {
+          elemSearchResultContainer.appendChild(searchRoot);
+        }
+
+        searchboxRoot.innerHTML = `
    +
    +
+        `;
+        searchRoot.innerHTML = `
+
+        `;
+        let coveoRoot = document.getElementById("coveo-search");
+
+        const resToken = await fetch(
+          "https://next.harness.io/api/gettoken-all/"
+        );
+        const dataToken = await resToken.json();
+        const orgId = dataToken?.orgId;
+        const apiToken = dataToken?.apiKey;
+        Coveo.SearchEndpoint.configureCloudV2Endpoint(orgId, apiToken);
+
+        const elemDocusaurusRoot = document.getElementById("__docusaurus");
+        const searchMask = document.createElement("div");
+        searchMask.setAttribute("id", "coveo-search-mask");
+        searchMask.setAttribute("style", "display: none;");
+        if (elemDocusaurusRoot) {
+          elemDocusaurusRoot.appendChild(searchMask);
+        }
+        Coveo.$$(coveoRoot).on("doneBuildingQuery", function (e, args) {
+          let q = args.queryBuilder.expression.build();
+          if (q) {
+            searchRoot.style.display = "block";
+            searchMask.style.display = "block";
+            // if (elmContent) {
+            //   elmContent.style.display = "none";
+            // }
+          } else {
+            searchRoot.style.display = "none";
+            searchMask.style.display = "none";
+            // if (elmContent) {
+            //   elmContent.style.display = "block";
+            // }
+          }
+        });
+        Coveo.$$(coveoRoot).on("afterInitialization", function (e, args) {
+          Coveo.state(coveoRoot, "f:@commonsource", ["Developer Hub"]);
+          document
+            .querySelector(".CoveoSearchbox .magic-box-input input")
+            .focus();
+
+          // hacked into Coveo searchbox
+          const elemSearchbox = searchboxRoot.getElementsByTagName("input")[0];
+          if (elemSearchbox) {
+            const handleKeyUp = (key) => {
+              if (key.keyCode === 32) {
+                const elemSearchButton =
+                  searchboxRoot.getElementsByClassName("CoveoSearchButton")[0];
+                if (elemSearchButton) {
+                  // elemSearchbox.blur();
+                  elemSearchButton.click();
+                  // elemSearchbox.focus();
+                } else {
+                  console.warn("elemSearchButton not found!");
+                }
+              }
+            };
+            if (elemSearchbox.addEventListener) {
+              elemSearchbox.addEventListener("keyup", handleKeyUp);
+            } else if (elemSearchbox.attachEvent) {
+              // legacy IE fallback: feature-detect on the element itself, not an undefined `button`
+              elemSearchbox.attachEvent("onkeyup", handleKeyUp);
+            }
+          } else {
+            console.warn("elemSearchbox not found!");
+          }
+
+          const elemSearchMask = document.getElementById("coveo-search-mask");
+          if (elemSearchMask) {
+            const handleCloseSearchResult = () => {
+              const elemClearSearchButton =
+                searchboxRoot.getElementsByClassName("magic-box-clear")[0];
+              if (elemClearSearchButton) {
+                elemClearSearchButton.click();
+              } else {
+                console.warn("elemClearSearchButton not found!");
+              }
+            };
+            if (elemSearchMask.addEventListener) {
+              elemSearchMask.addEventListener("click", handleCloseSearchResult);
+            } else if (elemSearchMask.attachEvent) {
+              // legacy IE fallback: feature-detect on the element itself, not an undefined `button`
+              elemSearchMask.attachEvent("onclick", handleCloseSearchResult);
+            }
+          } else {
+            console.warn("elemSearchMask not found!");
+          }
+        });
+
+        // Coveo.$$(coveoRoot).on("newQuery", function (e, args) {
+        //   console.log("...1.newQuery..", e, args);
+        // });
+        // Coveo.$$(coveoRoot).on("duringQuery", function (e, args) {
+        //   console.log("...2.duringQuery..", e, args);
+        // });
+
+        Coveo.init(coveoRoot, {
+          externalComponents: [searchboxRoot],
+        });
+      })();
+      // }, 900);
+      // }, false);
+    }
+  }, [isCoveoLoaded]);
+  return (
    + + + + {/* */} + + {isCoveoLoaded && ( + + + + )} +
    +
    +
    + ); +}; + +export default CoveoSearch; diff --git a/src/components/NavbarItems/CoveoSearch.scss b/src/components/NavbarItems/CoveoSearch.scss new file mode 100644 index 00000000000..8b8e7c55a3d --- /dev/null +++ b/src/components/NavbarItems/CoveoSearch.scss @@ -0,0 +1,681 @@ +/* --- Coveo style overwritten ---- */ +#searchBoxCoveo { + // width: 400px; + width: calc(100vw - 1080px); +} + +#coveo-search-mask{ + position: fixed; + background-color: var(--black); + opacity: 0.7; + width: 100vw; + height: 100vh; + left: 0; + top: 0; + z-index: 97; +} +#searchResultsCoveo { + #coveo-search { + background-color: var(--white); + // box-shadow: 0px 0px 1px rgba(40, 41, 61, 0.04), 0px 2px 4px rgba(96, 97, 112, 0.16); + min-width: 635px; + // z-index: 98; + } + .coveo-search-results { + // width: 1000px; + // width: calc(100vw - 890px); + // height: calc(100vh - 70px); + // width: calc(100vw - 30px); + height: calc(100vh - 95px); + position: absolute; + + // left: 15px; + top: 80px; + background-color: var(--white); + overflow-y: auto; + box-shadow: 0px 0px 1px rgba(40, 41, 61, 0.14), 0px 2px 4px rgba(96, 97, 112, 0.26); + margin-right: 12px; + border-radius: 5px; + } + .coveo-main-section { + display: none; + } +} + + +.CoveoSearchbox .magic-box { + // border: none; + // border-radius: 4px !important; +} +.CoveoSearchbox .magic-box .magic-box-input>input { + height: 36px; +} +.magic-box .magic-box-input .magic-box-underlay { + height: 36px !important; +} +.magic-box .magic-box-clear { + height: 36px !important; + width: 36px !important; + line-height: 36px !important; +} +.CoveoSearchbox .magic-box .magic-box-input { + background: var(--white); + border-radius: 4px !important; + height: 36px; + +} +#header #search-container input, #main[data-hd-template="barsv3"] #header input { + background-color: #ffffff !important; + border-radius: 25px !important; + color: #22262E !important; + font-size: 16px !important; +} +.magic-box .magic-box-clear { + background: unset; +} 
+.CoveoSearchbox .CoveoSearchButton { + border: none; + display: flex; + justify-content: center; + align-items: center; + height: unset; + width: 36px; +} +.CoveoSearchbox .CoveoSearchButton:hover .coveo-search-button-svg, .CoveoSearchbox .CoveoSearchButton:hover .coveo-magnifier-circle-svg { + color: var(--primary-6); + fill: var(--primary-6); +} +#header #search-container.search-responsive span, #header #search-container span { + margin-right: unset; + margin-top: unset; + float: unset !important; + /* z-index: 1049 !important; */ +} +#header #search-container.search-responsive span.coveo-omnibox-hightlight, #header #search-container span.coveo-omnibox-hightlight { + margin-right: unset; +} +.coveo-search-button-svg { + // width: 24px; + // height: 24px; + // color: #fff; +} + +.navbar{ + z-index: 99; +} + +/*** styles for coveo search UI ***/ +.harness-search-source { + background-color: #fafbfc; + border: 1px solid #d9dae5; + border-radius: 2px; + /* display: block; + */ + /* padding: 2px 1em; + */ + width: fit-content; + text-align: center; + font-weight: 500; + font-size: 12px; + line-height: 16px; + /* identical to box height, or 133% */ + text-align: center; + letter-spacing: 0.2px; + display: flex; + flex-direction: row; + align-items: center; + padding: 4px 8px; +} + .harness-search-source .CoveoFieldValue .coveo-clickable { + color: #000; +} + .harness-search-module .CoveoFieldValue { + /* Module Colors/Feature Flags Management/100 */ + /* Module Colors/Feature Flags Management/200 */ + background-color: #fafbfc; + border: 1px solid #d9dae5; + border-radius: 2px; + font-weight: 500; + font-size: 12px; + line-height: 16px; + /* identical to box height, or 133% */ + text-align: center; + letter-spacing: 0.2px; + /* Module Colors/Feature Flags Management/300 */ + color: var(--gray-800); + display: flex; + flex-direction: row; + align-items: center; + padding: 4px 8px; + width: fit-content; +} + .harness-search-module .CoveoFieldValue .coveo-clickable { + 
color: var(--gray-800); +} + .harness-search-module .CoveoFieldValue.CoveoFieldValueCI { + background: var(--mod-ci-100); + border: 1px solid var(--mod-ci-200); + color: var(--mod-ci-300); + &::before { + content: url('/img/icon_ci_s.svg'); + display: inline-block; + width: 16px; + height: 16px; + margin-right: 4px; + } +} + .harness-search-module .CoveoFieldValue.CoveoFieldValueCI .coveo-clickable { + color: var(--mod-ci-300); +} + .harness-search-module .CoveoFieldValue.CoveoFieldValueCD { + background: var(--mod-cd-100); + border: 1px solid var(--mod-cd-200); + color: var(--mod-cd-300); + &::before { + content: url('/img/icon_cd_s.svg'); + display: inline-block; + width: 16px; + height: 16px; + margin-right: 4px; + } +} + .harness-search-module .CoveoFieldValue.CoveoFieldValueCD .coveo-clickable { + color: var(--mod-cd-300); +} + .harness-search-module .CoveoFieldValue.CoveoFieldValueCC { + background: var(--mod-ccm-100); + border: 1px solid var(--mod-ccm-200); + color: var(--mod-ccm-300); + &::before { + content: url('/img/icon_ccm_s.svg'); + display: inline-block; + width: 16px; + height: 16px; + margin-right: 4px; + } +} + .harness-search-module .CoveoFieldValue.CoveoFieldValueCC .coveo-clickable { + color: var(--mod-ccm-300); +} + .harness-search-module .CoveoFieldValue.CoveoFieldValueFF { + background: var(--mod-ff-100); + border: 1px solid var(--mod-ff-200); + color: var(--mod-ff-300); + &::before { + content: url('/img/icon_ff_s.svg'); + display: inline-block; + width: 16px; + height: 16px; + margin-right: 4px; + } +} + .harness-search-module .CoveoFieldValue.CoveoFieldValueFF .coveo-clickable { + color: var(--mod-ff-300); +} + .harness-search-module .CoveoFieldValue.CoveoFieldValueCE { + background: var(--mod-ce-100); + border: 1px solid var(--mod-ce-200); + color: var(--mod-ce-300); + &::before { + content: url('/img/icon_ce_s.svg'); + display: inline-block; + width: 16px; + height: 16px; + margin-right: 4px; + } +} + .harness-search-module 
.CoveoFieldValue.CoveoFieldValueCE .coveo-clickable { + color: var(--mod-ce-300); +} + +.harness-search-module .CoveoFieldValue.CoveoFieldValueSTO { + background: var(--mod-sto-100); + border: 1px solid var(--mod-sto-200); + color: var(--mod-sto-300); + &::before { + content: url('/img/icon_sto_s.svg'); + display: inline-block; + width: 16px; + height: 16px; + margin-right: 4px; + } +} +.harness-search-module .CoveoFieldValue.CoveoFieldValueSTO .coveo-clickable { + color: var(--mod-sto-300); +} + +.harness-search-module .CoveoFieldValue.CoveoFieldValueSRM { + background: var(--mod-srm-100); + border: 1px solid var(--mod-srm-200); + color: var(--mod-srm-300); + &::before { + content: url('/img/icon_srm_s.svg'); + display: inline-block; + width: 16px; + height: 16px; + margin-right: 4px; + } +} +.harness-search-module .CoveoFieldValue.CoveoFieldValueSRM .coveo-clickable { + color: var(--mod-srm-300); +} + .CoveoResult a.CoveoResultLink, .CoveoResultLink, a.CoveoResultLink { + color: #000; + font-weight: 500; + font-size: 18px; + line-height: 28px; +} + .CoveoResult a.CoveoResultLink:hover, .CoveoResultLink, a.CoveoResultLink:hover { + color: #0278d5; +} + .CoveoExcerpt { + font-weight: normal; + font-size: 14px; + line-height: 20px; + /* or 143% */ + /* Gray Scale / Gray Scale 600 */ + color: #4f5162; +} + .coveo-checkbox-button, input[type='checkbox'].coveo-checkbox + button { + /* min-width: 18px; + */ + border: 1px solid #0278d5; + box-sizing: border-box; + border-radius: 4px; +} + .coveo-dynamic-facet-value .coveo-checkbox-span-label { + font-weight: 300; + font-size: 12px; + line-height: 15px; + /* identical to box height */ + display: flex; + align-items: flex-end; + text-align: center; + /* Gray Scale / Gray Scale 800 */ + color: #22222a; +} + .coveo-dynamic-facet-value .coveo-checkbox-label:hover .coveo-checkbox-span-label, .coveo-dynamic-facet-value.coveo-focused .coveo-checkbox-span-label, .coveo-dynamic-facet-value.coveo-selected .coveo-checkbox-span-label 
{ + opacity: 1; + color: var(--black); + font-weight: 400; +} + .coveo-dynamic-facet-value .coveo-checkbox-span-label-suffix { + font-weight: 300; + font-size: 12px; + line-height: 15px; + /* identical to box height */ + display: flex; + align-items: flex-end; + text-align: center; + /* Gray Scale / Gray Scale 600 */ + color: #4f5162; +} + .coveo-dynamic-facet-header-title { + font-weight: 500; + font-size: 18px; + line-height: 28px; + /* identical to box height, or 156% */ + display: flex; + align-items: flex-end; + color: #000; +} + .coveo-dynamic-facet-header { + border-bottom: 1px solid #f3f3fa; +} + .coveo-checkbox-label { + display: grid; + grid-template-columns: auto auto 1fr; +} + .coveo-dynamic-facet-value .coveo-checkbox-span-label-suffix { + justify-content: flex-end; +} + .CoveoSearchInterface .coveo-facet-column { + padding: 13px 20px 5px; +} + .CoveoSearchInterface .coveo-results-column { + padding: 10px 10px 10px 25px; +} + .CoveoQuerySummary, .CoveoQueryDuration { + color: var(--black); + margin-right: 0.3em; +} + .CoveoSort { + padding: 0 36px 10px; + text-transform: capitalize; + font-weight: 500; + font-size: 14px; + line-height: 24px; + color: #1c1c28; +} + .CoveoSort.coveo-selected, .CoveoSort.coveo-selected:hover { + border-bottom: 2px solid var(--primary-7); +} + .coveo-result-frame .coveo-result-cell, .CoveoResult.coveo-result-frame .coveo-result-cell { + font-weight: 300; + /* font-size: 12px; + */ + /* line-height: 15px; + */ + color: var(--gray-800); +} + @media screen and (max-width: 834px) { + .coveo-result-frame .coveo-result-cell, .CoveoResult.coveo-result-frame .coveo-result-cell { + display: block; + } +} + .coveo-list-layout.CoveoResult { + background: var(--white); + /* Light / Elevation 02 */ + box-shadow: 0px 0px 1px rgba(40, 41, 61, 0.04), 0px 2px 4px rgba(96, 97, 112, 0.16); + border-radius: 8px; + border-bottom: unset; + margin: 8px auto; + padding: 24px; +} + .coveo-pager-list :first-child { + border-left: unset; +} + 
.coveo-pager-list :last-child { + border-right: unset; +} + .coveo-pager-list-item { + border: unset !important; + // border-left: 1px solid rgba(2, 120, 213, 0.21); + // border-right: 1px solid rgba(2, 120, 213, 0.21); + font-size: 12px; + background: var(--white); + // box-shadow: 0px 0px 1px #000, 0px 2px 4px #000; + border-radius: unset; + margin-left: -1px; + margin-right: 0; + padding: 4px 10px; + font-weight: 400; +} + .coveo-pager-list-item a { + color: var(--primary-7); + font-weight: 400; +} + .coveo-pager-list-item.coveo-active, .coveo-pager-list-item:hover { + background-color: var(--primary-7) !important; + color: var(--white) !important; + font-weight: 700; +} + .coveo-pager-list-item.coveo-active a, .coveo-pager-list-item:hover a { + color: var(--white); + font-weight: 700; +} + .coveo-pager-next, .coveo-pager-previous { + border: unset; + border-left: 1px solid rgba(2, 120, 213, 0.21); + border-radius: 0 50% 50% 0; + padding: 4px 10px; + margin: 0; + margin-left: -1px; +} +.coveo-pager-next .coveo-pager-next-icon { + padding-left: 4px; + padding-right: 4px; +} + .coveo-pager-next .coveo-pager-next-icon-svg, .coveo-pager-previous .coveo-pager-next-icon-svg, .coveo-pager-next .coveo-pager-previous-icon-svg, .coveo-pager-previous .coveo-pager-previous-icon-svg { + color: var(--primary-7); + width: 6px; + height: 12px; +} + .coveo-pager-next:hover, .coveo-pager-previous:hover, .coveo-pager-next:active, .coveo-pager-previous:active, .coveo-pager-next a:hover, .coveo-pager-previous a:hover, .coveo-pager-next a:active, .coveo-pager-previous a:active { + color: var(--primary-7); + background-color: var(--primary-7); +} + .coveo-pager-next:hover .coveo-pager-next-icon-svg, .coveo-pager-previous:hover .coveo-pager-next-icon-svg, .coveo-pager-next:active .coveo-pager-next-icon-svg, .coveo-pager-previous:active .coveo-pager-next-icon-svg, .coveo-pager-next a:hover .coveo-pager-next-icon-svg, .coveo-pager-previous a:hover .coveo-pager-next-icon-svg, 
.coveo-pager-next a:active .coveo-pager-next-icon-svg, .coveo-pager-previous a:active .coveo-pager-next-icon-svg, .coveo-pager-next:hover .coveo-pager-previous-icon-svg, .coveo-pager-previous:hover .coveo-pager-previous-icon-svg, .coveo-pager-next:active .coveo-pager-previous-icon-svg, .coveo-pager-previous:active .coveo-pager-previous-icon-svg, .coveo-pager-next a:hover .coveo-pager-previous-icon-svg, .coveo-pager-previous a:hover .coveo-pager-previous-icon-svg, .coveo-pager-next a:active .coveo-pager-previous-icon-svg, .coveo-pager-previous a:active .coveo-pager-previous-icon-svg { + color: var(--white); +} + .coveo-pager-previous { + border-radius: 50% 0 0 50%; + border: unset; + border-right: 1px solid rgba(2, 120, 213, 0.21); +} + .CoveoResultsPerPage { + font-size: 12px; +} + .CoveoResultsPerPage :first-child { + border-left: unset; +} + .CoveoResultsPerPage :last-child { + border-right: unset; +} + .coveo-results-per-page-list-item { + border: unset !important; + // border-left: 1px solid rgba(2, 120, 213, 0.21); + // border-right: 1px solid rgba(2, 120, 213, 0.21); + color: var(--primary-7); + font-size: 12px; + background: var(--white); + padding: 4px 10px; + margin: 0; + margin-left: -1px; + // box-shadow: 0px 0px 1px #000, 0px 2px 4px #000; +} + .coveo-results-per-page-list-item.coveo-active, .coveo-results-per-page-list-item:hover, .coveo-results-per-page-list-item:active { + color: var(--white) !important; + background-color: var(--primary-7) !important; + font-weight: 600; +} + .coveo-results-per-page-text { + margin-right: 0.7em; +} + .coveo-result-frame { + position: relative; +} + .coveo-result-cell-image { + width: 300px; + display: table-cell; +} + .coveo-result-cell-image img { + width: 280px; + height: auto; + max-height: 110px; + position: absolute; + right: 0; + bottom: 0; +} + @media screen and (max-width: 834px) { + .coveo-result-cell-image img { + position: unset; + margin-top: 8px; + } +} + .CoveoLogo { + display: none; +} + 
.coveo-results-header { + box-shadow: unset; +} + .CoveoSort { + border-bottom: 1px solid var(--primary-1); +} + .coveo-featured-result-badge, .coveo-recommended-result-badge { + color: var(--primary-7); + text-transform: capitalize; + padding: 5px 0; + /* workaround for moving recommended badge to beside the @source tag */ + position: absolute; + transform: translate(13em, -5px); + z-index: 2; +} + .CoveoSearchInterface.coveo-small-facets .coveo-facet-dropdown-header { + border-radius: 4px; + text-transform: capitalize; + padding: 3px 10px; + height: unset; + line-height: 14px; +} + .CoveoSearchInterface.coveo-small-facets .coveo-facet-dropdown-header:hover { + background-color: rgba(1, 120, 213, 10); +} + .CoveoBreadcrumb { + border-bottom: unset; +} + .CoveoBreadcrumb .coveo-breadcrumb-items { + padding-bottom: 0; +} + .coveo-dynamic-facet-header-btn, .coveo-dynamic-facet-breadcrumb-collapse, .coveo-dynamic-facet-breadcrumb-value, .coveo-breadcrumb-clear-all { + color: var(--primary-7); +} + .CoveoMissingTerms .coveo-clickable { + color: var(--primary-7); +} + .coveo-result-frame .coveo-result-row, .CoveoResult.coveo-result-frame .coveo-result-row { + position: relative; +} + +// body:not([data-article-id*='undefined']):not([data-category-id='undefined']) h1, body:not([data-article-id*='undefined']):not([data-category-id='undefined']) h2, body:not([data-article-id*='undefined']):not([data-category-id='undefined']) h3, body:not([data-article-id*='undefined']):not([data-category-id='undefined']) h4 { +// margin-top: unset; +// font-size: 16px; +// font-weight: 500; +// } +.magic-box .magic-box-suggestions .magic-box-suggestion span { + color: #666 !important; + // float: left !important; +} + +/* add icons to the module facet */ +.coveo-dynamic-facet-value .coveo-checkbox-label button+.coveo-checkbox-span-label { + &[title="Chaos Engineering"]::before { + content: url('/img/icon_ce_s.svg'); + display: inline-block; + width: 16px; + height: 16px; + margin-right: 
4px; + } + &[title="Cloud Cost Management"]::before { + content: url('/img/icon_ccm_s.svg'); + display: inline-block; + width: 16px; + height: 16px; + margin-right: 4px; + } + &[title="Continuous Delivery"]::before { + content: url('/img/icon_cd_s.svg'); + display: inline-block; + width: 16px; + height: 16px; + margin-right: 4px; + } + &[title="Continuous Integration"]::before { + content: url('/img/icon_ci_s.svg'); + display: inline-block; + width: 16px; + height: 16px; + margin-right: 4px; + } + &[title="Feature Flags"]::before { + content: url('/img/icon_ff_s.svg'); + display: inline-block; + width: 16px; + height: 16px; + margin-right: 4px; + } + &[title="Security Testing Orchestration"]::before { + content: url('/img/icon_sto_s.svg'); + display: inline-block; + width: 16px; + height: 16px; + margin-right: 4px; + } + &[title="Service Reliability Management"]::before { + content: url('/img/icon_srm_s.svg'); + display: inline-block; + width: 16px; + height: 16px; + margin-right: 4px; + } +} + +.CoveoLogo { + display: none !important; +} + +@media screen and (max-width: 1199px) { + #searchBoxCoveo { + + width: calc(100vw - 1025px); + .CoveoSearchButton { + display: none; + } + .magic-box-input > input { + font-size: 14px; + } + } +} + +@media screen and (min-width: 997px) and (max-width: 1060px) { + #searchBoxCoveo { + display: none + } +} + +@media screen and (max-width: 996px) { + #searchBoxCoveo { + width: 300px; + } + #searchResultsCoveo .coveo-search-results { + left: 15px; + width: calc(100vw - 30px); + } + #coveo-search { + min-width: unset; + } +} + +@media screen and (max-width: 834px) { + .coveo-result-frame .coveo-result-row, .CoveoResult.coveo-result-frame .coveo-result-row { + display: block; + } +} + +@media screen and (max-width: 678px) { + #searchBoxCoveo { + width: 300px; + display: none; + } +} + +/* +@media screen and (max-width: 1058px) { + #searchBoxCoveo { + + width: calc(100vw - 880px); + .CoveoSearchButton { + display: none; + } + 
.magic-box-input > input { + font-size: 14px; + } + } +} + +@media screen and (max-width: 996px) { + #searchBoxCoveo { + width: 300px; + } + #searchResultsCoveo .coveo-search-results { + left: 15px; + width: calc(100vw - 30px); + } + #coveo-search { + min-width: unset; + } +} + +@media screen and (max-width: 834px) { + .coveo-result-frame .coveo-result-row, .CoveoResult.coveo-result-frame .coveo-result-row { + display: block; + } +} + +@media screen and (max-width: 678px) { + #searchBoxCoveo { + width: 300px; + display: none; + } +} +*/ diff --git a/src/css/custom.css b/src/css/custom.css index e27964e8363..9e6869eda2d 100644 --- a/src/css/custom.css +++ b/src/css/custom.css @@ -1,4 +1,4 @@ -@import url('https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600&display=swap'); +@import url("https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600&display=swap"); /** * Any CSS included here will be global. The classic template * bundles Infima by default. Infima is a CSS framework designed to @@ -7,226 +7,231 @@ /* You can override the default Infima variables here. */ :root { - /* ============ Harness Colors ============ */ - /* Primary colors - for links, buttons, etc. 
*/ - /* --primary-10: #1a1a1a; */ - --primary-10: #07182B; - --primary-9: #0a3364; /* Darker Primary */ - --primary-8: #004ba4; - --primary-7: #0278d5; /* Main Primary */ - --primary-6: #0092e4; /* rgb(0 146 228 / 0.7) */ - --primary-5: #00ade4; /* Brand Primary */ - --primary-4: #3dc7f6; - --primary-3: #a3e9ff; /* Lighter Primary */ - --primary-2: #cdf4fe; - --primary-1: #effbff; - /* Text & Background - Gray Scale */ - --gray-1000: #000000; - --gray-900: #0b0b0d; - --gray-800: #22222a; - --gray-700: #383946; - --gray-600: #4f5162; - --gray-500: #6b6d85; /* Main */ - --gray-400: #9293ab; - --gray-300: #b0b1c4; - --gray-200: #d9dae5; - --gray-100: #f3f3fa; - --gray-50: #fafbfc; /* Background */ - --gray-0: #ffffff; - /* Alerts & Statues - Green */ - --green-900: #1e5c1f; - --green-800: #1b841d; - --green-700: #299b2c; - --green-600: #42ab45; - --green-500: #4dc952; /* Main */ - --green-400: #86d981; - --green-300: #a2e29b; - --green-200: #bdeab7; - --green-100: #d8f3d4; - --green-50: #e4f7e1; /* Background */ - /* Alerts & Statues - Cornflower Blue */ - --blue-900: #2d376d; - --blue-800: #39478f; - --blue-700: #4c5cb0; - --blue-600: #6374d0; - --blue-500: #798bec; /* Main */ - --blue-400: #8598ff; - --blue-300: #97a7ff; - --blue-200: #b1beff; - --blue-100: #dae0ff; - --blue-50: #f4f6ff; /* Background */ - /* Alerts & Statues - Yellow */ - --yellow-900: #fcb519; - --yellow-800: #fcc026; - --yellow-700: #fcc62d; - --yellow-600: #fdcc35; - --yellow-500: #fdd13b; /* Main */ - --yellow-400: #fedf76; - --yellow-300: #fee89d; - --yellow-200: #fef1c4; - --yellow-100: #fff9e7; - --yellow-50: #fffbee; /* Background */ - /* Alerts & Statues - Orange */ - --orange-900: #ff5310; - --orange-800: #ff661a; - --orange-700: #ff7020; - --orange-600: #ff7b26; - --orange-500: #ff832b; /* Main */ - --orange-400: #ffa86b; - --orange-300: #ffc195; - --orange-200: #ffdabf; - --orange-100: #fff0e6; - --orange-50: #fff5ed; /* Background */ - /* Alerts & Statues - Red */ - --red-900: #b41710; 
- --red-800: #c41f17; - --red-700: #cf2318; - --red-600: #da291d; - --red-500: #e43326; /* Main */ - --red-400: #ee5f54; - --red-300: #ef9790; - --red-200: #f5c0bc; - --red-100: #fbe6e4; - --red-50: #fcedec; /* Background */ - /* Secondary & Charts - Teal */ - --teal-900: #07a0ab; - --teal-800: #05aab6; - --teal-700: #06b7c4; - --teal-600: #03c0cd; - --teal-500: #0bc8d6; /* Main */ - --teal-400: #47d5df; - --teal-300: #73dfe7; - --teal-200: #a9eff2; - --teal-100: #c0fbfe; - --teal-50: #d3fcfe; /* Background */ - /* Secondary & Charts - Lime Green */ - --lime-900: #487a34; - --lime-800: #558b2f; - --lime-700: #689f38; - --lime-600: #76af34; - --lime-500: #7fb800; /* Main */ - --lime-400: #9ccc65; - --lime-300: #aadc72; - --lime-200: #c5e1a5; - --lime-100: #eaf8db; - --lime-50: #f1fae6; /* Background */ - /* Secondary & Charts - Purple */ - --purple-900: #4d0b8f; - --purple-800: #4d278f; - --purple-700: #592baa; - --purple-600: #6938c0; - --purple-500: #6938c0; /* Main */ - --purple-400: #ae82fc; - --purple-300: #c19eff; - --purple-200: #cfb4ff; - --purple-100: #e1d0ff; - --purple-50: #eadeff; /* Background */ - /* Secondary & Charts - Magenta */ - --magenta-900: #ca136c; - --magenta-800: #d91f79; - --magenta-700: #ee2a89; - --magenta-600: #f53693; - --magenta-500: #ff479f; /* Main */ - --magenta-400: #ff8ac1; - --magenta-300: #ffabd3; - --magenta-200: #ffcde4; - --magenta-100: #ffeef7; - --magenta-50: #fff3f9; /* Background */ + /* ============ Harness Colors ============ */ + /* Primary colors - for links, buttons, etc. 
*/ + /* --primary-10: #1a1a1a; */ + --primary-10: #07182b; + --primary-9: #0a3364; /* Darker Primary */ + --primary-8: #004ba4; + --primary-7: #0278d5; /* Main Primary */ + --primary-6: #0092e4; /* rgb(0 146 228 / 0.7) */ + --primary-5: #00ade4; /* Brand Primary */ + --primary-4: #3dc7f6; + --primary-3: #a3e9ff; /* Lighter Primary */ + --primary-2: #cdf4fe; + --primary-1: #effbff; + /* Text & Background - Gray Scale */ + --gray-1000: #000000; + --gray-900: #0b0b0d; + --gray-800: #22222a; + --gray-700: #383946; + --gray-600: #4f5162; + --gray-500: #6b6d85; /* Main */ + --gray-400: #9293ab; + --gray-300: #b0b1c4; + --gray-200: #d9dae5; + --gray-100: #f3f3fa; + --gray-50: #fafbfc; /* Background */ + --gray-0: #ffffff; + /* Alerts & Statuses - Green */ + --green-900: #1e5c1f; + --green-800: #1b841d; + --green-700: #299b2c; + --green-600: #42ab45; + --green-500: #4dc952; /* Main */ + --green-400: #86d981; + --green-300: #a2e29b; + --green-200: #bdeab7; + --green-100: #d8f3d4; + --green-50: #e4f7e1; /* Background */ + /* Alerts & Statuses - Cornflower Blue */ + --blue-900: #2d376d; + --blue-800: #39478f; + --blue-700: #4c5cb0; + --blue-600: #6374d0; + --blue-500: #798bec; /* Main */ + --blue-400: #8598ff; + --blue-300: #97a7ff; + --blue-200: #b1beff; + --blue-100: #dae0ff; + --blue-50: #f4f6ff; /* Background */ + /* Alerts & Statuses - Yellow */ + --yellow-900: #fcb519; + --yellow-800: #fcc026; + --yellow-700: #fcc62d; + --yellow-600: #fdcc35; + --yellow-500: #fdd13b; /* Main */ + --yellow-400: #fedf76; + --yellow-300: #fee89d; + --yellow-200: #fef1c4; + --yellow-100: #fff9e7; + --yellow-50: #fffbee; /* Background */ + /* Alerts & Statuses - Orange */ + --orange-900: #ff5310; + --orange-800: #ff661a; + --orange-700: #ff7020; + --orange-600: #ff7b26; + --orange-500: #ff832b; /* Main */ + --orange-400: #ffa86b; + --orange-300: #ffc195; + --orange-200: #ffdabf; + --orange-100: #fff0e6; + --orange-50: #fff5ed; /* Background */ + /* Alerts & Statuses - Red */ + --red-900: #b41710; 
+ --red-800: #c41f17; + --red-700: #cf2318; + --red-600: #da291d; + --red-500: #e43326; /* Main */ + --red-400: #ee5f54; + --red-300: #ef9790; + --red-200: #f5c0bc; + --red-100: #fbe6e4; + --red-50: #fcedec; /* Background */ + /* Secondary & Charts - Teal */ + --teal-900: #07a0ab; + --teal-800: #05aab6; + --teal-700: #06b7c4; + --teal-600: #03c0cd; + --teal-500: #0bc8d6; /* Main */ + --teal-400: #47d5df; + --teal-300: #73dfe7; + --teal-200: #a9eff2; + --teal-100: #c0fbfe; + --teal-50: #d3fcfe; /* Background */ + /* Secondary & Charts - Lime Green */ + --lime-900: #487a34; + --lime-800: #558b2f; + --lime-700: #689f38; + --lime-600: #76af34; + --lime-500: #7fb800; /* Main */ + --lime-400: #9ccc65; + --lime-300: #aadc72; + --lime-200: #c5e1a5; + --lime-100: #eaf8db; + --lime-50: #f1fae6; /* Background */ + /* Secondary & Charts - Purple */ + --purple-900: #4d0b8f; + --purple-800: #4d278f; + --purple-700: #592baa; + --purple-600: #6938c0; + --purple-500: #6938c0; /* Main */ + --purple-400: #ae82fc; + --purple-300: #c19eff; + --purple-200: #cfb4ff; + --purple-100: #e1d0ff; + --purple-50: #eadeff; /* Background */ + /* Secondary & Charts - Magenta */ + --magenta-900: #ca136c; + --magenta-800: #d91f79; + --magenta-700: #ee2a89; + --magenta-600: #f53693; + --magenta-500: #ff479f; /* Main */ + --magenta-400: #ff8ac1; + --magenta-300: #ffabd3; + --magenta-200: #ffcde4; + --magenta-100: #ffeef7; + --magenta-50: #fff3f9; /* Background */ - /* ================================== * + /* ================================== * * ===== Style Variable Aliases ===== * * ================================== */ - /* ----- Colors ----- */ - --white: var(--gray-0); - --black: var(--gray-1000); - --gray: var(--gray-500); - --gray-bg: var(--gray-50); + /* ----- Colors ----- */ + --white: var(--gray-0); + --black: var(--gray-1000); + --gray: var(--gray-500); + --gray-bg: var(--gray-50); - --primary-darker: var(--primary-9); - --primary-main: var(--primary-7); - --primary: var(--primary-main); - 
--primary-brand: var(--primary-5); - --primary-lighter: var(--primary-3); - --primary-bg: #fafcff; - --green: var(--green-500); - --green-bg: var(--green-50); - --blue: var(--blue-500); - --blue-bg: var(--blue-50); - --yellow: var(--yellow-500); - --yellow-bg: var(--yellow-50); - --yellow: var(--yellow-500); - --yellow-bg: var(--yellow-50); - --red: var(--red-500); - --red-bg: var(--red-50); - --teal: var(--teal-500); - --teal-bg: var(--teal-50); - --lime: var(--lime-500); - --lime-bg: var(--lime-50); - --purple: var(--purple-500); - --purple-bg: var(--purple-50); - --magenta: var(--magenta-500); - --magenta-bg: var(--magenta-50); + --primary-darker: var(--primary-9); + --primary-main: var(--primary-7); + --primary: var(--primary-main); + --primary-brand: var(--primary-5); + --primary-lighter: var(--primary-3); + --primary-bg: #fafcff; + --green: var(--green-500); + --green-bg: var(--green-50); + --blue: var(--blue-500); + --blue-bg: var(--blue-50); + --yellow: var(--yellow-500); + --yellow-bg: var(--yellow-50); + --yellow: var(--yellow-500); + --yellow-bg: var(--yellow-50); + --red: var(--red-500); + --red-bg: var(--red-50); + --teal: var(--teal-500); + --teal-bg: var(--teal-50); + --lime: var(--lime-500); + --lime-bg: var(--lime-50); + --purple: var(--purple-500); + --purple-bg: var(--purple-50); + --magenta: var(--magenta-500); + --magenta-bg: var(--magenta-50); - /* Module Colors/Continuous Delivery/300 */ - --mod-cd-300: #30841f; - /* Module Colors/Continuous Delivery/200 */ - --mod-cd-200: #5fb34e; - /* Module Colors/Continuous Delivery/100 */ - --mod-cd-100: #f6fff2; + /* Module Colors/Continuous Delivery/300 */ + --mod-cd-300: #30841f; + /* Module Colors/Continuous Delivery/200 */ + --mod-cd-200: #5fb34e; + /* Module Colors/Continuous Delivery/100 */ + --mod-cd-100: #f6fff2; - /* Module Colors/Continuous Integration/300 */ - --mod-ci-300: #0672b6; - /* Module Colors/Continuous Integration/200 */ - --mod-ci-200: #2bb1f2; - /* Module Colors/Continuous 
Integration/100 */ - --mod-ci-100: #e2f5ff; + /* Module Colors/Continuous Integration/300 */ + --mod-ci-300: #0672b6; + /* Module Colors/Continuous Integration/200 */ + --mod-ci-200: #2bb1f2; + /* Module Colors/Continuous Integration/100 */ + --mod-ci-100: #e2f5ff; - /* Module Colors/Cloud Cost/300 */ - --mod-ccm-300: #24807f; - /* Module Colors/Cloud Cost/200 */ - --mod-ccm-200: #01c9cc; - /* Module Colors/Cloud Cost/100 */ - --mod-100: #ecffff; + /* Module Colors/Cloud Cost/300 */ + --mod-ccm-300: #24807f; + /* Module Colors/Cloud Cost/200 */ + --mod-ccm-200: #01c9cc; + /* Module Colors/Cloud Cost/100 */ + --mod-100: #ecffff; - /* Module Colors/Feature Flags/300 */ - --mod-ff-300: #c05809; - /* Module Colors/Feature Flags/200 */ - --mod-ff-200: #ee8625; - /* Module Colors/Feature Flags/100 */ - --mod-ff-100: #fcf4e3; + /* Module Colors/Feature Flags/300 */ + --mod-ff-300: #c05809; + /* Module Colors/Feature Flags/200 */ + --mod-ff-200: #ee8625; + /* Module Colors/Feature Flags/100 */ + --mod-ff-100: #fcf4e3; - /* Module Colors/STO/200 */ - --mod-sto-200: #2660ff; + --mod-sto-300: #1947ec; + /* Module Colors/STO/200 */ + --mod-sto-200: #2660ff; + --mod-sto-100: #ebefff; - /* Module Colors/SRM/200 */ - --mod-srm-200: #8b73ce; + --mod-srm-300: #6938c0; + /* Module Colors/SRM/200 */ + --mod-srm-200: #8b73ce; + --mod-srm-100: #f6f1ff; - /* Module Colors/Chaos Engineering/200 */ - --mod-ce-200: #cd0056; + --mod-ce-300: #cd0056; + /* Module Colors/Chaos Engineering/200 */ + --mod-ce-200: #ff006a; + --mod-ce-100: #fff1f7; + /* ----- */ + --ifm-link-decoration: none; - /* ----- */ - --ifm-link-decoration: none; + --ifm-color-primary: var(--primary-7); + --ifm-color-primary-dark: var(--primary-8); + --ifm-color-primary-darker: var(--primary-9); + --ifm-color-primary-darkest: var(--primary-10); + --ifm-color-primary-light: var(--primary-5); + --ifm-color-primary-lighter: var(--primary-4); + --ifm-color-primary-lightest: var(--primary-3); + --ifm-code-font-size: 95%; + 
--docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.1); - --ifm-color-primary: var(--primary-7); - --ifm-color-primary-dark: var(--primary-8); - --ifm-color-primary-darker: var(--primary-9); - --ifm-color-primary-darkest: var(--primary-10); - --ifm-color-primary-light: var(--primary-5); - --ifm-color-primary-lighter: var(--primary-4); - --ifm-color-primary-lightest: var(--primary-3); - --ifm-code-font-size: 95%; - --docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.1); - - --ifm-link-color: var(--ifm-color-primary); + --ifm-link-color: var(--ifm-color-primary); - --ifm-navbar-padding-vertical: calc(var(--ifm-spacing-vertical) * 0.75); - --ifm-navbar-height: 4.25rem; + --ifm-navbar-padding-vertical: calc(var(--ifm-spacing-vertical) * 0.75); + --ifm-navbar-height: 4.25rem; } /* For readability concerns, you should choose a lighter palette in dark mode. */ -[data-theme='dark'] { +[data-theme="dark"] { --ifm-color-primary: var(--primary-5); --ifm-color-primary-dark: var(--primary-6); --ifm-color-primary-darker: var(--primary-7); @@ -240,33 +245,93 @@ } .button--cta { - background-color: var(--primary-7); - font-weight: 600; - font-size: 16px; - line-height: 24px; - color: var(--white); + background-color: var(--primary-7); + font-weight: 600; + font-size: 16px; + line-height: 24px; + color: var(--white); } .button--cta:hover { - background-color: var(--primary-5); + background-color: var(--primary-5); } /* style overwriting temporarily for the beta icon */ .navbar__brand { - margin-right: 0; + margin-right: 0; } .navbar__title { - display: none; + display: none; +} +.navbar__item img[alt="BETA"] { + margin-left: -0.25rem; + margin-top: 10px; + width: 36px; + height: auto; +} +.theme-doc-markdown h3 { + display: flex; +} +.theme-doc-markdown h3 > img { + margin-right: 0.2em; +} + +.color-ci { + color: var(--mod-ci-200); +} +.color-cd { + color: var(--mod-cd-200); +} +.color-ff { + color: var(--mod-ff-200); +} +.color-ccm { + color: var(--mod-ccm-200); +} 
+.color-srm { + color: var(--mod-srm-200); +} +.color-sto { + color: var(--mod-sto-200); +} +.color-ce { + color: var(--mod-ce-200); +} +.table-of-contents ul li span[class*="color-"] { + color: unset; +} +.theme-doc-markdown h3 { + display: flex; } -.navbar__item img[alt='BETA'] { - margin-left: -0.25rem; - margin-top: 10px; - width: 36px; - height: auto; +.theme-doc-markdown h3 > img { + margin-right: 0.2em; +} + +.color-ci { + color: var(--mod-ci-200); +} +.color-cd { + color: var(--mod-cd-200); +} +.color-ff { + color: var(--mod-ff-200); +} +.color-ccm { + color: var(--mod-ccm-200); +} +.color-srm { + color: var(--mod-srm-200); +} +.color-sto { + color: var(--mod-sto-200); +} +.color-ce { + color: var(--mod-ce-200); } /* hide the algolia logo */ -.DocSearch-Logo, a[aria-label="Search by Algolia"] { - display: none; +.DocSearch-Logo, +a[aria-label="Search by Algolia"] { + display: none; } .flexContainer { @@ -279,84 +344,103 @@ /* for 3rd-party libs */ .rc-tooltip-inner { - text-transform: capitalize; + text-transform: capitalize; } .rc-tooltip-hidden { - display: none !important; + display: none !important; } button[title="Copy"] { - opacity: 0.6 !important; + opacity: 0.6 !important; } .tooltip-max-width { - max-width: 200px; - padding: 0.5em; - margin: 0; + max-width: 200px; + padding: 0.5em; + margin: 0; } .searchBar .DocSearch-Button { - border-radius: 4px; - background: var(--white); - border: 1px solid var(--gray-500); + border-radius: 4px; + background: var(--white); + border: 1px solid var(--gray-500); } .searchBar .DocSearch-Button .DocSearch-Search-Icon { - color: var(--gray-600); + color: var(--gray-600); } .searchBar .DocSearch-Button .DocSearch-Button-Key { - background: none; - box-shadow: none; - border: 1px solid var(--gray-400); - width: 22px; - height: 22px; - padding: 0; - display: flex; - justify-content: center; - align-items: center; + background: none; + box-shadow: none; + border: 1px solid var(--gray-400); + width: 22px; + height: 22px; 
+ padding: 0; + display: flex; + justify-content: center; + align-items: center; } body.DocSearch--active #__docusaurus > div[class*="announcementBar"] { - /* hide the announcement bar while the search bar called out */ - display: none; + /* hide the announcement bar while the search bar called out */ + display: none; } +.theme-doc-sidebar-container .sidebar-item-rss > a { + display: flex; + align-items: center; +} +.theme-doc-sidebar-container .sidebar-item-rss > a > svg { + display: none; +} +.theme-doc-sidebar-container .sidebar-item-rss > a::after { + content: url(/img/icon_rss.svg); + margin-left: 0.3em; +} /* customize LHS menu items */ -aside nav.menu > ul > li > div > a, aside nav.menu > ul > li > ul > li > div > a, aside nav.menu > ul > li > a, aside nav.menu > ul > li > ul > li > a { - font-size: 13px; - line-height: 20px; - font-weight: 600; -} -aside nav.menu > ul > li > ul > li ul > li > div > a, aside nav.menu > ul > li > ul > li ul > li > a { - font-weight: 500; - font-size: 12px; - line-height: 18px; -} -.menu__link--sublist-caret:after, .menu__caret:before { - /* width: 0.7rem; +aside nav.menu > ul > li > div > a, +aside nav.menu > ul > li > ul > li > div > a, +aside nav.menu > ul > li > a, +aside nav.menu > ul > li > ul > li > a { + font-size: 13px; + line-height: 20px; + font-weight: 600; +} +aside nav.menu > ul > li > ul > li ul > li > div > a, +aside nav.menu > ul > li > ul > li ul > li > a { + font-weight: 500; + font-size: 12px; + line-height: 18px; +} +.menu__link--sublist-caret:after, +.menu__caret:before { + /* width: 0.7rem; height: 0.7rem; min-width: 0.7rem; min-height: 0.7rem; */ - /* background: var(--ifm-menu-link-sublist-icon) 50% / 1.25rem 1.25rem; */ - background-size: 1.25rem 1.25rem; + /* background: var(--ifm-menu-link-sublist-icon) 50% / 1.25rem 1.25rem; */ + background-size: 1.25rem 1.25rem; } -@media screen and (min-width: 1080px) { -/* style for the search bar in higher resolution */ - .searchBar { - width: calc(100vw - 
925px); - } - .searchBar .DocSearch-Button { - width: 100%; - } +@media screen and (min-width: 1200px) { + /* style for the search bar in higher resolution */ + .searchBar { + width: calc(100vw - 1040px); /* - 925px */ + } + .searchBar .DocSearch-Button { + width: 100%; + } } -@media screen and (max-width: 1079px) { -/* for tablet */ - .navbar__item img[alt='BETA'] { - display: none - } +@media screen and (max-width: 1199px) { + /* for tablet */ + .navbar__item img[alt="BETA"] { + display: none; + } + .searchBar { + display: none; + } } @media screen and (max-width: 393px) { -/* for mobile */ - .navbar__logo { - max-width: calc(100% - 44px); - } -} \ No newline at end of file + /* for mobile */ + .navbar__logo { + max-width: calc(100% - 44px); + } +} diff --git a/src/pages/index.tsx b/src/pages/index.tsx index 98fb669d42c..feb3a353724 100644 --- a/src/pages/index.tsx +++ b/src/pages/index.tsx @@ -1,23 +1,23 @@ -import React from 'react'; -import clsx from 'clsx'; -import Link from '@docusaurus/Link'; -import useDocusaurusContext from '@docusaurus/useDocusaurusContext'; -import Layout from '@theme/Layout'; -import HomepageFeatures from '@site/src/components/HomepageFeatures'; -import WhatsNew from '@site/src/components/WhatsNew'; -import LearnAboutPlatform from '@site/src/components/LearnAboutPlatform'; -import HarnessU from '@site/src/components/HarnessU'; -import Feedback from '@site/src/components/Feedback'; -import MDXContent from '@theme/MDXContent'; +import React from "react"; +import clsx from "clsx"; +import Link from "@docusaurus/Link"; +import useDocusaurusContext from "@docusaurus/useDocusaurusContext"; +import Layout from "@theme/Layout"; +import HomepageFeatures from "@site/src/components/HomepageFeatures"; +import WhatsNew from "@site/src/components/WhatsNew"; +import LearnAboutPlatform from "@site/src/components/LearnAboutPlatform"; +import HarnessU from "@site/src/components/HarnessU"; +import Feedback from "@site/src/components/Feedback"; +import 
MDXContent from "@theme/MDXContent"; // import Lottie from "lottie-react"; // import allModuleAnimation from "./all_module_animation.json"; -import styles from './index.module.scss'; +import styles from "./index.module.scss"; function HomepageHeader() { - const {siteConfig} = useDocusaurusContext(); + const { siteConfig } = useDocusaurusContext(); return ( -
    +

    {siteConfig.title}

    {siteConfig.tagline}

    @@ -33,7 +33,7 @@ function HomepageHeader() {
    {/* */}
    @@ -41,29 +41,29 @@ function HomepageHeader() { } export default function Home(): JSX.Element { - const {siteConfig} = useDocusaurusContext(); + const { siteConfig } = useDocusaurusContext(); return ( - -
    - -
    -
    -
    - - {/* */} + +
    + +
    +
    +
    + + {/* */} +
    +
    - -
    - -
    + + - -
    -
    + + +
    ); } diff --git a/src/pages/legal/terms-of-use.module.scss b/src/pages/legal/terms-of-use.module.scss new file mode 100644 index 00000000000..50740fad97a --- /dev/null +++ b/src/pages/legal/terms-of-use.module.scss @@ -0,0 +1,4 @@ +.container { + padding-top: 3em; + padding-bottom: 3em; +} \ No newline at end of file diff --git a/src/pages/legal/terms-of-use.tsx b/src/pages/legal/terms-of-use.tsx new file mode 100644 index 00000000000..112565f23d5 --- /dev/null +++ b/src/pages/legal/terms-of-use.tsx @@ -0,0 +1,24 @@ +import React from "react"; +import MDXContent from "@theme/MDXContent"; +import Layout from "@theme/Layout"; +import useDocusaurusContext from "@docusaurus/useDocusaurusContext"; +import TermsOfUse from "../../../docs/legal/terms-of-use.md"; +import styles from "./terms-of-use.module.scss"; + +export default function LandingPage() { + const { siteConfig } = useDocusaurusContext(); + return ( +
    + + +
    + +
    +
    +
    +
    + ); +} diff --git a/src/theme/NavbarItem/ComponentTypes.js b/src/theme/NavbarItem/ComponentTypes.js new file mode 100644 index 00000000000..1775aa040c0 --- /dev/null +++ b/src/theme/NavbarItem/ComponentTypes.js @@ -0,0 +1,8 @@ +import ComponentTypes from "@theme-original/NavbarItem/ComponentTypes"; +import CoveoSearch from "@site/src/components/NavbarItems/CoveoSearch"; + +export default { + ...ComponentTypes, + // add CoveoSearch as a navbar item + "custom-coveo-search": CoveoSearch, +}; diff --git a/static/img/icon_ccm_m.svg b/static/img/icon_ccm_m.svg new file mode 100644 index 00000000000..b22fa4f0cf9 --- /dev/null +++ b/static/img/icon_ccm_m.svg @@ -0,0 +1,12 @@ + + + + + + + + + \ No newline at end of file diff --git a/static/img/icon_ccm_s.svg b/static/img/icon_ccm_s.svg new file mode 100644 index 00000000000..6028ee4fb50 --- /dev/null +++ b/static/img/icon_ccm_s.svg @@ -0,0 +1,12 @@ + + + + + + + + + \ No newline at end of file diff --git a/static/img/icon_cd_m.svg b/static/img/icon_cd_m.svg new file mode 100644 index 00000000000..2584ecd98f0 --- /dev/null +++ b/static/img/icon_cd_m.svg @@ -0,0 +1,12 @@ + + + + + + + + + \ No newline at end of file diff --git a/static/img/icon_cd_s.svg b/static/img/icon_cd_s.svg new file mode 100644 index 00000000000..8a036d4219f --- /dev/null +++ b/static/img/icon_cd_s.svg @@ -0,0 +1,12 @@ + + + + + + + + + \ No newline at end of file diff --git a/static/img/icon_ce_m.svg b/static/img/icon_ce_m.svg new file mode 100644 index 00000000000..a816e5c6e9f --- /dev/null +++ b/static/img/icon_ce_m.svg @@ -0,0 +1,12 @@ + + + + + + + + + \ No newline at end of file diff --git a/static/img/icon_ce_s.svg b/static/img/icon_ce_s.svg new file mode 100644 index 00000000000..f3e94bf3446 --- /dev/null +++ b/static/img/icon_ce_s.svg @@ -0,0 +1,12 @@ + + + + + + + + + \ No newline at end of file diff --git a/static/img/icon_ci_m.svg b/static/img/icon_ci_m.svg new file mode 100644 index 00000000000..c32aad7cf56 --- /dev/null +++ 
b/static/img/icon_ci_m.svg @@ -0,0 +1,12 @@ + + + + + + + + + \ No newline at end of file diff --git a/static/img/icon_ci_s.svg b/static/img/icon_ci_s.svg new file mode 100644 index 00000000000..743bfa0dad5 --- /dev/null +++ b/static/img/icon_ci_s.svg @@ -0,0 +1,12 @@ + + + + + + + + + \ No newline at end of file diff --git a/static/img/icon_ff_m.svg b/static/img/icon_ff_m.svg new file mode 100644 index 00000000000..1c956ab454a --- /dev/null +++ b/static/img/icon_ff_m.svg @@ -0,0 +1,20 @@ + + + + + + + + + + + + + + \ No newline at end of file diff --git a/static/img/icon_ff_s.svg b/static/img/icon_ff_s.svg new file mode 100644 index 00000000000..b3dcb5c11b3 --- /dev/null +++ b/static/img/icon_ff_s.svg @@ -0,0 +1,20 @@ + + + + + + + + + + + + + + \ No newline at end of file diff --git a/static/img/icon_harness.svg b/static/img/icon_harness.svg new file mode 100644 index 00000000000..9f850e381b5 --- /dev/null +++ b/static/img/icon_harness.svg @@ -0,0 +1,5 @@ + + + \ No newline at end of file diff --git a/static/img/icon_harness_m.svg b/static/img/icon_harness_m.svg new file mode 100644 index 00000000000..5cb15b6b870 --- /dev/null +++ b/static/img/icon_harness_m.svg @@ -0,0 +1,5 @@ + + + \ No newline at end of file diff --git a/static/img/icon_rss.svg b/static/img/icon_rss.svg new file mode 100644 index 00000000000..f4c15b93d51 --- /dev/null +++ b/static/img/icon_rss.svg @@ -0,0 +1,20 @@ + + + + + + + + + + + + + + \ No newline at end of file diff --git a/static/img/icon_srm_m.svg b/static/img/icon_srm_m.svg new file mode 100644 index 00000000000..adbb3b478a7 --- /dev/null +++ b/static/img/icon_srm_m.svg @@ -0,0 +1,20 @@ + + + + + + + + + + + + + + \ No newline at end of file diff --git a/static/img/icon_srm_s.svg b/static/img/icon_srm_s.svg new file mode 100644 index 00000000000..0498c4aa8a3 --- /dev/null +++ b/static/img/icon_srm_s.svg @@ -0,0 +1,20 @@ + + + + + + + + + + + + + + \ No newline at end of file diff --git a/static/img/icon_sto_m.svg 
b/static/img/icon_sto_m.svg new file mode 100644 index 00000000000..1538b60fe46 --- /dev/null +++ b/static/img/icon_sto_m.svg @@ -0,0 +1,12 @@ + + + + + + + + + \ No newline at end of file diff --git a/static/img/icon_sto_s.svg b/static/img/icon_sto_s.svg new file mode 100644 index 00000000000..293be0beceb --- /dev/null +++ b/static/img/icon_sto_s.svg @@ -0,0 +1,12 @@ + + + + + + + + + \ No newline at end of file diff --git a/tutorials/build-code/ci-java-http-server.md b/tutorials/build-code/ci-java-http-server.md new file mode 100644 index 00000000000..d60ee72c7ac --- /dev/null +++ b/tutorials/build-code/ci-java-http-server.md @@ -0,0 +1,262 @@ +--- +sidebar_position: 3 +description: This build automation guide walks you through building and testing a Java HTTP server application in a CI Pipeline +keywords: [Hosted Build, Continuous Integration, Hosted, CI Tutorial] +--- + +# Build, test, and publish a Docker Image for a Java HTTP server application + +In this tutorial, you will build, test, and publish a Docker image for a Java HTTP server application, and then run a connectivity test using the published image. + +:::tip + +For a comprehensive guide on application testing, Harness provides O'Reilly's **Full Stack Testing** book for free at https://harness.io/resources/oreilly-full-stack-testing. + +::: + +## Create your pipeline + +```mdx-code-block +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; +``` + +1. Fork the repository https://github.com/keen-software/jhttp into your GitHub account. +2. Follow the **Get Started** wizard in Harness CI. + ```mdx-code-block + + + If you are signing in to Harness for the first time, select the Continuous Integration module after your initial sign in. This brings you to the Get Started wizard. + + + If you have an existing Harness account, either create a new project or select an existing project, and then select the Continuous Integration module. 
In the Project pane, expand the Project Setup menu, and then select Get Started. + + + ``` +3. When you are prompted to select a repository, search for **jhttp**, select the repository that you forked in the earlier step, and then select **Configure Pipeline**. +4. Select **Starter Pipeline**, and then select **Create Pipeline**. + You should now see the **Execution** tab for your pipeline. + +### Add a run tests step + +This step runs the application's unit tests. + +1. Select **Add Step**. The **Step Library** dialog appears. Select **Run Tests** from the list. +2. From the **Language** menu, select **Java**. +3. From the **Build Tool** menu, select **Maven**. +4. In the **Build Arguments** field, enter `test`. +5. In the **Packages** field, enter `io.harness.`. +6. Under **Test Report Paths**, select **+ Add**, and then enter `**/*.xml` in the provided field. + + :::info + + `**/*.xml` finds all JUnit XML formatted files that the tests generate. + + ::: + +7. In the **Post-Command** field, enter `mvn package -DskipTests`. +8. Expand the **Additional Configuration** section, select the **Container Registry** field, and then either select an existing [Docker Hub](https://hub.docker.com/) connector or create one. +9. In the **Image** field, enter `maven:3.5.2-jdk-8-alpine`. +10. Ensure that **Run only selected tests** is selected. +11. In the **Timeout** field, enter `30m`. +12. Select **Apply Changes**. + +### Add a Docker build and publish step + +This step packages the application as a Docker image and publishes the image to Docker Hub. + +1. Select **Add Step**. In the **Step Library** dialog, select **Build and Push an image to Docker Registry** from the list. +2. Select the **Docker Connector** field, and then either select an existing [Docker Hub](https://hub.docker.com/) connector or create one. + + :::tip + + Since your pipeline will publish the image to your Docker Hub account, your connector needs an access token with **Read, Write, Delete** permissions.
+ + ::: + +3. In the **Docker Repository** field, enter `your_user/jhttp`. Replace `your_user` with your Docker Hub username. +4. Under **Tags**, select **+ Add**. A new field appears. + +5. Select the icon on the right side of the field, and then select **Expression**. +6. Start to type `<+pipeline.` in the field to see suggestions for all available expressions. Select `sequenceId`. The field should now contain `<+pipeline.sequenceId>`. + + :::info + + `<+pipeline.sequenceId>` is a value provided at pipeline execution time. This value matches the pipeline sequence number, which is unique to each pipeline execution. + + This ensures that the resulting Docker image always has a unique tag. + + ::: + +7. Select **Apply Changes**. + +### Add an integration tests stage + +This separate pipeline stage pulls the Docker image that was published in the previous step, runs it as a [Background step](/docs/continuous-integration/ci-technical-reference/background-step-settings), and verifies that the container started successfully. + +1. In the **Pipeline Studio**, select **Add Stage**, and then select **Build**. + +2. In the **Stage Name** field, enter `Run Connectivity Test`, and then select **Set Up Stage**. + +:::tip performance tip + +The steps in this stage do not require code from the Git repository. To save time in each pipeline execution, disable **Clone Codebase**. + +::: + +3. On the **Infrastructure** tab, select **Propagate from an existing stage**, select the previous stage from the drop-down menu, and then select **Continue**. + +### Add a background step + +This background step runs the Docker image that was published in the previous stage. + +1. Select **Add Step**. In the **Step Library** dialog, select **Background** from the list. +2. In the **Name** field, enter `Run Java HTTP Server`. +3. Expand the **Additional Configuration** section, select the **Container Registry** field, and then select your Docker Hub connector. +4. 
In the **Image** field, enter `your_user/jhttp:<+pipeline.sequenceId>`. Replace `your_user` with your Docker Hub username. +5. Under **Port Bindings**, select **+ Add**, and then enter `8888` in the **Host Port** and **Container Port** fields. + + :::info + + This exposes port `8888` from the running container to the host operating system. This allows the connection test step to reach the application at `localhost:8888`. + + ::: + +6. Select **Apply Changes**. + +### Add a connection test step + +This step runs a connection test from the host operating system to verify that the application started successfully. + +1. Select **Add Step**. In the **Step Library** dialog, select **Run** from the list. +2. In the **Name** field, enter `Test Connection to Java HTTP Server`. +3. In the **Command** field, enter the following command: + + ``` + until curl --max-time 1 http://localhost:8888; do + sleep 2; + done + ``` + + :::info + + This simple connectivity test attempts to reach the service every two seconds until it is successful. + + ::: + +4. Select **Apply Changes**, and then select **Save**. + + :::tip + + With the application up and running, many types of tests are possible: integration, security, performance, and more. + + ::: + +### Optional YAML configuration + +If you switch from **Visual** to **YAML** in the Pipeline Studio, your pipeline should look similar to this: + +
    Click to expand +

    + +```yaml +pipeline: + name: Build jhttp + identifier: Build_jhttp_XYZ123 + projectIdentifier: Default_Project_XYZ123 + orgIdentifier: default + stages: + - stage: + name: Build + identifier: Build + type: CI + spec: + cloneCodebase: true + execution: + steps: + - step: + type: RunTests + name: Run Tests + identifier: RunTests + spec: + connectorRef: docker_hub + image: maven:3.5.2-jdk-8-alpine + language: Java + buildTool: Maven + args: test + packages: io.harness. + runOnlySelectedTests: true + postCommand: mvn package -DskipTests + reports: + type: JUnit + spec: + paths: + - "**/*.xml" + timeout: 30m + - step: + type: BuildAndPushDockerRegistry + name: Build and Push an image to Docker Registry + identifier: BuildandPushanimagetoDockerRegistry + spec: + connectorRef: docker_hub + repo: your_user/jhttp + tags: + - <+pipeline.sequenceId> + platform: + os: Linux + arch: Amd64 + runtime: + type: Cloud + spec: {} + - stage: + name: Run Connectivity Test + identifier: Run_Connectivity_Test + description: "" + type: CI + spec: + cloneCodebase: false + platform: + os: Linux + arch: Amd64 + runtime: + type: Cloud + spec: {} + execution: + steps: + - step: + type: Background + name: Run Java HTTP Server + identifier: Run_Java_HTTP_Server + spec: + connectorRef: docker_hub + image: your_user/jhttp:<+pipeline.sequenceId> + shell: Sh + portBindings: + "8888": "8888" + - step: + type: Run + name: Test Connection to Java HTTP Server + identifier: Test_Connection_to_Java_HTTP_Server + spec: + shell: Sh + command: |- + until curl --max-time 1 http://localhost:8888; do + sleep 2; + done + properties: + ci: + codebase: + connectorRef: account.Github_OAuth_XYZ123 + repoName: your_user/jhttp + build: <+input> +``` + +

    +
    + +## Run your pipeline + +1. In the **Pipeline Studio**, select **Run**. +2. Select **Git Branch** as the **Build Type**, and then enter `main` in the **Branch Name** field. +3. Select **Run Pipeline**. +4. Observe each step of the pipeline execution. When the first stage completes, test results appear in the **Tests** tab. When the second stage completes, you should see the successful `curl` command in the final step. diff --git a/tutorials/build-code/ci-localstack-service-dependency.md b/tutorials/build-code/ci-localstack-background-step.md similarity index 50% rename from tutorials/build-code/ci-localstack-service-dependency.md rename to tutorials/build-code/ci-localstack-background-step.md index eba3ddf0458..85137bc9828 100644 --- a/tutorials/build-code/ci-localstack-service-dependency.md +++ b/tutorials/build-code/ci-localstack-background-step.md @@ -1,12 +1,12 @@ --- sidebar_position: 2 -description: This build automation guide walks you through running LocalStack as a Service Dependency in a CI Pipeline +description: This build automation guide walks you through running LocalStack as a Background step in a CI Pipeline keywords: [Hosted Build, Continuous Integration, Hosted, CI Tutorial] --- -# Run LocalStack as a Service Dependency +# Run LocalStack as a Background Step -A Service Dependency is a detached service that is accessible to all Steps in a Stage. This tutorial shows how to run [LocalStack](https://localstack.cloud/) as a Service Dependency in a Harness CI pipeline. LocalStack is software that emulates cloud services (such as AWS) when developing locally, or for testing in continuous integration pipelines. +[Background steps](/docs/continuous-integration/ci-technical-reference/background-step-settings) are useful for running services that need to run for the entire lifetime of a Build stage. This tutorial shows how to run [LocalStack](https://localstack.cloud/) as a Background step in a Harness CI pipeline. 
LocalStack is software that emulates cloud services (such as AWS) when developing locally, or for testing in continuous integration pipelines. ## Create Your Pipeline @@ -14,57 +14,74 @@ A Service Dependency is a detached service that is accessible to all Steps in a 2. Click **Add Stage** and select **Build**. Give your stage a name, optionally configure the repository to be cloned, then click **Set Up Stage**. 3. Select "Cloud" in the **Infrastructure** tab. -### Add Service Dependency +### Add Background Step -This will run the LocalStack Docker image as a Service Dependency in your pipeline. +This will run the LocalStack Docker image as a Background step in your pipeline. -1. In the **Execution** tab of your pipeline stage, click **Add Service Dependency**, the **Configure Service Dependency** dialogue window should appear. -2. Enter "localstack" in the **Dependency Name** field. -3. Click the **Container Registry** field and either select an existing [Docker Hub](https://hub.docker.com/) connector, or create one. +1. In the **Execution** tab of your pipeline stage, click **Add Step**. The **Step Library** dialogue window should appear, select the **Background** step. +2. Enter "localstack" in the **Name** field. +3. Expand the **Additional Configuration** section, click the **Container Registry** field and either select an existing [Docker Hub](https://hub.docker.com/) connector, or create one. 4. Enter the desired LocalStack Docker image in the **Image** field (for example, `localstack/localstack:1.2.0`). -5. If you have a LocalStack API key, expand the **Optional Configuration** section and add an environment variable named `LOCALSTACK_API_KEY`. +5. If you have a LocalStack API key, add a `LOCALSTACK_API_KEY` environment variable in the **Environment Variables** section. 6. Click **Apply Changes**. -:::note - -Notice that the `curl` command is able to reach the LocalStack service at `localstack:4566`. 
This is because both the service and step share the same Docker network. - -::: - ### Add Step This will add a step to ensure the LocalStack service is healthy. The step will run the `curl` command to poll the LocalStack service's `/health` endpoint until it returns successfully. This ensures that LocalStack is ready to receive traffic before the pipeline continues. 1. In the **Execution** tab of your pipeline stage, click **Add Step** then select **Run**. 2. Enter "localstack health" in the **Name** field. -3. Click the **Container Registry** field and select your Docker Hub connector. -4. Enter `curlimages/curl:7.83.1` in the **Images** field. -5. Enter the following command in the **Command** field. +3. Enter the following command in the **Command** field. ``` until curl --fail --silent --max-time 1 http://localstack:4566/health; do sleep 2; done ``` +4. Expand the **Optional Configuration** section, click the **Container Registry** field and select your Docker Hub connector. +5. Enter `curlimages/curl:7.83.1` in the **Image** field. 6. Click **Apply Changes**. +:::note + +Notice that the `curl` command is able to reach the LocalStack service at `localstack:4566`. This is because both the service and step share the same Docker network.
+ +::: + ## Optional YAML Configuration If you switch from **Visual** to **YAML** in the Pipeline Studio, your pipeline should look similar to this: + ```yaml +pipeline: + name: my_pipeline + identifier: my_pipeline + projectIdentifier: My_Project + orgIdentifier: default + tags: {} stages: - stage: - name: my_stage + name: Stage 1 + identifier: Stage_1 + description: "" type: CI spec: - serviceDependencies: - - identifier: localstack - name: localstack - type: Service - spec: - connectorRef: my_connector - image: localstack/localstack:1.2.0 - envVariables: - LOCALSTACK_API_KEY: <+secrets.getValue("localstack-api-key")> + cloneCodebase: false + platform: + os: Linux + arch: Amd64 + runtime: + type: Cloud + spec: {} execution: steps: + - step: + type: Background + name: localstack + identifier: localstack + spec: + connectorRef: docker_hub + image: localstack/localstack:1.2.0 + shell: Sh + envVariables: + LOCALSTACK_API_KEY: <+secrets.getValue("localstack-api-key")> - step: type: Run name: localstack health @@ -81,9 +98,9 @@ If you switch from **Visual** to **YAML** in the Pipeline Studio, your pipeline 1. Click the **Save** button, then click **Run**. 2. Click **Run Pipeline** in the **Run Pipeline** dialogue window. 3. Observe your pipeline execution. -4. The `localstack` Service Dependency logs should show output similar to this: +4. The `localstack` step logs should show a line similar to this: ``` - INFO --- [ Thread-110] hypercorn.error : Running on https://0.0.0.0:4566 (CTRL + C to quit) + Running on https://0.0.0.0:4566 (CTRL + C to quit) Ready. ``` -5. The `localstack health` step should complete successfully when the LocalStack service is healthy. \ No newline at end of file +5. The `localstack health` step should complete successfully when the LocalStack service is healthy. 
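Note that the tutorial's `until curl ...; do sleep 2; done` loop waits forever if LocalStack never becomes healthy, and the step only stops when its timeout expires. A small variation adds an attempt limit so the step fails fast instead. This is an illustrative sketch, not part of the tutorial: the function name, attempt count, and delay are assumptions you can adjust for your stage.

```shell
#!/bin/sh
# Poll a health endpoint until it responds, but give up after a fixed number
# of attempts so a broken service fails the step instead of hanging it.
# Usage: wait_for_url <url> <max_attempts> <delay_seconds>
wait_for_url() {
  url="$1"; max_attempts="$2"; delay="$3"
  attempt=1
  until curl --fail --silent --max-time 1 "$url"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "service at $url is still unhealthy after $max_attempts attempts" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep "$delay"
  done
}

# For the tutorial's Background step: 30 attempts, 2 seconds apart (~60s budget).
# wait_for_url http://localstack:4566/health 30 2
```

Because the function returns a non-zero exit code on failure, the Run step itself fails, which surfaces the problem in the pipeline instead of at a step timeout.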
diff --git a/tutorials/manage-service-reliability/intro-java-exception-management.md b/tutorials/manage-service-reliability/intro-java-exception-management.md index 9ab416d2454..df6e756f2fd 100644 --- a/tutorials/manage-service-reliability/intro-java-exception-management.md +++ b/tutorials/manage-service-reliability/intro-java-exception-management.md @@ -4,87 +4,116 @@ description: Introducing Error Tracking - how to identify all exceptions and fix keywords: [java exceptions, uncaught, swallowed exception, java errors, error tracking] --- -# Finding and Fixing Java Exceptions -Every Java program has them, this article describes the various types of exceptions you need to look out for and how to make sure that Java exceptions don't cause production issues. +# Finding and fixing Java exceptions -## Background Information -What are exceptions? The easiest way to answer that question is to quote the [Oracle Java Documentation](https://docs.oracle.com/javase/tutorial/essential/exceptions/definition.html) - "An exception is an event, which occurs during the execution of a program, that disrupts the normal flow of the program's instructions."" +This tutorial discusses different types of Java exceptions that you come across and how to fix them. -### 3 Types of Exceptions -_**Uncaught exceptions**_ are the exceptions that are not caught by the compiler but automatically caught and handled by the Java built-in exception handler. +## What are exceptions? -_**Caught exceptions**_ are handled in code via the `try` and `catch` keywords. +According to [Oracle Java Documentation](https://docs.oracle.com/javase/tutorial/essential/exceptions/definition.html), *"An exception is an event, which occurs during the execution of a program, that disrupts the normal flow of the program's instructions."* -```java -try { - // Block of code to try -} -catch(Exception e) { - // Block of code to handle errors -} -``` -_**Swallowed exceptions**_ are caught in empty `catch` blocks. 
They don't show up in logs. These can be particularly nasty to identify and troubleshoot and are often the worst offenders of causing production incidents. +## Types of exceptions +Following are the types of Java exceptions: -## A Repeatable Process for Dealing With Exceptions -You can't get rid of them all (nor should you) but you need to identify the problematic exceptions and prevent them from making into production where they can negatively impact users. This simple 3 step process can help you minimize the chance of unleashing bad exceptions on the end users. Follow this 3-step IRP (Identify, Resolve, Prevent) rule. +- **Uncaught exceptions** - Exceptions that the compiler does not catch. Instead, the built-in exception handler in Java automatically catches and handles them. -### Identify Critical Issues -Proactively identify runtime exceptions and slowdowns in every release – including issues that otherwise would be missed (see "Swallowed Exceptions" above). It's important to know when new releases introduce new, increasing or resurfaced exceptions. +- **Caught exceptions** - Exceptions that are handled in code via the `try` and `catch` keywords. -### Resolve Using Complete Context -Reproduce any exception or slowdown with the complete source code, variables, DEBUG logs and environment state. This is difficult to do without using tooling that was created for this specific purpose (like Harness). When you have complete context for every exception you can fix them, even if they were never logged. -### Prevent Issues with Reliability Gates -Static analysis and testing can never uncover 100% of issues. Runtime code analysis identifies unknown issues, and prevents them from being deployed. You should automatically block unreliable software versions from being released by integrating Java exception tracking tooling with CI/CD pipelines. 
+ ``` + try { + // Block of code to try + } + catch(Exception e) { + // Block of code to handle errors + } + ``` -### Integrated with CI/CD Pipelines or Standalone -Java exception tracking tooling can be used as a standalone capability or better yet, included as part of your CI/CD pipelines. Whenever code is exercised, you want to identify every exception and collected supporting details needed to fix them. This process can start as early as unit testing during CI and can continue through every other step of the software delivery lifecycle including in production. -Standalone exception tracking solutions can make it quicker to initially set up and get started tracking down and fixing exceptions. CI/CD integrated exception tracking solutions can take a little more time for initial setup but have greater potential for becoming a standard part of the software delivery process and adding more value to the organization overall. +- **Swallowed exceptions** - Exceptions that are caught in empty `catch` blocks. They don't show up in logs. They are difficult to identify and troubleshoot, resulting in production incidents. -## 2 Minute Demo Using Harness to IRP Java Exceptions - - +## Dealing with exceptions – a repeatable process + +Even though it is difficult to remove all the exceptions, it is important to identify the problematic exceptions and prevent them from escaping into production as they can negatively impact the users. You can use a three-step process to identify, resolve, and prevent bad exceptions. + + +### Identify critical issues + +Proactively identify runtime exceptions and slowdowns in every release including issues that swallowed exceptions. It's important to know when the new releases introduce new, increasing, or resurfaced exceptions. + +### Resolve using complete context + +Reproduce any exception or slowdown with the complete source code, variables, DEBUG logs, and environment state. 
You need purpose-built tools such as Harness Java Exception Tracker to do this. After you get the context, fix all the exceptions, even if they are not logged. + +### Prevent exceptions using a Java Exception Tracker + +You need to identify the exceptions every time the code is checked in and collect the logs required to fix the exceptions. However, static analysis and testing do not detect all the issues. You can use a Java Exception Tracker to perform runtime code analysis that identifies unknown issues and prevents them from being deployed. + +Set up reliability gates by integrating a Java Exception Tracker with your CI/CD pipelines. This automatically blocks deployment of unreliable software versions. Implement this throughout the software delivery life cycle, starting from unit testing to production. + +You can use the Java Exception Tracker as a standalone application or integrate it with your CI/CD pipelines. It is easy to set up and get started with standalone Java Exception Trackers. CI/CD-integrated Java exception tools are complex and take more time to install, but they have the potential to become an integral part of your software delivery process. + + +## Key features of a Java Exception Tracker + +It is critical to have the right Java Exception Tracker so that the exceptions don’t skip through your software delivery life cycle. Your Java Exception Tracker should have the following features. + +### Code fingerprinting -## Key Features of Java Exception Trackers -What are the important features that a Java exception tracking solution should have? Here's a short list. +The Java Exception Tracker should have the ability to analyze the code loaded into your Java Virtual Machine (JVM) during runtime and assign a unique fingerprint to each line in the code. At runtime, the tracker should correlate each error, drill down to its unique signature, identify anomalies, and capture the complete state. 
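To make the fingerprinting idea concrete, here is a minimal, hypothetical sketch: hash the code location of a stack frame (class, method, file, line) into a stable ID so that repeated errors from the same line can be grouped even when their message text varies. Real trackers work at the bytecode level and handle code changes across releases; the class and method names below are illustrative only, not Harness APIs.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative sketch: derive a stable fingerprint for a code location.
// Two exceptions thrown from the same line share a fingerprint, so a
// tracker can group, count, and prioritize them.
public class CodeFingerprint {

    public static String fingerprint(StackTraceElement frame) {
        String location = frame.getClassName() + "#" + frame.getMethodName()
                + ":" + frame.getFileName() + ":" + frame.getLineNumber();
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(location.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.substring(0, 16); // short, stable identifier
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError("SHA-256 is always available", e);
        }
    }

    public static void main(String[] args) {
        try {
            throw new IllegalStateException("boom");
        } catch (IllegalStateException e) {
            // The throw site above always maps to the same fingerprint.
            System.out.println(fingerprint(e.getStackTrace()[0]));
        }
    }
}
```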
-### Code Fingerprinting -When the software analyzes code loaded into your JVM at runtime and assigns a unique fingerprint to each line of code. At runtime, exception tracking software should correlate each error and slowdown to its unique signature to identify anomalies and capture their complete state. -### Source Code & Variables -By analyzing and indexing the structure of the code, it's possible to collect the most relevant variable state information from running JVM whenever critical events take place. Data should be captured across the entire call stack up to 10 levels into heap. +### Source code and variables -### DEBUG/TRACE Logs -When implemented for speed, highly efficient cyclical in-memory buffers of log statements show you the DEBUG/TRACE state of your code at the moment of critical events, without increasing log verbosity or parsing log files from disk. +The tracker should have the ability to analyze and index the code structure. This helps you collect data on the most relevant variable state from the running JVM when an event occurs. The tracker should capture data across the call stack up to 10 levels into heap. -### Error Detection & Prioritization -Exception tracking software should correlate all code events at runtime to their unique fingerprint (even as code changes over time) to identify and prioritize new and spiking errors without relying on developer foresight to log and search for them. -### Host / Container State -Capture detailed system metrics (e.g. CPU, GC) of the host / container in which the code is executing, as well as complete snapshots of the JVM state (e.g. env vars, threading state) at the moment of critical errors and slowdowns. You might need this data to fix the exception. +### DEBUG/TRACE logs -### 256-bit AES Encryption -It's important that all source code and variable states collected are privately encrypted using a 256-bit AES Encryption key before leaving the production node. Secure all sensitive data at all times. 
+The tracker should have highly efficient cyclical in-memory buffers to store log statements and display the DEBUG/TRACE state of your code during the critical events, without increasing log verbosity or parsing log files from disk. -### PII (Personally Identifiable Information) Redaction -Secure all sensitive data at all times! Redact PII variable data before it leaves the user environment. Make sure variable state is redacted based on configurable variable value patterns and code symbology: variable, field and class names. + +### Error detection and prioritization +The Java Exception Tracker should have the capability to correlate all the code events at runtime to their unique fingerprint, even as code changes over time. It should identify and prioritize new and spiking errors without relying on the developer’s foresight to log and search them. -## Install the Harness Error Tracking Java Agent -There are multiple options for installing the Harness Java Exception Tracking Agent. Click the link below and follow the documentation for your preferred type of installation. -[Java Agent Installation Documentation](https://docs.harness.io/article/nx99xfcoxz-install-the-error-tracking-agent) +### Host/Container state -## 4-Step Interactive Walkthrough +The tracker should capture detailed system metrics, such as CPU, GC, and so on, of the host or container in which the code is executing. It should also capture complete snapshots of the JVM state, such as env vars and threading state, in the event of critical errors and slowdown. -The walkthrough below is the quickest way to see what Harness has to offer when it comes to Java exception tracking. There is not a self guided free trial today but that is in the works. For now you can [request a demo](https://www.harness.io/interest/error-tracking) and someone from Harness will contact you to show you the solution and get you set up with the software. 
+ +### 256-bit AES encryption + +The tracker should have the ability to privately encrypt all the source code and the collected variable states using a 256-bit AES Encryption key before they leave the production node. + + +### Personally Identifiable Information (PII) redaction​ + +You must always secure sensitive data at all times. The Java Exception Tracker should provide you the ability to redact the PII variable data before it leaves the user environment. You should ensure that the variable state is redacted based on the configurable variable value patterns and code symbology, such as variable, field, and class names. + + +## Demo - identify, resolve, and prevent Java exceptions + +Here is a demo that shows you how to identify, resolve, and prevent Java exceptions using the Harness Error Tracking Java Agent. + + + + + +## Harness Error Tracking Java Agent + +Here is an interactive video that demonstrates the capabilities of the Harness Error Tracking Java Agent. Select the glowing circle to see the capabilities. To learn more about the Harness Error Tracking Java Agent and request a detailed demo, contact [Harness](https://www.harness.io/interest/error-tracking).
    + +## Install Harness Error Tracking Java Agent + +You can install the Harness Java Exception Tracking Agent in multiple ways. For more information, go to [Java Agent Installation Documentation](https://docs.harness.io/article/nx99xfcoxz-install-the-error-tracking-agent). \ No newline at end of file diff --git a/yarn.lock b/yarn.lock index 4a834f760af..8ee1dc62ffa 100644 --- a/yarn.lock +++ b/yarn.lock @@ -1342,6 +1342,21 @@ react-helmet-async "*" react-loadable "npm:@docusaurus/react-loadable@5.5.2" +"@docusaurus/plugin-client-redirects@^2.2.0": + version "2.2.0" + resolved "https://registry.yarnpkg.com/@docusaurus/plugin-client-redirects/-/plugin-client-redirects-2.2.0.tgz#f6228e5b2852520e22e0f4b89f870431aa975a90" + integrity sha512-psBoWi+cbc2I+VPkKJlcZ12tRN3xiv22tnZfNKyMo18iSY8gr4B6Q0G2KZXGPgNGJ/6gq7ATfgDK6p9h9XRxMQ== + dependencies: + "@docusaurus/core" "2.2.0" + "@docusaurus/logger" "2.2.0" + "@docusaurus/utils" "2.2.0" + "@docusaurus/utils-common" "2.2.0" + "@docusaurus/utils-validation" "2.2.0" + eta "^1.12.3" + fs-extra "^10.1.0" + lodash "^4.17.21" + tslib "^2.4.0" + "@docusaurus/plugin-content-blog@2.2.0": version "2.2.0" resolved "https://registry.yarnpkg.com/@docusaurus/plugin-content-blog/-/plugin-content-blog-2.2.0.tgz#dc55982e76771f4e678ac10e26d10e1da2011dc1" @@ -4500,6 +4515,15 @@ fs-extra@^10.1.0: jsonfile "^6.0.1" universalify "^2.0.0" +fs-extra@^11.1.0: + version "11.1.0" + resolved "https://registry.yarnpkg.com/fs-extra/-/fs-extra-11.1.0.tgz#5784b102104433bb0e090f48bfc4a30742c357ed" + integrity sha512-0rcTq621PD5jM/e0a3EJoGC/1TC5ZBCERW82LQuwfGnCa1V8w7dpYH1yNu+SLb6E5dkeCBzKEyLGlFrnr+dUyw== + dependencies: + graceful-fs "^4.2.0" + jsonfile "^6.0.1" + universalify "^2.0.0" + fs-extra@^9.0.0: version "9.1.0" resolved "https://registry.yarnpkg.com/fs-extra/-/fs-extra-9.1.0.tgz#5954460c764a8da2094ba3554bf839e6b9a7c86d"