Falcon Container sensor for Linux in an ECS-Fargate cluster

Overview

The Falcon Container sensor for Linux extends runtime security to container workloads in ECS-Fargate clusters that don’t allow you to deploy the kernel-based Falcon sensor for Linux. The Falcon Container sensor runs in user space with no code running in the kernel of the worker node OS. This allows it to secure containers in tasks in clusters where it isn’t possible to deploy the kernel-based Falcon sensor for Linux on the worker node, as with AWS Fargate where organizations don’t have access to the kernel and where privileged containers are disallowed. The Falcon Container sensor can also secure container workloads on clusters where worker node security is managed separately from application security.

The Falcon Container sensor runs inside each application container in a task. It tracks activity in the application containers and sends telemetry to the CrowdStrike cloud. While the Falcon Container sensor is scoped to the container where it runs, its functionality is otherwise similar to that of the kernel-based Falcon sensor for Linux. In particular, it generates detections and performs prevention operations for activity in those application containers.

Installing the Falcon Container sensor in an ECS-Fargate cluster is a manual process: for each application, a new task definition must be created from the task definition JSON generated by the patching utility.

Note: The patching utility adds the SYS_PTRACE capability to every existing container in the task definition. This capability is required for the falcon-sensor container to run. For more info about Falcon Container detections and prevention operations, see Container Security.

Falcon Container components

Falcon Container consists of these main components:

  • Falcon sensor: At runtime, the Falcon Container sensor is added into each container of the task and launched inside the application container. It uses unique technology to run in the application context.

Note: crowdstrike-falcon-init-container is a CrowdStrike-distinguished container name for the Falcon Container sensor for Linux. If you have an application container with this name in a monitored task, the deployment will fail.

  • Falcon patching utility: The Falcon patching utility runs offline, taking a task definition JSON as input and generating a new task definition JSON. It updates the task definition to inject the falcon-sensor into each container and to set the falcon sensor entry point as the entry point of each application container.

If the command field is defined in the task definition, the Falcon patching utility uses it when injecting the falcon sensor entry point as the container entry point. If a container in the task definition doesn't have a command field, the Falcon patching utility can retrieve the EntryPoint and Command of the container image directly from the container registry and then set the falcon sensor entry point as the container entry point.

For a private registry, the patching utility needs access to the registry, using the registry credentials supplied to it as a pull token.

(Figure: ecs-fargate-diagram)

Falcon Container sensor image components

The Falcon Container sensor image includes these major components:

  • Patching utility: Generates the patched task definition for the new task deployment
  • Sensor (falcon-sensor): The sensor application

Requirements

  • Subscription: Cloud Workload Protection (CWP)
  • Supported environments: You must have a running AWS ECS-Fargate cluster.

Prerequisites

Steps to Install

Step 1 - Create Falcon API Client and Keys

  1. Go to Support > API Clients and Keys.
  2. Create an API client and keys with this scope: Falcon Image Download (read)

Step 2 - Set Environment Variables

```shell
export FALCON_CLIENT_ID=
export FALCON_CLIENT_SECRET=
export FALCON_CID=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-YY  # your CID with checksum, found in Hosts Management > Sensor Downloads
export FALCON_CLOUD_REGION=us-2                      # us-1, us-2, or eu-1
export FALCON_CLOUD_API=api.us-2.crowdstrike.com     # api.crowdstrike.com (us-1), api.us-2.crowdstrike.com (us-2), api.eu-1.crowdstrike.com (eu-1)
```
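Since every later step depends on these variables, a small guard can catch typos early. A minimal sketch (POSIX sh, mirroring the exports above):

```shell
# Minimal pre-flight check (POSIX sh): collect the names of any unset variables
# before moving on to the API calls in the next steps
missing=""
for v in FALCON_CLIENT_ID FALCON_CLIENT_SECRET FALCON_CID FALCON_CLOUD_REGION FALCON_CLOUD_API; do
  eval "val=\${$v}"
  [ -n "$val" ] || missing="$missing $v"
done
if [ -n "$missing" ]; then
  echo "Missing required variables:$missing" >&2
else
  echo "All required variables are set"
fi
```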

Step 3 - Get Private CrowdStrike Registry Credentials From API

**Get OAuth2 token.** Log in using your API keys (client ID and secret) to obtain an OAuth2 bearer token for interacting with the CrowdStrike API.

```shell
export FALCON_API_BEARER_TOKEN=$(curl \
  --silent \
  --header "Content-Type: application/x-www-form-urlencoded" \
  --data "client_id=${FALCON_CLIENT_ID}&client_secret=${FALCON_CLIENT_SECRET}" \
  --request POST \
  --url "https://$FALCON_CLOUD_API/oauth2/token" | \
  jq -r '.access_token')
```
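The curl call returns a JSON body, and jq extracts its access_token field. With a dummy response body (not a real token), the extraction step behaves like:

```shell
# Dummy OAuth2 response body for illustration; the real one comes from the curl call above
response='{"access_token":"dummy-token","token_type":"bearer","expires_in":1799}'
token=$(echo "$response" | jq -r '.access_token')
echo "$token"   # → dummy-token
```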

**Get CrowdStrike registry password.** Using the OAuth2 bearer token from the previous step, retrieve the password FALCON_ART_PASSWORD. Together with FALCON_ART_USERNAME (next step), it grants access to the CrowdStrike private registry for reviewing repositories and pulling the image (essentially Docker login credentials).

```shell
export FALCON_ART_PASSWORD=$(curl --silent -X GET -H "authorization: Bearer ${FALCON_API_BEARER_TOKEN}" \
  https://${FALCON_CLOUD_API}/container-security/entities/image-registry-credentials/v1 | \
  jq -r '.resources[].token')
```

**Format the username for the CrowdStrike registry.** The username is based on your CID: lowercase it, remove the checksum, and prepend fc-. Example format: fc-xxxxxxxxxxxxxxxxxxxxx

```shell
export FALCON_ART_USERNAME="fc-$(echo $FALCON_CID | awk '{ print tolower($0) }' | cut -d'-' -f1)"
```
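To illustrate the transformation with a made-up CID (the value below is not a real customer ID):

```shell
# Dummy CID for illustration only: lowercase, strip the checksum, prepend fc-
SAMPLE_CID="ABCDEF1234567890ABCDEF1234567890-7X"
SAMPLE_USERNAME="fc-$(echo "$SAMPLE_CID" | awk '{ print tolower($0) }' | cut -d'-' -f1)"
echo "$SAMPLE_USERNAME"   # → fc-abcdef1234567890abcdef1234567890
```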

Step 4 - Get Latest Sensor Version

**Log in to the CrowdStrike registry and retrieve the latest sensor version.** Obtain a REGISTRYBEARER token to interact with the CrowdStrike private registry, then get the latest sensor version. Finally, set the FALCON_IMAGE_REPO variable for the repository you will pull/deploy the image from, and set its tag. Example tag: 6.35.0-13206.falcon-linux.x86_64.Release.US-1

```shell
export SENSORTYPE=falcon-container

export REGISTRYBEARER=$(curl -X GET -s -u "${FALCON_ART_USERNAME}:${FALCON_ART_PASSWORD}" \
  "https://registry.crowdstrike.com/v2/token?=${FALCON_ART_USERNAME}&scope=repository:${SENSORTYPE}/${FALCON_CLOUD_REGION}/release/falcon-sensor:pull&service=registry.crowdstrike.com" | \
  jq -r '.token')

export LATESTSENSOR=$(curl -X GET -s -H "authorization: Bearer ${REGISTRYBEARER}" \
  "https://registry.crowdstrike.com/v2/${SENSORTYPE}/${FALCON_CLOUD_REGION}/release/falcon-sensor/tags/list" | \
  jq -r '.tags[-1]')

export FALCON_IMAGE_REPO="registry.crowdstrike.com/${SENSORTYPE}/${FALCON_CLOUD_REGION}/release/falcon-sensor"
export FALCON_IMAGE_TAG=$LATESTSENSOR
```
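Note that jq's `.tags[-1]` selects the last element of the tags array; this relies on the registry returning tags in ascending version order, an assumption worth verifying for your tenant. With a dummy payload:

```shell
# Dummy tags payload for illustration; the real list comes from the registry call above
tags='{"tags":["6.34.0-13108.falcon-linux.x86_64.Release.US-2","6.35.0-13206.falcon-linux.x86_64.Release.US-2"]}'
latest=$(echo "$tags" | jq -r '.tags[-1]')
echo "$latest"   # → 6.35.0-13206.falcon-linux.x86_64.Release.US-2
```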

Step 5 - Pull/Push Container Sensor Image to ECR

Pull the Falcon Container image and push it to your ECR:

```shell
### Log in to the CrowdStrike registry
echo $FALCON_ART_PASSWORD | docker login -u $FALCON_ART_USERNAME --password-stdin registry.crowdstrike.com

### Pull the Falcon Container sensor image
docker pull $FALCON_IMAGE_REPO:$FALCON_IMAGE_TAG

### Tag the image to point to your registry
docker tag $FALCON_IMAGE_REPO:$FALCON_IMAGE_TAG <AWSACCOUNTID>.dkr.ecr.<AWSREGION>.amazonaws.com/falcon-sensor/falcon-container:your_tag_value

### Push the image to your registry
docker push <AWSACCOUNTID>.dkr.ecr.<AWSREGION>.amazonaws.com/falcon-sensor/falcon-container:your_tag_value
```

Step 6 - Running the ECS Task Definition Patching Utility

**Creating a pull token.** The ECS patching utility needs to query the customer-defined images wherever they are hosted, and uses a pull token to access them. The example below constructs a pull token for use with private ECR; replace `<AWSACCOUNTID>` and `<AWSREGION>` with your values. Note: AWS ECR credentials are short-lived, so the token will need to be refreshed periodically. Options to support utilities such as ecr-helper are being explored.

```shell
### For Mac:
export PULL_TOKEN=$(echo "{\"auths\":{\"<AWSACCOUNTID>.dkr.ecr.<AWSREGION>.amazonaws.com\":{\"auth\": \"$(echo AWS:$(aws ecr get-login-password)|base64)\"}}}" | base64)

### For Linux:
export PULL_TOKEN=$(echo "{\"auths\":{\"<AWSACCOUNTID>.dkr.ecr.<AWSREGION>.amazonaws.com\":{\"auth\": \"$(echo AWS:$(aws ecr get-login-password)|base64 -w 0)\"}}}" | base64 -w 0)
```

**Running the ECS Task Definition patcher.** The ECS Task Definition patcher is embedded within the Falcon Container image. Run it locally or as part of your CI/CD process to patch your ECS task definitions before uploading them to ECS and running. The example below assumes both the customer's container image and the Falcon Container image are hosted in ECR.

```shell
docker run -v /path/to/ecs/taskspecfolder:/var/run/spec \
  --rm <AWSACCOUNTID>.dkr.ecr.<AWSREGION>.amazonaws.com/falcon-sensor/falcon-container:your_tag_value \
  -cid "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-YY" \
  -image "<AWSACCOUNTID>.dkr.ecr.<AWSREGION>.amazonaws.com/falcon-sensor/falcon-container:your_tag_value" \
  -pulltoken $PULL_TOKEN \
  -ecs-spec-file /var/run/spec/taskdefinition.json > taskdefinitionwithfalcon.json
```

Flags explained:

  • -v <localpath>:<pathwithincontainer>: Mounts the directory holding your task definition into the falcon container as a volume so the ECS patcher can access it
  • --rm: Automatically cleans up the container and removes its file system when the container exits
  • <repo>:<tag>: The repo/tag of the container image containing the patching utility
  • -cid <cidvalue>: Customer ID; directs Falcon Container sensors to report into your Falcon cloud portal
  • -image <repo>:<tag>: The image path from which the falcon init containers will spawn (typically ECR)
  • -pulltoken: Used by the ECS patcher to query ECR-hosted images defined in the ECS task definition and inspect their entrypoints/commands when those are not defined in the task definition
  • -ecs-spec-file /var/run/spec/<taskdefinition.json>: Path to the task definition file inside the container

Lastly, the output is redirected to a new file. Alternatively, remove the redirection and it will print to stdout.
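After patching, a quick local check with jq can confirm the init container and the SYS_PTRACE capability made it into the output. The file below is a hypothetical, heavily trimmed stand-in for the patcher's real output, which contains far more fields:

```shell
# Hypothetical, trimmed example of a patched task definition (illustration only)
cat > /tmp/patched-example.json <<'EOF'
{
  "family": "my-app",
  "containerDefinitions": [
    { "name": "crowdstrike-falcon-init-container" },
    {
      "name": "my-app",
      "linuxParameters": { "capabilities": { "add": ["SYS_PTRACE"] } }
    }
  ]
}
EOF

# List container names: the init container should appear alongside your app containers
jq -r '.containerDefinitions[].name' /tmp/patched-example.json

# Confirm the application container was granted SYS_PTRACE
jq -r '.containerDefinitions[] | select(.linuxParameters != null) | .linuxParameters.capabilities.add[]' /tmp/patched-example.json
```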

Step 7 - Deploying on ECS Fargate

With your Falcon-modified task definition, upload it to ECS as normal and launch. Rather than a single container, you will now see two: first a CrowdStrike init container, which sets up a shared working directory and copies in the libraries and binaries the falcon sensor requires; then the application container, which starts with a modified entry point that launches the CrowdStrike sensor before the customer-defined entry point.

```shell
### Upload the modified task definition. Note the revision returned, or capture the full ARN if launching via the CLI.
aws ecs register-task-definition --cli-input-json file://taskdefinitionwithfalcon.json
```

Snippet of the response:

```json
{
    "taskDefinition": {
        "taskDefinitionArn": "arn:aws:ecs:<region>:<awsaccountid>:task-definition/<task-family-name>:<RevisionID>",
        ...
    }
}
```

```shell
### Run the task definition
aws ecs run-task \
  --cluster <CLUSTER_NAME> \
  --task-definition arn:aws:ecs:<region>:<awsaccountid>:task-definition/<task-family-name>:<RevisionID> \
  --network-configuration "awsvpcConfiguration={subnets=[<subnet-1>,<subnet-2>],securityGroups=[<securitygroup-1>]}"
```

Optional Steps

Add repository credentials for crowdstrike-falcon-init-container

Use the secret-arn option to specify the AWS secret ARN, which references the credentials that can be used to pull a falcon container sensor image from a private registry in ECS. The secret is patched to the repositoryCredentials parameter of the crowdstrike-falcon-init-container in the task definition.

```shell
docker run -v /host/specs/:/var/run/spec --rm $FALCON_IMAGE_URI \
  -cid $CID -image $FALCON_IMAGE_URI \
  -ecs-spec-file /var/run/spec/taskdefinition-file.json \
  -secret-arn arn:aws:secretsmanager::532730071073:secret:test
```

Note: If the task definition input comes from stdin instead of a file, use the -ecs-spec option instead of -ecs-spec-file.

Disable Falcon Container sensor for specific containers

By default, the Falcon Container sensor is patched into all containers within the task definition. To disable patching for a container in a task definition, add this label to the dockerLabels attribute in the container definition:

sensor.falcon-system.crowdstrike.com/injection=disabled
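In a container definition, the label sits in the standard ECS dockerLabels map (a plain string-to-string map); surrounding fields are omitted here:

```json
{
  "name": "my-app",
  "dockerLabels": {
    "sensor.falcon-system.crowdstrike.com/injection": "disabled"
  }
}
```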

How to verify running state of falcon-sensor

The following methods show how to verify the running state of the falcon-sensor.

Note: Before trying the methods below, make sure you have a valid CID, that the ECS VPC outbound rules are configured correctly, and, if installation tokens are enabled in the Falcon console, that the installation token was passed to the patching utility via falconctl options when running it.

Run the task with tracing enabled for falcon-sensor

To enable tracing, run the patching utility with --trace=info passed via the falconctl options, such as -falconctl-opts "--trace=info". For example:

```shell
docker run -v /host/specs/:/var/run/spec --rm $FALCON_IMAGE_URI \
  -cid $CID -image $FALCON_IMAGE_URI \
  -ecs-spec-file /var/run/spec/taskdefinition-file.json \
  -falconctl-opts "--trace=info"
```

Using Amazon ECS Exec for debugging

With Amazon ECS Exec, you can directly interact with containers and run Linux process monitoring commands to verify that falcon-sensor is running. For info on configuring Amazon ECS Exec, see the Amazon article Using Amazon ECS Exec for debugging.

Known Issues

"Parameter validation failed" / "unknown parameter in input"

When uploading your modified task definition back into the AWS console, you may receive 'Parameter validation failed' or 'Unknown parameter in input: paramName'. This typically happens when the source task definition being patched was downloaded from the AWS Console or AWS CLI: when a task definition is registered, AWS adds a number of managed parameters that are not accepted when re-uploaded. If you receive this error, remove the following objects from the task definition JSON if they are present, then re-run the patching utility.

  • taskDefinitionArn
  • requiresAttributes
  • status
  • revision
  • compatibilities
  • registeredAt
  • registeredBy
  • tags (if empty)
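One way to strip these fields is with jq. The input below is a trimmed, made-up example of a downloaded task definition, and the del/if filter is a sketch, not an official tool:

```shell
# Made-up, trimmed example of a task definition downloaded from AWS
cat > /tmp/downloaded-taskdefinition.json <<'EOF'
{
  "family": "my-app",
  "containerDefinitions": [],
  "taskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/my-app:3",
  "revision": 3,
  "status": "ACTIVE",
  "requiresAttributes": [],
  "compatibilities": ["EC2", "FARGATE"],
  "registeredAt": "2023-01-01T00:00:00Z",
  "registeredBy": "arn:aws:iam::123456789012:user/example",
  "tags": []
}
EOF

# Strip the AWS-managed fields, dropping tags only when it is an empty list
jq 'del(.taskDefinitionArn, .requiresAttributes, .status, .revision,
        .compatibilities, .registeredAt, .registeredBy)
    | if .tags == [] then del(.tags) else . end' \
  /tmp/downloaded-taskdefinition.json > /tmp/clean-taskdefinition.json

jq -r 'keys[]' /tmp/clean-taskdefinition.json   # → containerDefinitions, family
```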

Example Error:

```
Parameter validation failed:
Unknown parameter in input: "compatibilities", must be one of: family, taskRoleArn, executionRoleArn, networkMode, containerDefinitions, volumes, placementConstraints, requiresCompatibilities, cpu, memory, tags, pidMode, ipcMode, proxyConfiguration, inferenceAccelerators, ephemeralStorage
Unknown parameter in input: "registeredAt", must be one of: family, taskRoleArn, executionRoleArn, networkMode, containerDefinitions, volumes, placementConstraints, requiresCompatibilities, cpu, memory, tags, pidMode, ipcMode, proxyConfiguration, inferenceAccelerators, ephemeralStorage
Unknown parameter in input: "registeredBy", must be one of: family, taskRoleArn, executionRoleArn, networkMode, containerDefinitions, volumes, placementConstraints, requiresCompatibilities, cpu, memory, tags, pidMode, ipcMode, proxyConfiguration, inferenceAccelerators, ephemeralStorage
Unknown parameter in input: "requiresAttributes", must be one of: family, taskRoleArn, executionRoleArn, networkMode, containerDefinitions, volumes, placementConstraints, requiresCompatibilities, cpu, memory, tags, pidMode, ipcMode, proxyConfiguration, inferenceAccelerators, ephemeralStorage
Unknown parameter in input: "revision", must be one of: family, taskRoleArn, executionRoleArn, networkMode, containerDefinitions, volumes, placementConstraints, requiresCompatibilities, cpu, memory, tags, pidMode, ipcMode, proxyConfiguration, inferenceAccelerators, ephemeralStorage
Unknown parameter in input: "status", must be one of: family, taskRoleArn, executionRoleArn, networkMode, containerDefinitions, volumes, placementConstraints, requiresCompatibilities, cpu, memory, tags, pidMode, ipcMode, proxyConfiguration, inferenceAccelerators, ephemeralStorage
Unknown parameter in input: "taskDefinitionArn", must be one of: family, taskRoleArn, executionRoleArn, networkMode, containerDefinitions, volumes, placementConstraints, requiresCompatibilities, cpu, memory, tags, pidMode, ipcMode, proxyConfiguration, inferenceAccelerators, ephemeralStorage
```