This task copies specified Assets from source STAC Item(s), uploads them to S3, and updates the Item asset hrefs to point to the new location.
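Conceptually, the href-updating part of the task can be sketched as below. This is a minimal illustration, not the actual task code: the `rewrite_asset_hrefs` helper is hypothetical, and it assumes a `path_template` with `${collection}` and `${id}` placeholders as used later in these instructions.

```python
from string import Template


def rewrite_asset_hrefs(item: dict, path_template: str) -> dict:
    """Return a copy of a STAC Item whose asset hrefs point at the
    templated S3 location (original filenames are preserved)."""
    # Fill ${collection} and ${id} from the Item's own fields.
    prefix = Template(path_template).substitute(
        collection=item["collection"], id=item["id"]
    )
    out = {**item, "assets": {}}
    for key, asset in item["assets"].items():
        filename = asset["href"].rsplit("/", 1)[-1]
        out["assets"][key] = {**asset, "href": prefix.rstrip("/") + "/" + filename}
    return out


item = {
    "id": "item-1",
    "collection": "demo",
    "assets": {"thumbnail": {"href": "https://example.com/data/thumb.png"}},
}
updated = rewrite_asset_hrefs(item, "s3://copy_results/output/${collection}/${id}/")
print(updated["assets"]["thumbnail"]["href"])
# s3://copy_results/output/demo/item-1/thumb.png
```

The original Item is left untouched; only the returned copy carries the new S3 hrefs.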
To run this task within Argo Workflows, follow the steps below.

1. `cd` into this directory.
2. Build an image from the provided Dockerfile. If you are using Rancher Desktop to run your K8s cluster, you need to use `nerdctl` to build the image:

   ```shell
   nerdctl build --namespace k8s.io -t copyassets .
   ```

   This will create an image with the name and tag `copyassets:latest`.
3. Make sure Argo Workflows is installed on the K8s cluster (see instructions here).
4. Upload the `payload_workflow.json` file to object storage, such as S3. Change the `path_template` variable in `upload_options` to the path where you want to save the output Item assets of this task. For example, to save the output Item assets inside the `output` folder of a bucket named `copy_results`, templated by the Item's collection and id, the `path_template` would be `s3://copy_results/output/${collection}/${id}/`.
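For reference, the relevant portion of the payload might look like the following. This is a sketch under the assumptions above: surrounding fields of `payload_workflow.json` are omitted, and only the `upload_options`/`path_template` names come from these instructions.

```json
{
  "upload_options": {
    "path_template": "s3://copy_results/output/${collection}/${id}/"
  }
}
```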
5. Make the bucket publicly accessible and get the object URL associated with the payload uploaded in step 4.
6. Create a secret named `my-s3-credentials` that contains your AWS credentials. The secret must have the keys `access-key-id`, `secret-access-key`, and `session-token` for authenticating to AWS.
7. Run the Argo workflow in the same namespace where the Argo Workflow Controller is installed:

   ```shell
   argo submit -n <NAMESPACE> --watch <FULL PATH TO WORKFLOW YAML FILE>
   ```

   substituting the appropriate values where needed. You can run either the `workflow_copyassets_with_template.yaml` file or the `workflow_copyassets_no_template.yaml` file. If you run `workflow_copyassets_with_template.yaml`, you first need to have the Workflow Template installed. You can do this with:

   ```shell
   kubectl apply -n <NAMESPACE> -f <FULL PATH TO THE workflow-template.yaml FILE>
   ```

   where `<NAMESPACE>` is the namespace where the Argo Workflow Controller is installed and the path is the full path to the `workflow-template.yaml` file.