Conversation
closes #7
There's only one thing that I think is out of scope for this PR (it may be necessary, but I hope not). Happy to look into this tomorrow. Otherwise, looks great!
postgresql/values.yaml (Outdated)
@@ -64,7 +64,7 @@ persistence:
   ## set, choosing the default provisioner. (gp2 on AWS, standard on
   ## GKE, AWS & OpenStack)
   ##
-  storageClass: netapp-block-standard
+  storageClass: netapp-file-standard
This is the only thing I want to get to the bottom of. I can take a poke at getting netapp-block-standard working again tomorrow.
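If it helps, a quick way to check whether the class provisions at all is to apply a throwaway claim outside the chart. This is only a sketch; the claim name and size are made up:

```yaml
# Hypothetical standalone claim to check whether netapp-block-standard
# provisions at all, independent of the Helm chart.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-standard-test    # throwaway name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: netapp-block-standard
  resources:
    requests:
      storage: 1Gi
```

If the claim stays Pending, the problem is with the storage class itself rather than anything in this PR.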
I'm going to commit on your branch for a bit to try and resolve the netapp issue. Will comment here when handing back @dleard
AIRFLOW__ELASTICSEARCH__HOST: 'airflow'
AIRFLOW__ELASTICSEARCH__LOG_ID_TEMPLATE: '{{dag_id}}-{{task_id}}-{{execution_date}}-{{try_number}}'
AIRFLOW__ELASTICSEARCH__END_OF_LOG_MARK: 'end_of_log'
AIRFLOW__ELASTICSEARCH__WRITE_STDOUT: 'true'
C_FORCE_ROOT should be moved out of extraEnv too, right?
@wenzowski I don't think so? It sounds to me like C_FORCE_ROOT needs to be an environment variable in order to run Celery as root, whereas variables like AIRFLOW__ELASTICSEARCH__HOST are being inserted into airflow.cfg, which is why they are under airflow:config:
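Roughly the split I mean, as a sketch only; the exact value paths (airflow.config, airflow.extraEnv) and whether extraEnv takes a map or a name/value list are assumptions about this chart's layout:

```yaml
airflow:
  config:
    # Picked up by Airflow's config loader (airflow.cfg equivalents).
    AIRFLOW__ELASTICSEARCH__HOST: 'airflow'
    AIRFLOW__ELASTICSEARCH__WRITE_STDOUT: 'true'
  extraEnv:
    # Plain environment variable consumed by Celery itself rather than by
    # Airflow's config loader, so it stays outside config.
    - name: C_FORCE_ROOT
      value: 'true'
```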
Done mucking, though now I'm wondering if this is the right approach. By writing logs to a fake Elasticsearch we can't read them back. Maybe we should be writing them to an object storage bucket instead?
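If we go the object-storage route, it would likely just be a different set of config entries. A sketch assuming Airflow 1.10-style remote logging settings; the bucket name and connection id are made up for illustration:

```yaml
airflow:
  config:
    AIRFLOW__CORE__REMOTE_LOGGING: 'True'
    # Hypothetical bucket; could equally be a gs:// or swift path.
    AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER: 's3://my-airflow-logs'
    # Hypothetical Airflow connection that would need to be created separately.
    AIRFLOW__CORE__REMOTE_LOG_CONN_ID: 's3_logs'
```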
A look at the persistent volume claim annotations revealed that volume provisioning was stuck. Deleting the claim and re-provisioning solved the issue.
Docs (under 'Writing logs to Elasticsearch'): https://github.com/apache/airflow/blob/1e3cdddcd87be3c0f11b43efea11cdbddaff4470/docs/howto/write-logs.rst