Describe the bug

I tried to migrate from loki-distributed to loki in our monitoring-stack using this guide. However, I got this error.

I suspect that this is related to this issue in Helm, as both the loki and loki-distributed charts define the loki.image template, and to quote the Helm docs:
> Defined templates (templates created inside a {{ define }} directive) are globally accessible. That means that a chart and all of its subcharts will have access to all of the templates created with {{ define }}.
>
> For that reason, all defined template names should be namespaced.
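As an illustration of the conflict (the template body below is hypothetical, not copied from either chart): when two subcharts of the same release define a template with the same name, Helm keeps only one definition globally, so one chart silently renders with the other's helper:

```yaml
# charts/loki/templates/_helpers.tpl (hypothetical body)
{{- define "loki.image" -}}
{{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}
{{- end -}}

# charts/loki-distributed/templates/_helpers.tpl
# If this chart defines "loki.image" too, whichever definition is parsed
# last wins for BOTH charts, producing errors like the one above.
```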
Because we use an umbrella chart, there's a conflict. I don't see any way around this other than a code change, which is why I am creating this issue.

To Reproduce
Steps to reproduce the behavior:
1. Create a chart with both loki and loki-distributed as dependencies, i.e. our monitoring-stack.
2. Install with loki-distributed enabled and loki disabled (this is the current state).
3. Deploy with both subcharts enabled, as per the guide (step 1).
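The umbrella chart from step 1 might look like this (chart name, versions, and the `loki2` alias are illustrative, matching the values files below):

```yaml
# monitoring-stack/Chart.yaml
apiVersion: v2
name: monitoring-stack
version: 1.0.0
dependencies:
  - name: loki
    alias: loki2
    version: 5.x.x                                  # placeholder version
    repository: https://grafana.github.io/helm-charts
    condition: loki2.enabled
  - name: loki-distributed
    version: 0.x.x                                  # placeholder version
    repository: https://grafana.github.io/helm-charts
    condition: loki-distributed.enabled
```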
Expected behavior
Both loki and loki-distributed are deployed, and we can continue to step 2 of the guide.
Environment:
Infrastructure: Kubernetes
Deployment tool: Helm
Screenshots, Promtail config, or terminal output
values before migration

```yaml
kube-prometheus-stack:
  kubeScheduler:
    serviceMonitor:
      insecureSkipVerify: true
  kubeControllerManager:
    serviceMonitor:
      insecureSkipVerify: true

# I tried to manually enter global.registry wherever needed, still got the same error
global:
  registry: https://hub.docker.com/

loki:
  enabled: false

loki-distributed:
  global:
    registry: https://hub.docker.com/
  registry: https://hub.docker.com/
  enabled: true
  # loki config omitted, may contain sensitive information
```
values-loki-migration.yaml

```yaml
# Drop logs from loki
promtail:
  config:
    snippets:
      common:
        - source_labels:
            - "__meta_kubernetes_pod_label_app_kubernetes_io_component"
          regex: "(canary|read|write)"
          action: "drop"

loki2:
  registry: https://hub.docker.com/
  global:
    registry: https://hub.docker.com/
  enabled: true
  migrate:
    fromDistributed:
      enabled: true
      memberlistService: monitoring-loki-memberlist
  auth_enabled: true
  minio:
    enabled: false
  loki:
    storage:
      bucketNames:
        chunks: monitoring-loki-chunks # an example bucket name for chunks
        ruler: monitoring-loki-ruler # an example bucket name for ruler
      s3:
        endpoint: https://minio:[email protected]/loki
        s3ForcePathStyle: true
    limits_config:
      retention_period: 0 # infinite retention
    compactor:
      retention_enabled: false # infinite retention

global:
  registry: https://hub.docker.com/
```