
[FR] guide on how exactly to use with helm including example of PV setup #71

Open
colin-riddell opened this issue May 15, 2023 · 4 comments

@colin-riddell

colin-riddell commented May 15, 2023

Hey hey.

Maybe I'm just being thick, but I'm not sure I understand how exactly to set this up correctly with PV claims etc. Does every service need a PV? I found myself writing out a very lengthy helm config by hand, and after several dozen attempts I still don't have all the properties configured.
It's not clear to me whether the media should be shared on a created PV, via some other means such as NFS, or both.
An example of how to do that would be useful. Thanks in advance.
Happy to be told where to go, though 😅

@colin-riddell
Author

colin-riddell commented May 22, 2023

An update on what I've found out so far.

  • You can either use a PV or point it at an NFS share. I found that mounting an NFS share works well for my needs, so I've done that. Just set customVolume: true, then under volumes set:

        nfs:
          server: <nfs server ip>
          path: /some/nfs/path
  • I had issues with this and found out that the directories defined in subPaths need to already exist on the share, otherwise the pods won't run properly. The subPath directories won't create themselves.
  • I also hit ownership issues on my NFS, so I had to mess around with chown etc. Ultimately I found that exporting the share with no_root_squash was the trick (potentially insecure, but this is all on my home network).
  • I couldn't get it to work with traefik, and in the end just gave up and let MetalLB create an IP for plex on my network. Just change ClusterIP to LoadBalancer and it should register consistently in an IP range that MetalLB is allowed to access (see the sketch after this list).
  • I couldn't get the nodeSelector override in each specific app to work. The nodeSelector config under general works, but presumably that means all applications will be scheduled on that one node? It would be great to have that selector working per app under the container definition.
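
To illustrate the LoadBalancer point, this is roughly what the Service ends up looking like once the type is switched. This is a sketch, not the chart's rendered output: the name, selector, and IP are illustrative, and loadBalancerIP is the standard Kubernetes field for requesting a specific address from MetalLB's pool:

    apiVersion: v1
    kind: Service
    metadata:
      name: plex                      # illustrative name
    spec:
      type: LoadBalancer              # was ClusterIP
      loadBalancerIP: 192.168.0.201   # optional: pin an address from MetalLB's range
      ports:
        - port: 32400
          targetPort: 32400
      selector:
        app: plex                     # illustrative; match whatever labels the chart sets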

@colin-riddell
Author

Still using this very handy mediaserver operator. I thought I'd share my current setup and config for anyone else in the same situation I was in when trying out the project.
I have 4 nodes, with an NFS share on node3 that is mounted on all the other nodes at the same path.

---

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: download-tool-ingressroutes
  namespace: default
  # annotations:
  #   traefik.ingress.kubernetes.io/router.provider: traefik
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`192.168.0.200`) && PathPrefix(`/prowlarr`)
      kind: Rule
      services:
        - name: prowlarr
          port: 9696
    - match: Host(`192.168.0.200`) && PathPrefix(`/sonarr`)
      kind: Rule
      services:
        - name: sonarr
          port: 8989
    - match: Host(`192.168.0.200`) && PathPrefix(`/jackett`)
      kind: Rule
      services:
        - name: jackett
          port: 9117

---

apiVersion: charts.kubealex.com/v1
kind: K8SMediaserver
metadata:
  name: k8smediaserver-sample
spec:
  # Default values copied from <project_dir>/helm-charts/k8s-mediaserver/values.yaml
  general:
    nodeSelector:
      node: node3
    ingress_host: k8s-mediaserver.k8s.test
    plex_ingress_host: 192.168.0.201
    image_tag: latest
    # UID to run the process with
    puid: 1000
    # GID to run the process with
    pgid: 1000
    # Persistent storage selections and pathing
    storage:
      customVolume: true  # set to true if not using a PVC (must provide volume below)
      pvcName: mediaserver-pvc
      size: 5Gi
      accessMode: ""
      pvcStorageClass: ""
      # paths relative to the top level of the volume you're passing in. If your share is server.local/share/, then tv ends up at server.local/share/media/tv
      subPaths:
        tv: media/tv
        movies: media/movies
        downloads: downloads
        transmission: transmission
        sabnzbd: sabnzbd
        config: config
      volumes:
        nfs:
          server: 192.168.0.123
          path: /volume/raid
      #  hostPath:
      #    path: /mnt/share
    ingress:
      ingressClassName: ""

  sonarr:
    enabled: true
    container:
      image: docker.io/linuxserver/sonarr
      nodeSelector:
        node: node3
      port: 8989
    service:
      type: ClusterIP
      port: 8989
      nodePort:
      extraLBService: false
      # Defines an additional LB service, requires cloud provider service or MetalLB
      # extraLBService:
      #  annotations:
      #    my-annotation: my-value
    ingress:
      enabled: true
      annotations: {}
      path: /sonarr
      tls:
        enabled: false
        secretName: ""
    resources: {}
    volume: {}
    # name: pvc-sonarr-config
    # storageClassName: longhorn
    # annotations:
    #   my-annotation/test: my-value
    # labels:
    #   my-label/test: my-other-value
    # accessModes: ReadWriteOnce
    # storage: 5Gi
    # selector: {}

  radarr:
    enabled: false
    container:
      image: docker.io/linuxserver/radarr
      nodeSelector:
        node: node2
      port: 7878
    service:
      type: ClusterIP
      port: 7878
      nodePort:
      extraLBService: false
      # Defines an additional LB service, requires cloud provider service or MetalLB
      # extraLBService:
      #  annotations:
      #    my-annotation: my-value
    ingress:
      enabled: true
      annotations: {}
      path: /radarr
      tls:
        enabled: false
        secretName: ""
    resources: {}
    volume: {}
    # name: pvc-radarr-config
    # storageClassName: longhorn
    # annotations: {}
    # labels: {}
    # accessModes: ReadWriteOnce
    # storage: 5Gi
    # selector: {}

  jackett:
    enabled: true
    container:
      image: docker.io/linuxserver/jackett
      nodeSelector:
        node: node1
      port: 9117
    service:
      type: ClusterIP
      port: 9117
      nodePort:
      extraLBService: false
      # Defines an additional LB service, requires cloud provider service or MetalLB
      # extraLBService:
      #  annotations:
      #    my-annotation: my-value
    ingress:
      enabled: true
      annotations: {}
      path: /jackett
      tls:
        enabled: false
        secretName: ""
    resources: {}
    volume: {}
    #  name: pvc-jackett-config
    #  storageClassName: longhorn
    #  annotations: {}
    #  labels: {}
    #  accessModes: ReadWriteOnce
    #  storage: 5Gi
    #  selector: {}

  transmission:
    enabled: false
    container:
      image: docker.io/linuxserver/transmission
      nodeSelector:
        node: node2
      port:
        utp: 9091
        peer: 51413
    service:
      utp:
        type: ClusterIP
        port: 9091
        nodePort:
        # Defines an additional LB service, requires cloud provider service or MetalLB
        extraLBService: false
      peer:
        type: ClusterIP
        port: 51413
        nodePort:
        nodePortUDP:
        # Defines an additional LB service, requires cloud provider service or MetalLB
        extraLBService: false
        # extraLBService:
        #  annotations:
        #    my-annotation: my-value
    ingress:
      enabled: true
      annotations: {}
      path: /transmission
      tls:
        enabled: false
        secretName: ""
    config:
      auth:
        enabled: false
        username: ""
        password: ""
    resources: {}
    volume: {}
    #  name: pvc-transmission-config
    #  storageClassName: longhorn
    #  annotations: {}
    #  labels: {}
    #  accessModes: ReadWriteOnce
    #  storage: 5Gi
    #  selector: {}

  sabnzbd:
    enabled: false
    container:
      image: docker.io/linuxserver/sabnzbd
      nodeSelector: {}
      port:
        http: 8080
        https: 9090
    service:
      http:
        type: ClusterIP
        port: 8080
        nodePort:
        # Defines an additional LB service, requires cloud provider service or MetalLB
        extraLBService: false
      https:
        type: ClusterIP
        port: 9090
        nodePort:
        # Defines an additional LB service, requires cloud provider service or MetalLB
        extraLBService: false
        # extraLBService:
        #  annotations:
        #    my-annotation: my-value
    ingress:
      enabled: true
      annotations: {}
      path: /sabnzbd
      tls:
        enabled: false
        secretName: ""
    resources: {}
    volume: {}
    #  name: pvc-sabnzbd-config
    #  storageClassName: longhorn
    #  annotations: {}
    #  labels: {}
    #  accessModes: ReadWriteOnce
    #  storage: 5Gi
    #  selector: {}

  prowlarr:
    enabled: true
    container:
      image: docker.io/linuxserver/prowlarr
      tag: develop
      nodeSelector: {}
      port: 9696
    service:
      type: ClusterIP
      port: 9696
      nodePort:
      extraLBService: false
    ingress:
      enabled: true
      annotations: {}
      path: /prowlarr
      tls:
        enabled: false
        secretName: ""
    resources: {}
    volume: {}

  plex:
    enabled: true
    claim: "SOMECLAIMTOKEN"
    replicaCount: 1
    container:
      image: docker.io/linuxserver/plex
      nodeSelector:
        plex: worker
      port: 32400
    service:
      type: LoadBalancer
      port: 32400
      nodePort:
      # Defines an additional LB service, requires cloud provider service or MetalLB
      extraLBService: false
    ingress:
      enabled: false
      annotations:
        metallb.universe.tf/allow-shared-ip: "192.168.0.201"
      tls:
        enabled: false
        secretName: ""
    resources: {}
    #  limits:
    #    cpu: 100m
    #    memory: 100Mi
    #  requests:
    #    cpu: 100m
    #    memory: 100Mi
    volume: {}
    #  name: pvc-plex-config
    #  storageClassName: longhorn
    #  annotations: {}
    #  labels: {}
    #  accessModes: ReadWriteOnce
    #  storage: 5Gi
    #  selector: {}

Notes:

  • I found plex needs its own entire IP, and works better if you expose it on its own port rather than forwarding to 80/443, so I set that service to LoadBalancer and let MetalLB take care of it. Note the MetalLB annotation used to pin/share the IP so it stays the same every time: metallb.universe.tf/allow-shared-ip: "192.168.0.201"
  • sonarr, jackett etc. work well on just a route, without the need for a whole domain/IP, so just create a traefik IngressRoute pointing to those services. Works pretty well.
  • I cannot for the life of me get the individual nodeSelector directives to work for the individual deployment/service sections. This is really driving me nuts, since I really don't want all applications scheduled on the same node. Any advice on how to get this to work would be greatly received @kubealex

@kubealex
Owner

hey @colin-riddell, thank you so much for your considerations!

Regarding the nodeSelector, the initial idea was to have all the pods on the same node to avoid requiring RWX PVs, so by default the nodeSelector is taken from the general settings.

A possible workaround would be to keep the general nodeSelector as a default and override it in the individual resources (see the sketch below).
Of course this would require a bit of tuning on the templates, but it's definitely something that can be discussed/worked on.
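
A minimal sketch of that default-and-override pattern in a template, assuming the values layout shown in the config above (untested; Sprig's default treats an empty map as unset, so an app-level nodeSelector of {} falls back to the general one):

    {{- /* per-app selector wins; otherwise fall back to the general one */}}
    {{- $selector := .Values.plex.container.nodeSelector | default .Values.general.nodeSelector }}
    {{- with $selector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
    {{- end }}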

@colin-riddell
Author

hey @kubealex - yeah, keen to help PoC that. I tried messing with the template but couldn't get it to work. I was editing the plex-resource.yaml section to point to the plex-specific node selector configured in k8s-mediaserver.yaml.

      {{- with .Values.plex.container.nodeSelector }}    # pointing to specific config for plex container?
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}

I couldn't get this to work. I'm not using helm directly though, so I'm basing this on the assumption that the operator still renders these templates under the hood when the custom resource is applied? (I couldn't see where else it would be configured.)

I also tried editing plex-resource.yaml to hardcode a selector, with no joy. I don't know much about how CRDs work TBF, so apologies for that. If you'd point out where the configs are picked up when the CR is applied, I'll happily have another stab.
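
For what it's worth, my working assumption (based on how Helm-based operator-sdk operators generally behave, not verified against this repo) is that the operator maps the custom resource to the bundled chart via a watches.yaml, with the CR's spec becoming the chart values, so template edits should be picked up on the next reconcile. Something along these lines:

    # illustrative watches.yaml following the operator-sdk Helm convention;
    # group/version/kind match the CR above, the chart path is taken from the
    # values.yaml comment in the config - the actual file in this repo may differ
    - group: charts.kubealex.com
      version: v1
      kind: K8SMediaserver
      chart: helm-charts/k8s-mediaserver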
