I have noticed that the current implementation of the Milvus Operator does not support headless services. Headless services are essential for certain stateful applications that require direct access to individual pods without load balancing.
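For context, a headless service in Kubernetes is simply a Service with `clusterIP: None`, so DNS queries resolve directly to the individual pod IPs instead of a single load-balanced virtual IP. A minimal sketch of what such a service could look like — the name, labels, and port here are illustrative assumptions, not what the Milvus Operator actually generates:

```yaml
# Hypothetical headless service for Milvus pods (names/labels are assumptions)
apiVersion: v1
kind: Service
metadata:
  name: my-release-milvus-headless
spec:
  clusterIP: None            # "None" is what makes the service headless
  selector:
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/name: milvus
  ports:
    - name: milvus
      port: 19530
```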
From the above, we can see that my-release-milvus is assigned a ClusterIP rather than being exposed as a headless service for the nodes.
Could you please confirm if there are any plans to add support for headless services in the Milvus Operator?
Thank you for your attention to this matter.
Hi @qchenzi, thank you for bringing this up. My question is that the Milvus proxies are considered stateless, so there seems to be no need for a headless service. Could you describe the specific case in which a headless service is needed?
Thank you for your response and for considering the pull request. In our specific case, we use CoreDNS with wildcard DNS entries to dynamically resolve the IPs of the pods. This setup is essential for our internal service discovery and communication, where each pod must be reachable individually under a predictable DNS name.
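To illustrate the DNS pattern we rely on: with a headless service, each StatefulSet pod governed by it gets a stable record of the form `<pod-name>.<service-name>.<namespace>.svc.cluster.local`. A hedged sketch, assuming a headless service named `my-release-milvus-headless` in namespace `default` (these names are assumptions, not the operator's actual output):

```shell
# Illustrative only; run from inside the cluster.
# Resolving the headless service returns all backing pod IPs, not one virtual IP:
nslookup my-release-milvus-headless.default.svc.cluster.local

# Individual pods become addressable by predictable per-pod DNS names:
nslookup my-release-milvus-0.my-release-milvus-headless.default.svc.cluster.local
```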
While Milvus proxies are stateless, enabling headless services still benefits our deployment: it lets each pod be addressed directly by a stable DNS name rather than only through the load-balanced service VIP.