What is the SC4S version?
30.26.1
Which operating system (including its version) are you using for hosting SC4S?
RHEL 9.3
Which runtime (Docker, Podman, Docker Swarm, BYOE, MicroK8s) are you using for SC4S?
podman
Is there a pcap available? If so, would you prefer to attach it to this issue or send it to Splunk support?
no
Is the issue related to the environment of the customer or a software-related issue?
We don't know.
Is it related to data loss? Please explain.
currently the vmware logs are not forwarded to splunk
Protocol? Hardware specs?
Last chance index/Fallback index?
Is the issue related to local customization?
no
Do we have all the default indexes created?
yes
Describe the bug
We recently connected our SC4S to VMware.
The SC4S logs report that everything is fine, but the syslog data from SC4S stopped being received by our Splunk Cloud.
We noticed that during this time our syslog-ng-0000*.qf disk-buffer files are filling up.
Our connection to Splunk Cloud goes through an Edge Processor.
To Reproduce
Please contact me and I will demonstrate the issue on our systems.
@liorbubynet It looks like the EP endpoint is intermittently available to SC4S; the disk-buffer files only fill up when the SC4S destination endpoints are unavailable. I would ask you to create a support ticket as well. @ikheifets-splunk Can you please check this as well?
@rjha-splunk
It is our belief that there are no gaps in availability between the two servers,
as they continue to operate fine right now with queue sizes between 30 and 56 MB. Once VMware is added, the queue size spikes to 1 GB in a matter of minutes. Regarding the network, we are seeing about 50 Mb/s without VMware; with it, the rate should be around 150 Mb/s based on the incoming traffic.
All of those are low numbers compared to the 10 Gb link between the servers.
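The queue-size spikes described above can be tracked directly from the buffer files. A minimal sketch, assuming the disk buffers live under /opt/sc4s/var on the host; the actual path depends on how the SC4S volume is mounted into the container, so treat BUFDIR as an assumption:

```shell
#!/bin/sh
# BUFDIR is a hypothetical default; point it at the host directory that is
# mounted into the SC4S container (commonly the volume mapped to /opt/sc4s).
BUFDIR=${BUFDIR:-/opt/sc4s/var}

# List per-file and total sizes of the syslog-ng disk-buffer (.qf) files.
# Running this repeatedly shows whether the buffers are growing or draining.
du -ch "$BUFDIR"/syslog-ng-*.qf 2>/dev/null || echo "no .qf files found in $BUFDIR"
```

Repeating the listing every minute or so makes the grow-vs-drain behavior visible; syslog-ng's own counters (`syslog-ng-ctl stats` run inside the container) can confirm which destination is queueing.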
splunk case 3573453
Was the issue replicated by support?