We decided to focus on the Sentry part first and postpone InfluxDB. The OOM kill may be related to the buffer of Sentry events awaiting transmission to the relay; if so, it would be closely connected to the high memory usage experienced during network outages.
Are we using the wrong metric in `watchMemoryAndAlert`? `HeapAlloc` vs. `HeapInuse`. [1], [2]
Poseidon used only 751,991,664 bytes (752 MB) of heap (`HeapInuse`). Why did it get OOM killed?
Is the Sentry issue unrelated to the OOM kill and the Influx connection issues, given that Sentry is just marshaling some JSON with acceptable heap usage?
If not heap usage, what else caused Poseidon to reach its 4 GB RAM limit? Did it actually reach the limit, or is this issue based solely on the Sentry warning? (No data is available due to the retention policy.)
We agreed to postpone this issue until the next occurrence.
Today, we had another occurrence of Poseidon being OOM killed.
Stack Trace
At the same time, Influx also showed erroneous behaviour:
Stack Trace