pprof output (call graph):

```
Type: inuse_space
Time: Jan 29, 2023 at 9:20am (-05)
Showing nodes accounting for 2064.64MB, 100% of 2064.64MB total
----------------------------------------------------------+-------------
      flat  flat%   sum%        cum   cum%   calls calls% + context
----------------------------------------------------------+-------------
                                      1952.78MB   100% |   rogchap.com/v8go.(*Value).String \go\pkg\mod\rogchap.com\[email protected]\value.go:244 (inline)
 1952.78MB 94.58% 94.58%  1952.78MB 94.58%                | rogchap.com/v8go._Cfunc_GoStringN _cgo_gotypes.go:572
----------------------------------------------------------+-------------
```
- We have 12 nodes, each running a pool of 128 isolates.
- The service uses v8go to process events at ~40 req/s per node.
- We reliably close contexts after every event.
- We reliably dispose of each isolate after it processes ~100 events or when its heap exceeds 20MB.
- We call runtime.GC() manually after every isolate disposal.
- The new GOMEMLIMIT parameter has no effect on memory growth.
- Memory growth remains unbounded, and nodes are eventually OOMKilled by Kubernetes.
Calling String() on values from info.Args() in a callback causes [email protected] to leak memory.
Downgrading to [email protected] does not solve the issue.

Has anyone else experienced this behavior before?