I have Postgres hosted on RDS on a db.t4g.large instance, which has 8 GB of memory and allows around 900 max connections.
The service is hosted on Kubernetes with 10 pods running.
HikariCP is using the default max pool size and min idle, so each pod should hold 10 connections and the service should keep around 100 connections open at all times (both maximumPoolSize and minimumIdle default to 10).
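For context, here is a minimal sketch of what one pod's pool setup amounts to under those defaults (the JDBC URL, credentials, and function name are placeholders, not the actual PostgresDataSourceFactory code; the explicit maximumPoolSize/minimumIdle values just spell out the defaults):

```kotlin
import com.zaxxer.hikari.HikariConfig
import com.zaxxer.hikari.HikariDataSource

// Sketch of one pod's pool: 10 pods * 10 connections ≈ 100 connections total,
// well under the ~900 allowed by the db.t4g.large instance.
fun createDataSource(): HikariDataSource {
    val config = HikariConfig().apply {
        jdbcUrl = "jdbc:postgresql://<rds-endpoint>:5432/<db>" // placeholder
        username = "<user>"
        password = "<password>"
        poolName = "Main"
        maximumPoolSize = 10       // default
        minimumIdle = 10           // default (follows maximumPoolSize)
        connectionTimeout = 10_000 // matches the 10000ms in the error below
    }
    return HikariDataSource(config)
}
```

I get the following error: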
java.sql.SQLTransientConnectionException: Main - Connection is not available, request timed out after 10000ms (total=0, active=0, idle=0, waiting=0)
at c.zaxxer.hikari.pool.HikariPool.createTimeoutException(HikariPool.java:686)
at c.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:179)
at c.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:144)
at c.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:99)
at c.s.f.p.p.PostgresDataSourceFactory$createDataSource$1.getConnection(PostgresDataSourceFactory.kt:50)
at jdk.proxy2/jdk.proxy2.$Proxy167.countUsersActivelyProcessing(Unknown Source) [26 skipped]
at j.i.r.GeneratedMethodAccessor52.invoke(Unknown Source)
at j.b.i.r.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at j.base/java.lang.reflect.Method.invoke(Method.java:568)
at c.s.f.p.i.SummaryLogRepositoryAspect.recordQueryExecution(SummaryLogRepositoryAspect.kt:48) [6 skipped]
Wrapped by: org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is java.sql.SQLTransientConnectionException: Main - Connection is not available, request timed out after 10000ms (total=0, active=0, idle=0, waiting=0)
at jdk.proxy2/jdk.proxy2.$Proxy167.countUsersActivelyProcessing(Unknown Source) [24 skipped]
at j.i.r.GeneratedMethodAccessor52.invoke(Unknown Source)
at j.b.i.r.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at j.base/java.lang.reflect.Method.invoke(Method.java:568)
at c.s.f.p.i.SummaryLogRepositoryAspect.recordQueryExecution(SummaryLogRepositoryAspect.kt:48) [6 skipped]
at j.i.r.GeneratedMethodAccessor38.invoke(Unknown Source)
at j.b.i.r.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at j.base/java.lang.reflect.Method.invoke(Method.java:568)
at c.s.f.p.i.RetryOnDeadlockAspect.recordQueryExecution(RetryOnDeadlockAspect.kt:30) [7 skipped]
at j.i.r.GeneratedMethodAccessor37.invoke(Unknown Source)
The interesting part is (total=0, active=0, idle=0, waiting=0), which should not be possible when minimumIdle is 10. The issue was happening in only 1 out of 10 pods.
It looks like Hikari suddenly stopped applying the correct pool configuration (pool size, minIdle, etc.). To verify this analysis I also checked the RDS monitoring console, and there were about 92 connections open at the time. That suggests 9 pods were behaving correctly and only 1 was affected by Hikari failing to set up its pool properly.
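As an additional check (a minimal sketch; schedulePoolStatsLogging is a hypothetical helper and assumes the application's HikariDataSource is available at startup), the pool's MXBean counters could be logged from each pod and compared with the RDS console numbers:

```kotlin
import com.zaxxer.hikari.HikariDataSource
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// Periodically log the pool counters so a pod reporting
// total=0, active=0, idle=0, waiting=0 stands out immediately.
fun schedulePoolStatsLogging(dataSource: HikariDataSource) {
    val pool = dataSource.hikariPoolMXBean
    Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate({
        println(
            "pool=${dataSource.poolName} total=${pool.totalConnections} " +
                "active=${pool.activeConnections} idle=${pool.idleConnections} " +
                "waiting=${pool.threadsAwaitingConnection}"
        )
    }, 0, 30, TimeUnit.SECONDS)
}
```

That would at least show whether the affected pod's pool ever reached its configured size before the counters dropped to zero.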
I see this issue was opened a long time ago and I read through it, but there does not seem to be a definitive conclusion, and some of the fixes mentioned address a different problem than the one we are facing.