Ponder's docs currently recommend deploying on Railway, with the service and the backing Postgres DB both living there. This works great: redeploys of the indexer turn healthy quickly, since historical data can be pulled from the cache and the indexing functions can be re-run against it, and there's very low latency between the service and the DB.
However, if we use a DB that's "remote" relative to the service (e.g. another Postgres provider such as AWS, Neon, or Fly), the latency to the DB makes the service nearly unusable: redeploys take significantly longer to report healthy. In one sample case for us, even with all historical data cached, re-indexing took over 3 hours.
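For intuition on why the remote-DB case degrades so badly, here's a back-of-envelope sketch. The query count and round-trip times below are illustrative assumptions, not measurements from our deployment; the point is just that sequential round trips scale linearly with RTT:

```python
# Back-of-envelope: why DB round-trip latency dominates re-indexing time.
# All numbers are illustrative assumptions, not measurements from this issue.

def reindex_time_hours(num_queries: int, rtt_ms: float) -> float:
    """Wall-clock time if queries run sequentially, one round trip each."""
    return num_queries * rtt_ms / 1000 / 3600

# Assumed: ~1M sequential DB round trips to replay cached historical data.
local = reindex_time_hours(1_000_000, 0.5)   # same-host Postgres (Railway)
remote = reindex_time_hours(1_000_000, 12.0) # cross-provider Postgres

print(f"local:  {local:.2f} h")   # ~0.14 h
print(f"remote: {remote:.2f} h")  # ~3.33 h
```

Even a modest 12 ms RTT turns minutes of replay into hours, which is consistent with the multi-hour redeploys we saw; batching or pipelining writes would change the picture, but a query-per-row pattern is latency-bound.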
I understand there are some mid- and long-term fixes on the roadmap; I wanted to open this issue to better track the progress on workarounds and solutions 🫡
Quick note: it looks like v0.4.15 introduced a nice QoL improvement where restarting a service doesn't re-run indexing if none of the config has changed. This could be a workaround in some cases.
Unfortunately, redeploys (as opposed to restarts) on Railway still change the config, since the internal schema config value updates, so the indexing functions get re-run anyway.