
[PS-33] updating the list of URI on broacaster-side takes an abnormally long time. #2765

Open
FranckUltima opened this issue Mar 6, 2023 · 10 comments
Labels: linear · need: more info · status: core contributors working on it

Comments

FranckUltima commented Mar 6, 2023

After changing servers and updating the URI via livepeer_cli, more than 3 days later the new server still only receives test streams, while jobs from livepeer.inc still arrive only on the old server.

It seems that updating the list of URIs at the broadcaster level takes an abnormally long time.

PS-33

github-actions bot added the "status: triage" label Mar 6, 2023
FranckUltima changed the title from "updating the list of broacaster-side URIs takes an abnormally long time." to "updating the list of URI on broacaster-side takes an abnormally long time." Mar 7, 2023
@thomshutt
Contributor

@FranckUltima could we please have the old and new URIs to investigate?

thomshutt added the "need: more info" label Mar 7, 2023
@Wendy-Utopia

Old URI: https://81.0.246.88:8935
New URI: https://utopia-node.xyz:8953

Updated March 3.

The old URI is still receiving work, the new one only test streams.

Thanks for your help @thomshutt

@Wendy-Utopia

Update:
After 3 weeks, some livepeer.inc nodes have still not updated their URI list.
MDW is up to date (receiving jobs on the new server).
LON and FRA are still not updated (receiving jobs on the old server).
I'm using only one server, no GeoDNS.

@FranckUltima
Author

The problem seems to have been solved after the redeploy of the broadcaster nodes this afternoon.
But there is still an issue:
it's abnormal that broadcaster nodes need to be restarted to update their orchestrator URI list.

@thomshutt
Contributor

@FranckUltima Agreed, we're going to be prioritising this and other O issues over the next month

leszko added the "status: core contributors working on it" label and removed the "status: triage" label Mar 21, 2023
adamsoffer added and removed the "linear" label Mar 22, 2023
adamsoffer changed the title to "[PS-33] updating the list of URI on broacaster-side takes an abnormally long time." Mar 22, 2023

0xVires commented May 9, 2023

Any update on this issue?

I think I'm currently experiencing this with my testing O: I changed the URI (more specifically the port) 3 days ago and haven't received any work since then besides the test streams.

@thomshutt
Contributor

@0xVires I suspect the Broadcaster cache isn't updating properly in the case of URI changes (which is why we see a Broadcaster restart causing an update). I did a bit of digging on Friday and didn't come up with anything, but will try to figure it out this week
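
For illustration only, here is a minimal Go sketch of a TTL-bounded cache refresh that would limit how long a stale URI can keep being served. Every type and function name below is hypothetical and is not go-livepeer's actual cache code:

```go
package discovery

import (
	"sync"
	"time"
)

// cachedURI pairs an orchestrator's service URI with the time it was fetched.
type cachedURI struct {
	uri       string
	fetchedAt time.Time
}

// uriCache serves a cached URI only while it is younger than ttl; after that
// it re-resolves via lookup (e.g. an on-chain registry read or a webhook call).
type uriCache struct {
	mu      sync.Mutex
	ttl     time.Duration
	entries map[string]cachedURI              // keyed by orchestrator address
	lookup  func(addr string) (string, error) // fresh resolution
}

// Get returns the orchestrator's URI, refreshing the entry when it has expired.
func (c *uriCache) Get(addr string) (string, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if e, ok := c.entries[addr]; ok && time.Since(e.fetchedAt) < c.ttl {
		return e.uri, nil // still fresh: serve from cache
	}
	// Stale or missing: re-resolve and overwrite the cached entry, so a URI
	// change propagates after at most one TTL instead of requiring a restart.
	uri, err := c.lookup(addr)
	if err != nil {
		return "", err
	}
	c.entries[addr] = cachedURI{uri: uri, fetchedAt: time.Now()}
	return uri, nil
}
```

With a pattern like this, a changed URI would be picked up after at most one TTL without restarting the Broadcaster.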

@yondonfu
Member

A few things worth keeping in mind while investigating this issue:

  • If B is using on-chain discovery (the default), then the relevant code is here. The ServiceRegistryWatcher should be responsible for monitoring the ServiceRegistry contract for updates to an O's service URI, which is then cached in B's DB.
  • If B is using webhook discovery (configured using -orchWebhookUrl), then the relevant code is here. The O webhook discovery implementation within Studio can be found here, and it returns cached responses from the Livepeer subgraph.

Worth looking into whether there are any issues with the caching logic used in any of the code paths for on-chain/webhook discovery.
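
To make the on-chain path above a bit more concrete, here is a hedged Go sketch of the event-driven refresh it describes, where a watcher sees a service-URI update and immediately overwrites B's cached value. All identifiers are hypothetical and do not match the actual ServiceRegistryWatcher API:

```go
package discovery

// serviceURIUpdate is the event a registry watcher would emit when an
// orchestrator re-registers its service URI on chain.
type serviceURIUpdate struct {
	OrchAddr string // orchestrator's Ethereum address
	NewURI   string // freshly registered service URI
}

// uriStore stands in for whatever backs B's cached orchestrator list
// (e.g. a local DB table).
type uriStore interface {
	UpdateOrchURI(addr, uri string) error
}

// watchServiceURIs drains update events and writes each new URI straight into
// the store that discovery reads from, so the cache is refreshed on the event
// rather than on a broadcaster restart.
func watchServiceURIs(events <-chan serviceURIUpdate, store uriStore) {
	for ev := range events {
		if err := store.UpdateOrchURI(ev.OrchAddr, ev.NewURI); err != nil {
			// A real watcher would log and retry; skipped here for brevity.
			continue
		}
	}
}
```

If the cached list is only ever populated at startup rather than on events like these, that would explain why a restart is currently needed before a URI change takes effect.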

@ad-astra-video
Collaborator

My test broadcaster just experienced this. Pon changed his port and my broadcaster did not switch to the new port until it was restarted.

@FranckUltima
Copy link
Author

I have a question: if the update on the broadcasters' side of the URI lists needs to wait for a server restart, which could result in a delay of several weeks before an orchestrator that has changed its URI starts receiving new jobs (apart from test streams), is this also the case for stakes? I mean, if an orchestrator with a stake of 2000 LPT receives a new stake of 50000 LPT, should we anticipate a potential delay of several weeks while waiting for the broadcasters to reboot their servers? Or is this independent of the URI update issue, and the new stake is immediately taken into account?
