
Dmsg servers are unaccounted intermediaries in dmsg transports - multihop route traffic may transit the same server multiple times #1826

Open
0pcom opened this issue May 26, 2024 · 1 comment

0pcom commented May 26, 2024

From tests conducted by @ersonp, it's evident that the connection between two dmsg clients is fully mediated by the dmsg server and is not maintained in the absence of a connection to a dmsg server.

Hence the dmsg server is an intermediary in this connection which is not currently accounted for. By specification, a route should not transit the same public key twice; however, the public key of the dmsg server is not recorded in the transport entries. Dmsg servers do not directly or wittingly take part in transports, seemingly by design.

It is therefore crucial that any attempt to create a route which transits the same dmsg server multiple times be prevented by some means. Other important implications also arise.
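As a rough illustration of the kind of check that is currently missing, the sketch below rejects any proposed route whose hops would transit the same public key twice, counting the mediating dmsg server of each hop as well as the edges. The `Hop` type and `ServerPK` field are hypothetical placeholders, not the actual skywire router API.

```go
package main

import "fmt"

// Hop describes one transport in a proposed route. ServerPK is the public
// key of the dmsg server mediating the transport; it is empty for direct
// (stcpr/sudph) transports.
type Hop struct {
	FromPK   string
	ToPK     string
	ServerPK string
}

// validateRoute returns an error if any public key, including intermediary
// dmsg server keys, would be transited more than once.
func validateRoute(hops []Hop) error {
	seen := map[string]bool{}
	mark := func(pk string) error {
		if pk == "" {
			return nil
		}
		if seen[pk] {
			return fmt.Errorf("route transits public key %s more than once", pk)
		}
		seen[pk] = true
		return nil
	}
	for i, h := range hops {
		if i == 0 {
			if err := mark(h.FromPK); err != nil {
				return err
			}
		}
		if err := mark(h.ServerPK); err != nil {
			return err
		}
		if err := mark(h.ToPK); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	route := []Hop{
		{FromPK: "A", ToPK: "B", ServerPK: "S1"},
		{FromPK: "B", ToPK: "C", ServerPK: "S1"}, // same dmsg server twice -> rejected
	}
	fmt.Println(validateRoute(route))
}
```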

Network Architecture - Public Visors & Dmsg Servers

Currently, dmsg servers handle the bandwidth for dmsg transports. The issue arises from the lack of recognition of the dmsg server in the transport entry. Additionally, dmsg servers are not run from the visor and so do not connect to the same services, such as the transport discovery.

It should be possible for a visor to run a dmsg server with the same key. If this were implemented, it would make sense to upgrade the dmsg servers on the production deployment to run a public visor alongside the dmsg server. That would effectively mitigate the issue by replacing dmsg transports with stcpr transports established by the public auto-connect. These transports would correctly account for the dmsg server as the other edge of the transport and as an intermediary in multihop routes.

With the current setup, it is inevitable that multihop dmsg route traffic may overload dmsg servers even at very little bandwidth, because packets may transit the same server multiple times.

Dmsg Parallel Connection Optimization

Currently, there are no fine-grained controls over which dmsg servers a client connects to, other than setting the minimum number of dmsg sessions. Configuring a dmsg client to use a specific dmsg server is not implemented for regular dmsg clients as it is for the visor via dmsghttp-config.json. Even which server is currently in use is not easily determined, except by attempting to connect to that client.

Since dmsg clients are connected to more than one dmsg server by default, it should either be possible to take advantage of both possible paths at once, or a better fallback behavior should be established so that, on failure to connect, an attempt is made to reach the other client via another dmsg server.
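A minimal sketch of that fallback behavior, assuming a hypothetical `dialVia` hook that forces a dial through a particular dmsg server (the real dmsg client does not currently expose such a hook):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"net"
)

// dialVia stands in for "open a stream to remotePK through the given dmsg
// server"; it is an assumed hook, not part of the current dmsg client API.
type dialVia func(ctx context.Context, remotePK, serverPK string) (net.Conn, error)

// dialWithFallback tries each candidate server in order and returns the
// first successful connection instead of giving up on the default server.
func dialWithFallback(ctx context.Context, dial dialVia, remotePK string, servers []string) (net.Conn, error) {
	var lastErr error
	for _, srv := range servers {
		conn, err := dial(ctx, remotePK, srv)
		if err == nil {
			return conn, nil
		}
		lastErr = err
	}
	if lastErr == nil {
		lastErr = errors.New("no dmsg servers to try")
	}
	return nil, fmt.Errorf("all dmsg servers failed: %w", lastErr)
}

func main() {
	// Stub dial: the first server has no free sessions, the second works.
	stub := func(ctx context.Context, remotePK, serverPK string) (net.Conn, error) {
		if serverPK == "S1" {
			return nil, errors.New("no available sessions on S1")
		}
		c, _ := net.Pipe() // pretend this is a dmsg stream via serverPK
		return c, nil
	}
	conn, err := dialWithFallback(context.Background(), stub, "remote-pk", []string{"S1", "S2"})
	fmt.Println(conn != nil, err)
}
```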

Dmsg Server-to-Server Connections to Overcome Scaling Limitations

In a technical sense, it is clear that not every dmsg client or visor can currently connect to every other over dmsg. This is because two clients can only reach each other via a dmsg server they are both connected to. The scaling limit was raised somewhat by having the visor connect to 2 dmsg servers by default; however, if both dmsg servers your client is connected to have no available sessions, any other client that cannot connect to at least one of those same servers will be unable to reach yours.

In light of this, it would make sense to have dmsg servers connect to each other and, via that connection, be able to connect clients which are attached to different dmsg servers, thereby overcoming the aforementioned scaling limitation.
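A rough sketch of that idea under the stated assumptions: when the destination client is not connected to this server, look it up in a peer-server table and forward the frame over an inter-server link. All types here are illustrative; the real dmsg server has no such mechanism today.

```go
package main

import (
	"errors"
	"fmt"
)

// frame is a stand-in for one unit of dmsg traffic destined for a client.
type frame struct {
	DstPK   string
	Payload []byte
}

// server is an illustrative dmsg server with an inter-server link table.
type server struct {
	localClients map[string]chan frame // clients with sessions on this server
	peerServers  map[string]*server    // other dmsg servers this server links to
	clientIndex  map[string]string     // remote client PK -> peer server PK
}

// route delivers a frame to a locally connected client or, failing that,
// hands it to the peer server that the destination client is attached to.
func (s *server) route(f frame) error {
	if ch, ok := s.localClients[f.DstPK]; ok {
		ch <- f
		return nil
	}
	if peerPK, ok := s.clientIndex[f.DstPK]; ok {
		if peer, ok := s.peerServers[peerPK]; ok {
			return peer.route(f)
		}
	}
	return errors.New("destination client not reachable from this server")
}

func main() {
	// Client B is connected to server2 only; server1 forwards via its peer link.
	b := make(chan frame, 1)
	server2 := &server{localClients: map[string]chan frame{"B": b}}
	server1 := &server{
		localClients: map[string]chan frame{},
		peerServers:  map[string]*server{"S2": server2},
		clientIndex:  map[string]string{"B": "S2"},
	}
	fmt.Println(server1.route(frame{DstPK: "B", Payload: []byte("hello")})) // <nil>
	fmt.Println(len(b))                                                     // 1
}
```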

Dmsg Servers Should Help a Local Deployment - IP-Based Reserved Sessions

A dmsg server is basically a public server. While it benefits the network, there is currently no mechanism by which running a dmsg server specifically benefits the operator, short of also running one's own dmsg discovery, which handicaps interoperability.

Hence, I suggest adding an IP-based whitelist of clients which may connect to a dmsg server. This would allow running a dmsg server publicly with very few or no public sessions, which would help local clients connect to each other - and possibly facilitate connections to clients which are connected to other dmsg servers externally.
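A minimal sketch, assuming the operator supplies a list of CIDR ranges: connections from whitelisted addresses always get a session, while other clients only get one of the few "public" slots. The `gatekeeper` type and `allow` hook are hypothetical, not the current dmsg server API.

```go
package main

import (
	"fmt"
	"net"
	"net/netip"
)

// gatekeeper decides whether a new session is accepted, reserving sessions
// for whitelisted addresses and rationing the rest.
type gatekeeper struct {
	whitelist   []netip.Prefix // e.g. the operator's LAN or known peers
	publicSlots int            // remaining sessions for non-whitelisted clients
}

// allow reports whether a session from remoteAddr should be accepted.
func (g *gatekeeper) allow(remoteAddr net.Addr) bool {
	host, _, err := net.SplitHostPort(remoteAddr.String())
	if err != nil {
		return false
	}
	ip, err := netip.ParseAddr(host)
	if err != nil {
		return false
	}
	for _, p := range g.whitelist {
		if p.Contains(ip) {
			return true // reserved: whitelisted clients are never refused
		}
	}
	if g.publicSlots > 0 {
		g.publicSlots--
		return true
	}
	return false
}

func main() {
	g := &gatekeeper{
		whitelist:   []netip.Prefix{netip.MustParsePrefix("192.168.1.0/24")},
		publicSlots: 0, // run with no public sessions at all
	}
	local := &net.TCPAddr{IP: net.ParseIP("192.168.1.10"), Port: 5000}
	external := &net.TCPAddr{IP: net.ParseIP("203.0.113.7"), Port: 5000}
	fmt.Println(g.allow(local), g.allow(external)) // true false
}
```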

@0pcom added the bug, dmsg, public visor, transports and deployment labels May 26, 2024
@0pcom pinned this issue May 26, 2024

0pcom commented Feb 12, 2025

It should be possible for a visor to run a dmsg server with the same key.

Even without that integration, this suggestion is still valid.

  • We should run public visors alongside dmsg servers in the production deployment

This will effectively upgrade transports to these servers to the stcpr type, which correctly represents the edges of the transport, instead of dmsg transports; as well, this should catalyze the automatic creation of sufficient transports to enable multihop routes via the public autoconnect mechanism.

  • We should consider not registering dmsg transports in the transport discovery (TPD) because those transports should not be used in multi-hop routes

From my analysis, it doesn't make sense to have any transports in the transport discovery which are not useful for multihop routes. Since a dmsg transport will always have one (or more) dmsg server(s) as an intermediary, we should not use dmsg transports in a multihop skywire route because the transport itself is multihop - as mediated by the dmsg server. If the edges of a transport are not correctly represented or fully accounted for, routes which include dmsg transports may break routing rules, and will suffer performance issues which may actually impact the stability of the dmsg servers in the deployment.
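As a sketch of what such filtering could look like, assuming each transport entry carries a type string (the `Entry` type and `register` callback are illustrative, not the actual transport discovery client): only direct (p2p) transports are submitted, and dmsg transports stay local.

```go
package main

import "fmt"

// Entry is an illustrative transport entry with its two edges and a type.
type Entry struct {
	Type  string // "dmsg", "stcpr", "sudph", ...
	Edge1 string
	Edge2 string
}

// isP2P reports whether a transport type directly connects its two edges
// without an unaccounted intermediary.
func isP2P(t string) bool {
	switch t {
	case "stcpr", "sudph":
		return true
	default:
		return false
	}
}

// registerDiscoverable submits only p2p transports to transport discovery.
func registerDiscoverable(entries []Entry, register func(Entry) error) error {
	for _, e := range entries {
		if !isP2P(e.Type) {
			continue // dmsg transports stay out of TPD and out of multihop routes
		}
		if err := register(e); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	entries := []Entry{
		{Type: "dmsg", Edge1: "A", Edge2: "B"},
		{Type: "stcpr", Edge1: "A", Edge2: "C"},
	}
	_ = registerDiscoverable(entries, func(e Entry) error {
		fmt.Println("registering", e.Type, e.Edge1, e.Edge2)
		return nil
	})
}
```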

On a similar but more nuanced note: if a sudph transport exists between two visors on the same LAN, that transport should not be used in a route by visors outside of that LAN either. This is more of a consideration for potential bandwidth and latency-based rewards: we would not want to reward based on the bandwidth of a transport between two visors on the same LAN. But for efficiency of routing, a multi-hop route should not have hops which are inside the same local network.
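A sketch of one way to flag such transports, assuming each visor's observed address is known (the address source and the /24 heuristic are assumptions for illustration): transports whose edges fall into the same private subnet could be excluded from externally built routes and from bandwidth-based rewards.

```go
package main

import (
	"fmt"
	"net/netip"
)

// sameLAN reports whether both endpoint addresses are private and fall into
// the same /24, a crude stand-in for "on the same local network".
func sameLAN(a, b netip.Addr) bool {
	if !a.IsPrivate() || !b.IsPrivate() {
		return false
	}
	pa := netip.PrefixFrom(a, 24).Masked()
	pb := netip.PrefixFrom(b, 24).Masked()
	return pa == pb
}

func main() {
	a := netip.MustParseAddr("192.168.1.10")
	b := netip.MustParseAddr("192.168.1.20")
	c := netip.MustParseAddr("203.0.113.7")
	fmt.Println(sameLAN(a, b), sameLAN(a, c)) // true false
}
```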

Another way to think of this: the routing for dmsg is (or should be) implicit, and basically anonymous. The IP address of one client will never be known to another client. The dmsg server handles routing implicitly. All possible connections are known to the intermediary dmsg server. We don't need transport discovery for that.

With skywire p2p transport types (sudph, stcpr), we do need these registered in the transport discovery because there is no other way to tell which transports exist between which visors - precisely because these transports are p2p.
