Move away from the internal Pipe-based encoding, which involves heavy thread use and blocking, to a more delegate-based design built on DispatchSource and DispatchIO.
Add two new protocols, DiscordOpusDataSource and DiscordPCMDataSource, each requiring a nextAudio method (or something similar; that name sounds kind of bad) that throws or returns an Opus- or PCM-encoded Data, plus an optional finishEncoding method. Their requirements will be identical; they are separate only to make it obvious what kind of data any given conforming class can give out.
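As a rough sketch of what the proposal describes (names come from this issue, not from any shipped API; the default finishEncoding implementation is an assumption about how the "optional" requirement might be expressed), the two protocols could look like:

```swift
import Foundation

/// A source of Opus-encoded audio frames.
public protocol DiscordOpusDataSource : AnyObject {
    /// Returns the next frame of Opus-encoded audio, throwing if no
    /// more audio is available.
    func nextAudio() throws -> Data

    /// Optionally flushes any buffered input before the source ends.
    func finishEncoding()
}

/// A source of raw PCM audio frames. Identical requirements to
/// DiscordOpusDataSource; a separate type only so conformers document
/// what kind of data they emit.
public protocol DiscordPCMDataSource : AnyObject {
    func nextAudio() throws -> Data

    func finishEncoding()
}

// Make finishEncoding effectively optional by giving it a no-op default.
public extension DiscordOpusDataSource {
    func finishEncoding() { }
}

public extension DiscordPCMDataSource {
    func finishEncoding() { }
}
```

Because the two requirement sets are identical, a class such as the proposed DiscordPipedAudio can satisfy both protocols with a single nextAudio implementation.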
Make DiscordOpusEncoder conform to DiscordOpusDataSource, and give it an optional DiscordPCMDataSource input.
Add a (non-iOS) DiscordPipedAudio class that conforms to both data-source protocols and manages audio from another process, with a convenience init that also spins up an ffmpeg instance.
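A minimal sketch of DiscordPipedAudio's reading side, under the assumptions in this proposal: instead of blocking read() calls on a Pipe from a dedicated thread, DispatchIO is attached to the child process's stdout and chunks are buffered as they arrive. The class name, queue label, and ffmpeg path are all placeholders.

```swift
import Foundation

// Hypothetical reader for audio piped out of a child process (e.g. ffmpeg).
final class PipedAudioReader {
    private let process = Process()
    private let pipe = Pipe()
    private var io: DispatchIO!
    private var buffer = Data()
    private let queue = DispatchQueue(label: "pipedAudio.read")

    init(ffmpegArguments: [String]) {
        process.launchPath = "/usr/local/bin/ffmpeg"  // assumed install location
        process.arguments = ffmpegArguments
        process.standardOutput = pipe

        // DispatchIO delivers data via callbacks on `queue`, so no
        // thread sits blocked in read().
        io = DispatchIO(type: .stream,
                        fileDescriptor: pipe.fileHandleForReading.fileDescriptor,
                        queue: queue,
                        cleanupHandler: { _ in })
    }

    func start() {
        process.launch()

        // Keep reading until the child closes its end of the pipe.
        io.read(offset: 0, length: Int.max, queue: queue) { done, data, error in
            if let data = data { self.buffer.append(contentsOf: data) }
            if done { self.io.close() }
        }
    }
}
```

From `buffer`, nextAudio would then slice off one frame's worth of data per call, throwing once the process has exited and the buffer is drained.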
Make DiscordVoiceEncoder conform to DiscordOpusDataSource, while keeping its user-facing API as close as possible to the old DiscordVoiceEncoder (things like read will have to go, but hopefully no one was calling read from the client side).
Replace the encoder variable in DiscordVoiceEngine with a DiscordOpusDataSource, whose nextAudio method gets called every 20 milliseconds by a DispatchSourceTimer.
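The 20ms send loop above could be sketched like this. VoiceSource stands in for DiscordOpusDataSource, and the sendVoiceData closure is a placeholder for whatever the engine uses to push a frame onto the UDP socket; both are assumptions, not existing API.

```swift
import Foundation

// Stand-in for DiscordOpusDataSource.
protocol VoiceSource { func nextAudio() throws -> Data }

/// Starts a repeating timer that pulls one Opus frame from `dataSource`
/// every 20ms and hands it to `sendVoiceData`.
func startSpeaking(from dataSource: VoiceSource,
                   sendVoiceData: @escaping (Data) -> Void) -> DispatchSourceTimer {
    let timer = DispatchSource.makeTimerSource(queue: DispatchQueue(label: "voiceEngine.send"))

    // Discord expects a frame every 20ms while speaking.
    timer.schedule(deadline: .now(), repeating: .milliseconds(20), leeway: .milliseconds(1))

    timer.setEventHandler {
        do {
            sendVoiceData(try dataSource.nextAudio())
        } catch {
            timer.cancel()  // source ran dry or failed; stop the loop
        }
    }

    timer.resume()
    return timer
}
```

Compared with a sleep-based thread loop, the timer's leeway parameter lets the system coalesce wakeups, and cancelling the source cleanly stops the cadence when the data source throws.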
Replace callbacks like voiceEngineNeedsEncoder with voiceEngineNeedsDataSource (the old one might as well stay around to reduce the changes required of users).
@TellowKrinkle I like this idea. It's a little more along the lines of what I had originally, before I switched to the current approach while trying to get voice working on Linux. (The problems actually might have been unrelated to Dispatch being crappy on Linux, and just misuse of Vapor's WebSockets.)
My concerns:
Do you really need two source types if you're going to change DiscordVoiceEncoder to conform to DiscordOpusDataSource? With this approach (assuming the end API for the current middleware stays mostly the same, and it still outputs raw PCM for encoding --my preference--), the other way this was being used was feeding straight PCM into the engine, bypassing the middleware. Would you be providing some default wrapper around PCM data, basically a middleware for PCM Data? (I missed "... give it an optional DiscordPCMDataSource input.")
How do you plan on handling the UDP socket and all the blocking it does now? I would rather keep Vapor around (or find something just as easy that works everywhere) than try to manage a raw socket ourselves. I think something like DispatchSource and DispatchIO might be helpful there.
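For the receive side specifically, a DispatchSource read source can replace a dedicated blocking recv thread. This is a sketch under the assumption that `fd` is an already-connected, non-blocking UDP socket descriptor obtained elsewhere (e.g. from whatever networking layer ends up being used); the function name and buffer size are placeholders.

```swift
import Foundation
#if canImport(Glibc)
import Glibc  // recv() on Linux
#endif

/// Fires `onPacket` on `queue` whenever a datagram is readable,
/// instead of parking a thread in a blocking recv() loop.
func watchVoiceSocket(_ fd: Int32, queue: DispatchQueue,
                      onPacket: @escaping (Data) -> Void) -> DispatchSourceRead {
    let source = DispatchSource.makeReadSource(fileDescriptor: fd, queue: queue)

    source.setEventHandler {
        // 4096 bytes comfortably fits an encrypted Discord voice packet.
        var packet = [UInt8](repeating: 0, count: 4096)
        let bytesRead = recv(fd, &packet, packet.count, 0)

        guard bytesRead > 0 else { return }

        onPacket(Data(packet[0..<bytesRead]))
    }

    source.resume()
    return source
}
```

The returned source must be kept alive by the caller, and cancelled (which also allows the fd to be closed safely in a cancel handler) when the engine disconnects.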
I wouldn't worry too much about the Decoder, it's more of a toy. (Which is why I added a config to bypass it.)