Monitoring Signals from different threads of varying rates. #88

Open

mitchmindtree opened this issue Nov 15, 2017 · 9 comments
mitchmindtree commented Nov 15, 2017

I often come across the desire to monitor a Signal that is running on the audio thread, from the GUI thread. Normally the kinds of signals I want to monitor are control signals (like Peak or Rms) generated by an adaptor around some signal.

I normally end up designing a custom Signal adaptor to do this depending on the task.

I often want to sample the signal I'm monitoring at a much lower rate than the audio sample rate. E.g. if I want to monitor a 44100 Hz audio signal in a GUI running at 60 Hz, I only need a "snapshot" of the audio signal every 735 audio frames (44100 / 60), as I can't physically perceive the monitored values any faster than this anyway.

This rate issue is a large contributor to the awkwardness involved with these monitoring implementations and I think could possibly be addressed at a lower level. That is, it would be nice to have abstractions for safely and predictably pulling values from a Signal at varying rates.

Signal::bus could be considered a step in this direction, but has caveats. Firstly, Bus is designed for use on a single thread (it uses Rc and RefCell internally for sharing the ring buffer between output nodes). Secondly, when requesting from a Bus's Output nodes at different rates, the internal ring buffer will grow infinitely large with values that have not yet been collected by the slower Output.

One could work around the second issue by reducing the sample rate of the slower Output node (e.g. by calling Signal::from_hz_to_hz) to match the rate at which it is actually called. This would go a long way toward fixing the issue, but the same problem recurs if the request rate is inconsistent or unreliable, or if some drift occurs, which will almost always be the case when frames are requested from different threads. The same problem applies when a user needs to output a single audio signal to more than one audio hardware device.

mitchmindtree commented:

I can think of a couple different behaviours that one might want when sharing the output of a signal across multiple threads:

  • Main + Monitors - having one main output (normally on the audio thread) with n other outputs at varying rates designed for monitoring only (normally on the GUI thread). The main output has priority in terms of request consistency - that is, only the main output may request new frames from the signal, whereas monitors may only yield frames that have already been yielded. Monitors may use a tiny sample delay to…
  • Equal Priority - multiple threads can request frames at different rates (e.g. different audio hardware outputs). The thread that happens to request frames before the others causes the inner signal to yield frames. All other threads attempt to "catch up" using very subtle rate interpolation.
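The "Main + Monitors" behaviour could be sketched with a single-slot, lock-free cell that the main output publishes into and that monitors read at their own rate. This is only an illustrative sketch assuming f32 frames; FrameCell and its methods are hypothetical names, not part of the crate:

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::Arc;
use std::thread;

// A single-slot, lock-free "latest frame" cell: the main output publishes
// every frame it yields, and monitors read whatever was published last.
// Stores an f32 frame as its bit pattern in an AtomicU32.
struct FrameCell(AtomicU32);

impl FrameCell {
    fn new(v: f32) -> Self {
        FrameCell(AtomicU32::new(v.to_bits()))
    }
    // Called by the main output after yielding a frame.
    fn publish(&self, v: f32) {
        self.0.store(v.to_bits(), Ordering::Release);
    }
    // Called by a monitor at any rate; never blocks the audio thread.
    fn peek(&self) -> f32 {
        f32::from_bits(self.0.load(Ordering::Acquire))
    }
}

fn main() {
    let cell = Arc::new(FrameCell::new(0.0));
    let writer = Arc::clone(&cell);
    // "Audio thread": drives the signal and publishes each yielded frame.
    let t = thread::spawn(move || {
        for i in 0..1000 {
            writer.publish(i as f32);
        }
    });
    t.join().unwrap();
    // "GUI thread": only ever sees frames the main output already yielded.
    assert_eq!(cell.peek(), 999.0);
}
```

A real implementation would publish whole frames or short windows (for Peak/Rms) rather than single samples, but the priority rule is the same: only the main output ever advances the signal.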


mitchmindtree commented Nov 16, 2017

Requirements

  1. Thread-safe communication - some kind of SPSC channel-style ringbuffer. Ideally it would be possible to query the number of frames stored within the buffer at any point in time, as the number of buffered frames could make for a good indicator of whether or not sample rate "catch-up" is required due to drift. If the channel doesn't offer this, we might be able to track it with an AtomicUsize, though we'd have to get the Ordering right to avoid corrupting the count (e.g. A stores a frame, A reads the counter, B reads a frame, B reads the counter, B stores counter-1, A stores counter+1; A's store overwrites B's decrement and the counter ends up incorrect).
  2. Sample rate conversion from thread A (origin) -> thread B (desired target).
  3. A "catch-up" sample rate interpolator that dynamically adjusts the target rate to the rate at which frames are actually being requested. It's possible that this is the only rate interpolator that should be required, and the "desired target" (from point 2.) is just the initial target rate or used as a hint.
  4. A "correct" time source. In most cases it should be fine to use the highest sample rate audio output for this and compare other rates to it.
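On point 1, the bad interleaving described above comes from using separate load and store operations; a single atomic read-modify-write (fetch_add / fetch_sub) cannot lose updates. A minimal sketch, not tied to any particular channel crate:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    // Number of frames currently buffered in the channel. fetch_add and
    // fetch_sub are single atomic read-modify-write operations, so the
    // "A reads, B reads, both store" interleaving from the text cannot
    // lose an update, no matter how the threads race.
    let count = Arc::new(AtomicUsize::new(0));

    let producer = {
        let count = Arc::clone(&count);
        thread::spawn(move || {
            for _ in 0..10_000 {
                // ... push a frame into the ring buffer, then:
                count.fetch_add(1, Ordering::Release);
            }
        })
    };
    let consumer = {
        let count = Arc::clone(&count);
        thread::spawn(move || {
            let mut popped = 0;
            while popped < 5_000 {
                // ... pop a frame if one is available, then:
                if count.load(Ordering::Acquire) > 0 {
                    count.fetch_sub(1, Ordering::AcqRel);
                    popped += 1;
                }
            }
        })
    };
    producer.join().unwrap();
    consumer.join().unwrap();
    // 10_000 increments and 5_000 decrements, none lost.
    assert_eq!(count.load(Ordering::SeqCst), 5_000);
}
```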

I imagine the signal chain might look something like this across the threads:

THREAD_A => SIGNAL_A ---> MONITOR_TX ---> SIGNAL_A
                                     \
                                      \
THREAD_B =>                            -> RX -> RATE_ADJUSTER -> SIGNAL_B

The API might look similar to the way that Bus does atm but with considerations for threading and rate variance. An example (placeholder method names, etc):

let signal = signal.monitor(source_rate);
let output = signal.send(rate_interpolator, initial_target_rate, channel_buffer_size);

// Requesting frames on the audio thread.
signal.next();

// On the other thread.
output.next();

where:

  • rate_interpolator is the Interpolator type used to adjust the rate of SIGNAL_B.
  • initial_target_rate is the initial target rate of the inner interpolate::Converter.
  • channel_buffer_size is the max size of the channel's inner buffer before frames will be dropped.

The target sample rate could be dynamically calculated by counting the number of frames that are requested from SIGNAL_A for each frame requested by SIGNAL_B and then dividing the known sample rate of SIGNAL_A by the result. This frame "count" should probably be averaged over some window size before dividing the original sample rate in order to avoid fluctuations in the case that either SIGNAL_A or B are buffered and occasionally request many frames at once.
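That estimation scheme might look something like the following, with a hypothetical RateEstimator keeping a small window of "A frames per B frame" counts (the name and API are illustrative, not the crate's):

```rust
// Estimate SIGNAL_B's target rate from how many SIGNAL_A frames were
// consumed per SIGNAL_B request, averaged over a small window to smooth
// out bursts from buffered requests.
struct RateEstimator {
    source_rate: f64,   // known sample rate of SIGNAL_A
    window: Vec<f64>,   // recent "A frames per B frame" counts
    capacity: usize,    // averaging window size
}

impl RateEstimator {
    fn new(source_rate: f64, capacity: usize) -> Self {
        RateEstimator { source_rate, window: Vec::new(), capacity }
    }

    // Record how many frames SIGNAL_A yielded for one SIGNAL_B request
    // and return the current target-rate estimate.
    fn record(&mut self, a_frames_per_b_frame: f64) -> f64 {
        if self.window.len() == self.capacity {
            self.window.remove(0); // drop the oldest count
        }
        self.window.push(a_frames_per_b_frame);
        let avg = self.window.iter().sum::<f64>() / self.window.len() as f64;
        self.source_rate / avg
    }
}

fn main() {
    // 44100 Hz source, GUI requesting one frame per 735 source frames
    // => estimated target rate of 60 Hz.
    let mut est = RateEstimator::new(44_100.0, 8);
    let mut rate = 0.0;
    for _ in 0..8 {
        rate = est.record(735.0);
    }
    assert!((rate - 60.0).abs() < 1e-9);
}
```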

Questions

  • How to smooth the adjustment of the target rate? Perhaps we could provide a param for a linear adjustment rate? Should the adjustment rate accelerate/decelerate to avoid having the target sample rate jump around? Should this acceleration/deceleration be generic over its interpolation?
  • What channel type to use?
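On the first question, one possible (purely illustrative) answer is a linear slew limiter: move the effective target rate toward each new estimate by at most a fixed step per update, so the rate can't jump around:

```rust
// Slew-limit the target rate: approach `target` from `current` by at
// most `max_step` Hz per update. A sketch only; an accelerating or
// decelerating curve could replace the fixed step.
fn slew(current: f64, target: f64, max_step: f64) -> f64 {
    let delta = target - current;
    if delta.abs() <= max_step {
        target // close enough to land exactly on the target
    } else {
        current + max_step * delta.signum()
    }
}

fn main() {
    let mut rate = 44_100.0;
    // A new estimate of 44_150 Hz arrives; approach it at 10 Hz per update.
    for _ in 0..5 {
        rate = slew(rate, 44_150.0, 10.0);
    }
    assert_eq!(rate, 44_150.0);
}
```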

mitchmindtree commented:

This paper seems to solve this problem but w.r.t. mapping time from "sample time" to "system time".

quatrezoneilles commented:

Well, I can't answer all your questions right now, but I can at least say that you can implement a very simple bus-like method that takes a Signal S and produces a pair (Sampled(S), Aux) of signals, where Aux contains an Rc<RefCell<S::Frame>>, and Sampled's next() method writes S.next() into that RefCell and then yields it. (Maybe you don't even need to wrap it in an Rc; I'm not sure.) In effect you have created a buffer of length one; you can then read Aux at any rate you want, and it's guaranteed to give the "current", or last, value of S::Frame. This is what I call an Auxiliary signal in my framework.

This forces you to think more about issues of synchrony when combining signals, but I think this is a good discipline.
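A sketch of that (Sampled, Aux) pair, using a plain Iterator in place of the Signal trait and an Rc<RefCell<f32>> as the length-one buffer (single-threaded only, as Rc implies):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Sampled advances the inner iterator and writes each value into a
// shared one-slot buffer; Aux can be read at any rate and always sees
// the last value that Sampled yielded.
struct Sampled<I: Iterator<Item = f32>> {
    inner: I,
    slot: Rc<RefCell<f32>>,
}

struct Aux {
    slot: Rc<RefCell<f32>>,
}

impl<I: Iterator<Item = f32>> Iterator for Sampled<I> {
    type Item = f32;
    fn next(&mut self) -> Option<f32> {
        let v = self.inner.next()?;
        *self.slot.borrow_mut() = v; // the length-one "buffer"
        Some(v)
    }
}

impl Aux {
    // The "current", i.e. most recently yielded, value.
    fn current(&self) -> f32 {
        *self.slot.borrow()
    }
}

// Split a signal into (Sampled, Aux) sharing one slot.
fn sampled<I: Iterator<Item = f32>>(inner: I) -> (Sampled<I>, Aux) {
    let slot = Rc::new(RefCell::new(0.0));
    (Sampled { inner, slot: Rc::clone(&slot) }, Aux { slot })
}

fn main() {
    let (mut s, aux) = sampled(vec![0.25, 0.5, 0.75].into_iter());
    s.next();
    s.next();
    assert_eq!(aux.current(), 0.5);
}
```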

mitchmindtree commented Nov 21, 2017 via email

quatrezoneilles commented:

Well, I have a lot of Signal structs that have &-references inside them, and the only bother is that your refs always have to come "from outside". This forces you to use macros, for example if you want to construct a complex UGen by assembling several smaller ones, or if you want to feed a bundle of several refs to a complex Signal struct, which is something you want to do in one fell swoop rather than all by hand.

andrewcsmith commented:

Hey, chiming in way late -- what I usually do is almost exactly what @quatrezoneilles described, except I just use Signal::map to write to an Rc<Cell> and then I read from that Cell somewhere else. (Actually I did this so much that I don't even use Signal::map anymore, but a custom adaptor.) When I want multiple threads, my audio thread just sends them over a bounded_spsc_queue (ring buffer, basically) and then the gui thread grabs them and does whatever. The gui thread doesn't have to "keep up," but if it really falls behind then frames get dropped. But the important thing here is that the gui thread does all the resampling based on its own local ring buffer, which means that it can allocate or analyze at will, without bothering the audio thread.

Anyway -- I'm a little sketchy on whether you're looking to send all of the frames from one thread to another, or if you'd rather just poll the latest frame whenever. My preference would be to have something like bounded_spsc_queue within sample (so that we can ensure that it's stable), and just handle all the resampling and analysis on the receiver's end. But I could also not totally be understanding the efficiencies you might be envisioning here.
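The drop-on-full behaviour described here can be sketched with std's bounded sync_channel standing in for the bounded_spsc_queue crate; try_send never blocks the audio thread, and the GUI side drains whatever arrived:

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

fn main() {
    // Bounded channel of 4 frames between the audio and GUI threads.
    let (tx, rx) = sync_channel::<f32>(4);

    // "Audio thread": send every frame, dropping rather than blocking
    // when the GUI has fallen behind and the buffer is full.
    let audio = thread::spawn(move || {
        let mut dropped = 0usize;
        for i in 0..100 {
            if tx.try_send(i as f32).is_err() {
                dropped += 1; // buffer full: drop the frame
            }
        }
        dropped
    });

    // "GUI thread": grab what arrived and resample/analyse locally,
    // without ever touching the audio thread.
    let dropped = audio.join().unwrap();
    let mut received = Vec::new();
    while let Ok(frame) = rx.try_recv() {
        received.push(frame);
    }
    // Every frame was either delivered or deliberately dropped.
    assert_eq!(received.len() + dropped, 100);
}
```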


mitchmindtree commented Jun 6, 2018

Thanks for the input @andrewcsmith! I've also been doing something similar for GUI monitoring but using a crossbeam queue instead with a bit of wrapper code that recycles the buffers between sender and receiver.

I guess the thing I'd really like to solve is how to read from one signal on two different threads where neither output drops any data. E.g. the two threads involved might be two different audio devices where samples are requested from two separate callbacks. Although we might be able to set the two devices to the same sample rate, I'd prefer not to rely on this, as the physical clocks will likely drift from one another over time. We can't drop frames in this case, as that would cause glitching in the output of the late device.

As an alternative to dropping frames, I'd like to work out some nice way of using adaptive sample rate interpolation to synchronise the rate at which samples are requested from the signal, while still ensuring that the two (or more) outputs receive as many samples as they request. Further, I was thinking that perhaps if we could solve this in a robust, "generalised" manner (e.g. supporting widely varying sample rates) we might also be able to use this solution as an alternative way of monitoring the audio thread from the GUI thread.
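The adaptive-rate idea might bottom out in a fractional-position reader whose step (source rate / consumer rate) can be nudged each block so a lagging consumer catches up without dropping frames. A hypothetical sketch using linear interpolation; the names are illustrative, not the crate's API:

```rust
// Read already-yielded source frames at an adjustable fractional rate.
struct RateReader {
    phase: f64, // fractional position into the source frames
    step: f64,  // source frames advanced per output frame
}

impl RateReader {
    fn new(step: f64) -> Self {
        RateReader { phase: 0.0, step }
    }

    // Produce one output frame by linearly interpolating between the two
    // source frames that straddle the current fractional position.
    // Adjusting `self.step` slightly up or down between blocks is the
    // "catch-up" mechanism: no frames are ever dropped.
    fn next(&mut self, source: &[f32]) -> f32 {
        let i = self.phase as usize;
        let frac = (self.phase - i as f64) as f32;
        let a = source[i];
        let b = source[(i + 1).min(source.len() - 1)];
        self.phase += self.step;
        a + (b - a) * frac
    }
}

fn main() {
    let source: Vec<f32> = (0..8).map(|i| i as f32).collect();
    // Read at half the source rate: outputs land between source frames.
    let mut reader = RateReader::new(0.5);
    let out: Vec<f32> = (0..4).map(|_| reader.next(&source)).collect();
    assert_eq!(out, vec![0.0, 0.5, 1.0, 1.5]);
}
```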

andrewcsmith commented:

Ah, okay, so just so I understand, the sample rate difference might not even matter. We could be talking about the same sample rate with a different block size, correct?

I wonder if you could make a ring buffer-like queue (spsc) but with more than one read_index. The downside that I can see is that the write_index would need to loop through all the read_indexes in order to see whether it's safe to write. Perhaps, though, this can be done in the reader thread rather than the writer thread, since that's the one that is most sensitive. Also, of course, the ring buffer would have to be large enough to accommodate the slowest of all the readers.
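A single-threaded sketch of that multi-read-index ring buffer, where the writer checks the slowest reader before overwriting a slot (a real SPSC/SPMC version would use atomic indices):

```rust
// Ring buffer with several independent read indices. The writer may
// only overwrite a slot once every reader has moved past it, so it
// checks the minimum (slowest) read index before writing.
struct MultiTapRing {
    buf: Vec<f32>,
    write: usize,      // total frames written so far
    reads: Vec<usize>, // total frames read so far, per reader
}

impl MultiTapRing {
    fn new(capacity: usize, readers: usize) -> Self {
        MultiTapRing {
            buf: vec![0.0; capacity],
            write: 0,
            reads: vec![0; readers],
        }
    }

    // Returns false if writing would clobber the slowest reader.
    fn push(&mut self, v: f32) -> bool {
        let slowest = *self.reads.iter().min().unwrap();
        if self.write - slowest == self.buf.len() {
            return false; // full for at least one reader
        }
        let cap = self.buf.len();
        self.buf[self.write % cap] = v;
        self.write += 1;
        true
    }

    fn pop(&mut self, reader: usize) -> Option<f32> {
        if self.reads[reader] == self.write {
            return None; // this reader has caught up with the writer
        }
        let cap = self.buf.len();
        let v = self.buf[self.reads[reader] % cap];
        self.reads[reader] += 1;
        Some(v)
    }
}

fn main() {
    let mut ring = MultiTapRing::new(4, 2);
    for i in 0..4 {
        assert!(ring.push(i as f32));
    }
    assert!(!ring.push(4.0)); // neither reader has consumed anything yet
    assert_eq!(ring.pop(0), Some(0.0));
    assert!(!ring.push(4.0)); // still blocked by the slower reader 1
    assert_eq!(ring.pop(1), Some(0.0));
    assert!(ring.push(4.0)); // the slowest reader has advanced
}
```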

I'll phone a friend and get back to you...
