
Streams library "sieve" block producing primes is NOT a Sieve of Eratosthenes... #3412

GordonBGood opened this issue Oct 27, 2024 · 15 comments

@GordonBGood

In the help notes for the "sieve" block included in the demos for the Streams library, you say: "It's called SIEVE because the algorithm it uses is the Sieve of Eratosthenes: https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes". Yet when one reads the linked Wikipedia article (at least as updated over the last 15 years or so), one finds the following statement: "The widely known 1975 functional sieve code by David Turner[13] is often presented as an example of the sieve of Eratosthenes[7] but is actually a sub-optimal trial division sieve.[2]"

That David Turner sieve is exactly what you have implemented, as expressed in Haskell as follows:

primes = sieve [2..]
sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p > 0]

or, closer to how you have written it in Snap! (using Haskell's "filter" in place of the equivalent Snap! "keep"):

primes = sieve [2..]
sieve (p : xs) = p : sieve (filter (\ c -> c `mod` p > 0) xs)

This has much worse performance than a real functional "incremental" Sieve of Eratosthenes (SoE), to the point of being almost unusable in your Snap! version: it takes over 10 seconds on a fairly high-end desktop computer to find the primes just to a thousand. It is slow because, rather than advancing the culled composites for each base prime by repeated addition of a constant span as the SoE does, it uses trial division (via the "mod" function) to test ALL remaining prime candidates for even divisibility by all found primes; thus it has roughly quadratic computational complexity (about O(n^2)) rather than the O(n log n log log n) computational complexity of the true incremental SoE.

Using the work from the article referenced by that Wikipedia statement, one could write a true incremental SoE using a Priority Queue, as O'Neill preferred, but that would require the extra work of implementing the Priority Queue using either a binary tree structure or Snap!'s lists in imperative mode (not functional linked-list mode). In the Epilogue of the article, reference is made to a "Richard Bird list-based version" of the SoE, which is much easier to implement. However, at that time this sieve hadn't yet been optimized to use "infinite tree folding", which reduces the cost of merging the base primes' composites to a computational complexity of about O(n log n) rather than the O(n sqrt n) without tree folding - a very significant difference as prime ranges get larger. A refined version of this sieve (sieving odds only) in Haskell is as follows:

primes = 2 : oddprimes() where
  -- merge two ascending streams, collapsing duplicates (the EQ case)
  merge xs@(x : xtl) ys@(y : ytl) =
    case compare x y of
      EQ -> x : merge xtl ytl
      GT -> y : merge xs ytl
      LT -> x : merge xtl ys
  -- fold the per-base-prime cull streams into one ascending composites
  -- stream, pairing the streams into a binary tree to limit merge depth
  composites ((hd : tl) : rest) = hd : merge tl (composites $ pairs rest) where
    pairs (f : s : rest) = merge f s : pairs rest
  -- emit the odd candidates that don't appear in the composites stream
  testprmfrm n cs@(c : rest) =
    if n >= c then testprmfrm (n + 2) rest else n : testprmfrm (n + 2) cs
  -- odd primes: 3, then all odds from 5 that survive the composites built
  -- from a separate (non-shared) recursive copy of the odd primes
  oddprimes() = (3 :) $ testprmfrm 5 $ composites $ map (\ bp -> [bp * bp, bp * bp + bp + bp ..]) $ oddprimes()

In contrast to the David Turner sieve, which progressively sieves composite numbers out of the input list by filtering the entire remaining list, this tree folding algorithm builds a merged (and therefore ascending-ordered) lazy sequence of all of the composite values based on the recursively determined "oddprimes" function (non-sharing); the final output sequence of prime numbers is then all of the (in this case odd) numbers that aren't in the composites sequence. It is relatively easy to keep the merged sequence of composites in ascending order, both because the sub-list of culls for each base prime is itself in order and because each new sub-sequence starts at a higher value than the last (at the square of its base prime). For this fully functional "incremental" algorithm there is a cost to merging all of the sub-sequences, but the binary tree depth is limited: for instance, when finding the primes up to a million, the base primes sequence only goes up to the square root, or a thousand, and there are only 167 odd primes up to a thousand, meaning that the binary tree only has a depth of about eight (two to the power of eight is 256) in the worst case (some merges won't need to traverse to the bottom of the tree).
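For concreteness, here is how the definition above would be used (a minimal usage sketch; the name "primesToAMillion" is mine, not part of the original):

primesToAMillion = takeWhile (<= 1000000) primes
-- the base primes consumed internally only reach 1000 (the square root),
-- so the merge tree stays about eight levels deep as described above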

Now, since none of these sequence states refer back to previous states, there is no need to memoize the states as a "lazy list" (or your "streams" library) does, and a simpler Co-Inductive Stream (CIS) can be used, as follows in Haskell:

data CIS a = CIS !a !(() -> CIS a)
primes = CIS 2 $! oddprimes where
  merge xs@(CIS x xtl) ys@(CIS y ytl) =
    case compare x y of
      EQ -> CIS x $! \ () -> merge (xtl()) (ytl())
      GT -> CIS y $! \ () -> merge xs (ytl())
      LT -> CIS x $! \ () -> merge (xtl()) ys
  composites (CIS (CIS hd tl) rest) = CIS hd $! \ () -> merge (tl()) (composites $! pairs (rest())) where
    pairs (CIS f ftl) = let (CIS s rest) = ftl() in CIS (merge f s) $! \ () -> pairs (rest())
  allmults (CIS bp bptl) = CIS (cullfrmstp (bp * bp) (bp + bp)) $! \ () -> allmults (bptl()) where
    cullfrmstp c adv = CIS c $! \ () -> cullfrmstp (c + adv) adv
  testprmfrm n cs@(CIS c rest) =
    if n >= c then testprmfrm (n + 2) (rest())
    else CIS n $! \ () -> testprmfrm (n + 2) cs
  oddprimes() = CIS 3 $! \ () -> testprmfrm 5 $! composites $! allmults $! oddprimes()

The above Haskell code is written to mirror how this "CIS" implementation would look in non-lazy languages: it doesn't take advantage of Haskell's built-in laziness and bypasses the compiler's "strictness analyzer", so it may be less efficient than the lazy list version above, in spite of not doing the extra work of memoizing each sequence value...

The above has been translated to Snap! as per this attached file and can be tested with "listncis 168 primes" to show the list of the primes up to a thousand in about a second, and works in a little over ten seconds showing primes up to about ten thousand; it thus has more of a linear execution time with increasing ranges...
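For reference, here is a Haskell rendering of what a "listncis" block does (a sketch of the helper on top of the CIS type above; only the Snap! version is in the attached file):

-- realize the first n elements of a CIS as an ordinary list
listNCIS :: Int -> CIS a -> [a]
listNCIS n (CIS hd tl)
  | n <= 0    = []
  | otherwise = hd : listNCIS (n - 1) (tl ())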

I recommend you fix your Streams demo to use an implementation of this algorithm, although, as noted above, the algorithm doesn't really require the overhead of the Streams memoization...

@brianharvey
Collaborator

I'm pretty sure Eratosthenes didn't have a computer, and the odds are he did have slaves to do the actual crossing out of numbers, so he was less interested in algorithmic efficiency than you are.
And if you think about what a real-life sieve is, it works by filtering. I'll reread the Wikipedia article, though.

But in any case, if I wanted a finite number of primes, I'd do it iteratively like everyone else, not with streams. The whole point of the demo is that you can get all the primes if you have lazy lists. It would defeat the (pedagogic) object of the exercise to use a more complicated algorithm that hides the beauty of this one.

I do wish the streams library implemented lazy lists more efficiently. If you have any words of wisdom about that, it'd be great!

@GordonBGood
Author

GordonBGood commented Oct 28, 2024

I'm pretty sure Eratosthenes didn't have a computer, and the odds are he did have slaves to do the actual crossing out of numbers, so he was less interested in algorithmic efficiency than you are. And if you think about what a real-life sieve is, it works by filtering. I'll reread the Wikipedia article, though.

Ah, but Eratosthenes was interested in algorithmic efficiency, which is why he came up with his sieve algorithm that didn't require dividing each and every current prime candidate by every one of the found base primes (which is what the David Turner version and your Snap! version do); he just instructed his slaves to cull by advancing by a constant span, starting from the square of each base prime, for each base prime up to the square root of the current range. His sieve didn't work by filtering; true, the usual implementation does cull all the composites up to a range and then processes the remaining primes, but there is no reason that it can't be done incrementally, which is what functional incremental versions do. Picture Eratosthenes with one slave per base prime (adding slaves as the sieved range increases): upon finding a new base prime, he would assign that slave the square of the base prime as a starting place and the base prime itself as an increment, and tell each slave to advance by its increment to just past the current candidate; the candidate is prime if it is less than the current cull values held by all slaves, and a new slave is added whenever the last-added slave first needs to advance. In that way, each slave only has to remember two numbers: their current culling place and how much to advance when their current place is passed. On top of that, the infinite tree folding algorithm is a way of determining which composite number comes next by organizing the slaves in pairs, with each result forwarded upward to pairs of pairs, to pairs of pairs of pairs, etc.
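A minimal Haskell sketch of one such "slave" (names are mine; the point is that only a position and a step are needed, with no division anywhere):

-- advance one slave's cull position past a candidate by repeated addition
advancePast :: Int -> (Int, Int) -> (Int, Int)
advancePast candidate (pos, step)
  | pos > candidate = (pos, step)
  | otherwise       = advancePast candidate (pos + step, step)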

This algorithm does kind of work by filtering, in that the "testPrimeFrom" block compares all prime candidates in turn (only odd ones in this implementation) and keeps only the ones that aren't in the stream of composite culls. This is exactly how an efficient incremental version of the SoE works...

But in any case, if I wanted a finite number of primes, I'd do it iteratively like everyone else, not with streams. The whole point of the demo is that you can get all the primes if you have lazy lists. It would defeat the (pedagogic) object of the exercise to use a more complicated algorithm that hides the beauty of this one.

Your David Turner algorithm is beautiful only if you consider short as beautiful, with no consideration of computational complexity. In those terms it is extremely ugly, with its roughly quadratic computational complexity; and the incremental SoE doesn't need lazy lists, as non-memoizing Co-Inductive Streams are all that is necessary, and they are much easier (and faster by a constant factor) to implement. Even when compiled on very fast computers, the David Turner sieve is never practical for finding primes much past, say, a hundred thousand, which is a trivial range; and functional algorithms such as this can't be multi-threaded, as each new prime depends on the state of previous computations...

Having all of the primes of an infinite stream is not really a consideration in that, even were it possible with infinite memory, one isn't going to use them all for anything; one usually just wants to count them, find and count patterns such as pairs or "k-tuples", work with slices of them, etc. Most prime processing consumes a stream of primes while letting the beginning of the stream evaporate once it is no longer needed for, say, the pattern being looked for...

Also, I think you missed the point of my alternate implementation: it also produces an infinite stream of primes as it advances; it just doesn't memoize any primes other than the base primes captured by the closures used in merging the individual base primes' culling streams. I converted it to a finite-sized list only in order to display the results easily.

I do wish the streams library implemented lazy lists more efficiently. If you have any words of wisdom about that, it'd be great!

There is a way of implementing laziness using only closures, which I didn't present here since memoization isn't necessary for the tree-folding primes algorithm (and not using memoization makes it faster by a constant factor). However, one can implement a "lazy" type just by using functional closures to capture the internal state: whether the computation has been done yet, the value once computed, and the "thunk" to be executed once in order to obtain that value. The following are images of block definitions to do that, which may be faster than the list manipulations you are doing now:
[images: MakeLazy and GetLazyValue block definitions]
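In Haskell terms, what those two blocks do amounts to the following sketch (my own rendering, using an IORef for the captured state; in Snap! the state lives in variables captured by the closures):

import Data.IORef

-- makeLazy wraps a thunk so it runs at most once; the returned action
-- plays the role of GetLazyValue applied to the lazy cell
makeLazy :: IO a -> IO (IO a)
makeLazy thunk = do
  cache <- newIORef Nothing          -- Nothing = not yet computed
  return $ do
    cached <- readIORef cache
    case cached of
      Just v  -> return v            -- already computed: return the memo
      Nothing -> do
        v <- thunk                   -- run the thunk exactly once
        writeIORef cache (Just v)
        return v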

Once one has defined the lazy type handling, one can combine it with my current CIS non-memoizing stream in order to get a memoizing lazy list type, as in the following images:
[images: MakeLazyList, HeadLazyList, and TailLazyList block definitions]

It might be slightly faster to inline the make/get lazy blocks into the block that makes the lazy list and the block that obtains the tail; they are separated here for clarity...
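Continuing the Haskell sketch above, the combination might look like this (again my rendering, reusing the makeLazy helper; the names are illustrative, not the library's):

-- a memoizing lazy list: a head plus a memoized action producing the tail
data LazyList a = LazyList a (IO (LazyList a))

consLazy :: a -> IO (LazyList a) -> IO (LazyList a)
consLazy hd tailThunk = do
  memoTail <- makeLazy tailThunk     -- the tail is computed at most once
  return (LazyList hd memoTail)

headLL :: LazyList a -> a
headLL (LazyList hd _) = hd

tailLL :: LazyList a -> IO (LazyList a)
tailLL (LazyList _ tl) = tl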

I haven't compared the speed of a memoized lazy list stream against my non-memoizing CIS stream for the tree folding incremental primes algorithm; that would be an interesting test of how much memoization costs...

In algorithms that actually need memoization, such as a Hamming/smooth-numbers infinite stream, this may make the performance a little better, although it will never be that fast or powerful, as these implementations in Snap! are limited to scripts interpreted on top of JavaScript...

@GordonBGood
Author

@brianharvey,

I did some timing, and the difference in performance between the current Streams implementation and my new one, where the states are buried inside closures, isn't all that much: it takes about 12 seconds to list a stream of 1000 numbers either way. This may be because JavaScript is particularly slow at forming closures, especially in Google Chrome and Firefox, and even twice as slow in Safari, which will make either technique slow, as both depend on "thunk" closures...

Using my primes algorithm, one will still be able to find the primes up to ten thousand in perhaps about this amount of time, the same as when not memoizing, since forming closures is likely the bottleneck. With the David Turner algorithm, your VM will time out finding primes to this range, as it would take hundreds of seconds to complete...

@GordonBGood
Author

@brianharvey,

I've done a bit more thinking about my last post, and the reason that the "ringified"/delayed "thunks" are so slow must be more than just a JavaScript issue (although it is known that JavaScript doesn't handle creating closure functions very efficiently). One can write JavaScript equivalents of this code, including streams, that run as direct JavaScript in about a second while showing a hundred million numbers; your straight lists take about a second to show ten million numbers, including display, so they are only maybe ten times or less slower than straight JavaScript; but your (or my) streams implementation takes over ten seconds to convert to a list of only a thousand numbers!

But, whoa, in the course of this testing I found a performance bug in your imperative looping with the "repeat" and "for" blocks (all of them), which you use in the imperative implementations of your "list ___ items of stream" and "item ___ of stream" blocks: the looping itself takes most of the time for a thousand iterations (over 10 seconds). Rewriting these as recursive loops gives about a twenty-times speed-up, so a hundred thousand numbers in about thirteen seconds. This still isn't fast, as it is still about five hundred times slower than straight JavaScript, but at least it makes streams somewhat usable...

This would seem to indicate that the problem isn't with the streams' laziness implementation (indeed, non-memoizing laziness is about the same speed as memoizing laziness by either method, with a slight advantage to your current Streams method), so the problem must lie in how Snap! implements "ringify". Without digging too far into your code, for this kind of slowdown it would seem that "ringify" may be deferring the evaluation of the "ringified" block until the code is "called" (or "run") and then re-evaluating it on every call, which would be quite slow...

The following images are of some test blocks that show the relative speed of a simple recursive loop as a base case versus the speed of the same recursive loop with an added call to a "ringified" function/operator:
[images: SimpleRecursiveLoop and SimpleRecursiveLoopRingifiedCall test blocks]

The simple recursive loop runs about a hundred thousand iterations in about a second, whereas including the call to the ringified function takes about five times as long, or roughly 150,000 CPU clock cycles per "ringify" call. This is the best current speed for my non-memoizing CIS implementation: the tree folding primes algorithm requires about 26,149 calls to "ringified" functions to calculate the primes to 10,000, taking about three seconds, or about 600,000 CPU clock cycles per "ringify" call; but that isn't so surprising, as those "ringified" blocks do quite a lot more than a simple addition - such as creating new lists containing their own "ringified" blocks. If I had used your memoized streams for the same algorithm, it would take about twice as long due to the extra processing for the memoization...

I don't know that you will be able to do much about these speeds, given the limitations of running scripts on top of JavaScript, which seems to be on the order of a thousand times slower than running the same algorithms compiled directly to JavaScript. In my mind, given that Snap! is primarily a tool for learning about language paradigms such as OOP, imperative programming, and functional/declarative programming while playing with a primarily graphical interface, I am not too concerned with the speed (once you fix the imperative loop slowness, which might give some hints for better performance in other areas). If users graduate to wanting to push past these limitations, they may as well play with a language like Elm or Fable, which compiles directly to JavaScript, while still being able to manipulate graphics through HTML and SVG packages...

@brianharvey
Collaborator

Ah, thanks! We'll look into the FOR thing.

About rings, of course we have to wait until the procedure is called to run it; Snap! isn't a purely functional language, and the result of running the code depends on its environment. We could, I guess, compile the code in the ring to JS, but apart from a couple of experiments, we don't compile the code even when you run it; Snap! is an interpreter. As with any interpreter, we pay a huge performance price for that, but it's necessary to support "liveliness": the user can drag blocks into or out of the ring while it's running, or edit an argument slot.

@GordonBGood
Author

@brianharvey,

About rings, of course we have to wait until the procedure is called to run it; Snap! isn't a purely functional language, and the result of running the code depends on its environment. We could, I guess, compile the code in the ring to JS, but apart from a couple of experiments, we don't compile the code even when you run it; Snap! is an interpreter. As with any interpreter, we pay a huge performance price for that, but it's necessary to support "liveliness": the user can drag blocks into or out of the ring while it's running, or edit an argument slot.

That being the case, other than fixing the imperative loops, I don't think you can do much about the speed of using streams; once the list-of-n-stream and nth-item-of-stream blocks are fixed, at least when using a proper SoE, your users will be able to calculate the primes functionally to ten thousand or a little more in the allowed run time per block. Beyond that, using an imperative list-as-an-array per the usual algorithms, one might be able to push the SoE range a little higher until memory constraints intervene...

@brianharvey
Collaborator

in the allowed run time per block

By the way, there's no limit on the runtime of a block or script. The fact that the browser time slices us against other tabs doesn't matter; we pick up where we left off the next time we run.

@GordonBGood
Author

@brianharvey,

in the allowed run time per block

By the way, there's no limit on the runtime of a block or script. The fact that the browser time slices us against other tabs doesn't matter; we pick up where we left off the next time we run.

The limited run time seems to be related to the browser starting to give an "Aw, Snap!" error (code SIGILL) for long-running processes that take longer than about 30 to 60 seconds (on Google Chrome version 130, at least); Firefox doesn't seem to have the same problem, and I haven't tried Safari, but there's a good chance it doesn't have the problem either, although Firefox also seems to time out after a period and not produce a result, even though it doesn't crash...

Back to the subject of this issue: for practical purposes, your David Turner sieve requires the following numbers of operations for the given sieving ranges:

  1. For a range of 100 there are 436 operations.
  2. For a range of 1,000 there are 15,813 operations.
  3. For a range of 10,000 there are 777,890 operations (can't be run on Snap! without crashing).
  4. For much higher ranges than this, this algorithm brings any run time system to its knees...

In contrast, the tree folding SoE has the following number of operations:

  1. For a range of 100 there are 156 operations.
  2. For a range of 1,000 there are 2,168 operations.
  3. For a range of 10,000 there are 26,349 operations.
  4. For a range of 100,000 there are 348,304 operations (can't be run on Snap! without crashing).
  5. For a range of 1,000,000 there are 4,377,383 operations (can't be run on Snap! without crashing).

Note that with increasing range, the incremental SoE gets closer and closer to being linear with range because the log n factor increases more and more slowly as n increases...
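For what it's worth, here is one way such operation counts can be gathered for the Turner sieve (a hedged Haskell sketch, counting one operation per trial division against a candidate within the limit; not necessarily how the figures above were produced):

turnerOps :: Int -> Int
turnerOps limit = go [2 ..] 0
  where
    go (p : xs) ops
      | p > limit = ops
      | otherwise = go [x | x <- xs, x `mod` p > 0]
                       (ops + length (takeWhile (<= limit) xs))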

These results were obtained by running the algorithms in a compiled language. Just from practical considerations, you should be able to see the immense difference here, with the article from Melissa E. O'Neill explaining the cause: the David Turner sieve has computational complexity of about O(n^2) and the incremental SoE about O(n log n) - obviously as n approaches infinity. Practically, the log n term is log base 2, so, for instance, the "extra" factor for ranges to a million (1e6) is about 20 and for a billion (1e9) about 30; over this quite large range the incremental SoE thus goes from linear to only a factor of 1.5 above linear, whereas the David Turner sieve's cost would grow by a factor of a thousand!
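The arithmetic behind those factors is just a base-2 logarithm (a trivial check, for illustration):

extraFactor :: Double -> Double
extraFactor n = logBase 2 n   -- extraFactor 1e6 ≈ 20, extraFactor 1e9 ≈ 30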

As to the complexity of the tree folding SoE: your current Stream merge function isn't very efficient, as it uses a sort, and could be made faster by at least a constant factor by being implemented like my merge function, which just adds the lesser of the heads of the input streams to the produced stream. Your flatten function would also be much more efficient if it used tree folding (or if an ordered-flatten-streams block were provided), with the benefit that the result would be guaranteed to be in ascending order (rather than just interleaved) whenever all of the input streams are in ascending order. The algorithm does require a merge function that eliminates duplicates in the output stream, although the logic could be tweaked slightly to handle duplicates with an advance-tail-until-condition sub-block. The primes stream could then be defined as follows:
[images: SoEPrimes and InternalPrimeFilter block definitions]
Note that the second internal block just filters the stream of all odd composite numbers out of the stream of all odd numbers, except that, rather than creating a stream of the "heads" of all odd numbers, it produces each successive candidate "head" by adding two to the previous value...
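Read in Haskell terms, that internal filter amounts to the following (my rendering of the idea; it has the same shape as the testprmfrm function earlier in this thread):

-- emit odd candidates not present in the ascending composites stream,
-- generating each next candidate by adding 2 rather than keeping a stream
oddsNotIn :: Int -> [Int] -> [Int]
oddsNotIn n cs@(c : rest)
  | n == c    = oddsNotIn (n + 2) rest      -- composite: skip it
  | otherwise = n : oddsNotIn (n + 2) cs    -- no match: n is prime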

While it's true that this true SoE sieve is slightly more complex than the David Turner sieve, in light of the computational complexity, it would seem to be worth it...

@brianharvey
Collaborator

The limited run time seems to be related to the browser starting to give an "Aw, Snap!" error (code SIGILL) for long-running processes that take longer than about 30 to 60 seconds (on Google Chrome version 130, at least)

I suspect that what you're seeing is the result of a memory leak, rather than a timeout problem. Google does seem to enjoy introducing bugs to Chrome that mess up Snap!, but afaik there isn't one that prevents long-running programs.

As for the rest, c'mon, you don't have to teach me about asymptotic analysis of algorithms. If I were doing crypto or Bitcoin mining or something, I'd be really worried about runtime for thousands of primes (and I'd be using bignums, so it'd be even slower). But what I'm doing is teaching kids about lazy lists, and for my purposes the only reason for choosing primes as the stream to generate is to make the point that you can apply them to a serious-ish computation, not just the stream of multiples of three or other such trivial computations. It's the same reason there's that paragraph in the TeXBook with an embedded prime generator TeX macro. (And of course if I were doing crypto or Bitcoin mining I wouldn't be doing it in Snap!.)

The example, of course, comes from SICP, the best computer science book ever written, so you have a large hill to climb to convince me that it isn't the best way to introduce streams to CS 1 students. :)

@GordonBGood
Author

GordonBGood commented Oct 29, 2024

@brianharvey,

If I were doing crypto or Bitcoin mining or something, I'd be really worried about runtime for [large numbers - GBG] of primes... But what I'm doing is teaching kids about lazy lists...

Yes, that's true and I see your point.

(and I'd be using bignums, so it'd be even slower).

Why on earth would you be using bignums when this sieve would take eons even to sieve to the signed 32-bit number range of a little over two billion (2e9), even if it didn't run out of memory for the huge heap of closures from all the preceding prime-division filters? Even about the fastest SoE in the world, Kim Walisch's multi-threaded "primesieve" written in C++, would take on the order of months on a very high-end desktop computer just to count all the primes to about 2e19, the unsigned 64-bit integer number range. That is why he doesn't use bignums but only unsigned 64-bit integers, which are native registers on most CPUs. Although the most advanced sieve implementations such as his do use the larger SIMD registers on modern CPUs, that code mostly provides a boost for the small end of the range that can take advantage of this technology.

Just for a point of reference, it takes an optimized build of the GHC Haskell version of the David Turner sieve almost a minute to compute the 78,498 primes to a million on my fairly high-end desktop computer, and about 40% of the time (increasing with range) is spent doing garbage collection, due to the heavy heap use by the stacked filter closures...

In turn, I guess my main point is that you call it the Sieve of Eratosthenes; I wouldn't have even reacted if you hadn't done that, while also mentioning somewhere that you got it from SICP (implying that must make it true). That brings me to my next related point:

The example, of course, comes from SICP, the best computer science book ever written,...

I'll grant you that SICP was a very good learning resource for its time and still holds up in many ways; however, that doesn't mean the authors are never wrong, and in this particular case they are very wrong. Again, including this simple trial division prime number sieve wouldn't have been wrong if they had just said that was what it was (or, even better, taken the opportunity to point out its limitations and why its performance is so poor), but calling it a Sieve of Eratosthenes was very wrong, as is now well known, especially after the publication and peer review of Melissa E. O'Neill's article linked in the opening post, which carefully explains why it can't be called a Sieve of Eratosthenes, based both on its computational complexity and on a comparison of its algorithm with a true incremental (functional) Sieve of Eratosthenes. It is regrettable that they made that statement, because it has led generations upon generations who drew much of their knowledge of computer science from SICP as their "holy book" to believe that it is an SoE, just as you do.

so you have a large hill to climb to convince me that it isn't the best way to introduce streams to CS 1 students. :)

I have never said I am trying to "climb that hill"; I am just pointing out this single error in the book (and in your reference to the algorithm as the Sieve of Eratosthenes)...

I really don't have more to say until you've read at least the introduction of the O'Neill article, which it doesn't seem you have...

@jmoenig
Owner

jmoenig commented Oct 30, 2024

could you perhaps take this discussion to the Snap! forum: https://forum.snap.berkeley.edu/t/streams-library-2-0-development-part-2/16664/214

@qw-23

qw-23 commented Oct 30, 2024

I wrote much of Snap!'s current Streams library, and I'm curious about @GordonBGood's proposal for a recursive list [number] items of [stream] block definition.

@GordonBGood
Author

@qw-23,

"New users are limited to one reply per thread"!!! And in trying to edit my post, I deleted it and can't seem to get it back!!!!

@Qw23, @bh,

I think the issue to be discussed here isn't about primes, but about your proposal to replace some iterative stream library procedures with recursive versions.

In thinking about this problem, I am wondering whether the imperative loops are so slow because they are doing constant conversion between the imperative (array) and linked-list representations of the list values used. So, in testing, I bypassed the imperative loop blocks entirely by using functional recursive "loops"...

any suggestions for reducing runtime a/o boosting stability of “infrastructural” blocks, like list [number] items of [stream] . I suspect you devised and tested a recursive solution - if so, will you share it so I can try it out?

Sure, but as my needs were only for an infinite stream version, I haven't done much testing for a stream that might terminate with an empty stream. Thus, the following isn't completely tested:

BTW I don’t have any working command of Haskell.

Ah, sorry - I assumed anyone working on functional algorithms such as streams would also know Haskell; I'll only post Snap! images of blocks then, as I don't care for LISP, and exported blocks in .xml format are so ugly...

What format works for posting blocks in this forum? I tried posting images, but the forum doesn't seem to allow that?

Ah, I discovered Cloud sharing and publishing...

[images: NewListNofStream, InternalListNofStream, ReverseList, InternalReverseList, and ItemNofStream block definitions]
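Judging from the block names, the structure is presumably along these lines (a hedged Haskell sketch of the recursive approach, treating the stream as a lazy list; the accumulate-then-reverse shape is my guess from the ReverseList blocks):

-- take the first n items of a (possibly terminating) stream recursively,
-- accumulating in reverse and reversing once at the end
listNOfStream :: Int -> [a] -> [a]
listNOfStream n stream = reverseList (go n stream [])
  where
    go k s acc
      | k <= 0    = acc
      | otherwise = case s of
          []       -> acc                     -- stream ended early
          (x : xs) -> go (k - 1) xs (x : acc)
    reverseList = foldl (flip (:)) []         -- mirrors the ReverseList block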

@bh

bh commented Oct 30, 2024

Why have I been mentioned?

@jmoenig
Owner

jmoenig commented Oct 30, 2024

somebody might have mistaken your GitHub handle for @brianharvey's usual nick, sorry!
