Future direction for memory management? #489
Related: do you know if/how well Arraymancer works with `orc`/`arc`?
According to all my local testing, Arraymancer works very well with ARC since #420 / #477. There are nimble test tasks defined here: […]. These are not run in CI yet because I didn't want to modify the CI at the same time as #477. I'll open a PR "soon" to replace Travis with GitHub Actions, and it will include the test suite using ARC.
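(For readers following along: a task of this kind might look like the NimScript sketch below. The task name and test entry point are illustrative assumptions, not the actual definitions in Arraymancer's nimble file.)

```nim
# Hypothetical nimble (NimScript) task; name and entry point are assumed.
task test_arc, "Run the test suite with --gc:arc":
  exec "nim c -r --gc:arc -d:release tests/tests_cpu.nim"
```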
Arraymancer will keep reference semantics. Let's take a training loop as an example:

```nim
# Learning loop
for epoch in 0..10000:
  for batch_id in 0..<100:
    # Minibatch offset in the Tensor
    let offset = batch_id * 32
    let x = x_train[offset ..< offset + 32, _]
    let target = y[offset ..< offset + 32, _]

    # Building the network
    let n1 = relu linear(x, layer_3neurons)
    let n2 = linear(n1, classifier_layer)
    let loss = n2.sigmoid_cross_entropy(target)

    # Compute the gradient (i.e. contribution of each parameter to the loss)
    loss.backprop()

    # Correct the weights now that we have the gradient information
    optim.update()
```

The autograd graph and the optimiser hold references to the same weight tensors, which is what allows `optim.update()` to adjust the parameters in place.

**Copy-on-write**
- Value semantics:
- Reference semantics:
- In terms of implementation complexity:
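As a minimal illustration of the value-versus-reference distinction in the list above, assuming the shared-storage behaviour described in this thread (`toTensor` and `clone` are existing Arraymancer procs):

```nim
import arraymancer

# Under reference semantics, plain assignment shares the buffer.
var a = [1, 2, 3].toTensor
var b = a          # `b` and `a` now point at the same storage
b[0] = 999         # the write is visible through `a` as well
echo a

# An explicit deep copy is required to detach the two handles;
# copy-on-write would defer this copy until the first mutation.
var c = a.clone()
c[0] = -1          # `a` is unaffected
echo a
```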
Obviously this is your project, so I'm not demanding you change it on my account or anything. However, I'm still not clear on what the (performance) value of reference semantics is, as opposed to using value semantics and letting a user opt into a `ref Tensor` where sharing is needed. Furthermore, wouldn't it be possible to have routines like […]?

Regardless of that choice, how do you envision Arraymancer working with Nim's new destructors and `sink`/`lent` annotations? I suppose there isn't really that much benefit to `sink`/`lent` if the array data (other than metadata) is stored with reference semantics anyway. Would the new destructors have any use, or do you use manual reference counting?
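To sketch that last point (hypothetical types, not Arraymancer's actual definitions): when the payload already sits behind a `ref`, a `sink` parameter only moves the small handle, so the annotation saves very little:

```nim
# Hypothetical types for illustration; not Arraymancer's definitions.
type
  Storage[T] = ref object     # payload behind a ref => always shared
    data: seq[T]
  RefTensor[T] = object
    shape: seq[int]
    storage: Storage[T]

proc consume[T](t: sink RefTensor[T]): int =
  # `sink` moves only the handle (a shape seq plus one pointer);
  # the payload in `storage` would not have been copied anyway.
  t.storage.data.len

let t = RefTensor[int](shape: @[4],
                       storage: Storage[int](data: @[1, 2, 3, 4]))
echo consume(t)   # moving `t` transfers only a few words
```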
I was hoping to clarify what your long-term goal is for memory management. #150 suggests that you eventually want to add a `=sink` operator, but that seems to conflict with the idea of using reference semantics. You had toyed with moving away from that (#157) but ultimately decided against it.

Your current approach does seem to conflict with the idiomatic way of doing things in Nim, and it makes it difficult, verging on meaningless, to use `sink`/`lent` annotations with custom types containing a Tensor. It would seem to me that the most idiomatic way to handle this in Nim would be to make Tensors use value semantics by default, using Nim's `=destroy` operator for destruction and the `=sink` operator for optimisation. If a user then needs reference semantics, they can just use a `ref Tensor` type.

Of course, I fully appreciate that this would be a significant (and breaking) change, so I don't expect you to suddenly make it on account of this feature request. Still, I thought it was worth raising.
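For concreteness, a minimal sketch of that value-semantics design, following the lifetime-hook pattern from the Nim manual (the layout and names are assumptions, not Arraymancer's actual Tensor):

```nim
# Hypothetical value-semantics Tensor; fields and names are
# illustrative, not Arraymancer's actual definition.
type
  Tensor[T] = object
    len: int
    data: ptr UncheckedArray[T]

proc newTensor[T](len: int): Tensor[T] =
  # Allocate an owned buffer; it is released by `=destroy` below.
  result.len = len
  result.data = cast[ptr UncheckedArray[T]](alloc0(len * sizeof(T)))

proc `=destroy`[T](t: var Tensor[T]) =
  # Runs automatically when a Tensor goes out of scope.
  if t.data != nil:
    dealloc(t.data)

proc `=sink`[T](dst: var Tensor[T], src: Tensor[T]) =
  # Move assignment: release dst's old buffer and steal src's fields.
  `=destroy`(dst)
  wasMoved(dst)
  dst.len = src.len
  dst.data = src.data
```

Sharing would then be opt-in via `ref Tensor[T]`, exactly as suggested above.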