In your implementation, loss_desc uses a triplet loss and computes dot_product, which has shape (Bs, 30, 40, 30, 40, C). This makes it hard to use a large batch size or a large C without running into out-of-memory errors. Do you have any plans to reimplement this to be more GPU-friendly?
Hi, no, unfortunately there is no active development for this repo. But I was able to train it with a batch size of 2 or 3, and that was enough to get reasonable results.
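For anyone hitting the same OOM, one common workaround is to contract over the descriptor dimension C in chunks, so the big (Bs, 30, 40, 30, 40, C) intermediate is never materialized. The sketch below is not the repo's code; it is a hypothetical NumPy illustration (a PyTorch version would look the same with `torch.einsum`), assuming L2-normalized descriptors laid out as (B, H, W, C):

```python
import numpy as np

def pairwise_dot_chunked(desc, chunk=8):
    """Pairwise descriptor similarities, computed in row chunks.

    desc: (B, H, W, C) descriptor grid (hypothetical layout).
    Returns (B, H, W, H, W) dot products. einsum contracts over C
    immediately, so the peak extra memory per step is (B, chunk, H*W)
    instead of the full (B, H, W, H, W, C) broadcast product.
    """
    B, H, W, C = desc.shape
    flat = desc.reshape(B, H * W, C)
    out = np.empty((B, H * W, H * W), dtype=desc.dtype)
    for start in range(0, H * W, chunk):
        end = min(start + chunk, H * W)
        # Contract descriptors of one chunk of positions against all others.
        out[:, start:end] = np.einsum(
            'bic,bjc->bij', flat[:, start:end], flat)
    return out.reshape(B, H, W, H, W)
```

The triplet margins can then be applied to the (B, H, W, H, W) score tensor; the chunk size trades speed for peak memory.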