In dataset_argoverse.py L527, why is `vectors.append(vector)` not at the end of that for loop?
In decoder.py L339, `loss[i] += F.nll_loss(pred_probs[i].unsqueeze(0), torch.tensor([argmin], device=device))` — should the second parameter be `torch.tensor([1], device=device)`?
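For context on that question: `F.nll_loss` treats its second argument as the index of the target class, so a fixed `torch.tensor([1])` would always train against candidate 1, whereas `argmin` (presumably the index of the candidate goal closest to the ground truth) selects a different target per sample. A minimal pure-Python sketch of what the loss reduces to for a single sample (the distances and probabilities below are made up for illustration):

```python
import math

def nll_loss(log_probs, target):
    # For one sample, PyTorch's F.nll_loss reduces to the
    # negative log-probability assigned to the target class.
    return -log_probs[target]

# Hypothetical distances from each candidate goal to the ground-truth endpoint.
distances = [0.4, 3.2, 5.1]
argmin = min(range(len(distances)), key=distances.__getitem__)  # 0 for this sample

# Hypothetical log-softmax scores the model assigned to the three candidates.
probs = [0.5, 0.3, 0.2]
log_probs = [math.log(p) for p in probs]

# Trains against the closest candidate, not a fixed slot such as index 1.
loss = nll_loss(log_probs, argmin)
```

A hard-coded target of 1 would only be correct for samples where candidate 1 happens to be the one closest to the ground truth.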
dataset_argoverse.py L428 should add:

```python
if len(focal_track.object_states) != 110:
    return None
```

because in some samples the focal_track has fewer than 110 object_states, and without this check the code won't run.
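The guard above can also be applied as a dataset-level filter. A minimal sketch with toy stand-in dicts (the `focal_track_states` key is hypothetical, not the av2 API), assuming a fully observed Argoverse 2 focal track has 110 states (11 s at 10 Hz):

```python
def filter_complete(scenarios, expected_states=110):
    """Keep only scenarios whose focal track is fully observed."""
    kept = []
    for scenario in scenarios:
        if len(scenario["focal_track_states"]) == expected_states:
            kept.append(scenario)
    return kept

# Toy stand-ins for two scenarios: one complete, one truncated to 105 frames
# (like the partially observed samples reported below).
scenarios = [
    {"id": "complete", "focal_track_states": list(range(110))},
    {"id": "truncated", "focal_track_states": list(range(105))},
]
complete = filter_complete(scenarios)
```

Filtering up front keeps the incomplete samples out of the preprocessed dataset instead of returning `None` mid-pipeline.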
The argoverse2 branch doesn't train the set_predictor. Have you tried training the set_predictor, and if so, can its performance get close to that of offline optimization?
Maybe there is something wrong with my downloaded data, but there are dozens of samples whose focal_track has fewer than 110 object_states. For example, scenarios 02ecafff-012c-45f8-bb6e-5ace2c6e3d88 and 014c02b0-3315-4faf-aca5-b239f5b7f0ca actually contain only 105 frames of data.
I read the code on the argoverse2 branch. Could you help me answer some questions about it?