Dyngen Tutorial Missing Sample #1 (PHATE change?) #9

Open

whitsettbe opened this issue Dec 31, 2024 · 0 comments

In [Tutorial] Dyngen, loading the data (cell 4) produces the plot below, which shows only four sample categories; the MIOFlow paper indicates there should be five. This causes a failure in cell 13, where the training regimen is initiated. The stack trace (also included below) reveals an IndexError when `ToyODE.forward` is called at time t=5, because the model's `alpha` tensor only holds noise-scale parameters for four samples. As a result, the example cannot train the model.
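For reference, here is a minimal sketch of the failing indexing pattern from `ToyODE.forward` (not MIOFlow code; the shapes and values are illustrative):

```python
import torch

# Minimal sketch of the failing pattern in ToyODE.forward (models.py:61).
# alpha holds one noise-scale parameter per loaded sample group -- here 4,
# matching the four categories actually present after trimming.
alpha = torch.tensor([0.1, 0.1, 0.1, 0.1])

t = 5.0                    # integration reaches the fifth sample time
x = torch.randn(10, 2)     # illustrative batch of points
z = torch.randn(x.size())  # noise term, as in ToyODE.forward

# int(t - 1) == 4, but alpha only has indices 0..3:
dxdt = x + z * alpha[int(t - 1)]  # IndexError: index 4 ... with size 4
```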

I suspect the issue results from updates to PHATE since the paper was published: the new coordinates assigned to the Dyngen dataset put more than 20% of the data within the region `df['d1'] <= -2.0` that `make_dyngen_data` trims away. Widening the cutoffs to reflect the up-to-date PHATE coordinates might resolve the issue. Note also that the new first coordinate is sign-reversed relative to the one shown in Figure 3 of the paper.
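To quantify this suspicion, one could measure how much of each group's untrimmed first PHATE coordinate falls below the cutoff. The sketch below uses synthetic data and assumed column names (`d1` from the trimming condition, `samples` for the time label); it is not tutorial code:

```python
import numpy as np
import pandas as pd

# Stand-in for the untrimmed PHATE embedding (synthetic; the column names
# 'd1' and 'samples' are assumptions based on the trimming condition).
rng = np.random.default_rng(0)
df_raw = pd.DataFrame({
    'samples': np.repeat(np.arange(5), 100),  # five expected groups
    'd1': rng.normal(loc=-1.0, scale=1.5, size=500),
})

# Fraction of each group that the cutoff d1 <= -2.0 would discard;
# a group near 100% would vanish entirely, as observed in the tutorial.
print(df_raw.groupby('samples')['d1'].apply(lambda d: (d <= -2.0).mean()))
```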

[image: PHATE plot of the Dyngen data showing only four sample categories]

```
IndexError                                Traceback (most recent call last)
Cell In[14], line 2
      1 start_time = time.time()
----> 2 local_losses, batch_losses, globe_losses = training_regimen(
      3     # local, global, local train structure
      4     n_local_epochs=n_local_epochs, 
      5     n_epochs=n_epochs, 
      6     n_post_local_epochs=n_post_local_epochs,
      7     
      8     # where results are stored
      9     exp_dir=exp_dir, 
     10 
     11     # BEGIN: train params
     12     model=model, df=df, groups=groups, optimizer=optimizer, 
     13     criterion=criterion, use_cuda=use_cuda,
     14     
     15     hold_one_out=hold_one_out, hold_out=hold_out,
     16     
     17     use_density_loss=use_density_loss, 
     18     lambda_density=lambda_density,
     19     
     20     autoencoder=autoencoder, use_emb=use_emb, use_gae=use_gae, 
     21     
     22     sample_size=sample_size, logger=logger,
     23     reverse_schema=reverse_schema, reverse_n=reverse_n,
     24     # END: train params
     25 
     26     plot_every=5,
     27     n_points=n_points, n_trajectories=n_trajectories, n_bins=n_bins, 
     28     #local_losses=local_losses, batch_losses=batch_losses, globe_losses=globe_losses
     29 )
     30 run_time = time.time() - start_time + run_time_geo if use_emb or use_gae else time.time() - start_time
     31 logger.info(f'Total run time: {np.round(run_time, 5)}')

File c:\users\benja\my stuff\2025\yale\mioflow\MIOFlow\train.py:505, in training_regimen(n_local_epochs, n_epochs, n_post_local_epochs, exp_dir, model, df, groups, optimizer, n_batches, criterion, use_cuda, hold_one_out, hold_out, hinge_value, use_density_loss, top_k, lambda_density, autoencoder, use_emb, use_gae, sample_size, sample_with_replacement, logger, add_noise, noise_scale, use_gaussian, use_penalty, lambda_energy, steps, plot_every, n_points, n_trajectories, n_bins, local_losses, batch_losses, globe_losses, reverse_schema, reverse_n)
    503 for epoch in tqdm(range(n_epochs), desc='Epoch'):
    504     reverse = True if reverse_schema and epoch % reverse_n == 0 else False
--> 505     l_loss, b_loss, g_loss = train(
    506         model, df, groups, optimizer, n_batches, 
    507         criterion = criterion, use_cuda = use_cuda,
    508         local_loss=False, global_loss=True, apply_losses_in_time=True,
    509         hold_one_out=hold_one_out, hold_out=hold_out, 
    510         hinge_value=hinge_value,
    511         use_density_loss = use_density_loss,       
    512         top_k = top_k, lambda_density = lambda_density, 
    513         autoencoder = autoencoder, use_emb = use_emb, use_gae = use_gae, sample_size=sample_size, 
    514         sample_with_replacement=sample_with_replacement, logger=logger, 
    515         add_noise=add_noise, noise_scale=noise_scale, use_gaussian=use_gaussian,
    516         use_penalty=use_penalty, lambda_energy=lambda_energy, reverse=reverse
    517     )
    518     for k, v in l_loss.items():  
    519         local_losses[k].extend(v)

File c:\users\benja\my stuff\2025\yale\mioflow\MIOFlow\train.py:253, in train(model, df, groups, optimizer, n_batches, criterion, use_cuda, sample_size, sample_with_replacement, local_loss, global_loss, hold_one_out, hold_out, apply_losses_in_time, top_k, hinge_value, use_density_loss, lambda_density, autoencoder, use_emb, use_gae, use_gaussian, add_noise, noise_scale, logger, use_penalty, lambda_energy, reverse)
    251     data_ti = [autoencoder.encoder(data) for data in data_ti]
    252 # prediction
--> 253 data_tp = model(data_ti[0], time, return_whole_sequence=True)
    254 if autoencoder is not None and use_emb:        
    255     data_tp = [autoencoder.encoder(data) for data in data_tp]

File ~\miniconda3\envs\mioflow\lib\site-packages\torch\nn\modules\module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
   1734     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1735 else:
-> 1736     return self._call_impl(*args, **kwargs)

File ~\miniconda3\envs\mioflow\lib\site-packages\torch\nn\modules\module.py:1747, in Module._call_impl(self, *args, **kwargs)
   1742 # If we don't have any hooks, we want to skip the rest of the logic in
   1743 # this function, and just call forward.
   1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1745         or _global_backward_pre_hooks or _global_backward_hooks
   1746         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747     return forward_call(*args, **kwargs)
   1749 result = None
   1750 called_always_called_hooks = set()

File c:\users\benja\my stuff\2025\yale\mioflow\MIOFlow\models.py:209, in ToyModel.forward(self, x, t, return_whole_sequence)
    207         self.norm.append(torch.linalg.norm(self.func(time,x)).pow(2))
    208 if self.atol is None and self.rtol is None:
--> 209     x = odeint(self.func,x ,t, method=self.method)
    210 elif self.atol is not None and self.rtol is None:
    211     x = odeint(self.func,x ,t, method=self.method, atol=self.atol)

File ~\miniconda3\envs\mioflow\lib\site-packages\torchdiffeq\_impl\adjoint.py:206, in odeint_adjoint(func, y0, t, rtol, atol, method, options, event_fn, adjoint_rtol, adjoint_atol, adjoint_method, adjoint_options, adjoint_params)
    203 state_norm = options["norm"]
    204 handle_adjoint_norm_(adjoint_options, shapes, state_norm)
--> 206 ans = OdeintAdjointMethod.apply(shapes, func, y0, t, rtol, atol, method, options, event_fn, adjoint_rtol, adjoint_atol,
    207                                 adjoint_method, adjoint_options, t.requires_grad, *adjoint_params)
    209 if event_fn is None:
    210     solution = ans

File ~\miniconda3\envs\mioflow\lib\site-packages\torch\autograd\function.py:575, in Function.apply(cls, *args, **kwargs)
    572 if not torch._C._are_functorch_transforms_active():
    573     # See NOTE: [functorch vjp and autograd interaction]
    574     args = _functorch.utils.unwrap_dead_wrappers(args)
--> 575     return super().apply(*args, **kwargs)  # type: ignore[misc]
    577 if not is_setup_ctx_defined:
    578     raise RuntimeError(
    579         "In order to use an autograd.Function with functorch transforms "
    580         "(vmap, grad, jvp, jacrev, ...), it must override the setup_context "
    581         "staticmethod. For more details, please see "
    582         "https://pytorch.org/docs/main/notes/extending.func.html"
    583     )

File ~\miniconda3\envs\mioflow\lib\site-packages\torchdiffeq\_impl\adjoint.py:24, in OdeintAdjointMethod.forward(ctx, shapes, func, y0, t, rtol, atol, method, options, event_fn, adjoint_rtol, adjoint_atol, adjoint_method, adjoint_options, t_requires_grad, *adjoint_params)
     21 ctx.event_mode = event_fn is not None
     23 with torch.no_grad():
---> 24     ans = odeint(func, y0, t, rtol=rtol, atol=atol, method=method, options=options, event_fn=event_fn)
     26     if event_fn is None:
     27         y = ans

File ~\miniconda3\envs\mioflow\lib\site-packages\torchdiffeq\_impl\odeint.py:80, in odeint(func, y0, t, rtol, atol, method, options, event_fn)
     77 solver = SOLVERS[method](func=func, y0=y0, rtol=rtol, atol=atol, **options)
     79 if event_fn is None:
---> 80     solution = solver.integrate(t)
     81 else:
     82     event_t, solution = solver.integrate_until_event(t[0], event_fn)

File ~\miniconda3\envs\mioflow\lib\site-packages\torchdiffeq\_impl\solvers.py:114, in FixedGridODESolver.integrate(self, t)
    112 dt = t1 - t0
    113 self.func.callback_step(t0, y0, dt)
--> 114 dy, f0 = self._step_func(self.func, t0, dt, t1, y0)
    115 y1 = y0 + dy
    117 while j < len(t) and t1 >= t[j]:

File ~\miniconda3\envs\mioflow\lib\site-packages\torchdiffeq\_impl\fixed_grid.py:29, in RK4._step_func(self, func, t0, dt, t1, y0)
     27 def _step_func(self, func, t0, dt, t1, y0):
     28     f0 = func(t0, y0, perturb=Perturb.NEXT if self.perturb else Perturb.NONE)
---> 29     return rk4_alt_step_func(func, t0, dt, t1, y0, f0=f0, perturb=self.perturb), f0

File ~\miniconda3\envs\mioflow\lib\site-packages\torchdiffeq\_impl\rk_common.py:115, in rk4_alt_step_func(func, t0, dt, t1, y0, f0, perturb)
    113 k2 = func(t0 + dt * _one_third, y0 + dt * k1 * _one_third)
    114 k3 = func(t0 + dt * _two_thirds, y0 + dt * (k2 - k1 * _one_third))
--> 115 k4 = func(t1, y0 + dt * (k1 - k2 + k3), perturb=Perturb.PREV if perturb else Perturb.NONE)
    116 return (k1 + 3 * (k2 + k3) + k4) * dt * 0.125

File ~\miniconda3\envs\mioflow\lib\site-packages\torch\nn\modules\module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
   1734     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1735 else:
-> 1736     return self._call_impl(*args, **kwargs)

File ~\miniconda3\envs\mioflow\lib\site-packages\torch\nn\modules\module.py:1747, in Module._call_impl(self, *args, **kwargs)
   1742 # If we don't have any hooks, we want to skip the rest of the logic in
   1743 # this function, and just call forward.
   1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1745         or _global_backward_pre_hooks or _global_backward_hooks
   1746         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747     return forward_call(*args, **kwargs)
   1749 result = None
   1750 called_always_called_hooks = set()

File ~\miniconda3\envs\mioflow\lib\site-packages\torchdiffeq\_impl\misc.py:197, in _PerturbFunc.forward(self, t, y, perturb)
    194 else:
    195     # Do nothing.
    196     pass
--> 197 return self.base_func(t, y)

File ~\miniconda3\envs\mioflow\lib\site-packages\torch\nn\modules\module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
   1734     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1735 else:
-> 1736     return self._call_impl(*args, **kwargs)

File ~\miniconda3\envs\mioflow\lib\site-packages\torch\nn\modules\module.py:1747, in Module._call_impl(self, *args, **kwargs)
   1742 # If we don't have any hooks, we want to skip the rest of the logic in
   1743 # this function, and just call forward.
   1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1745         or _global_backward_pre_hooks or _global_backward_hooks
   1746         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747     return forward_call(*args, **kwargs)
   1749 result = None
   1750 called_always_called_hooks = set()

File ~\miniconda3\envs\mioflow\lib\site-packages\torchdiffeq\_impl\misc.py:197, in _PerturbFunc.forward(self, t, y, perturb)
    194 else:
    195     # Do nothing.
    196     pass
--> 197 return self.base_func(t, y)

File ~\miniconda3\envs\mioflow\lib\site-packages\torch\nn\modules\module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
   1734     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1735 else:
-> 1736     return self._call_impl(*args, **kwargs)

File ~\miniconda3\envs\mioflow\lib\site-packages\torch\nn\modules\module.py:1747, in Module._call_impl(self, *args, **kwargs)
   1742 # If we don't have any hooks, we want to skip the rest of the logic in
   1743 # this function, and just call forward.
   1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1745         or _global_backward_pre_hooks or _global_backward_hooks
   1746         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747     return forward_call(*args, **kwargs)
   1749 result = None
   1750 called_always_called_hooks = set()

File c:\users\benja\my stuff\2025\yale\mioflow\MIOFlow\models.py:61, in ToyODE.forward(self, t, x)
     59 if self.alpha is not None:
     60     z = torch.randn(x.size(),requires_grad=False).cuda() if x.is_cuda else torch.randn(x.size(),requires_grad=False)
---> 61 dxdt = x + z*self.alpha[int(t-1)] if self.alpha is not None else x
     62 return dxdt

IndexError: index 4 is out of bounds for dimension 0 with size 4
```
whitsettbe changed the title from Dyngen Tutorial Missing Sample #1 (PHATE change) to Dyngen Tutorial Missing Sample #1 (PHATE change?) on Dec 31, 2024