
clean up viscy cli display #73

Open
mattersoflight opened this issue Mar 21, 2024 · 0 comments
@mattersoflight (Member):

`viscy --help` prints a useful and succinct help message.

But `viscy <subcommand> --help` prints a lot of Lightning CLI information that is not relevant. For example, `viscy preprocess --help` prints:

  --lr_scheduler CONFIG | CLASS_PATH_OR_NAME | .INIT_ARG_NAME VALUE
                        One or more arguments specifying "class_path" and
                        "init_args" for any subclass of {torch.optim.lr_schedu
                        ler.LRScheduler,lightning.pytorch.cli.ReduceLROnPlatea
                        u}. (type: Union[LRScheduler, ReduceLROnPlateau],
                        known subclasses:
                        torch.optim.lr_scheduler.LRScheduler,
                        monai.optimizers.LinearLR,
                        monai.optimizers.ExponentialLR,
                        torch.optim.lr_scheduler.LambdaLR,
                        monai.optimizers.WarmupCosineSchedule,
                        torch.optim.lr_scheduler.MultiplicativeLR,
                        torch.optim.lr_scheduler.StepLR,
                        torch.optim.lr_scheduler.MultiStepLR,
                        torch.optim.lr_scheduler.ConstantLR,
                        torch.optim.lr_scheduler.LinearLR,
                        torch.optim.lr_scheduler.ExponentialLR,
                        torch.optim.lr_scheduler.SequentialLR,
                        torch.optim.lr_scheduler.PolynomialLR,
                        torch.optim.lr_scheduler.CosineAnnealingLR,
                        torch.optim.lr_scheduler.ChainedScheduler,
                        torch.optim.lr_scheduler.ReduceLROnPlateau,
                        lightning.pytorch.cli.ReduceLROnPlateau,
                        torch.optim.lr_scheduler.CyclicLR,
                        torch.optim.lr_scheduler.CosineAnnealingWarmRestarts,
                        torch.optim.lr_scheduler.OneCycleLR,
                        torch.optim.swa_utils.SWALR,
                        lightning.pytorch.cli.ReduceLROnPlateau)

Compute dataset statistics before training or testing for normalization:
  --data_path DATA_PATH
                        (required, type: <class 'Path'>)
  --channel_names CHANNEL_NAMES, --channel_names+ CHANNEL_NAMES
                        (type: Union[list[str], Literal[-1]], default: -1)
  --num_workers NUM_WORKERS
                        (type: int, default: 1)
  --block_size BLOCK_SIZE
                        (type: int, default: 32)

@ziw-liu please fix this.
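For comparison, a minimal sketch of the succinct help the preprocess subcommand could show, built with plain `argparse` rather than the Lightning CLI (this is an illustration of the desired output, not viscy's actual implementation; option names and defaults are taken from the listing above):

```python
import argparse
from pathlib import Path


def build_preprocess_parser() -> argparse.ArgumentParser:
    """Build a parser whose --help shows only the preprocessing options."""
    parser = argparse.ArgumentParser(
        prog="viscy preprocess",
        description=(
            "Compute dataset statistics before training or testing "
            "for normalization."
        ),
    )
    parser.add_argument(
        "--data_path", type=Path, required=True,
        help="Path to the dataset (required).",
    )
    parser.add_argument(
        "--channel_names", nargs="+", default=-1,
        help="Channel names to normalize, or -1 for all channels (default: -1).",
    )
    parser.add_argument(
        "--num_workers", type=int, default=1,
        help="Number of worker processes (default: 1).",
    )
    parser.add_argument(
        "--block_size", type=int, default=32,
        help="Block size for computing statistics (default: 32).",
    )
    return parser


# Example invocation: only the subcommand's own options appear in --help.
args = build_preprocess_parser().parse_args(["--data_path", "dataset.zarr"])
```

Running `build_preprocess_parser().print_help()` then yields four options and a one-line description, without the list of every known `LRScheduler` subclass.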

@ziw-liu ziw-liu added the documentation Improvements or additions to documentation label Mar 21, 2024