Merge pull request #43 from ENSTA-U2IS/dev
🐛 Fix docs & Add docs check before merge
o-laurent authored Aug 29, 2023
2 parents 3173398 + eff523e commit fd8bdf0
Showing 5 changed files with 17 additions and 9 deletions.
4 changes: 4 additions & 0 deletions .github/workflows/build-docs.yml
@@ -3,6 +3,9 @@ on:
   push:
     branches:
       - main
+  pull_request:
+    branches:
+      - main
   schedule:
     - cron: "00 12 * * 0" # Every Sunday noon (preserve the cache folders)
   workflow_dispatch:
@@ -61,6 +64,7 @@ jobs:
       - name: Deploy
         uses: peaceiris/actions-gh-pages@v3
+        if: ${{ github.event_name != 'pull_request' }}
         with:
           deploy_key: ${{ secrets.ACTIONS_DEPLOY_KEY }}
           external_repository: torch-uncertainty/torch-uncertainty.github.io
3 changes: 3 additions & 0 deletions .github/workflows/run-tests.yml
@@ -6,6 +6,9 @@ on:
       - main
       - dev
   pull_request:
+    branches:
+      - main
+      - dev
   schedule:
     - cron: "42 7 * * 0"
   workflow_dispatch:
7 changes: 4 additions & 3 deletions auto_tutorials_source/tutorial_bayesian.py
@@ -84,9 +84,10 @@ def optim_lenet(model: nn.Module) -> dict:
 # We mock the arguments for the trainer
 with ArgvContext(
     "file.py",
-    "--max_epochs 1",
-    "--enable_progress_bar=False",
-    "--verbose=False",
+    "--max_epochs",
+    "1",
+    "--enable_progress_bar",
+    "False",
 ):
     args = init_args(datamodule=MNISTDataModule)
 
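The fix above works because argparse-style parsers treat each argv entry as a separate token: a flag and its value fused into one string (`"--max_epochs 1"`) is never matched against the registered option. A minimal sketch of this behavior with plain `argparse`, standing in for the Lightning parser the tutorial mocks:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--max_epochs", type=int)

# A flag and its value fused into one argv entry is not recognized as an
# option: the embedded space makes argparse classify it as a positional,
# so it lands in the "unknown" extras instead of being parsed
args, unknown = parser.parse_known_args(["--max_epochs 1"])
assert args.max_epochs is None
assert unknown == ["--max_epochs 1"]

# Passing the flag and its value as two separate entries parses as intended
args = parser.parse_args(["--max_epochs", "1"])
assert args.max_epochs == 1
```

The `key=value` form (`--enable_progress_bar=False`) is also accepted by argparse, but the commit normalizes everything to separate-token form, which works uniformly.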
11 changes: 6 additions & 5 deletions auto_tutorials_source/tutorial_scaler.py
@@ -71,15 +71,16 @@
 #
 # When computing the ECE, you need to provide the likelihoods associated with the inputs.
 # To do this, just call PyTorch's softmax.
+#
+# To avoid lengthy computations (without GPU), we restrict the calibration computation to a subset
+# of the test set.
 
 from torch.utils.data import DataLoader, random_split
 
 # Split datasets
 dataset = dm.test
-cal_dataset, test_dataset = random_split(dataset, [1000, len(dataset) - 1000])
-cal_dataloader, test_dataloader = DataLoader(cal_dataset, batch_size=32), DataLoader(
-    test_dataset, batch_size=32
-)
+cal_dataset, test_dataset, other = random_split(dataset, [1000, 1000, len(dataset) - 2000])
+test_dataloader = DataLoader(test_dataset, batch_size=32)
 
 # Initialize the ECE
 ece = CalibrationError(task="multiclass", num_classes=100)
@@ -105,7 +106,7 @@
 
 # Fit the scaler on the calibration dataset
 scaler = TemperatureScaler()
-scaler = scaler.fit(model=model, calib_loader=cal_dataloader)
+scaler = scaler.fit(model=model, calibration_set=cal_dataset)
 
 # %%
 # 6. Iterating Again to Compute the Improved ECE
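The reworked split hands `random_split` three lengths, so the calibration and evaluation subsets are disjoint and the remainder is simply discarded; `TemperatureScaler.fit` then receives the calibration dataset directly rather than a dataloader. A standalone sketch of the splitting step, using a dummy `TensorDataset` in place of the tutorial's `dm.test` (the size of 3000 is an arbitrary assumption for illustration):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

# Dummy stand-in for the tutorial's dm.test (3000 samples, hypothetical size)
dataset = TensorDataset(torch.randn(3000, 4), torch.randint(0, 10, (3000,)))

# 1000 samples for calibration, 1000 for evaluation; the rest goes unused
cal_dataset, test_dataset, other = random_split(
    dataset, [1000, 1000, len(dataset) - 2000]
)
test_dataloader = DataLoader(test_dataset, batch_size=32)

assert len(cal_dataset) == len(test_dataset) == 1000
```

Note that `random_split` raises an error if the lengths do not sum to `len(dataset)`, which is why the leftover portion must be captured explicitly as a third subset.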
1 change: 0 additions & 1 deletion docs/source/api.rst
@@ -194,7 +194,6 @@ Metrics
     BrierScore
     Disagreement
     Entropy
-    JensenShannonDivergence
     MutualInformation
     NegativeLogLikelihood
 
