Commit

Deploy GitHub Pages

actions-user committed Apr 15, 2024
0 parents commit da44c58
Showing 193 changed files with 23,864 additions and 0 deletions.
Empty file added .nojekyll
430 changes: 430 additions & 0 deletions AUTHORS.html
460 changes: 460 additions & 0 deletions CHANGELOG.html
638 changes: 638 additions & 0 deletions README-copy.html
553 changes: 553 additions & 0 deletions _autosummary/torchsurv.loss.cox.html
624 changes: 624 additions & 0 deletions _autosummary/torchsurv.loss.momentum.html
680 changes: 680 additions & 0 deletions _autosummary/torchsurv.loss.weibull.html
838 changes: 838 additions & 0 deletions _autosummary/torchsurv.metrics.auc.html
807 changes: 807 additions & 0 deletions _autosummary/torchsurv.metrics.brier_score.html
765 changes: 765 additions & 0 deletions _autosummary/torchsurv.metrics.cindex.html
502 changes: 502 additions & 0 deletions _autosummary/torchsurv.stats.ipcw.html
584 changes: 584 additions & 0 deletions _autosummary/torchsurv.stats.kaplan_meier.html
Binary file added _images/benchmark_auc.png
Binary file added _images/benchmark_brier_score.png
Binary file added _images/benchmark_cox.png
Binary file added _images/benchmark_harrell_cindex.png
Binary file added _images/benchmark_uno_cindex.png
Binary file added _images/benchmark_weibull.png
Binary file added _images/notebooks_introduction_19_0.png
Binary file added _images/notebooks_introduction_38_0.png
Binary file added _images/notebooks_momentum_9_0.png
Binary file added _images/table_python_benchmark.png
Binary file added _images/table_python_benchmark_legend.png
7 changes: 7 additions & 0 deletions _sources/AUTHORS.md
@@ -0,0 +1,7 @@
Contributors
============

* Thibaud Coroller <[email protected]>
* Mélodie Monod <[email protected]>
* Peter Krusche <[email protected]>
* Qian Cao <[email protected]>
15 changes: 15 additions & 0 deletions _sources/CHANGELOG.md
@@ -0,0 +1,15 @@
Change log
=========

Version 0.1.1
-------------

* Added metrics classes (AUC, Cindex, Brier score)
* Added Kaplan-Meier estimator
* Created Sphinx documentation
* Added R benchmark comparison

Version 0.1.0
-------------

Initial release of CoxPH and Weibull classes.
214 changes: 214 additions & 0 deletions _sources/README-copy.md
@@ -0,0 +1,214 @@
# Survival analysis made easy

> :warning: :construction: **We are still working on the publication of this project and appreciate your patience until everything is ready.** :construction: :warning:

`TorchSurv` is a Python package that serves as a companion tool to perform deep survival modeling within the `PyTorch` environment. Unlike existing libraries that impose specific parametric forms on users, `TorchSurv` enables the use of custom `PyTorch`-based deep survival models. With its lightweight design, minimal input requirements, full `PyTorch` backend, and freedom from restrictive survival model parameterizations, `TorchSurv` facilitates efficient survival model implementation and is particularly beneficial in high-dimensional input data scenarios.

# TL;DR

Our idea is to **keep things simple**. You are free to use any model architecture you want! Our code has a 100% PyTorch backend and behaves like any other function (loss or metric) you may already be familiar with.

Our functions are designed to support you, not to make you jump through hoops. Here's pseudocode illustrating how easy it is to use `TorchSurv` to fit and evaluate a Cox proportional hazards model:

```python
from torchsurv.loss import cox
from torchsurv.metrics.cindex import ConcordanceIndex

# Pseudo training loop
for data in dataloader:
    x, event, time = data
    estimate = model(x)  # shape = torch.Size([64, 1]), if batch size is 64
    loss = cox.neg_partial_log_likelihood(estimate, event, time)
    loss.backward()  # native torch backend

# You can check model performance using our evaluation metrics, e.g., the concordance index with
cindex = ConcordanceIndex()
cindex(estimate, event, time)

# You can obtain the confidence interval of the c-index
cindex.confidence_interval()

# You can test whether the observed c-index is greater than 0.5 (random estimator)
cindex.p_value(method="noether", alternative="two_sided")

# You can even compare the metrics between two models (e.g., vs. model B)
cindex.compare(cindexB)
```

# Installation

First, install the package:

```bash
pip install torchsurv
```

or, for a local installation (from the package root):

```bash
pip install -e .
```

If you use Conda, you can install requirements into a conda environment
using the `environment.yml` file included in the `dev` subfolder of the source repository.
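
For example, a minimal sketch (assuming you have cloned the repository and that the file sits at `dev/environment.yml`; the environment name below is a placeholder, so use the name defined in the yml file):

```bash
# create a conda environment from the file shipped in the dev subfolder
conda env create -f dev/environment.yml
# activate it (replace "torchsurv" with the name defined in environment.yml)
conda activate torchsurv
```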

# Getting started

We recommend starting with the [introductory guide](https://opensource.nibr.com/torchsurv/notebooks/introduction.html), where you'll find an overview of the package's functionalities.

## Survival data

We simulate a random batch of 64 subjects. Each subject is associated with a binary event status (= `True` if the event occurred), a time-to-event or censoring time, and 16 covariates.

```python
>>> import torch
>>> _ = torch.manual_seed(52)
>>> n = 64
>>> x = torch.randn((n, 16))
>>> event = torch.randint(low=0, high=2, size=(n,)).bool()
>>> time = torch.randint(low=1, high=100, size=(n,)).float()
```

## Cox proportional hazards model

The user is expected to have defined a model that outputs the estimated *log relative hazard* for each subject. For illustrative purposes, we define a simple linear model that generates a linear combination of the covariates.

```python
>>> from torch import nn
>>> model_cox = nn.Sequential(nn.Linear(16, 1))
>>> log_hz = model_cox(x)
>>> print(log_hz.shape)
torch.Size([64, 1])
```

Given the estimated log relative hazard and the survival data, we calculate the current loss for the batch with:

```python
>>> from torchsurv.loss.cox import neg_partial_log_likelihood
>>> loss = neg_partial_log_likelihood(log_hz, event, time)
>>> print(loss)
tensor(4.1723, grad_fn=<DivBackward0>)
```

We obtain the concordance index for this batch with:

```python
>>> from torchsurv.metrics.cindex import ConcordanceIndex
>>> with torch.no_grad(): log_hz = model_cox(x)
>>> cindex = ConcordanceIndex()
>>> print(cindex(log_hz, event, time))
tensor(0.4872)
```
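
As in the TL;DR above, the fitted `ConcordanceIndex` object can also report uncertainty for this batch. A sketch reusing the calls shown there (default settings assumed, outputs omitted):

```python
>>> ci = cindex.confidence_interval()  # confidence interval of the c-index (default settings assumed)
>>> pval = cindex.p_value(method="noether", alternative="two_sided")  # test against a random (c-index = 0.5) estimator
```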

We obtain the Area Under the Receiver Operating Characteristic Curve (AUC) at a new time t = 50 for this batch with:

```python
>>> from torchsurv.metrics.auc import Auc
>>> new_time = torch.tensor(50.)
>>> auc = Auc()
>>> print(auc(log_hz, event, time, new_time=new_time))
tensor([0.4737])
```

## Weibull accelerated failure time (AFT) model

The user is expected to have defined a model that outputs, for each subject, the estimated *log scale* and optionally the *log shape* of the Weibull distribution that the event density follows. If the model has a single output, `TorchSurv` assumes that the shape is equal to 1, so that the event density reduces to an exponential distribution solely parametrized by the scale.
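
As a sketch of this single-output case, reusing the simulated `x`, `event` and `time` from above and assuming `neg_log_likelihood` accepts a `(n, 1)` tensor containing only the log scale:

```python
>>> from torch import nn
>>> from torchsurv.loss.weibull import neg_log_likelihood
>>> model_exp = nn.Sequential(nn.Linear(16, 1))  # single output per subject: the log scale only
>>> log_scale = model_exp(x)                     # shape = torch.Size([64, 1]); shape parameter implicitly set to 1
>>> loss_exp = neg_log_likelihood(log_scale, event, time)
```

The remainder of this section uses the full two-parameter Weibull model.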

For illustrative purposes, we define a simple linear model that estimates two linear combinations of the covariates (the log scale and log shape parameters).

```python
>>> from torch import nn
>>> model_weibull = nn.Sequential(nn.Linear(16, 2))
>>> log_params = model_weibull(x)
>>> print(log_params.shape)
torch.Size([64, 2])
```

Given the estimated log scale and log shape and the survival data, we calculate the current loss for the batch with:

```python
>>> from torchsurv.loss.weibull import neg_log_likelihood
>>> loss = neg_log_likelihood(log_params, event, time)
>>> print(loss)
tensor(82931.5078, grad_fn=<DivBackward0>)
```

To evaluate the predictive performance of the model, we calculate the subject-specific log hazard and survival function evaluated at all times with:

```python
>>> from torchsurv.loss.weibull import log_hazard
>>> from torchsurv.loss.weibull import survival_function
>>> with torch.no_grad(): log_params = model_weibull(x)
>>> log_hz = log_hazard(log_params, time)
>>> print(log_hz.shape)
torch.Size([64, 64])
>>> surv = survival_function(log_params, time)
>>> print(surv.shape)
torch.Size([64, 64])
```

We obtain the concordance index for this batch with:

```python
>>> from torchsurv.metrics.cindex import ConcordanceIndex
>>> cindex = ConcordanceIndex()
>>> print(cindex(log_hz, event, time))
tensor(0.4062)
```

We obtain the AUC at a new time t = 50 for this batch with:

```python
>>> from torchsurv.metrics.auc import Auc
>>> new_time = torch.tensor(50.)
>>> log_hz_t = log_hazard(log_params, time=new_time)
>>> auc = Auc()
>>> print(auc(log_hz_t, event, time, new_time=new_time))
tensor([0.3509])
```

We obtain the integrated Brier score with:

```python
>>> from torchsurv.metrics.brier_score import BrierScore
>>> brier_score = BrierScore()
>>> bs = brier_score(surv, event, time)
>>> print(brier_score.integral())
tensor(0.4447)
```

# Related Packages

The table below compares the functionalities of `TorchSurv` with those of
[auton-survival](https://proceedings.mlr.press/v182/nagpal22a.html),
[pycox](http://jmlr.org/papers/v20/18-424.html),
[torchlife](https://sachinruk.github.io/torchlife//index.html),
[scikit-survival](https://jmlr.org/papers/v21/20-729.html),
[lifelines](https://joss.theoj.org/papers/10.21105/joss.01317), and
[deepsurv](https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-018-0482-1).
While several libraries offer survival modelling functionalities, no existing library provides the flexibility to use custom PyTorch-based neural networks to define the survival model parameters.

The outputs of both the log-likelihood functions and the evaluation metrics have been thoroughly compared against benchmarks generated with other Python and R packages. The comparisons are summarised in the [Related packages summary](https://opensource.nibr.com/torchsurv/benchmarks.html).

![Survival analysis libraries in Python](source/table_python_benchmark.png)
![Legend for the survival analysis libraries table](source/table_python_benchmark_legend.png)

# Contributing

We value contributions from the community to enhance and improve this project. If you'd like to contribute, please consider the following:

1. Create Issues: If you encounter bugs, have feature requests, or want to suggest improvements, please create an [issue](https://github.com/Novartis/torchsurv/issues) in the GitHub repository. Make sure to provide detailed information about the problem (including code for reproducibility) or about the enhancement you're proposing.

2. Fork and Pull Requests: If you're willing to address an existing issue or contribute a new feature, fork the repository, create a new branch, make your changes, and then submit a pull request. Please ensure your code follows our coding conventions and include tests for any new functionality.

By contributing to this project, you agree to license your contributions under the same license as this project.

# Contact

If you have any questions, suggestions, or feedback, feel free to reach out to [us](https://opensource.nibr.com/torchsurv/AUTHORS.html).

# Cite

If you use this project in academic work or publications, we would appreciate a citation using the following BibTeX entry:

TEMPORARY PLACEHOLDER.
29 changes: 29 additions & 0 deletions _sources/_autosummary/torchsurv.loss.cox.rst
@@ -0,0 +1,29 @@
torchsurv.loss.cox
==================

.. automodule:: torchsurv.loss.cox

.. rubric:: Functions

.. autosummary::

   neg_partial_log_likelihood

29 changes: 29 additions & 0 deletions _sources/_autosummary/torchsurv.loss.momentum.rst
@@ -0,0 +1,29 @@
torchsurv.loss.momentum
=======================

.. automodule:: torchsurv.loss.momentum

.. rubric:: Classes

.. autosummary::

   Momentum

32 changes: 32 additions & 0 deletions _sources/_autosummary/torchsurv.loss.weibull.rst
@@ -0,0 +1,32 @@
torchsurv.loss.weibull
======================

.. automodule:: torchsurv.loss.weibull

.. rubric:: Functions

.. autosummary::

   cumulative_hazard
   log_hazard
   neg_log_likelihood
   survival_function

29 changes: 29 additions & 0 deletions _sources/_autosummary/torchsurv.metrics.auc.rst
@@ -0,0 +1,29 @@
torchsurv.metrics.auc
=====================

.. automodule:: torchsurv.metrics.auc

.. rubric:: Classes

.. autosummary::

   Auc

29 changes: 29 additions & 0 deletions _sources/_autosummary/torchsurv.metrics.brier_score.rst
@@ -0,0 +1,29 @@
torchsurv.metrics.brier\_score
==============================

.. automodule:: torchsurv.metrics.brier_score

.. rubric:: Classes

.. autosummary::

   BrierScore