From 4ddd1e0e18648d5c62206f3d52481867927a711d Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Mon, 12 Aug 2024 11:01:46 +0000
Subject: [PATCH] build based on 7f7ffe7

---
 dev/.documenter-siteinfo.json |  2 +-
 dev/DeepAR/index.html         |  2 +-
 dev/Examples/index.html       |  2 +-
 dev/Gans/index.html           |  2 +-
 dev/index.html                | 12 ++++++------
 5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index 714480b..6d87ba6 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-08-02T16:26:11","documenter_version":"1.5.0"}}
+{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-08-12T11:01:22","documenter_version":"1.5.0"}}

diff --git a/dev/DeepAR/index.html b/dev/DeepAR/index.html
index aed95cc..2398533 100644
--- a/dev/DeepAR/index.html
+++ b/dev/DeepAR/index.html
@@ -17,4 +17,4 @@

# Perform forecasting
t₀, τ = 100, 20
predictions = forecasting_DeepAR(model, collect(loaderXtrain)[1], t₀, τ; n_samples=100)

References

diff --git a/dev/Examples/index.html b/dev/Examples/index.html
index 008c0c2..96058aa 100644
--- a/dev/Examples/index.html
+++ b/dev/Examples/index.html
@@ -151,4 +151,4 @@

)
plot(prediction[1:τ])
plot!(xtest[1:τ])
end

Example Image

Example Image

diff --git a/dev/Gans/index.html b/dev/Gans/index.html
index 50d00f6..9b54137 100644
--- a/dev/Gans/index.html
+++ b/dev/Gans/index.html
@@ -1,2 +1,2 @@

GANs · ISL

Generative Adversarial Networks (GANs) Module Overview

This repository includes a dedicated folder with implementations of several Generative Adversarial Networks (GANs), showcasing a variety of approaches within the GAN framework. Our collection includes:

  • Vanilla GAN: Based on the foundational GAN concept introduced in "Generative Adversarial Nets" by Goodfellow et al. This implementation adapts and modifies the code from the FluxGAN repository to fit our testing needs.

  • WGAN (Wasserstein GAN): Implements the Wasserstein GAN as described in "Wasserstein GAN" by Arjovsky et al., which improves training stability in GANs. As with the Vanilla GAN, we use a slightly adjusted implementation from the FluxGAN repository.

  • MMD-GAN (Maximum Mean Discrepancy GAN): Our implementation of MMD-GAN is inspired by the paper "MMD GAN: Towards Deeper Understanding of Moment Matching Network" by Li et al. Unlike the previous models, the MMD-GAN implementation has been rewritten in Julia from the original Python code provided by the authors.

Objective

The primary goal of incorporating these GAN models into our repository is to evaluate the effectiveness of ISL (Invariant Statistical Learning) methods as regularizers for GAN-based solutions. Specifically, we aim to address the challenges presented in the "Helvetica scenario," exploring how ISL methods can enhance the robustness and generalization of GANs in generating high-quality synthetic data.

Implementation Details

For each GAN variant mentioned above, we have made certain adaptations to the original implementations to ensure compatibility with our testing framework and the objectives of the ISL method integration. These modifications range from architectural adjustments to changes in the optimization process, and aim to maximize the performance and efficacy of the ISL regularizers within the GAN context.

We encourage interested researchers and practitioners to explore the implementations and consider the potential of ISL methods in improving GAN architectures. For more detailed insights into the modifications and specific implementation choices, please refer to the code and accompanying documentation within the respective folders for each GAN variant.

diff --git a/dev/index.html b/dev/index.html
index ac6c086..2756b1d 100644
--- a/dev/index.html
+++ b/dev/index.html
@@ -1,11 +1,11 @@

Home · ISL

ISL.jl Documentation Guide

Note

This repository contains the Julia Flux implementation of the Invariant Statistical Loss (ISL) proposed in the paper Training Implicit Generative Models via an Invariant Statistical Loss, published at the AISTATS 2024 conference.

Welcome to the documentation for ISL.jl, a Julia package designed for Invariant Statistical Learning. This guide provides a systematic overview of the modules, constants, types, and functions available in ISL.jl. Our documentation aims to help you quickly find the information you need to effectively utilize the package.

ISL.ISL (Module)

The ISL repository is organized into several directories that encapsulate different aspects of the project, ranging from the core source code and custom functionalities to examples demonstrating the application of the project's capabilities, as well as testing frameworks to ensure reliability.

Source Code (src/)

  • CustomLossFunction.jl: This file contains implementations of the ISL custom loss function tailored for the models developed within the repository.

  • ISL.jl: The main module file of the repository; it aggregates and exports the functionality developed in CustomLossFunction.jl.

Examples (examples/)

  • time_series_predictions/: This subdirectory showcases how the ISL project's models can be applied to time series prediction tasks.

  • Learning1d_distribution/: Focuses on the task of learning 1D distributions with the ISL.

Testing Framework (test/)

  • runtests.jl: Runs the automated test suite for the ISL.jl module.
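
To run the suite locally, the standard Pkg workflow should suffice (a sketch, assuming Julia is started from the repository root):

using Pkg
Pkg.activate(".")   # activate the ISL project environment
Pkg.test()          # runs test/runtests.jl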
source
ISL.AutoISLParams (Type)
AutoISLParams

Hyperparameters for the method invariant_statistical_loss

@with_kw struct AutoISLParams
     samples::Int64 = 1000
     epochs::Int64 = 100
     η::Float64 = 1e-3
     max_k::Int64 = 10
     transform = Normal(0.0f0, 1.0f0)
end;
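
Because the struct is declared with @with_kw, keyword construction with selective overrides should work; a minimal sketch (the chosen values are illustrative):

using ISL, Distributions
hparams = AutoISLParams(; max_k=20, transform=Normal(0.0f0, 1.0f0))  # remaining fields keep their defaults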
source
ISL.HyperParamsTS (Type)
HyperParamsTS

Hyperparameters for the method ts_adaptative_block_learning

Base.@kwdef mutable struct HyperParamsTS
     seed::Int = 42                              # Random seed
     dev = cpu                                   # Device: cpu or gpu
     η::Float64 = 1e-3                           # Learning rate
@@ -13,13 +13,13 @@
     noise_model = Normal(0.0f0, 1.0f0)          # Noise to add to the data
     window_size = 100                           # Window size for the histogram
     K = 10                                      # Number of simulated observations
end
source
ISL.ISLParams (Type)
ISLParams

Hyperparameters for the method adaptative_block_learning

@with_kw struct ISLParams
     samples::Int64 = 1000               # number of samples per histogram
     K::Int64 = 2                        # number of simulated observations
     epochs::Int64 = 100                 # number of epochs
     η::Float64 = 1e-3                   # learning rate
     transform = Normal(0.0f0, 1.0f0)    # transform to apply to the data
end;
source
ISL.invariant_statistical_loss (Function)
invariant_statistical_loss(model, data, hparams)

Custom loss function for the model. model is a Flux neural network model, data is a Flux DataLoader object, and hparams is a HyperParams object.

Arguments

  • nn_model::Flux.Chain: a Flux neural network model
  • data::Flux.DataLoader: a Flux DataLoader object
  • hparams::HyperParams: a HyperParams object with the training hyperparameters
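
Example

A usage sketch (not an excerpt from the package sources): it assumes the ISLParams struct documented above is the concrete hyperparameter object expected here, and that the function returns the recorded losses.

using Flux, Distributions, ISL

# Small generator mapping 1-D noise to 1-D samples (illustrative architecture)
model = Chain(Dense(1, 16, elu), Dense(16, 16, elu), Dense(16, 1))

# Hypothetical 1-D target distribution to learn
target = Normal(4.0f0, 2.0f0)

hparams = ISLParams(; samples=1000, K=10, epochs=100, η=1e-2, transform=Normal(0.0f0, 1.0f0))

# One batch of `samples` real observations per training step
train_set = Float32.(rand(target, hparams.samples * hparams.epochs))
loader = Flux.DataLoader(train_set; batchsize=hparams.samples, shuffle=true, partial=false)

losses = invariant_statistical_loss(model, loader, hparams)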
source
ISL.auto_invariant_statistical_loss (Function)
auto_invariant_statistical_loss(model, data, hparams)

Custom loss function for the model.

This method gradually adapts K (starting from 2) up to max_k (inclusive). The value of K is chosen based on a simple two-sample test between the histogram associated with the obtained result and the uniform distribution.

To see the value of K used in the test, set the logger level to debug before executing.
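
For example, with the standard library Logging (a minimal sketch; any logger configuration that enables Debug-level messages works):

using Logging
global_logger(ConsoleLogger(stderr, Logging.Debug))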

Arguments

  • model::Flux.Chain: a Flux neural network model
  • data::Flux.DataLoader: a Flux DataLoader object
  • hparams::AutoAdaptativeHyperParams: an AutoAdaptativeHyperParams object
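
Example

Usage mirrors the sketch given above for invariant_statistical_loss; only the hyperparameter object changes (a sketch, assuming the AutoISLParams struct documented above plays the role of the AutoAdaptativeHyperParams object named here):

hparams = AutoISLParams(; max_k=10, samples=1000, epochs=100, η=1e-3, transform=Normal(0.0f0, 1.0f0))
losses = auto_invariant_statistical_loss(model, loader, hparams)  # model and loader as in the sketch above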
source
ISL.ts_invariant_statistical_loss_one_step_prediction (Function)
ts_invariant_statistical_loss_one_step_prediction(rec, gen, Xₜ, Xₜ₊₁, hparams) -> losses

Compute the loss for one-step-ahead predictions in a time series using a recurrent model and a generative model.

Arguments

  • rec: The recurrent model that processes the input time series data Xₜ to generate a hidden state.
  • gen: The generative model that, based on the hidden state produced by rec, predicts the next time step in the series.
  • Xₜ: Input time series data at time t, used as input to the recurrent model.
  • Xₜ₊₁: Actual time series data at time t+1, used for calculating the prediction loss.
  • hparams: A struct of hyperparameters for the training process, which includes:
    • η: Learning rate for the optimizers.
    • K: The number of noise samples to generate for prediction.
    • window_size: The segment length of the time series data to process in each training iteration.
    • noise_model: The model to generate noise samples for the prediction process.

Returns

  • losses: A list of loss values computed for each iteration over the batches of data.

Description

This function iterates over batches of time series data, utilizing a sliding window approach determined by hparams.window_size to process segments of the series. In each iteration, it computes a hidden state using the recurrent model rec, generates predictions for the next time step with the generative model gen based on noise samples and the hidden state, and calculates the loss based on these predictions and the actual data Xₜ₊₁. The function updates both models using the Adam optimizer with gradients derived from the loss.

Example

# Define your recurrent and generative models
 rec = Chain(RNN(1 => 3, relu), RNN(3 => 3, relu))
 gen = Chain(Dense(4, 10, identity), Dense(10, 1, identity))
 
@@ -31,7 +31,7 @@
 hparams = HyperParamsTS(; seed=1234, η=1e-2, epochs=2000, window_size=1000, K=10)
 
 # Compute the losses
losses = ts_invariant_statistical_loss_one_step_prediction(rec, gen, Xₜ, Xₜ₊₁, hparams)
source
ISL.ts_invariant_statistical_loss (Function)
ts_invariant_statistical_loss(rec, gen, Xₜ, Xₜ₊₁, hparams)

Train a model for time series data with the statistical invariance loss method.

Arguments

  • rec: The recurrent neural network (RNN) responsible for encoding the time series data.
  • gen: The generative model used for generating future time series data.
  • Xₜ: An array of input time series data at time t.
  • Xₜ₊₁: An array of target time series data at time t+1.
  • hparams::NamedTuple: A structure containing hyperparameters for training. It should include the following fields:
    • η::Float64: Learning rate for optimization.
    • window_size::Int: Size of the sliding window used during training.
    • K::Int: Number of samples in the generative model.
    • noise_model: Noise model used for generating random noise.

Returns

  • losses::Vector{Float64}: A vector containing the training loss values for each iteration.

Description

This function trains a model for time series data with the statistical invariance loss method. It utilizes a recurrent neural network (rec) to encode the time series data at time t and a generative model (gen) to generate the time series data at time t+1. The training process involves optimizing both the rec and gen models.

The function iterates through the provided time series data (Xₜ and Xₜ₊₁) in batches, with a sliding window of size window_size.
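
Example

A usage sketch mirroring the examples shown for the one-step-ahead and multivariate variants on this page; the data preparation (a synthetic AR(1) series split into Xₜ and its one-step shift Xₜ₊₁) is an assumption for illustration:

using Flux, Distributions, ISL

# Recurrent encoder and generative decoder (same shapes as in the sibling examples)
rec = Chain(RNN(1 => 3, relu), RNN(3 => 3, relu))
gen = Chain(Dense(4, 10, identity), Dense(10, 1, identity))

# Hypothetical univariate series: an AR(1) process
n = 2000
series = zeros(Float32, n)
for t in 2:n
    series[t] = 0.8f0 * series[t - 1] + 0.1f0 * randn(Float32)
end

hparams = HyperParamsTS(; seed=1234, η=1e-2, epochs=2000, window_size=1000, K=10)

# One-step-ahead pairs: inputs up to t, targets shifted by one
Xₜ = Flux.DataLoader(series[1:(end - 1)]; batchsize=1000, shuffle=false, partial=false)
Xₜ₊₁ = Flux.DataLoader(series[2:end]; batchsize=1000, shuffle=false, partial=false)

losses = ts_invariant_statistical_loss(rec, gen, Xₜ, Xₜ₊₁, hparams)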

source
ISL.ts_invariant_statistical_loss_multivariate (Function)
ts_invariant_statistical_loss_multivariate(rec, gen, Xₜ, Xₜ₊₁, hparams) -> losses

Calculate the time series invariant statistical loss for multivariate data using recurrent and generative models.

Arguments

  • rec: The recurrent model to process input time series data Xₜ.
  • gen: The generative model that works in conjunction with rec to generate the next time step predictions.
  • Xₜ: The input time series data at time t.
  • Xₜ₊₁: The actual time series data at time t+1 for loss calculation.
  • hparams: A struct containing hyperparameters for the model. Expected fields include:
    • η: Learning rate for the Adam optimizer.
    • K: The number of samples to draw from the noise model.
    • window_size: The size of the window to process the time series data in chunks.
    • noise_model: The statistical model to generate noise samples for the generative model.

Returns

  • losses: An array containing the loss values computed for each batch in the dataset.

Description

This function iterates over the provided time series data Xₜ and Xₜ₊₁, processing each batch through the recurrent model rec to generate a state s, which is then used along with samples from noise_model to generate predictions with gen. The loss is calculated based on the difference between the generated predictions and the actual data Xₜ₊₁, and the models are updated using the Adam optimizer.

Example

# Define your recurrent and generative models here
 rec = Chain(RNN(1 => 3, relu), RNN(3 => 3, relu))
 gen = Chain(Dense(4, 10, identity), Dense(10, 1, identity))
 
@@ -43,4 +43,4 @@
 hparams = HyperParamsTS(; seed=1234, η=1e-2, epochs=2000, window_size=1000, K=10)
 
 # Calculate the losses
losses = ts_invariant_statistical_loss_multivariate(rec, gen, Xₜ, Xₜ₊₁, hparams)
source