Rebuttal #5

Merged: 2 commits, Feb 11, 2024
2 changes: 1 addition & 1 deletion docs/src/gan.md
@@ -2,6 +2,6 @@

In this repository, we have included a folder with different generative adversarial networks, GANs: [vanilla GAN](https://arxiv.org/pdf/1406.2661.pdf), [WGAN](https://arxiv.org/pdf/1701.07875.pdf), [MMD-GAN](https://arxiv.org/pdf/1705.08584.pdf).

- In the first two cases, we have used the implementation from this [repoistory](https://github.com/AdarshKumar712/FluxGAN), with some minor changes. In the last case, we have rewritten the original [code](https://github.com/OctoberChang/MMD-GAN) written in Python to Julia.
+ In the first two cases, we have used the implementation from this [repository](https://github.com/AdarshKumar712/FluxGAN), with some minor changes. In the last case, we have rewritten the original [code](https://github.com/OctoberChang/MMD-GAN) written in Python to Julia.

The goal is to test that the AdapativeBlockLearning methods can work as regularizers for the solutions proposed by the GANs, providing a solution to the Helvetica scenario.
29 changes: 29 additions & 0 deletions examples/CostFunction.jl
@@ -1,6 +1,35 @@
using ThreadsX
using Plots

"""
This script defines two Julia functions, `proxi_cost_function` and `real_cost_function`,
intended for computing the cost function of a model in relation to a target function.
Both functions calculate the cost for varying parameters `m` (slope) and `b` (intercept)
across a grid defined by a mesh of these parameters.

- `proxi_cost_function` takes a meshgrid, a model function, a target function, a number of
parameter combinations (`K`), and a number of samples for Monte Carlo integration
(`n_samples`). It returns a vector of losses for each combination of mesh parameters.
The function estimates the loss by generating samples, applying the model and target
functions to these samples, and then computing a loss based on the divergence of the
model's output from the target's output.

- `real_cost_function` is similar to `proxi_cost_function` but calculates the cost based on
a direct comparison of the model's output against the target function over the specified
mesh. It also involves counting occurrences within a specified window to compute the loss.

Both functions illustrate a method for evaluating the performance of a model function
against a target, useful in optimization and machine learning contexts to adjust model
parameters (`m` and `b`) to minimize the loss.

Additionally, the script demonstrates how to use these functions with a simple linear
model (`model(x; m, b) = m * x + b`) and a predefined `real_model`. It builds a mesh of
parameters around an initial guess for `m` and `b`, computes losses with both the proxy
and real cost functions, and plots the resulting cost-function landscape to visualize
the regions of minimum loss, i.e. how well the model approximates the target across
different parameter combinations (a simplified sketch of this workflow appears right
after this docstring).
"""

"""
proxi_cost_function(mesh, model, target, K, n_samples)
