Cleaning the Todos (#13)
* Cleaning the Todos

---------

Co-authored-by: tmigot <[email protected]>
farhadrclass and tmigot authored Jul 12, 2023
1 parent 54a840d commit d2cf7b9
Showing 6 changed files with 13 additions and 22 deletions.
4 changes: 1 addition & 3 deletions README.md
@@ -4,7 +4,6 @@
[![Dev](https://img.shields.io/badge/docs-dev-blue.svg)](https://JuliaSmoothOptimizers.github.io/FluxNLPModels.jl/dev)
[![Build Status](https://github.com/JuliaSmoothOptimizers/FluxNLPModels.jl/workflows/CI/badge.svg)](https://github.com/JuliaSmoothOptimizers/FluxNLPModels.jl/actions)
[![Codecov](https://codecov.io/gh/JuliaSmoothOptimizers/FluxNLPModels.jl/branch/main/graph/badge.svg)](https://codecov.io/gh/JuliaSmoothOptimizers/FluxNLPModels.jl)
<!-- TODO check the links -->

This package serves as an NLPModels interface to the [Flux.jl](https://github.com/FluxML/Flux.jl) deep learning framework. It enables seamless integration between Flux's neural network architectures and NLPModels' nonlinear optimization tools, so that training a Flux network can be treated as an optimization problem.

@@ -13,8 +12,7 @@ This package serves as an NLPModels interface to the [Flux.jl](https://github.co
To use FluxNLPModels, add the package in the Julia package manager:

```julia
# pkg> add FluxNLPModels
pkg> add https://github.com/JuliaSmoothOptimizers/FluxNLPModels.jl.git
pkg> add FluxNLPModels
```

## How to Use
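For reference, a minimal sketch of what the cleaned-up installation leads to: wrapping a Flux chain as a `FluxNLPModel`. The constructor arguments (chain, training data, test data; `loss_f`) are an assumption, not something this README excerpt states.

```julia
# Minimal sketch, not taken from the README: build a tiny classifier and wrap it
# as an NLPModel. The FluxNLPModel constructor signature is assumed here.
using Flux, FluxNLPModels, NLPModels
using Flux.Data: DataLoader   # matches test/runtests.jl; newer Flux uses MLUtils.DataLoader

x = rand(Float32, 4, 32)                   # 32 samples with 4 features each
y = Flux.onehotbatch(rand(0:1, 32), 0:1)   # two-class one-hot labels
loader = DataLoader((x, y), batchsize = 8)

chain = Chain(Dense(4 => 8, relu), Dense(8 => 2))
nlp = FluxNLPModel(chain, loader, loader; loss_f = Flux.logitcrossentropy)

fx = obj(nlp, nlp.w)   # loss at the current flat weights, via the NLPModels API
```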
1 change: 0 additions & 1 deletion docs/make.jl
@@ -1,4 +1,3 @@
#TODO redo this section
using Documenter, FluxNLPModels

makedocs(
16 changes: 5 additions & 11 deletions docs/src/index.md
@@ -1,19 +1,18 @@
#TODO redo this section
# FluxNLPModels.jl

## Compatibility
Julia ≥ 1.6.

## How to install
TODO: this section needs work since our package is not yet register

This module can be installed with the following command:
```julia
# pkg> add FluxNLPModels
# pkg> test FluxNLPModels
pkg> add FluxNLPModels
```

## Synopsis
FluxNLPModels exposes neural network models as optimization problems conforming to the NLPModels.jl API. FluxNLPModels is an interface between [Flux.jl](https://github.com/FluxML/Flux.jl)'s classification neural networks and [NLPModels.jl](https://github.com/JuliaSmoothOptimizers/NLPModels.jl.git).

FluxNLPModels exposes neural network models as optimization problems conforming to the [NLPModels API](https://github.com/JuliaSmoothOptimizers/NLPModels.jl). FluxNLPModels is an interface between [Flux.jl](https://github.com/FluxML/Flux.jl)'s classification neural networks and [NLPModels.jl](https://github.com/JuliaSmoothOptimizers/NLPModels.jl).

A `FluxNLPModel` gives the user access to:
- The values of the neural network variables/weights `w`;
@@ -25,12 +24,7 @@ In addition, it provides tools to:
- Retrieve the current minibatch ;
- Measure the neural network's loss at the current `w`.

## How to use
Check the [tutorial](https://jso.dev/FluxNLPModels.jl/dev/tutorial/).

# Bug reports and discussions

If you think you found a bug, feel free to open an [issue](https://github.com/JuliaSmoothOptimizers/FluxNLPModels.jl/issues). TODO: add repo link
Focused suggestions and requests can also be opened as issues. Before opening a pull request, please start an issue or a discussion on the topic.
If you encounter any bugs or have suggestions for improvement, please open an [issue](https://github.com/JuliaSmoothOptimizers/FluxNLPModels.jl/issues). For general questions or discussions related to this repository and the [JuliaSmoothOptimizers](https://github.com/JuliaSmoothOptimizers) organization, feel free to start a discussion [here](https://github.com/JuliaSmoothOptimizers/Organization/discussions).

If you have a question that is not suited for a bug report, feel free to start a discussion [here](#TODO). This forum is for general discussion about this repository and the [JuliaSmoothOptimizers](https://github.com/JuliaSmoothOptimizers). Questions about any of our packages are welcome.
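A hedged sketch of the access pattern the index describes (weights `w`, loss and gradient at the current `w`), using the generic NLPModels calls `obj` and `objgrad!`. The constructor call is the same assumption as in the README sketch above.

```julia
# Sketch of NLPModels-style access to a FluxNLPModel; the constructor call is assumed.
using Flux, FluxNLPModels, NLPModels
using Flux.Data: DataLoader

x = rand(Float32, 4, 32)
y = Flux.onehotbatch(rand(0:1, 32), 0:1)
loader = DataLoader((x, y), batchsize = 8)
nlp = FluxNLPModel(Chain(Dense(4 => 8, relu), Dense(8 => 2)), loader, loader;
                   loss_f = Flux.logitcrossentropy)

w = copy(nlp.w)              # the neural network variables/weights `w`
g = similar(w)
f, _ = objgrad!(nlp, w, g)   # loss and gradient at the current `w`, in one call

w .-= 0.1f0 .* g             # one plain gradient step on the flat weights
@show obj(nlp, w)            # loss at the updated weights
```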
6 changes: 3 additions & 3 deletions src/FluxNLPModels.jl
@@ -32,9 +32,9 @@ mutable struct FluxNLPModel{T, S, C <: Chain, F <: Function} <: AbstractFluxNLPM
chain::C
counters::Counters
loss_f::F
size_minibatch::Int #TODO remove this
training_minibatch_iterator #TODO remove this, right now we pass the data
test_minibatch_iterator #TODO remove this
size_minibatch::Int
training_minibatch_iterator
test_minibatch_iterator
current_training_minibatch
current_test_minibatch
rebuild # this is used to create the rebuild of flat function
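The `rebuild` field above stores a closure that reconstructs the chain from a flat weight vector. A minimal sketch of that idea using `Flux.destructure`; whether the package obtains `rebuild` exactly this way is an assumption.

```julia
# Sketch of the role of the `rebuild` field: Flux.destructure splits a chain into a
# flat parameter vector and a closure that rebuilds the chain from such a vector.
# Whether the constructor obtains `rebuild` exactly this way is an assumption.
using Flux

chain = Chain(Dense(4 => 8, relu), Dense(8 => 2))
w, rebuild = Flux.destructure(chain)   # flat Float32 vector + reconstruction closure

x = rand(Float32, 4, 3)
@assert rebuild(w)(x) ≈ chain(x)       # rebuilding from the same w reproduces the chain
```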
2 changes: 1 addition & 1 deletion src/utils.jl
@@ -3,7 +3,7 @@
Sets the variables and rebuilds the chain
"""
function set_vars!(nlp::AbstractFluxNLPModel{T, S}, new_w::AbstractVector{T}) where {T <: Number, S} #TODO test T
function set_vars!(nlp::AbstractFluxNLPModel{T, S}, new_w::AbstractVector{T}) where {T <: Number, S}
nlp.w .= new_w
nlp.chain = nlp.rebuild(nlp.w)
end
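A hedged usage sketch for `set_vars!` as documented above: it copies a new flat weight vector into the model and rebuilds the Flux chain. The `FluxNLPModel` construction is again assumed.

```julia
# Usage sketch for set_vars!: write a new flat weight vector into the model and
# rebuild the Flux chain so both views stay in sync. Constructor call assumed.
using Flux, FluxNLPModels
using Flux.Data: DataLoader

loader = DataLoader((rand(Float32, 4, 16), Flux.onehotbatch(rand(0:1, 16), 0:1)),
                    batchsize = 8)
nlp = FluxNLPModel(Chain(Dense(4 => 2)), loader, loader; loss_f = Flux.logitcrossentropy)

w_new = randn(Float32, length(nlp.w))   # same length and eltype as nlp.w
FluxNLPModels.set_vars!(nlp, w_new)

@assert nlp.w == w_new                  # flat weights updated in place
# nlp.chain was rebuilt from w_new, so forward passes now use these weights
```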
6 changes: 3 additions & 3 deletions test/runtests.jl
@@ -1,7 +1,7 @@
using Test
using FluxNLPModels
using CUDA, Flux, NLPModels
using Flux.Data: DataLoader #TODO update this
using Flux.Data: DataLoader
using Flux: onehotbatch, onecold, @epochs
using Flux.Losses: logitcrossentropy
using Base: @kwdef
@@ -24,7 +24,7 @@ function getdata(args)
# One-hot-encode the labels
ytrain, ytest = onehotbatch(ytrain, 0:9), onehotbatch(ytest, 0:9)

# Create DataLoaders (mini-batch iterators) #TODO it is passed down
# Create DataLoaders (mini-batch iterators)
train_loader = DataLoader((xtrain, ytrain), batchsize = args.batchsize, shuffle = true)
test_loader = DataLoader((xtest, ytest), batchsize = args.batchsize)

@@ -44,7 +44,7 @@ end

args = Args() # collect options in a struct for convenience

device = cpu #TODO should we test on GPU?
device = cpu

@testset "FluxNLPModels tests" begin

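The tests pin `device = cpu`. A short sketch, assuming a working CUDA setup, of how the same pattern would target the GPU.

```julia
# Sketch of the device toggle the tests pin to cpu: with a working CUDA setup the
# same pattern moves the model and the data to the GPU.
using CUDA, Flux

device = CUDA.functional() ? gpu : cpu   # fall back to CPU when no usable GPU exists

chain  = Chain(Dense(28^2 => 32, relu), Dense(32 => 10)) |> device
xbatch = rand(Float32, 28^2, 16) |> device
ŷ      = chain(xbatch)                   # runs on whichever device was selected
```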
