Hack week planning #83

Closed · 67 of 73 tasks
fchollet opened this issue May 4, 2023 · 13 comments

Comments

@fchollet
Contributor

fchollet commented May 4, 2023

Participants:

See proposed assignments below.

Some of the assignment buckets may not last the week -- we will reshuffle assignments as we make progress. If we reach a sufficient degree of feature completeness mid-week, we can move on to backwards compatibility testing.

@fchollet

  • plot_model / model_to_dot
  • explore modernization of plot_model
  • stack trace filtering
  • add support for legacy h5 weights loading
  • JAX rnn function
  • FeatureSpace

Layers:

  • Wrapper
  • Normalization
  • Rescaling
  • CenterCrop
  • Resizing
  • RNN / RNNCell
  • StackedRNNCells
  • SimpleRNN
  • LSTM
  • GRU
  • Lambda
  • ConvLSTM1D/2D/3D
  • Bidirectional
  • TimeDistributed

Wrapped layers:

  • Discretization
  • TextVectorization
  • Hashing
  • StringLookup
  • IntegerLookup

@chenmoneygithub

  • Conv1D/2D/3D
  • SeparableConv1D/2D
  • DepthwiseConv1D/2D
  • Conv1DTranspose/2D/3D

@sampathweb

  • Set up code coverage CI test (e.g. codecov)
  • Identify missing tests and major TODOs

Callbacks:

  • RemoteMonitor
  • TerminateOnNaN
  • CSVLogger
  • ModelCheckpoint
  • BackupAndRestore

@qlzh727

@ianstenbit

  • Confusion metrics
  • Learning rate schedules (keras_core/optimizers/schedules/)

Callbacks:

  • LearningRateScheduler
  • ReduceLROnPlateau

@divyashreepathihalli

Layers:

  • Elu
  • ReLU
  • LeakyReLU
  • PReLU
  • Softmax

Wrapped layers:

  • RandomCrop
  • RandomFlip
  • RandomTranslation
  • RandomRotation
  • RandomZoom

@jbischof

  • Implement keras_core/backend/pytorch/numpy.py

@rchao

Layers:

  • UpSampling1D/2D/3D

@mattdangerw

Layers:

  • Attention
  • AdditiveAttention
  • MultiHeadAttention

@hertschuh

Layers:

  • Flatten
  • Reshape
  • RepeatVector
  • Permute
  • Cropping1D/2D/3D
  • ZeroPadding1D/2D/3D

@grasskin

Metrics:

  • R2Score
  • F1Score
  • FBetaScore
  • IoU metrics

Callbacks:

  • TensorBoard

@nkovela1

Layers:

  • UnitNormalization
  • LayerNormalization
  • GroupNormalization
  • SpectralNormalization
  • CategoryEncoding
  • RandomContrast
  • RandomBrightness

@haifeng-jin

  • Implement step fusing in TF Trainer and JAX Trainer
@AakashKumarNain
Collaborator

Not sure about the best option for "explore modernization of plot_model", but we can use Rich to modernize the progress bars and similar output. It's also used by pip now.
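
For illustration, this is roughly what a Rich-based progress bar looks like (a minimal sketch with a placeholder loop and labels, not tied to any Keras internals):

```python
# Minimal Rich progress-bar sketch; the loop and labels are placeholders.
import time

from rich.progress import Progress

with Progress() as progress:
    task = progress.add_task("Epoch 1/10", total=100)
    for _ in range(100):
        time.sleep(0.01)  # stand-in for a training step
        progress.update(task, advance=1)
```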

@jbischof
Contributor

jbischof commented May 7, 2023

@fchollet I might need more details for the pytorch backend. Should I just try to replicate backend/tensorflow/numpy.py line for line? The entire backend/tensorflow folder?

I also don't see any unit tests (at least in the backend/ folder). How do we verify the deliverables?

@fchollet
Contributor Author

fchollet commented May 7, 2023

I might need more details for the pytorch backend. Should I just try to replicate backend/tensorflow/numpy.py line for line?

Just pytorch/numpy.py for now. Yes, all the same functions as in tensorflow/numpy.py need to be implemented. Ramesh is also taking a look at pytorch/nn.py.
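
For example, the first few entries in pytorch/numpy.py could look roughly like the sketch below, which just mirrors the function names exposed by tensorflow/numpy.py; exact signatures and dtype/device handling are up to whoever implements it:

```python
# Sketch of possible keras_core/backend/pytorch/numpy.py entries, mirroring
# the function names in tensorflow/numpy.py; signatures are illustrative.
import torch


def add(x1, x2):
    # torch broadcasts elementwise ops the same way numpy does
    return torch.add(x1, x2)


def matmul(x1, x2):
    return torch.matmul(x1, x2)


def mean(x, axis=None, keepdims=False):
    if axis is None:
        return torch.mean(x)  # reduce over all elements
    return torch.mean(x, dim=axis, keepdim=keepdims)
```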

I also don't see any unit tests (at least in the backend/ folder). How do we verify the deliverables?

This is indirectly tested by unit tests in keras_core/operations/. I think you should be able to set the backend as "pytorch" (we'll need to edit backend/__init__.py to make it work) and run those tests.

We could also add a keras_core/backend/tests/ directory to cover cross-backend function tests without relying on any other Keras functionality. It would probably be a good move since it makes it easier for folks to add new backends.
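
As a rough sketch, a test in that (hypothetical) keras_core/backend/tests/ directory could simply check a backend op against plain NumPy, e.g.:

```python
# Hypothetical cross-backend check: compare the torch-backed matmul from the
# pytorch/numpy.py task above against plain NumPy. Paths/names illustrative.
import numpy as np
import torch

from keras_core.backend.pytorch import numpy as backend_np


def test_matmul_matches_numpy():
    x = np.random.rand(3, 4).astype("float32")
    y = np.random.rand(4, 5).astype("float32")
    expected = np.matmul(x, y)
    result = backend_np.matmul(torch.from_numpy(x), torch.from_numpy(y))
    np.testing.assert_allclose(result.numpy(), expected, rtol=1e-5)
```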

@ianstenbit
Contributor

ianstenbit commented May 7, 2023 via email

@fchollet
Contributor Author

@AakashKumarNain are you still working on the PyTorch dataloader?

@AakashKumarNain
Collaborator

@fchollet I didn't get much time to work on it during the weekdays. I was planning to work on it over the weekend, but if someone else wants to pick it up right away, I am okay with it.

@fchollet
Contributor Author

It's still yours if you want to do it!

@AakashKumarNain
Collaborator

I will work on it and will update you on the status on Monday

@AakashKumarNain
Collaborator

@fchollet now that the torch dataloader adapter has been merged, is there any unassigned task I can work on in the coming weeks?

@fchollet
Contributor Author

@fchollet now that the torch dataloader adapter has been merged, is there any unassigned task I can work on in the coming weeks?

Yes, thanks for offering! The next big sprint is going to be porting Keras Applications. We have 12 separate architectures to port. You could be working on e.g. ResNet, ResNetV2, MobileNet, etc. I will port the first application (likely VGG) within a few days.

@AakashKumarNain
Collaborator

Thanks. I will start with ResNet

@fchollet
Contributor Author

Thanks a lot for the help! 👍

@AakashKumarNain
Collaborator

No problem. This project is way too close to my heart 🥂
