NLP-ML-training

This is a series of trainings on deep learning for natural language processing (NLP). You will learn the mathematical theory of machine learning and optimization, the design of neural network architectures, and, most importantly, how to implement them in code. The training covers mainstream deep learning models for NLP from roughly 2010 to 2018, including convolutional neural networks (CNNs), recurrent neural networks (e.g., LSTMs), and transformers (e.g., BERT).

Prerequisites:

  1. Mathematics: functions and convex functions; calculus, including derivatives and partial derivatives
  2. Machine learning: classifiers and optimization theory
  3. Programming skills: a language such as C++, Java, or Python, and a deep learning framework such as TensorFlow or PyTorch

Please read the following papers and try your best to figure out the algorithms:

  1. CNN model: https://www.jmlr.org/papers/volume12/collobert11a/collobert11a.pdf
  2. RNN model: https://arxiv.org/abs/1603.01360 (source code: https://github.com/uf-hobi-informatics-lab/DeepDeID); a minimal sketch of this architecture follows the list
  3. BERT model: https://arxiv.org/abs/1810.04805
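
The sketch below is a minimal PyTorch illustration of the bidirectional-LSTM tagging idea behind paper 2. All class names, dimensions, and the toy data are illustrative placeholders, not values from the paper, and it omits the character-level embeddings and CRF decoding layer that the full model uses.

```python
# Minimal BiLSTM sequence tagger (sketch only; sizes and names are placeholders).
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_tags):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # A bidirectional LSTM reads the sentence left-to-right and
        # right-to-left; the two directions are concatenated, hence hidden_dim * 2.
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(hidden_dim * 2, num_tags)

    def forward(self, token_ids):
        embeds = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
        lstm_out, _ = self.lstm(embeds)      # (batch, seq_len, hidden_dim * 2)
        return self.classifier(lstm_out)     # (batch, seq_len, num_tags)

# Toy usage: a batch of two 5-token "sentences" of random token ids.
model = BiLSTMTagger(vocab_size=1000, embed_dim=32, hidden_dim=64, num_tags=9)
tokens = torch.randint(0, 1000, (2, 5))
logits = model(tokens)
targets = torch.randint(0, 9, (2, 5))
loss = nn.CrossEntropyLoss()(logits.view(-1, 9), targets.view(-1))
loss.backward()      # gradients flow back through the LSTM and embeddings
print(logits.shape)  # torch.Size([2, 5, 9])
```

In the paper, a CRF layer replaces the per-token softmax here so that tag transitions are scored jointly over the whole sequence; the rest of the pipeline (embedding lookup, bidirectional recurrence, linear projection to tag scores) has the same shape as this sketch.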
