
Welcome to the SelfAttentionLangModel wiki :)

Introduction

SelfAttentionLangModel is a Python-based library for automating language-modeling tasks, for example, building a language model from a text corpus, sequence labeling, and sequence classification. The library provides a clear, easy-to-use interface for these tasks: it abstracts them behind dedicated APIs for creating Transformer-Encoder based models, together with scripts for training and inference.

Currently, the library supports three models: the Modeler, which is used for language modeling tasks; the Annotator, which is used for sequence labeling and sequence classification tasks; and the AnnotatorGRU, an experimental model that augments the self-attention architecture with a GRU network and is likewise used for sequence labeling and sequence classification. A rough usage sketch follows below.
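As a rough sketch of what choosing one of the three models might look like, note that the import path and every constructor argument below are assumptions for illustration only, not the library's documented signatures:

```python
# Hypothetical import path and hyperparameters -- shown only to distinguish the
# three model families; consult the library's APIs for the actual signatures.
from SelfAttentionLangModel import Modeler, Annotator, AnnotatorGRU

lm = Modeler(vocab_size=30000, num_layers=4, d_model=256)        # language modeling
tagger = Annotator(vocab_size=30000, num_labels=12)              # sequence labeling/classification
gru_tagger = AnnotatorGRU(vocab_size=30000, num_labels=12,
                          gru_units=128)                         # self-attention + GRU (experimental)
```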

These models subclass tf.keras.Model and hence support standard Keras methods such as fit and predict; it also means they can be embedded as components of larger tf.keras models.
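To illustrate what that inheritance buys, here is a minimal self-contained sketch using a toy stand-in model (TinyEncoder is invented for this example and is not the library's implementation); any tf.keras.Model subclass behaves this way:

```python
import numpy as np
import tensorflow as tf

# A toy stand-in for one of the library's models: any tf.keras.Model
# subclass inherits the full Keras training/inference workflow.
class TinyEncoder(tf.keras.Model):
    def __init__(self, vocab_size, d_model):
        super().__init__()
        self.embed = tf.keras.layers.Embedding(vocab_size, d_model)
        self.attn = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=d_model)
        self.out = tf.keras.layers.Dense(vocab_size)

    def call(self, tokens):
        x = self.embed(tokens)   # (batch, seq, d_model)
        x = self.attn(x, x)      # self-attention over the sequence
        return self.out(x)       # per-token logits

model = TinyEncoder(vocab_size=100, d_model=32)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# Toy data: predict each token's label (here, the token itself).
tokens = np.random.randint(0, 100, size=(64, 10))
model.fit(tokens, tokens, epochs=1, verbose=0)   # standard Keras fit
logits = model.predict(tokens, verbose=0)        # standard Keras predict

# Because it is a tf.keras.Model, it also composes into a larger model:
inputs = tf.keras.Input(shape=(10,), dtype=tf.int32)
outputs = tf.keras.layers.Dense(5)(model(inputs))
bigger = tf.keras.Model(inputs, outputs)
```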
