BERT-QA

Build question-answering systems using state-of-the-art pre-trained contextualized language models such as BERT. We are working to accelerate the development of question-answering systems based on BERT and TF 2.0!

Background

This project is based on our study: Question Generation by Transformers.

Citation

To cite this work, use the following BibTeX citation.

@article{question-generation-transformers-2019,
  title={Question Generation by Transformers},
  author={Kriangchaivech, Kettip and Wangperawong, Artit},
  journal={arXiv preprint arXiv:1909.05017},
  year={2019}
}

Requirements

TensorFlow 2.0 will be installed if it is not already on your system.
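
To confirm which TensorFlow version is active in your environment, a quick check (a minimal sketch; it only assumes the standard tensorflow package) is:

import tensorflow as tf
print(tf.__version__)  # expect a 2.x release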

Installation

pip install bert_qa

Example usage

Run the Colab demo notebook here.

download and extract the pre-trained BERT model

wget -q https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/uncased_L-12_H-768_A-12.tar.gz
tar -xvzf uncased_L-12_H-768_A-12.tar.gz
mv -f home/hongkuny/public/pretrained_models/keras_bert/uncased_L-12_H-768_A-12 .

download SQuAD data

wget -q https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json
wget -q https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json

import, initialize, pre-process data, fine-tune, and predict!

from bert_qa import squad
qa = squad.SQuAD()
qa.preprocess_training_data()
qa.fit()
predictions = qa.predict()

evaluate

import json
pred_data = json.load(open('model/predictions.json'))
dev_data = json.load(open('dev-v1.1.json'))['data']
qa.evaluate(dev_data, pred_data)

Advanced usage

Model type

The default model is an uncased Bidirectional Encoder Representations from Transformers (BERT) model with 12 transformer layers, 12 self-attention heads per layer, and a hidden size of 768. Other supported models can be specified with hub_module_handle, and we expect more to be added in the future. For more information, see TensorFlow's BERT GitHub.
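
For example, to fine-tune with a different pre-trained model, pass its handle when constructing the SQuAD object. This is a minimal sketch: it assumes hub_module_handle is accepted as a constructor keyword argument, and the handle below is only an illustrative TF Hub BERT-Large path, not necessarily one of the supported models.

from bert_qa import squad

# illustrative handle only; substitute a supported model handle
qa = squad.SQuAD(hub_module_handle="https://tfhub.dev/tensorflow/bert_en_uncased_L-24_H-1024_A-16/1")
qa.preprocess_training_data()
qa.fit()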

Contributing

BERT-QA is an open-source project founded and maintained to better serve the machine learning and data science community. Please feel free to submit pull requests to contribute to the project. By participating, you are expected to adhere to BERT-QA's code of conduct.

Questions?

For questions or help using BERT-QA, please submit a GitHub issue.
