This repository contains a hand-curated list of great machine (deep) learning resources for Natural Language Processing (NLP), with a focus on Bidirectional Encoder Representations from Transformers (BERT), the attention mechanism, Transformer architectures/networks, and transfer learning in NLP.
Table of Contents
- Official Implementations
- Other Implementations
- Transfer Learning in NLP
- Other Resources
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
- Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context by Zihang Dai, Zhilin Yang, Yiming Yang, William W. Cohen, Jaime Carbonell, Quoc V. Le and Ruslan Salakhutdinov.
- Uses smart caching to improve the learning of long-range dependencies in Transformers. Key results: state-of-the-art on 5 language modeling benchmarks, including a perplexity of 21.8 on One Billion Word (LM1B) and 0.99 bits per character on enwik8. The authors claim that the method is more flexible, faster during evaluation (up to 1,874x speedup), generalizes well on small datasets, and is effective at modeling both short and long sequences.
- Conditional BERT Contextual Augmentation by Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han and Songlin Hu.
- SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering by Chenguang Zhu, Michael Zeng and Xuedong Huang.
- Language Models are Unsupervised Multitask Learners by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever.
- The Evolved Transformer by David R. So, Chen Liang and Quoc V. Le.
- They used neural architecture search to improve the Transformer architecture. The key idea is to use evolution and to seed the initial population with the Transformer itself. The resulting architecture is better and more efficient, especially for small model sizes.
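The segment-level recurrence that Transformer-XL uses to cache hidden states can be sketched in a few lines. This is a simplified, single-head illustration in NumPy (not the paper's implementation, which also adds relative positional encodings): queries come only from the current segment, while keys and values span the cached memory plus the current segment, and the cache is then refreshed with the current segment's states.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def segment_attention(segment, memory, d_model):
    """Attend from the current segment over [cached memory; current segment]."""
    context = np.concatenate([memory, segment], axis=0)  # (mem_len + seg_len, d_model)
    scores = segment @ context.T / np.sqrt(d_model)      # queries only from the segment
    return softmax(scores) @ context

rng = np.random.default_rng(0)
d_model, seg_len, mem_len = 8, 4, 4
memory = np.zeros((mem_len, d_model))   # empty cache before the first segment

for _ in range(3):                      # stream three consecutive segments
    segment = rng.standard_normal((seg_len, d_model))
    out = segment_attention(segment, memory, d_model)
    memory = segment.copy()             # cache states; no gradient flows into the cache

print(out.shape)  # (4, 8): seg_len outputs, each attending over mem_len + seg_len positions
```

Because the cache carries information forward across segments, the effective context grows with depth even though each forward pass only processes `seg_len` new tokens.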
BERT and Transformer
- Open Sourcing BERT: State-of-the-Art Pre-training for Natural Language Processing from Google AI.
- The Illustrated BERT, ELMo, and co. (How NLP Cracked Transfer Learning).
- Dissecting BERT by Miguel Romero and Francisco Ingham - Understand BERT in depth with an intuitive, straightforward explanation of the relevant concepts.
- A Light Introduction to Transformer-XL.
- Generalized Language Models by Lilian Weng, Research Scientist at OpenAI.
- The Annotated Transformer by Harvard NLP Group - Further reading to understand the "Attention is all you need" paper.
- Attention? Attention! - Attention guide by Lilian Weng from OpenAI.
- Visualizing A Neural Machine Translation Model (Mechanics of Seq2seq Models With Attention) by Jay Alammar, an Instructor from Udacity ML Engineer Nanodegree.
- The Transformer blog post.
- The Illustrated Transformer by Jay Alammar, an Instructor from Udacity ML Engineer Nanodegree.
- Watch Łukasz Kaiser’s talk walking through the model and its details.
- Transformer-XL: Unleashing the Potential of Attention Models by Google Brain.
- Generative Modeling with Sparse Transformers by OpenAI - an algorithmic improvement of the attention mechanism to extract patterns from sequences 30x longer than possible previously.
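Most of the guides above ("The Illustrated Transformer", "The Annotated Transformer") center on scaled dot-product attention from "Attention Is All You Need". A minimal NumPy sketch of that single operation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                          # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V, weights

rng = np.random.default_rng(42)
Q = rng.standard_normal((3, 4))   # 3 queries, d_k = 4
K = rng.standard_normal((5, 4))   # 5 keys
V = rng.standard_normal((5, 4))   # 5 values
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one weighted mix of the values per query
```

Each output row is a convex combination of the value vectors, with the weights for every query summing to 1; multi-head attention runs several of these in parallel on learned projections of Q, K, and V.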
OpenAI Generative Pre-Training Transformer (GPT) and GPT-2
- Better Language Models and Their Implications.
- Improving Language Understanding with Unsupervised Learning - this is an overview of the original GPT model.
- 🦄 How to build a State-of-the-Art Conversational AI with Transfer Learning by Hugging Face.
- How to Build OpenAI's GPT-2: "The AI That's Too Dangerous to Release".
- OpenAI’s GPT2 - Food to Media hype or Wake Up Call?
- google-research/bert - TensorFlow code and pre-trained models for BERT.
- huggingface/pytorch-pretrained-BERT - A PyTorch implementation of Google AI's BERT model with script to load Google's pre-trained models by Hugging Face.
- codertimo/BERT-pytorch - Google AI 2018 BERT pytorch implementation.
- innodatalabs/tbert - PyTorch port of BERT ML model.
- kimiyoung/transformer-xl - Code repository associated with the Transformer-XL paper.
- dreamgonfly/BERT-pytorch - PyTorch implementation of BERT in "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding".
- dhlee347/pytorchic-bert - PyTorch implementation of Google BERT.
- Separius/BERT-keras - Keras implementation of BERT with pre-trained weights.
- CyberZHG/keras-bert - Implementation of BERT that could load official pre-trained models for feature extraction and prediction.
- guotong1988/BERT-tensorflow - BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.
- soskek/bert-chainer - Chainer implementation of "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding".
Transfer Learning in NLP
As Jay Alammar put it:
The year 2018 has been an inflection point for machine learning models handling text (or more accurately, Natural Language Processing or NLP for short). Our conceptual understanding of how best to represent words and sentences in a way that best captures underlying meanings and relationships is rapidly evolving. Moreover, the NLP community has been putting forward incredibly powerful components that you can freely download and use in your own models and pipelines (It's been referred to as NLP's ImageNet moment, referencing how years ago similar developments accelerated the development of machine learning in Computer Vision tasks).
One of the latest milestones in this development is the release of BERT, an event described as marking the beginning of a new era in NLP. BERT is a model that broke several records for how well models can handle language-based tasks. Soon after the release of the paper describing the model, the team also open-sourced the code of the model, and made available for download versions of the model that were already pre-trained on massive datasets. This is a momentous development since it enables anyone building a machine learning model involving language processing to use this powerhouse as a readily-available component – saving the time, energy, knowledge, and resources that would have gone to training a language-processing model from scratch.
BERT builds on top of a number of clever ideas that have been bubbling up in the NLP community recently – including but not limited to Semi-supervised Sequence Learning (by Andrew Dai and Quoc Le), ELMo (by Matthew Peters and researchers from AI2 and UW CSE), ULMFiT (by fast.ai founder Jeremy Howard and Sebastian Ruder), the OpenAI transformer (by OpenAI researchers Radford, Narasimhan, Salimans, and Sutskever), and the Transformer (Vaswani et al).
ULMFiT: Nailing down Transfer Learning in NLP
ULMFiT introduced methods to effectively utilize a lot of what the model learns during pre-training – more than just embeddings, and more than contextualized embeddings. ULMFiT introduced a language model and a process to effectively fine-tune that language model for various tasks.
NLP finally had a way to do transfer learning probably as well as Computer Vision could.
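The fine-tuning process the ULMFiT paper introduces includes discriminative fine-tuning: each lower layer gets the learning rate of the layer above divided by 2.6, so early layers (which capture generic language features) change slowly while later, task-specific layers adapt quickly. A minimal sketch of that schedule (the 2.6 factor is the value the paper reports; the base rate here is an arbitrary example):

```python
def discriminative_lrs(base_lr, n_layers, factor=2.6):
    """Per-layer learning rates, index 0 = lowest (most generic) layer.

    Implements the ULMFiT rule lr[l-1] = lr[l] / factor, with the top
    layer receiving base_lr.
    """
    return [base_lr / factor ** (n_layers - 1 - i) for i in range(n_layers)]

lrs = discriminative_lrs(base_lr=0.01, n_layers=4)
print([round(lr, 5) for lr in lrs])  # monotonically increasing toward the top layer
```

In practice these rates would be assigned to per-layer parameter groups in the optimizer; ULMFiT pairs this with gradual unfreezing, thawing one layer group per epoch starting from the top.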
Other Resources
- hanxiao/bert-as-service - Mapping a variable-length sentence to a fixed-length vector using pretrained BERT model.
- brightmart/bert_language_understanding - Pre-training of Deep Bidirectional Transformers for Language Understanding: pre-train TextCNN.
- algteam/bert-examples - BERT examples.
- JayYip/bert-multiple-gpu - A multiple GPU support version of BERT.
- HighCWu/keras-bert-tpu - Implementation of BERT that could load official pre-trained models for feature extraction and prediction on TPU.
- whqwill/seq2seq-keyphrase-bert - Add BERT to encoder part for https://github.com/memray/seq2seq-keyphrase-pytorch
- xu-song/bert_as_language_model - BERT as language model, a fork from Google official BERT implementation.
- Y1ran/NLP-BERT--Chinese version - Chinese version of these BERT NLP resources.
- yuanxiaosc/Deep_dynamic_word_representation - TensorFlow code and pre-trained models for deep dynamic word representation (DDWR). It combines the BERT model and ELMo's deep context word representation.
- Pydataman/bert_examples - Some examples of BERT.
  - `run_classifier.py` is based on Google BERT for the Kaggle Quora Insincere Questions Classification challenge.
  - `run_ner.py` is based on the first season of the Ruijin Hospital AI contest, an NER task built on BERT.
- guotong1988/BERT-chinese - Pre-training of deep bidirectional transformers for Chinese language understanding.
- zhongyunuestc/bert_multitask - Multi-task.
- Microsoft/AzureML-BERT - End-to-end walk through for fine-tuning BERT using Azure Machine Learning.
- bigboNed3/bert_serving - Export BERT model for serving.
- yoheikikuta/bert-japanese - BERT with SentencePiece for Japanese text.
- jessevig/bertviz - Tool for visualizing BERT's attention.
- FastBert - A simple deep learning library that allows developers and data scientists to train and deploy BERT based models for NLP tasks beginning with text classification. The work on FastBert is inspired by fast.ai.
Named-Entity Recognition (NER)
- kyzhouhzau/BERT-NER - Use google BERT to do CoNLL-2003 NER.
- zhpmatrix/bert-sequence-tagging - Chinese sequence labeling.
- JamesGu14/BERT-NER-CLI - Bert NER command line tester with step by step setup guide.
- mhcao916/NER_Based_on_BERT - Chinese NER based on the Google BERT model.
- macanv/BERT-BiLSTM-CRF-NER - TensorFlow solution of the NER task using a Bi-LSTM-CRF model with Google BERT fine-tuning.
- ProHiryu/bert-chinese-ner - Use the pre-trained language model BERT to do Chinese NER.
- FuYanzhe2/Name-Entity-Recognition - LSTM-CRF, Lattice-CRF, and recent NER-related papers.
- king-menin/ner-bert - NER task solution (BERT-Bi-LSTM-CRF) with Google BERT https://github.com/google-research.
- brightmart/sentiment_analysis_fine_grain - Multi-label classification with BERT; Fine Grained Sentiment Analysis from AI challenger.
- zhpmatrix/Kaggle-Quora-Insincere-Questions-Classification - Kaggle baseline—fine-tuning BERT and tensor2tensor based Transformer encoder solution.
- maksna/bert-fine-tuning-for-chinese-multiclass-classification - Use Google's pre-trained BERT model to fine-tune for Chinese multiclass classification.
- NLPScott/bert-Chinese-classification-task - BERT Chinese classification practice.
- fooSynaptic/BERT_classifer_trial - BERT trial for Chinese corpus classification.
- xiaopingzhong/bert-finetune-for-classfier - Fine-tuning the BERT model while building your own dataset for classification.
- Socialbird-AILab/BERT-Classification-Tutorial - Tutorial.
Text Generation
Question Answering (QA)
Knowledge Graph
- sakuranew/BERT-AttributeExtraction - Using BERT for attribute extraction in knowledge graph. Fine-tuning and feature extraction. The BERT-based fine-tuning and feature extraction methods are used to extract knowledge attributes of Baidu Encyclopedia characters.
- lvjianxin/Knowledge-extraction - Chinese knowledge extraction. Baseline: Bi-LSTM + CRF; upgrade: BERT pre-training.
This repository contains a variety of content; some developed by Cedric Chee, and some from third-parties. The third-party content is distributed under the license provided by those parties.
I am providing code and resources in this repository to you under an open source license. Because this is my personal repository, the license you receive to my code and resources is from me and not my employer.
The content developed by Cedric Chee is distributed under the following license:
The text content is released under the CC-BY-NC-ND license. Read more at Creative Commons.