NLP Year In Review

Minimum viable reading

Minimum Viable Reading (MVR) is the least amount of study you need to stay current with the industry. Pick the verticals that interest you and read the latest surveys.

Faster transformers

Everybody is tired of the cost of working with transformers. This talk surveys the research that has gone into making them faster.

T3: High Performance Natural Language Processing
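The talk covers a whole menu of techniques; as one concrete, easy win, here is a minimal sketch of post-training dynamic quantization in PyTorch, which speeds up CPU inference (the checkpoint name is just a common public model):

```python
# Post-training dynamic quantization: convert the transformer's
# linear layers to int8 for faster CPU inference.
# pip install torch transformers
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
# `quantized` is a drop-in replacement for `model` at inference time.
```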

Better testing of NLP models

This talk by the author opens our eyes to better ways of testing our models.

Beyond Accuracy: Behavioral Testing of NLP Models with CheckList
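To make the idea concrete, here is a minimal sketch of a Minimum Functionality Test (MFT) in the CheckList spirit, written in plain Python; the toy model and test cases are hypothetical stand-ins for your own:

```python
# A behavioral test in the spirit of CheckList's Minimum Functionality
# Tests (MFTs): fixed inputs with known expected labels, checked
# independently of held-out accuracy.

def predict_sentiment(text: str) -> str:
    """Toy stand-in for a real classifier (hypothetical)."""
    return "positive" if "good" in text.lower() else "negative"

# Negation should flip the label; a keyword model fails this.
mft_cases = [
    ("The food was good.", "positive"),
    ("The food was not good.", "negative"),
    ("The service was good.", "positive"),
    ("The service was not good.", "negative"),
]

failures = []
for text, gold in mft_cases:
    pred = predict_sentiment(text)
    if pred != gold:
        failures.append((text, gold, pred))

print(f"{len(failures)}/{len(mft_cases)} MFT cases failed")
for text, gold, pred in failures:
    print(f"  {text!r}: expected {gold}, got {pred}")
```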

Pre-trained models

I think this is your favourite topic, isn't it? (A minimal usage sketch follows the reading list.)

Pre-trained Models for Natural Language Processing: A Survey

Language Models are Few-Shot Learners

It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners

Transfer Learning In NLP - Part 2

A Primer in BERTology
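If you want to put one of these to work immediately, here is a minimal sketch with the Hugging Face transformers library (the checkpoint name is just one common public model; swap in whatever fits your task):

```python
# Using a pre-trained model off the shelf with transformers.
# pip install transformers
from transformers import pipeline

# A DistilBERT checkpoint fine-tuned on SST-2 for sentiment.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Pre-trained models make NLP much easier."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```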

Smaller models

Distilling models will be the natural choice in the coming days, until we discover a way to train great small models directly. (A minimal sketch of the classic distillation loss follows the reading list.)

Speeding Up Transformer Training and Inference By Increasing Model Size

Knowledge Distillation: A Survey

A Survey of Model Compression and Acceleration for Deep Neural Networks
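Here is a minimal sketch of the classic soft-label distillation loss from Hinton et al., with illustrative names and random tensors standing in for real model outputs:

```python
# Knowledge distillation: the student matches temperature-softened
# teacher logits, plus the usual cross-entropy on gold labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: KL(teacher || student) on softened distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2  # rescale gradients (Hinton et al.)
    # Hard targets: standard cross-entropy on the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Random tensors standing in for model outputs: batch of 8, 10 classes.
student = torch.randn(8, 10)
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student, teacher, labels))
```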

Domain adaptation

I think people give a whole lot less of a f**k about this important topic than it deserves. I have a gut feeling that practitioners are intentionally staying quiet, because this is where the money gets printed. (A minimal sketch of domain-adaptive pretraining follows the reading list.)

Neural Unsupervised Domain Adaptation in NLP

Tricks For Domain Adaptation

Don't Stop Pretraining: Adapt Language Models to Domains and Tasks
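As a minimal sketch of the "don't stop pretraining" recipe: keep running the masked-LM objective on raw in-domain text before fine-tuning. The corpus and checkpoint below are illustrative placeholders:

```python
# Domain-adaptive pretraining: one MLM training step on in-domain text.
# pip install torch transformers
import torch
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling)

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

domain_texts = [
    "The patient presented with acute myocardial infarction.",
    "ECG showed ST-segment elevation in the anterior leads.",
]  # in practice: a large unlabeled in-domain corpus

# Randomly mask 15% of tokens and build MLM labels.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
batch = collator([tokenizer(t, truncation=True) for t in domain_texts])

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
loss = model(**batch).loss  # MLM loss on masked in-domain tokens
loss.backward()
optimizer.step()
print(f"MLM loss: {loss.item():.3f}")
```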

Neural search

I don’t want to say anything about this topic because I have already said a lot.

Pretrained Transformers for Text Ranking: BERT and Beyond

(I am planning a new talk on the state of neural search. So you can skip reading this paper.)
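Until that talk, here is a minimal sketch of the core BERT re-ranking pattern from the paper, using the sentence-transformers library (the checkpoint is one public MS MARCO cross-encoder):

```python
# Re-ranking candidate documents for a query with a cross-encoder.
# pip install sentence-transformers
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "how do transformers rank documents?"
candidates = [
    "BERT can score query-document pairs for relevance.",
    "The weather in Paris is mild in spring.",
]

# Score each (query, document) pair and sort by relevance.
scores = reranker.predict([(query, doc) for doc in candidates])
for doc, score in sorted(zip(candidates, scores),
                         key=lambda p: p[1], reverse=True):
    print(f"{score:.3f}  {doc}")
```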

Generative models

GPT-3 makes everything possible, doesn’t it?

Evaluation of Text Generation: A Survey
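"Few-shot" here means steering a frozen model with examples in the prompt alone. A minimal sketch, using GPT-2 as a freely downloadable stand-in since GPT-3 itself is API-only:

```python
# Few-shot prompting: demonstrations in the prompt, no fine-tuning.
# pip install transformers
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Translate English to French.\n"
    "sea otter => loutre de mer\n"
    "cheese =>"
)
out = generator(prompt, max_new_tokens=5, do_sample=False)
print(out[0]["generated_text"])
```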

Dealing with scarcity of data

Let us please give more of a f**k about this important topic. Send me cool papers if you know any. (A minimal augmentation sketch follows the reading list.)

Revisiting Few-sample BERT Fine-tuning

Data Augmentation using Pre-trained Transformer Models

How Effective is Task-Agnostic Data Augmentation for Pretrained Transformers?
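One simple flavour of the augmentation these papers discuss: mask a word and keep a masked LM's top replacements as extra training examples. A minimal sketch (the checkpoint name is just a common public model):

```python
# Data augmentation with a pre-trained masked LM: the model's top
# predictions for a masked word become new sentence variants.
# pip install transformers
from transformers import pipeline

fill = pipeline("fill-mask", model="distilbert-base-uncased")

sentence = "The movie was absolutely [MASK]."
for pred in fill(sentence, top_k=3):
    print(pred["sequence"])  # augmented variants of the sentence
```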

Knowledge graphs

I hear KGs are going to be everywhere in the coming days.

A Survey on Knowledge Graphs

Language Models are Open Knowledge Graphs
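At its simplest, a knowledge graph is just a set of (head, relation, tail) triples. A minimal sketch with toy facts and a tiny query helper:

```python
# A knowledge graph as (head, relation, tail) triples.
from collections import defaultdict

triples = [
    ("BERT", "introduced_by", "Google"),
    ("GPT-3", "introduced_by", "OpenAI"),
    ("BERT", "is_a", "language model"),
    ("GPT-3", "is_a", "language model"),
]

# Index the triples by relation for quick lookups.
by_relation = defaultdict(list)
for head, rel, tail in triples:
    by_relation[rel].append((head, tail))

# Query: everything that is a language model.
print([h for h, t in by_relation["is_a"] if t == "language model"])
# ['BERT', 'GPT-3']
```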

Extra

All in all

This has been a fantastic year for NLP. Did I miss any influential papers or topics? Comment and let me know!

Enjoy the holidays!


Come join Maxpool - A Data Science community to discuss real ML problems!

Connect with me on Medium, Twitter & LinkedIn.