Project

Towards Robust NLP: Out-of-distribution Benchmarks and Multi-task Fine-tuning.

Principal Investigator

University of Melbourne

Oracle Principal Investigators

Ari Kobren, Principal Research Scientist
Jason Peck, Research Director
Long Duong
Mark Johnson

Summary

Natural language processing (NLP) technologies are driving advances in
automated personal assistants, internet search, translation, and many
other intelligent text services. The dominant paradigm first pretrains
a language model and then fine-tunes it on a target task. This project
aims to improve the fine-tuning step so that systems remain robust when
exposed to inputs beyond their training domain, an out-of-distribution
setting where current systems perform poorly. In doing so, the project
will make language technology more reliable and reduce technological
barriers to NLP uptake.
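
To make the multi-task fine-tuning setup concrete, the sketch below shows one common formulation: a shared encoder with a lightweight classification head per task, trained by alternating over tasks. This is a minimal illustration under assumed details, not the project's actual design; the encoder stand-in, task names (`sentiment`, `nli`), random toy data, and hyperparameters are all hypothetical placeholders.

```python
import torch
import torch.nn as nn

TASKS = {"sentiment": 2, "nli": 3}  # hypothetical task -> number of labels

class MultiTaskModel(nn.Module):
    """Shared encoder with one classification head per task."""

    def __init__(self, vocab=1000, dim=64, tasks=TASKS):
        super().__init__()
        # Toy embedding layer stands in for a pretrained language model.
        self.embed = nn.Embedding(vocab, dim)
        self.heads = nn.ModuleDict({t: nn.Linear(dim, n) for t, n in tasks.items()})

    def forward(self, token_ids, task):
        h = self.embed(token_ids).mean(dim=1)  # mean-pool token representations
        return self.heads[task](h)             # task-specific logits

model = MultiTaskModel()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    task = list(TASKS)[step % len(TASKS)]      # round-robin over tasks
    x = torch.randint(0, 1000, (8, 16))        # toy batch: 8 sequences of 16 token ids
    y = torch.randint(0, TASKS[task], (8,))    # toy labels for the sampled task
    loss = loss_fn(model(x, task), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because every task updates the shared encoder while only its own head, the encoder is pushed toward representations that serve all tasks, which is one route by which multi-task fine-tuning can improve robustness beyond a single training domain.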