April 22, 2024
In recent times, training Language Models (LMs) has come to rely on computationally heavy training over massive datasets, which makes the training process extremely laborious. In this paper we propose a novel method for numerically evaluating text quality in large unlabelled NLP datasets in a model-agnostic manner, assigning each text instance a "quality score". Using this text quality metric, the paper establishes a framework to identify and eliminate low-quality text instances, leading to improved training efficiency for LM models. Experimental results over multiple models and datasets demonstrate the efficacy of this approach, showcasing substantial gains in training effectiveness and highlighting the potential for resource-efficient LM training. For example, we observe an absolute accuracy improvement of 0.9% averaged over 14 downstream evaluation tasks for multiple LM models while using 40% less data and training 42% faster on the OpenWebText dataset, and a 0.8% average absolute accuracy improvement while using 20% less data and training 21% faster on the Wikipedia dataset.
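The abstract describes scoring each text instance and pruning the lowest-quality ones before training. The paper's actual scoring metric is not reproduced on this page, so the sketch below uses a hypothetical placeholder `quality_score` heuristic purely to illustrate score-based pruning; only the keep-fraction idea (e.g. keeping 60% of data matches "using 40% less data") comes from the abstract.

```python
# Sketch of quality-score-based data pruning. `quality_score` is a
# hypothetical stand-in, NOT the paper's metric.

def quality_score(text: str) -> float:
    # Placeholder heuristic: favor longer, lexically diverse text.
    words = text.split()
    if not words:
        return 0.0
    diversity = len(set(words)) / len(words)
    length_factor = min(len(words), 100) / 100
    return diversity * length_factor

def prune_dataset(texts: list[str], keep_fraction: float = 0.6) -> list[str]:
    """Keep the top `keep_fraction` of instances by quality score.

    keep_fraction=0.6 corresponds to training on 40% less data,
    as in the abstract's OpenWebText result.
    """
    scored = sorted(texts, key=quality_score, reverse=True)
    k = max(1, int(len(scored) * keep_fraction))
    return scored[:k]

corpus = [
    "the the the the the",  # repetitive, low diversity
    "a well written informative sentence about language models",
    "",  # empty instance
]
kept = prune_dataset(corpus, keep_fraction=0.6)
```

With three instances and a keep fraction of 0.6, only the single highest-scoring instance survives; in practice the scoring function would be the paper's model-agnostic quality metric applied over the full corpus.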
Written by
Vasu Sharma *
Karthik Padthe *
Newsha Ardalani
Kushal Tirumala
Russ Howes
Hu Xu
Bernie Huang
Daniel Li (FAIR)
Armen Aghajanyan
Gargi Ghosh
Publisher: arXiv