
NLP

Text Quality-Based Pruning for Efficient Training of Language Models

April 22, 2024

Abstract

In recent times, training Language Models (LMs) has relied on computationally heavy training over massive datasets, which makes the training process extremely laborious. In this paper we propose a novel method for numerically evaluating text quality in large unlabelled NLP datasets in a model-agnostic manner, assigning each text instance a "quality score". Using this text quality metric, we establish a framework to identify and eliminate low-quality text instances, leading to improved training efficiency for LMs. Experimental results over multiple models and datasets demonstrate the efficacy of this approach, showcasing substantial gains in training effectiveness and highlighting the potential for resource-efficient LM training. For example, we observe an absolute accuracy improvement of 0.9% averaged over 14 downstream evaluation tasks for multiple LM models while using 40% less data and training 42% faster on the OpenWebText dataset, and a 0.8% average absolute accuracy improvement while using 20% less data and training 21% faster on the Wikipedia dataset.
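To make the pruning idea concrete, here is a minimal sketch of quality-score-based dataset pruning. The abstract does not specify the scoring function, so the heuristic features below (vocabulary diversity, alphabetic ratio, length) are placeholders rather than the authors' actual metric; the `quality_score` and `prune_dataset` names and the 60% keep fraction are illustrative assumptions only.

```python
# Hypothetical sketch of quality-score-based dataset pruning.
# The paper's scoring function is not described in the abstract; the
# heuristics below are stand-ins, not the authors' metric.

from typing import Iterable, List


def quality_score(text: str) -> float:
    """Assign a rough quality score to one text instance (placeholder heuristics)."""
    if not text.strip():
        return 0.0
    tokens = text.split()
    # Documents with a diverse vocabulary and mostly alphabetic characters
    # tend to be higher quality than boilerplate or noise.
    unique_ratio = len(set(tokens)) / len(tokens)
    alpha_ratio = sum(ch.isalpha() or ch.isspace() for ch in text) / len(text)
    length_term = min(len(tokens) / 512.0, 1.0)
    return unique_ratio * alpha_ratio * length_term


def prune_dataset(texts: Iterable[str], keep_fraction: float = 0.6) -> List[str]:
    """Keep only the highest-scoring fraction of the corpus before LM training."""
    scored = sorted(texts, key=quality_score, reverse=True)
    cutoff = int(len(scored) * keep_fraction)
    return scored[:cutoff]


if __name__ == "__main__":
    corpus = [
        "A well-formed paragraph discussing language model training efficiency.",
        "buy now!!! click here click here click here",
        "",
    ]
    print(prune_dataset(corpus, keep_fraction=0.6))
```

A keep fraction of 0.6 mirrors the 40% data reduction reported for OpenWebText above; the remaining, higher-scoring instances would then be used for standard LM training.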


AUTHORS

Written by

Vasu Sharma *

Karthik Padthe *

Newsha Ardalani

Kushal Tirumala

Russ Howes

Hu Xu

Bernie Huang

Daniel Li (FAIR)

Armen Aghajanyan

Gargi Ghosh

Luke Zettlemoyer

Publisher

arXiv

Related Publications

May 24, 2024

SPEECH & AUDIO

NLP

DOC-RAG: ASR Language Model Personalization with Domain-Distributed Co-occurrence Retrieval Augmentation

Zhe Liu

May 06, 2024

CONVERSATIONAL AI

NLP

GAIA: a benchmark for general AI assistants

Gregoire Mialon , Yann LeCun , Thomas Scialom , Clémentine Fourrier , Thomas Wolf

April 14, 2024

SPEECH & AUDIO

NLP

Multi-task Learning for Front-end Text Processing in TTS

Yun Wang (Speech) , Arthur Hinsvark , Qing He , Shun Zhang , Wonjune Kang

April 14, 2024

SPEECH & AUDIO

NLP

CoLLD: Contrastive Layer-to-Layer Distillation for Compressing Multilingual Pre-Trained Speech Encoders

Heng-Jui Chang , Ning Dong (AI) , Ruslan Mavlyutov , Sravya Popuri , Andy Chung
