
DACT-BERT: Differentiable Adaptive Computation Time for an Efficient BERT Inference

RL1, Publisher: arXiv


AUTHORS:

Cristóbal Eyzaguirre, Felipe del Río, Vladimir Araujo, Álvaro Soto


ABSTRACT:

Large-scale pre-trained language models have shown remarkable results in diverse NLP applications. Unfortunately, these performance gains have been accompanied by a significant increase in computation time and model size, stressing the need to develop new or complementary strategies to increase the efficiency of these models. In this paper, we propose DACT-BERT, a differentiable adaptive computation time strategy for BERT-like models. DACT-BERT adds an adaptive computational mechanism to BERT's regular processing pipeline, which controls the number of Transformer blocks that need to be executed at inference time. By doing this, the model learns to combine the most appropriate intermediate representations for the task at hand. Our experiments demonstrate that, compared to the baselines, our approach excels in a reduced computational regime and remains competitive in less restrictive ones.
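To make the mechanism described in the abstract concrete, below is a minimal PyTorch sketch of a DACT-style adaptive-depth wrapper over BERT-like Transformer blocks. All names (`DACTHead`, `DACTBertSketch`, `halt_threshold`) are hypothetical, and the gated accumulation rule is a simplification of the idea described above, not the authors' released implementation.

```python
# Hypothetical sketch of a DACT-style adaptive-depth mechanism for a
# BERT-like encoder; names and details are assumptions, not the paper's code.
import torch
import torch.nn as nn


class DACTHead(nn.Module):
    """Per-block head: an intermediate prediction plus a halting signal."""

    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)
        self.halt = nn.Linear(hidden_size, 1)

    def forward(self, cls_state):
        y = torch.softmax(self.classifier(cls_state), dim=-1)  # intermediate answer
        h = torch.sigmoid(self.halt(cls_state))                # halting signal in (0, 1)
        return y, h


class DACTBertSketch(nn.Module):
    """Wraps a stack of Transformer blocks and combines their intermediate outputs."""

    def __init__(self, blocks, hidden_size: int, num_labels: int):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)  # each block: (batch, seq, hidden) -> same shape
        self.heads = nn.ModuleList([DACTHead(hidden_size, num_labels) for _ in blocks])
        self.num_labels = num_labels

    def forward(self, hidden_states, halt_threshold: float = 1e-2):
        batch = hidden_states.size(0)
        a = hidden_states.new_zeros(batch, self.num_labels)  # accumulated answer
        d = hidden_states.new_ones(batch, 1)                  # bound on remaining change
        for block, head in zip(self.blocks, self.heads):
            hidden_states = block(hidden_states)
            y, h = head(hidden_states[:, 0])  # read the [CLS] position
            # Gated, differentiable combination of intermediate answers:
            # each block overwrites the running answer only in proportion to h.
            a = h * y + (1 - h) * a
            d = d * (1 - h)
            # At inference, once future blocks can barely change `a`,
            # the remaining blocks are skipped to save computation.
            if not self.training and bool(d.max() < halt_threshold):
                break
        return a
```

In this sketch, every block runs during training so the halting signals can be learned end to end (adaptive-computation-time methods typically add a cost term that encourages early halting); at inference the loop exits as soon as the bound `d` indicates the prediction can no longer change meaningfully.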
