
PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models

Torsten Scholak, Nathan Schucher, Dzmitry Bahdanau

Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Published on Nov 1, 2021

Official link: https://aclanthology.org/2021.emnlp-main.779

PDF Code Talk

Tagged as: research haskell

TL;DR: Introducing PICARD - a simple and effective constrained beam search algorithm for any language model. PICARD helps to generate valid code, which is useful for program synthesis and semantic parsing. We achieve state-of-the-art results on both Spider and CoSQL.

Large pre-trained language models for textual data have an unconstrained output space; at each decoding step, they can produce any of tens of thousands of sub-word tokens. When fine-tuned to target constrained formal languages like SQL, these models often generate invalid code, rendering it unusable. We propose PICARD, a method for constraining auto-regressive decoders of language models through incremental parsing. PICARD helps to find valid output sequences by rejecting inadmissible tokens at each decoding step. On the challenging Spider and CoSQL text-to-SQL translation tasks, we show that PICARD transforms fine-tuned T5 models with passable performance into state-of-the-art solutions. Code and trained models are available here.
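The idea of rejecting inadmissible tokens at each decoding step can be illustrated with a minimal sketch. This is not the paper's Haskell implementation: the toy set of valid outputs, the `is_valid_prefix` check, and the `filter_candidates` helper are all hypothetical stand-ins for PICARD's incremental SQL parser.

```python
# Hedged sketch of PICARD-style constrained decoding. A real incremental
# parser checks whether a partial output can still be completed into valid
# SQL; here a tiny enumerated language stands in for that parser.

# Toy "language" of valid outputs (hypothetical; PICARD parses real SQL).
VALID_OUTPUTS = {"SELECT name FROM users", "SELECT id FROM users"}

def is_valid_prefix(text: str) -> bool:
    """Incremental check: can `text` still be extended to a valid output?"""
    return any(s.startswith(text) for s in VALID_OUTPUTS)

def filter_candidates(prefix: str, candidates: list[str]) -> list[str]:
    """Keep only the model's candidate tokens that leave the partial
    output completable; inadmissible tokens are rejected."""
    return [tok for tok in candidates if is_valid_prefix(prefix + tok)]

# Usage: suppose the model proposes these next tokens after "SELECT ".
kept = filter_candidates("SELECT ", ["name", "id", "DROP"])
# "DROP" is rejected because no valid output starts with "SELECT DROP".
```

In beam search, this filter runs once per hypothesis per step, so only valid partial programs survive into the next beam.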

Next Publication

Towards Neural Functional Program Evaluation

Dec 9, 2021

Are neural models bad at interpreting programs? For the AIPLANS NeurIPS workshop in 2021, we created a dataset of functional programs, and trained T5 to reduce them to their normal forms. Turns out it works even for challenging data splits!

Previous Publication

DuoRAT: Towards Simpler Text-to-SQL Models

Jun 1, 2021

It's like RAT-SQL, but simpler and faster.