Towards Neural Functional Program Evaluation

Torsten Scholak Jonathan Pilault Joey Velez-Ginorio

arXiv:2112.04630 [cs.CL]

Published on Dec 9, 2021

Official link: https://arxiv.org/abs/2112.04630

Links: PDF · Code · Poster

Tagged as: research haskell

TL;DR: Are neural models bad at interpreting programs? For the AIPLANS NeurIPS workshop in 2021, we created a dataset of functional programs, and trained T5 to reduce them to their normal forms. Turns out it works even for challenging data splits!

This paper explores the capabilities of current transformer-based language models for program evaluation of simple functional programming languages. We introduce a new program generation mechanism that allows control over syntactic sugar for semantically equivalent programs. T5 experiments reveal that neural functional program evaluation performs surprisingly well, achieving high 90% exact program match scores for most in-distribution and out-of-distribution tests. Using pretrained T5 weights has significant advantages over random initialization. We present and evaluate on three datasets to study generalization abilities that are specific to functional programs based on: type, function composition, and reduction steps.
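To give a flavor of the evaluation task, here is a minimal sketch in Haskell of what "reducing a program to its normal form" means. This is not the paper's actual grammar or generator; the expression type, the small-step reducer, and all names below are illustrative assumptions, showing only the general idea of repeatedly contracting redexes until none remain.

```haskell
-- Illustrative sketch only: a tiny expression language with numbers,
-- addition, and list construction, reduced step by step to a normal form.
data Expr
  = Lit Int
  | Add Expr Expr
  | Nil
  | Cons Expr Expr
  deriving (Eq, Show)

-- One small-step reduction, if any redex remains.
step :: Expr -> Maybe Expr
step (Add (Lit a) (Lit b)) = Just (Lit (a + b))
step (Add a b) = case step a of
  Just a' -> Just (Add a' b)
  Nothing -> Add a <$> step b
step (Cons h t) = case step h of
  Just h' -> Just (Cons h' t)
  Nothing -> Cons h <$> step t
step _ = Nothing

-- Iterate until no redex remains: the normal form.
normalize :: Expr -> Expr
normalize e = maybe e normalize (step e)

main :: IO ()
main = print (normalize (Cons (Add (Lit 1) (Lit 2)) Nil))
-- prints: Cons (Lit 3) Nil
```

In the paper's setup, the model never sees such an interpreter; it is trained purely on (program, normal form) text pairs and must learn the reduction behavior from examples.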

Next Publication

UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models

Jan 16, 2022

Let's unify all structured-knowledge grounded tasks into the same text-to-text framework!

Previous Publication

PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models

Nov 1, 2021

Introducing PICARD - a simple and effective constrained beam search algorithm for any language model. PICARD helps to generate valid code, which is useful for program synthesis and semantic parsing. We achieve SoTA on both Spider and CoSQL.