Programming Abstractions for Natural Language & Intelligent Systems

Rapid progress in machine learning has sparked a stampede toward new kinds of user interfaces based on natural interaction. Excitement over AI-based user interfaces, however, has run ahead of the engineering tools that we need to implement them. Engineers complain of a new category of pitfalls that arise from building systems around machine learning.

We are building programming language abstractions to help manage the complexity of integrating AI with real-world applications. Our interlocking set of abstractions, collectively called Opal, adds new features to mainstream programming languages.


We published a short paper on support for natural language understanding at SysML 2018:

    @inproceedings{opal-sysml18,
        author = {Alex Renda and Harrison Goldstein and Sarah Bird and
                  Chris Quirk and Adrian Sampson},
        title = {Programming Language Support for Natural Language Interaction},
        booktitle = {SysML},
        year = {2018},
    }

Typed Natural Language Understanding

A new category of cloud services for natural language understanding (NLU) has emerged, including LUIS, Dialogflow, and Lex. These tools make it easy to get started developing conversational user interfaces, but they introduce engineering challenges of their own.

The core problem is that these services require out-of-band configuration in a web GUI. Developers specify a language model by configuring the intents and entities to extract from user utterances. Then they must duplicate the same structure in client code, writing nested conditionals to handle each combination of entities found in an utterance. Worse, the GUI configuration and the client code can easily drift out of sync, leading to subtle correctness bugs or outright production failures.
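To make the duplication concrete, here is a sketch of the kind of nested dispatch code a developer writes today against an NLU service's response. The response shape and all names here are illustrative, not any particular service's schema:

```typescript
// Hypothetical NLU response shape; real services differ in detail.
interface NluResponse {
  intent: string;
  entities: { type: string; value: string }[];
}

function handle(resp: NluResponse): string {
  // Nothing checks that these string literals match the GUI
  // configuration, so renaming an intent or entity in the web
  // console silently breaks this code.
  if (resp.intent === "Schedule") {
    const who = resp.entities.find(e => e.type === "Person");
    const day = resp.entities.find(e => e.type === "Day");
    if (who && day) {
      return `schedule with ${who.value} on ${day.value}`;
    } else if (who) {
      return `schedule with ${who.value} (no day given)`;
    }
    return "schedule what, with whom?";
  } else if (resp.intent === "List") {
    return "list events";
  }
  return "unrecognized intent";
}
```

Every new intent or optional entity multiplies these branches, and the compiler offers no help when the GUI-side model changes.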

We observe that algebraic types can capture the natural structure of NLU language models. Instead of asking programmers to develop configurations and code separately, we propose a domain-specific type language that puts the code in charge of the end-to-end language system. A program in our type DSL looks like this:

free-text Person;
free-text Time;
keywords Day = "Sunday" | "Monday" | ... | "Saturday";
alias Date = { day: Day, time?: Time };
trait Intent =
  | <Schedule> { who: Person, when: Date }
  | <Move> { from: Date, to: Date }
  | <List> {};

Each declaration in our DSL simultaneously declares a data type for client code and an element in the NLU service configuration. Together, the composite system guarantees type safety: that handler code agrees with the structure of NLU responses for the application’s language model.

Our prototype implementation configures models and generates TypeScript interface declarations.
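As a sketch of what those generated declarations could look like for the DSL program above (the exact names and encoding are illustrative, not our prototype's actual output), the trait naturally becomes a tagged union, so a `switch` over intents is checked for exhaustiveness:

```typescript
// Hypothetical TypeScript output for the DSL program above.
type Person = string;  // free-text slots map to strings
type Time = string;
type Day = "Sunday" | "Monday" | "Tuesday" | "Wednesday" |
           "Thursday" | "Friday" | "Saturday";
// Renamed from `Date` to avoid clashing with the built-in Date type.
interface DateSpec { day: Day; time?: Time; }

// The trait becomes a discriminated union over a `kind` tag.
type Intent =
  | { kind: "Schedule"; who: Person; when: DateSpec }
  | { kind: "Move"; from: DateSpec; to: DateSpec }
  | { kind: "List" };

function handle(intent: Intent): string {
  switch (intent.kind) {
    case "Schedule":
      return `schedule with ${intent.who} on ${intent.when.day}`;
    case "Move":
      return `move from ${intent.from.day} to ${intent.to.day}`;
    case "List":
      return "list events";
  }
}
```

Because the same declarations drive the NLU configuration, a model change that removes or renames an entity surfaces as a type error in `handle` rather than a runtime surprise.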

Hypothetical Worlds

Unlike GUIs or command-line interactions, AI-based user interfaces are intrinsically ambiguous. Natural language can have multiple interpretations even when NLU is perfect, and predictive applications need to take action without any explicit guidance from the user. In both cases, the right interpretation depends on the hypothetical outcome of taking a given action.

Opal’s hypothetical world construct expresses nondeterministic choice. Programs use it to search a space of possible interpretations when the input is ambiguous. Inside an Opal hyp block, code looks natural—as if it were interacting with the real world—but its effects are isolated until the program commits the resulting changes.

For example, a calendar application might support an ambiguous command to schedule a meeting without a specific day. It can use hypothetical worlds to propose schedule modifications:

for (day in weekdays) {
  world = hyp {
    calendar.add(event, day);
    if (!constraints_violated(calendar)) {
      commit;  // publish this world's changes
    }
  };
}
This example only commits an event addition when adding it would satisfy the user’s constraints on the final schedule.
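Setting Opal's actual mechanism aside, the isolate-then-commit pattern can be approximated in plain TypeScript by running the speculative code against a copy of the state. This is a deliberately simplified sketch (the `Calendar` type, `hyp`, and `commit` here are illustrative, not Opal's API):

```typescript
type Calendar = { events: { name: string; day: string }[] };

// Run `body` against a deep copy of the state and return the copy,
// so the caller decides whether to commit. Mimics hyp-block isolation.
function hyp(state: Calendar, body: (world: Calendar) => void): Calendar {
  const world: Calendar = JSON.parse(JSON.stringify(state));
  body(world);
  return world;
}

function commit(state: Calendar, world: Calendar): void {
  state.events = world.events;  // publish the hypothetical changes
}

// Toy constraint: at most one event per day on Monday.
const violates = (c: Calendar) =>
  c.events.filter(e => e.day === "Monday").length > 1;

// Try each day hypothetically; commit the first world that fits.
const calendar: Calendar = { events: [{ name: "standup", day: "Monday" }] };
for (const day of ["Monday", "Tuesday", "Wednesday"]) {
  const world = hyp(calendar, w => w.events.push({ name: "review", day }));
  if (!violates(world)) {
    commit(calendar, world);
    break;
  }
}
```

Here Monday is rejected because the hypothetical world violates the constraint, so the real calendar is only updated with the Tuesday placement. Opal generalizes this idea beyond simple copying, but the programming model is the same: speculate freely, then commit only the worlds that work out.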