Capra is a research group at Cornell in the Computer Science and Electrical and Computer Engineering departments. Our research studies abstractions and efficiency through the interaction of programming languages and computer architecture.
Hardware Accelerator Generation
We’re designing Calyx, an intermediate language (IL) and infrastructure for building compilers that generate hardware accelerators. Calyx works by representing both hardware-like structure and software-like control together. You can try Calyx in your browser.
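The structure/control split can be sketched with a toy interpreter. This is an illustrative Python analogue, not Calyx syntax or its real semantics: "groups" stand in for hardware-like structure, and a separate control tree gives the software-like schedule.

```python
# Toy model of Calyx's structure/control split (illustrative only).
# Groups are named structural actions over registers; a separate
# control tree decides when each group runs.

def run(control, groups, state):
    """Execute a control tree: ("seq", [...]) or a group name (leaf)."""
    if isinstance(control, str):      # leaf: enable one group
        groups[control](state)
    else:
        _tag, children = control      # in real Calyx, "par" would run
        for child in children:        # children concurrently; this toy
            run(child, groups, state) # model runs everything in order

# Structure: two groups that update a register.
groups = {
    "init": lambda s: s.update(r=0),
    "incr": lambda s: s.update(r=s["r"] + 1),
}

# Control: a software-like schedule over the hardware-like groups.
program = ("seq", ["init", ("seq", ["incr", "incr", "incr"])])

state = {}
run(program, groups, state)
print(state["r"])  # 3
```

The point of the split is that a compiler can optimize the control tree (e.g., merging or parallelizing groups) without touching the structural definitions.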
High-level synthesis (HLS) tools can translate C-like languages into hardware accelerators, but the semantic gap between software and hardware can yield unpredictable performance and semantics. Dahlia uses a substructural type system to model hardware resources and their constraints, statically rejecting HLS designs that make unpredictable area–latency trade-offs. You can try Dahlia in your browser.
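The flavor of substructural resource tracking can be sketched dynamically in Python. This is a toy checker, not Dahlia's type system: each memory bank is an affine resource with one read port per cycle, so consuming a bank twice in the same cycle is an error, mirroring how Dahlia rejects unrolled loops that over-subscribe a bank.

```python
# Toy affine-resource checker in the spirit of Dahlia (illustrative only).

class BankConflict(Exception):
    pass

class BankedMemory:
    def __init__(self, banks):
        self.banks = banks
        self.used = set()           # banks consumed this cycle

    def read(self, index):
        bank = index % self.banks   # cyclic banking
        if bank in self.used:
            raise BankConflict(f"bank {bank} already consumed this cycle")
        self.used.add(bank)

    def next_cycle(self):
        self.used.clear()           # resources replenish each cycle

mem = BankedMemory(banks=2)
mem.read(0); mem.read(1)            # unroll by 2: distinct banks, OK
mem.next_cycle()
try:
    mem.read(0); mem.read(2)        # both map to bank 0: rejected
except BankConflict as e:
    print("rejected:", e)
```

Dahlia performs this check statically at compile time; the runtime check here only illustrates which programs get rejected and why.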
We have identified a new category of geometry bugs that arise in graphics programming and other domains that deal with matrices and vectors. They occur when programmers lose track of the coordinate systems and reference frames that underpin a computation. Gator is a language for GPU shading whose type system rules out geometry bugs by generating correct-by-construction transformation code.
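A dynamic Python sketch of the guarantee Gator provides statically: vectors carry their reference frame, mixing frames is an error, and a change of frame is the only sanctioned way to move between them. The names (`Vec`, `transform`) are illustrative, not Gator's syntax.

```python
# Frame-tagged vectors: a runtime analogue of Gator's static checks.

class Vec:
    def __init__(self, frame, xyz):
        self.frame = frame
        self.xyz = tuple(xyz)

    def __add__(self, other):
        if self.frame != other.frame:   # the classic geometry bug
            raise TypeError(f"mixing frames {self.frame!r} and {other.frame!r}")
        return Vec(self.frame, (a + b for a, b in zip(self.xyz, other.xyz)))

def transform(vec, dst, matrix_fn):
    """Change of frame: the only way to produce a vector in a new frame."""
    return Vec(dst, matrix_fn(vec.xyz))

model_pos = Vec("model", (1.0, 0.0, 0.0))
world_off = Vec("world", (0.0, 2.0, 0.0))

# model_pos + world_off          # would raise TypeError: mixing frames
to_world = lambda v: (v[0] + 5.0, v[1], v[2])   # toy model->world map
world_pos = transform(model_pos, "world", to_world)
print((world_pos + world_off).xyz)  # (6.0, 2.0, 0.0)
```

Gator catches the commented-out error at compile time, and its type system can insert the correct transformation automatically rather than requiring the programmer to supply it.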
Braid is a programming language for heterogeneous programming, where a single source program targets different hardware units. We have applied it to real-time graphics programming on CPU–GPU systems. Braid compiles to WebGL, so you can try it out in your browser.
Search-Based Compilation for Digital Signal Processing
Digital signal processors (DSPs) are ubiquitous and energy efficient, but making them fast requires an expert programmer. The difficulty stems from their complex vector instruction sets and simple, in-order pipelines. To get the best results, programmers must carefully pack and move data in vector registers to enable compact execution. Diospyros uses equality saturation to automatically discover efficient vector packing schemes.
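The packing problem can be shown in miniature. The sketch below uses brute-force search rather than equality saturation (Diospyros's actual technique), but the objective is the same: pack independent scalar multiplies into 2-wide vector multiplies to minimize instruction count. Instruction names are made up for illustration.

```python
# Miniature vector-packing search (brute force, not equality saturation).

def pack(ops):
    """ops: independent ('mul', dst, a, b) scalar instructions.
    Returns a minimal schedule mixing scalar 'mul' and 2-wide 'vmul'."""
    if not ops:
        return []
    best = [ops[0]] + pack(ops[1:])          # leave ops[0] scalar
    for j in range(1, len(ops)):             # or pair it with a later op
        a, b = ops[0], ops[j]
        vec = ('vmul', (a[1], b[1]), (a[2], b[2]), (a[3], b[3]))
        rest = ops[1:j] + ops[j + 1:]
        cand = [vec] + pack(rest)
        if len(cand) < len(best):
            best = cand
    return best

prog = [('mul', 'x0', 'a', 'b'),
        ('mul', 'x1', 'c', 'd'),
        ('mul', 'x2', 'e', 'f'),
        ('mul', 'x3', 'g', 'h')]
sched = pack(prog)
print(len(sched))  # 2: two vmuls cover four scalar multiplies
```

Equality saturation scales this idea up: instead of enumerating schedules one at a time, it grows an e-graph of all equivalent programs under rewrite rules and then extracts the cheapest one.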
Image compression formats like JPEG are ubiquitous in computer vision, but they were designed for human perception, not for modern vision algorithms. We examine the potential for customizing JPEG compression for specific vision tasks, simultaneously improving the compression ratio and task accuracy.
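The main knob here is JPEG's quantization table: coarser quantization steps discard more detail, trading accuracy for compression, and task-specific tuning picks steps that a vision model tolerates better than a human viewer would. A minimal sketch, with made-up table values and a tiny 2x2 block standing in for JPEG's 8x8 blocks:

```python
# Quantization: the JPEG stage this project customizes per vision task.

def quantize(block, table, scale=1.0):
    """Quantize a block of DCT coefficients (row-major nested lists)."""
    return [[round(c / (q * scale)) for c, q in zip(brow, qrow)]
            for brow, qrow in zip(block, table)]

def dequantize(qblock, table, scale=1.0):
    return [[v * q * scale for v, q in zip(vrow, qrow)]
            for vrow, qrow in zip(qblock, table)]

block = [[80.0, 12.0], [9.0, 3.0]]     # toy "DCT" coefficients
table = [[16, 11], [12, 14]]           # toy quantization steps
q = quantize(block, table, scale=2.0)  # coarser steps => more zeros
print(q)  # [[2, 1], [0, 0]]
```

The zeros produced by coarse quantization are what the entropy coder compresses away; choosing per-frequency steps for a specific task decides which zeros are affordable.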
Vision accelerators that run on real-time video process nearly identical frames at every time step. This project introduces activation motion compensation, a technique for approximately incremental acceleration of computer vision. It works by measuring motion in the input video and translating it to motion in the intermediate results of convolutional neural networks.
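The core move can be sketched in a few lines: instead of re-running a convolutional layer on a near-identical frame, translate the previous frame's activation map by the measured motion vector. Real activation motion compensation also handles subpixel motion and error accumulation; this toy version shows only the translation step, zero-filling the uncovered region.

```python
# Sketch of activation motion compensation's reuse step.

def shift_activation(act, dx, dy, fill=0.0):
    """Shift a 2D activation map right by dx and down by dy."""
    h, w = len(act), len(act[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = act[sy][sx]
    return out

prev = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
# Scene content moved one pixel right: reuse prev instead of recomputing.
print(shift_activation(prev, dx=1, dy=0))
# [[0.0, 1, 2], [0.0, 4, 5], [0.0, 7, 8]]
```

Shifting stored activations is far cheaper than re-running the convolution, which is where the acceleration comes from.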
Most camera systems are optimized for photography, so they waste time and energy when they capture images for computer vision. This project designs a vision mode for cameras and their associated signal processing logic that saves energy by producing lower-quality, less-processed image data.
Programming Abstractions for Natural Language & Intelligent Systems
Despite rapid progress in machine learning capabilities, integrating ML into full applications remains complex and error-prone. Opal is a new set of language features that make it easier to build correct software that relies on AI, especially natural language understanding.
PhD & MS Students
Undergrad & MEng
- Caleb Kim
- Crystal Hu
- David Chen
- Evan Williams
- Jan-Paul Vincent
- Mateo Guynn
- Meredith Hu
- Mia Daniels
- Nathaniel Navarro
- Pai Li
- Jenna Edwards Program Manager
- Susan Garry Research Associate
- Aditi Kabra BS 2019
- Alaia Solko-Breslin BS & MEng 2022
- Alex Renda BS 2018
- Alex Wong MEng 2019
- Apurva Koti BS 2020
- Arthur Wang MEng 2018
- Chris Gyurgyik BS 2021
- Daniel Sainati BS & MEng 2018
- David Siher BS 2022
- Edwin Peguero MS 2021
- Evan Su MEng 2018
- Harrison Goldstein BS & MEng 2018
- Henry Liu BS & MEng 2020
- Horace He BS 2020
- Irene Yoon BS 2019
- Jacob Delgado-López visitor, University of Puerto Rico
- Jasper Liang BS & MEng 2022
- Karen Zhang BS 2021
- Katy Voor BS 2020
- Kenneth Fang BS 2020
- Kimberly Baum BS 2020
- Mark Buckler PhD 2019
- Maya Ifekauche visitor, Auburn University
- Patrick LaFontaine BS 2021
- Paul Joo BS 2021
- Richard Wang BS 2021
- Sachille Atapattu
- Samuel Thomas BS 2020
- Shiyu Wang MEng 2018
- Theodore Bauer BS 2019
- Vivi Ye BS 2022
- Yinnon Sanders BS 2019
- YoungSeok (Alex) Na
- Zhijing Li MS 2021
Adrian has tenure.
Neil’s paper about compiler auto-vectorization for the RISC-V vector extension has been accepted to IEEE Micro.
We won a Google Research Scholar Program award.
Check out the preprint of our MICRO 2021 paper on software-defined vector machines.
Our paper on software-defined vector processing was accepted to MICRO 2021!