About
FROST
Field-programmable gate arrays (FPGAs) are powerful processing devices that can process data rapidly, in parallel, and with deterministic latency; however, designing algorithms for these devices can be challenging. FPGA Open Speech Tools is a project designed to make the development of audio signal processing algorithms for FPGAs more accessible and efficient. This involves creating both new hardware and new software for developers to use.
Two hardware platforms are currently being developed. The first is a low-cost platform based on the Altera DE10-Nano, called the Audio Mini. This platform is intended for hobbyists and classroom use to build custom audio processing projects. The high-performance platform, called the Audio Blade, is designed for research settings. Unlike the Audio Mini, which has two interfaces, the Audio Blade has eight audio interfaces: microphone in, headphone out, line in, line out, high-speed audio in, high-speed audio out, a microphone array input, and a speaker array output.
Development on FPGAs can be difficult and time consuming, so we are developing an autogeneration tool to help mitigate these problems. The tool uses MathWorks' Simulink to build complex audio processing systems in MATLAB, an accessible programming language; synthesizes the design into a VHDL component; and builds Linux loadable kernel modules that control the component's adjustable parameters at runtime. The autogeneration tool also streamlines the construction and compilation of a complete Quartus project, which produces the bitstream used to program the FPGA.
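As a rough illustration of the Quartus step the tool streamlines, the sketch below compiles an already-constructed project and converts the result into a bitstream using the Quartus command-line tools. The project name and output paths are assumptions for illustration, not the tool's actual interface.

```python
"""Illustrative sketch of the Quartus compile-and-convert step that the
autogeneration tool streamlines. Assumes the Quartus command-line tools
(quartus_sh, quartus_cpf) are on PATH; the project name and paths are
hypothetical."""
import subprocess

PROJECT = "audio_mini_passthrough"  # hypothetical Quartus project/revision name

# Run the full Quartus flow (synthesis, place-and-route, assembler).
subprocess.run(["quartus_sh", "--flow", "compile", PROJECT], check=True)

# Convert the generated .sof into a raw binary bitstream (.rbf) that the
# HPS can use to program the FPGA fabric.
subprocess.run(
    ["quartus_cpf", "-c",
     f"output_files/{PROJECT}.sof", f"{PROJECT}.rbf"],
    check=True,
)
```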
The loadable kernel modules can be accessed from the command line on the embedded hard processor system (HPS); however, this is inconvenient for practical applications. Therefore, we are also developing a web-based application that interfaces with these drivers at runtime. The app uses configuration files created by the autogeneration tool to render custom interfaces at runtime. These configuration files can either be present on the device at boot or downloaded through the app from a cloud service such as Amazon Web Services.
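For a sense of what the web app does on the developer's behalf, the snippet below reads and writes a driver parameter through an attribute file on the HPS. The attribute path, parameter name, and value format are hypothetical, not the actual driver interface.

```python
"""Minimal sketch of runtime parameter control on the HPS, assuming each
adjustable parameter is exposed as a sysfs-style attribute file by its
loadable kernel module. The path and value format below are hypothetical."""
from pathlib import Path

# Hypothetical attribute created by an autogenerated driver.
GAIN_ATTR = Path("/sys/class/fpga_audio/volume_control/gain")

def set_param(attr: Path, value: float) -> None:
    """Write a new value to the driver's attribute file."""
    attr.write_text(f"{value}\n")

def get_param(attr: Path) -> float:
    """Read the current value back from the driver."""
    return float(attr.read_text().strip())

set_param(GAIN_ATTR, 0.5)   # halve the volume
print(get_param(GAIN_ATTR))
```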
Several example projects for each platform are also under development. The most basic are simple pass-through designs, in which audio is passed unmodified from input to output; these serve as reference designs. Other examples include simple hearing aids, audio effects processors, and delay-and-sum beamformers.
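To make the beamforming example concrete, here is a software-only sketch of a delay-and-sum beamformer with integer-sample delays. It assumes a uniform linear microphone array and is an illustration of the technique, not the FPGA implementation.

```python
"""Illustrative delay-and-sum beamformer (integer-sample delays) for a
uniform linear microphone array. Software sketch only, not the FPGA design;
array geometry and parameters are assumptions."""
import numpy as np

def delay_and_sum(mic_signals, fs, spacing, angle_deg, c=343.0):
    """mic_signals: (num_mics, num_samples) array of synchronized captures.
    Steers the array toward angle_deg (0 = broadside) and sums the channels."""
    num_mics, num_samples = mic_signals.shape
    # Arrival delay of a plane wave from the look direction at each microphone.
    arrival = np.arange(num_mics) * spacing * np.sin(np.radians(angle_deg)) / c
    # Delay each channel so signals from the look direction line up in time.
    comp = np.round((arrival.max() - arrival) * fs).astype(int)
    out = np.zeros(num_samples)
    for sig, d in zip(mic_signals, comp):
        out[d:] += sig[:num_samples - d]
    return out / num_mics  # average to keep unity gain in the look direction
```

Signals arriving from the steering direction add coherently, while signals from other directions are attenuated.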
This project was supported by the National Institutes of Health (Grant number 5R44DC015443).
FROST Publications
AES 2020
Design of Audio Processing Systems with Autogenerated User Interfaces for System-on-Chip Field Programmable Gate Arrays
System-on-Chip (SoC) Field Programmable Gate Arrays (FPGAs) are well-suited for real time audio processing because of their high performance and low latency. However, interacting with FPGAs at runtime is complex and difficult to implement, which limits their adoption in real-world applications. We present an open source software stack that makes creating interactive audio processing systems on SoC FPGAs easier. The software stack contains a web app with an autogenerated graphical user interface, a proxy server, a deployment manager, and device drivers. An example design comprising custom audio hardware, a delay and sum beamformer, an amplifier, filters, and noise suppression is presented to demonstrate our software. This example design provides a reference that other developers can use to create high performance interactive designs that leverage the processing power of FPGAs.
View Publication
AES 2019
An Open Audio Processing Platform Using SoC FPGAs and Model-Based Development
The development cycle for high performance audio applications using System-on-Chip (SoC) Field Programmable Gate Arrays (FPGAs) is long and complex. To address these challenges, an open source audio processing platform based on SoC FPGAs is presented. Due to their inherently parallel nature, SoC FPGAs are ideal for low latency, high performance signal processing. However, these devices require a complex development process. To reduce this difficulty, we deploy a model-based hardware/software co-design methodology that increases productivity and accessibility for non-experts. A modular multi-effects processor was developed and demonstrated on our hardware platform. This demonstration shows how a design can be constructed and provides a framework for developing more complex audio designs that can be used on our platform.
View Publication