FCCM 2020 Workshop: The Future of FPGA-Acceleration in Cloud and Datacenters

Field-Programmable Gate Arrays (FPGAs) are becoming integral components of general-purpose heterogeneous cloud computing systems and datacenters due to their ability to serve as energy-efficient, domain-customizable accelerators. All major players, such as Microsoft, Amazon, Intel, Baidu, Huawei, and IBM, now expose FPGAs to application developers in their cloud and datacenter infrastructures. Beyond commercial infrastructure, a growing number of projects are under way across the globe, in academia and other research organizations, to provide the benefits of acceleration and flexibility to remote users. Current developments are taking place behind closed doors, with companies and institutions disclosing very little about the challenges they encounter or the approaches currently used to tackle them. This workshop will bring together experts in various fields around the cloud, FPGAs, computer architecture, and applications to 1) discuss the status of FPGA acceleration in cloud computing and 2) explore the future and the challenges in the broad adoption of FPGAs in datacenters.

Topics of interest include FPGA integration, middleware, resource virtualization, security, programming and applications.


Organizers

Christophe Bobda – University of Florida, Gainesville FL, cbobda@ece.ufl.edu

Bio: Dr. Bobda is Professor of Computer Engineering at the University of Florida in Gainesville, FL. He received the Licence in mathematics from the University of Yaounde, Cameroon, in 1992, and the diploma in computer science and the Ph.D. degree (summa cum laude) in computer science from the University of Paderborn, Germany, in 1999 and 2003, respectively. In June 2003 he joined the Department of Computer Science at the University of Erlangen-Nuremberg, Germany, as a postdoc. In 2005 Dr. Bobda was appointed Assistant Professor at the University of Kaiserslautern, Germany, where he set up the research lab for self-organizing embedded systems, which he led until October 2007 before being appointed Professor at the University of Potsdam in 2007. Dr. Bobda led the Computer Engineering working group at the University of Potsdam until 2010, when he accepted a faculty position as Associate Professor at the University of Arkansas, where he founded the Smart Embedded Systems Lab. He was tenured and promoted to the rank of Full Professor at the University of Arkansas in 2016. Dr. Bobda has published more than 150 papers in leading journals and conferences, and his work has been downloaded more than 9,000 times. His research interests include embedded systems, systems-on-chip and systems-on-FPGA, and computer architecture, with applications in embedded imaging, high-performance computing, and cybersecurity. Dr. Bobda's research has been sponsored by multiple national and international organizations, including the National Science Foundation (NSF), the Air Force Research Lab, the Navy, the German Research Foundation (DFG), and the Franco-German University.

Peter Hofstee – IBM POWER Systems Performance, Austin TX, hofstee@us.ibm.com

Bio: H. Peter Hofstee is a distinguished research staff member at IBM and a part-time professor at TU Delft, best known for his contributions to heterogeneous computing as chief architect of the Synergistic Processor Elements in the Cell Broadband Engine, used in the PlayStation 3 and in the first supercomputer to reach sustained petaflop operation. He currently focuses on optimizing system performance for big data, analytics, and cloud, including the use of accelerated computation. Recent contributions include coherently attached reconfigurable acceleration on POWER7, paving the way for the Coherent Accelerator Processor Interface (CAPI) on POWER8 and POWER9. Peter holds more than 100 issued patents.


Program

All times shown in Central Daylight Time (UTC-5)

Time – Title – Presenter – Slides
11:00 – Opening – Peter Hofstee, IBM (Austin, TX)
11:05 – NSF Funding Opportunities and Priorities in CNS – Erik Brunvand, NSF – Slides: Available Here
11:30 – The Future of FPGAs Needs Open Middleware Now – Paul Chow, University of Toronto – Slides: Available Here
12:00 – Secure and Virtualized FPGA Management for FPGAs in Cloud and Datacenters – Dirk Koch, University of Manchester
12:30 – cloudFPGA: Promoting FPGAs To Become First-class Citizens in Datacenters – Francois Abel, IBM Research Europe – Slides: Available Here
1:00 – Break
1:15 – The Open Cloud FPGA Testbed: Supporting Experiments on Emerging Datacenter Configurations – Martin Herbordt, Boston University, and Miriam Leeser, Northeastern University – Slides: Available Here
1:45 – openRole: Do we need a POSIX for FPGAs? – Burkhard Ringlein, IBM Research Europe – Slides: Available Here
2:15 – Security and Privacy Concerns for the FPGA-Accelerated Cloud and Datacenters – Russell Tessier, University of Massachusetts Amherst – Slides: Available Here
2:45 – Cloud-scale Key Value Store in FPGA – John W. Lockwood, Algo-Logic – Slides: Available Here
3:15 – Break
3:30 – Powering Cloud and Datacenters with Xilinx Adaptive Compute Acceleration Platforms – Cathal McCabe, Xilinx – Slides: Available Here
4:00 – Global-Scale FPGA-Accelerated Deep Learning Inference with Microsoft's Project Brainwave – Gabriel Weisz, Microsoft – Slides: Available Here
4:30 – Single-Tenant Cloud FPGA Security – Jakub Szefer, Yale University – Slides: Available Here
5:00 – Gator Reconfigurable Cloud Computing: Hardware Virtualization Challenges – Christophe Bobda, University of Florida – Slides: Available Here

Talks

Opening – Peter Hofstee, IBM

Bio: H. Peter Hofstee is a distinguished research staff member at IBM and a part-time professor at TU Delft, best known for his contributions to heterogeneous computing as chief architect of the Synergistic Processor Elements in the Cell Broadband Engine, used in the PlayStation 3 and in the first supercomputer to reach sustained petaflop operation. He currently focuses on optimizing system performance for big data, analytics, and cloud, including the use of accelerated computation. Recent contributions include coherently attached reconfigurable acceleration on POWER7, paving the way for the Coherent Accelerator Processor Interface (CAPI) on POWER8 and POWER9. Peter holds more than 100 issued patents.


NSF Funding Opportunities and Priorities in CNS – Erik Brunvand, NSF

Presentation Slides: Available Here

Abstract: I will present an overview of the Computer and Network Systems (CNS) division at the National Science Foundation, with a focus on the Computer Systems Research (CSR) cluster in particular. As a division within the Computer and Information Science and Engineering (CISE) directorate, CNS seeks to develop a better understanding of the fundamental properties of computer and network systems and to create better abstractions and tools for designing, building, analyzing, and measuring future systems. The Division also supports the computing infrastructure that is required for experimental computer science, and it coordinates cross-divisional activities that foster the integration of research, education, and workforce development. As such, it’s a bit of a “kitchen sink” of topics and responsibilities. I’ll give an overview of the major activities and research areas supported by CNS, and by the Computer Systems Research (CSR) cluster within CNS in particular.

Bio: Erik Brunvand joined the NSF as a Program Director in the Computer and Network Systems (CNS) division in September 2019. He is primarily managing programs in the Computer Systems Research (CSR) cluster within CNS. He is on leave, under the IPA program, from the University of Utah in Salt Lake City, where he is a Professor in the School of Computing. His research interests are in computer architecture, specifically architectures for computer graphics, asynchronous and self-timed systems, VLSI design, and arts/technology collaborations.


The Future of FPGAs Needs Open Middleware Now – Paul Chow, University of Toronto

Presentation Slides: Available Here

Abstract: Since Microsoft published their work on Catapult in 2014, FPGAs seem to have become more than an exotic and niche technology that can only be tamed by the magicians of hardware. The evidence shows that they have become available in many data centers, but have they really become successful? In this talk I will first define success for FPGAs and argue that the door to success has only just opened a crack. Without a greater effort to make FPGA application development and deployment almost as easy as software, the door will soon close, and FPGAs will remain at the periphery as hardware that augments the infrastructure of the data centers. The result will be that FPGAs will not grow into a common and important technology for computation, nor become the commercial success that the vendors are striving to achieve. At the University of Toronto, we are building a middleware platform with the vision that FPGAs are first-class citizens in a heterogeneous computing system, a.k.a. the data center, and our goal is to enable application portability and reusability of FPGA code just like you can do with software. I will present an overview of our efforts and successes. Building a truly viable middleware is a significant challenge and we have only shown feasibility. We believe a unified and well-managed open-source effort is required to achieve true success for FPGAs.

Bio: Paul Chow received the B.A.Sc. degree with honours in Engineering Science, and the M.A.Sc. and Ph.D. degrees in Electrical Engineering from the University of Toronto, Toronto, Ont., in 1977, 1979 and 1984, respectively.  In 1984 he joined the Computer Systems Laboratory at Stanford University, Stanford, CA, as a Research Associate, where he was a major contributor to an early RISC microprocessor design called MIPS-X, one of the first microprocessors with an on-chip instruction cache.  He joined the Department of Electrical and Computer Engineering at the University of Toronto in January 1988, where he is now a Professor and holds the Dusan and Anne Miklas Chair in Engineering Design.  His research interests include high performance computer architectures, reconfigurable computing, heterogeneous cloud computing, embedded and application-specific processors, and field-programmable gate array architectures and applications.


Secure and Virtualized FPGA Management for FPGAs in Cloud and Datacenters – Dirk Koch, University of Manchester

Abstract: To deliver the full potential of FPGA acceleration in cloud environments and datacenters, industry will have to move from an Acceleration-as-a-Service model to an FPGA-as-a-Service model. Combined with adequate FPGA virtualization techniques, this allows resource pooling and better utilization of the available FPGA resources, as well as sharing of the DDR memory capacity and network bandwidth of an FPGA board, by serving multiple applications with different resource requirements on the same physical FPGA node.

To realize this vision, this talk will look into the wider ecosystem required to implement virtualized FPGA-as-a-Service systems in datacenters. This includes tools and design flows, the runtime FPGA management and virtualization layers, as well as the required hardware security. In particular for the latter aspect, we will show that recently demonstrated security attacks can be mitigated by scanning the FPGA bitstream or the corresponding netlist. This talk will make the point that the basic components for virtualized FPGA systems are all available and that it is time to integrate them into commercial settings.

Bio: Dirk Koch is a senior lecturer in the Advanced Processor Technologies Group at the University of Manchester. His main research interests are run-time reconfigurable systems based on FPGAs, embedded systems, computer architecture, VLSI, and hardware security. Dirk developed techniques and tools for self-adaptive distributed embedded control systems based on FPGAs. Current research projects include database acceleration using FPGA-based stream processing, HPC and exascale computing, as well as reconfigurable instruction set extensions for CPUs and the use of FPGAs in datacenters.
Dirk Koch is author of the book “Partial Reconfiguration on FPGAs” and a co-editor of the book “FPGAs for Software Programmers”.


cloudFPGA: Promoting FPGAs To Become First-class Citizens in Datacenters – Francois Abel, IBM Research Europe

Presentation Slides: Available Here

Abstract: The miniaturization of CMOS technology has reached a scale at which FPGAs are starting to integrate scalar CPUs, specialized AI engines, and an ever-increasing number of hard IP controllers such as PCIe, DDR4, Ethernet, and encryption cores. Equipped with such compute density and reconfigurable capability, FPGAs have the potential to disaggregate a significant part of the data processing from the traditional converged server in emerging heterogeneous datacenters. With cloudFPGA, we introduce a platform and a framework that turn FPGAs into standalone compute nodes within the datacenter. This approach sets the FPGAs free from the CPUs by directly connecting them to the datacenter network as standalone network-attached accelerators.

In this talk we’ll give an overview of the cloudFPGA project and share our vision for the use of FPGAs in the cloud and their deployment in hyperscale datacenters. The presentation will give the status of the various hardware, software, and cloud integration components, as well as the tool chain provided to operate our standalone network-attached FPGAs in a datacenter infrastructure. We will finish with an invitation to contribute to the cloudFPGA project, which we are about to open-source.

Bio: Francois Abel is a research staff member in the Cloud & AI Systems Research department of the IBM Zurich Research Laboratory (Switzerland). His research interests are in high-speed networking, with an emphasis on the systems architecture and VLSI design of server interconnect fabrics and network accelerators. Francois is the father of the IBM cloudFPGA project, a disaggregated cloud and computing infrastructure to deploy FPGAs at large scale in hyperscale datacenters.


The Open Cloud FPGA Testbed:  Supporting Experiments on Emerging Datacenter Configurations – Martin Herbordt, Boston University and Miriam Leeser, Northeastern University

Presentation Slides: Available Here

Abstract: FPGAs are now widespread in datacenters, performing system tasks like SDN and metrology, system applications like encryption and compression, and applications-as-a-service like machine learning and big data analysis. But, as is the case universally with commercial clouds, these FPGAs are not directly accessible to outside users; as a result, there is no existing infrastructure that supports research on emerging datacenter configurations. To address this deficiency, the NSF has funded the Open Cloud Testbed (OCT) to be built and operated by the University of Massachusetts, Boston University, and Northeastern University.

In this talk we first describe current datacenter hardware configurations and the many places where FPGAs (potentially) reside within each node. We then give an overview of how these will be supported in the Open Cloud FPGA Testbed (OCFT) and a brief catalog of potential research projects. Results from a survey of potential users will be presented. We will leave substantial time for feedback and suggestions.

Professor Martin Herbordt is a Co-PI of the Open Cloud Testbed, an NSF CCRI Grand project; and, with Professor Miriam Leeser, Co-Lead of the Open Cloud FPGA Testbed.

Bio: Since joining Boston University in 2001, a primary focus of Martin’s work has been various aspects of high-performance computing using accelerators such as FPGAs and GPUs: applications, including bioinformatics, molecular docking, molecular dynamics, and computational electrodynamics; development environments; and the architecture of accelerator-centric clusters, i.e., compute clusters where accelerators communicate with each other directly. This work is being supported by grants from the National Institutes of Health, the National Science Foundation, the Army Research Lab, and various industrial partners. Other projects underway include methods for power- and thermal-aware application development (supported by the MGHPCC) and HPC in the Cloud (with the Massachusetts Open Cloud). Recently completed work has involved fault-tolerant computation in space (supported by the Naval Research Lab) and mapping algorithms to FPGA-based clusters (supported by the MIT Lincoln Lab). Martin is a Fellow of the Hariri Institute for Computing and Computational Science and affiliated with the Center for Computational Science.

Previously, Martin was on the faculty of the University of Houston (1994 to 2001), where he founded the Computer Architecture and Automated Design research group, which was funded by grants from the Compaq Computer Corporation (now part of HP), the National Science Foundation (including a CAREER grant), and the THECB through the Advanced Technology Program.  In 1999 he was a visiting scientist at the National Center for Atmospheric Research investigating issues in supercomputer interconnection networks.  Martin was the Associate Director for Operations of the Texas Center for Computational and Information Sciences from 1997-2001.  He received the 2000-2001 College of Engineering Award for Excellence in Research.

Martin received the B.A. degree in Physics and Philosophy from the University of Pennsylvania and the Ph.D. degree in Computer Science from the University of Massachusetts. At the University of Massachusetts, he was twice an IBM Doctoral Fellow and also an ARPA Doctoral Fellow. Before that, Martin was at GCA Corporation, the inventor of production semiconductor chip fabrication, where he held various positions including project manager for control systems software and staff scientist for alignment systems.

Martin is the author or co-author of 7 book chapters and more than 100 refereed papers, and has given more than 50 invited seminars and colloquia. He has received multiple best paper awards (Int. Conf. on Field Programmable Logic and Applications, Int. Conf. on Computer Design) and 10 best paper nominations. Martin was the recipient of an IBM Faculty Award in 2008 for excellence in research. He has been active with various conferences, particularly IPDPS, FCCM, and FPL. Together with Chip Weems, Martin was General Chair of the 2013 edition (27th) of the International Parallel and Distributed Processing Symposium, and together with Miriam Leeser was General Chair of the 2014 edition (22nd) of the IEEE International Symposium on Field-Programmable Custom Computing Machines.

Bio: Miriam Leeser is a Professor in the Department of Electrical and Computer Engineering at Northeastern University. She received her BS degree in Electrical Engineering from Cornell University, and Diploma and Ph.D. degrees in Computer Science from Cambridge University in England. After completing her Ph.D., she joined the faculty of Cornell University, Department of Electrical Engineering. In January 1996 she joined the faculty of Northeastern University, where she is head of the Reconfigurable and GPU Computing Laboratory and a member of the Computer Engineering group. Her research interests include application acceleration with Field Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs), programming paradigms for heterogeneous computers, computer arithmetic, and reproducibility in high-performance computing. In 1992 she received an NSF Young Investigator Award. She is an associate editor of the ACM Transactions on Reconfigurable Technology and Systems, the EURASIP Journal on Embedded Systems, and the International Journal of Reconfigurable Computing. She has been active in recruiting women and underrepresented minorities at all levels to the University, including serving on the NEU Strategies and Tactics for Recruiting to Improve Diversity and Excellence (STRIDE) committee. She is a senior member of ACM, IEEE, and SWE.


openRole: Do we need a POSIX for FPGAs? – Burkhard Ringlein, IBM Research Europe

Presentation Slides: Available Here

Abstract: The emergence of FPGAs as compute accelerators in the Cloud and other multi-user environments inevitably leads to a split of the FPGA design into a user application programmed by developers and a platform-specific part controlled by the infrastructure provider. This split of the FPGA logic into a vendor-controlled SHELL and a user-controlled ROLE allows the necessary introduction of different privilege levels within an FPGA design and potentially improves the reusability of user applications.

The SHELL-ROLE architectural pattern can be observed widely across different FPGA platforms, but despite the strong similarity of approaches, all implementations differ in their details. The differences in the SHELL-ROLE interface between platform and application limit the portability of applications for no good reason and slow the adoption of all FPGA-based platforms. This situation resembles the early days of operating systems running on CPUs, before standards like POSIX introduced unified interfaces between the OS and the application code. We propose to address this with openROLE, an attempt to formulate similar standards for FPGA platforms.
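
To make the POSIX analogy concrete, the sketch below imagines what a small, vendor-neutral, C-style contract between a SHELL and a ROLE could look like. Everything in it (the function names, the signatures, and the choice of C as the description language) is a hypothetical illustration and not the openROLE proposal itself, which the abstract does not specify.

    /* Hypothetical sketch of a POSIX-like SHELL/ROLE contract.
     * NOT the openROLE specification; names and signatures are invented
     * for illustration. The idea: every SHELL exposes the same small set
     * of services to the user ROLE, regardless of the FPGA platform. */
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/types.h>

    typedef int role_stream_t;  /* handle to a SHELL-provided network stream */

    /* Network I/O arbitrated by the SHELL (analogous to POSIX read/write). */
    role_stream_t role_stream_open(uint16_t udp_port);
    ssize_t       role_stream_recv(role_stream_t s, void *buf, size_t len);
    ssize_t       role_stream_send(role_stream_t s, const void *buf, size_t len);

    /* Access to board memory, shared among tenants and policed by the SHELL. */
    int role_mem_read(uint64_t addr, void *buf, size_t len);
    int role_mem_write(uint64_t addr, const void *buf, size_t len);

    /* Control/status registers the infrastructure provider may inspect. */
    int role_reg_read(uint32_t offset, uint32_t *value);
    int role_reg_write(uint32_t offset, uint32_t value);

A ROLE written against such a contract could, in principle, be retargeted to any compliant SHELL without modification, which is exactly the portability that the abstract argues current platforms fail to provide.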

Bio: Burkhard Ringlein is a Predoctoral Researcher in the Cloud & AI Systems Research department of the IBM Research Zurich Laboratory and pursues his PhD in cooperation with the Department of Computer Science (Computer Architecture) of Friedrich-Alexander University Erlangen-Nürnberg, Germany. As part of the cloudFPGA project, he is focusing on distributed reconfigurable architectures in the context of high-performance computing and AI acceleration. Besides this, he is interested in designing compiler stacks for supporting heterogeneous computing platforms. He joined IBM Research Zurich in 2018 as a Master's student and earned his Master of Science degree in November 2018 at Friedrich-Alexander University Erlangen-Nürnberg.


Security and Privacy Concerns for the FPGA-Accelerated Cloud and Datacenters – Russell Tessier, University of Massachusetts Amherst

Presentation Slides: Available Here

Abstract: This talk addresses voltage-based attacks in multi-tenant FPGAs. These types of attacks present security and privacy concerns for next-generation FPGA cloud deployments. Two important issues involving FPGAs are addressed. First, we fully characterize the effects of activating ring-oscillator-based power wasters on Arria 10 FPGAs. We show the ability to induce timing faults in logic circuits located throughout the chip by activating varying numbers of power wasters. Arria 10 devices show a significant, instantaneous drop in voltage once the power wasters are activated, leading to a fault profile. In the second part of the talk, we describe how power wasters can be used to extract the encryption key from an RSA circuit via fault injection. Our attack scenario is described in depth, and the resulting faults are characterized.

Bio: Russell Tessier is a Professor of Electrical and Computer Engineering and Sr. Associate Dean of Academic Affairs in the College of Engineering at the University of Massachusetts Amherst. He received S.M. and Ph.D. degrees from the Massachusetts Institute of Technology in Cambridge, MA. Prof. Tessier is the head of the Reconfigurable Computing Group at UMass. He has published over 150 papers on FPGAs and reconfigurable computing. He was a co-founder of Virtual Machine Works, an FPGA-based logic emulation company. The company was acquired by Mentor Graphics, which currently markets the emulation product under its Veloce brand.


Cloud-scale Key Value Store in FPGA – John W. Lockwood, CEO, Algo-Logic

Presentation Slides: Available Here

Abstract: In-memory object stores are a key building block of cloud infrastructure. They allow network-attached clients to share data by name over the network. Algo-Logic has implemented a Key Value Store (KVS) entirely in FPGA logic that enables cloud providers to host object storage systems with several orders of magnitude more throughput and less latency than a traditional in-memory KVS implemented in software. Clients use standard Application Programming Interfaces (APIs) on compute clients to Create, Read, Update, and Delete objects, which in turn send packets directly to the KVS running in FPGA logic. Algo-Logic's KVS operates on the data directly from the Ethernet port and leverages on-chip BlockRAM, UltraRAM, or LRAM to access the data with the lowest latency. The solution has been ported to multiple FPGA cards, including the Xilinx Alveo, Intel PAC, Terasic DE5Net, and Nallatech P385. A software interface modeled on the HiRedis C/C++ API can be used by software developers accustomed to interfacing with a Redis KVS. A single 1U Dell server equipped with three Alveo U50 cards has been shown capable of achieving a throughput of 490 million I/O operations per second (IOPS).
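
Because the abstract describes a client interface modeled on the HiRedis C/C++ API, a plain hiredis client can illustrate the programming model an application developer would see. The sketch below is an assumption-laden illustration only: the endpoint address and port are placeholders, and it uses the stock open-source hiredis library rather than Algo-Logic's actual client code, purely to show the Create/Read/Update/Delete flow.

    /* Minimal sketch of the Redis-style programming model described above.
     * NOT Algo-Logic's client library: the KVS address below is a placeholder
     * and we simply assume the FPGA KVS answers Redis-compatible commands.
     * Build with: gcc kvs_demo.c -lhiredis */
    #include <stdio.h>
    #include <hiredis/hiredis.h>

    int main(void) {
        /* Connect to the (hypothetical) network-attached FPGA KVS. */
        redisContext *c = redisConnect("10.0.0.42", 6379);
        if (c == NULL || c->err) {
            fprintf(stderr, "connection to KVS failed\n");
            return 1;
        }

        /* Create/Update an object by key. */
        redisReply *r = redisCommand(c, "SET %s %s", "sensor:17", "42.0");
        if (r) freeReplyObject(r);

        /* Read it back. */
        r = redisCommand(c, "GET %s", "sensor:17");
        if (r && r->type == REDIS_REPLY_STRING)
            printf("sensor:17 = %s\n", r->str);
        if (r) freeReplyObject(r);

        /* Delete the object. */
        r = redisCommand(c, "DEL %s", "sensor:17");
        if (r) freeReplyObject(r);

        redisFree(c);
        return 0;
    }

The point of the example is that the client side stays ordinary software: the latency and throughput gains claimed in the abstract come from where the commands are served (FPGA logic operating directly on data from the Ethernet port), not from a new client API.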

Bio: John W. Lockwood, CEO, is an expert in building FPGA-accelerated applications. He has founded three companies in the areas of low-latency networking, Internet security, and electronic commerce, and has worked at the National Center for Supercomputing Applications (NCSA), AT&T Bell Laboratories, IBM, and Science Applications International Corp (SAIC). As a professor at Stanford University, he managed the NetFPGA program from 2007 to 2009 and grew the beta program from 10 to 1,021 cards deployed worldwide. As a tenured professor, he created and led the Reconfigurable Network Group within the Applied Research Laboratory at Washington University in St. Louis. He has published over 100 papers and patents on topics related to networking with FPGAs and served as principal investigator on dozens of federal and corporate grants. He holds BS, MS, and PhD degrees in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign and is a member of IEEE, ACM, and Tau Beta Pi.


Powering Cloud and Datacenters with Xilinx Adaptive Compute Acceleration platforms – Cathal McCabe, Xilinx

Presentation Slides: Available Here

Abstract: The demise of Moore’s law has introduced significant challenges for next generation computing systems. A new approach is needed to address the ever-increasing performance, network, and storage demands while balancing energy consumption, cost, and other constraints. Xilinx Adaptive Compute platforms couple production ready hardware with new open-source software abstractions to address these new challenges.

This talk will highlight some of the challenges for cloud and datacenter systems and applications, give an overview of the latest Alveo and next generation Versal Adaptive Compute platforms and software abstractions from Xilinx, and introduce a new community-based research initiative from Xilinx.

Bio: Cathal is a senior applications engineer in the Xilinx CTO department, where he manages the Xilinx University Program in EMEA. He is responsible for supporting academics working with the latest Xilinx Adaptive Compute technologies for teaching and research. He is also part of the PYNQ project team. PYNQ is a Python and Jupyter Notebook based open-source framework which makes it easier for developers to design for and use Xilinx platforms.

Before joining Xilinx, he was a senior engineer in the Science and Technology Facilities Council (STFC) in the UK where he was the Europractice manager for FPGA, Embedded, and ESL design tools and flows. He was responsible for supporting universities across Europe in the use of advanced microelectronic design flows.


Global-Scale FPGA-Accelerated Deep Learning Inference with Microsoft’s Project Brainwave – Gabriel Weisz, Microsoft

Presentation Slides: Available Here

Abstract: The computational challenges involved in accelerating deep learning have attracted the attention of computer architects across academia and industry. While many deep learning accelerators are currently theoretical or exist only in prototype form, Microsoft’s Project Brainwave is in massive-scale production in data centers across the world. Project Brainwave runs on our Catapult networked FPGAs, which provide latency that is low enough to enable “Real-time AI” – deep learning inference that is fast enough for interactive services and achieves peak throughput at a batch size of 1. Project Brainwave powers the latest version of Microsoft’s Bing search engine, which uses cutting-edge neural network models that are much larger than the neural networks used in typical benchmarks.

In this talk, I’ll discuss how Project Brainwave’s FPGA-based and software components work together to accelerate both first-party workloads – like Bing search – and third-party applications using neural network models, like high energy physics and manufacturing quality control. I’ll also talk about how FPGAs are the perfect platform for the fast-changing world of deep neural networks, since their reconfigurability allows us to update our accelerator in place to keep up with the state of the art.

Bio: Gabriel Weisz is a Principal Hardware Engineer at Microsoft, and works on compiling neural networks to the Brainwave neural network accelerator. He holds Ph.D. and M.S. degrees in Computer Science from Carnegie Mellon University, and a B.S. with a double major in Computer Science and Electrical Engineering from Cornell University.


Single-Tenant Cloud FPGA Security – Jakub Szefer, Yale University

Presentation Slides: Available Here

Abstract: Cloud FPGAs have emerged as an important computing paradigm in recent years due to the ability of users to gain access to FPGA resources quickly, flexibly, and on demand. However, as public cloud providers make FPGAs available to many, potentially mutually untrusting users, the security of these Cloud FPGA deployments needs to be analyzed, and defenses developed for protecting the Cloud FPGAs. This talk will discuss Cloud FPGA security from the perspective of side and covert channel attacks, and will focus on the single-tenant scenario. The talk will cover our recent work on thermal channels that can be used to create covert channels between users renting the same FPGA over time. It will also discuss our other recent work on voltage-based channels that leverage custom circuits instantiated inside the FPGAs to measure voltage changes. Finally, the talk will discuss our newest research on Cloud FPGA fingerprinting, and will end with an overview of some potential defenses and open challenges in securing Cloud FPGAs.

Bio: Jakub Szefer’s research interests are at the intersection of computer architecture and hardware security. His research focuses on secure processor architectures, cloud security, hardware security and verification, physically unclonable functions, hardware FPGA implementations of cryptographic algorithms, and Cloud FPGA security. His research is supported by the National Science Foundation and industry donations. He joined Yale University in summer 2013 as an Assistant Professor of Electrical Engineering, where he started the Computer Architecture and Security Laboratory (CASLAB). Prior to joining Yale, he received Ph.D. and M.A. degrees in Electrical Engineering from Princeton University, where he worked with Prof. Ruby B. Lee on secure hardware architectures. He received a B.S. with highest honors in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign. He received the NSF CAREER award in 2017. Most recently, Jakub authored a new book, “Principles of Secure Processor Architecture Design”, published in 2018, and was promoted to IEEE Senior Member in 2019.


Gator Reconfigurable Cloud Computing: Hardware Virtualization Challenges – Christophe Bobda, University of Florida

Presentation Slides: Available Here

Abstract: Field-Programmable Gate Arrays (FPGAs) are becoming important components of commercially available cloud computing systems. However, FPGAs are not yet sufficiently abstracted within existing software ecosystems. Contrary to how applications are transparently scheduled across general-purpose processors, software processes need to explicitly provision and control communications with hardware circuits within the FPGAs. The Gatorrecc cloud infrastructure proposes a novel virtualization framework that aims at FPGA multi-tenancy and leverages VirtIO to implement an efficient communication scheme between virtual machines and the FPGAs. It avoids the overhead of context switches between virtual-machine and host address spaces by using the in-kernel network stack to transfer packets to the FPGAs. Prototyping the FPGA virtualization stack on an OpenStack setup running the KVM hypervisor demonstrated a 2x to 35x performance improvement compared to the state of the art, and higher FPGA utilization compared to single-tenant deployment.

Bio: Dr. Bobda is Professor of Computer Engineering at the University of Florida in Gainesville, FL. He received the Licence in mathematics from the University of Yaounde, Cameroon, in 1992, and the diploma in computer science and the Ph.D. degree (summa cum laude) in computer science from the University of Paderborn, Germany, in 1999 and 2003, respectively. In June 2003 he joined the Department of Computer Science at the University of Erlangen-Nuremberg, Germany, as a postdoc. In 2005 Dr. Bobda was appointed Assistant Professor at the University of Kaiserslautern, Germany, where he set up the research lab for self-organizing embedded systems, which he led until October 2007 before being appointed Professor at the University of Potsdam in 2007. Dr. Bobda led the Computer Engineering working group at the University of Potsdam until 2010, when he accepted a faculty position as Associate Professor at the University of Arkansas, where he founded the Smart Embedded Systems Lab. He was tenured and promoted to the rank of Full Professor at the University of Arkansas in 2016. Dr. Bobda has published more than 150 papers in leading journals and conferences, and his work has been downloaded more than 9,000 times. His research interests include embedded systems, systems-on-chip and systems-on-FPGA, and computer architecture, with applications in embedded imaging, high-performance computing, and cybersecurity. Dr. Bobda's research has been sponsored by multiple national and international organizations, including the National Science Foundation (NSF), the Air Force Research Lab, the Navy, the German Research Foundation (DFG), and the Franco-German University.