{"id":2010,"date":"2021-03-25T02:22:26","date_gmt":"2021-03-25T02:22:26","guid":{"rendered":"https:\/\/www.fccm.org\/?page_id=2010"},"modified":"2021-04-26T15:57:12","modified_gmt":"2021-04-26T15:57:12","slug":"programs","status":"publish","type":"page","link":"https:\/\/www.fccm.org\/programs\/","title":{"rendered":"Programs"},"content":{"rendered":"\n
\"\"<\/figure>\n\n\n\n

All times shown in Eastern Daylight Time (UTC-4)
Links will be emailed to registrants.

Main Program

Monday, May 10th

8:30 - 8:45    Opening

8:45 - 9:45    Session 1: FPGA CAD
Session Chair: Kia Bazargan

XBERT: Xilinx Logical-Level Bitstream Embedded RAM Transfusion
Matthew Hofmann, Zhiyao Tang, Jonathan Orgill, Jonathan Nelson, David Glanzman, Brent Nelson and Andre Dehon

A Safari through FPGA-based Neural Network Compilation and Design Automation Flows
Patrick Plagwitz, Frank Hannig, Martin Ströbel, Christoph Strohmeyer and Jürgen Teich

Flexible Instrumentation for Live On-Chip Debug of Machine Learning Training on FPGAs
Daniel Holanda Noronha, Zhiqiang Que, Wayne Luk and Steve Wilton

9:45 - 10:30   Poster Session 1

10:30 - 11:30  Session 2: Machine Learning 1 (Inference and Time-Series Prediction)
Session Chair: Miaoqing Huang

BoostGCN: A Framework for Optimizing GCN Inference on FPGA
Bingyi Zhang, Rajgopal Kannan and Viktor Prasanna

FA-LAMP: FPGA-Accelerated Learned Approximate Matrix Profile for Time Series Similarity Prediction
Amin Kalantar, Zachary Zimmerman and Philip Brisk

HAO: Hardware-aware Neural Architecture Optimization for Efficient Inference
Zhen Dong, Yizhao Gao, Qijing Huang, John Wawrzynek, Hayden K.H. So and Kurt Keutzer

11:30 - 12:15  Keynote 1 (Maya Gokhale): FPGAs in High Performance Computing
Session Chair: Greg Stitt

12:15 - 13:30  Break for Lunch

13:30 - 14:30  Session 3: Applications 1 (Scientific Computing and Robotics)
Session Chair: He Li

GAME: Gaussian Mixture Model Mapping and Navigation Engine on Embedded FPGA
Yuanfan Xu, Zhaoliang Zhang, Jincheng Yu, Jianfei Cao, Haolin Dong, Zhengfeng Huang, Yu Wang and Huazhong Yang

Systematically migrating an operational microphysics parameterisation to FPGA technology
James Targett, Michael Lange, Olivier Marsden and Wayne Luk

Solving Large Top-K Graph Eigenproblems with a Memory and Compute-optimized FPGA Design
Francesco Sgherzi, Alberto Parravicini, Marco Siracusa and Marco Santambrogio

14:30 - 15:40  Poster Session 2

15:40 - 17:00  Session 4: Architecture
Session Chair: Skand Hurkat

Compute-Capable Block RAMs for Efficient Deep Learning Acceleration on FPGAs
Xiaowei Wang, Vidushi Goyal, Jiecao Yu, Valeria Bertacco, Andrew Boutros, Eriko Nurvitadhi, Charles Augustine, Ravi Iyer and Reetuparna Das

Benchmarking Optane DC Persistent Memory Modules on FPGAs
Jialiang Zhang, Nicholas Beckwith and Jing Li

FANS: FPGA Accelerated Near-Storage Sorting Solution
Weikang Qiao, Jihun Oh, Licheng Guo, Mau-Chung Frank Chang and Jason Cong

Mocarabe: High-Performance Time-Multiplexed Overlays for FPGAs
Frederick Tombs, Alireza Mellat and Nachiket Kapre

17:00 - 18:00  Break

18:00          Demo Night
Session Chair: Gabriel Weisz


Tuesday, May 11th

8:30 - 9:40    Session 5: Applications 2 (Medical, Biology, Physics)
Session Chair: John Wickerson

HEDAcc: FPGA-based Accelerator for High-order Epistasis Detection (best paper candidate)
Gaspar Ribeiro, Nuno Neves, Sergio Santander-Jiménez and Aleksandar Ilic

The Importance of Being X-Drop: High Performance Genome Alignment on Reconfigurable Hardware (best paper candidate)
Alberto Zeni, Guido Walter Di Donato, Lorenzo Di Tucci, Marco Rabozzi and Marco Santambrogio

Upgrade of FPGA Range-Limited Molecular Dynamics to Handle Hundreds of Processors
Chunshu Wu, Tong Geng, Sahan Bandara, Chen Yang, Vipin Sachdeva, Woody Sherman and Martin Herbordt

FPGA-accelerated Iterative Reconstruction for Transmission Electron Tomography (short)
Linjun Qiao, Guojie Luo, Wentai Zhang and Ming Jiang

9:40 - 10:30   Poster Session 3

10:30 - 11:30  Session 6: Machine Learning 2 (CNNs)
Session Chair: Eriko Nurvitadhi

Optimized FPGA-based Deep Learning Accelerator for Sparse CNN using High Bandwidth Memory
Chao Jiang, Dave Ojika, Bhavesh Patel and Herman Lam

unzipFPGA: Enhancing FPGA-based CNN Engines with On-the-Fly Weights Generation
Stylianos I. Venieris, Javier Fernandez-Marques and Nicholas Lane

ESCA: Event-Based Split-CNN Architecture with Data-Level Parallelism on UltraScale+ FPGA (short) (best paper candidate)
Pankaj Bhowmik, Md Jubaer Hossain Pantho, Joel Mandebi Mbongue and Christophe Bobda

3D-VNPU: A Flexible Accelerator for 2D/3D CNNs on FPGA (short)
Huipeng Deng, Jian Wang, Huafeng Ye, Shanlin Xiao, Xiangyu Meng and Zhiyi Yu

11:30 - 12:00  Keynote 2 (Thomas Rondeau): DARPA’s FPGA Killer
Session Chair: Greg Stitt

12:00 - 13:15  Break for Lunch

13:15 - 14:35  Session 7: High-Level Synthesis
Session Chair: Dilip Vasudevan

Clockwork: Resource-Efficient Static Scheduling for Multi-Rate Image Processing Applications on FPGAs (best paper candidate)
Dillon Huff, Steve Dai and Pat Hanrahan

Probabilistic Scheduling in High-Level Synthesis
Jianyi Cheng, John Wickerson and George Constantinides

Extending High-Level Synthesis for Task-Parallel Programs
Yuze Chi, Licheng Guo, Jason Lau, Young-kyu Choi, Jie Wang and Jason Cong

HLS-Compatible, Embedded-Processor Stream Link (short)
Eric Micallef, Yuanlong Xiao and Andre Dehon

An Empirical Study of the Reliability of High-Level Synthesis Tools (short)
Yann Herklotz, Zewei Du, Nadesh Ramanathan and John Wickerson

14:35 - 15:15  Poster Session 4

15:15 - 16:15  Session 8: Security and Cloud Computing
Session Chair: Mirjana Stojilovic

Cloud FPGA Cartography using PCIe Contention
Shanquan Tian, Ilias Giechaskiel, Wenjie Xiong and Jakub Szefer

Trusted Configuration in Cloud FPGAs
Shaza Zeitouni, Jo Vliegen, Tommaso Frassetto, Dirk Koch, Ahmad-Reza Sadeghi and Nele Mentens

Remote Power Attacks on the Versatile Tensor Accelerator in Multi-Tenant FPGAs (short) (best paper candidate)
Shanquan Tian, Shayan Moini, Adam Wolnikowski, Daniel Holcomb, Russell Tessier and Jakub Szefer

Runtime Detection of Probing/Tampering on Interconnecting Buses (short)
Zhenyu Xu, Thomas Mauldin, Qing Yang and Tao Wei

16:15          Awards and Closing

Keynotes

Monday, May 10th

Title: FPGAs in High Performance Computing
Speaker: Maya Gokhale

Abstract:

The repurposing of FPGAs for computing was initiated three decades ago as extreme compute accelerators to supercomputers. Today, FPGA acceleration is commonplace in a diverse range of settings, from extra-terrestrial to cloud. The US Department of Energy’s exascale computer architectures will rely on extreme scale compute accelerators to reach their 2 Exaflop target. In contrast to the early FPGA computing vision and to the success of FPGAs in the cloud, the DOE exascale machine accelerators will exclusively be GPUs. In this talk, I will discuss factors determining HPC system architecture choices, challenges facing FPGA computing in traditional HPC workloads, and novel opportunities for FPGA acceleration in the expanding HPC arena.

Bio:

Maya Gokhale is a Distinguished Member of Technical Staff at the Lawrence Livermore National Laboratory, USA. Her career spans research conducted in academia, industry, and national laboratories. Maya received a Ph.D. in Computer Science from the University of Pennsylvania. Her current research interests include data-intensive heterogeneous architectures and reconfigurable computing. Maya is a co-recipient of an R&D 100 award for a C-to-FPGA compiler, co-recipient of four patents related to memory architectures for embedded processors, reconfigurable computing architectures, and cybersecurity, and co-author of more than one hundred and forty technical publications. Maya is on the editorial board of the Proceedings of the IEEE and an associate editor of IEEE Micro. She is a co-recipient of the National Intelligence Community Award, a member of Phi Beta Kappa, and an IEEE Fellow.


Tuesday, May 11th

Title: DARPA’s FPGA Killer
Speaker: Thomas Rondeau

Abstract:

FPGAs have many uses: embedded systems, glue logic on hardware devices, application accelerators, and so on. In rough performance terms, an application will run a hundred times faster on an FPGA than on a general-purpose computer. But an ASIC could execute that application a thousand times faster. The problem is that ASIC development is slow, hard, and expensive (even more so than FPGA development). Moore’s Law has allowed us to cram more and more devices onto integrated circuits so that custom devices can now be fabricated with dozens or hundreds of components, including multicore CPUs, GPUs, accelerators, and memories. Such devices take years to implement and cost millions of dollars to fabricate, and the result is often inflexible and unable to adapt to changing requirements. DARPA’s Domain-Specific System on Chip program aims to improve the most significant aspects of SoC development and deployment so that complex SoCs targeted at multiple simultaneous applications can be implemented in months using automated, high-level software tools, and reconfigured or even reprogrammed at run-time to accommodate changing circumstances. Is this the end of the FPGA?

Bio:

Dr. Tom Rondeau joined DARPA as a program manager in May 2016. His research interests include adaptive and reconfigurable radios, improving the development cycle for new signal-processing techniques, and creating general-purpose electromagnetic systems.

Prior to joining DARPA, Dr. Rondeau was the maintainer and lead developer of the GNU Radio project and a consultant on signal processing and wireless communications. He worked as a visiting researcher with the University of Pennsylvania and as an Adjunct with the IDA Center for Communications Research in Princeton, NJ.

Dr. Rondeau holds a Ph.D. in electrical engineering from Virginia Tech and won the 2007 Outstanding Dissertation Award in math, science, and engineering from the Council of Graduate Schools for his work in artificial intelligence in wireless communications.
