Keynote Speakers
Keynote 1: "Language Models – The Most Important Compute Challenge of Our Time"
ChatGPT recently became one of the fastest-growing new applications in history, thanks to intriguing text generation capabilities that can answer questions, write poetry, and even solve problems, with far-reaching implications for many fields. Multimodal language models combine understanding, generation, and reasoning capabilities into systems that are now being integrated in fundamental ways into products across the tech industry. The possibilities are extraordinary, but much research remains to make these systems reliable and trustworthy, as well as to integrate them into applications. Additionally, the computational challenges behind these models are extreme: systems for training and deploying them must be highly scalable and run at extraordinary efficiency. In this talk, I’ll discuss the work we have been doing at NVIDIA to optimize systems for generative model training and inference, and highlight some of the challenges that remain for future work.
Bryan Catanzaro is Vice President of Applied Deep Learning Research at NVIDIA, where he leads a team of AI researchers working on chip design, audio and speech, language modeling, graphics and vision, with the goal of finding practical new ways to use AI for NVIDIA’s products and workflows. DLSS, Megatron, CUDNN, Pascaline, WaveGlow and DeepSpeech are some of the projects he’s helped create. Bryan received his PhD in EECS from the University of California, Berkeley.
Keynote 2: "Digital Twins: Clash of Mathematics, AI, Data and High Performance Simulation"
Understanding complexity and finding certainty in such uncertain times can deliver major operational advantages for any economic, industrial, or scientific asset. Operational teams are under increasing pressure to optimize performance while minimizing risks and increasing productivity, all set against a volatile global backdrop. A digital twin is a virtual representation - a true-to-reality simulation of physics and materials - of a real-world physical asset or system, which is continuously updated. Digital twins aren’t just for inanimate objects and people: they can be a virtual representation of computer networking architecture used as a sandbox for cyberattack simulations, or a replica of a fulfillment center process used to test human-robot interactions before activating certain robot functions in live environments. The applications are as wide as the imagination. Digital twin technology, with data at its core, is helping scientists, engineers, biologists, and even economists gain control and understanding over their resources and assets. By connecting the right people to the right data and the right processes (mathematics and simulation), one can gain greater end-to-end insights and quickly identify the actions and strategies needed to deliver sustainable performance improvements. In this talk, we discuss how mathematics, data, simulation, and AI make digital twins possible.
Dr. Simon See is currently Head of the AI Technology Centre and AI Nation at NVIDIA. He is also a Professor at Shanghai Jiaotong University (China) and King Mongkut's University of Technology (Thailand), and the Chief Scientific Computing Advisor to BGI (China). Prior to NVIDIA, he worked for DSO National Laboratories, IBM, SGI, and Sun Microsystems. Dr. See graduated from the University of Salford with a Doctorate in Applied Mathematics/Engineering. His research interests are computer architecture and systems, simulation, and applied mathematics. He has published 70+ peer-reviewed papers.
Keynote 3: "Challenges in Computing Industry"
In this talk, we'll share seven industrial computing challenges arising from emerging computing architectures, including data-centric architecture and converged architecture for diversified computing power. We also suggest some research directions that may help address these challenges.
Dr. Tingyao WU currently holds the position of Director of Technology Planning for the computing industry at the European Research Institute, Huawei Technologies. He received his BSc degree from Peking University in 2003 and his PhD degree from the Catholic University of Leuven (KU Leuven), Belgium, in 2009, both in the area of speech recognition and machine learning. Prior to joining Huawei, he worked at Bell Labs, Nokia (formerly Alcatel-Lucent), for 7 years. His research interests include hardware-software co-design for general-purpose computing and AI computing, and general AI technologies. Dr. Tingyao WU has published more than 30 peer-reviewed publications and holds 9 patents.

Instructors
Wen-mei W. Hwu received the PhD degree in computer science from the University of California, Berkeley, in 1987. He is the Walter J. (“Jerry”) Sanders III-Advanced Micro Devices Endowed Chair of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. His research interests include computer architecture, implementation, software for high-performance computer systems, and parallel processing. He is a principal investigator (PI) for the petascale Blue Waters system, a co-director of the Intel- and Microsoft-funded Universal Parallel Computing Research Center (UPCRC), and PI for the world’s first NVIDIA CUDA Center of Excellence. He is the chief scientist of the Illinois Parallel Computing Institute and the director of the IMPACT lab.
For his contributions to the areas of compiler optimization and computer architecture, he received the 1993 Eta Kappa Nu Outstanding Young Electrical Engineer Award, the 1994 University Scholar Award of the University of Illinois, the 1997 Eta Kappa Nu Holmes MacDonald Outstanding Teaching Award, the 1998 ACM SIGARCH Maurice Wilkes Award, the 1999 ACM Grace Murray Hopper Award, the 2001 Tau Beta Pi Daniel C. Drucker Eminent Faculty Award, the 2006 most influential ISCA paper award, and the University of California, Berkeley, Distinguished Alumni in Computer Science Award. From 1997 to 1999, he was the chairman of the Computer Engineering Program at the University of Illinois. In 2007, he introduced a new engineering course in massively parallel processing with David Kirk of NVIDIA. He is a fellow of IEEE and of the ACM.
Juan Gómez Luna is a senior researcher and lecturer at SAFARI Research Group @ ETH Zürich. He received the BS and MS degrees in Telecommunication Engineering from the University of Sevilla, Spain, in 2001, and the PhD degree in Computer Science from the University of Córdoba, Spain, in 2012. Between 2005 and 2017, he was a faculty member of the University of Córdoba. His research interests focus on GPU and heterogeneous computing, processing-in-memory, memory systems, and hardware and software acceleration of medical imaging and bioinformatics. He is the lead author of PrIM (https://github.com/CMU-SAFARI/prim-benchmarks), the first publicly-available benchmark suite for a real-world processing-in-memory architecture, and Chai (https://github.com/chai-benchmarks/chai), a benchmark suite for heterogeneous systems with CPU/GPU/FPGA.
Izzat El Hajj is an Assistant Professor in the Department of Computer Science at the American University of Beirut. His research interests are in application acceleration and programming support for parallel processors and memory technologies, with a particular interest in GPUs and processing-in-memory. He is a co-author of the textbook Programming Massively Parallel Processors: A Hands-on Approach, 4th edition. He received his M.S. and Ph.D. in Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign, where he worked with the IMPACT Research Group led by Prof. Wen-mei Hwu and received the Dan Vivoli Endowed Fellowship. Prior to that, he received his B.E. in Electrical and Computer Engineering at the American University of Beirut, where he graduated with high distinction and received the Distinguished Graduate Award.
Antonio J. Peña holds a BS + MS degree in Computer Engineering (2006), and MS and PhD degrees in Advanced Computer Systems (2010, 2013), from Jaume I University of Castellón, Spain. He is currently a Leading Researcher in the Computer Sciences Department at the Barcelona Supercomputing Center (BSC), where he leads the “Accelerators and Communications for HPC” Group. Antonio is a Ramón y Cajal Fellow, a former Marie Skłodowska-Curie Fellow, a former Juan de la Cierva Fellow, and a recipient of the 2017 IEEE TCHPC Award for Excellence for Early Career Researchers in High Performance Computing. He is also an ERC Consolidator Grant laureate and a Senior Member of the IEEE and ACM. Antonio is also teaching and research staff at the Universitat Politècnica de Catalunya (UPC). His research interests in the area of runtime systems and programming models for high-performance computing include resource heterogeneity and communications.
Leonidas Kosmidis is a Leading Researcher at the Barcelona Supercomputing Center (BSC) and the Universitat Politècnica de Catalunya (UPC). He holds a PhD and an MSc degree in Computer Architecture from UPC and a BSc in Computer Science from the University of Crete, Greece. He leads the research on embedded GPUs for safety-critical systems, at both the hardware and system software levels, within the CAOS (Computer Architecture/Operating Systems) group. He is the PI of several projects funded by the European Space Agency (ESA), such as GPU4S (GPU for Space) and the Horizon Europe METASAT project, as well as industry-funded projects with Airbus Defence and Space, which focus on the adoption of GPUs in space and avionics systems. He also participates in several standardisation efforts regarding GPU programming in safety-critical systems. Dr. Kosmidis is the recipient of the RISC-V Educator of the Year Award in 2019 from the RISC-V Foundation and an Honourable Mention for the EuroSys Roger Needham PhD Award in 2018, which is awarded to the best PhD thesis in Europe.
Vicenç Beltran received his Engineering and PhD degrees in Computer Science in 2004 and 2009, respectively, from the Technical University of Catalonia (UPC). Since 2009, he has been a Senior Researcher at the Barcelona Supercomputing Center (BSC), where he works on parallel and distributed programming models, domain-specific languages, operating systems, and tools for HPC systems. He has been a Work-Package leader in several EU projects such as DEEP, DEEP-ER, DEEP-EST, and INTERTWinE. Moreover, he has also participated in industrial projects with LG, REPSOL, and Huawei. He has over 40 publications in refereed journals and international conference proceedings. He has also participated in many training activities related to the OmpSs-2 task-based programming model. He currently leads the System Tools and Advanced Runtimes (STAR) group that develops the OmpSs-2 programming model.
Xavier Teruel received the Technical Engineering and the Engineering degrees in Computer Science from the Universitat Politècnica de Catalunya (UPC) in 2003 and 2006, respectively. Since 2006, Xavier has been working as a researcher within the Parallel Programming Models group of the Computer Sciences department at the Barcelona Supercomputing Center (BSC).
His research interests include operating systems, programming languages, compilers, runtime systems, and applications for high-performance architectures and multiprocessor systems, with a focus on shared-memory environments.
Marc Jordà received his M.S. in Computer Architecture, Networks and Systems in 2012 from the Universitat Politècnica de Catalunya, Barcelona. Since then, he has been a research engineer at the Barcelona Supercomputing Center – Centro Nacional de Supercomputación, working on several topics in the field of high-performance computing, including application acceleration with GPUs, GPU hardware simulation, and performance analysis.