July 2025 Meetup of the NL-RSE and HW Acceleration NL communities

Artist's impression of AI accelerators
Image generated using Sora

Using AI accelerators for scientific applications

In recent years, various types of specialized hardware accelerators (e.g., GPUs, FPGAs, TPUs, DSPs) have become the standard platforms for boosting performance in computationally intensive applications across specific domains. Scientific workloads have been very successful both in exploiting GPUs for acceleration and in influencing their development. Nowadays, however, the development of new accelerator hardware is guided by LLM workloads rather than by scientific work.

In this meetup we will address the following topics:

  • Can we, in the scientific community, stay on board the high-performance AI accelerator train?
  • Which numerical methods can benefit from new developments in AI hardware?
  • What kind of hardware is available in the Netherlands, and how to get access?
  • Which developments are happening now in the Netherlands, software and hardware-wise?

Join us on the afternoon of the 7th of July for talks and discussions.

Location

SURF office in Utrecht, Moreelsepark 48, 3511 EP Utrecht.

Registration

Attendance is free and open to all.

Agenda

Time          | Speaker              | Subject
13:00 - 13:30 |                      | Pre-meetup networking, coffee
13:30 - 13:35 | Daan & Steven        | Introduction
13:35 - 14:10 | Xavier Álvarez Farré | Introduction, overview of GPU, TPU capability growth in the past few years
14:10 - 14:30 | Nick Brummans        | State of the art of AI accelerators: The SPIKE-1 NVIDIA DGX system at TU/e
14:30 - 15:15 | John Romein          | Tensor Core GPU processing implementations
15:15 - 15:40 |                      | Break
15:40 - 16:00 | Daan van Vugt        | Simulation methods (un)suitable for AI accelerators
16:00 - 16:45 | Christiaan Boerkamp  | TINA: Acceleration of Non-NN Signal Processing Algorithms Using NN Accelerators
16:45 - 17:30 |                      | Networking and drinks

Abstracts

Xavier Álvarez Farré - Overview of GPU and TPU capability growth in the past few years

Xavier will provide an overview of trends in the GPU landscape and beyond, and how these match specific computations. He'll explain how and where to get access, and showcase the work of the SURF Experimental Technologies platform.

Xavier Álvarez Farré is a Supercomputing Advisor at SURF and the EuroCC vice coordinator for HPC training in the Netherlands.

SURF Experimental Technologies Platform (ETP)

The ETP is an open and collaborative experimental hardware infrastructure, where a dedicated team of expert system administrators and advisors fosters the assessment of cutting-edge ICT technologies and methodologies. The ETP focuses on optimizing complex workflows, enhancing energy efficiency, and cultivating a vibrant community of researchers and engineers developing emerging techniques. It also stimulates knowledge sharing and dissemination through various means. The ETP comprises a range of cutting-edge hardware for compute, network, storage and infrastructure (i.e., Liqid composable infrastructure). In particular, it offers accelerators such as the AMD MI210 GPU, the NVIDIA GH200 Grace Hopper, the AMD Versal VCK5000 FPGA and the AMD Alveo U250 FPGA card.

Nick Brummans - State of the art of AI accelerators: The SPIKE-1 NVIDIA DGX system at TU/e

At TU/e we have a "state of the art" NVIDIA DGX system (4x B200) called SPIKE-1. It offers next-generation AI accelerators and high-throughput, low-latency computing tailored for large-scale AI workloads. When integrated with NVIDIA Run:ai software, the system's full potential is unlocked through dynamic resource orchestration, efficient workload management, flexible allocation (fractional, multi-tenant, or exclusive use) and automatic scaling of training and inference jobs. Together with the NVIDIA AI Enterprise building blocks, the platform supports the full spectrum from data preprocessing and experimental model training to enterprise-grade deployments.

Nick Brummans is a Machine Learning engineer at the TU/e Supercomputing Center.

Daan van Vugt - Simulation methods (un)suitable for AI accelerators

Artificial Intelligence (AI) accelerators, such as GPUs and TPUs, are optimized for parallel and matrix-heavy computations, making them highly effective for certain simulation methods. Suitable methods are data-parallel and compute-intensive, such as Monte Carlo simulations, cellular automata, lattice Boltzmann methods and finite difference methods. In contrast, methods involving irregular memory access patterns, recursive dependencies, or dynamic data structures, such as direct linear system solves and discrete event simulations, are often inefficient on this hardware.

Selecting or adapting simulation methods to align with the architectural strengths of AI accelerators is crucial for achieving high performance and scalability.
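As an illustration of the first category (a minimal sketch, not taken from the talk), the JAX snippet below writes a 2-D heat-equation finite-difference step as a pure, data-parallel array operation; XLA compiles it to a fused kernel that runs unchanged on CPUs, GPUs or TPUs.

```python
# Minimal sketch (assumed example): a 2-D heat-equation finite-difference step
# expressed as whole-array operations, the pattern AI accelerators handle well.
import jax
import jax.numpy as jnp

@jax.jit
def heat_step(u, alpha=0.1):
    # 5-point Laplacian stencil via array shifts (periodic boundaries):
    # every grid point is updated independently, so the work is fully data-parallel.
    lap = (jnp.roll(u, 1, 0) + jnp.roll(u, -1, 0) +
           jnp.roll(u, 1, 1) + jnp.roll(u, -1, 1) - 4.0 * u)
    return u + alpha * lap

u = jnp.zeros((1024, 1024)).at[512, 512].set(1.0)   # point heat source
for _ in range(100):
    u = heat_step(u)
print(float(u.sum()))   # total heat is conserved by the periodic stencil
```

The same structure, whole-array updates with no data-dependent control flow, is what makes Monte Carlo and lattice Boltzmann kernels a good fit, whereas a sparse direct solve or a discrete event queue cannot be expressed this way.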

Daan van Vugt is the CEO of Ignition Computing.

John Romein - Accelerating Signal Processing with Tensor Cores

GPU Tensor Cores have been designed to accelerate training and inference in AI applications, but they can be used for other purposes as well. In this presentation, I will show how we use tensor cores to speed up signal-processing tasks like correlation and beam forming, and how we use them for radio-astronomical and medical ultrasound imaging applications. I will discuss the challenges of programming them, and show performance and energy-efficiency results.
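To make the link between correlation and Tensor Cores concrete, here is a hedged JAX sketch (not the ASTRON implementation): the correlation matrix of antenna voltages is just a matrix product, and matrix products are exactly the primitive that Tensor Cores accelerate when the precision allows it.

```python
# Hedged sketch (assumed example): a radio-interferometer correlator reduces to
# an X^H X product over antenna voltages, i.e. a GEMM, the Tensor Core primitive.
import jax
import jax.numpy as jnp

@jax.jit
def correlate(voltages):
    # voltages: (n_samples, n_antennas) complex samples for one frequency channel.
    # The full visibility matrix is one (n_ant x n_samples)(n_samples x n_ant) matmul.
    return voltages.conj().T @ voltages / voltages.shape[0]

key = jax.random.PRNGKey(0)
n_samples, n_ant = 4096, 64
x = (jax.random.normal(key, (n_samples, n_ant)) +
     1j * jax.random.normal(jax.random.split(key)[0], (n_samples, n_ant)))
vis = correlate(x.astype(jnp.complex64))
print(vis.shape)   # (64, 64) visibility matrix
```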

John W. Romein is a senior researcher in High-Performance Computing at ASTRON (the Netherlands Institute for Radio Astronomy). He received his PhD from the Vrije Universiteit Amsterdam for work on distributed game-tree search. His research interests include accelerated computing and high-speed data transport.

Christiaan Boerkamp - TINA: Acceleration of Non-NN Signal Processing Algorithms Using NN Accelerators

Christiaan will give an overview of TINA, a novel framework aimed at redefining the landscape of hardware acceleration using high-level programming languages. In an age where digital technologies are advancing at an unprecedented pace, the need for portable and accessible computing performance has become paramount. Hardware accelerators, such as GPUs, FPGAs, and TPUs, have been developed to meet this soaring demand for computing performance, offering substantial improvements in processing speed and efficiency. However, the integration of these accelerators into existing software architectures remains a formidable challenge. TINA addresses this disjointed landscape by providing a cohesive, scalable and portable framework that not only simplifies the integration of various hardware accelerators into software applications but also optimises their performance and accessibility. The presentation will include an explanation of how TINA works, a brief demonstration, and a roadmap outlining future developments.
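As a rough illustration of the underlying idea (this is not TINA's actual API, just a sketch of the general technique), the JAX snippet below expresses a classical FIR low-pass filter as a 1-D neural-network convolution, so it can run on any accelerator stack that already supports NN convolution layers.

```python
# Illustrative sketch (assumed example, not TINA's API): a classical FIR filter
# recast as a 1-D NN convolution so it maps onto NN-accelerator primitives.
import jax
import jax.numpy as jnp

def fir_as_conv(signal, taps):
    # Shape the signal and taps the way a conv layer expects:
    # input (batch=1, channels=1, length), kernel (out_ch=1, in_ch=1, width).
    x = signal[None, None, :]
    w = taps[None, None, ::-1]           # flip taps: NN "conv" is cross-correlation
    y = jax.lax.conv_general_dilated(
        x, w,
        window_strides=(1,),
        padding="SAME",
        dimension_numbers=("NCH", "OIH", "NCH"))
    return y[0, 0]

signal = jnp.sin(jnp.linspace(0, 20 * jnp.pi, 1000)) + 0.3 * jax.random.normal(
    jax.random.PRNGKey(1), (1000,))
taps = jnp.ones(8) / 8.0                 # simple moving-average low-pass filter
smoothed = jax.jit(fir_as_conv)(signal, taps)
print(smoothed.shape)                    # (1000,)
```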

Christiaan Boerkamp is a co-founder of the start-up VLV Medical and one of the founders of the open-source framework TINA. He holds a bachelor's degree in Computer Science from NHL and a master's degree in Biomedical Engineering from TU Delft.

Contact

If you have any questions, please contact the hosts, Daan van Vugt and/or Steven van der Vlugt.


NL-RSE. The community of Research Software Engineers from Dutch universities, knowledge institutes, companies and other relevant organizations for sharing knowledge, organizing meetings and raising awareness for the scientific recognition of research software.