HPC Bytes: Scaling DL Models Across Compute Nodes

Content:

Scaling deep learning models across multiple compute nodes is key to accelerating training on large datasets. This presentation demonstrates how to adapt existing DL models for distributed training. We will cover core concepts such as data parallelism, discuss challenges such as communication overhead, and show how to implement multi-node training setups efficiently with Horovod.
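As a taste of the data-parallelism concept the talk covers, here is a minimal, hedged sketch (plain Python, no Horovod required): each simulated worker computes a gradient on its own data shard, and an averaging step stands in for the allreduce that Horovod performs across nodes. The function names and toy dataset are illustrative assumptions, not from the presentation.

```python
# Conceptual data parallelism: each worker computes the gradient of a
# simple loss on its own shard; averaging the per-worker gradients
# (what Horovod's allreduce does across nodes) yields the same update
# on every worker.

def gradient(w, shard):
    # Gradient of mean squared error 0.5*(w*x - y)^2 w.r.t. w, over one shard.
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def allreduce_mean(values):
    # Stand-in for an allreduce: average one scalar per worker.
    return sum(values) / len(values)

def data_parallel_step(w, shards, lr=0.1):
    grads = [gradient(w, s) for s in shards]  # computed in parallel per worker
    g = allreduce_mean(grads)                 # the communication step
    return w - lr * g                         # identical update on all workers

# Toy dataset y = 2x, split into two equal shards for two "workers".
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
shards = [data[:2], data[2:]]
w = 0.0
for _ in range(50):
    w = data_parallel_step(w, shards)
print(round(w, 3))  # converges toward 2.0
```

With equal-sized shards, the averaged per-worker gradient equals the full-batch gradient, which is why the distributed update matches single-node training; this equivalence (and what breaks it at scale, such as communication overhead) is the kind of detail the talk addresses.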

Access:

To access the event, click on the following link:

Big Blue Button Link: <https://webconf.hrz.uni-marburg.de/n/rooms/hsy-cmh-knb-ndw/join>

Access Code: kw7i98