Monday, September 23
"Deep Learning in Scientific Research"
Prof. Dr. Kristian Kersting (TU Darmstadt, Machine Learning Lab)
"Deep Machines That Know When They Do Not Know"
Our minds make inferences that appear to go far beyond standard machine learning. Whereas people can learn richer representations and use them for a wider range of learning tasks, machine learning algorithms have been mainly employed in a stand-alone context, constructing a single function from a table of training examples. In this talk, I shall touch upon a view on machine learning, called probabilistic programming, that can help capture these human learning aspects by combining high-level programming languages and probabilistic machine learning — the high-level language helps reduce the cost of modelling, and probabilities help quantify when a machine does not know something. Since probabilistic inference remains intractable, existing approaches leverage deep learning for inference. Instead of “going down the full neural road,” I shall argue for using sum-product networks, a deep but tractable architecture for probability distributions. This can speed up inference in probabilistic programs, as I shall illustrate for unsupervised science understanding, and even pave the way towards automating density estimation, making machine learning accessible to a broader audience of non-experts.
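To give a flavour of why sum-product networks (SPNs) are tractable, here is a toy illustration (our sketch, not material from the talk): a tiny SPN over two binary variables in which joint, marginal, and conditional probabilities are all computed exactly in a single bottom-up pass. Marginalization is done simply by letting a leaf return 1 for a variable that is not in the query.

```python
# Toy sum-product network over two binary variables X1, X2.
# Leaves are Bernoulli distributions; a product node multiplies children
# over disjoint variables; a sum node is a weighted mixture of children.

def bernoulli(p):
    # Leaf: returns P(X = x); x = None means "marginalized out" (sums to 1).
    return lambda x: 1.0 if x is None else (p if x == 1 else 1.0 - p)

def product(*children):
    # children: (variable_name, leaf) pairs over disjoint variables
    def f(assignment):
        result = 1.0
        for var, leaf in children:
            result *= leaf(assignment.get(var))  # missing var -> None -> 1.0
        return result
    return f

def spn_sum(weights, children):
    # Weighted mixture; weights are assumed to sum to 1.
    return lambda a: sum(w * c(a) for w, c in zip(weights, children))

# A mixture of two product components (all numbers are made up):
comp1 = product(("X1", bernoulli(0.9)), ("X2", bernoulli(0.2)))
comp2 = product(("X1", bernoulli(0.1)), ("X2", bernoulli(0.7)))
root = spn_sum([0.4, 0.6], [comp1, comp2])

# Exact queries, each a single evaluation of the network:
p_joint = root({"X1": 1, "X2": 1})  # P(X1=1, X2=1)
p_marg = root({"X1": 1})            # P(X1=1): X2 summed out at the leaf
p_cond = p_joint / p_marg           # P(X2=1 | X1=1)
```

The same structural properties (decomposability and smoothness) are what let real SPN implementations answer such queries in time linear in the network size, in contrast to generic graphical-model inference.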
This talk is based on joint works with many people such as Carsten Binnig, Zoubin Ghahramani, Andreas Koch, Alejandro Molina, Sriraam Natarajan, Robert Peharz, Constantin Rothkopf, Thomas Schneider, Patrick Schramwoski, Xiaoting Shao, Karl Stelzner, Martin Trapp, Isabel Valera, Antonio Vergari, and Fabrizio Ventola.
PD Dr. Olena Linnyk (Frankfurt Institute for Advanced Studies, milch&zucker Gießen)
"What Can Machine Learning Do For My Project"
The deep learning revolution has changed the development of almost every industry and every scientific discipline in the last five years. My tutorial aims at inspiring you to find the added value that deep and machine learning can provide for your own work. After giving you a practical overview of the methodical and technological developments that suddenly made theories developed as early as the 1980s so useful, I will show the general principles of DL and ML application along with real-life examples. The same methods, based e.g. on deep convolutional networks and Bayesian analysis, can be applied to problems in theoretical physics, experimental detector design, and the social sciences. Online, real-time calibration is the next big expected improvement in the operation and design of large sensor systems in science, research, and industry alike. We are developing AI algorithms to automatically and effectively calibrate detector components for experimental groups in the field of high-energy physics. Previously, we have shown in theoretical simulations that deep neural networks can be trained to decode important underlying physics characteristics from the event-by-event distribution of the particles produced in heavy-ion collisions. On the other hand, we are using modern machine learning tools to develop AI-supported, competence-oriented matching in human resources (project KIOMA). In this project we aim to fight the shortage of skilled workers by providing companies with data-based recruiting analytics and empowering workers with market-driven self-education and career-planning strategies.
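As a minimal illustration of the kind of Bayesian analysis mentioned above (our toy sketch with invented numbers, not code from the tutorial), a discrete Bayes update can infer which of several candidate calibration settings best explains a detector channel's observed hit counts:

```python
# Discrete Bayesian update: infer a detector channel's calibration
# hypothesis from observed hit counts, assuming Poisson statistics.
# All hypothesis names and numbers are illustrative, not real data.
from math import exp, factorial

def poisson_pmf(k, lam):
    return lam ** k * exp(-lam) / factorial(k)

# Candidate hypotheses: expected hit rate under each calibration setting
hypotheses = {"low_gain": 4.0, "nominal": 8.0, "high_gain": 12.0}
prior = {h: 1.0 / 3.0 for h in hypotheses}  # uniform prior

observed_hits = [9, 7, 10, 8]  # simulated readings from one channel

# Posterior is proportional to prior times likelihood of all observations
posterior = {}
for h, lam in hypotheses.items():
    likelihood = 1.0
    for k in observed_hits:
        likelihood *= poisson_pmf(k, lam)
    posterior[h] = prior[h] * likelihood

z = sum(posterior.values())
posterior = {h: p / z for h, p in posterior.items()}  # normalize

best = max(posterior, key=posterior.get)
```

With these toy numbers the data strongly favour the "nominal" hypothesis; in a real calibration pipeline the same update would run online as new hits stream in.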
Tuesday, September 24
"High Performance Computing at TU Darmstadt"
Christian Griebel (TU Darmstadt, Lichtenberg High Performance Computer)
"New Computational Resources with Lichtenberg II and the Coming DL/ML Software Environment"
We will look ahead to the Lichtenberg II cluster and its future advancement, providing leading-edge and efficient computational resources as well as new software ecosystems for the challenges in deep learning / machine learning and for container-based HPC. We will touch on technical details and the general hardware and software roadmap of the new system.
The other part will cover the existing Lichtenberg I resources with a focus on DL/ML applications, and briefly introduce the process of applying for scientific computations on the Lichtenberg. We will give insights into the job scheduler's mode of operation in allotting a fair share of the resources to every project and user.
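The fair-share idea behind such scheduling can be sketched as follows (our simplified model with made-up project names; real schedulers such as Slurm use a decayed-usage variant of this formula): a project's priority falls off exponentially as its consumed usage exceeds its allocated share.

```python
# Simplified fair-share priority: 2^(-usage_fraction / allocated_share).
# Priority is 1.0 for an unused allocation, 0.5 at exactly the allocated
# share, and approaches 0 as a project overconsumes. Toy model only.

def fair_share_priority(allocated_share, recent_usage, total_usage):
    used_fraction = recent_usage / total_usage if total_usage > 0 else 0.0
    return 2.0 ** (-used_fraction / allocated_share)

# Two projects with equal shares but very different consumption:
projects = {
    "proj_a": {"share": 0.5, "usage": 100.0},  # heavy recent user
    "proj_b": {"share": 0.5, "usage": 10.0},   # light recent user
}
total = sum(p["usage"] for p in projects.values())

ranking = sorted(
    projects,
    key=lambda n: fair_share_priority(
        projects[n]["share"], projects[n]["usage"], total),
    reverse=True,
)
# The light user ranks first: its pending jobs are prioritized until
# consumption across projects evens out.
```

This is the intuition only; the production scheduler additionally weighs job age, size, and partition, and decays historical usage over time.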
Prof. Dr. Christian Bischof (TU Darmstadt, Scientific Computing)
"Software-Factory 4.0" (20min)
Mohammad Norouzi (TU Darmstadt, Parallel Programming)
"Automatic Construct Selection and Variable Classification in OpenMP" (45min)
Prof. Dr. Christian Bischof (TU Darmstadt, Scientific Computing), Dr. Christian Iwainsky (HKHLR)
"News in Performance Modeling" (25min)
Details to be announced.
TU Darmstadt, Hochschulstraße 1, Altes Hauptgebäude S1|03, Room 23
The talks are free of charge. No registration is necessary.