
Seminar

We organize a machine learning seminar every second Tuesday at 3pm (GMT+2). We discuss papers on ML, often (but not always) with connections to Earth science, climate and weather, and materials science.

The seminar also allows members of the Hamburg machine learning community to connect and present their ongoing work. We meet in person at HZG, but we also welcome remote online participants and stream the meeting live on our YouTube channel.

To get updates about each meeting or suggest a topic, please join our mailing list.

Future Topics

13. TBA 25.08.20

12. “The Big Picture” 11.08.20

We discuss “Adversarial Super-resolution of Climatological Wind and Solar Data” [1], a recent study using Generative Adversarial Networks (GANs) with convolutional layers to increase the resolution of wind and irradiance fields output by climate models. The study trains a neural network on high-resolution data so that it learns to generate high-resolution fields from low-resolution input.

This week’s topic is related to several previous themes: we covered GANs in episode 5 (“Real Fake Clouds”), and addressed a related problem of filling in missing data using convolutional networks in episode 8 (“Uncharted History”).

We’ll discuss the approach taken in this paper, describe SRGAN [2], the super-resolution GAN it builds on, and debate the conceptual issues that arise when using ML to “invent” new pixel outputs for your model. We’ll also mention how GAN-based super-resolution can introduce bias into results [3], and what this could mean for climate and Earth science applications.
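To make the adversarial objective concrete, here is a minimal sketch of an SRGAN-style generator loss in PyTorch. The `generator` and `discriminator` networks and the loss weighting are illustrative assumptions, not the exact setup of [1] or [2]:

```python
import torch
import torch.nn.functional as F

# Schematic SRGAN-style generator objective; all names are placeholders.
def generator_loss(generator, discriminator, lr_field, hr_field, adv_weight=1e-3):
    sr_field = generator(lr_field)             # candidate high-resolution field
    content = F.mse_loss(sr_field, hr_field)   # stay close to the true field
    logits = discriminator(sr_field)
    # Adversarial term: push the generator toward outputs the
    # discriminator classifies as real.
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return content + adv_weight * adv
```

The adversarial term is what lets the generator “invent” sharp small-scale structure instead of the blurry average that a pure MSE loss would produce.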

Main paper: [1] Stengel et al., “Adversarial super-resolution of climatological wind and solar data,” PNAS, July 2020. https://www.pnas.org/content/117/29/16805

Technical Background: [2] Ledig et al., “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” arXiv 2016. https://arxiv.org/abs/1609.04802

Blog post on bias in GANs for SR: [3] https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias



Past Topics

11. “Teach Yourself Physics in 2 Million Easy Steps” 28.07.20

In this episode we discuss “Learning to Simulate Complex Physics with Graph Networks” [1], a recent study using deep learning to emulate physical dynamical systems.

We have already encountered several approaches to predicting physical systems in previous seminars (episodes 3, 4, 7 & 9). Typically, a machine learning model is trained on data generated with a numerical solver to predict a (partial) system state many time steps ahead, using the current (partial) system state as input. The network solves the task in a way that bears little, if any, resemblance to the numerical solver used to generate its training data.

Here, the authors follow another approach [2], where the machine learning model is trained explicitly to reproduce or “emulate” the behavior of the numerical solver over individual numerical integration steps. We will discuss how learning to carry out single-time-step updates offers certain advantages over learning to predict the future directly from the present: the single-step learning problem is well posed both conceptually and mathematically. A critical concern, however, is whether the solutions of this “learned simulation” stay realistic over many integration steps when the emulator network replaces the numerical solver.
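As a rough illustration (with a hypothetical `net` and solver-generated training pairs; this is not the authors’ code), single-step training and multi-step rollout look like this:

```python
import torch

def train_step(net, optimizer, state_t, state_next):
    """One supervised update: predict the solver's next state from the current one."""
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(net(state_t), state_next)
    loss.backward()
    optimizer.step()
    return loss.item()

def rollout(net, state0, n_steps):
    """Iterate the learned single-step map; errors can accumulate over many steps."""
    trajectory = [state0]
    with torch.no_grad():
        for _ in range(n_steps):
            trajectory.append(net(trajectory[-1]))
    return torch.stack(trajectory)
```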

Another twist in the upcoming session is that the dynamical systems studied are multi-body systems, i.e. they consist of a fixed number of discrete interacting objects. To learn state predictions on this kind of data, the authors build on their earlier Interaction Networks [3], a subclass of so-called Graph Neural Networks. GNNs [4] have become a very active and vast research topic over the past years, one we can only briefly touch upon in our session.
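To give a flavor of how a GNN propagates information between objects, here is a hypothetical single message-passing step in the spirit of Interaction Networks [3]; `edge_mlp` and `node_mlp` stand in for small learned MLPs:

```python
import torch

def message_passing_step(node_feats, edge_index, edge_mlp, node_mlp):
    # node_feats: (N, F) per-object features; edge_index: (2, E) sender/receiver ids
    senders, receivers = edge_index
    # One message per interacting pair, computed from both endpoints.
    messages = edge_mlp(torch.cat([node_feats[senders], node_feats[receivers]], dim=-1))
    # Sum the incoming messages at each receiving object.
    agg = torch.zeros(node_feats.shape[0], messages.shape[-1])
    agg.index_add_(0, receivers, messages)
    # Update every object from its own features plus what it "heard".
    return node_mlp(torch.cat([node_feats, agg], dim=-1))
```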

Main paper: [1] A. Sanchez-Gonzalez, J. Godwin, T. Pfaff, R. Ying, J. Leskovec, and P. W. Battaglia, “Learning to Simulate Complex Physics with Graph Networks,” arXiv:2002.09405 [physics, stat], Feb. 2020. http://arxiv.org/abs/2002.09405

Additional Background: [2] R. Grzeszczuk, D. Terzopoulos, and G. E. Hinton, “Fast Neural Network Emulation of Dynamical Systems for Computer Animation,” in Advances in Neural Information Processing Systems 11, M. J. Kearns, S. A. Solla, and D. A. Cohn, Eds. MIT Press, 1999, pp. 882–888. https://papers.nips.cc/paper/1562-fast-neural-network-emulation-of-dynamical-systems-for-computer-animation

[3] P. W. Battaglia, R. Pascanu, M. Lai, D. Rezende, and K. Kavukcuoglu, “Interaction Networks for Learning about Objects, Relations and Physics,” arXiv:1612.00222 [cs], Dec. 2016. http://arxiv.org/abs/1612.00222

Further reading: [4] J. Zhou et al., “Graph Neural Networks: A Review of Methods and Applications,” arXiv:1812.08434 [cs, stat], Jul. 2019. http://arxiv.org/abs/1812.08434



10. “Try to look like a little black cloud” 14.07.20

In light of recent meteorological events in Hamburg, the next ML@HZG seminar will focus on clouds.

We will begin with a well-written three-page review [1] that discusses how cloud-resolving models (CRMs) can play an important role in understanding our climate and its potential future changes, but impose immense computational demands.

We’ll then discuss how small-scale CRMs can be used as cloud parameterizations for large-scale climate models, focusing on the Superparameterized Community Atmosphere Model (SPCAM) [2]. This approach aims to capture the two-way interactions between cloud physics and coarser-scale meteorological variables without paying the cost of a global CRM simulation, by instead embedding a small CRM in each grid cell. Further work [3] showed how the embedded CRMs can be simplified without compromising accuracy.

Finally, we’ll discuss how machine learning can be used to imitate the effect of the miniature CRMs used in SPCAM, which in turn aim to imitate what a large-scale CRM would produce. Recent work [4] has shown that neural networks can be trained to reproduce the feedback between coarse-scale climate model variables and each grid cell’s CRM, at a considerable reduction in computational cost.
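In spirit (the sizes and architecture below are illustrative, not those of [4]), the emulator is just a regression network from the column state to the CRM’s response:

```python
import torch

n_in, n_out = 94, 65   # illustrative: stacked column profiles in, subgrid tendencies out
parameterization = torch.nn.Sequential(
    torch.nn.Linear(n_in, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, n_out),
)
# Trained by plain regression on (column state, CRM response) pairs harvested
# from SPCAM runs; at run time it replaces the costly embedded-CRM call.
```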

As we’ll discuss, the true test of these techniques is often their ability to match observed phenomena in large, long simulations!

Postscript: as discussed in the seminar, it’s not totally clear why the MJO moves east, but there are some interesting theories [5] (thanks to Eduardo Zorita for the reference).

[1] T. Schneider et al., “Climate goals and computing the future of clouds,” Nature Clim Change, vol. 7, no. 1, pp. 3–5, Jan. 2017, doi: 10.1038/nclimate3190.

[2] M. Khairoutdinov, D. Randall, and C. DeMott, “Simulations of the Atmospheric General Circulation Using a Cloud-Resolving Model as a Superparameterization of Physical Processes,” J. Atmos. Sci., vol. 62, no. 7, pp. 2136–2154, Jul. 2005, doi: 10.1175/JAS3453.1.

[3] M. S. Pritchard, C. S. Bretherton, and C. A. DeMott, “Restricting 32–128 km horizontal scales hardly affects the MJO in the Superparameterized Community Atmosphere Model v.3.0 but the number of cloud-resolving grid columns constrains vertical mixing,” Journal of Advances in Modeling Earth Systems, vol. 6, no. 3, pp. 723–739, 2014, doi: 10.1002/2014MS000340.

[4] S. Rasp, M. S. Pritchard, and P. Gentine, “Deep learning to represent subgrid processes in climate models,” PNAS, vol. 115, no. 39, pp. 9684–9689, Sep. 2018, doi: 10.1073/pnas.1810286115.

[5] B. Wang, F. Liu, and G. Chen, “A trio-interaction theory for Madden–Julian oscillation,” Geosci. Lett., vol. 3, no. 1, p. 34, Dec. 2016, doi: 10.1186/s40562-016-0066-z.



9. “The Best of All Possible Worlds” 30.06.20

We consider the critically important and monstrously difficult problem of tuning climate model parameters to match observations (reviewed in Hourdin et al., 2017).

We discuss why this process is so challenging, and survey several approaches to the problem.



8. “Uncharted History” 16.06.20

We discuss the paper “Artificial intelligence reconstructs missing climate information”, Kadow et al. 2020, Nat. Geosci. pdf code

We are very happy to have the first author of the paper with us to present the study!

The computer vision field of image inpainting uses several techniques to reconstruct broken images, paintings, etc. In recent years, increasingly diverse machine learning techniques have boosted the field. A major step was taken by Liu et al. 2018 (paper, video), who introduced partial convolutions in a CNN; a rough sketch of the idea follows below. The study presented here transfers this technology to climate research. The presentation will show the journey of adapting and applying the NVIDIA technique to one of the big obstacles in climate research: missing climate information of the past. To this end, a transfer learning approach is set up using climate model data. After evaluation on test suites, a reconstruction of HadCRUT4, one of the most important climate data sets, is shown and analyzed.
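The core idea of a partial convolution is easy to sketch (the function below is an illustrative reimplementation, not the authors’ code): the kernel only sees valid pixels, the output is renormalized by how many valid pixels it saw, and the mask shrinks as holes get filled in.

```python
import torch
import torch.nn.functional as F

def partial_conv2d(x, mask, weight, bias=None, stride=1, padding=1):
    # x: (N, C_in, H, W) with missing pixels zeroed; mask: (N, 1, H, W), 1 = valid
    out = F.conv2d(x * mask, weight, bias=None, stride=stride, padding=padding)
    window = torch.ones(1, 1, weight.shape[2], weight.shape[3], device=x.device)
    valid = F.conv2d(mask, window, stride=stride, padding=padding)  # valid pixels per window
    # Renormalize so the output depends only on observed values.
    out = out * (weight.shape[2] * weight.shape[3] / valid.clamp(min=1e-8))
    if bias is not None:
        out = out + bias.view(1, -1, 1, 1)
    new_mask = (valid > 0).float()  # a location becomes valid once any input pixel was
    return out * new_mask, new_mask
```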



7. “Compressed Pressure”, 02.06.20

The main paper for this session will be Latent Space Physics: Towards Learning the Temporal Evolution of Fluid Flow, Wiewel et al., 2019. Also see their blog post.

We will also briefly discuss a follow-up from Wiewel et al. 2020, and a related paper on generative fluid modelling from the same group, Kim et al. 2019. The latter is nicely summarized in this video.

For those interested in the underlying ML methods, this session will be about autoencoders and sequence-to-sequence models.
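As a rough, illustrative sketch of the combination (the authors use convolutional encoders and a more elaborate training scheme), an autoencoder compresses each field and a sequence model advances the latent code in time:

```python
import torch

class AutoencoderLSTM(torch.nn.Module):
    def __init__(self, field_dim=4096, latent_dim=64):   # sizes are illustrative
        super().__init__()
        self.encoder = torch.nn.Sequential(
            torch.nn.Linear(field_dim, 512), torch.nn.ReLU(),
            torch.nn.Linear(512, latent_dim))
        self.decoder = torch.nn.Sequential(
            torch.nn.Linear(latent_dim, 512), torch.nn.ReLU(),
            torch.nn.Linear(512, field_dim))
        self.dynamics = torch.nn.LSTM(latent_dim, latent_dim, batch_first=True)

    def forward(self, fields):            # fields: (batch, time, field_dim)
        z = self.encoder(fields)          # compress each frame
        z_next, _ = self.dynamics(z)      # advance the latent trajectory
        return self.decoder(z_next)       # decode back to physical space
```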



6. “Minimalist Chaos”, 19.05.20

We’ll discuss the Lorenz ’96 model (L96) and its myriad uses. In “Predictability - a problem partly solved”, Edward Lorenz introduced a simple mathematical model exhibiting many of Earth science’s core computational challenges.

Challenging features of L96 include chaotic dynamics, nonlinearity, a combination of dissipative and conservative aspects, and the coupling of vastly differing scales in space and time. Chaos means that small perturbations of the model state, due to numerical errors or observation noise, will over time lead to large deviations in the future model state.
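The model itself fits in a few lines; here is the standard single-scale variant with a fourth-order Runge-Kutta integrator:

```python
import numpy as np

# Lorenz '96: K variables on a ring, dx_k/dt = (x_{k+1} - x_{k-2}) x_{k-1} - x_k + F
def l96_tendency(x, forcing=8.0):
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt=0.01, forcing=8.0):
    k1 = l96_tendency(x, forcing)
    k2 = l96_tendency(x + 0.5 * dt * k1, forcing)
    k3 = l96_tendency(x + 0.5 * dt * k2, forcing)
    k4 = l96_tendency(x + dt * k3, forcing)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = 8.0 * np.ones(40)   # K = 40 variables; F = 8 gives the standard chaotic regime
x[0] += 0.01            # a tiny perturbation: nearby trajectories diverge rapidly
for _ in range(1000):
    x = rk4_step(x)
```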

L96 is a frequent test case for algorithms tackling many fundamental problems. We consider two of these: parameter tuning and the parameterization of sub-grid processes.

Finally, we’ll revisit the original paper and the issue of predictability, nearly 25 years later.



5. “Real Fake Clouds” 05.05.20

We discuss the paper “Modeling Cloud Reflectance Fields using Conditional Generative Adversarial Networks,” Schmidt et al. 2020, arXiv. pdf code

This paper uses generative adversarial networks, or GANs. In the GAN framework, a generator network learns to generate “fake” data points while a second discriminator network learns to tell real from fake data. Schmidt et al. use GANs to predict cloud reflectance fields from meteorological variables such as temperature and wind speed. Given these variables, the trained generator can produce multiple realistic output patterns instead of an ensemble average. That is, the network attempts to learn the conditional probability distribution of reflectance given the input variables.
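Schematically (this is not the authors’ exact architecture), the conditional-GAN losses look as follows, with the discriminator judging (condition, field) pairs:

```python
import torch

bce = torch.nn.functional.binary_cross_entropy_with_logits

def discriminator_loss(D, cond, real_field, fake_field):
    real_logits = D(cond, real_field)
    fake_logits = D(cond, fake_field.detach())   # don't backprop into the generator
    return (bce(real_logits, torch.ones_like(real_logits))
            + bce(fake_logits, torch.zeros_like(fake_logits)))

def generator_loss(D, cond, fake_field):
    logits = D(cond, fake_field)
    return bce(logits, torch.ones_like(logits))  # try to fool the discriminator
```

Sampling different noise vectors for the same condition then yields the multiple realistic reflectance fields mentioned above.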

We start with a very brief introduction to GANs. More background can be found in Diego Gomez Mosquera’s highly accessible blog post or Ian Goodfellow’s extensive tutorial.

Importantly, this paper wasn’t able to get good results just by applying the GAN framework out of the box; it had to use some of the latest specialized tricks as well, which we’ll briefly go through.

Additional links from the discussion: “Variational Dropout and the Local Reparameterization Trick” pdf; “Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning” pdf



4. “Far into the Future”, 21.04.20

Lennard Schmidt from UFZ presents his work. He applies machine learning to quality control of hydrological measurement data, and uses a sophisticated convLSTM architecture to predict hydrological dynamics in an Elbe catchment basin. Code for a convLSTM layer in tensorflow/keras can be found here.
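For orientation, a minimal tf.keras convLSTM stack for gridded time series might look like this (an illustrative sketch, not Lennard’s actual architecture):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                               return_sequences=True,
                               input_shape=(None, 64, 64, 1)),  # (time, H, W, channels)
    tf.keras.layers.ConvLSTM2D(32, kernel_size=3, padding="same"),  # keeps last step only
    tf.keras.layers.Conv2D(1, kernel_size=1),  # map hidden state to the predicted field
])
model.compile(optimizer="adam", loss="mse")
```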

Eduardo Zorita presents “Deep learning for multi-year ENSO forecasts,” Ham et al. 2019, Nature. link This paper uses machine learning algorithms to predict the El Niño/Southern Oscillation 1.5 years into the future, farther than previous methods have achieved. Notably, it trains on a combination of simulations and historical data.

Additional references on the predictability paradox in climate science: “Do seasonal‐to‐decadal climate predictions underestimate the predictability of the real world?” Eade et al. 2014, Geophys. Research Letters. link

“Skilful predictions of the winter North Atlantic Oscillation one year ahead.” Dunstone et al. 2016, Nature. link



3. “MetNet, Convolutional-Recurrent Nets, and the Self-Attention Principle” 07.04.20

Linda von Garderen presents her work.

We cover Google Research’s recent work on weather prediction: “MetNet: A Neural Weather Model for Precipitation Forecasting,” Sønderby et al., 2020, arXiv. paper, blog post

To understand the ML tools that went into this work, we briefly review some concepts from earlier works.

With these concepts in mind, we examine how MetNet combines them, and consider its results from the perspectives of both ML and weather prediction.
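The building block behind the title’s “self-attention principle” is scaled dot-product attention, sketched here in a minimal form (the projection matrices `w_q`, `w_k`, `w_v` would normally be learned):

```python
import torch

def self_attention(x, w_q, w_k, w_v):
    # x: (batch, positions, features); w_*: (features, d) projection matrices
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    weights = torch.softmax(scores, dim=-1)   # each position attends to all others
    return weights @ v
```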




2. “Don’t Fear the Sphere” 31.03.20

We cover “Spherical CNNs on Unstructured Grids,” Jiang et al. 2019, ICLR. We also survey other ML approaches to spherical data (more links in the description on YouTube), with 5-minute presentations by Julianna Carvalho, Tobias Finn and Lennart Marien.



1. “Hidden Fluid Mechanics” 24.03.20

We discuss the paper “Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations,” Raissi et al. 2020, Science, and the more technical study from the same group, “Physics Informed Neural Networks,” Raissi et al., 2019, J. Computational Physics. Tobias Weigel from DKRZ introduces the ML support team that forms part of the local Helmholtz AI unit.
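To give a feel for the method, here is a schematic physics-informed loss in the spirit of the PINN papers; the choice of PDE (Burgers’ equation, u_t + u·u_x − ν·u_xx = 0) and the `net` interface are illustrative assumptions:

```python
import torch

# Assumes `net` maps shape-(N,) tensors t, x to predictions u of shape (N,).
def pinn_loss(net, t_obs, x_obs, u_obs, t_col, x_col, nu=0.01):
    data_loss = ((net(t_obs, x_obs) - u_obs) ** 2).mean()   # fit the observations
    t = t_col.clone().requires_grad_(True)
    x = x_col.clone().requires_grad_(True)
    u = net(t, x)
    # Derivatives of the network output via automatic differentiation.
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    residual = u_t + u * u_x - nu * u_xx                    # penalize PDE violations
    return data_loss + (residual ** 2).mean()
```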