2022 Fall Lectures in Climate Data Science

Biweekly on Thursdays || Sept. 8, 2022 - Dec. 8, 2022

THURSDAY || SEPT. 8, 2022

DAVID ROLNICK
McGill University

Machine Learning in Climate Change Mitigation and Adaptation
Machine learning (ML) can be a powerful tool in helping society reduce greenhouse gas emissions and adapt to a changing climate. In this talk, we will explore opportunities and challenges in ML for climate action, from optimizing electrical grids to monitoring crop yield and biodiversity, with an emphasis on how to incorporate domain-specific knowledge into machine learning algorithms. We will also consider ways in which ML is used that contribute to climate change, and how to better align the use of ML overall with climate goals.

THURSDAY || SEPT. 22, 2022

LAURE ZANNA
New York University

Machine Learning for Ocean and Climate Modeling: Advances, Challenges, and Outlook
Climate simulations, which solve approximations of the governing laws of fluid motion on a grid, remain one of the best tools to understand and predict global and regional climate change. Uncertainties in climate predictions originate partly from the poor or missing representation of processes, such as ocean turbulence and clouds, that are not resolved in global climate models but impact the large-scale temperature, rainfall, sea level, etc. The representation of these unresolved processes has been a bottleneck in improving climate simulations and projections. The explosion of climate data and the power of machine learning (ML) algorithms are suddenly offering new opportunities: can we deepen our understanding of these unresolved processes and simultaneously improve their representation in climate models to reduce uncertainty in climate projections? In this talk, I will discuss the advantages and challenges of using machine learning for climate projections. For illustration, I will focus on our recent work, in which we leverage machine learning tools to learn representations of unresolved ocean processes and improve climate simulations. Some of our work suggests that machine learning could open the door to discovering new physics from data and enhance climate predictions. Yet, many questions remain unanswered, making the next decade exciting and challenging for ML + climate modeling.

THURSDAY || OCT. 6, 2022

VIPIN KUMAR
University of Minnesota

Inverse Modeling via Knowledge-Guided Self-Supervised Learning: An Application in Hydrology
Machine learning is beginning to provide state-of-the-art performance in a range of environmental applications, such as streamflow prediction in a hydrologic basin. However, building accurate broad-scale models for streamflow remains challenging in practice due to the variability in the dominant hydrologic processes, which are best captured by sets of process-related basin characteristics. Existing basin characteristics suffer from noise and uncertainty, among many other things, which adversely impact model performance. To tackle these challenges, in this talk we present a novel Knowledge-guided Self-Supervised Learning (KGSSL) inverse modeling framework to extract system characteristics from driver (input) and response (output) data. This first-of-its-kind framework achieves robust performance even when characteristics are corrupted or missing. We evaluate the KGSSL framework in the context of streamflow modeling using CAMELS (Catchment Attributes and Meteorology for Large-sample Studies), a widely used hydrology benchmark dataset. Specifically, KGSSL outperforms the baseline by 16% in predicting missing characteristics. Furthermore, in the context of forward modeling, KGSSL-inferred characteristics provide a 35% improvement in performance over a standard baseline when the static characteristics are unknown.
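To make the inverse-modeling setup concrete, here is a minimal sketch in PyTorch, assuming invented dimensions (five forcing variables, 27 static attributes) and a plain supervised loss rather than the full KGSSL objective; it illustrates the idea of extracting static characteristics from driver/response series and is not the authors' implementation.

```python
# Minimal sketch (not the authors' KGSSL code): infer static basin characteristics
# from paired driver (meteorological forcing) and response (streamflow) time series.
import torch
import torch.nn as nn

class CharacteristicEncoder(nn.Module):
    """LSTM encoder mapping a (forcing, streamflow) series to static basin attributes."""
    def __init__(self, n_forcing=5, n_characteristics=27, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(input_size=n_forcing + 1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_characteristics)

    def forward(self, forcing, streamflow):
        # forcing: (batch, time, n_forcing); streamflow: (batch, time, 1)
        x = torch.cat([forcing, streamflow], dim=-1)
        _, (h, _) = self.rnn(x)
        return self.head(h[-1])            # predicted static characteristics

# Illustrative training step with random stand-ins for CAMELS forcings and attributes.
model = CharacteristicEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
forcing = torch.randn(8, 365, 5)           # one year of daily forcings for 8 basins
streamflow = torch.randn(8, 365, 1)
attributes = torch.randn(8, 27)            # stand-in for (possibly noisy) static attributes
loss = nn.functional.mse_loss(model(forcing, streamflow), attributes)
loss.backward()
optimizer.step()
```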

THURSDAY || OCT. 20, 2022

ELIZABETH BARNES
Colorado State University

Explainable AI for Climate Science: Detection, Prediction and Discovery
Earth’s climate is chaotic and noisy. Finding usable signals amidst all of the noise can be challenging: be it predicting if it will rain, knowing which direction a hurricane will go, understanding the implications of melting Arctic ice, or detecting the impacts of human-induced climate warming. Here, I will demonstrate how explainable artificial intelligence (XAI) techniques can sift through vast amounts of climate data and push the bounds of scientific discovery. Examples include extracting robust indicator patterns of climate change and identifying Earth system states that lead to more predictable behavior weeks to years in advance. But machine learning models are only as capable as the scientists designing them. I will further discuss how climate science requires the crafting of domain-specific XAI methods, both to gauge the trustworthiness of the XAI’s predictions and quantify uncertainty, and to uncover predictable signals we didn’t know were there. Explainable AI can open doors to scientific understanding, supporting scientists as we ask new questions about the coupled human-Earth climate system.
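As a concrete (if simplistic) illustration of what an XAI attribution looks like, the sketch below computes a plain gradient-saliency map for a toy network acting on a gridded climate field; the network, grid size, and method are invented for illustration and are not the domain-specific XAI techniques discussed in the talk.

```python
# Minimal sketch of one common XAI technique (gradient saliency), for illustration only.
import torch
import torch.nn as nn

# Hypothetical network: maps a gridded climate field (e.g., an anomaly map) to a scalar prediction.
net = nn.Sequential(nn.Flatten(), nn.Linear(64 * 128, 128), nn.ReLU(), nn.Linear(128, 1))

field = torch.randn(1, 64, 128, requires_grad=True)   # stand-in lat x lon anomaly field
prediction = net(field)
prediction.backward()

# The gradient magnitude at each grid point indicates which regions most influenced
# the prediction -- a simple "relevance" map that a scientist can inspect physically.
saliency = field.grad.abs().squeeze()
print(saliency.shape)   # (64, 128)
```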

THURSDAY || NOV. 3, 2022

** AS OF 11/1/2022: This lecture is postponed to a later date. **

MICHAEL PRITCHARD
University of California, Irvine

Adventures in Hybrid Physics: Machine Learning for Multi-scale Climate Modeling, AI-assisted Climate Model Inter-comparison, and … NVIDIA
Low-cloud-forming turbulence is a key source of climate model prediction uncertainty that, despite seeming unapproachable to simulate on planetary scales, could soon come into computational range with hybrid machine learning methods. I will discuss a chain of recent work driving in this direction that has tried to outsource explicit computations within “multi-scale” climate models to simple neural networks. The focus will be on the unsolved challenge of controlling stubborn prognostic error growth in such hybrid AI climate models, and especially the emerging potential of physical renormalizations to achieve “climate invariance.” I will also include some emerging results that attempt to quantify the importance of such design decisions amidst the noise of hyperparameter selection. Then, after a brief interlude on some fun ways to use more complicated forms of AI (VAEs) to help with analyzing high-resolution climate models, I will turn to an escalating adventure with industry, from my new perspective as a director of NVIDIA’s new climate simulation research group. Here, I will introduce the twin computational and societal missions of the company’s “Earth-2” climate initiative and its current four-prong research strategy. Among other things, this includes how a sophisticated data-driven global weather prediction model inspired by vision transformers, “FourCastNet,” is to be used to solve issues of latency and compression in serving high-resolution climate predictions to society, following a vision by Bjorn Stevens.
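To give a flavor of the hybrid approach, here is a toy sketch in which a small neural network stands in for the expensive embedded subgrid calculation of a multi-scale model; the column size, variables, and time step are made up, and the “climate-invariant” rescalings discussed in the talk are only noted in a comment. It is a sketch of the idea, not the speaker's model.

```python
# Toy sketch of the hybrid-modeling idea: a small MLP emulates the subgrid tendencies
# that a multi-scale climate model would otherwise compute with an embedded simulation.
import torch
import torch.nn as nn

n_levels = 30   # hypothetical number of vertical levels in a grid column

emulator = nn.Sequential(
    nn.Linear(2 * n_levels, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 2 * n_levels),     # heating and moistening tendencies per level
)

def hybrid_step(temperature, humidity, dt=1800.0):
    """Advance one column: resolved dynamics (omitted here) plus learned subgrid tendencies."""
    state = torch.cat([temperature, humidity], dim=-1)
    tendencies = emulator(state)
    dT, dq = tendencies.split(n_levels, dim=-1)
    # "Climate-invariant" designs rescale inputs/outputs physically so the network
    # generalizes across climates; this toy version simply applies raw tendencies.
    return temperature + dt * dT, humidity + dt * dq

T, q = torch.randn(1, n_levels), torch.randn(1, n_levels)
T_next, q_next = hybrid_step(T, q)
```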

THURSDAY || NOV. 17, 2022

PEDRAM HASSANZADEH
Rice University

Learning Data-driven Subgrid-Scale Models for Geophysical Turbulence
The atmospheric and oceanic turbulent circulations involve a variety of nonlinearly interacting physical processes spanning a broad range of spatial and temporal scales. To make simulations of these turbulent flows computationally tractable, processes with scales smaller than the typical grid size of general circulation models (GCMs) have to be parameterized. Recently, there has been substantial interest (and progress) in using deep learning techniques to develop data-driven subgrid-scale (SGS) parameterizations for a number of key processes in the atmosphere, ocean, and other components of the climate system. However, for these data-driven SGS parameterizations to be useful and reliable in practice, a number of major challenges have to be addressed. These include: 1) instabilities arising from the coupling of data-driven SGS parameterizations to coarse-resolution solvers, 2) learning in the small-data regime, 3) interpretability, and 4) extrapolation to different parameters and forcings. Using several setups of 2D turbulence, as well as two-layer quasi-geostrophic turbulence, and Rayleigh-Benard convection as test cases, we introduce methods to address (1)-(4). These methods are based on combining turbulence physics and recent advances in theory and applications of deep learning. For example, we will use backscattering analysis to shed light on the source of instabilities and incorporate physical constraints to enable learning in the small-data regime. We will further introduce a novel framework based on spectral analysis of the neural network to interpret the learned physics and will show how transfer learning enables extrapolation to flows with very different physical characteristics. Time permitting, we will briefly mention some of the advances in supervised and semi-supervised learning of the SGS models, as well as the use of equation-discovery techniques. In the end, we will discuss scaling up these methods to more complex systems and real-world applications, e.g., for SGS modeling of atmospheric gravity waves. This presentation covers several collaborative projects involving Yifei Guan (Rice U), Ashesh Chattopadhyay (Rice U), Adam Subel (Rice U/NYU), Laure Zanna (NYU), and Andrew Ross (NYU).
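For concreteness, the sketch below shows the general shape of a data-driven SGS closure for 2D turbulence: a small CNN maps filtered fields to a predicted SGS forcing, and the Fourier spectrum of a learned kernel is computed in the spirit of the spectral interpretation mentioned above. The architecture, grid size, and channels are placeholders, not the authors' models.

```python
# Illustrative sketch (not the authors' code) of a CNN-based SGS closure for 2D turbulence.
import numpy as np
import torch
import torch.nn as nn

sgs_model = nn.Sequential(
    nn.Conv2d(2, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=5, padding=2),   # predicted SGS forcing
)

# Stand-in filtered fields on a 64x64 grid: channel 0 = vorticity, channel 1 = streamfunction.
filtered_state = torch.randn(4, 2, 64, 64)
sgs_forcing = sgs_model(filtered_state)            # shape (4, 1, 64, 64)

# Interpret one learned first-layer kernel via its 2D Fourier spectrum,
# a simple stand-in for the spectral analysis of the network described in the abstract.
kernel = sgs_model[0].weight[0, 0].detach().numpy()
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(kernel)))
print(sgs_forcing.shape, spectrum.shape)
```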

THURSDAY || DEC. 8, 2022

JAKOB RUNGE
German Aerospace Center (DLR) / TU Berlin

Causal Inference, Causal Discovery, and Machine Learning
In the past decades, machine learning has had a rapidly growing impact on many fields of the natural, life, and social sciences, as well as engineering. Machine learning excels at classification and regression tasks on complex, heterogeneous datasets and can answer questions like, “What statistical associations or correlations can we see in the data?”, “What objects are in this picture?”, or “What is the most likely next data point?” But many questions in science, engineering, and politics are about “What are the causal relations underlying the data?” or “What if a certain variable changes or is changed?” or “What would have happened if some variable had another value?” Data-driven machine learning alone fails to answer such questions. Causal inference provides the theory and methods to learn and utilize qualitative knowledge about causal relations. Together with machine learning, it enables causal reasoning given complex data. Furthermore, causal methods can be used to intercompare and validate physical simulation models. In this talk, I will present an overview of this exciting and widely applicable framework and illustrate it with some examples from Earth sciences and beyond.
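As a toy illustration of the kind of question causal discovery asks of time series (and that correlation alone cannot answer), the snippet below tests whether one variable still explains another at a lag once the target’s own past is conditioned on; it is a bare-bones partial-correlation check, not the speaker's methods or software.

```python
# Toy illustration of a lagged conditional-independence test, the building block of
# many time-series causal discovery methods. Synthetic data: x drives y at lag 1.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + 0.4 * x[t - 1] + 0.1 * rng.standard_normal()

def partial_corr(a, b, z):
    """Correlation of a and b after linearly regressing out the conditioning variable z."""
    res_a = a - np.polyval(np.polyfit(z, a, 1), z)
    res_b = b - np.polyval(np.polyfit(z, b, 1), z)
    return np.corrcoef(res_a, res_b)[0, 1]

# Conditioning on y's own past: a large residual correlation suggests a lagged link x -> y.
print(partial_corr(x[:-1], y[1:], y[:-1]))   # clearly nonzero
print(partial_corr(y[:-1], x[1:], x[:-1]))   # near zero: no link y -> x
```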
