
GESTAR II at Morgan State University


Abstracts

Javier Amezcua - The use of observational tendencies for data assimilation in non-Markovian systems

Title: The use of observational tendencies for data assimilation in non-Markovian systems

Authors: Javier Amezcua (University of Reading, UK; National Centre for Earth Observation, UK)
Evan Elliot (University of Reading, UK)
Peter Jan van Leeuwen (University of Reading, UK; National Centre for Earth Observation, UK)

Multi-scale systems can represent a challenge in terms of modeling and data assimilation. The evolution of resolved scales is represented by explicitly integrating differential equations, whereas the collective effect of the unresolved scales is parameterized in the form of (additive) model error. If the dynamics of the resolved and unresolved variables are sufficiently 'separated' (e.g. there are sufficient gaps in the energy spectrum), the model error can be considered Markovian (i.e. no dependence on the past). When there is sufficient interaction between scales, the error often contains a bias and a memory term (e.g. the Mori-Zwanzig formalism), rendering the scenario non-Markovian. In this work we explore the use not only of observations, but also of observational tendencies, to tackle the non-Markovian nature of such situations.
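For reference, a schematic form of the resolved-variable evolution in the Mori-Zwanzig setting mentioned above, with a bias and a memory (non-Markovian) term, is the following (an illustrative generic form, not necessarily the authors' exact formulation):

```latex
\frac{d\mathbf{x}}{dt} \;=\; \mathbf{M}\big(\mathbf{x}(t)\big) \;+\; \mathbf{b}
\;+\; \int_{0}^{t}\mathbf{K}(t-s)\,\mathbf{x}(s)\,ds \;+\; \boldsymbol{\eta}(t),
```

where M is the resolved dynamics, b a bias, K a memory kernel acting on the past trajectory, and η a noise term; the Markovian case corresponds to K ≡ 0.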

Brian Ancell - Chaos seeding within perturbation experiments

Title: Chaos Seeding within Perturbation Experiments

 

Authors: Brian Ancell (Texas Tech University)

Allison Bogusz (Lynker Technologies)

Matthew Lauridsen (Fleet Numerical Meteorology and Oceanography Center)

Christian Nauert (Federal Aviation Administration)

 

Perturbation experiments are used widely within numerical weather prediction model frameworks to study how initial condition or other differences can affect the atmosphere over a range of spatial and temporal scales. However, it has been discovered that perturbations within the Weather Research and Forecasting (WRF) model can create numerical noise that propagates three-dimensionally through the model's spatial discretization schemes at speeds substantially faster than any realistic physical mode. The resulting noise is very small and likely does not affect the atmospheric state within model simulations in areas where dry dynamics dominate. However, in areas of moist convection or precipitation, the noise can grow rapidly through nonlinear chaotic processes to significantly alter the state, potentially growing upscale. This noise growth can thus cause severe misinterpretation of the realistic effects of the original perturbation. This work details the propagation and growth of numerical noise in the WRF model, and compares it to a number of perturbation experiments for which realistic perturbation growth is expected. Two techniques designed to mitigate these effects, EOF analysis and sensitivity analysis, are also presented as methods that may effectively eliminate the effects of rapidly propagating numerical noise, leading to substantially more valuable interpretations.

Nancy Baker - Revisiting assumptions: a critical re-examination of ocean surface wind assimilation in the U.S. Navy’s Global and Mesoscale Data Assimilation Systems

Title: Revisiting assumptions: a critical re-examination of ocean surface wind assimilation in the U.S. Navy’s Global and Mesoscale Data Assimilation Systems.

 

Authors: Nancy L. Baker (Marine Meteorology Division, NRL, Monterey, CA)

Liang Xu (Marine Meteorology Division, NRL, Monterey, CA)

Justin Tsu (UCAR Visiting Scientist, NRL, Monterey, CA)

 

The US Navy has been assimilating ocean surface wind speed observations operationally in the Navy's global forecast system, with beneficial impact, since 1990 (Phoebus and Goerss, 1991). However, long-term FSOI statistics indicate a progressive decrease in the beneficial impact of the DMSP SSMIS wind speed observations. We assumed that this was primarily due to Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave Imager/Sounder (SSMIS) imager degradation with age and increasing calibration issues. As part of a project to examine the potential impact of NASA CYGNSS wind speed observations, we re-examined our current methods for assimilating ocean surface wind speed and wind vector retrievals, and found that wind speed assimilation as currently implemented is clearly sub-optimal. Using a series of experiments, ranging from simplified examples to full cycling assimilation tests, we illustrate why this is the case and discuss alternative approaches. This work has broader implications for the assimilation of other wind observations with differing wind speed and direction information content, such as wind lidar (Aeolus), the Multi-angle Imaging SpectroRadiometer (MISR), and other non-polarimetric microwave radiometers. Finally, we summarize our current research focused on the assimilation of CYGNSS mean square slope (MSS) rather than wind speed.

Zak Bell - Accounting for error due to unresolved scales in data assimilation

Title: Accounting For Error Due To Unresolved Scales In Data Assimilation

 

Authors: Zackary Bell (University Of Reading)

Sarah L Dance (University Of Reading)

Joanne A Waller (University Of Reading)

 

Assimilating expensive observations from scientific observing networks has led to large gains in forecast skill, but datasets of inexpensive urban observations, e.g., citizen-science automatic weather stations, smartphone observations, and vehicle-based temperature observations, are currently not fully utilized. These types of observations are subject to representation error due (in part) to the discrepancy in scales between the observations and the numerical forecast model. Measurements from urban sources will be much more affected by their local environment than rural observations placed far from any building. For example, measurements from a sheltered street will have different statistics from readings taken on top of a skyscraper. In order to use such datasets, this mismatch in scale must be compensated for by the data assimilation algorithm.

 

The standard approach to dealing with scale-mismatch error is to include it as part of the observation error covariance matrix, but alternative approaches exist. One such example is a version of the Schmidt-Kalman filter, an adaptation of the Kalman filter able to account for the influence of processes unresolved by the model. The Schmidt-Kalman filter achieves this by estimating the effect of the unresolved parameters without estimating the unresolved state itself. Assimilation of urban observations via the Schmidt-Kalman filter leaves the dimensionality of the resolved state space unchanged while accounting for the representativity error.
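As a reference for the approach described above, here is a minimal numpy sketch of a standard Schmidt-Kalman (consider) filter analysis step, in which the unresolved part of the state influences the gain and covariances but is never itself updated. The function and variable names are illustrative, and linear observation operators Hx and Hb are assumed; this is not the authors' implementation.

```python
import numpy as np

def schmidt_kf_update(x, b, Pxx, Pxb, Pbb, y, Hx, Hb, R):
    """One Schmidt-Kalman analysis step (sketch).

    x, Pxx : resolved state and its error covariance (updated)
    b, Pbb : unresolved state and its covariance (deliberately NOT updated)
    Pxb    : cross-covariance between resolved and unresolved parts
    y      : observations; Hx, Hb map each part into observation space
    R      : observation error covariance
    """
    # Innovation and its covariance, including the unresolved-scale contribution
    d = y - Hx @ x - Hb @ b
    S = (Hx @ Pxx @ Hx.T + Hx @ Pxb @ Hb.T
         + Hb @ Pxb.T @ Hx.T + Hb @ Pbb @ Hb.T + R)
    # Gain for the resolved part only; the gain for b is set to zero by design
    Kx = (Pxx @ Hx.T + Pxb @ Hb.T) @ np.linalg.inv(S)
    x_a = x + Kx @ d
    Pxx_a = Pxx - Kx @ (Hx @ Pxx + Hb @ Pxb.T)
    Pxb_a = Pxb - Kx @ (Hx @ Pxb + Hb @ Pbb)
    return x_a, Pxx_a, Pxb_a   # b and Pbb are left unchanged by the caller
```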

 

These methods of data assimilation, which explicitly deal with unresolved spatial and/or temporal scales, will be identified within the existing literature and written in a unified notation. We will present a comparison of these methods.

Jeremy Berman - The impact of warm conveyor belt forecast errors on variability in the downstream waveguide

Title: The Impact of Warm Conveyor Belt Forecast Errors on Variability in the Downstream Waveguide

 

Authors:  Jeremy Berman (University at Albany, SUNY)

Ryan Torn (University at Albany, SUNY)

 

 

Atmospheric waveguides denote the location of the jet stream and constrain the motion of Rossby waves. Perturbations of the waveguide, such as those associated with the warm conveyor belt (WCB) of midlatitude cyclones, can lead to the downstream radiation of Rossby waves, which can often spawn high-impact weather events. Previous studies have hypothesized that forecast errors associated with diabatic heating within WCBs may lead to variability in the downstream waveguide; however, it is unclear to what extent this is true.

 

The above hypothesis is evaluated by applying the ensemble-based sensitivity technique to European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble forecasts of a selected North Atlantic cyclone characterized by a significant waveguide perturbation. The role of waveguide perturbation uncertainty in downstream forecasts will be assessed by comparing the sensitivity of downstream forecasts to the divergent outflow with their sensitivity to other features, such as the position of upstream troughs or details of the waveguide itself. Finally, the role of thermodynamic uncertainty in waveguide perturbation structure will also be evaluated by computing the sensitivity of the waveguide perturbation to lower-tropospheric relative humidity and temperature within the WCB. Overall, the results suggest highly sensitive regions for additional observational sampling, which through data assimilation can help reduce forecast uncertainty.
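The ensemble-based sensitivity referred to above is commonly computed as a univariate regression of the forecast metric on an earlier state variable (a schematic of the standard form; the authors' exact implementation may differ):

```latex
\frac{\partial J}{\partial x_i} \;\approx\; \frac{\operatorname{cov}(J,\,x_i)}{\operatorname{var}(x_i)},
```

where J is the forecast metric (e.g., a measure of the downstream waveguide), x_i is an element of the earlier model state (e.g., the divergent outflow or WCB humidity), and the covariance and variance are estimated from the ensemble.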

 

Loïk Berre - Simulation and diagnosis of observation, model and background error contributions in data assimilation cycling

Title: Simulation and diagnosis of observation, model and background error contributions in data assimilation cycling

 

Authors: Loïk Berre (Météo France/CNRS, CNRM)

Benjamin Ménétrier (Météo-France/CNRS, CNRM)

 

Data assimilation is usually cycled in time, through a temporal succession of analysis and forecast steps. This implies that analysis and forecast errors result from complex combined effects of observation, model and background errors in the data assimilation cycling. In this context, a linearized formal expansion of forecast errors is proposed here, in order to derive estimates of the respective accumulated contributions of observation, model and background errors of different ages in the cycling.

 

Each error contribution can be simulated by adding corresponding perturbations to the unperturbed system, followed by propagation in ensemble cycled experiments. For instance, observation errors can be simulated by using random draws of specified observation error covariances; background perturbations at a given cycling step can be constructed either from previous steps of the ensemble cycling, or from random draws of specified background error covariances. Specific variance diagnostics can then be used to evaluate the respective contributions of these different perturbations to ensemble spread. Experimental results are displayed using the Météo-France Ensemble Data Assimilation system.

 

It is shown for instance that the contribution of old background errors tends to be attenuated during the cycling, due to damping effects of the successive analysis steps. Theoretical and experimental results are also presented to illustrate the fact that the variance of recent observation error contributions tends to accumulate and converge as a power series. Moreover, the near stability of global forecast error variances is shown to be closely related to an equilibrium, during the cycling, between damping of old error contributions and accumulation of recent error contributions.
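A scalar caricature of the accumulation-and-damping balance described above, assuming each cycle damps previously injected variance by a factor 0 < α < 1 and injects a fresh observation-error-related variance σo² (purely illustrative; the paper's linearized expansion is multivariate):

```latex
V_n \;=\; \sigma_o^2 \sum_{k=0}^{n-1}\alpha^{k}
\;=\; \sigma_o^2\,\frac{1-\alpha^{n}}{1-\alpha}
\;\xrightarrow[\;n\to\infty\;]{}\; \frac{\sigma_o^2}{1-\alpha},
```

so old contributions are progressively attenuated while recent ones accumulate as a converging power series.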

 

It is also discussed and illustrated how model error contributions may be diagnosed and estimated by comparing such ensemble-based diagnostics with innovation-based estimates of error covariances. Experimental results are shown, in order to compare the respective amplitudes of observation and model error contributions in the context of global Numerical Weather Prediction.

 

Luca Cantarello - Investigating satellite radiance data assimilation at different scales in an idealised convective modelling framework

Title: Investigating satellite radiance data assimilation at different scales in an idealised convective modelling framework

Authors: L. Cantarello (School of Mathematics, University of Leeds)

O. Bokhove (School of Mathematics, University of Leeds)
S. Tobias (School of Mathematics, University of Leeds)
G. Inverarity (Met Office, Exeter, United Kingdom)

S. Migliorini (Met Office, Exeter, United Kingdom)

 

Satellite data are one of the main types of observation used in operational Data Assimilation (DA) schemes for Numerical Weather Prediction (NWP) models. Their efficient use to increase forecast performance is still a crucial subject of research, with clouds being a key area of interest.

DA algorithms for operational NWP models are very complex and expensive from a computational point of view. An alternative is to use simplified models. To this end, in a recent study carried out at the University of Leeds in collaboration with the Met Office, T. Kent developed and tested an idealised model based on a modified shallow water system of equations. The model simulates convection and precipitation and is able to reproduce a variety of atmospheric conditions across several spatial scales. Crucially, the model was shown to be a suitable tool for DA studies, inasmuch as it exhibits features and behaviours comparable to those of an operational DA scheme [1,2].

We are now advancing the above-mentioned study, aiming to further investigate and extend the algorithm in order to test new configurations and support the research in satellite DA at the Met Office. Since our focus is on satellite data assimilation, we shall implement a brightness temperature observation operator, so that the scheme can assimilate simulated satellite observations by means of a moving observation window that cycles periodically in time across the periodic domain. The addition of the new observation operator will be essential to reduce the gap between our idealised model and real-world operational schemes. To this end, we plan to apply a simple blackbody radiation model, using a Rayleigh-Jeans law to simulate the brightness temperature. Thereafter, we will focus on the impact of assimilating satellite radiance observations at different spatial and temporal scales, which are known to be affected by different error growth rates.
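For reference, the Rayleigh-Jeans relations underlying such a brightness-temperature operator are (standard form; how the idealised model's operator applies them to the modified shallow-water variables may differ):

```latex
B_\nu(T) \;\approx\; \frac{2\nu^{2}k_B T}{c^{2}}
\qquad\Longrightarrow\qquad
T_b \;=\; \frac{c^{2}}{2 k_B \nu^{2}}\, I_\nu ,
```

so that, in this limit, the simulated brightness temperature T_b is linear in the radiance I_ν at frequency ν.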

 

[1] Kent, T., Bokhove, O., & Tobias, S. (2017). Dynamics of an idealized fluid model for investigating convective-scale data assimilation. Tellus A: Dynamic Meteorology and Oceanography, 69(1), 1369332.

[2] Kent, Thomas (2016). An idealised fluid model of Numerical Weather Prediction: dynamics and data assimilation. PhD thesis, University of Leeds.

David Carvalho - NASA’s GMAO atmospheric motion vectors simulator: description and application to the MISTiC Winds concept

 

Title: NASA’s GMAO atmospheric motion vectors simulator: description and application to the MISTiC Winds concept

Authors: Carvalho, D. (GMAO/NASA; GESTAR/USRA)

McCarty, W. R. (GMAO/NASA)

Errico, R. M. (GMAO/NASA; GESTAR/Morgan State University)

Privé, N. C. (GMAO/NASA; GESTAR/Morgan State University)

 

An atmospheric motion vector (AMV) simulator was developed by NASA's GMAO to simulate observations from future satellite constellation concepts. The synthetic AMVs can then be used in OSSEs to estimate and quantify the potential added value of new observations to the present Earth observing system and, ultimately, the expected impact on current weather forecasting skill. The GMAO AMV simulator is a tunable and flexible computer code able to simulate AMVs expected to be derived from different instruments and satellite orbit configurations. As a case study and example of the usefulness of this tool, the GMAO AMV simulator was used to simulate AMVs envisioned to be provided by MISTiC Winds, a NASA mission concept consisting of a constellation of satellites equipped with midwave infrared spectrometers, expected to provide high spatial and temporal resolution temperature and humidity soundings of the troposphere from which AMVs can be derived by tracking clouds and water vapor features.

The GMAO AMV simulator identifies trackable cloud and water vapor features in the G5NR and employs a probabilistic function to draw a subset of the identified trackable features. Before being applied to the MISTiC Winds concept, the simulator was calibrated to yield realistic observation counts and spatial distributions, and validated using the Himawari-8 Advanced Himawari Imager (AHI) as a proxy instrument for MISTiC Winds. The simulated AHI AMVs showed a close match with the real AHI AMVs in terms of observation counts and spatial distributions, showing that the GMAO AMV simulator synthesizes AMV observations with enough quality and realism to produce a response from the DAS equivalent to the one produced with real observations. When applied to the MISTiC Winds scanning points, the simulator indicates that MISTiC Winds can be expected to collect approximately 60,000 wind observations every 6 hours for a constellation composed of 12 satellites (4 orbital planes). In addition, one of the main expected impacts of the MISTiC Winds concept is the ability to derive water vapor feature-tracking AMVs below 500-400 hPa, a unique capability among the water vapor AMVs derived from the current Earth observing system.

Marcin Chrust - Towards operational implementation of the Object Oriented Prediction System at ECMWF

Title: Towards operational implementation of the Object Oriented Prediction System at ECMWF

Authors: Marcin Chrust  (European Centre for Medium-Range Weather Forecasts)

Mats Hamrud (European Centre for Medium-Range Weather Forecasts)

Olivier Marsden (European Centre for Medium-Range Weather Forecasts)

Deborah Salmond (European Centre for Medium-Range Weather Forecasts)

Stephen English  (European Centre for Medium-Range Weather Forecasts)

 

The European Centre for Medium-Range Weather Forecasts has committed to the development of a unified framework for variational data assimilation algorithms for the earth-system model. The design of the Object Oriented Prediction System (OOPS) aims at separating data assimilation algorithms from specific implementations in underlying models (atmosphere, ocean, land surface, sea ice), allowing a variety of variational assimilation methods to be supported within a single software framework. The underlying idea is to identify independent, self-contained building blocks: states, observation operators, covariance matrices, models and increments, which together form data assimilation algorithms. Such a design facilitates the development of new algorithms, e.g. weak-constraint 4D-Var, and fosters collaboration across institutions and among developers. It leads to improved productivity, for instance by enabling algorithms developed for one model to be transferred easily to another model without any recoding, and it results in significantly reduced code maintenance costs.
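To illustrate the separation-of-concerns idea (only an illustrative Python analogue; OOPS itself is a C++/Fortran framework and its actual interfaces differ), the building blocks named above can be thought of as abstract interfaces that a generic algorithm composes without knowing the model:

```python
# Illustrative analogue of model-agnostic building blocks; not the OOPS API.
from typing import Protocol
import numpy as np

class Model(Protocol):
    def forecast(self, state: np.ndarray, steps: int) -> np.ndarray: ...

class ObsOperator(Protocol):
    def apply(self, state: np.ndarray) -> np.ndarray: ...

class Covariance(Protocol):
    def inverse_apply(self, v: np.ndarray) -> np.ndarray: ...

def threedvar_cost(x: np.ndarray, xb: np.ndarray, y: np.ndarray,
                   H: ObsOperator, B: Covariance, R: Covariance) -> float:
    """Generic 3D-Var cost written only against the abstract interfaces;
    a 4D or weak-constraint variant would additionally compose a Model."""
    dxb = x - xb
    dy = y - H.apply(x)
    return 0.5 * dxb @ B.inverse_apply(dxb) + 0.5 * dy @ R.inverse_apply(dy)
```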

The Object Oriented Prediction System (OOPS) has been interfaced to the IFS and to the NEMO ocean model. The first operational implementation of OOPS is scheduled for Cy46r1, which will be finalized by the time of the workshop. In this presentation we will share our experience with interfacing our earth-system model with the common assimilation framework and describe the practical challenges that we needed to address. We will give a future outlook for the evolution of OOPS at ECMWF. OOPS will be central to implementing the ECMWF data assimilation strategy 2016-2025 and an enabler of coupled data assimilation.

Elizabeth Cooper - Observation operators for assimilation of satellite observations in fluvial inundation forecasting

Title: Observation operators for assimilation of satellite observations in fluvial inundation forecasting

 

Authors: Elizabeth Cooper (Department of Meteorology, University of Reading, UK)

Sarah Dance (Department of Meteorology and Department of Mathematics and Statistics, University of Reading, UK)

Javier Garcia-Pintado (MARUM Center for Marine environmental Sciences and Department of Geosciences, University of Bremen)

Nancy K. Nichols (Department of Meteorology and Department of Mathematics and Statistics, University of Reading, UK)

Polly Smith  (Department of Meteorology and Department of Mathematics and Statistics, University of Reading, UK)

 

Satellite-based synthetic aperture radar (SAR) instruments provide valuable information about the location and extent of floodwater during river flooding events. In order to use this information in data assimilation for flood inundation forecasting we must first process the raw SAR data in some way to produce an observation (or set of observations) per image. We then require an observation operator to map our model forecast state vector into observation space, extracting the equivalent information from our model in order to compare it to the derived observations. The observation operator therefore depends on the type of observational information used, and on the quantity in the model state vector.

 

Conventional approaches have used observation operators designed to enable the assimilation of water levels derived from SAR images. We have developed a new observation operator to directly use backscatter values from SAR instruments as observations. We compare the performance of our new operator with that of two conventional approaches using an ensemble Kalman filter approach in simple synthetic twin experiments in an idealised domain. We show that our novel backscatter observation operator is able to produce a significant correction to modelled water levels, and can successfully retrieve the true value of an initially mis-specified channel friction parameter. Our observation operator is therefore a promising new development in the field of data assimilation for fluvial inundation forecasting.

 

 

Will Crawford - Accounting for error in an ensemble of seasonal forecasts using a high resolution global coupled model

Title:  Accounting for Error in an Ensemble of Seasonal Forecasts using a High Resolution Global Coupled Model

 

Authors: William Crawford (ASEE/Naval Research Laboratory, Marine Meteorology Division, Monterey, CA)

Sergey Frolov (Naval Research Laboratory, Marine Meteorology Division, Monterey, CA)

Neil Barton (Naval Research Laboratory, Marine Meteorology Division, Monterey, CA)

Craig Bishop (Naval Research Laboratory, Marine Meteorology Division, Monterey, CA)

 

 

Currently, the Navy's coupled Earth system model is used to produce an ensemble of sub-seasonal forecasts initialized from an ensemble of analysis states generated from a set of uncoupled ocean and atmospheric data assimilations using the method of perturbed observations. The model has been shown to be under-spread both in the initial state and throughout the forecast, as well as suffering from significant biases. To address issues of model spread and bias throughout the forecast, a method of analysis-correction-based additive inflation (ACAI) is explored. In ACAI, an archive of analysis corrections is drawn upon as a representation of model error. For each ensemble forecast member, an analysis correction is chosen at random from the same season but a different year and added incrementally to the model state over each 6-hour period of the forecast. In addition, a seasonal mean analysis correction is incrementally added to the forecast over the same period. The seasonal-mean component of the inflation term serves to address model bias, while the randomly drawn component aims to increase ensemble spread throughout the forecast period. Lastly, previous experience shows that the method of perturbed observations does not generate sufficient spread in the initial ensemble forecast state, and therefore its replacement by the method of relaxation-to-prior-perturbations (RTPP) is explored along with the implementation of ACAI.
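A minimal sketch of the ACAI procedure as described above, for one ensemble member and one 6-hour window (array shapes, the model time step, and the names `model_step` and `member_id` are illustrative assumptions, not the Navy system's code):

```python
import numpy as np

def acai_increment(archive, seasonal_mean, rng, window_hours=6.0, dt_hours=0.5):
    """Analysis-correction-based additive inflation (ACAI) sketch: draw one
    archived analysis correction (same season, different year), add the
    seasonal-mean correction, and spread the sum evenly over the model steps
    of a 6-hour window."""
    drawn = archive[rng.integers(len(archive))]      # randomly drawn correction
    n_steps = int(window_hours / dt_hours)
    return (drawn + seasonal_mean) / n_steps

# Usage sketch for one member over one 6-hour window:
# rng = np.random.default_rng(member_id)
# inc = acai_increment(archive, seasonal_mean, rng)
# for _ in range(int(6.0 / 0.5)):
#     state = model_step(state) + inc    # add the increment at every model step
```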

 

Sarah Dance - Data Assimilation for the REsilient City (DARE)

Title: Data Assimilation for the REsilient City (DARE)

 

Authors: Sarah L Dance  (University of Reading)

Sanita Vetra-Carvalho   (University of Reading)

Joanne A. Waller (University of Reading)

 

Urban areas often suffer from increased vulnerability to natural hazards, including extreme weather and flooding, due to dense populations and infrastructure. Our ability to manage these hazards is limited, in large part, by the accuracy of computational model predictions that we use for long term planning (e.g., designing flood defences) and the production of timely forecasts (e.g., guiding emergency responders). However, urban areas also present rich sources of data (e.g., citizen science, smartphones, internet of things etc.). To date these datasets of opportunity have not been fully explored or exploited. In this talk, we will describe a new project that is developing scientific methods underpinning the quantitative use of data from diverse sources, with applications in flood inundation modelling and numerical weather prediction.

 

In scientific observing networks, point observations are often sited away from buildings, in locations that are intended to be more broadly representative of larger areas and not designed to reflect local urban conditions. These observations lend themselves more naturally to comparison with discretized models, whose grid-lengths may be much larger than the size of a building. For datasets of opportunity, a key problem is to understand the effects of the urban environment on the observations so that uncertainties can be properly attributed and proper quality control procedures established.

 

We will describe initial work with such datasets of opportunity, such as mode-S EHS observations (arising from air traffic control secondary surveillance radar) and vehicle temperature sensors. We will discuss the new issues arising from using these data (scientific, social and political) and their potential impact on hazard forecasts.

Fabio Diniz - Comparing the adjoint- and ensemble-based approaches to observation impact on short-range forecasts

Title: Comparing the adjoint- and ensemble-based approaches to observation impact on short-range forecasts.

Authors: F. Diniz   (INPE)

R. Todling (NASA)

 

GMAO is one of the few centers around the world that has been evaluating observation impact on twenty-four-hour forecasts using an adjoint-based approach for many years. This is implemented within its near-real-time GEOS data assimilation system (DAS), which involves the adjoint of the GEOS general circulation model (GCM), responsible for calculating forecast sensitivities, and the adjoint of the Grid-point Statistical Interpolation (GSI) analysis, responsible for calculating analysis sensitivities. The GMAO implementation of the adjoint-based observation impact dates back to when GEOS DAS was still running 3dVar. More recently, the GEOS assimilation approach has evolved, first into ensemble hybrid 3dVar, and as of January 2017 into hybrid 4dEnVar. These ensemble hybrid systems rely on a reduced-resolution ensemble running in parallel with the high-resolution hybrid analysis, and combine an ensemble of GEOS GCM integrations with the Ensemble Square-Root Filter (EnSRF) analysis. The adjoint-based observation impact tool is automatically available in systems implementing traditional or hybrid, 3D or 4D, variational methods.

Many hybrid data assimilation systems currently used at NWP centers do not have an adjoint of the underlying GCM, and thus lack the ability to evaluate observation impact through traditional adjoint-based methods. In such systems, an argument can be made for deriving observation impacts on forecasts using an ensemble-based approach instead. Unfortunately, typical hybrid systems use ensembles that operate at a different resolution than the deterministic forecasting model, which results in degraded forecast quality compared to the central high-resolution forecasts. Worse still, in many hybrid systems the ensemble analysis handles the observing system in substantially different ways than the hybrid, deterministic analysis does. This particular issue is enough to argue that an ensemble-based approach to observation impact is bound to provide an incorrect assessment of the observations used in the hybrid systems. This presentation compares these approaches to observation impact using GEOS DAS.

Amal El Akkraoui - How much model error in a 6h ensemble forecast?

Title: How much model error in a 6h ensemble forecast?

Authors: Amal El Akkraoui  (NASA/GMAO/SSAI)

Ricardo Todling (NASA/GMAO)

Ron Errico (Morgan State University, NASA/GMAO)

 

In the hybrid DA framework, we are concerned with improving the representation of the forecast error covariance matrix using an ensemble of forecasts that contributes information about the uncertainty around the flow. An accepted idea is that a good ensemble is able to represent the time evolution of that uncertainty, and that it implicitly accounts for both initial-condition and model errors in what it encompasses as forecast error. Using the Global Modeling and Assimilation Office (GMAO) 4D-EnVar system, this work explores the interplay between these two sources of error and how they evolve within the short time frame of the data assimilation window. In this context, we examine whether a 6-h ensemble forecast is able to capture the model error component and how it reacts to various ways of initializing and/or perturbing the ensemble, such as stochastic physics schemes. Finally, analogies pertaining to the concept of weak-constraint variational DA will also be considered.

Ron Errico - Guidelines to consider when performing OSSEs

Title:  Guidelines to consider when performing OSSEs

Authors: Ronald Errico (GMAO/NASA, MSU/GESTAR)

Nikki Privé  (GMAO/NASA, MSU/GESTAR)

 

There is an increasing demand to provide OSSE support when seeking funding for new atmospheric observing instruments. Various individuals and groups are running or developing OSSEs with little experience in OSSEs in particular or DA in general. In this presentation we will describe some key issues that are often neglected and some of the poor practices to be avoided. These include issues regarding NR and OSSE validation, consideration of instrument, observation operator, and forecast model error, relationships between observations and synoptic conditions, and conflicts of interest.

Steven Fletcher - Comparisons of mixed Gaussian-lognormal, logarithmic transform and Gaussian fits all based on temperature-mixing ratio microwave retrieval systems

Title: Comparisons of Mixed Gaussian-lognormal, logarithmic transform and Gaussian fits all based on temperature-mixing ratio microwave retrieval systems.

 

Authors: Steven J. Fletcher (CIRA)

Postdoc (CIRA)

Anton J. Kliewer (CIRA)

 John M. Forsythe (CIRA)

Andrew S. Jones (CIRA)

Cooperative Institute for Research in the Atmosphere (CIRA), Colorado State University, Fort Collins, CO, USA.

 

At CIRA we have developed three different distribution-based temperature-mixing ratio microwave retrieval systems. The first is a Gaussian-fits-all system, where a multivariate Gaussian distribution is used to model the covariances between the temperature and mixing ratio errors; the second approach is based upon a logarithmic transform applied to the mixing ratio, which is equivalent to using the median of a lognormal distribution to minimize the background errors; and the third approach uses a maximum likelihood approach to minimize the lognormal background errors. It should be noted that the second and third approaches utilize the covariance matrix of the same mixed Gaussian-lognormal distribution.
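Schematically, the background terms of the second and third approaches can be written as follows (a generic mixed Gaussian-lognormal form in my own notation, shown only to make the median/mode distinction concrete; it is not taken from the authors' system):

```latex
J_b(\mathbf{T},\mathbf{q}) \;=\;
\tfrac12
\begin{pmatrix} \mathbf{T}-\mathbf{T}_b \\ \ln\mathbf{q}-\ln\mathbf{q}_b \end{pmatrix}^{\!\mathsf{T}}
\mathbf{B}^{-1}
\begin{pmatrix} \mathbf{T}-\mathbf{T}_b \\ \ln\mathbf{q}-\ln\mathbf{q}_b \end{pmatrix}
\;+\;\underbrace{\sum_i \ln q_i}_{\text{maximum-likelihood (mode) version only}},
```

where the logarithmic-transform approach drops the Jacobian term (yielding the lognormal median), while the maximum-likelihood approach retains it (yielding the mode).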

 

In this presentation we present results from the three different retrieval systems for different dynamical situations, in comparison with the detection algorithms detailed in our other abstract, to quantify how each system performs, with verification assessed against goodness of fit to the brightness temperatures.

 

 

Javier García-Pintado - Experiments for online estimation of model parameters for multidecadal climate reconstruction with the Community Earth System Model (CESM)

 

Title: Experiments for online estimation of model parameters for multidecadal climate reconstruction with the Community Earth System Model (CESM).

Authors: Javier García-Pintado  (MARUM, Research Faculty University of Bremen, Germany) 

Pepijn Bakker  (MARUM, Research Faculty University of Bremen, Germany) 

André Paul  (MARUM, Research Faculty University of Bremen, Germany) 

Matthias Prange (MARUM, Research Faculty University of Bremen, Germany) 

Michael Schulz  (MARUM, Research Faculty University of Bremen, Germany) 

 

The Community Earth System Model (CESM) does not have an adjoint code, and alternative methods have to be used for assimilating paleoclimate proxy observations for multidecadal past-climate reconstructions with CESM. Here, under the assumption that after some time needed to reach quasi-equilibrium all the model uncertainty comes from model parameters (i.e. a perfect-model framework assumption except for the selected control parameters), we evaluate several strategies for online estimation of model parameters and reconstruction of the corresponding climate. One strategy is the use of an ensemble transform Kalman filter (ETKF) augmented with the selected model parameters. A second one is the use of (univariate) Gaussian anamorphosis along with an ETKF as a way to deal with the nonlinear relation between model parameters and the climatic variables. A third strategy, with possibly a smaller computational burden if a reduced number of parameters is used as control variables, is the use of a fractional Kalman filter formulated in the parameter space (pFKF), where the observations are assimilated several times with an inflated observation error covariance. Both the ETKF and the pFKF are used as smoothers (hence denoted here as ETKS and pFKS), as they assimilate observations from times later than that of the reconstructed climate. We focus on estimating 10 model parameters from both the atmosphere and the ocean components. The pFKS with 3 loops leads to a lower cost function than the ETKS with m=60 members, and with nearly half the computational effort (as a background plus one integration per parameter is used at each iteration of the pFKS). However, the climate sensitivity to model parameters estimated with the pFKS results from sampling only the conditional parameter distributions, while the sensitivity estimated by the ETKS results from sampling the complete (Gaussian) multivariate distribution of the model parameters. Thus, although both methods lead in the experiment to a significantly reduced cost function with respect to the background (mostly the pFKS), we find that the sensitivities estimated from the two approaches are substantially different. Here we focus on exploring and comparing these sensitivities as obtained after 60 years of model integration time.
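For concreteness, here is a minimal sketch of the state-augmentation idea behind the first strategy, using a standard ETKF square-root update (Hunt-et-al-style ensemble-space formulation) on a state vector extended with the parameters; this is generic illustrative code, not the authors' CESM setup:

```python
import numpy as np

def etkf_update_augmented(X, theta, h, y, R):
    """ETKF analysis on a state augmented with model parameters (sketch).

    X     : (n, m) ensemble of model states (m members)
    theta : (p, m) ensemble of parameter values
    h     : function mapping one augmented column to observation space
    y, R  : observations and their error covariance
    Parameters are updated only through their ensemble covariance with the
    observed quantities (standard state augmentation)."""
    Z = np.vstack([X, theta])                       # augmented ensemble
    m = Z.shape[1]
    HZ = np.column_stack([h(Z[:, j]) for j in range(m)])
    zbar, ybar = Z.mean(axis=1), HZ.mean(axis=1)
    Zp, Yp = Z - zbar[:, None], HZ - ybar[:, None]
    Rinv = np.linalg.inv(R)
    Pa = np.linalg.inv((m - 1) * np.eye(m) + Yp.T @ Rinv @ Yp)
    wbar = Pa @ Yp.T @ Rinv @ (y - ybar)            # mean-update weights
    evals, evecs = np.linalg.eigh((m - 1) * Pa)     # symmetric square root
    Wa = evecs @ np.diag(np.sqrt(np.maximum(evals, 0.0))) @ evecs.T
    Za = zbar[:, None] + Zp @ (wbar[:, None] + Wa)  # analysis ensemble
    n = X.shape[0]
    return Za[:n, :], Za[n:, :]                     # updated states, parameters
```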

 

 

 

 

Alexander Goldstein - New methods for the calculation and analysis of quasi-optimal adjoint perturbation

Title: New methods for the calculation and analysis of quasi-optimal adjoint perturbation

 

Author: Alexander M. Goldstein  (University of Wisconsin – Madison Department of Atmospheric and Oceanic Science)

 

Several new methods for the calculation and analysis of optimal perturbations are introduced and explored using results from the NASA GEOS-5 non-linear numerical weather prediction (NWP) and adjoint models. A response function is proposed that adds a time dimension to more canonical response functions, thereby allowing for the exploration of such fields as the intensification rate of a cyclone versus cyclone intensity at a static point in the forecast trajectory. Given that many of the processes that govern the rate of change of a field such as intensification rate are non-linear and hard to simulate, an iterative, incremental perturbation method inspired by four-dimensional variational data assimilation is also introduced and discussed as a means by which to relax the constraints of the tangent-linear assumption. Several examples of both of these techniques will be examined. Additionally, a technique in which quasi-geostrophic potential vorticity (QGPV) inversion is applied to a perturbation itself is discussed, which gives insight into the balanced mass and flow fields associated with the perturbation as it evolves through the trajectory in time and space.

Oliver Guillet - Modelling observation error correlations using a diffusion operator on unstructured grids

Title: Modelling observation error correlations using a diffusion operator on unstructured grids

 

Authors: O. Guillet (Météo-France CNRM, Toulouse, France)

S. Gratton (INPT-IRIT, Toulouse, France)
S. Gurol (UMR5318 CNRS-CERFACS, Toulouse, France)
X. Vasseur (ISAE-SUPAERO, Toulouse, France)
A. Weaver (UMR5318 CNRS-CERFACS, Toulouse, France)

 

The estimation and modelling of background and observation error covariances are of utmost importance in data assimilation. Over the years, work on the representation of error covariances in data assimilation has mainly focused on the background state. Very few studies, however, have dealt with the representation of observation errors. In atmospheric and ocean data assimilation systems, observation errors are usually taken to be uncorrelated. This is often done for algorithmic convenience even though we know that this hypothesis is incorrect for certain observations, notably with satellite measurements whose errors may exhibit strong spatial, temporal and interchannel correlations. Satellite observation errors may be due to a wide variety of causes: e.g. instrumental errors, pre-processing, orbit positions or representativeness errors. By pre-processing the observations (thinning and/or superobing), the assumption that their errors are uncorrelated can be justified to some extent. This leads to a diagonal representation of the observation error covariance matrix (R). Typical sub-optimal ways to account for the effects of observation error correlation include artificially increasing the error variances, or extensively thinning the observations with the result that only a small fraction of the data are actually assimilated. These techniques are not satisfactory since they mean under-exploiting available observational information. This is especially true for data assimilation applications with high-resolution forecast models, which typically employ an aggressive data selection prior to data assimilation.

 

In this study, we propose representing correlated observation errors using a differential operator (R) that acts on vectors in observation space. The R-operator is built from the discretization of a time-implicit diffusion equation. To account for the unstructured spatial distribution of the observations, a discretization technique based on the finite element method is chosen. In order to improve the quality of the numerical solutions, the data are first projected onto a fine grid before solving the diffusion equation. The solution to the diffusion equation is then projected back onto the coarse grid defined by the observation network. As a result, the R-operator is equivalent to a series of matrix-vector products involving sparse matrices, some of which are rectangular. We show that this approach yields a robust modelling of R, at the expense of making the inverse of R difficult to obtain. This difficulty is obviated by defining each finite element matrix using a refinement strategy, while ensuring the computations are still performed in the space of the observations. This new method has similar accuracy to the former, and leads to an easily invertible formulation of the R-operator. Numerical experiments are performed using data from the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) aboard the Meteosat satellites. We highlight key points of our methodology exploiting some extreme-case scenarios from the same sounder.
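A one-dimensional sketch of the implicit-diffusion idea at the heart of the proposed R-operator (illustrative only: it uses a regular grid and finite differences, and omits the finite-element treatment of unstructured observation locations, the fine-grid projections and the normalization discussed above):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def implicit_diffusion_operator(n, kappa, n_steps):
    """Build an operator whose application solves a time-implicit diffusion
    equation: each application of C solves (I - kappa*L) u = v  n_steps times,
    with L a 1-D Laplacian.  The result behaves like an (unnormalized)
    correlation operator with a diffusion-controlled length scale."""
    L = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
    A = (sp.eye(n) - kappa * L).tocsc()
    solve = spla.factorized(A)         # sparse LU, reused at every application

    def apply_C(v):
        u = np.asarray(v, dtype=float)
        for _ in range(n_steps):
            u = solve(u)
        return u
    return apply_C

# Usage sketch: the response to a unit impulse gives one row of the implied
# (unnormalized) correlation matrix.
# C = implicit_diffusion_operator(n=101, kappa=5.0, n_steps=4)
# e = np.zeros(101); e[50] = 1.0
# row = C(e)
```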

Ji-Hyun Ha - Variational bias correction of radiance data at KIAPS and associated results

Title: Variational Bias Correction of Radiance Data at KIAPS and Associated Results

Presenter: Ji-Hyun Ha (Korea Institute of Atmospheric Prediction Systems)

Hyo-Jong Song  (Korea Institute of Atmospheric Prediction Systems)

In-Hyuk Kwon (Korea Institute of Atmospheric Prediction Systems)

Hyoung-Wook Chun  (Korea Institute of Atmospheric Prediction Systems)

 

Satellite observations have become the predominant source of data assimilated into Numerical Weather Prediction (NWP) models. However, satellite instruments are imperfect and prone to error. The errors caused by problems with the satellite measurements themselves, by the radiative transfer model, and by systematic errors in the background state provided by the NWP model are systematic (i.e. biases). These biases can damage the data assimilation system and ultimately degrade the forecast skill within a short period of time, so better handling of radiance biases is needed. Recently, to correct radiance biases, many operational NWP centers have developed a variational bias correction (VarBC) scheme for their forecasting systems, and it has been noted that the skill of weather forecasts benefits from the introduction of VarBC. The Korea Institute of Atmospheric Prediction Systems (KIAPS) has therefore been developing a VarBC scheme within its data assimilation system on the cubed-sphere grid. Benefits of VarBC were found in the quality of the analysis. In this workshop, the VarBC scheme at KIAPS will be described, and its performance will be discussed by analyzing the differences with a static bias correction scheme.
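For readers unfamiliar with VarBC, the generic formulation used at many centres augments the observation operator with a linear predictor model and adds the bias coefficients to the control vector (schematic form; the abstract does not specify the exact predictor set used at KIAPS):

```latex
\tilde{H}(\mathbf{x},\boldsymbol{\beta}) \;=\; H(\mathbf{x}) \;+\; \sum_{i} \beta_i\, p_i(\mathbf{x}),
\qquad
J(\mathbf{x},\boldsymbol{\beta}) \;=\; J_b(\mathbf{x}) \;+\; J_\beta(\boldsymbol{\beta})
\;+\; \tfrac12\,\big(\mathbf{y}-\tilde{H}(\mathbf{x},\boldsymbol{\beta})\big)^{\mathsf{T}}\mathbf{R}^{-1}\big(\mathbf{y}-\tilde{H}(\mathbf{x},\boldsymbol{\beta})\big),
```

so the bias coefficients β are re-estimated at every analysis together with the state, in contrast to a static bias correction with fixed coefficients.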

 

Daniel Holdaway  - Progress towards hybrid 4DVar with the FV3 dynamical core

Title: Progress towards hybrid 4DVar with the FV3 dynamical core.

 

Author: Daniel Holdaway (Global Modeling and Assimilation Office, NASA Goddard Space Flight Center, Greenbelt, Maryland; Goddard Earth Sciences Technology and Research, Universities Space Research Association, Columbia, Maryland)

 

NASA's Goddard Earth Observing System (GEOS) is based on the Finite-Volume Cubed-Sphere (FV3) dynamical core. Although this core has been central to GEOS for several years, it recently underwent significant updates as it passed through the rigorous Next Generation Global Prediction System (NGGPS) dynamical core comparison. FV3 was chosen to be the next dynamical core used by the National Weather Service. This renewed effort motivated the redevelopment of the adjoint and tangent-linear versions of FV3, first to absorb the model changes and second to address issues with efficiency that had previously hindered the implementation of 4DVar. In this work we present some results from the development of this new adjoint system and discuss the measures taken to address efficiency concerns. We then present a status update on the development of a full 4DVar system based on FV3. This is divided into scientific and computational concerns. We show some stand-alone analysis and single-observation tests using the Gridpoint Statistical Interpolation (GSI) data assimilation system and compare the 4DVar results with other types of data assimilation. We note some efficiency issues that exist in the GSI system but discuss the potential in using the Joint Effort for Data assimilation Integration (JEDI) system. JEDI, led by the Joint Center for Satellite Data Assimilation (JCSDA), is an inter-organizational endeavor to develop a common framework for performing data assimilation. This extensive framework will ultimately provide solvers, observation operators, interpolation and model interfaces using object-oriented modeling. JEDI offers potential efficiency improvements over GSI by performing assimilation on the native model grid. This offers a promising path forward for 4DVar data assimilation with the FV3 model and linearization. We discuss the steps underway to bring the FV3 linearized model into the object-oriented framework and show some preliminary results.

Daisuke Hotta - EFSO and DFS diagnostics for JMA's global data assimilation system: their caveats and potential pitfalls

Title: EFSO and DFS diagnostics for JMA's global data assimilation system: their caveats and potential pitfalls

Authors: Daisuke Hotta (Meteorological Research Institute, Japan Meteorological Agency)

Yoichiro Ota (Japan Meteorological Agency)

 

Diagnostic methods have recently been devised that allow estimation of observational impact on a Data Assimilation System (DAS). Such diagnostics not only enable quantification of the "value" of observations but can also provide clues for improving the diagnosed DAS. Two such diagnostic methods, Ensemble Forecast Sensitivity to Observations (EFSO; Kalnay et al. 2012) and Degrees of Freedom for Signal (DFS; Liu et al. 2009), have been implemented in the EnKF component of JMA's pre-operational global hybrid DAS. This presentation will discuss our findings obtained through these diagnostics, with particular emphasis placed on caveats and potential pitfalls in interpreting their results.

The first part will present EFSO as implemented at JMA. It was found that the forecast error reduction estimated by EFSO accounts for only 20% of the actual forecast error reduction. In order to understand the mechanism behind this underestimation, we conducted a diagnosis in which the forecast error vector is decomposed into the column space and null space of the ensemble forecast perturbations, recognizing that portions of forecast errors that are in the null space are discarded during the EFSO computation. The result indicated that 80% of the forecast errors are in fact in the null space, explaining the mechanism of the underestimation. Reasons why so little of the forecast error is accounted for by the ensemble will also be discussed.

The second part will discuss DFS results obtained with JMA's global EnKF. It was found that the information content extracted from observations by the EnKF, as quantified by DFS, is an order of magnitude smaller than that extracted by 4D-Var. This underestimation is particularly conspicuous for dense observations. Theoretical consideration reveals that this is because, in the EnKF's local analysis, DFS can never exceed the size of the background ensemble, limiting the amount of information extractable from observations if the number of locally assimilated observations is much larger than the ensemble size. Implications of this DFS underestimation for broader aspects of the EnKF, including localization, inflation and observation thinning, will also be discussed.
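The rank argument sketched above can be written compactly using one common definition of DFS (schematic only; Liu et al. 2009, cited above, give the precise ensemble-space estimator used in this work):

```latex
\mathrm{DFS} \;=\; \operatorname{tr}(\mathbf{H}\mathbf{K})
\;=\; \operatorname{tr}\!\left[\mathbf{H}\mathbf{P}^b\mathbf{H}^{\mathsf{T}}
\big(\mathbf{H}\mathbf{P}^b\mathbf{H}^{\mathsf{T}}+\mathbf{R}\big)^{-1}\right]
\;\le\; \operatorname{rank}\!\big(\mathbf{H}\mathbf{P}^b\mathbf{H}^{\mathsf{T}}\big)
\;\le\; m-1,
```

since the eigenvalues of the matrix inside the trace lie in [0, 1) and a background covariance estimated from m members has rank at most m - 1.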

 

 

--- Abstract #2 ---

Title: EFSR: Ensemble Forecast Sensitivity to Observation Error Covariance

Authors: Daisuke Hotta (Meteorological Research Institute, Japan Meteorological Agency)

Eugenia Kalnay (University of Maryland, College Park)

Yoichiro Ota (Japan Meteorological Agency)

Takemasa Miyoshi (RIKEN AICS)

Preference: Poster

Data assimilation (DA) methods require an estimate of the observation error covariance R as an external parameter that typically is tuned in a subjective manner. To facilitate objective and systematic tuning of R within the context of ensemble Kalman filtering, we introduce a method for estimating how forecast errors would be changed by increasing or decreasing each element of R, without the need for an adjoint of the model or of the DA system, by combining the adjoint-based R-sensitivity diagnostics previously presented by Daescu with the technique employed by Kalnay et al. to derive ensemble forecast sensitivity to observations (EFSO). The proposed method, termed EFSR, is shown to be able to detect and adaptively correct misspecified R through a series of toy-model experiments using the Lorenz-96 model.

It is then applied to a quasi-operational global DA system of the National Centers for Environmental Prediction (NCEP) to provide guidance on how to tune R. A sensitivity experiment in which the prescribed observation error variances for four selected observation types were scaled by 0.9 or 1.1 following the EFSR guidance, however, resulted in forecast improvement that is not statistically significant. This can be explained by the smallness of the perturbation given to R. An iterative online approach to improve on this limitation is proposed. Nevertheless, the sensitivity experiment did show that the EFSO impacts from each observation type were increased by the EFSR-guided tuning of R.

 

Yasutaka Ikuta - Assimilation of GPM/DPR in km-scale hybrid-4DVar system

Title: Assimilation of GPM/DPR in a km-scale hybrid-4DVar system

 

Author: Yasutaka Ikuta (Japan Meteorological Agency)

 

A new km-scale hybrid-4DVar data assimilation system is being developed to improve short-range precipitation forecasts at the Japan Meteorological Agency. One of the purposes of developing this system is to enable the assimilation of high spatial and temporal resolution observations related to hydrometeors. For the assimilation of such observations, a new simplified 6-class, 3-ice, 1-moment bulk cloud microphysics scheme suitable for tangent linearization has been developed. In addition, the background error covariance of hydrometeors is constructed using ensemble perturbations, because the vertical error correlation depends strongly on the meteorological situation. Using this km-scale hybrid-4DVar data assimilation system, the impact of the Dual-frequency Precipitation Radar (DPR) on board the Global Precipitation Measurement (GPM) core satellite has been investigated. The DPR instrument was developed by the Japan Aerospace Exploration Agency (JAXA) in cooperation with the National Institute of Information and Communications Technology (NICT). This space-borne precipitation radar can observe the three-dimensional distribution of reflectivity over the whole globe. The results of the GPM/DPR assimilation experiment will be presented.

 

 

Marta Janisková - Well-known and less obvious applications of adjoint models: Do we explore their potential enough?

Title: Well-known and less obvious applications of adjoint models: Do we explore their potential enough?

 

Authors: M. Janisková  (ECMWF, Shinfield Park, Reading, UK)

P. Lopez (ECMWF, Shinfield Park, Reading, UK)
F. Vána (ECMWF, Shinfield Park, Reading, UK)

 

The usefulness of tangent-linear and adjoint models is determined by how well they can represent the nonlinear version of the models. Results obtained when using an oversimplified linearized model with large inaccuracies, or linearized models without a proper treatment of nonlinearities and discontinuities, can be incorrect. To maintain the benefit provided by the different applications based on adjoint models, it is absolutely essential to constantly check that the linearized model provides a good tangent-linear (TL) approximation to the full reference nonlinear (NL) forecast model, in spite of the continuous upgrading of the latter.

 

The best known application of the adjoint method is as a tool for the efficient computation of optimal initial conditions in variational data assimilation. The mismatch between the model solution and observations could remain large if only an imperfect adiabatic model were used in the minimization. Besides, many satellite observations, such as radiances, rainfall or cloud measurements, could not be directly assimilated with such oversimplified adjoint models (Janisková and Lopez, 2013).

 

Another familiar use of the adjoint technique is the computation of the fastest growing modes (i.e. singular vectors) over a finite time interval, which can be used to generate initial perturbations in Ensemble Prediction Systems. Sensitivity studies can also exploit the power of adjoint models, since they enable the computation of the gradient of a selected output parameter of a numerical model with respect to all its input parameters. In practice, this is often used to obtain either sensitivities of one aspect of the forecast to the initial conditions, sensitivities of the analysis to observations, or sensitivities of the analysis to model parameters. In adjoint-based observation sensitivity techniques, a more sophisticated adjoint model leads to more flow-dependent and more realistic sensitivities (Janisková and Cardinali 2017).

 

A less obvious application of the adjoint concept is the variational estimation of model parameters. This is similar to the problem of data assimilation, except that instead of, or in addition to, the standard adjustment of model initial conditions, the best fit of the forecasts with respect to the model parameters is determined. There are some limitations in such an application. Only parameters that are present in both the forecast model and the linearized, simplified one (used in the minimization of the cost function) can be treated in this way. Discrepancies between the NL and linearized parametrizations might lead to suboptimal results. The method is also unsuccessful for parameters associated with more nonlinear processes (e.g. condensation, convection) or parameters less well constrained by observations.

 

It is still less widely known, and not yet sufficiently explored or accepted, that the linearized approach can also be useful as a tool for finding and addressing issues that affect the full nonlinear model, and not just its linearized version. The linearized model can help to identify potential nonlinear model instabilities efficiently, as well as to discover and solve various problems in both the NL and TL codes through detailed investigation of the TL approximation.

 

This presentation will provide a brief overview of all these applications and emphasize the strengths and weaknesses of the adjoint approach.

 

Wei Kang - Data assimilation for models with a sparse error covariance

Title: Data Assimilation for Models with a Sparse Error Covariance

 

Authors: Wei Kang (Department of Applied Mathematics, Naval Postgraduate School, Monterey, CA)

Liang Xu (Naval Research Laboratory, Monterey, CA)

 

Data assimilation for numerical weather prediction deals with numerical models that have very large dimensions. The scalability of algorithms is limited by several factors, including the required memory, computational load, communication load over bus traffic, and degree of parallelism. For EnKF algorithms, the ensemble size largely determines the required memory as well as the computational cost of data assimilation. The number of model evaluations at each time step and the number of state vectors to be kept in memory are tied to the ensemble size; both increase approximately in proportion to the ensemble size. For several decades, various types of parallel computation platforms have been actively developed, from massively parallel supercomputers to specialized parallel computers and general-purpose GPUs. Each type has its pros and cons, and each platform has its own way of managing memory, computation and bus traffic. In this project, we explore ideas and algorithms in which the required memory is separated from the required computational load, so that a trade-off between memory and computation is possible.

 

A discretization of PDE systems tends to have a relatively large error correlation among nearby gridpoints and a relatively small correlation between distant points. In some cases, the error covariance matrix is approximately sparse. In this project, we develop a sparse Unscented Kalman Filter (UKF) algorithm. In contrast to the EnKF, in which the ensemble size is the main factor that determines the accuracy, the performance of a sparse UKF can be improved by increasing the computational load, i.e., the number of sigma points, without increasing the required memory size. In an example using the Lorenz-96 model, a sparse UKF algorithm is tested in which the memory is limited while the number of model evaluations is increased. In the numerical experiment, the effects of memory size versus computational cost on the accuracy and error variation are studied. It is found that, once the memory size is above a threshold level, it is more effective to increase the number of sigma points to improve the performance of a sparse UKF. In addition, the results are compared to a standard EnKF with localization for various ensemble sizes. By increasing the computational load while keeping the memory unchanged, the sparse UKF achieves significantly smaller error variation than the EnKF in all cases. If the ensemble size is small, the sparse UKF has a smaller median error than the EnKF. If the ensemble size is relatively large, the sparse UKF has a median error comparable to the EnKF, but a much smaller error variation.
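For readers less familiar with the UKF, the sigma points referred to above are the deterministically chosen sample points of the unscented transform; a standard textbook construction is sketched below with assumed default scaling parameters (this is not the sparse-covariance variant developed in this work):

```python
import numpy as np

def unscented_sigma_points(x_mean, P, alpha=1e-1, beta=2.0, kappa=0.0):
    """Standard unscented-transform sigma points and weights for a state with
    mean x_mean and covariance P: 2n+1 points placed along the columns of a
    scaled matrix square root of P."""
    n = x_mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)             # matrix square root
    pts = [x_mean] + [x_mean + S[:, i] for i in range(n)] \
                   + [x_mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))  # mean weights
    wc = wm.copy()                                    # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return np.array(pts), wm, wc
```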

 

 

Maha H. Kaouri - Gauss-Newton-type optimization methods for variational data assimilation

Title: Gauss-Newton-type Optimization Methods for Variational Data Assimilation

 

Authors: Maha H. Kaouri (University of Reading, UK)

Coralia Cartis (University of Oxford, UK)

Amos S. Lawless (University of Reading, UK and the National Centre for Earth Observation (NCEO))

Nancy K. Nichols (University of Reading, UK and the National Centre for Earth Observation (NCEO))

 

In variational data assimilation, the nonlinear least-squares problem is usually solved as a series of linear least-squares problems using an incremental method to obtain the initial conditions for a numerical weather forecast. This has been shown to be equivalent to the Gauss-Newton method under certain conditions. The Gauss-Newton method, which (unlike Newton's method) uses an approximate Hessian, is not globally convergent to a stationary point. As a result, the incremental method is not guaranteed to converge to a local minimum of the cost function. This is the motivation behind the investigation of newly developed, advanced numerical optimization methods, such as those which use safeguards to guarantee convergence from an arbitrary starting point. The use of such methods could enable us to obtain an improvement on the estimate of the initial conditions within the limited time and computational cost available.

 

We present a numerical study of the convergence behaviour of the Gauss-Newton method when applied to a three-dimensional data assimilation problem (3D-Var). We contrast this to results obtained when applying safeguarded Gauss-Newton methods that use line-search and regularization to guide Gauss-Newton towards convergence.
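
The toy sketch below illustrates a safeguarded Gauss-Newton iteration with a backtracking (Armijo) line search on a small nonlinear least-squares cost; it shows the general idea only, not the 3D-Var system or the regularized variants studied by the authors, and the residual function is made up.

```python
# Minimal sketch (illustrative): Gauss-Newton with a backtracking line search
# for a toy cost J(x) = 0.5 * ||r(x)||^2.
import numpy as np

def residual(x):
    # Hypothetical residual mixing "background-like" and "observation-like" terms.
    return np.array([x[0] - 1.0, x[1] - 2.0, np.sin(x[0] * x[1]) - 0.5])

def jacobian(x):
    return np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [x[1] * np.cos(x[0] * x[1]), x[0] * np.cos(x[0] * x[1])]])

def cost(x):
    r = residual(x)
    return 0.5 * r @ r

def gauss_newton(x0, max_iter=50, tol=1e-10):
    x = x0.copy()
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        # Gauss-Newton step: solve (J^T J) dx = -J^T r (approximate Hessian J^T J).
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        # Backtracking line search enforcing the Armijo decrease condition.
        step, g = 1.0, J.T @ r
        while cost(x + step * dx) > cost(x) + 1e-4 * step * g @ dx and step > 1e-8:
            step *= 0.5
        x = x + step * dx
        if np.linalg.norm(step * dx) < tol:
            break
    return x

x_analysis = gauss_newton(np.array([0.0, 0.0]))
```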

 

Takuya Kawabata - A storm-scale particle filter for investigating predictability of convection initiation and development

Title: A storm-scale particle filter for investigating predictability of convection initiation and development

 

Authors: Takuya Kawabata ( Meteorological Research Institute / Japan Meteorological Agency)

Genta Ueno  (The Institute of Statistical Mathematics)

 

A particle filter (PF) coupled with the JMA meso-scale nonhydrostatic model (NHM-PF) has been under development since 2017. The aim is to study the predictability of convection initiation and development under weak-forcing conditions. In general, convection without strong forcing (e.g., cold fronts, tropical cyclones, mountains) appears to be initiated randomly, which makes it difficult to identify the exact factors responsible for initiation. Moreover, the associated probability density functions are thought to be non-Gaussian, which has so far made such phenomena difficult to predict and even to investigate. A PF, however, can handle this non-Gaussianity. The NHM-PF employs a sampling importance resampling (SIR) filter with advanced observations, such as GNSS integrated water vapor and dual-polarimetric radars, together with the conventional observations developed for NHM-4DVAR (Kawabata et al. 2014). These rich observations are important for constraining the initiations in the model, but they may also cause filter collapse. A short assimilation period and the introduction of model error should mitigate this collapse.

 

The idea of this study is to investigate non-Gaussianities in the environmental fields (winds, temperature, water vapor) before initiation, as well as in the interior of cumulonimbi (cloud microphysics) after initiation. Detailed descriptions of this study and of the NHM-PF will be presented.
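
For readers unfamiliar with the SIR mechanics mentioned above, the sketch below shows one weight-update and systematic-resampling cycle of a generic SIR particle filter, together with the effective sample size often monitored to detect filter collapse; the observation operator, error value and state dimension are placeholders, not the NHM-PF configuration.

```python
# Minimal sketch (illustrative): one SIR particle filter analysis cycle.
import numpy as np

def sir_update(particles, y_obs, obs_operator, obs_err_std, rng):
    # Importance weights from a Gaussian observation likelihood.
    innov = y_obs - np.array([obs_operator(p) for p in particles])
    logw = -0.5 * np.sum((innov / obs_err_std) ** 2, axis=1)
    w = np.exp(logw - logw.max())
    w /= w.sum()

    # Effective sample size: a small value signals filter collapse.
    n_eff = 1.0 / np.sum(w ** 2)

    # Systematic resampling: duplicate high-weight particles, drop low-weight ones.
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), n - 1)
    return particles[idx], n_eff

rng = np.random.default_rng(0)
particles = rng.normal(size=(100, 3))      # 100 particles, 3 state variables
y_obs = np.array([0.2, -0.1])              # two observed quantities
H = lambda x: x[:2]                        # observe the first two state variables
particles, n_eff = sir_update(particles, y_obs, H, obs_err_std=0.5, rng=rng)
```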

 

Min-Jeong Kim - Sensitivity of Different Types of Observations to NASA GEOS Hurricane Analyses and Forecasts

Title: Sensitivity of Different Types of Observations to NASA GEOS Hurricane Analyses and Forecasts

 

Authors: Min-Jeong Kim (NASA/GMAO, GESTAR/Morgan State University)

Daniel Holdaway (NASA/GMAO, GESTAR/USRA)

 

 

The 2017 Atlantic hurricane season was the 5th most active, featuring 17 named storms, the highest number of major hurricanes since 2005, and by far the costliest season on record. African easterly waves often serve as the seeding circulation for a large portion of hurricanes (i.e., tropical cyclones in the Atlantic and Northeast Pacific with winds over 74 mph). Warm SST, moist air, and low wind shear are the main requirements for tropical cyclones to develop and maintain hurricane strength. In terms of hurricane propagation (the so-called hurricane track), Atlantic hurricanes typically propagate around the periphery of the subtropical ridge known as the Bermuda High (Azores High), riding along its strongest winds. If the high is positioned to the east, hurricanes generally propagate northeastward around the high's western edge into the open Atlantic Ocean without making landfall. If the high is positioned to the west and extends far enough to the south, storms are blocked from curving north and forced to continue west towards Florida, Cuba, and the Gulf of Mexico.

 

Accurate analyses of the atmospheric temperature distribution, which is directly related to atmospheric wave patterns, and of the wind, moisture, and SST distributions lead to better NWP skill for hurricane analyses, including hurricane intensity and tracks. Assimilating various types of observation data is expected to play this role in the analyses. To examine the impacts of different types of observation data on NASA Goddard Earth Observing System (GEOS) model hurricane analyses and forecasts during summer 2017, this study performs data-denial experiments using the GEOS Atmospheric Data Assimilation System (ADAS), which is based on the hybrid 4D-EnVar GSI algorithm. Various types of observations, such as microwave sounders, infrared sounders, TCvitals, and conventional data, are removed in the experiments. In addition, the interaction between the different observation groups as certain instruments are removed from the analysis is investigated in detail using adjoint-based forecast sensitivity observation impact (FSOI).

 

Daryl Kleist - Scale-dependent localization and weighting in the FV3-GFS Hybrid Data Assimilation Scheme

Title: Scale-dependent localization and weighting in the FV3-GFS Hybrid Data Assimilation Scheme

Authors: Daryl Kleist (NOAA/NWS/NCEP/EMC)

Ting Lei (IMSG @ NOAA/NWS/NCEP/EMC)

Rahul Mahajan (IMSG @ NOAA/NWS/NCEP/EMC)

Cathy Thomas (IMSG @ NOAA/NWS/NCEP/EMC)

Deng-Shun Chen (Taiwan CWB and NCU)

 

Following the successful transition to hybrid 4D EnVar at NCEP for the Global Forecast System (GFS) and Global Data Assimilation System (GDAS), work has begun to improve several aspects of the algorithm. The current hybrid scheme in use for the NCEP GDAS utilizes a single global weighting parameter to prescribe the contributions from the ensemble and static error covariances. Furthermore, the localization applied to the control variable to prescribe the ensemble-based analysis increment is assumed to be Gaussian, with the horizontal localization varying only by vertical level. Several studies have already shown that spectral and scale-dependent localization can be more effective within the context of EnVar (Buehner 2012, Buehner and Shlyaeva 2015).  Here, we will describe an effort toward applying waveband-dependent localization as well as scale-dependent weighting between the static and ensemble contributions within a hybrid assimilation paradigm. A low resolution version of the Next Generation Global Prediction System (NGGPS) based on FV3 will be used for demonstration. 
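
A minimal 1-D illustration of the idea of treating wavebands differently is sketched below (purely schematic, not the GSI/FV3-GFS implementation): ensemble perturbations are split into large- and small-scale parts with a spectral cutoff and given different weights before forming the ensemble covariance; the cutoff and weights are invented for the example.

```python
# Minimal sketch (illustrative): scale-dependent weighting of ensemble perturbations.
import numpy as np

def split_wavebands(field, cutoff_wavenumber):
    """Return (large-scale, small-scale) parts of a periodic 1-D field."""
    spec = np.fft.rfft(field)
    k = np.arange(spec.size)
    large = np.fft.irfft(np.where(k <= cutoff_wavenumber, spec, 0.0), n=field.size)
    small = field - large
    return large, small

def scale_dependent_blend(ens_perts, cutoff=6, w_large=0.8, w_small=0.4):
    """Weight each waveband of the perturbations differently before they are
    used to build the ensemble part of a hybrid covariance."""
    blended = []
    for pert in ens_perts:
        large, small = split_wavebands(pert, cutoff)
        blended.append(np.sqrt(w_large) * large + np.sqrt(w_small) * small)
    return np.array(blended)

nx, n_ens = 128, 20
rng = np.random.default_rng(1)
ens_perts = rng.normal(size=(n_ens, nx))
weighted_perts = scale_dependent_blend(ens_perts)
```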

 

Alexander Kurapov - Variational data assimilation in the US West Coast Ocean Forecast System (WCOFS)

Title: Variational data assimilation in the US West Coast Ocean Forecast System (WCOFS)

Authors: Alexander Kurapov (NOAA/NOS/CSDL)

A. Moore (UCSC)
E. Myers (NOAA/NOS/CSDL)
E. Bayler (NOAA/NESDIS/STAR)

 

The US West Coast Ocean Forecast System is a regional ocean model developed to predict shelf currents, oceanic fronts, coastal sea levels and other variables of interest to NOAA customers in support of navigation, environmental hazard response, search and rescue, fisheries, public health, etc. Forced by the NAM atmospheric forecast fields and HYCOM model open boundary conditions, the system is designed to produce daily updates of 3-day forecasts. It is based on the Regional Ocean Modeling System (ROMS), which includes a 4DVAR data assimilation component. We are testing this with observations of sea level anomaly (SLA) from several satellite altimeters, sea surface temperature (SST) from satellites, and surface currents obtained by a network of land-based high-frequency (HF) radars. Assimilation proceeds in a series of 3-day windows overlapped by 2 days. As a result of assimilation, the forecasts of shelf currents and SST are improved compared to the model without assimilation. Challenges specific to coastal ocean data assimilation are numerous and include, but are not limited to: (A) the choice of a background error covariance (where geostrophic balances break down and wind errors directly influence errors at the surface and the bottom due to Ekman transport dynamics), (B) the interpretation of SLA data in a tide-resolving model (SLA represents relatively slowly changing non-tidal oceanic processes, with SLA << tidal amplitudes), and (C) the limited predictability of internal tidal currents, which may contribute errors of 0.2 m/s at the surface and complicate the model-data match.

 

Takuya Kurihana - Assimilation with faster super observation algorithm for meteorological ‘Big Data’

 

Title: Assimilation with faster super observation algorithm for meteorological ‘Big Data’

Authors: T. Kurihana (University of Tsukuba, Graduate School of Life and Environmental Sciences, Japan)

H. L. Tanaka (University of Tsukuba, Graduate School of Life and Environmental Sciences, Japan)

A major challenge in recent satellite assimilation is how to deal with a massive amount of satellite observation data more efficiently in state-of-the-art data assimilation systems. This denser and more frequent satellite information, provided by polar-orbiting and geostationary satellites, is often called meteorological 'Big Data'. A possible solution is to generate a superobservation (SO) system, but a large computational cost is required to assemble these high-resolution satellite observations onto the nearest gridpoints. Furthermore, peculiar grid coordinates, such as those of the Nonhydrostatic ICosahedral Atmospheric Model (NICAM) or the Consortium for Small-Scale Modeling (COSMO), require more complicated computations to build an SO system. This study proposes a rapid SO creation algorithm applicable to the NICAM icosahedral grid, and produces the NICAM SO system in a dramatically shorter time. With this new SO system, our Local Ensemble Transform Kalman Filter (LETKF) assimilates these meteorological big data. We investigate the impact of the fast SO in the assimilation, and evaluate the computational performance.
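
As a simple illustration of superobbing on a regular latitude-longitude grid (the easy case; the contribution described above is doing this efficiently on the NICAM icosahedral grid), the sketch below bins dense observations into grid cells and averages them; all names and values are illustrative.

```python
# Minimal sketch (illustrative): superobbing by bin-averaging dense observations.
import numpy as np

def superob_regular_grid(obs_lat, obs_lon, obs_val, grid_lat, grid_lon):
    """Average observations falling into each cell of a regular lat-lon grid."""
    ilat = np.clip(np.searchsorted(grid_lat, obs_lat) - 1, 0, len(grid_lat) - 2)
    ilon = np.clip(np.searchsorted(grid_lon, obs_lon) - 1, 0, len(grid_lon) - 2)
    flat = ilat * (len(grid_lon) - 1) + ilon         # one bin index per observation
    nbins = (len(grid_lat) - 1) * (len(grid_lon) - 1)
    counts = np.bincount(flat, minlength=nbins)
    sums = np.bincount(flat, weights=obs_val, minlength=nbins)
    with np.errstate(invalid="ignore"):
        superobs = sums / counts                     # NaN where a cell has no data
    return superobs.reshape(len(grid_lat) - 1, len(grid_lon) - 1), counts

rng = np.random.default_rng(2)
obs_lat, obs_lon = rng.uniform(-90, 90, 100000), rng.uniform(0, 360, 100000)
obs_val = rng.normal(280.0, 5.0, 100000)             # e.g. brightness temperatures
grid_lat, grid_lon = np.linspace(-90, 90, 91), np.linspace(0, 360, 181)
so_field, so_counts = superob_regular_grid(obs_lat, obs_lon, obs_val, grid_lat, grid_lon)
```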

Patrick Laloyaux - The ECMWF weak constraint 4D-Var formulation

Title: The ECMWF weak constraint 4D-Var formulation

 

Authors: Patrick Laloyaux (ECMWF)

Jacky Goddard (ECMWF)

Simon Lang (ECMWF)

Massimo Bonavita (ECMWF)

 

In most operational implementations of four-dimensional variational data assimilation, it is assumed that the model used in the data assimilation process is perfect or, at least, that errors in the model can be neglected when compared to other errors in the system. ECMWF has been developing a weak-constraint 4D-Var formulation where a model-error forcing term is explicitly estimated to take into account model imperfections. This problem is very similar in nature to strong constraint 4D-Var as it is essentially an initial-condition problem with parameter estimation where the additional parameters represent model error.

In November 2016, ECMWF implemented a new version of its forecasting system in which the weak-constraint option of 4D-Var was reactivated, using a model-error forcing term active in the stratosphere above 40 hPa. The model-error covariance matrix is based on statistics generated by special runs of the ensemble prediction system with identical initial conditions but different realisations of model error. The model-error covariance matrix has been specified such that systematic model errors developing over the 12-hour assimilation window are corrected. Interactions between model and background errors, with their respective covariance matrices, will be discussed.
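
For reference, a generic forcing formulation of weak-constraint 4D-Var can be written as below; the notation is schematic and not necessarily identical to the ECMWF implementation.

```latex
J(x_0,\eta) = \tfrac{1}{2}(x_0 - x_b)^{\mathrm T} B^{-1}(x_0 - x_b)
            + \tfrac{1}{2}\sum_{i=0}^{N}\big(y_i - H_i(x_i)\big)^{\mathrm T} R_i^{-1}\big(y_i - H_i(x_i)\big)
            + \tfrac{1}{2}\sum_{i=1}^{N}\eta_i^{\mathrm T} Q_i^{-1}\eta_i,
\qquad x_i = M_i(x_{i-1}) + \eta_i .
```

Here the η_i are the model-error forcing terms penalized by the model-error covariance Q_i; setting η_i = 0 recovers strong-constraint 4D-Var.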

 

Amos Lawless - Treating sample covariances for use in strongly coupled atmosphere-ocean data assimilation

Title: Treating sample covariances for use in strongly coupled atmosphere-ocean data assimilation.

 

Authors: Amos. S. Lawless (University of Reading, U.K.)

Polly J. Smith (University of Reading, U.K.)

Nancy K. Nichols  (University of Reading, U.K.)

 

Covariance information derived from an ensemble can be used to define the a priori atmosphere-ocean forecast error cross covariances required for variational strongly coupled atmosphere-ocean data assimilation. Due to restrictions on sample size, ensemble covariances are routinely rank deficient and/or ill-conditioned and marred by sampling noise; thus they require some level of modification before they can be used in a standard variational assimilation framework. Here, we compare methods for improving the rank and conditioning of multivariate sample error covariance matrices in the context of strongly coupled atmosphere-ocean data assimilation. The first method, reconditioning, alters the matrix eigenvalues directly; this preserves the correlation structures but does not remove sampling noise. We show it is better to recondition the correlation matrix rather than the covariance matrix, as this prevents small but dynamically important modes from being lost. The second method, model state-space localisation via the Schur product, effectively removes sample noise, but can dampen small cross-correlation signals. A combination that exploits the merits of each is found to offer an effective alternative.

 

Reference:

 

Smith, P. J., Lawless, A. S., & Nichols, N. K. (2018). Treating sample covariances for use in strongly coupled atmosphere-ocean data assimilation. Geophysical Research Letters, 45. https://doi.org/10.1002/2017GL075534
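
A minimal sketch of the Schur-product localisation discussed above is given below for a 1-D toy covariance; the exponential taper and all sizes are illustrative stand-ins, not the choices of Smith et al. (2018).

```python
# Minimal sketch (illustrative): Schur (element-wise) localization of a noisy
# sample covariance estimated from a small ensemble.
import numpy as np

def sample_covariance(ensemble):
    perts = ensemble - ensemble.mean(axis=0)
    return perts.T @ perts / (ensemble.shape[0] - 1)

def schur_localize(cov, grid, length_scale):
    """Damp spurious long-range covariances via an element-wise product."""
    dist = np.abs(grid[:, None] - grid[None, :])
    taper = np.exp(-0.5 * (dist / length_scale) ** 2)
    return cov * taper                    # Schur (Hadamard) product

n, n_ens = 100, 20
rng = np.random.default_rng(3)
grid = np.linspace(0.0, 1.0, n)
# "True" covariance with short correlation length, sampled with few members.
true_cov = np.exp(-0.5 * (np.abs(grid[:, None] - grid[None, :]) / 0.05) ** 2) + 1e-8 * np.eye(n)
ensemble = rng.multivariate_normal(np.zeros(n), true_cov, size=n_ens)
raw = sample_covariance(ensemble)         # rank-deficient and noisy
localized = schur_localize(raw, grid, length_scale=0.1)
print(np.linalg.matrix_rank(raw), np.linalg.matrix_rank(localized))
```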

 

Zhijin Li - Some theoretical and practical Issues on multiscale data assimilation for high-resolution models

Title: Some Theoretical and Practical Issues on Multiscale Data Assimilation for High-Resolution Models

 

Author: Zhijin Li   (Jet Propulsion Laboratory, California Institute of Technology)

 

In recent years, the term multiscale data assimilation (MSDA) has been used increasingly in the data assimilation literature. MSDA can be characterized as the estimation of distinct temporal and spatial scales separately, in contrast to traditional DA algorithms that estimate all scales as a whole. The traditional DA algorithms, including three-/four-dimensional variational and Kalman filter/smoother-based algorithms, are based on optimal estimation theory and seek an optimal estimate as the minimizer of a cost function. In the linear framework, the cost function can be decomposed for a set of distinct spatial scales. From the decomposed cost function, we will address a set of MSDA optimality properties and practical issues. Results from idealized problems and operational systems are presented as illustrations.

 

 

 

Sujeong Lim - Sensitivity experiments of the tropical cyclone bogus data assimilation depending on the background error covariance within the hybrid-4DEnVar system

Title: Sensitivity experiments of the tropical cyclone bogus data assimilation depending on the background error covariance within the hybrid-4DEnVar system

Authors: Sujeong Lim (Korea Institute of Atmosphere Prediction System (KIAPS), Seoul, South Korea)

Hyo-Jong Song  (Korea Institute of Atmosphere Prediction System (KIAPS), Seoul, South Korea)

Ji-Hyun Ha  (Korea Institute of Atmosphere Prediction System (KIAPS), Seoul, South Korea)

In-Hyuk Kwon  (Korea Institute of Atmosphere Prediction System (KIAPS), Seoul, South Korea)

Hyun-Jun Han (Korea Institute of Atmosphere Prediction System (KIAPS), Seoul, South Korea)

 

 

Global models have produced improved tropical cyclone (TC) track forecasts over the past decades. However, it is still difficult to forecast TC intensity because of insufficient observations over the oceans and coarse model resolution. Therefore, TC initialization is necessary before the prediction. As the TC initialization process, we assimilate a single bogus observation, the minimum sea level pressure (MinSLP) of the TC. Through the structure of the background error covariance, this single surface pressure observation changes the three-dimensional fields by producing the anti-cyclonic circulation and warm core, and improves TC forecast skill.
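
The way a single bogus observation is spread by the background error covariance can be summarized by the generic analysis-increment formula below (schematic notation, not the specific KIAPS implementation):

```latex
\delta x \;=\; B H^{\mathrm T}\,\big(H B H^{\mathrm T} + R\big)^{-1}\,\big(y - H(x_b)\big).
```

For a single MinSLP observation, H B H^T + R is a scalar, so the three-dimensional increment is proportional to the column of B H^T, i.e. the background error covariance between surface pressure at the TC centre and every other model variable; this is what produces the circulation and warm-core increments described above.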

 

In this study, we specifically examine the impact of the background error covariance (BEC) on TC initialization. Since the hybrid system combines climatological and ensemble BECs, we can construct a diverse set of BEC combinations. For the ensemble BEC, we vary its weight to identify how the ensemble helps TC initialization. For the climatological BEC, either the nonlinear balance equation or a regressed wind-mass balance can be selected for the BEC structure. Through these sensitivity experiments, we can determine which combination of BECs is best for TC bogus data assimilation within the hybrid system.

 

Nora Loose - Can existing ocean observing systems effectively constrain subsurface temperature near Greenland's outlet glaciers? - Insights from comprehensive uncertainty quantification in oceanographic inverse problems

Title: Can existing ocean observing systems effectively constrain subsurface temperature near Greenland's outlet glaciers? - Insights from comprehensive uncertainty quantification in oceanographic inverse problems

 

Authors: Nora Loose (Department of Earth Science, University of Bergen (Norway), Bjerknes Centre for Climate Research (Norway))

Patrick Heimbach  (Institute for Computational Engineering and Sciences and Jackson School of Geosciences, The University of Texas at Austin (USA), Department for Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology (USA))

Kerim H. Nisancioglu  (Department of Earth Science, University of Bergen (Norway), Bjerknes Centre for Climate Research (Norway))

 

 

The interaction of Greenland's outlet glaciers with warm ocean waters has been suggested as a dominant trigger for the glaciers' recent retreat and acceleration. It is logistically challenging to directly measure oceanic heat transport to the ice margin, but existing remote observing systems have high information potential because large-scale ocean dynamics can deliver warm subsurface waters to the ice front. An ideal framework to assess the value of ocean observing systems is a nonlinear oceanographic inverse problem, where ocean observations are connected to dynamical principles through a numerical model in an optimal way. Here, we quantify the information value, redundancy and complementarity of existing ocean observing systems for determining subsurface temperature near two of Greenland's outlet glaciers, by means of formal uncertainty quantification in large-scale oceanographic inverse problems.

 

To quantify uncertainties and observational information value, we embed the deterministic oceanographic inverse problem in a Bayesian context, where the inverse Hessian of the regularized data-model misfit cost function becomes the error covariance matrix of the Gaussianized posterior distribution. While calculating and inverting the high-dimensional Hessian matrix is computationally intractable, we exploit the fact that observations are usually only informative about a low-dimensional subspace of the high-dimensional space of uncertain model parameters to be estimated. This subspace is spanned by the eigenvectors of the Hessian, which we extract by means of the adjoint of the MIT general circulation model within the inverse modeling framework of the Estimating the Circulation and Climate of the Ocean version 4 (ECCO v4) project. We find that volume and heat transport across the Denmark Strait provide useful and complementary sources of information for determining subsurface temperature near Helheim Glacier, a marine-terminating glacier in southeast Greenland. In contrast, the combined information value of these observations is minimal for subsurface temperature near Jakobshavn Isbræ on the Greenlandic west coast, which in turn can be constrained to a significant degree by observing volume transport across the Davis Strait. We furthermore quantify the required accuracy of ocean observing systems to effectively constrain subsurface temperature near the two glaciers. While this work is limited by the Gaussian assumption and the relatively coarse resolution of the ECCO v4 setup, it is a first step to exploit powerful tools of computational science for conducting rigorous uncertainty quantification in large-scale oceanographic inverse modeling.

Carina Lopes - On the use of Landsat imagery for long-term coastal wetland monitoring

Title: On the use of Landsat imagery for long-term coastal wetland monitoring

 

Authors: Lopes, C.L. (CESAM – Centre for Environmental and Marine Studies, Physics Department, University of Aveiro; MARE - Marine and Environmental Sciences Centre, Faculty of Sciences, University of Lisbon)

Mendes, R.  (CESAM – Centre for Environmental and Marine Studies, Physics Department, University of Aveiro; CIIMAR - Interdisciplinary Centre of Marine and Environmental Research, University of Porto)

Caçador, I. (MARE - Marine and Environmental Sciences Centre, Faculty of Sciences, University of Lisbon)

Dias, J.M.  (CESAM – Centre for Environmental and Marine Studies, Physics Department, University of Aveiro)

 

Landsat represents the world's longest continuous collection of space-based moderate-resolution remote sensing data dedicated to land observation and provides a unique resource for monitoring land cover changes. Previous work has demonstrated its potential for identifying vegetation changes in estuarine environments, highlighting that the Landsat archive constitutes a powerful tool to support the conservation and management of these highly valuable ecosystems. In this context, the main aim of this work is to detect wetland changes from 1984 to the present in the Ria de Aveiro coastal lagoon through processing of the Landsat dataset. Surface reflectance data were downloaded, and then the Normalized Difference Water Index (NDWI), Normalized Difference Vegetation Index (NDVI), Global Environmental Monitoring Index (GEMI), Atmospheric and Soil Vegetation Index (ASVI) and Modified Soil Adjusted Vegetation Index (MSAVI) were computed for the study region. The NDWI was used to distinguish land from water, while vegetation indices such as NDVI, GEMI, ASVI and MSAVI were used to identify potential spatio-temporal vegetation changes within the lagoon. Results show that the flooded lagoon area is essentially tidally driven, ranging between 30 and 80 km². Also, the variability of the flooded area before 2005 tends to be lower than in the following years, indicating that the lagoon currently presents larger flooded areas during high tide and smaller flooded areas during low tide. Results further highlight that these modifications possibly induced changes in halophyte vegetation; indeed, a decreasing trend in all vegetation indices until 2005 was observed. Finally, this work reinforces the importance and potential of the Landsat archive for monitoring coastal wetlands, supporting management strategies for their conservation and restoration.
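
As an illustration of the index computations involved (the mapping of green/red/NIR to band numbers differs between Landsat sensors, so the arrays here are synthetic placeholders rather than real scenes), a minimal sketch:

```python
# Minimal sketch (illustrative): NDVI/NDWI from surface reflectance arrays.
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters): (Green - NIR) / (Green + NIR)."""
    return (green - nir) / (green + nir)

def classify_water(green, nir, threshold=0.0):
    """Flag pixels as flooded where NDWI exceeds a chosen threshold."""
    return ndwi(green, nir) > threshold

# Synthetic reflectance arrays standing in for one Landsat scene.
rng = np.random.default_rng(4)
green = rng.uniform(0.02, 0.25, size=(100, 100))
red = rng.uniform(0.02, 0.30, size=(100, 100))
nir = rng.uniform(0.05, 0.45, size=(100, 100))

water_mask = classify_water(green, nir)
flooded_area_km2 = water_mask.sum() * (30.0 * 30.0) / 1e6   # 30 m Landsat pixels
mean_ndvi_land = ndvi(nir, red)[~water_mask].mean()         # vegetation over land pixels
```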

Andrew Lorenc - A comparison of hybrid variational data assimilation methods in the Met Office global NWP system

Title: A comparison of hybrid variational data assimilation methods in the Met Office global NWP system

 

Authors: Andrew C Lorenc (Met Office)

Mohamed Jardak  (Met Office)

 

Variational data assimilation methods are reviewed and compared in the Met Office global numerical weather prediction system. The system supports hybrid background error covariances, which are a weighted combination of modelled static "climatological" covariances with covariances calculated from a current ensemble of forecasts, in both 3-dimensional and 4-dimensional methods. For the latter, we compare the use of linear and adjoint models (hybrid-4DVar) with the direct use of ensemble forecast trajectories (hybrid-4DEnVar). Earlier studies had shown that hybrid-4DVar outperforms hybrid-4DEnVar, and 4DVar outperforms 3DVar. Improvements in the processing of ensemble covariances and computer enhancements mean we are now able to explore these comparisons for the full range of hybrid weights. We find that, using our operational 44-member ensemble, the static covariance is still beneficial in hybrid-4DVar, so that it significantly outperforms hybrid-4DEnVar. In schemes not using linear and adjoint models, the static covariance is less beneficial. It is shown that the time-propagated static covariance is the main cause of the better performance of 4DVar; when using pure ensemble covariances, 4DVar and 4DEnVar show similar skill. These results are consistent with nonlinear dynamics theory about assimilation in the unstable sub-space.
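
The hybrid covariance referred to above is commonly written in the generic form below (notation schematic; the weighting and localization details vary between systems):

```latex
B_{\mathrm{hybrid}} \;=\; \beta_c^{\,2}\, B_c \;+\; \beta_e^{\,2}\, \big(C \circ P_e\big),
```

where B_c is the static climatological covariance, P_e the sample covariance of the current ensemble, C a localization matrix applied via the Schur product, and β_c², β_e² the hybrid weights (often constrained to sum to one).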

 

 

Rahul Mahajan - Forecast Sensitivity and Observation Impact (FSOI) Inter-comparison Experiment

Title: Forecast Sensitivity and Observation Impact (FSOI) Inter-comparison Experiment

 

Authors: Rahul Mahajan (IMSG, NCEP/Environmental Modeling Center)

Thomas Auligné (Joint Center for Satellite Data Assimilation)

Ron Gelaro (NASA/GMAO)

Rolf Langland (Naval Research Lab Monterey, Marine Meteorology)

Forecast Sensitivity and Observation Impact (FSOI) techniques provide a practical means to estimate the forecast impact of all assimilated observations in NWP systems. In this presentation, we describe direct comparisons of FSOI quantities between different NWP systems. A common “baseline” set of FSOI experimental parameters is applied for the time period December-February (DJF) 2014/2015. An adjoint-based FSOI approach (Langland and Baker, 2004) is applied for the NWP systems at NASA/GMAO, the Naval Research Laboratory (NRL) and the UK Met Office (UKMet), whereas an ensemble-based FSOI approach (Kalnay et al., 2012) is applied at the National Centers for Environmental Prediction (NCEP). The Japan Meteorological Agency (JMA) applies both the adjoint-based and ensemble-based FSOI capabilities, enabling a direct comparison between the two techniques.
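
One common adjoint-based approximation of the observation impact (after Langland and Baker, 2004) is reproduced below for reference; the notation is generic rather than specific to any of the participating systems:

```latex
\delta e \;\approx\; \big(y - H(x_b)\big)^{\mathrm T} K^{\mathrm T}
\Big[\, M_a^{\mathrm T} C\,(x^a_f - x_t) + M_b^{\mathrm T} C\,(x^b_f - x_t) \Big],
```

where K is the gain matrix, M_a^T and M_b^T are adjoint models integrated along the analysis and background trajectories, C defines the forecast error norm (e.g. a total energy norm), x_f^a and x_f^b are forecasts from the analysis and background, and x_t is the verifying state.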

 Given the aforementioned experiment, we plan to describe the differences in aggregated FSOI quantities between NWP systems for the relevant observing systems. Additionally, NWP system inter-comparisons of FSOI quantities for common observation subsets within the 3-month period will be presented. The comparisons of observation subsets will provide insight as to the extent to which the aggregate results are representative in both space and time. This is an extension to the previous work done by Gelaro et al. (2010) on the THORPEX Observation Intercomparison Experiment.

 

Rohit Mangla - Evaluation of microwave radiances of GPM/GMI for the all-sky assimilation in RTTOV framework

Title: Evaluation of microwave radiances of GPM/GMI for the all-sky assimilation in RTTOV framework

 

Authors: Mangla, R (Indian Institute of Technology, Bombay, India)

J. Indu (Indian Institute of Technology, Bombay, India)

 

This study investigates the statistical characteristics of all-sky GPM/GMI radiances at a low-tropospheric sounding channel (183±7 GHz, V). Simulations at 183±7 GHz are challenged by uncertainty in the shape and size distribution of frozen hydrometeors, which produces unrealistic scattering. We also evaluate the sensitivity of non-spherical Discrete Dipole Approximation (DDA) shapes for reproducing three cyclones (Hudhud, Vardah and Kyant) over the Bay of Bengal. Simulations were carried out with four DDA shapes (thin plate, block column, 6-bullet rosette and sector snowflake) from the scattering package (RTTOV-SCATT) of the RTTOV radiative transfer model (Radiative Transfer for the TIROS Operational Vertical Sounder). The input data used in RTTOV-SCATT include vertical hydrometeor profiles (cloud water, ice, snow and rain), humidity and surface fluxes. In addition, the first-guess simulations from the Weather Research and Forecasting (WRF) model were performed at 15 km resolution using ERA-Interim reanalysis datasets. Quantification of the observed minus first guess (FG) departures shows that the majority of samples (about 80%) were least affected by clouds (within ±20 K). To study the FG errors with reference to cloud amount, a symmetric error model was used. The normalized probability density function (PDF) of the FG departures shows a higher peak and smaller standard deviation than a Gaussian curve due to highly spatially correlated errors. Positive FG departure samples, also known as the cold-sector bias, were eliminated during quality control, while negative FG departure samples associated with deep convection remained unaffected. The goodness-of-fit test, h-statistics and skewness of the observed and simulated distributions show optimum results for the thin-plate shape in all the convective events. These results illustrate the potential to integrate GMI sensor data within a WRF data assimilation system.

Keywords: GPM/GMI, all-sky radiance, assimilation, RTTOV-SCATT, WRF

 

 

Sebastien Massart - Two flavours of hybrid background error covariances for ECMWF 4D Var analysis

Title: Two flavours of hybrid background error covariances for ECMWF 4D Var analysis

 

Author: S. Massart  (European Centre for Medium-Range Weather Forecasts)

 

Deterministic weather forecast variational analyses rely on pre-computed background error covariances. These covariances usually combine a static part and an ensemble-based part. The ensemble-based part aims at bringing flow-dependent information while the static part acts as a regularisation of the possibly noisy information from the ensemble-based part.

 

This is the approach chosen at the European Centre for Medium-Range Weather Forecasts (ECMWF) for the operational 4D Var analysis. The information needed to model the ensemble-based background error covariances is derived from an ensemble of data assimilations (EDA) sharing similar characteristics with the deterministic 4D Var but run at a lower resolution. Members of the EDA of the day are combined with random EDA members from the past that represent the static/climatological part. The background error covariances are finally built in wavelet space from these combined members.

 

We are investigating a different approach to combining static and flow-dependent information in the background error covariances for the deterministic 4D Var analysis. The hybrid background error covariance matrix (BECM) is chosen to be a weighted sum of a static BECM and an EDA-based BECM. The static BECM is based on EDA members of the past using the wavelet approach. The EDA-based BECM is computed using the EDA members of the day combined with a localisation function in order to reduce the noise. This means that the flow-dependent information from the EDA members of the day is no longer combined with climatological information in wavelet space, as is done presently. The results of this new approach will be presented. First we will assess the influence of the weights given to the static and ensemble-based parts of the hybrid BECM. Then we will compare the analysis using the new hybrid BECM (computed with an optimal weight) with an analysis using the current configuration of the BECM.

 

Richard Menard - Ensemble variance loss in transport models and its implication for 4D-Var

 

Title: Ensemble variance loss in transport models and its implication for 4D-Var

Authors: Richard Menard (Air Quality Research Division, Environment and Climate Change Canada)

Sergey Skachko

 

It has been argued since the early development of four-dimensional data assimilation that the propagation of error covariances is an essential component of truly optimal data assimilation schemes. Yet the accuracy of the error covariance propagation with numerical models has rarely been examined or challenged. Indeed, because of moment closure issues, analytical solutions for the propagation of error covariances exist only in rare cases, such as with linear Gaussian dynamics, which turn out to be applicable to chemical transport problems. This study examines the convergence of the numerical simulation of error covariances to an exact solution of the propagation of error covariance as a function of model resolution, time step, correlation length scales and ensemble size. In this work we use a linear 3D chemical transport model from the BASCOE system to generate ensemble-based error covariances and compare them with the analytical solution of the corresponding PDE. The results show a significant loss of variance in the ensembles compared with the solution of the PDE. The variance loss is a numerical artefact that we can understand by using a Liouville evolution equation in phase space. Since the Kalman smoother and 4D-Var are equivalent, we will also address the question of how this variance loss manifests itself in 4D-Var.

 

Benjamin Ménétrier - The Normalized Interpolated Convolution on an Adaptive Subgrid (NICAS) method, a new implementation of localization for EnVar applications

Title: The Normalized Interpolated Convolution on an Adaptive Subgrid (NICAS) method, a new implementation of localization for EnVar applications

 

Authors : Benjamin Ménétrier (Météo-France/CNRS, CNRM)

Etienne Arbogast (Météo-France/CNRS, CNRM)

Loïk Berre (Météo-France/CNRS, CNRM)

Yannick Trémolet (JCSDA)

 

Localization is a key aspect of ensemble data assimilation. Its purpose is to reduce the sampling noise affecting covariances estimated with ensembles of limited size. In the Ensemble-Variational (EnVar) methods, localization is implemented as a smoother of a grid-point field, which is applied once for each ensemble member and for each iteration of the minimization process. Thus, applying the localization operator must be fast enough, especially for operational applications. Moreover, the localization operator has to be normalized (i.e. with diagonal elements equal to one), which is not straightforward for heterogeneous localization length-scales, irregular grids or domains with complex boundaries (e.g. ocean, sea-ice). Depending on the model grid, several usual implementations are available, either based on spectral/wavelet transforms, recursive filters or diffusion methods (explicit or implicit). The Normalized Interpolated Convolution on an Adaptive Subgrid (NICAS) method is a new implementation of the localization operator based on an explicit convolution. To make it affordable for high-dimensional systems, the convolution function is compactly supported and applied on a reduced grid. The reduced grid sampling depends on the local length-scale and on the desired accuracy. Thus, NICAS can handle heterogeneous localization length-scales and complex boundaries, and is exactly normalized. Besides, it has a generic implementation that can be used with any grid type, structured or unstructured. This method is consistently coupled with the objective localization diagnostic developed in Ménétrier et al. (2015a,b) and extended by Ménétrier and Auligné (2015) to diagnose consistent hybridization weights for EnVar applications. An open-source code implementing these methods has been released. It is fully interfaced with the Object-Oriented Prediction System (OOPS) developed at ECMWF, Météo-France and JCSDA. Tests are underway, showing promising results for both atmospheric and oceanic EnVar systems.

 

Yann Michel - Block methods for solving an ensemble of data assimilations

 

Title: Block methods for solving an ensemble of data assimilations

Authors: F. Mercier

Y. Michel (Météo France)
P. Jolivet
S. Gurol
T. Montmerle

 

In numerical weather prediction systems, the initialization of the state is made through data assimilation, which determines the best initial state of the atmosphere using notably a background state (a previous short-range forecast) and a set of observations. This requires an accurate representation of background error statistics, which can be estimated by running Ensembles of variational Data Assimilations (EDA). An EDA consists of a set of data assimilation experiments with perturbed backgrounds and observations, and also allows the initialization of ensemble prediction systems. Running an EDA leads to the minimization of a set of cost functions. However, these systems have very large dimensions (a state vector size around 10⁸ together with 10⁴ – 10⁵ assimilated observations for the limited-area model of Météo France, AROME-France), so the computational cost of an EDA generally limits the ensemble size.

We propose a new class of algorithms for speeding up the minimizations of an EDA. It consists of using block Krylov methods to perform the minimization for all members of the ensemble simultaneously, instead of performing each minimization separately. We have developed preconditioned block versions of the Full Orthogonalization Method both in primal and in dual space. The latter works in observation space, which is usually of smaller dimension than the state space, giving an advantage in terms of memory requirements and computational cost. These developments have been implemented in the Object-Oriented Prediction System (OOPS), a framework for data assimilation implementations developed by the European Centre for Medium-Range Weather Forecasts, Météo France and their partners. Parallelization strategies have also been developed for accelerating the minimization and limiting the amount of communication.

These algorithms have been applied to the EDA system of AROME-France, both in its standard version (1 – 25 members) and in an extended version simulating future instrumental and computational developments (1 – 75 members, 10⁵ – 10⁶ observations). The experiments performed show that the number of iterations needed to converge is drastically reduced when using the block Krylov approaches, with a further relative reduction as the condition number of the Hessian of the problem increases. Moreover, working in dual space reduces the computational time of the minimization by a factor of 1.5 – 3 (with 25 members) compared to non-block methods, making our approach attractive for operational use.

Andrew Moore - Reduced-rank array modes of the California Current ocean observing system

Title: Reduced-rank array modes of the California Current ocean observing system

 

Authors: Andrew Moore (Dept. of Ocean Sciences, University of California Santa Cruz, USA)

Hernan Arango (Dept. of Marine and Coastal Science, Rutgers University, USA)

Christopher Edwards (Dept. of Ocean Sciences, University of California Santa Cruz, USA)

 

A reduced-rank formulation of the array modes of an observing system is presented that spans the sub-space explored by a 4D-Var data assimilation system. Like the array modes, the reduced-rank array modes depend only on the observation locations and are independent of the measurement values. The array modes are closely related to the degrees of freedom of the observing system, and provide a quantitative measure of the degree to which the observations span the model control space, thus providing information about the efficacy of the observing system. They also yield a useful stopping criterion for the iterative 4D-Var procedure. These ideas are explored using a 31-year sequence of historical 4D-Var analyses of the California Current system using ROMS.

 

Michael Morgan  - Using adjoint-informed optimal initial condition perturbations  to study tropical cyclone intensity change.

Title: Using adjoint-informed optimal initial condition perturbations  to study tropical cyclone intensity change.

Authors: Michael C. Morgan  (University of Wisconsin –Madison)

Zhaoxiangrui He  (University of Wisconsin –Madison)

Tropical cyclone intensity change remains a significant forecast challenge due to model deficiencies and poor initial conditions. Hurricane Irma (2017) posed a significant forecast challenge as it rapidly intensified in the central Atlantic prior to landfall in the northeastern Caribbean. This presentation will explore sensitivities of tropical cyclone intensity using the WRF ARW adjoint and forward models. Adjoint-informed initial condition perturbations are created to intensify Irma's life-cycle, with the goal of exploring the possible changes to physical processes that would allow the cyclone to deepen more rapidly. Of particular interest are whether the perturbations are realistic in amplitude (comparable to “typical” analysis errors) and what physical mechanisms are supporting the perturbation growth. Related to intensity change is the additional question of “why do few tropical cyclones reach their maximum potential intensity?” Adjoint-informed optimal initial perturbations are also created to explore this issue, and their evolution is diagnosed. In addition, sensitivities to kinematic properties of simulated two-dimensional flows (vorticity, divergence, shearing and stretching deformation) are calculated using a variational minimization approach and applied in this case study of Hurricane Irma (2017).

Dimitri Mottet - Interaction between ensemble filter/smoother and model dynamics for stiff ODEs

Title: Interaction between ensemble filter/smoother and model dynamics for stiff ODEs

 

Authors: Jean-Philippe Argaud (EDF R&D Paris-Saclay, Palaiseau, France)

Serge Gratton  (Université de Toulouse, INP, IRIT, Toulouse, France)

Dimitri Mottet (EDF R&D Paris-Saclay, Palaiseau, France, Université de Toulouse, INP, IRIT, Toulouse, France)

Ehouarn Simon  (Université de Toulouse, INP, IRIT, Toulouse, France)

 

 

Stiff systems are commonly found in the description of physical processes. The implicit methods commonly used for their numerical integration are often of the multi-step category, the number of steps used being called the order of the method. One source of stiffness is the presence of variables within the physical process whose time constants differ by several orders of magnitude. This leads to numerical integrators based on time-step refinement strategies and variable-order implicit multi-step integration methods. Time-step refinement and order variation strategies are based on error indicators, determined using the current iterate for one integration step.

 

In a data assimilation context, the current iterate often represents the system's state. Thus, the assimilation step interacts with the integration by updating the state with observations, so that the model's and the filter's dynamics become closely intertwined. Therefore, our interest is to gain better knowledge of this interaction and its impact on each of these dynamics, so as to improve the quality of the data assimilation process for stiff systems.

 

For this, emblematic or simplified scenarios, where observations and/or integration could be problematic, are devised to illustrate the issues. The performance of ensemble-based Kalman filters and smoothers is assessed with twin experiments on stiff problems. Several experimental configurations involving observation availability, frequency and error, as well as model characteristics (dynamics, non-linearities), are investigated. This enables us to provide realistic insight into the behaviour of data assimilation in various observation and model configurations for ensemble algorithms. It opens the way, in the future, to better quantification of this interaction through the introduction of new useful indicators.

 

 

Nancy Nichols - Incorporating Correlated Observation Errors in Variational Data Assimilation

 

Title: Incorporating Correlated Observation Errors in Variational Data Assimilation

 

Authors:  N.K. Nichols   (University of Reading, UK)

J.M. Tabeart   (University of Reading, UK)

 S.L. Dance   (University of Reading, UK)

A.S. Lawless   (University of Reading, UK)

 J.A Waller   (University of Reading, UK)

S. Migliorini (Met Office, UK)

F. Smith   (Met Office, UK)

S.P. Ballard    (Met Office, UK)

 

 

With the development of convection-permitting numerical weather prediction, the efficient use of high resolution observations in data assimilation is becoming increasingly important.  Although the diagnosis of observation error statistics is difficult, idealized and operational studies have shown that a better treatment of observation error correlations gives improved forecast skill.  Here we investigate the incorporation of correlated observation errors in a variational system and establish that the computational work needed to solve the assimilation problem increases as:  the observations become more accurate;  the observation spacing decreases;  the prior (background) becomes less accurate;  the prior error correlation length scales increase;  and the observation error covariance matrix becomes ill-conditioned.   In particular we show that the rate of convergence of the assimilation scheme depends on the minimum eigenvalue of the observation error correlation matrix.  To reduce operational costs of the assimilation, the error correlation matrix is reconditioned by altering  its eigenstructure.  We implement the observation error correlations in a 1D-Var variational system used operationally at the Met Office for satellite retrievals.  Experiments with IASI data demonstrate that incorporating the reconditioned observation error correlation matrices in the assimilation improves convergence and has an impact on humidity retrievals but has minimal effect on temperature retrievals. 
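
A minimal sketch of the eigenvalue-floor style of reconditioning mentioned above is given below; the correlation model, matrix size and target condition number are arbitrary illustrations, not the operational 1D-Var settings.

```python
# Minimal sketch (illustrative): reconditioning a correlation matrix by raising
# its smallest eigenvalues so the condition number does not exceed a target.
import numpy as np

def recondition_min_eig(matrix, target_condition):
    """Floor the eigenvalues so that lambda_max / lambda_min <= target_condition."""
    eigval, eigvec = np.linalg.eigh(matrix)
    floor = eigval.max() / target_condition
    eigval_new = np.maximum(eigval, floor)
    return eigvec @ np.diag(eigval_new) @ eigvec.T

# Example: a correlation matrix for densely spaced observations can be severely
# ill-conditioned; reconditioning improves the convergence of the minimization.
n = 200
x = np.linspace(0.0, 1.0, n)
corr = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)   # exponential correlation model
print("condition number before:", np.linalg.cond(corr))
corr_recond = recondition_min_eig(corr, target_condition=100.0)
print("condition number after: ", np.linalg.cond(corr_recond))
```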

 

 

 

Craig Oswald - Understanding the sensitivity of cyclogenesis using adjoint analysis

Title: Understanding the Sensitivity of Cyclogenesis Using Adjoint Analysis

 

Authors: Craig Oswald (Dept. of Atmospheric and Oceanic Sciences, University of Wisconsin-Madison)

Michael Morgan (Dept. of Atmospheric and Oceanic Sciences, University of Wisconsin-Madison)

 

While the dynamical processes associated with extratropical cyclogenesis are well known, it remains unclear what effects certain perturbations have on cyclone development. From a modeling perspective, these effects are important to understand, as slight changes in the pertinent dynamics can drastically change a NWP model forecast. Previous work has tried to address this question by arbitrarily making a slight perturbation to a dynamical initial condition in a NWP forecast and examining the outcome compared to some control. But through the use of adjoint modeling, a set of optimal perturbations can be calculated and then used to perturb a cyclone simulation's initial conditions to determine how the modeled cyclone is affected. These optimal perturbations are designed to perturb the initial conditions of a cyclone simulation in the specific regions where the initial cyclone state is most sensitive. Once the perturbation has been inserted in the model, synoptic diagnosis of the deviation of the forecast trajectory from the control can be used to examine the dynamical impacts of the perturbation on cyclone development. A case study of an explosively deepening October 2010 North American cyclone is conducted using adjoint sensitivities in conjunction with other synoptic diagnostics. An examination of the changes produced in a 48-hr WRF-ARW simulation by adjoint-informed perturbations made to the initial model state at the jet level, mid-level, and near the surface is conducted. Wind perturbations are made at both the jet level and mid-level, and temperature perturbations are made near the surface in the most sensitive regions. This allows for an examination of both the barotropic and baroclinic development processes of the perturbed cyclone life-cycle, to achieve a better understanding of how different dynamical perturbations at different levels in the troposphere (such as upper-level PV, for example) develop from the initial state perturbations and how they impact the life-cycle of a rapidly developing cyclone. The results inform not only the type, but also the location, of initial state perturbations most important in producing a cyclone of differing intensity.

Olivier Pannekoucke - Parametric Kalman filter : toward an alternative to the EnKF ?

 

Title: Parametric Kalman filter : toward an alternative to the EnKF ?

Authors: O. Pannekoucke (CNRM UMR 3589, CERFACS, INPT-ENM, Météo-France, Toulouse, France.)

S. Ricci (CECI UMR 5318, CERFACS, Toulouse, France.)
R. Ménard (ARQI/Air Quality Research Division, Environment and Climate Change Canada, Dorval (Québec), Canada.)
M. Bocquet (CEREA, joint laboratory École des Ponts ParisTech and EDF R&D, Université Paris-Est, Champs-sur-Marne, France.)
O. Thual (INPT, CNRS, IMFT, Université de Toulouse, Toulouse, France)

 

The ensemble Kalman filter (EnKF) was designed as a practical implementation of the extended Kalman filter. In particular, the EnKF is able to propagate huge covariance matrices in time thanks to the sampling of the initial distribution and its update during the analysis step. In this contribution we introduce and explore an alternative to the EnKF for implementing the extended Kalman filter: the parametric Kalman filter (PKF). The basic idea is to approximate a covariance matrix by a parametric formulation (in place of the ensemble), and then to design the evolution of these parameters along the tangent-linear propagation and the analysis update. In the forecast step, the numerical cost of the PKF is equivalent to the cost of a single member of an EnKF. We present the formalism for the linear advection-diffusion equation, with applications in chemical transport modelling, and then for the one-dimensional nonlinear advection-diffusion dynamics (Burgers equation). For these illustrations, the parametric model considered is the covariance model based on the diffusion equation, where the parametric Kalman filter describes the dynamics of the forecast error variance and of the local diffusion tensor.

 

Seon Ki Park - Assimilating synthetic all-sky radiances of GEMS using a coupled meteorology-chemistry prediction and data assimilation system

Title: Assimilating Synthetic All-Sky Radiances of GEMS Using a Coupled Meteorology-Chemistry Prediction and Data Assimilation System

 

Authors: Seon Ki Park  (Department of Environmental Science and Engineering, Department of Climate and Energy Systems Engineering, and  Center for Climate/Environment Change Prediction Research, Ewha Womans University, Seoul, Korea)

Ebony Lee, (Department of Climate and Energy Systems Engineering, and Center for Climate/Environment Change Prediction Research, Ewha Womans University, Seoul, Korea)

Milija Zupanski  (CIRA, Colorado State University, Fort Collins, Colorado, USA)

 

The Geostationary Environmental Monitoring Spectrometer (GEMS), a UV-visible scanning spectrometer, is planned to be launched in 2019 by the Korean government. In this study, we investigate the potential impact of assimilating all-sky radiance from GEMS on air quality prediction over the Korean Peninsula and adjacent areas, following the previous study using clear-sky radiances. The nature run and forecast run are performed using the Weather Research and Forecasting model with chemistry (WRF-Chem), a fully coupled meteorology-chemistry model. We employ an ensemble-based data assimilation method, called the Maximum Likelihood Ensemble Filter (MLEF). As the GEMS observations are not available yet, we generated synthetic radiances from the nature run and a radiative transfer model (VLIDORT). We will discuss our results based on the comparison between assimilation of clear-sky radiance and all-sky radiance, focusing on some of the GEMS target air pollutants (e.g., O3, NO2, SO2, HCHO, aerosol, etc.).

 

 

 

Ivo Pasmans - Ensemble-variational data assimilation in the coastal ocean circulation model off Oregon-Washington (at the US West Coast)

Title: Ensemble-variational data assimilation in the coastal ocean circulation model off Oregon-Washington (at the US West Coast)

Authors:  Ivo Pasmans (Oregon State University)

Alexander Kurapov (Oregon State University / NOAA)

Abstract:

4DVAR implementations for ocean forecasting traditionally proceed in a series of relatively short time windows and assume that the covariance of background errors in the initial conditions is static in time. Rapidly changing background conditions in the coastal ocean can challenge this assumption. For example, ocean shelf dynamics along the Oregon (OR) and Washington (WA) coasts in the US Pacific region are influenced in summer by the wind-driven upwelling and fresh water discharge from the Columbia River. The hydrographic conditions and the shape and location of the river plume change on 2-10 day time scales in conjunction with the winds. To capture this variability in the background error covariance, we have implemented ensemble-variational (E4DVAR) data assimilation in the OR-WA coastal ocean forecast system. In this system the initial conditions at the beginning of each 3-day window are corrected by combining the previous 3-day model forecast from a 2-km ROMS (Regional Ocean Modeling System) model with observations of GOES sea-surface temperatures, high-frequency radar surface current observations and satellite altimetry using 4DVAR. For the tangent linear and adjoint parts of the 4DVAR algorithm the system uses codes developed in-house. In the E4DVAR system the background error covariance is estimated from a 39-member ensemble. The members of this ensemble are generated by running the system using different wind fields and perturbed observations. Several innovations had to be introduced to make it practically feasible to run the new E4DVAR system. These include a newly developed localization method, which deploys a Monte Carlo approximation to rapidly estimate the background error covariance from a large ensemble of localized ensemble members, a parallel conjugate-gradient method to speed up minimization of the cost function, and an additional penalty term in the cost function that constrains salinity corrections on different spatial scales. The latter was necessary to prevent large E4DVAR corrections to the total plume volume. These corrections occasionally occur due to the combination of background errors in the sea-surface temperature field with the large salinity-temperature background error covariances found in the vicinity of the plume. Results show that the new system provides better forecasts for the subsurface temperature and gives a more accurate representation of the temperature-salinity relationship. Furthermore, the E4DVAR system produces more accurate forecasts for sea-surface velocities than the system using the static background error covariance. However, comparison with buoy salinity measurements shows that on local scales the new method does not conclusively yield better predictions for the position of the plume front than the 4DVAR method with a static background error covariance or a model without data assimilation.

 

 

 

 

Tim Payne - Rapid update cycling with delayed observations

Title: Rapid update cycling with delayed observations

 

Author: T J Payne (Met Office, Exeter, UK)

 

In this talk we examine the fundamental issues associated with the cycling of data assimilation and prediction in the case where observations are received after a delay, but we seek to assimilate them immediately on receipt, or within a short time of receipt. We obtain the optimal solution to this problem in the linear and non-linear cases, and explore the relation of this solution to simplified strategies which are adaptations of contemporary methods for large-scale data assimilation. We also discuss the motivations for this type of cycling, practical considerations, and some results, in the context of operational numerical weather prediction.

 

Reference: Tellus A (2017) http://www.tandfonline.com/doi/abs/10.1080/16000870.2017.1409061

 

Asia Pelc - Accelerating local ensemble tangent linear models with order reduction

Title: Accelerating Local Ensemble Tangent Linear Models with order reduction

Authors:  Joanna S. Pelc (Delft University of Technology, Delft, The Netherlands)
Craig H. Bishop (Marine Meteorology Division, Naval Research Laboratory, Monterey, CA)


A leading data assimilation (DA) technique in meteorology is 4D-Var, which relies on the Tangent Linear Model (TLM) of the non-linear model and its adjoint. The difficulty of building and maintaining traditional TLMs and adjoints of coupled ocean-wave-atmosphere (etc.) models is daunting. On the other hand, coupled model ensemble forecasts are readily available. Here, we show how a previously described ensemble-based method for generating TLMs can be accelerated via a rank reduction method previously used in model-reduced 4D-Var. The resulting Local Ensemble TLM Accelerated with order Reduction (LETLM-R) features a low-rank projection of some local influence region containing all the variables that could possibly influence the time evolution of some target variable(s) near the center of the region. We prove that high accuracy is guaranteed provided that (i) the ensemble perturbations are governed by linear dynamics, and (ii) the number of ensemble members exceeds the number of vectors required to describe the climatological vector sub-space of LETLMs. The LETLM-R approach is faster than the LETLM approach because the required ensemble size is smaller by a factor a, and the cost of each LETLM is smaller by a factor a³. The approach is illustrated in a simple coupled model in which the linear reduction parameter a ranges from 0.4 to 0.1. Hence, in this case, the LETLM-R is several orders of magnitude faster than the LETLM. We show that the LETLM-R is just as accurate as the LETLM. Similar speed-ups are anticipated for the transpose or adjoint of the LETLM-R.

 

 

Nikki Privé  - Adjoint estimation of observation impact explored with an Observing System Simulation Experiment

Title: Adjoint estimation of observation impact explored with an Observing System Simulation Experiment

Authors: Nikki Privé (Morgan State University, GESTAR)

R.M. Errico (Morgan State University, GESTAR)

 

In an Observing System Simulation Experiment, the full, true state of the simulated atmosphere is known. This knowledge allows the direct calculation of analysis and forecast errors, and also may be used in conjunction with an adjoint tool to calculate metrics that are unachievable in the real world. For example, the errors that result from the use of the analysis state as verification when running adjoint calculations of observation impact on a forecast can be quantified. The adjoint itself can also be used to operate on the analysis state rather than on the forecast state. Results from these and other adjoint experiments using the NASA/GMAO OSSE framework will be presented.

 

Yvonne Ruckstuhl - Joint parameter and state estimation with ensemble Kalman filter based algorithms for convective scale applications

 

Title: Joint parameter and state estimation with ensemble Kalman filter based algorithms for convective scale applications

 

Authors: Yvonne Ruckstuhl (Meteorological Institute Munich, Ludwig-Maximilians-Universität München, Germany)

Tijana Janjic (Hans Ertel Centre for Weather Research, Deutscher Wetterdienst, Germany)

 

Representation of clouds in convection permitting models is sensitive to NWP model parameters that are often very crudely known (for example roughness length). Our goal is to allow for uncertainty in these parameters and estimate them from data using the ensemble Kalman filter (EnKF) approach. However, to deal with difficulties associated with convective scale applications, such as non-Gaussianity and constraints on state and parameter values, modifications to the classical EnKF are necessary.

 

In this study, we evaluate several recently developed EnKF-based algorithms that either explicitly incorporate constraints, such as mass conservation and positivity of precipitation, or introduce higher-order moments into the joint state and parameter estimation problem. We compare their results to those of the localized EnKF on a common idealized test case. The test case uses perfect model experiments with the one-dimensional modified shallow water model that was designed to mimic important properties of convection.
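
For context, joint state and parameter estimation with an EnKF is usually done by state augmentation: each member carries the parameters alongside the state, and the parameters are updated through their sampled cross-covariances with the observed quantities. The sketch below is a minimal perturbed-observation EnKF of this kind, with illustrative names and sizes; it does not include the conservation constraints or higher-order-moment modifications evaluated in the study.

```python
# Minimal sketch of joint state-parameter estimation by state augmentation in a
# stochastic (perturbed-observation) EnKF. Only the state is observed; the
# parameters are corrected through the sampled cross-covariance.
import numpy as np

def enkf_augmented_update(ens_state, ens_param, H, R, y, seed=0):
    """ens_state: (K, n) state members; ens_param: (K, p) parameter members;
    H: (m, n) observation operator on the state; R: (m, m); y: (m,) observations."""
    K, n = ens_state.shape
    p = ens_param.shape[1]
    Z = np.hstack([ens_state, ens_param])              # augmented members, shape (K, n+p)
    Zp = Z - Z.mean(axis=0)                            # augmented perturbations
    Haug = np.hstack([H, np.zeros((H.shape[0], p))])   # parameters are not observed
    PHt = Zp.T @ (Zp @ Haug.T) / (K - 1)               # cov([x; theta], H x), shape (n+p, m)
    S = Haug @ PHt + R                                 # innovation covariance
    gain = PHt @ np.linalg.inv(S)                      # augmented Kalman gain
    rng = np.random.default_rng(seed)
    for k in range(K):
        y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R)
        Z[k] = Z[k] + gain @ (y_pert - Haug @ Z[k])    # perturbed-observation update
    return Z[:, :n], Z[:, n:]

# Toy usage: 20 members, 5 state variables, 1 parameter, first 3 states observed
K, n, p, m = 20, 5, 1, 3
H = np.eye(m, n)
state = np.random.randn(K, n)
param = 1.0 + 0.3 * np.random.randn(K, p)              # e.g. an uncertain roughness length
state_a, param_a = enkf_augmented_update(state, param, H, 0.1 * np.eye(m), np.zeros(m))
```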

 

 

Adrian Sandu  - Solving robust 4D-Var data assimilation

Title: Solving robust 4D-Var data assimilation

 

Authors: Adrian Sandu (Virginia Tech)

Vishwas Rao (Argonne National Laboratory)

Elias Nino (Universidad del Norte, Colombia)

Michael Ng  (Hong Kong Baptist University)

 

The presence of outliers in the data is a common occurrence in data assimilation, and these outliers negatively affect the quality of the solution. Data quality control that rejects observations on the basis of background departure statistics can leave the analysis unable to capture small scales.

 

Robust data assimilation is needed to overcome this issue. We discuss approaches to rigorously formulate, and numerically solve robust 4D-Var data assimilation problems using L1 and Huber norms.
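
One common way to write such a robust cost function, shown here as a sketch rather than the authors' exact formulation, replaces the quadratic observation term with a Huber penalty on the scaled innovations; the pure L1 case corresponds to keeping only the linear branch.

```latex
% Robust 4D-Var observation term with a Huber penalty (sketch, not necessarily
% the authors' exact formulation); r_k are the R^{-1/2}-scaled innovations.
\begin{align*}
  J(x_0) &= \tfrac{1}{2}\,(x_0 - x_b)^{\mathrm T} B^{-1} (x_0 - x_b)
            \;+\; \sum_{k=0}^{N} \sum_{i} \rho\!\left( r_{k,i} \right),
  \qquad
  r_k = R_k^{-1/2}\!\left(\mathcal{H}_k\!\big(\mathcal{M}_{0\to k}(x_0)\big) - y_k\right),\\[2pt]
  \rho(r) &=
  \begin{cases}
    \tfrac{1}{2} r^{2}, & |r| \le \tau \quad \text{(quadratic: small departures treated as Gaussian)},\\[2pt]
    \tau |r| - \tfrac{1}{2}\tau^{2}, & |r| > \tau \quad \text{(linear tail: outliers are down-weighted)}.
  \end{cases}
\end{align*}
```

Both choices lead to optimization problems that can still be attacked with gradient-based minimization after suitable smoothing or iterative reweighting, which is the numerical issue the talk addresses.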

 

 

Elizabeth A. Satterfield - Observation informed generalized hybrid error covariance models

Title: Observation Informed Generalized Hybrid Error Covariance Models

 

Authors: Elizabeth A. Satterfield (Naval Research Laboratory, Monterey, CA, USA)

Daniel Hodyss (Naval Research Laboratory, Monterey, CA, USA)

David D. Kuhl (Naval Research Laboratory, Washington, DC, USA)

Craig H. Bishop (Naval Research Laboratory, Monterey, CA, USA)

 

Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble derived covariance matrix is equal to the true error covariance matrix. Here, we describe a simple and intuitively compelling method to fit calibration functions of the ensemble sample variance to the mean of the distribution of true error variances given an ensemble sample variance. Once the calibration function has been fitted, it can be combined with ensemble-based and climatologically based error correlation information to obtain a generalized hybrid error covariance model. When the calibration function is chosen to be a linear function of the ensemble variance, the generalized hybrid error covariance model is the widely used linear hybrid consisting of a weighted linear sum of a climatological and an ensemble-based forecast error covariance matrix. However, when the calibration function is chosen to be, say, a cubic function of the ensemble sample variance, the generalized hybrid error covariance model is a non-linear function of the ensemble sample estimate. Consistent with earlier work, it is shown that the linear hybrid is optimal in the case where the climatological distribution of true forecast error variances is an inverse-gamma probability density function (pdf) and the distribution of ensemble sample variances is a gamma pdf. However, when these conditions are not met, the mean of the distribution of true error variances given an ensemble sample variance will, in general, be a non-linear function of the ensemble sample variance. To aid understanding, a hierarchy of cases in which the generalized hybrid outperforms the linear hybrid is presented. It is shown that in the case of the Lorenz ‘96 model, data assimilation performance is improved considerably by using the generalized hybrid instead of the linear hybrid.
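
In generic notation (assumed here, not taken from the paper), the two covariance models can be summarized as follows: the linear hybrid blends climatological and ensemble covariances with fixed weights, while the generalized hybrid replaces the linear dependence on the ensemble sample variance with a fitted calibration function, for example a cubic one.

```latex
% Sketch of the two covariance models (notation assumed): B_c is a climatological
% covariance, P_e the ensemble covariance, s^2 the ensemble sample variance, and
% f a calibration function fitted to E[ sigma_true^2 | s^2 ].
\begin{align*}
  \text{linear hybrid:}\quad
    & P_{\mathrm{hyb}} \;=\; \alpha\, P_e \;+\; (1-\alpha)\, B_c, \\
  \text{generalized hybrid (e.g.\ cubic calibration):}\quad
    & \sigma^2_{\mathrm{hyb}} \;=\; f(s^2) \;=\; a_0 + a_1 s^2 + a_2 s^4 + a_3 s^6 .
\end{align*}
```

When f is linear in s² the two models coincide; otherwise the hybrid variance becomes a nonlinear function of the ensemble sample variance, which is the regime the abstract argues for when the inverse-gamma/gamma conditions do not hold.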

 

 

Rui Silva - Regional climate model's cloud microphysics and spatial resolution role in precipitation simulation during an atmospheric river event in Portugal

Title:  Regional climate model's cloud microphysics and spatial resolution role in precipitation simulation during an atmospheric river event in Portugal

Authors: Rui Silva (CESAM, University of Aveiro)

Irina Gorodetskaya (CESAM, University of Aveiro)

 

In an increasingly warmer climate, rare intense precipitation events are becoming less rare and more intense, mainly because of the increase in evaporation rates and in the capacity of the atmosphere to hold moisture. In the Iberian Peninsula and, more specifically, in Portugal, the most extreme precipitation events affecting the country are frequently associated with atmospheric rivers. This study examines one of these events, when an atmospheric river formed during the explosive development of the cyclone Gong hit mainland Portugal on January 19, 2013. The aim of this study is to better understand the processes involved in the development of this event by performing a set of regional climate simulations using different domain sizes, resolutions and cloud microphysics schemes. The simulations were carried out using the Weather Research and Forecasting model (WRF), version 3.9, forced by ERA-Interim reanalysis fields from the European Centre for Medium-Range Weather Forecasts (ECMWF), with a horizontal resolution of 0.75° x 0.75°. The model was run in a one-way, three-nested domain configuration, centred on mainland Portugal, with horizontal resolutions of 27 km, 9 km, and 3 km. The cloud microphysical schemes applied in the simulations include the WRF Single-moment 6-class scheme (WSM6) and two double-moment schemes: the new Thompson scheme and the Morrison scheme.

 

Results show the impact of the atmospheric river intensity, trajectory and landfall location on the amount of precipitation produced. Furthermore, continental regions with high topography also play an important role in enhancing precipitation, especially in regions above 600 m a.s.l., where precipitation occurs mostly in the solid phase. The water paths for each hydrometeor type show major discrepancies between the three cloud schemes. The choice of cloud scheme has a greater influence than the domain resolution on the predicted cloud water paths, while the total amount of precipitation depends more on the domain resolution. To assess which microphysical scheme is best suited for the study of extreme precipitation events in Portugal, we evaluate the simulated cloud and precipitation properties using ground-based precipitation radars as well as satellite products.

Ehouarn Simon - On the use of the saddle formulation in weakly-constrained 4D-Var

Title: On the use of the saddle formulation in weakly-constrained 4D-Var

 

Authors: Serge Gratton (Université de Toulouse, INP, IRIT, France)

Selime Gürol (CERFACS, France)

Ehouarn Simon (Université de Toulouse, INP, IRIT, France)

Philippe Toint (NAXYS, University of Namur, Belgium)

 

 

We discuss the practical use of the saddle variational formulation for the weakly-constrained 4D-Var method in data assimilation. It is shown that the method, in its original form, may produce erratic results or diverge because of the inherent lack of monotonicity of the produced objective function values. Convergent, variationally coherent variants of the algorithm are then proposed, whose practical performance is compared to that of other formulations (original saddle, state and forcing). This comparison is conducted on toy models by means of twin experiments. Because these variants essentially retain the parallelization advantages of the original proposal, they often, but not always, perform best, even for moderate numbers of computing processes.
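
For reference, the saddle formulation referred to above is commonly written as the following linear system (notation assumed from the weak-constraint 4D-Var literature, not copied from the talk).

```latex
% Saddle-point system of weakly-constrained 4D-Var (generic notation):
% D = blkdiag(B, Q_1, ..., Q_N) collects background and model-error covariances,
% R = blkdiag(R_0, ..., R_N) the observation error covariances, L is the
% block-bidiagonal operator built from the tangent linear model, and H is the
% four-dimensional observation operator; b collects background and model-error
% terms and d the innovations.
\[
  \begin{pmatrix}
    D & 0 & L \\
    0 & R & H \\
    L^{\mathrm T} & H^{\mathrm T} & 0
  \end{pmatrix}
  \begin{pmatrix} \lambda \\ \mu \\ \delta x \end{pmatrix}
  =
  \begin{pmatrix} b \\ d \\ 0 \end{pmatrix}.
\]
```

Because the leading blocks involve D and R rather than their inverses, and applications of L decouple across sub-windows, the iterations can be parallelized in the time dimension; preserving this property is the point of the convergent variants discussed above.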

 

Polly Smith - Estimating forecast error covariances for strongly coupled atmosphere-ocean 4D-Var data assimilation

 

Title: Estimating forecast error covariances for strongly coupled atmosphere-ocean 4D-Var data assimilation

 

Authors: Polly Smith (School of Mathematical, Physical and Computational Sciences, University of Reading)

Amos Lawless (School of Mathematical, Physical and Computational Sciences, University of Reading)

Nancy Nichols (School of Mathematical, Physical and Computational Sciences, University of Reading)

 

        

Strongly coupled atmosphere-ocean data assimilation emulates the real world pairing of the atmosphere and ocean by solving the assimilation problem in terms of a single combined atmosphere-ocean state. A significant challenge in strongly coupled variational atmosphere-ocean data assimilation is a priori specification of the cross-domain forecast error covariances. These covariances must capture the correct physical structure of interactions across the air-sea interface as well as the different scales of evolution in the atmosphere and ocean; if prescribed correctly, they will allow observations in one fluid to improve the analysis in the other.

 

Here we investigate the nature and structure of atmosphere-ocean cross-domain forecast error correlations using an idealised single-column coupled atmosphere-ocean incremental 4D-Var assimilation system. We present results from a set of identical twin experiments that use ensembles of cycled strongly coupled 4D-Var assimilations to derive estimates of the atmosphere-ocean cross-domain forecast error correlations for summer and winter test cases. Our results show significant variation in the strength and structure of the error cross-correlations in the atmosphere-ocean boundary layer between summer and winter and between day and night. These differences provide valuable insight into the nature of coupled atmosphere-ocean error correlations for different seasons and points in the diurnal cycle and can be explained by considering the underlying model physics, forcing and known atmosphere-ocean feedback mechanisms.

 

Introducing improved cross-domain error covariance information between the atmosphere and ocean will enable greater use of near surface observations and should in turn produce more accurate and balanced atmosphere-ocean analysis states and more reliable coupled model forecasts and reanalyses. We also present results from initial experiments in which we explore the effect of including the ensemble derived atmosphere-ocean forecast error correlation information within our simple 4D-Var assimilation system.

 

 

 

Timothy Smith - A dynamical reconstruction of AMOC Variability at the mouth of the South Atlantic

Title: A dynamical reconstruction of AMOC Variability at the mouth of the South Atlantic

 

Authors: Timothy Smith  (Institute for Computational Engineering and Sciences, The University of Texas at Austin, USA)

Patrick Heimbach   (Institute for Computational Engineering and Sciences, The University of Texas at Austin, USA; Jackson School of Geosciences, The University of Texas at Austin, USA)

 

The main dedicated Atlantic meridional overturning circulation (AMOC) observing system, RAPID-MOCHA, is located in the Northern hemisphere. Insights from this system have motivated a recent focus on the South Atlantic, where water masses are exchanged with neighboring ocean basins. Here we study the South Atlantic MOC (SAMOC) at 34°S using an inverse modeling approach to show the linear dynamics which carry atmospheric perturbations to this latitude. In particular, we compute linear sensitivities of the SAMOC to global atmospheric forcing using the adjoint of the MIT general circulation model, which is fit to 20 years of ocean observation data in a dynamically consistent framework. The dynamical pathways highlighted by these sensitivity patterns show that the domain of influence for the SAMOC is quite broad, covering neighboring ocean basins even on short time scales. This result is unique to the South Atlantic, as it has been shown that the AMOC in the Northern hemisphere is largely governed by dynamics confined to that basin. We use these sensitivities along with an atmospheric state provided by the ERA-Interim reanalysis product to attribute the influence of each external forcing variable (e.g. wind stress, precipitation) on the SAMOC at seasonal to interannual timescales from 1992-2011. At this latitude, we find that wind stress dominates variability at these frequencies and that buoyancy forcing plays a relatively minor role, confirming results from past forward sensitivity experiments. Additionally, we show that the SAMOC seasonal cycle is largely explained by local perturbations in the westerlies. Interannual variability during this time period, however, is shown to have originated from remote locations across the globe, including a nontrivial component originating from the tropical Pacific which is attributed to the El Niño Southern Oscillation. The influence of widespread atmospheric anomalies emphasizes the importance of continuous widespread observations of the global atmospheric state for attributing observed SAMOC variability.
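
The attribution step described above is typically a convolution of adjoint sensitivities with forcing anomalies; a generic form, with notation assumed here rather than taken from the talk, is:

```latex
% Generic adjoint-based attribution: the SAMOC anomaly is reconstructed by
% convolving linear sensitivities with lagged forcing anomalies.
\[
  \Delta \mathrm{SAMOC}(t) \;\approx\;
  \sum_{v}
  \int_{0}^{T}\!\!\int_{\Omega}
  \frac{\partial J}{\partial F_v}(\mathbf{x}, \tau)\,
  \Delta F_v(\mathbf{x}, t - \tau)\; \mathrm{d}\mathbf{x}\, \mathrm{d}\tau ,
\]
```

where J is the overturning transport at 34°S, F_v runs over the external forcing fields (wind stress components, heat and freshwater fluxes), and the partial derivatives are the adjoint sensitivities; the lagged, spatially resolved contributions are what allow variability to be attributed to local versus remote forcing.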

 

Magda Sousa - Analysis of global sea surface temperature changes under future scenarios

Title: Analysis of global sea surface temperature changes under future scenarios

 

Authors: Magda C. Sousa (CESAM, Departamento de Física, Universidade de Aveiro)

Rui Ruela (Departamento de Física, Universidade de Aveiro)

Ines Alvarez (CESAM, Departamento de Física, Universidade de Aveiro; EPhysLab (Environmental Physics Laboratory), Universidade de Vigo, Facultade de Ciencias, Ourense, Spain)

Maite deCastro  (EPhysLab (Environmental Physics Laboratory), Universidade de Vigo, Facultade de Ciencias, Ourense, Spain)

Moncho Gomez-Gesteira  (EPhysLab (Environmental Physics Laboratory), Universidade de Vigo, Facultade de Ciencias, Ourense, Spain)

João M. Dias (CESAM, Departamento de Física, Universidade de Aveiro)

 

Sea surface temperature (SST) changes are one of the most important sources of uncertainty in future climate change. The general warming of ocean temperatures is also an important process to consider when analyzing the dynamics controlling upwelling. This work aims to assess the ability of the CMIP5 models to simulate the worldwide SST and to present detailed, higher-accuracy estimations of the spatio-temporal trends of SST along the southern limit of the Canary Upwelling System (SLCUS) in a climate change context. In this sense, a comparative analysis between CMIP5 models and the Era-Interim dataset using historical simulations (1979–2005) was carried out in order to identify the climate models that best reproduce the worldwide SST patterns. This analysis was done by comparing the probability distributions of the CMIP5 models and the Era-Interim dataset, as well as through Taylor diagrams inside domains obtained with K-means cluster analysis. The clustering was performed on the monthly subsets, resulting in a spatial subdivision of the domain into regions with similar SST magnitude and variability. Afterwards, data from the selected climate models are used to assess global future changes in the selected domains and specifically along the SLCUS. The differences between past and future patterns were also estimated using statistical tests. In general, results for future SST trends reveal a general warming throughout the domains, although the warming rate is considerably lower near the shore than at open ocean locations due to coastal upwelling effects. In addition, SST projections show higher warming rates from May to August than from October to April in response to the future decreasing trend in the upwelling index during the summer months along the SLCUS.

Răzvan Ştefănescu  - Accuracy improvement of hybrid 4DEnVar and MLEF methods

Title: Accuracy improvement of hybrid 4DEnVar and MLEF methods

Authors: Răzvan Ştefănescu  (Spire Global, GVM Model Department, Boulder, CO, USA)

Dusanka Zupanski  (Spire Global, GVM Model Department, Boulder, CO, USA)

 

Hybrid 4DEnVar and the MLEF are two ensemble-variational data assimilation methods that model the increments as a linear combination of forward ensemble perturbations, so that no tangent linear or adjoint models are required. In contrast, the hybrid 4DVar method employs the tangent linear model to advance the increments in time, starting from a linear combination of the background term and forward ensemble perturbations. In this study, we propose to enhance the accuracy of the solutions obtained by the hybrid 4DEnVar and MLEF methods by enriching the increment model with adjoint and gradient ensemble perturbations in addition to the forward ensemble perturbations. This agrees with the reduced-order 4DVar method proposed in [1], where approximating well all three high-fidelity optimality conditions leads to more accurate solutions than approximating only the first optimality condition. By using only forward ensembles to model the increments, hybrid 4DEnVar approximates well only the first high-fidelity optimality condition. Experiments using the 1D Burgers model revealed superior solutions when the increment model was enhanced with adjoint and gradient information.
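
For reference, the standard hybrid-4DEnVar increment model reads as follows (generic notation; the proposed enrichment with adjoint and gradient ensemble perturbations adds further basis vectors to this expansion and is not written out here).

```latex
% Standard hybrid-4DEnVar increment model (generic notation): the 4D increment
% at each time t_i is a localized linear combination of the forward ensemble
% perturbations plus a static (climatological) part.
\[
  \delta x(t_i) \;=\; \delta x_c \;+\; \sum_{k=1}^{K} \alpha_k \circ x'_k(t_i),
  \qquad i = 0,\dots,N,
\]
```

where x'_k(t_i) are the forward (nonlinear) ensemble perturbations valid at time t_i, the α_k are localized weight fields, ∘ is the Schur product and δx_c is the static part of the increment; the time evolution is carried entirely by the precomputed ensemble trajectories, which is why no tangent linear or adjoint model is needed.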

 

References

[1] POD/DEIM reduced-order strategies for efficient four dimensional variational data assimilation, R. Ştefănescu, A. Sandu and I. M. Navon, Journal of Computational Physics, Volume 295, Pages 569-595, 2015.

Jemima M. Tabeart - Improving the conditioning of estimated covariance matrices

Title: Improving the conditioning of estimated covariance matrices

 

Authors: Jemima M. Tabeart  (University of Reading)

Sarah L. Dance (University of Reading)

Amos S. Lawless (University of Reading and NCEO Reading)

Nancy K. Nichols (University of Reading and NCEO Reading)

Joanne A. Waller (University of Reading)

 

New developments in the treatment of observation uncertainties have shown that correctly accounting for correlated observation errors in data assimilation can improve analysis accuracy and forecast skill. In practice, sample covariance matrices are often used to estimate observation error covariance matrices. For high dimensional problems, such as in numerical weather prediction, these sample covariance matrices are likely to be very ill-conditioned, causing problems for practical applications.

 

Conditioning of the observation error covariance matrix is important for two reasons. Firstly, the data assimilation procedure requires the inversion of the observation error covariance matrix. The condition number of the covariance matrix is related to the size of the numerical error we expect when computing its inverse. Additionally, it has been shown that the eigenvalues of the observation error covariance matrix are important for determining the conditioning of the variational data assimilation problem. For conjugate gradient methods, the speed of convergence relates to the conditioning of the system. It is therefore of interest to investigate ways of reducing the condition number of a covariance matrix while preserving its correlation structure.

 

We study two “reconditioning” methods which can be used to reduce the condition number of any covariance matrix: ridge regression, and the minimum eigenvalue method. In particular, we compare both methods theoretically for the first time, investigating their impact on the variances and correlations of a general covariance matrix. Using this new theory, we find that both methods increase variances, and that the ridge regression method results in a larger increase to the variances than the minimum eigenvalue method for any covariance matrix. We then apply the reconditioning methods to two examples. We observe that the minimum eigenvalue method results in smaller overall changes to the covariance matrix, and retains more of the correlation structure of the original covariance matrix. Understanding how different reconditioning methods affect variance and correlation structure will allow users to select the most appropriate method for their application. Future work in this area is expected to inform the choice of target condition number of the reconditioned covariance matrix.
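
The two reconditioning methods are commonly defined as in the Python sketch below, which works towards a target condition number; the exact choices made in the study may differ.

```python
# Sketch of the two "reconditioning" methods discussed above, as commonly
# defined. Both aim at a target condition number kappa_max for a covariance R.
import numpy as np

def ridge_regression_recondition(R, kappa_max):
    """Add a scalar inflation delta*I chosen so that the new condition number
    (lmax + delta) / (lmin + delta) equals kappa_max."""
    lam = np.linalg.eigvalsh(R)
    lmin, lmax = lam[0], lam[-1]
    delta = (lmax - kappa_max * lmin) / (kappa_max - 1.0)
    return R + delta * np.eye(R.shape[0])

def min_eigenvalue_recondition(R, kappa_max):
    """Raise all eigenvalues below lmax/kappa_max to that threshold, keeping the
    eigenvectors (and hence much of the correlation structure) unchanged."""
    lam, V = np.linalg.eigh(R)
    thresh = lam[-1] / kappa_max
    lam_new = np.maximum(lam, thresh)
    return (V * lam_new) @ V.T

# Toy usage: an ill-conditioned sample covariance estimated from few samples
X = np.random.randn(20, 100)             # 20 samples of a 100-dim observation vector
R_sample = np.cov(X, rowvar=False) + 1e-8 * np.eye(100)
for recondition in (ridge_regression_recondition, min_eigenvalue_recondition):
    R_new = recondition(R_sample, kappa_max=100.0)
    print(recondition.__name__, np.linalg.cond(R_new))
```

Both functions return a matrix with condition number close to the target; the practical difference highlighted in the abstract is how much each one inflates the variances and distorts the correlations in doing so.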

 

 

Catherine Thomas - Adopting NCEP’s Hybrid 4DEnVar data assimilation system to the FV3GFS

Title: Adopting NCEP’s Hybrid 4DEnVar Data Assimilation System to the FV3GFS

 

Authors: Catherine Thomas (IMSG/EMC)

Rahul Mahajan (IMSG/EMC)

Daryl Kleist (EMC)

Jeffrey Whitaker (ESRL)

Russ Treadon (EMC)

 

Through a National Weather Service (NWS)-led community effort, several dynamical cores were evaluated as part of the Next Generation Global Prediction System (NGGPS), and the Finite-Volume Cubed-Sphere Dynamical Core (FV3) was chosen as the replacement for the Global Spectral Model (GSM) in the upcoming fully coupled weather-scale system. It is computationally efficient, conservative, and non-hydrostatic, making it suitable across the many spatio-temporal scales of weather and climate prediction. The first step towards this unified system is to replace the spectral dynamical core of the Global Forecast System (GFS) with the new dynamical core. The Gridpoint Statistical Interpolation (GSI) system, which forms the basis of data assimilation for the GFS, needs to be adapted to the new dynamical core as well.

 

Development of a prototype FV3GFS system is complete, and several components of the legacy GFS system have been incorporated or transitioned into the FV3GFS system, including the updates required to the data assimilation system. Several updates are necessary, e.g. the addition of stochastic physics to represent model uncertainty, the initialization of forecasts with an Incremental Analysis Update (IAU), and the use of an FV3 climatological static covariance matrix. In addition to the dynamical core, the Zhao-Carr cloud microphysics parameterization is being replaced by the more advanced GFDL cloud microphysics to predict the individual hydrometeors. Testing and evaluation of these different components in comparison with the operational GFS version of the model will be presented, and a final summary with a review of a real-time pre-operational FV3GFS beta system will be shown.

 

Ricardo Todling - Preliminary experiments extending the assimilation window of the GMAO Hybrid 4DEnVar

Title: Preliminary experiments extending the assimilation window of the GMAO Hybrid 4DEnVar

Authors: R. Todling (NASA/GMAO)

Akella (NASA/GMAO)

El Akkraoui (NASA/GMAO)

Guo (NASA/GMAO)

L. Takacs (NASA/GMAO)

 

Most major operational weather centers around the world now use some type of hybrid ensemble-variational data assimilation to support their NWP systems. Many reanalysis efforts are also headed in the same hybrid direction. The large majority of implementations, for either purpose, use a 6-hour assimilation window. This presentation looks into extending the length of the assimilation window of the NASA/GMAO Hybrid 4DEnVar system to 12 and possibly 24 hours. In doing so, we investigate different approaches for extending the 3D IAU-based assimilation procedure implemented for the members of the ensemble into flavors of 4D IAU. The possibility of using a 4D extension of the ensemble square-root filter will be compared with a filter-free approach that simply inflates the central analysis to initialize the members of the ensemble.

For NWP applications, the hope of this work is to reduce diurnal biases by having an assimilation window over which the model is more evenly initialized and becomes less sensitive to the 6-hourly differences in the conventional observing network. For reanalysis, and more specifically coupled reanalysis, the hope is to help bridge the gap between the relatively short assimilation window typical of atmospheric systems and the relatively long window of ocean systems. The preliminary work to be presented at the Workshop will focus on the atmospheric system and on flavors of extending its ensemble assimilation component.

Victor Trappler  - Parameter control in presence of uncertainties: robust estimation of bottom friction

Title: Parameter control in presence of uncertainties: robust estimation of bottom friction

 

Authors: Victor Trappler (Université Grenoble-Alpes)

Elise Arnaud (Université Grenoble-Alpes)

Laurent Debreu (Inria)

Arthur Vidard (Inria).

 

Many physical phenomena are modelled numerically in order to better understand and/or predict their behaviour. However, some complex and small-scale phenomena cannot be fully represented in the models. The introduction of ad-hoc correcting terms can represent these unresolved processes, but they need to be properly estimated.

 

A good example of this type of problem is the estimation of the bottom friction parameters of the ocean floor. This is important because it affects the general circulation, particularly in coastal areas, notably through its influence on wave breaking. Because of its strong spatial variability, it is impossible to estimate the bottom friction by direct observation, so it must be estimated indirectly by observing its effects on the surface circulation. This task is further complicated by the presence of uncertainty in other characteristics linking the bottom and the surface (e.g. boundary conditions). The techniques currently used to adjust these parameters are very basic and do not take these uncertainties into account, thereby increasing the error in the estimate.

 

Classical methods of parameter estimation usually imply the minimisation of an objective function that measures the error between some observations and the results obtained by a numerical model. The optimum is directly dependent on the fixed nominal value given to the uncertain parameter and therefore may not be relevant under other conditions.

 

Classical methods can be extended to account for such uncertainties using so-called robust control theory, which mixes optimal control approaches with sampling and estimation principles.

 

Such approaches will be presented and applied to an academic model of a coastal area, as illustrated in the toy sketch below. The control parameter is the bottom friction, and probability distributions are assumed for the uncertain boundary conditions.
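
The contrast between a classical calibration at a fixed nominal value and a robust calibration that averages over the uncertain input can be illustrated with a deliberately simple scalar surrogate. Everything in the sketch below, including the model function and parameter values, is hypothetical and merely stands in for the coastal model, its bottom friction parameter and its uncertain boundary forcing.

```python
# Toy sketch of the robust-estimation idea (illustrative only): calibrate a
# friction-like parameter k either against a fixed nominal value of an
# uncertain forcing u, or by minimising the expected misfit over u.
import numpy as np
from scipy.optimize import minimize_scalar

def model(k, u):
    """Hypothetical scalar surrogate: 'surface response' to friction k and forcing u."""
    return u * np.exp(-k) + 0.1 * k

k_true, u_true = 0.8, 1.0
obs = model(k_true, u_true)                      # synthetic observation

def misfit(k, u):
    return (model(k, u) - obs) ** 2

# 1) Classical calibration with the uncertain forcing fixed at a nominal value
u_nominal = 1.3
k_classical = minimize_scalar(lambda k: misfit(k, u_nominal),
                              bounds=(0.0, 5.0), method="bounded").x

# 2) Robust calibration: minimise a Monte Carlo estimate of E_u[ misfit(k, u) ]
rng = np.random.default_rng(1)
u_samples = rng.normal(loc=1.0, scale=0.2, size=500)
k_robust = minimize_scalar(lambda k: np.mean(misfit(k, u_samples)),
                           bounds=(0.0, 5.0), method="bounded").x

print(f"classical (nominal u): k = {k_classical:.2f}; robust: k = {k_robust:.2f}")
```

The classical estimate drifts towards whatever nominal forcing was assumed, while the robust estimate trades off the misfit across the whole assumed distribution of the uncertain input, which is the behaviour sought for the bottom friction.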

 

 

Yannick Trémolet - The Joint Effort for Data assimilation Integration (JEDI)

Title: The Joint Effort for Data assimilation Integration (JEDI)

 

Author: Yannick Trémolet (JCSDA)

 

The Joint Effort for Data assimilation Integration (JEDI) aims at providing a unified data assimilation framework for all partners of the Joint Center for Satellite Data Assimilation (JCSDA) and for the data assimilation community in general. The long-term objective is to provide a unified framework for research and operational use, for different components of the Earth system, and for different applications, with the aim of reducing or avoiding redundant work within the community and increasing the efficiency of research and of the transition from development teams to operations.

 

One area where this is particularly important is the use of observations. As Earth observing systems are constantly evolving and new systems are launched, continuous scientific developments are needed to exploit the full potential of the data. Given the cost of new observing systems, it is important that this process happens quickly. Reducing duplication of work and increasing collaboration between agencies in this domain can be achieved through Unified Forward Operators (UFO).

 

Over the last decade or two, software development technology has advanced significantly, making routine the use of complex software in everyday life. The key concept in modern software development for complex systems is the separation of concerns. In a well-designed architecture, teams can develop different aspects in parallel without interfering with other teams’ work and without breaking the components they are not working on. Scientists can be more efficient focusing on their area of expertise without having to understand all aspects of the system. This is similar to the concept of modularity.  However, modern techniques (such as Object Oriented programming) extend this concept and, just as importantly, help enforce it uniformly throughout a code.

 

JEDI is based on the Object Oriented Prediction System (OOPS), encapsulating models and observations, which will be described. Extensions towards sharing observations operators and observation related operations such as quality control across models using the UFO will also be described.

 

JEDI is a collaborative project with developers distributed across agencies and in several locations in different time zones. In order to facilitate collaborative work, modern software development tools are used. These tools include version control, bug and feature development tracking, automated regression testing and provide utilities for exchanging this information. The collaborative development process in JEDI will be presented before concluding with the status of the project.

 

 

Francois Vandenberghe  - Variational assimilation of GPS radio-occultation observations in rainy conditions

Title: Variational assimilation of GPS radio-occultation observations in rainy conditions

 

Authors: François Vandenberghe   (JCSDA, Boulder CO)

Thomas Auligné   (JCSDA, Boulder CO)

 

 

Assimilation of GPS Radio-Occultation (GPSRO) observations typically requires the integration of the GPS phase along the ray path between the emitting GPS satellite and the low-earth-orbit receiving satellite. Evaluation of the atmospheric refractivity along the ray path is therefore part of any GPSRO observation operator. At GPS frequencies, the atmospheric refractivity is a function of pressure, temperature and water vapor. The dependence on water vapor, however, vanishes in a saturated atmosphere. This introduces a switch in the GPSRO observation operator. The problem is therefore twofold:

 

  1. Find the rainy regions along the ray path;
  2. Mitigate the on/off process in the forward operator during the minimization.

 

The first problem assumes that an accurate description of the cloud field is available from the model background, which can sometimes be questionable for longer forecasts. In this talk, we are interested in the second problem and will discuss our attempt to represent the refractivity equation by a smooth numerical process. We use for that purpose techniques similar to those introduced in the 1990s for the development of the adjoint operators of moist physical schemes. Data assimilation experiments with real GPSRO data are presented and the benefit of the smoothed forward operator is assessed.
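
For orientation, the standard two-term refractivity expression used in GPSRO forward operators is given below, together with one generic way of smoothing a Heaviside-type switch; the smoothing actually adopted in the talk may differ.

```latex
% Standard two-term (Smith-Weintraub type) refractivity, and one illustrative
% smooth replacement of an on/off switch in the forward operator.
\begin{align*}
  N &= k_1 \frac{p}{T} + k_2 \frac{e}{T^{2}},
  \qquad k_1 \approx 77.6\ \mathrm{K\,hPa^{-1}},\quad
         k_2 \approx 3.73 \times 10^{5}\ \mathrm{K^{2}\,hPa^{-1}},\\[4pt]
  \text{switch:}\quad
  g(\mathrm{RH}) &= \Theta(\mathrm{RH} - 1)
  \;\;\longrightarrow\;\;
  g_\epsilon(\mathrm{RH}) = \tfrac{1}{2}\!\left[1 + \tanh\!\left(\tfrac{\mathrm{RH} - 1}{\epsilon}\right)\right],
\end{align*}
```

with p pressure, T temperature, e water vapour pressure, RH relative humidity, Θ the Heaviside step and ε a small smoothing parameter; replacing Θ by g_ε keeps the operator differentiable so that the minimization does not encounter an abrupt on/off transition.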

 

 

Sanita Vetra-Carvalho - On improving urban flood prediction through data assimilation using CCTV images

Title: On improving urban flood prediction through data assimilation using CCTV images

 

Authors: Sanita Vetra-Carvalho, (Department of Meteorology, University of Reading, Reading, UK.)

Sarah L. Dance, (Department of Meteorology, University of Reading, Reading, UK.; 2Department of Mathematics and Statistics, University of Reading, Reading, UK.)

 David C. Mason (Department of Geography and Environmental Science, University of Reading, Reading, UK.)

Javier Garcia-Pintado (Department of Meteorology, University of Reading, Reading, UK.; Marum, Research Faculty University of Bremen, Germany)

 

Recent use of synthetic aperture radar (SAR) images in rural flood forecasting has allowed assimilation of high-spatial-resolution observations of water levels over large areas into river flood forecasting models. This rich source of observational information has offered a valuable improvement in flood forecasting accuracy and an increase in lead time. In urban areas it is even more important to have dense, high-spatial-resolution observations due to the complexity of the landscape and the interactions of buildings, sewers, rivers, etc. However, SAR images in cities currently provide only limited spatial coverage, e.g. due to building shadows. Even when used together with river gauge measurements, these observations are not sufficient to constrain the dynamical models for urban flooding, which have spatial resolutions of 1-2 meters. To remedy this lack of information in urban areas and to make use of the abundance of technology in cities, our research concentrates on using novel and easily available data from cities, e.g. images from CCTV cameras.

 

Our aim is to assess the impact of such data on improving flood forecasts through correcting the state variables as well as uncertain boundary conditions such as inflows. Another very important aspect of our work is to understand the observation errors and how to assimilate observations from such sources. To investigate both of these questions we first consider a well-understood case from the Tewkesbury, UK, floods in 2012, where we assimilate data obtained from four Farson Digital cameras on the rivers Avon and Severn. We present our findings using CCTV images in data assimilation applied to urban flood forecasting and discuss how such images can offer valuable information in systems as complex as cities.

Arthur Vidard - Assessment of approximate 4D-Var schemes for ocean reanalysis

Title: Assessment of approximate 4D-Var schemes for ocean reanalysis

 

Author: A. Vidard (Inria)

 

Due to the heavy development it requires and its significant additional computing cost compared to 3D-Var, 4D-Var represents quite an investment. The incremental 4D-Var formulation allows, to some extent, for the use of degraded tangent linear and adjoint models in the inner loops. 3D-FGAT is an extreme simplification with both models set to the identity; other common approaches use simplified physics and/or degraded resolution. The relevance of such simplifications depends on many factors, the main ones being the resolution of the model, the assimilation window length and the type of observations assimilated.

 

The convergence of such algorithms is not guaranteed, but one can obtain sufficient conditions from the theory and then derive useful diagnostics to assess the quality of a given approximation. This is illustrated in an ocean context, using several combinations of model configurations, data to be assimilated, and the above-mentioned inner approximations, in order to foresee their performance in a reanalysis context. Year-long ocean reanalyses in the same contexts will also be shown to illustrate the pertinence of such an approach.

Joanne Waller - Doppler radial wind spatially correlated observation error: operational implementation and initial results

Title: Doppler radial wind spatially correlated observation error: operational implementation and initial results

 

Authors: D. Simonin (Met Office)

J. A. Waller (University of Reading)

S. P. Ballard (Met Office)

S. L. Dance (University of Reading)

N. K. Nichols (University of Reading)

 

In recent years the improved treatment of correlated inter-channel observation errors in data assimilation has been shown to improve the analysis accuracy and forecast skill scores at a number of operational centers. This has motivated research that shows that observation errors also exhibit spatial correlations. One such set of observations that are routinely assimilated into the Met Office convection-permitting numerical weather prediction model are Doppler radar radial winds (DRWs). Currently, DRW errors are assumed uncorrelated and to avoid violating this assumption the observation density is severely reduced. To improve the quantity of observations used and the impact that they have on the forecast requires the introduction of full, potentially correlated, error statistics.

We describe the Met Office software developments that allow the use of spatially correlated observation error statistics for DRWs. These developments include a change of parallelization strategy and modification of the existing code to permit the use of non-diagonal observation error covariance matrices. Initial experiments using correlated DRW error statistics are then presented. We find that the inclusion of correlated DRW error statistics has little impact on the wall-clock time taken for data assimilation computations, even with a four-fold increase in the number of DRW observations assimilated. The use of correlated observation error statistics with denser observations produces increments with shorter length scales than the control. Initial forecast trials show a neutral to positive impact on forecast skill overall, and in particular for quantitative precipitation forecasts.

 

 

Akira Yamazaki - Using the Ensemble Forecast Sensitivity to Observations (EFSO) technique for global observing system experiments (OSEs)

Title: Using the Ensemble Forecast Sensitivity to Observations (EFSO) technique for global observing system experiments (OSEs)

Authors: Akira Yamazaki (Japan Agency for Marine-Earth Science and Technology (JAMSTEC))

Takemasa Miyoshi (RIKEN Advanced Institute for Computational Science (AICS))

Takeshi Enomoto (Kyoto University)

Nobumasa Komori (JAMSTEC)

Jun Inoue (National Institute of Polar Research (NIPR))

 

A global atmospheric data assimilation system called ALEDAS, comprising AFES (an atmospheric GCM) and the LETKF (an ensemble Kalman filter), has been developed and used to generate an experimental global ensemble reanalysis and to conduct several OSE studies assessing the impacts of special observations obtained during observational campaigns, especially over the Arctic and subtropical oceans. We have also performed weather predictability studies using the ensemble reanalysis and/or the OSE reanalyses as initial values for AFES. Recently, a diagnostic technique called Ensemble Forecast Sensitivity to Observations (EFSO), which can quantify offline how much each observation has improved or degraded the forecast without a data-denial OSE, has been implemented in ALEDAS. In our presentation, the EFSO estimates are compared with actual data-denial experiments, and we discuss whether the estimates can be helpful for our future global OSE studies.

Nedjeljka Žagar - Growth of forecast errors in global NWP models and inertia-gravity wave dynamics

Title: Growth of forecast errors in global NWP models and inertia-gravity wave dynamics

 

Author: Nedjeljka Žagar  (University of Ljubljana, Slovenia)

 

One way to analyze the scale-dependent growth of errors in global NWP and ensemble prediction systems is the representation of  model dynamical fields in terms of the 3D orthogonal normal modes built by the Hough harmonics. An advantage of the Hough harmonics in comparison to other spectral bases is their representation of dynamics associated with Rossby waves and inertia-gravity waves. This is particularly suitable in the tropics where only a few Hough modes describe a large part of variability.

 

In this study, the spectra of forecast errors derived from ensembles of forecasts and from analysis-forecast data of ECMWF are compared for the Rossby and inertia-gravity components. Differences in the spectra are related to the method used to create the 4D-Var analysis ensemble, which produces the largest initial uncertainties in the tropics. The comparison of the two 2D spectral distributions reveals missing variability in the ensemble prediction system in relation to dynamics. The main goal is to explain the different directions of error growth in Rossby and inertia-gravity waves at synoptic and subsynoptic scales.

 

Žiga Zaplotnik - Inertio-gravity waves in 4D-Var

Title: Inertio-gravity waves in 4D-Var

 

Authors: Žiga Zaplotnik (University of Ljubljana, Faculty of Mathematics and Physics)

Nedjeljka Žagar (University of Ljubljana, Faculty of Mathematics and Physics)

 

 

Practical applications of four-dimensional variational assimilation require simplifications of the background error covariance matrix. In the special case of a diagonal B-matrix, forecast error variances are associated with the eigensolutions of the linearized model equations. Mass-wind coupling based on the eigensolutions of the linearized primitive equations has been employed for the modelling of the B-matrix in a number of ways. In most cases, it has been applied only to the Rossby waves, either on the sphere or on the beta-plane. We explore 4D-Var dynamics in the case when the inertio-gravity (IG) wave mass-wind coupling is explicitly applied for the error covariances in addition to the Rossby wave coupling.

The Rossby waves are slowly propagating, vorticity-dominated waves, whereas the IG waves are fast-propagating, divergence-dominated oscillations. This makes their individual impacts on 4D-Var dynamics in the tangent linear and adjoint integrations substantially different. We explore these effects in a moist atmosphere using a simplified framework of a single vertical mode in the tropics. Our moist tropical model allows for a rich spectrum of 4D-Var effects as it supports two additional wave motions, the Kelvin and mixed Rossby-gravity waves propagating along the equator. All effects present in the dry or unsaturated atmosphere are amplified in the presence of moisture. The mirroring effect of the adjoint integration on the IG waves is relevant if one would like to impose any IG-related constraint in the B-matrix modelling. We use sensitivity experiments to study the impact of the IG mass-wind coupling on analysis increments in moist 4D-Var in relation to the flow nonlinearity.

 

Zoë Brooke Zibton - Adjoint sensitivity diagnosis of the intensification of Hurricane Harvey

Title: Adjoint Sensitivity Diagnosis of the Intensification of Hurricane Harvey

 

Authors: Zoë Brooke Zibton (Department of Atmospheric and Oceanic Sciences, University of Wisconsin-Madison)

Michael Morgan (Department of Atmospheric and Oceanic Sciences, University of Wisconsin-Madison)

Brett Hoover (Cooperative Institute for Meteorological Satellite Studies (CIMSS), Madison, WI)

 

Hurricane Harvey was a record-setting tropical cyclone that underwent rapid intensification prior to landfall in Texas. While numerical weather prediction models anticipated some intensification prior to landfall, the intensity and the rate of intensification were underestimated by most models. This presentation will show results of an adjoint-derived diagnosis of the sensitivity of intensity and intensification rate to the initial conditions of simulations of the hurricane 24 to 36 h before landfall. The response functions chosen for this study include the perturbation dry air mass in a column above the tropical cyclone center, the vorticity averaged around the TC center, and the pressure tendency following the cyclone center. Regions of high-amplitude sensitivity indicate where small initial condition uncertainties or errors could impact the intensity or intensification rate. Optimal perturbations are derived from the forecast sensitivities to intentionally influence the intensification rate of the simulated hurricane. Diagnosis of the perturbation evolution points to key dynamical processes influencing TC intensification (rate). The implications of the results of these adjoint studies for the efficacy of targeted observing for this cyclone will be discussed.