This work is distributed under the Creative Commons Attribution 4.0 License.
Revising the stochastic iterative ensemble smoother
Patrick Nima Raanes
Andreas Størksen Stordal
Geir Evensen
Ensemble randomized maximum likelihood (EnRML) is an iterative (stochastic) ensemble smoother, used for large and nonlinear inverse problems, such as history matching and data assimilation. Its current formulation is overly complicated and has issues with computational costs, noise, and covariance localization, even causing some practitioners to omit crucial prior information. This paper resolves these difficulties and streamlines the algorithm without changing its output. These simplifications are achieved through the careful treatment of the linearizations and subspaces. For example, it is shown (a) how ensemble linearizations relate to average sensitivity and (b) that the ensemble does not lose rank during updates. The paper also draws significantly on the theory of the (deterministic) iterative ensemble Kalman smoother (IEnKS). Comparative benchmarks are obtained with the Lorenz 96 model with these two smoothers and the ensemble smoother using multiple data assimilation (ESMDA).
Ensemble (Kalman) smoothers are approximate methods used for data assimilation (state estimation in geoscience), history matching (parameter estimation for petroleum reservoirs), and other inverse problems constrained by partial differential equations. Iterative forms of these smoothers, derived from optimization perspectives, have proven useful in improving the estimation accuracy when the forward operator is nonlinear. Ensemble randomized maximum likelihood (EnRML) is one such method.
This paper rectifies several conceptual and computational complications with EnRML, detailed in Sect. 1.1. As emphasized in Sect. 1.2, these improvements are largely inspired by the theory of the iterative ensemble Kalman smoother (IEnKS). Readers unfamiliar with EnRML may jump to the beginning of the derivation, starting in Sect. 2, which defines the inverse problem and the idea of the randomized maximum likelihood method. Section 3 derives the new formulation of EnRML, which is summarized by Algorithm 1 of Sect. 3.7. Section 4 shows benchmark experiments obtained with various iterative ensemble smoothers. Appendix A provides proofs of some of the mathematical results used in the text.
1.1 Ensemble randomized maximum likelihood (EnRML): obstacles
The Gauss–Newton variant of EnRML was given by Gu and Oliver (2007) and Chen and Oliver (2012), with an important precursor from Reynolds et al. (2006). This version explicitly requires the ensemble-estimated “model sensitivity” matrix, herein denoted M̄_i. As detailed in Sect. 3, this is problematic because M̄_i is noisy and requires the computation of the pseudoinverse of the “anomalies”, X_i^+, for each iteration, i.
A Levenberg–Marquardt variant was proposed in the landmark paper of Chen and Oliver (2013b). Its main originality is a partial resolution to the above issue by modifying the Hessian matrix (beyond the standard trust-region step regularization): the prior ensemble covariance matrix is replaced by the posterior covariance (of iteration i): C̄_x ← C̄_{x,i}. Now the Kalman gain form of the likelihood increment is “vastly simplified” because the linearization M̄_i only appears in the product M̄_iC̄_{x,i}M̄_i^T, which does not require X_i^+. For the prior increment, on the other hand, the modification breaks its Kalman gain form. Meanwhile, the precision matrix form, i.e. their Eq. (10), is already invalid because it requires the inverse of C̄_{x,i}. Still, in their Eq. (15), the prior increment is formulated with an inversion in ensemble space and also unburdened of the explicit computation of M̄_i. Intermediate explanations are lacking but could be construed to involve approximate inversions. Another issue is that the pseudoinverse of C̄_x is now required (via X), and covariance localization is further complicated.
An approximate version was therefore also proposed in which the prior mismatch term is omitted from the update formula altogether. This is not principled and severely aggravates the chance of overfitting and poor prediction skill. Therefore, unless the prior mismatch term is relatively insignificant, overfitting must be prevented by limiting the number of steps or by clever stopping criteria. Nevertheless, this version has received significant attention in history matching.
This paper revises EnRML; without any of the above tricks, we formulate the algorithm such that there is no explicit computation of M̄_i and show how the product M̄_iX may be computed without any pseudoinversions of the matrix of anomalies. Consequently, the algorithm is simplified, computationally and conceptually, and there is no longer any reason to omit the prior increment. Moreover, the Levenberg–Marquardt variant is a trivial modification of the Gauss–Newton variant. The above is achieved by improvements to the derivation, notably by (a) improving the understanding of the sensitivity (i.e. linearizations) involved, (b) explicitly and rigorously treating issues of rank deficiency and subspaces, and (c) avoiding premature insertion of singular value decompositions (SVDs).
1.2 Iterative ensemble Kalman smoother (IEnKS)
The contributions of this paper (listed by the previous paragraph) are original but draw heavily on the theory of the IEnKS of Sakov et al. (2012) and Bocquet and Sakov (2012, 2014). Relevant precursors include Zupanski (2005) as well as the iterative, extended Kalman filter (e.g. Jazwinski, 1970).
It is informally known that EnRML can be seen as a stochastic flavour of the IEnKS (Sakov et al., 2012). Indeed, while the IEnKS update takes the form of a deterministic, “square-root” transformation, based on a single objective function, EnRML uses stochastic “perturbed observations” associated with an ensemble of randomized objective functions.
Another notable difference is that the IEnKS was developed in the atmospheric literature, while EnRML was developed in the literature on subsurface flow. Thus, typically, the IEnKS is applied to (sequential) state estimation problems such as filtering for chaotic dynamical systems, while EnRML is applied to (batch) parameter estimation problems, such as nonlinear inversion for physical constants and boundary conditions. For these problems, EnRML is sometimes referred to as the iterative ensemble smoother (IES). As shown by Gu and Oliver (2007), however, EnRML is easily reformulated for the sequential problem, and, vice versa, the IEnKS may be formulated for the batch problem.
The improvements to the EnRML algorithm herein render it very similar to the IEnKS also in computational cost, thus fully establishing that EnRML is the stochastic “counterpart” to the IEnKS. In spite of the similarities, the theoretical insights and comparative experiments of this paper should make it interesting also for readers already familiar with the IEnKS.
Randomized maximum likelihood (RML; Kitanidis, 1995; Oliver, 1996; Oliver et al., 2008) is an approximate solution approach to a class of inverse problems. The form of RML described here is a simplification, common for large inverse problems, without the use of a correction step (such as that of Metropolis–Hastings). This restricts the class of problems for which it is unbiased but makes it more tractable (Oliver, 2017). Similar methods were proposed and studied by Bardsley et al. (2014), Liu et al. (2017), and Morzfeld et al. (2018).
2.1 The inverse problem
Consider the problem of estimating the unknown, high-dimensional state (or parameter) vector x∈ℝ^{M} given the observation y∈ℝ^{P}. It is assumed that the (generic and typically nonlinear) forward observation process may be approximated by a computational model, ℳ, so that

y = ℳ(x) + δ,  (1)

where the error, δ, is random and gives rise to a likelihood, p(y|x).
In the Bayesian paradigm, prior information is quantified as a probability density function (PDF) called the prior, denoted p(x), and the truth, x, is considered to be a draw thereof. The inverse problem then consists of computing and representing the posterior which, in principle, is given by pointwise multiplication:

p(x|y) ∝ p(y|x) p(x),  (2)

quantifying the updated estimation of x. Due to the noted high dimensionality and nonlinearity, this can be challenging, necessitating approximate solutions.
The prior is assumed Gaussian (normal), with mean μ_x and covariance C_x, i.e.

p(x) = 𝒩(x | μ_x, C_x).  (3)
For now, the prior covariance matrix, C_x∈ℝ^{M×M}, is assumed invertible such that the corresponding norm, ‖x‖²_{C_x} = x^TC_x^{-1}x, is defined. Note that vectors are taken to have column orientation and that x^T denotes the transpose.
The observation error, δ, is assumed drawn from

δ ∼ 𝒩(0, C_δ),  (4)

whose covariance, C_δ∈ℝ^{P×P}, will always be assumed invertible. Then, assuming that δ and x are independent and recalling Eq. (1),

p(y|x) = 𝒩(y | ℳ(x), C_δ).  (5)
2.2 Randomize, then optimize
The Monte Carlo approach offers a convenient representation of distributions as samples. Here, the prior is represented by the “prior ensemble”, {x_n}_{n=1}^N, whose members (sample points) are assumed independently drawn from it. RML is an efficient method to approximately “condition” (i.e. implement Eq. 2 on) the prior ensemble, using optimization. Firstly, an ensemble of perturbed observations, {y_n}_{n=1}^N, is generated as y_n = y + δ_n, where δ_n is independently drawn according to Eq. (4).
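In code, this randomization step might look as follows (a minimal NumPy sketch; the dimensions, seed, and covariances are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
M, P, N = 3, 2, 50                     # state dim, obs dim, ensemble size (illustrative)

# Prior ensemble: N independent draws from N(mu_x, C_x)
mu_x = np.zeros(M)
C_x = np.eye(M)
E = rng.multivariate_normal(mu_x, C_x, size=N).T            # M x N ensemble matrix

# Perturbed observations: y_n = y + delta_n, with delta_n ~ N(0, C_delta)
y = np.array([1.0, -0.5])
C_delta = 0.25 * np.eye(P)
D = y[:, None] + rng.multivariate_normal(np.zeros(P), C_delta, size=N).T   # P x N
```

Each column of D is one randomized observation y_n, paired with one prior member x_n.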
Then, the nth “randomized log posterior”, J_{x,n}, is defined by Bayes' rule (Eq. 2), except with the prior mean and the observation being replaced by the nth members of the prior and observation ensembles:

J_{x,n}(x) = (1/2)‖x − x_n‖²_{C_x} + (1/2)‖y_n − ℳ(x)‖²_{C_δ}.  (6)

The two terms are referred to as the model mismatch (log prior) and data mismatch (log likelihood), respectively.
Finally, these log posteriors are minimized. Using the Gauss–Newton iterative scheme (for example) requires (Eq. 7a) its gradient and (Eq. 7b) its Hessian approximated by first-order model expansions, both evaluated at the current iterate, labelled x_{n,i} for each member n and iteration i. To simplify the notation, define x_× = x_{n,i}. Objects evaluated at x_× are similarly denoted; for instance, M_× = ℳ′(x_×)∈ℝ^{P×M} denotes the Jacobian matrix of ℳ evaluated at x_×, and

∇J_{x,n}(x_×) = C_x^{-1}(x_× − x_n) − M_×^TC_δ^{-1}(y_n − ℳ(x_×)),  (7a)
∇²J_{x,n}(x_×) ≈ C_x^{-1} + M_×^TC_δ^{-1}M_×.  (7b)
Application of the Gauss–Newton scheme yields

x_{n,i+1} = x_× + Δ^prior_{n,i} + Δ^lklhd_{n,i},  (8)

where the prior (or model) and likelihood (or data) increments are respectively given by

Δ^prior_{n,i} = −C_×C_x^{-1}(x_× − x_n),  (9a)
Δ^lklhd_{n,i} = C_×M_×^TC_δ^{-1}(y_n − ℳ(x_×)),  (9b)

which can be called the “precision matrix” form; here, C_× = (C_x^{-1} + M_×^TC_δ^{-1}M_×)^{-1} is the inverse of the Hessian (Eq. 7b).
Alternatively, by corollaries of the well-known Woodbury matrix identity, the increments can be written in the “Kalman gain” form:

Δ^prior_{n,i} = (I_M − K_×M_×)(x_n − x_×),  (10a)
Δ^lklhd_{n,i} = K_×(y_n − ℳ(x_×)),  (10b)

where I_M∈ℝ^{M×M} is the identity matrix, and K_×∈ℝ^{M×P} is the gain matrix,

K_× = C_xM_×^TC_y^{-1},  (11)

with

C_y = M_×C_xM_×^T + C_δ.  (12)
As the subscript suggests, C_y may be identified (in the linear case) as the prior covariance of the observation, y, of Eq. (1); it is also the covariance of the innovation, y−ℳ(μ_x). Note that if P≪M, then the inversion of C_y∈ℝ^{P×P} for the Kalman gain form, Eq. (10a) and (10b), is significantly cheaper than the inversion of C_×∈ℝ^{M×M} for the precision matrix form, Eq. (9a) and (9b).
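The equivalence of the two forms (a Woodbury corollary) is easy to verify numerically; in this sketch the linearization and covariances are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
M_dim, P = 8, 3
Mx = rng.standard_normal((P, M_dim))            # stand-in for the Jacobian M_x
A = rng.standard_normal((M_dim, M_dim))
C_x = A @ A.T + np.eye(M_dim)                   # SPD prior covariance
B = rng.standard_normal((P, P))
C_d = B @ B.T + np.eye(P)                       # SPD observation-error covariance

# Kalman gain form: inverts the P x P matrix C_y = M C_x M^T + C_d
C_y = Mx @ C_x @ Mx.T + C_d
K_gain = C_x @ Mx.T @ np.linalg.inv(C_y)

# Precision-matrix form: inverts the M x M (approximate) Hessian
H = np.linalg.inv(C_x) + Mx.T @ np.linalg.inv(C_d) @ Mx
K_prec = np.linalg.inv(H) @ Mx.T @ np.linalg.inv(C_d)

assert np.allclose(K_gain, K_prec)              # the two gains coincide
```

Only the sizes of the matrices inverted differ, which is the computational point made above.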
EnRML is an approximation of RML in which the ensemble is used in its own update by estimating C_{x} and M_{×}. This section derives EnRML and gradually introduces the new improvements.
Computationally, compared to RML, EnRML offers the simultaneous benefits of working with low-rank representations of covariances and not requiring a tangent-linear (or adjoint) model. Both advantages will be further exploited in the new formulation of EnRML.
Concerning their sampling properties, a few points can be made. Firstly (due to the ensemble covariance), EnRML is biased for finite N, even for a linear-Gaussian problem for which RML will sample the posterior correctly. This bias arises for the same reasons as in the ensemble Kalman filter (EnKF; van Leeuwen, 1999; Sacher and Bartello, 2008). Secondly (due to the ensemble linearization), EnRML effectively smooths the likelihood. It is therefore less prone to getting trapped in local maxima of the posterior (Chen and Oliver, 2012). Sakov et al. (2018) explain this by drawing an analogy to the secant method, as compared to the Newton method. Hence, it may reasonably be expected that EnRML yields constructive results if the probability mass of the exact posterior is concentrated around its global maximum. Although this regularity condition is rather vague, it would require that the model be “not too nonlinear” in this neighbourhood. Conversely, EnRML is wholly inept at reflecting multimodality introduced through the likelihood, so RML may be better suited when local modes feature prominently, as is quite common in problems of subsurface flow (Oliver and Chen, 2011). However, while RML has the ability to sample multiple modes, it is difficult to predict the extent to which their relative proportions will be accurate (without the costly use of a correction step such as Metropolis–Hastings). Further comparison of the sampling properties of RML and EnRML was done by Evensen (2018).
3.1 Ensemble preliminaries
For convenience, define the concatenations:

E = [x_1, …, x_N]∈ℝ^{M×N},  (13)
D = [δ_1, …, δ_N]∈ℝ^{P×N},  (14)

which are known as the “ensemble matrix” and the “perturbation matrix”, respectively.
Projections sometimes appear through the use of linear regression. We therefore recall (Trefethen and Bau, 1997) that a (square) matrix Π is an orthogonal projector if

ΠΠ = Π and Π^T = Π.  (15)
For any matrix A, let Π_A denote the projector whose image is the column space of A, implying that

Π_AA = A.  (16)

Equivalently, Π_A^⊥A = 0, where Π_A^⊥ = I − Π_A is called the complementary projector. The (Moore–Penrose) pseudoinverse, A^+, may be used to express the projector

Π_A = AA^+ = (A^+)^TA^T.  (17)

Here, the second equality follows from the first by Eq. (15) and (A^+)^T = (A^T)^+. The formulae simplify further in terms of the SVD of A.
Now, denote 1∈ℝ^N the (column) vector of ones. The matrix of anomalies, X∈ℝ^{M×N}, is defined and computed by subtracting the ensemble mean, x̄ = E1/N, from each column of E. It should be appreciated that this amounts to the projection

X = EΠ_1^⊥,  (18)

where Π_1^⊥ = I_N − Π_1, with Π_1 = 11^T/N.
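A small sketch confirming that the projection and the mean subtraction compute the same anomalies (the dimensions are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
M_dim, N = 4, 6
E = rng.standard_normal((M_dim, N))               # ensemble matrix

ones = np.ones((N, 1))
Pi_1 = ones @ ones.T / N                          # projection onto span{1}
Pi_1c = np.eye(N) - Pi_1                          # complementary projector

X = E @ Pi_1c                                     # anomalies via projection
X_sub = E - E.mean(axis=1, keepdims=True)         # anomalies via mean subtraction

assert np.allclose(X, X_sub)                      # the two constructions agree
assert np.allclose(X @ np.ones(N), 0.0)           # each row of X sums to zero
```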
Definition 1: the ensemble subspace. The flat (i.e. affine subspace) given by {x∈ℝ^M : x − x̄ ∈ col(X)}.
Similarly to Sect. 2, the iteration index (i>0) subscripting on E, X, and other objects is used to indicate that they are conditional (i.e. posterior). The iterations are initialized with the prior ensemble: ${\mathit{x}}_{n,\mathrm{0}}={\mathit{x}}_{n}$.
3.2 The constituent estimates
The ensemble estimates of C_x and M_× are the building blocks of the EnRML algorithm. The canonical estimators are used, namely the sample covariance (Eq. 19a) and the least-squares linear-regression coefficients (Eq. 19b). They are denoted with the overhead bar:

C̄_x = XX^T/(N−1),  (19a)
M̄_i = ℳ(E_i)X_i^+.  (19b)
The anomalies at iteration i are again given by X_i = E_iΠ_1^⊥, usually computed by subtraction of x̄_i. The matrix ℳ(E_i) is defined by the columnwise application of ℳ to the ensemble members. Conventionally, ℳ(E_i) would also be centred in Eq. (19b), i.e. multiplied on the right by Π_1^⊥. However, this operation (and notational burden) can be neglected because Π_1^⊥X_i^+ = X_i^+, which follows from Π(AΠ)^+ = (AΠ)^+ (valid for any matrix A and projector Π, as shown by Maciejewski and Klein, 1985).
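The identity Π(AΠ)^+ = (AΠ)^+ can be illustrated numerically (random A; Π taken here to be the centering projector Π_1^⊥, an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6
A = rng.standard_normal((4, N))
ones = np.ones((N, 1))
Pi = np.eye(N) - ones @ ones.T / N               # orthogonal projector Pi_1-perp

# Columns of (A Pi)^+ lie in the row space of A Pi, hence inside col(Pi):
lhs = Pi @ np.linalg.pinv(A @ Pi)
rhs = np.linalg.pinv(A @ Pi)
assert np.allclose(lhs, rhs)                     # Pi (A Pi)^+ = (A Pi)^+
```

This is why the centering of ℳ(E_i) in Eq. (19b) is redundant.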
Note that the linearization (previously M_×, now M̄_i) no longer depends on the ensemble index, n. Indeed, it has been called “average sensitivity” since the work of Zafari and Reynolds (2005), Reynolds et al. (2006), and Gu and Oliver (2007). However, this intuition has not been rigorously justified.^1 This is accomplished by the following theorem.
Theorem 1. Suppose the ensemble is drawn from a Gaussian. Then

M̄ → 𝔼[ℳ′(x)] as N → ∞,  (20)

with “almost sure” convergence, and expectation (𝔼) in x, which has the same distribution as the ensemble members. Regularity conditions and proof may be found in Appendix A.
A corollary of Theorem 1 is that M̄ ≈ (1/N)∑_{n=1}^N ℳ′(x_n), justifying the “average sensitivity, derivative, or gradient” description. The theorem applies for the ensemble of any Gaussian, and hence also holds for M̄_i. On the other hand, the generality of Theorem 1 is restricted by the Gaussianity assumption. Thus, for generality and precision, M̄_i should simply be labelled the “least-squares (linear) fit” of ℳ, based on E_i.
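As a Monte Carlo illustration of Theorem 1, consider the scalar toy model ℳ(x) = x² (an assumption of this sketch) with x ∼ 𝒩(μ, 1); the least-squares slope should tend to 𝔼[ℳ′(x)] = 𝔼[2x] = 2μ, i.e. 2 for μ = 1:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000
x = rng.normal(1.0, 1.0, N)                      # Gaussian ensemble, mu = 1, sigma = 1
Mx = x**2                                        # nonlinear toy model M(x) = x^2

# Least-squares regression slope of M(x) on x (scalar ensemble linearization)
Xa = x - x.mean()                                # anomalies
Ya = Mx - Mx.mean()
M_bar = (Ya @ Xa) / (Xa @ Xa)

# Theorem 1: M_bar -> E[M'(x)] = 2 as N grows
assert abs(M_bar - 2.0) < 0.05
```

Note the regression smooths the nonlinearity: the fitted slope is the average derivative, not the derivative at the mean.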
Note that the computation (Eq. 19b) of M̄_i seemingly requires calculating a new pseudoinverse, X_i^+, at each iteration, i; this is addressed in Sect. 3.6.
The prior covariance estimate (previously C_x, now C̄_x) is not assumed invertible, in contrast to Sect. 2. It is then not possible to employ the precision matrix forms, Eq. (9a) and (9b), because C̄_x^{-1} is not defined. Using C̄_x^+ in its stead is flawed and damaging because it is zero in the directions orthogonal to the ensemble subspace, so its use would imply that the prior is assumed infinitely uncertain (i.e. flat) as opposed to infinitely certain (like a delta function) in those directions. Instead, one should employ ensemble subspace formulae or, equivalently (as shown in the following, using corollaries of the Woodbury identity), the Kalman gain form.
3.3 Estimating the Kalman gain
The ensemble estimates, Eq. (19a) and (19b), are now substituted into the Kalman gain form of the update (Eq. 10a and 10b to Eq. 12). The ensemble estimate of the gain matrix, denoted K̄_i, thus becomes

K̄_i = XY_i^T(Y_iY_i^T + (N−1)C_δ)^{-1},  (21)

where Y_i∈ℝ^{P×N} has been defined as the prior (i.e. unconditioned) anomalies under the action of the ith iterate linearization:

Y_i = M̄_iX.  (22)
A Woodbury corollary can be used to express K̄_i as

K̄_i = XC̄_{w,i}Y_i^TC_δ^{-1},  (23)

with

C̄_{w,i} = (Y_i^TC_δ^{-1}Y_i + (N−1)I_N)^{-1}.  (24)

The reason for labelling this matrix with the subscript w is revealed later. For now, note that, in the common case of N≪P, the inversion in Eq. (24) is significantly cheaper than the inversion in Eq. (21). Another computational benefit is that C̄_{w,i} is nondimensional, improving the conditioning of the optimization problem (Lorenc, 1997).
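The saving can be checked numerically; the sketch below assumes the standard ensemble forms suggested by the text (a P×P inversion for the gain of Eq. 21 versus the N×N inversion of Eq. 24), with random stand-ins for X, Y_i, and C_δ:

```python
import numpy as np

rng = np.random.default_rng(4)
M_dim, P, N = 5, 40, 8
X = rng.standard_normal((M_dim, N))
X -= X.mean(axis=1, keepdims=True)               # prior anomalies (centered)
Y = rng.standard_normal((P, N))
Y -= Y.mean(axis=1, keepdims=True)               # stand-in for Y_i = M_bar_i X
B = rng.standard_normal((P, P))
C_d = B @ B.T + np.eye(P)

# Gain with the P x P inversion
K1 = X @ Y.T @ np.linalg.inv(Y @ Y.T + (N - 1) * C_d)

# Gain with the N x N "ensemble subspace" inversion
C_d_inv = np.linalg.inv(C_d)
C_w = np.linalg.inv(Y.T @ C_d_inv @ Y + (N - 1) * np.eye(N))
K2 = X @ C_w @ Y.T @ C_d_inv

assert np.allclose(K1, K2)                       # push-through (Woodbury) equivalence
```

The equivalence follows from the push-through identity Y^T(YY^T + cC_δ)^{-1} = (Y^TC_δ^{-1}Y + cI_N)^{-1}Y^TC_δ^{-1}.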
In conclusion, the likelihood increment (Eq. 10b) is now estimated as

Δ̄^lklhd_{n,i} = K̄_i(y_n − ℳ(x_×)).  (25)

This is efficient because M̄_i does not explicitly appear in K̄_i (neither in Eq. 21 nor Eq. 23) even though it is implicitly present through Y_i (Eq. 22), where it multiplies X. This absence (a) is reassuring, as the product Y_i constitutes a less noisy estimate than just M̄_i alone (Chen and Oliver, 2012; Emerick and Reynolds, 2013b, their Figs. 2 and 27, respectively), (b) constitutes a computational advantage, as will be shown in Sect. 3.6, and (c) enables leaving the type of linearization made for ℳ unspecified, as is usually the case in EnKF literature.
3.4 Estimating the prior increment
In contrast to the likelihood increment (Eq. 10b), the Kalman gain form of the prior increment (Eq. 10a) explicitly contains the sensitivity matrix, M_×. This issue was resolved by Bocquet and Sakov (2012) in their refinement of Sakov et al. (2012) by employing the change of variables:

x(w) = x̄ + Xw,  (26)

where w∈ℝ^N is called the ensemble “controls” (Bannister, 2016), also known as the ensemble “weights” (Ott et al., 2004) or “coefficients” (Bocquet and Sakov, 2013).
Denote w_×, an ensemble coefficient vector, such that x(w_×) = x_×, and note that x(e_n) = x_n, where e_n is the nth column of the identity matrix. Thus, x_n − x_× = X(e_n − w_×), and the prior increment (Eq. 10a) with the ensemble estimates becomes

Δ̄^prior_{n,i} = (I_M − K̄_iM̄_i)X(e_n − w_×),  (27)

where there is no explicit M̄_i, which only appears implicitly through Y_i = M̄_iX, as defined in Eq. (22). Alternatively, applying the subspace formula (Eq. 23) and using I_N = C̄_{w,i}(C̄_{w,i})^{-1} yields

Δ̄^prior_{n,i} = (N−1)XC̄_{w,i}(e_n − w_×).  (28)
3.5 Justifying the change of variables
Lemma 1 (closure). Suppose E_{i} is generated by EnRML. Then, each member (column) of E_{i} is in the (prior) ensemble subspace. Moreover, col(X_{i})⊆col(X).
Lemma 1 may be proven by noting that X is the leftmost factor in K̄_i and using induction on Eqs. (10a) and (10b). Alternatively, it can be deduced (Raanes et al., 2019) as a consequence of the implicit assumption on the prior that x ∼ 𝒩(x̄, C̄_x). A stronger result, namely col(X_i) = col(X), is conjectured in Appendix A, but Lemma 1 is sufficient for the present purposes: it implies that w_×∈ℝ^N exists such that x(w_×) = x_× for any ensemble member and any iteration. Thus, the lemma justifies the change of variables (Eq. 26).
Moreover, using the ensemble coefficient vector (w) is theoretically advantageous, as it inherently embodies the restriction to the ensemble subspace. A practical advantage is that w is relatively low-dimensional compared to x, which lowers storage and accessing expenses.
3.6 Simplifying the regression
Recall the definition of Eq. (22): Y_i = M̄_iX. Avoiding the explicit computation of M̄_i in this product between the iteration-i estimate M̄_i and the initial (prior) X was the motivation behind the modification C̄_x ← C̄_{x,i} by Chen and Oliver (2013b). Here, instead, by simplifying the expression of the regression, it is shown how to compute Y_i without first computing M̄_i.
3.6.1 The transform matrix
Inserting the regression M̄_i (Eq. 19b) into the definition (Eq. 22),

Y_i = ℳ(E_i)X_i^+X = ℳ(E_i)T_i^+,  (29)

where T_i^+ = X_i^+X has been defined, apparently requiring the pseudoinversion of X_i for each i. But, as shown in Appendix A2,

T_i = X^+X_i,  (30)

which only requires the one-time pseudoinversion of the prior anomalies, X. Then, since the pseudoinversion of T_i∈ℝ^{N×N} for Y_i (Eq. 29) is a relatively small calculation, this saves computational time.
The symbol T has been chosen in reference to deterministic, square-root EnKFs. Indeed, multiplying Eq. (30) on the left by X and recalling Eq. (17) and Lemma 1 produces X_i = XT_i. Therefore, the “transform matrix”, T_i, describes the conditioning of the anomalies (and covariance).
Conversely, Eq. (29) can be seen as the “deconditioning” of the posterior observation anomalies. This interpretation of Y_{i} should be contrasted to its definition (22), which presents it as the prior state anomalies “propagated” by the linearization of iteration i. The two approaches are known to be “mainly equivalent” in the deterministic case (Sakov et al., 2012). To our knowledge, however, it has not been exploited for EnRML before now, possibly because the proofs (Appendix A2) are a little more complicated in this stochastic case.
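A quick numerical check of the transform-matrix relations (random matrices as stand-ins; the generic case rank(X_i) = rank(X), as conjectured in Appendix A, is assumed):

```python
import numpy as np

rng = np.random.default_rng(5)
M_dim, N = 10, 6                                  # N - 1 < M: typical rank-deficient case
X = rng.standard_normal((M_dim, N))
X -= X.mean(axis=1, keepdims=True)                # prior anomalies, rank N - 1

W = np.eye(N) + 0.3 * rng.standard_normal((N, N))
Pi = np.eye(N) - np.ones((N, N)) / N
X_i = X @ W @ Pi                                  # iterate anomalies; col(X_i) in col(X)

T_i = np.linalg.pinv(X) @ X_i                     # transform matrix, Eq. (30)
assert np.allclose(X @ T_i, X_i)                  # X_i = X T_i
assert np.allclose(np.linalg.pinv(T_i),
                   np.linalg.pinv(X_i) @ X)       # T_i^+ = X_i^+ X, Eq. (30) vs. (29)
```

Only the prior X is pseudo-inverted against the large dimension M; everything else is N×N.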
3.6.2 From the ensemble coefficients
The ensemble matrix of iteration i can be written as follows:

E_i = x̄1^T + XW_i,  (31)

where the columns of W_i∈ℝ^{N×N} are the ensemble coefficient vectors (Eq. 26). Multiplying Eq. (31) on the right by Π_1^⊥ to get the anomalies produces

X_i = XW_iΠ_1^⊥.  (32)

This seems to indicate that W_iΠ_1^⊥ is the transform matrix, T_i, discussed in the previous subsection. However, they are not fully equal: inserting X_i from Eq. (32) into Eq. (30) yields

T_i = Π_{X^T}W_iΠ_1^⊥,  (33)

showing that they are distinguished by Π_{X^T} = X^+X: the projection onto the row space of X.
Appendix A3 shows that, in most conditions, this pesky projection matrix vanishes when T_i is used in Eq. (29):

Y_i = ℳ(E_i)(W_iΠ_1^⊥)^+ if ℳ is linear or N−1 ≤ M.  (34)

In other words, the projection Π_{X^T} can be omitted unless ℳ is nonlinear and the ensemble is larger than the unknown state's dimensionality.
A well-known result of Reynolds et al. (2006) is that the first step of the EnRML algorithm (with W_0 = I_N) is equivalent to the EnKF. However, this is only strictly true if there is no appearance of Π_{X^T} in EnRML. The following section explains why EnRML should indeed always be defined without this projection.
3.6.3 Linearization chaining
Consider applying the change of variables (Eq. 26) to w at the very beginning of the derivation of EnRML. Since X1 = 0, there is a redundant degree of freedom in w, meaning that there is a choice to be made in deriving its density from the original one, given by J_{x,n}(x) in Eq. (6). The simplest choice (Bocquet et al., 2015) results in the log posterior:

J_{w,n}(w) = ((N−1)/2)‖w − e_n‖² + (1/2)‖y_n − ℳ(x̄ + Xw)‖²_{C_δ}.
Application of the Gauss–Newton scheme with the gradients and Hessian matrix of J_{w,n}, followed by a reversion to x, produces the same EnRML algorithm as above.
The derivation summarized in the previous paragraph is arguably simpler than that of the last few pages. Notably, (a) it does not require the Woodbury identity to derive the subspace formulae, (b) there is never an explicit M̄_i to deal with, and (c) the statistical linearization of least-squares regression from W_i to ℳ(E_i) directly yields Eq. (34), except that there are no preconditions.
While the case of a large ensemble (N−1 > M) is not typical in geoscience, the fact that this derivation does not produce a projection matrix (which requires a pseudoinversion) under any conditions raises the following questions. Why are they different? Which version is better?
The answers lie in understanding the linearization of the map w ↦ ℳ(x̄ + Xw) and noting that, similarly to analytical (infinitesimal) derivatives, the chain rule applies for least-squares regression. In effect, the product Y_i = M̄_iX, which implicitly contains the projection matrix Π_{X^T}, can be seen as an application of the chain rule for the composite function ℳ(x(w)). By contrast, Eq. (34) – but without the precondition – is obtained by direct regression of the composite function. Typically, the two versions yield identical results (i.e. the chain rule). However, when the intermediate space, col(X), is of lower dimension than the initial domain (M < N−1), composite linearization results in a loss of information, manifested by the projection matrix. Therefore, the definition Y_i = ℳ(E_i)(W_iΠ_1^⊥)^+ is henceforth preferred to M̄_iX.
Numerical experiments, as in Sect. 4 but not shown, indicate no statistically significant advantage for either version. This corroborates similar findings by Sakov et al. (2012) for the deterministic flavour. Nevertheless, there is a practical advantage: avoiding the computation of ${\mathbf{\Pi}}_{{\mathbf{X}}^{\mathsf{T}}}$.
3.6.4 Inverting the transform
In square-root ensemble filters, the transform matrix should have 1 as an eigenvector (Sakov and Oke, 2008; Livings et al., 2008). By construction, this also holds true for W_iΠ_1^⊥, with eigenvalue 0. Now, consider adding 0 = XΠ_1 to Eq. (32), yielding another valid transformation:

X_i = X(W_iΠ_1^⊥ + Π_1) = XΩ_i.

The matrix Ω_i, in contrast to W_iΠ_1^⊥ and T_i, has eigenvalue 1 for 1 and is thus invertible. This is used to prove Eq. (34) in Appendix A3, where Y_i is expressed in terms of Ω_i^{-1}.
Numerically, the use of Ω_i in the computation (Eq. 34) of Y_i was found to yield stable convergence of the new EnRML algorithm in the trivial example of ℳ(x) = αx. By contrast, the use of (WΠ_1^⊥)^+ exhibited geometrically growing (in i) errors when α > 1. Other formulae for the inversion are derived in Appendix A4. The one found to be the most stable is (WΠ_1^⊥)^+ = W^{-1}Π_1^⊥; it is therefore preferred in Algorithm 1.
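The formula (WΠ_1^⊥)^+ = W^{-1}Π_1^⊥ relies on W not mixing the mean direction; the sketch below assumes a W satisfying 1^TW = 1^T (true of W_0 = I_N, and an assumption of this illustration) and checks the identity:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 6
Pi = np.eye(N) - np.ones((N, N)) / N             # centering projector Pi_1-perp

# A W whose columns each sum to 1 (1^T W = 1^T): perturb I_N inside col(Pi)
W = np.eye(N) + 0.2 * Pi @ rng.standard_normal((N, N))
assert np.allclose(np.ones(N) @ W, np.ones(N))

lhs = np.linalg.pinv(W @ Pi)
rhs = np.linalg.solve(W, Pi)                     # W^{-1} Pi, without forming W^{-1}
assert np.allclose(lhs, rhs)                     # (W Pi)^+ = W^{-1} Pi
```

Using `solve` rather than an explicit inverse or SVD is also how one would implement line 6 of Algorithm 1.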
Irrespective of the inverse transform formula used, it is important to retain all nonzero singular values. This absence of a truncation threshold is a tuning simplification compared with the old EnRML algorithm, where X and/or X_i was scaled, decomposed, and truncated. If, by extreme chance or poor numerical subroutines, the matrix W_i is not invertible (this never occurred in any of the experiments except by our manual intervention; see the conjecture in Appendix A), its pseudoinversion should be used; however, this must also be accounted for in the prior increment by multiplying the formula on line 8 on the left by the projection onto the column space of W_i.
3.7 Algorithm
To summarize, Algorithm 1 provides pseudocode for the new EnRML formulation. The increments Δ̄^lklhd (Eq. 25) and Δ̄^prior (Eq. 28) can be recognized by multiplying line 10 on the left by X. For aesthetics, the sign of the gradients has been reversed. Note that there is no need for an explicit iteration index. Nor is there an ensemble index, n, since all N columns are stacked into the matrix W. However, in case M is large, Y may be computed column by column to avoid storing E.
Line 6 is typically computed by solving $\mathbf{Y}^{\prime}\mathbf{\Omega}=\mathcal{M}(\mathbf{E})$ for $\mathbf{Y}^{\prime}$ and then subtracting its column mean. Alternative formulae are discussed in Sect. 3.6.4. Line 9 may be computed using a reduced (or even truncated) SVD of $\mathbf{C}_{\delta}^{-1/2}\mathbf{Y}$, which is relatively fast whether N is larger or smaller than P. Alternatively, the Kalman gain forms could be used.
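These two computations can be sketched in NumPy as follows. The names (`ME` for ℳ(E), `Omega`, `Cd_inv_sqrt` for $\mathbf{C}_{\delta}^{-1/2}$) are illustrative, and the ensemble-space Hessian is assumed to take the form $\mathbf{Y}^{\mathsf{T}}\mathbf{C}_{\delta}^{-1}\mathbf{Y}+(N-1)\mathbf{I}_{N}$, consistent with the Levenberg–Marquardt remark below:

```python
import numpy as np

def observed_anomalies(ME, Omega):
    """Line 6 (sketch): solve Y' Omega = M(E) for Y' -- no pseudo-inversions --
    and then subtract the column mean of Y'."""
    Yp = np.linalg.solve(Omega.T, ME.T).T   # Y' = M(E) Omega^{-1}, via a solve
    return Yp - Yp.mean(axis=1, keepdims=True)

def ensemble_space_cov(Y, Cd_inv_sqrt, N):
    """Line 9 (sketch): invert the assumed Hessian Y.T Cd^{-1} Y + (N-1) I_N
    from a reduced SVD of Cd^{-1/2} Y, fast whether N > P or N < P."""
    _, s, VT = np.linalg.svd(Cd_inv_sqrt @ Y, full_matrices=False)
    V = VT.T                                 # N x r, with r = min(P, N)
    # Directions outside span(V) only see the (N-1) I_N term of the Hessian.
    return (V * (1.0 / (s**2 + N - 1) - 1.0 / (N - 1))) @ VT + np.eye(N) / (N - 1)
```

Working with the SVD factors avoids ever forming a P×P matrix, so the cost stays linear in max(P, N).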
The Levenberg–Marquardt variant is obtained by adding the trust-region parameter λ>0 to (N−1) in the Hessian matrix (line 9), which impacts both the step length and the direction.
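As a hedged sketch, assuming the Gauss–Newton Hessian has the form $\mathbf{Y}^{\mathsf{T}}\mathbf{C}_{\delta}^{-1}\mathbf{Y}+(N-1)\mathbf{I}_{N}$, the modification is a one-line change:

```python
import numpy as np

def lm_hessian(Y, Cd_inv, N, lam=0.0):
    # Levenberg-Marquardt variant: add the trust-region parameter lam > 0
    # to the (N - 1) term; lam = 0 recovers plain Gauss-Newton.
    return Y.T @ Cd_inv @ Y + (N - 1 + lam) * np.eye(N)
```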
Localization may be implemented by local analysis (Hunt et al., 2007; Sakov and Bertino, 2011; Bocquet, 2016; Chen and Oliver, 2017). Here, tapering is applied by replacing the local-domain $\mathbf{C}_{\delta}^{-1/2}$ (implicit on lines 7 and 9) by $\rho\circ\mathbf{C}_{\delta}^{-1/2}$, with ∘ being the Schur product and ρ a square matrix containing the (square-root) tapering coefficients, $\rho_{m,l}\in[0,1]$. If the number of local domains used is so large that the number of W matrices becomes large, then it may be more efficient to revert to the original state variables and explicitly compute the sensitivities $\overline{\mathbf{M}}_{i}$ using the local parts of ℳ(E_{i}) and X_{i}.
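A minimal sketch of the tapering (the Gaussian taper shape and the distance matrix are illustrative assumptions, not from Algorithm 1):

```python
import numpy as np

def taper_coeffs(dists, radius):
    # Illustrative square-root tapering coefficients rho[m, l] in [0, 1]:
    # any decreasing function of the distance between (local) state
    # coordinate m and observation l would serve.
    return np.exp(-0.5 * (dists / radius) ** 2)

def localized_obs_sqrt_precision(Cd_inv_sqrt, rho):
    # Replace Cd^{-1/2} by the Schur (element-wise) product rho ∘ Cd^{-1/2}.
    return rho * Cd_inv_sqrt
```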
Inflation and model error parameterizations are not included in the algorithm but may be applied outside of it. We refer to Sakov et al. (2018) and Evensen (2019) for model error treatment with iterative methods.
The new EnRML algorithm produces results that are identical to the old formulation, at least up to round-off and truncation errors, and for $N-1\le M$. Therefore, since there is already a large number of studies of EnRML with reservoir cases (e.g. Chen and Oliver, 2013a; Emerick and Reynolds, 2013b), adding to this does not seem necessary.
However, there do not appear to be any studies of EnRML with the Lorenz 96 system (Lorenz, 1996) in a data assimilation setting. The advantages of this case are numerous: (a) the model is a surrogate of weather dynamics and as such holds relevance in geoscience, (b) the problem is (exhaustively) sampled from the system's invariant measure rather than being selected by the experimenter, (c) the sequential nature of data assimilation inherently tests prediction skill, which helps avoid the pitfalls of point measure assessment, such as overfitting, and (d) its simplicity enhances reliability and reproducibility and has made it a literature standard, thus facilitating comparative studies.
Comparison of the benchmark performance of EnRML will be made to the IEnKS and to the ensemble smoother with multiple data assimilation (ESMDA)^{2}. Both the stochastic and the deterministic (square-root) flavours of ESMDA are included; in the case of only one iteration (not shown), these result in exactly the same ensembles as EnRML and the IEnKS, respectively. Not included in the benchmark comparisons is the version of EnRML in which the prior increment is dropped (see Sect. 1.1), because the chaotic, sequential nature of this case makes it practically impossible to achieve good results without propagating prior information. Similarly, their lack of a dynamic prior precludes the "regularizing, iterative ensemble smoothers" of Iglesias (2015), Luo et al. (2015)^{3}, and Mandel et al. (2016)^{4}, even if their background is well-tuned and their stopping condition judicious. Because they require the tangent-linear model, M_{×}, RML and EDA/En4DVar (Tian et al., 2008; Bonavita et al., 2012; Jardak and Talagrand, 2018) are not included. For simplicity, neither localization nor covariance hybridization will be used. Other related methods may be found in the reviews of Bannister (2016) and Carrassi et al. (2018).
4.1 Setup
The performances of the iterative ensemble smoother methods are benchmarked with "twin experiments", using the Lorenz 96 dynamical system, which is configured with standard settings (e.g. Ott et al., 2004; Bocquet and Sakov, 2014), detailed below. The dynamics are given by the M=40 coupled ordinary differential equations

$\frac{\mathrm{d}x_{m}}{\mathrm{d}t}=(x_{m+1}-x_{m-2})\,x_{m-1}-x_{m}+F, \qquad F=8,$
for $m=1,\dots,M$, with periodic boundary conditions. These are integrated using the fourth-order Runge–Kutta scheme, with time steps of 0.05 time units and no model noise, to yield the truth trajectory, x(t). Observations of the entire state vector are taken Δt_{obs} time units apart with unit noise variance, meaning that $y(t)=x(t)+\delta(t)$ for each $t=k\cdot\Delta t_{\text{obs}}$, with $k=0,1,\dots,20\,000$ and $\mathbf{C}_{\delta}=\mathbf{I}_{M}$.
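The model and observation setup can be sketched as follows (the forcing F=8 is the literature-standard setting; the spin-up length here is illustrative):

```python
import numpy as np

def l96_dxdt(x, F=8.0):
    """Lorenz-96 tendency: dx_m/dt = (x_{m+1} - x_{m-2}) x_{m-1} - x_m + F,
    with periodic boundary conditions handled by np.roll."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(f, x, dt):
    """One step of the fourth-order Runge-Kutta scheme."""
    k1 = f(x)
    k2 = f(x + dt / 2 * k1)
    k3 = f(x + dt / 2 * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

M, dt = 40, 0.05
rng = np.random.default_rng(42)
x = rng.normal(size=M)
for _ in range(400):                  # spin up (20 time units) onto the attractor
    x = rk4_step(l96_dxdt, x, dt)
y = x + rng.normal(size=M)            # y(t) = x(t) + delta(t), with Cd = I_M
```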
The iterative smoothers are employed in the sequential problem of filtering, aiming to estimate x(t) as soon as y(t) comes in. In so doing, they also tackle the smoothing problem for x(t−Δt_{DAW}), where the length of the data assimilation window, Δt_{DAW}, is fixed at a near-optimal value (inferred from Figs. 3 and 4 of Bocquet and Sakov, 2013) that is also cost-efficient (i.e. short). This window is shifted by 1⋅Δt_{obs} each time a new observation becomes available. A post-analysis inflation factor is tuned for optimal performance for each smoother and each ensemble size, N. Also, random rotations are used to generate the ensembles for the square-root variants. The number of iterations is fixed at either 3 or 10. No tuning of the step length is undertaken: it is 1∕3 or 1∕10 for ESMDA and 1 for EnRML and the IEnKS.
The methods are assessed by their accuracy, as measured by the root-mean-square error (RMSE)

$\mathrm{RMSE}(t)=\frac{1}{\sqrt{M}}\,\big\Vert\overline{x}(t)-x(t)\big\Vert_{2},$
which is recorded immediately following each analysis of the latest observation y(t). The "smoothing" error (assessed with x(t−Δt_{DAW})) is also recorded. After the experiment, the instantaneous RMSE(t) values are averaged for all t>20. The results can be reproduced using Python scripts hosted online at https://github.com/nansencenter/DAPPER/tree/paper_StochIEnS (last access: 12 September 2019). This code reproduces previously published results from the literature. For example, our benchmarks obtained with the IEnKS can be cross-referenced with the ones reported by Bocquet and Sakov (2014, Fig. 7a).
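The scoring itself amounts to a few lines (names are illustrative):

```python
import numpy as np

def rmse(x_mean, x_truth):
    # Root-mean-square error of the (ensemble) mean estimate,
    # averaged over the M state components.
    return np.sqrt(np.mean((x_mean - x_truth) ** 2))

def time_avg(times, scores, t0=20.0):
    # Average the instantaneous RMSE(t) for all t > t0,
    # discarding the initial transient.
    times, scores = np.asarray(times), np.asarray(scores)
    return scores[times > t0].mean()
```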
4.2 Results
A table of RMSE averages is compiled for a range of N, and then plotted as curves for each method, in Fig. 1. The upper panels report the analysis RMSE scores, while the lower panels report the smoothing RMSE scores. The smoothing scores are systematically lower, but the relative results are highly similar. Moving right among the panels increases Δt_{obs} and thus the nonlinearity; naturally, all of the RMSE scores also increase. As a final “sanity check”, note that the performances of all of the ensemble methods improve with increasing N, which needs to be at least 15 for tolerable performance, corresponding to the rank of the unstable subspace of the dynamics plus 1 (Bocquet and Carrassi, 2017).
For experiments with Δt_{obs} ≤0.4, using 3 iterations is largely sufficient, since its markers are rarely significantly higher than those of 10 iterations. On the other hand, for the highly nonlinear experiment where Δt_{obs} =0.6, there is a significant advantage in using 10 iterations.
The deterministic (square-root) IEnKS and ESMDA score noticeably lower RMSE averages than the stochastic IEnKS (i.e. EnRML) and ESMDA, which require N closer to 30 for good performance. This is qualitatively the same result as that obtained for non-iterative methods (e.g. Sakov and Oke, 2008). Also tested (not shown) was the first-order-approximate deterministic flavour of ESMDA (Emerick, 2018); it performed very similarly to the square-root flavour.
Among the stochastic smoothers, the one based on Gauss–Newton (EnRML) scores noticeably lower averages than the one based on annealing (ESMDA) when the nonlinearity is strong (Δt_{obs} ≥0.4) and for small N. A similar trend holds for the deterministic smoothers: the IEnKS performs better than ESMDA for Δt_{obs} =0.6. The likely explanation for this result is that EnRML and IEnKS can iterate indefinitely, while ESMDA may occasionally suffer from not “reaching” the optimum.
Furthermore, the performance of EnRML and the IEnKS could possibly be improved by lowering the step lengths to avoid causing "unphysical" states and to avoid "bouncing around" near the optimum. The tuning of the parameter that controls the step length (e.g. the trust-region parameter and the MDA-inflation parameter) has been the subject of several studies (Chen and Oliver, 2012; Bocquet and Sakov, 2012; Ma et al., 2017; Le et al., 2016; Rafiee and Reynolds, 2017). However, our superficial trials with this parameter (not shown) yielded little or no improvement.
This paper has presented a new and simpler (on paper and computationally) formulation of the iterative, stochastic ensemble smoother known as ensemble randomized maximum likelihood (EnRML). Notably, there is no explicit computation of the sensitivity matrix $\overline{\mathbf{M}}_{i}$, while the product $\mathbf{Y}_{i}=\overline{\mathbf{M}}_{i}\mathbf{X}$ is computed without any pseudoinversions of the matrix of state anomalies. This fixes issues of noise, computational cost, and covariance localization, and there is no longer any temptation to omit the prior increment from the update. Moreover, the Levenberg–Marquardt variant is now a trivial modification of the Gauss–Newton variant.
The new EnRML formulation was obtained by improvements to the background theory and derivation. Notably, Theorem 1 established the relation of the ensemble-estimated, least-squares linear-regression coefficients, $\overline{\mathbf{M}}_{i}$, to "average sensitivity". Section 3.6 then showed that the computation of its action on the prior anomalies, $\mathbf{Y}_{i}=\overline{\mathbf{M}}_{i}\mathbf{X}$, simplifies into a deconditioning transformation, $\mathbf{Y}_{i}=\mathcal{M}(\mathbf{E}_{i})\,\mathbf{T}_{i}^{+}$. Further computational gains resulted from expressing T_{i} in terms of the coefficient vectors, W_{i}, except that it also involves the "annoying" $\mathbf{\Pi}_{\mathbf{X}^{\mathsf{T}}}$. Although it usually vanishes, the appearance of this projection is likely the reason why most expositions of the EnKF do not venture to declare that its implicit linearization of ℳ is that of least-squares linear regression. Section 3.6.3 showed that the projection is merely the result of using the chain rule for indirect regression to the ensemble space and argued that it is preferable to use the direct regression of the standard EnKF.
The other focus of the derivation was rank issues, with $\overline{\mathbf{C}}_{x}$ not assumed invertible. Using the Woodbury matrix lemma, and avoiding implicit pseudoinversions and premature insertion of SVDs, it was shown that the rank deficiency invalidates the Hessian form of the RML update, which should be restricted to the ensemble subspace. On the other hand, the subspace form and Kalman gain form of the update remain equivalent and valid. Furthermore, Theorem 2 of Appendix A proves that the ensemble does not lose rank during the updates of EnRML (or the EnKF).
The paper has also drawn significantly on the theory of the deterministic counterpart to EnRML: the iterative ensemble Kalman smoother (IEnKS). Comparative benchmarks using the Lorenz 96 model with these two and the ensemble smoother with multiple data assimilation (ESMDA) were shown in Sect. 4. In the case of small ensembles and large nonlinearity, EnRML and the IEnKS achieved better accuracy than the stochastic and deterministic ESMDA, respectively. Similarly to the trend for non-iterative filters, the deterministic smoothers systematically obtained better accuracy than the stochastic smoothers.
As mentioned in the paper, the experimental results may be reproduced with code hosted at https://github.com/nansencenter/DAPPER/tree/paper_StochIEnS (last access: 12 September 2019).
A1 Preliminary
Proof of Theorem 1. Assume that $0<|\mathbf{C}_{x}|<\infty$ and that each element of $\mathbf{C}_{\mathcal{M}(x),x}$ and $\mathbb{E}[\mathcal{M}^{\prime}(x)]$ is finite. Then $\overline{\mathbf{C}}_{x}$ is a strongly consistent estimator of $\mathbf{C}_{x}$. Likewise, $\overline{\mathbf{C}}_{\mathcal{M}(x),x}\to\mathbf{C}_{\mathcal{M}(x),x}$ almost surely, as N→∞. Thus, since $\overline{\mathbf{M}}=\overline{\mathbf{C}}_{\mathcal{M}(x),x}\,\overline{\mathbf{C}}_{x}^{-1}$ for sufficiently large N, Slutsky's theorem yields $\overline{\mathbf{M}}\to\mathbf{C}_{\mathcal{M}(x),x}\,\mathbf{C}_{x}^{-1}$ almost surely. The equality to $\mathbb{E}[\mathcal{M}^{\prime}(x)]$ follows directly from "Stein's lemma" (Liu, 1994).
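A quick numerical illustration of Theorem 1 in one dimension: for x ~ N(0, 1) and ℳ(x) = x³, Stein's lemma gives $\mathbf{C}_{\mathcal{M}(x),x}\mathbf{C}_{x}^{-1}=\mathbb{E}[\mathcal{M}^{\prime}(x)]=\mathbb{E}[3x^{2}]=3$, and the ensemble regression coefficient indeed approaches this value:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000
x = rng.normal(size=N)                 # x ~ N(0, 1)
Mx = x ** 3                            # a nonlinear "model"
# Ensemble least-squares regression coefficient: C_{M(x),x} / C_x.
coef = np.cov(Mx, x)[0, 1] / np.var(x, ddof=1)
# For large N, coef is close to E[M'(x)] = E[3 x^2] = 3 (Stein's lemma),
# i.e. the regression estimates the *average* sensitivity of M.
```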
Theorem 2: the posterior ensemble's covariance, obtained using the EnKF, has the same rank as the prior's, almost surely (a.s.).
Proof. The updated anomalies, both for the square-root and the stochastic EnKF, can be written as $\mathbf{X}^{a}=\mathbf{X}\mathbf{T}^{a}$ for some $\mathbf{T}^{a}\in\mathbb{R}^{N\times N}$.
For a deterministic EnKF, $\mathbf{T}^{a}=\sqrt{N-1}\,\overline{\mathbf{C}}_{w}^{1/2}$ for the symmetric positive definite square root of $\overline{\mathbf{C}}_{w}$, or an orthogonal transformation thereof (Sakov and Oke, 2008). Hence rank(X^{a})=rank(X).
For the stochastic EnKF, Eqs. (25) and (23) may be used to show that $\mathbf{T}^{a}=(N-1)\,\overline{\mathbf{C}}_{w}\mathbf{\Upsilon}\mathbf{\Pi}_{\mathbf{1}}^{\perp}$, with $\mathbf{\Upsilon}=\mathbf{I}_{N}+\mathbf{Y}^{\mathsf{T}}\mathbf{C}_{\delta}^{-1}\mathbf{D}/(N-1)$. Hence, for rank preservation, it will suffice to show that Υ is a.s. full rank.
We begin by writing Υ more compactly:
From Eqs. (4), (14), and (A1), it can be seen that column n of Z follows the law $z_{n}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{P}/(N-1))$. Hence, column n of Υ follows $\upsilon_{n}\sim\mathcal{N}(e_{n},\mathbf{S}^{\mathsf{T}}\mathbf{S}/(N-1))$ and has the sample space
Now consider, for $n=0,\dots,N$, the hypothesis (H_{n}) that $[\mathbf{\Upsilon}_{:n},\,\mathbf{I}_{n:}]$ is full rank,
where Υ_{:n} denotes the first n columns of Υ, and I_{n:} denotes the last N−n columns of I_{N}. Clearly, H_{0} is true. Now, suppose H_{n−1} is true. Then the columns of $[\mathbf{\Upsilon}_{:n-1},\,\mathbf{I}_{n-1:}]$ are all linearly independent. For column n, this means that $e_{n}\notin\mathrm{col}([\mathbf{\Upsilon}_{:n-1},\,\mathbf{I}_{n:}])$. By contrast, from Eq. (A2), e_{n}∈𝒮_{n}. The existence of a point in $\mathcal{S}_{n}\setminus\mathrm{col}([\mathbf{\Upsilon}_{:n-1},\,\mathbf{I}_{n:}])$ means that
Since υ_{n} is absolutely continuous with sample space 𝒮_{n}, Eq. (A4) means that the probability that $\upsilon_{n}\in\mathrm{col}([\mathbf{\Upsilon}_{:n-1},\,\mathbf{I}_{n:}])$ is zero. This implies H_{n} a.s., establishing the induction. Identifying the final hypothesis (H_{N}) with rank(Υ)=N concludes the proof.
A corollary of Theorem 2 and Lemma 1 is that the ensemble subspace is also unchanged by the EnKF update. Note that both the prior ensemble and the model (involved through Y) are arbitrary in Theorem 2. However, C_{δ} is assumed invertible. The result is therefore quite different from the topic discussed by Kepert (2004) and Evensen (2004), where rank deficiency arises due to a reduced-rank C_{δ}.
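Theorem 2 is easy to check numerically for the stochastic EnKF, here written with the standard Kalman-gain update of the anomalies (generic textbook notation, not the paper's Eqs. 23–25):

```python
import numpy as np

rng = np.random.default_rng(2)
M_dim, P, N = 10, 6, 5
X = rng.normal(size=(M_dim, N))
X -= X.mean(axis=1, keepdims=True)    # prior anomalies: X 1 = 0, rank N - 1
Y = rng.normal(size=(P, N))
Y -= Y.mean(axis=1, keepdims=True)    # observed anomalies
D = rng.normal(size=(P, N))
D -= D.mean(axis=1, keepdims=True)    # observation-perturbation anomalies
Cd = np.eye(P)
# Standard stochastic-EnKF anomaly update, X^a = X + K (D - Y), with the
# Kalman gain K built from the ensemble (cross-)covariances.
K = (X @ Y.T) @ np.linalg.inv(Y @ Y.T + (N - 1) * Cd)
Xa = X + K @ (D - Y)
```

Since Xa = X T^a for T^a = I_N + Yᵀ(YYᵀ + (N−1)C_δ)^{−1}(D − Y), the rank of the anomalies is preserved (a.s.), as the theorem asserts.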
Conjecture 1: the rank of the ensemble is preserved by the EnRML update (a.s.), and W_{i} is invertible.
We were not able to prove Conjecture 1, but it seems a logical extension of Theorem 2 and is supported by numerical trials. The following proofs utilize Conjecture 1, without which some projections will not vanish. Yet, even if Conjecture 1 should not hold (due to bugs, truncation, or really bad luck), Algorithm 1 is still valid and optimal, as discussed in Sect. 3.6.3 and 3.6.4.
A2 The transform matrix
Theorem 3: $(\mathbf{X}^{+}\mathbf{X}_{i})^{+}=\mathbf{X}_{i}^{+}\mathbf{X}$.
Proof. Let $\mathbf{T}=\mathbf{X}^{+}\mathbf{X}_{i}$ and $\mathbf{S}=\mathbf{X}_{i}^{+}\mathbf{X}$. The following shows that S satisfies the four properties of the Moore–Penrose characterization of the pseudoinverse of T:

1. $\begin{aligned}\mathbf{TST}&=(\mathbf{X}^{+}\mathbf{X}_{i})(\mathbf{X}_{i}^{+}\mathbf{X})(\mathbf{X}^{+}\mathbf{X}_{i})\\&=\mathbf{X}^{+}\mathbf{\Pi}_{\mathbf{X}_{i}}\mathbf{\Pi}_{\mathbf{X}}\mathbf{X}_{i}&&[\mathbf{\Pi}_{\mathbf{A}}=\mathbf{A}\mathbf{A}^{+}]\\&=\mathbf{X}^{+}\mathbf{\Pi}_{\mathbf{X}_{i}}\mathbf{X}_{i}&&[\text{Lemma 1}]\\&=\mathbf{T}&&[\mathbf{\Pi}_{\mathbf{A}}\mathbf{A}=\mathbf{A}].\end{aligned}$

2. STS=S, as may be shown similarly to point 1.

3. $\mathbf{TS}=\mathbf{X}^{+}\mathbf{X}$, as may be shown similarly to point 1, using Conjecture 1. The symmetry of TS follows from that of $\mathbf{X}^{+}\mathbf{X}$.

4. The symmetry of ST is shown as for point 3.
This proof was heavily inspired by appendix A of Sakov et al. (2012). However, our developments apply for EnRML (rather than the deterministic, square-root IEnKS). This means that T_{i} is not symmetric, which complicates the proof in that the focus must be on $\mathbf{X}^{+}\mathbf{X}_{i}$ rather than $\mathbf{X}_{i}^{+}$ alone. Our result also shows the equivalence of $\mathbf{S}^{+}$ and T in general, while the additional result of the vanishing projection matrix in the case of $N-1\le M$ is treated separately in Appendix A3.
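Theorem 3 can also be verified numerically; the sketch below constructs X_i = XΩ with an invertible Ω, so that the column spaces of X and X_i coincide (mimicking the situation guaranteed by Conjecture 1):

```python
import numpy as np

rng = np.random.default_rng(3)
M_dim, N = 12, 6
X = rng.normal(size=(M_dim, N))
X -= X.mean(axis=1, keepdims=True)            # anomalies, rank N - 1
Omega = np.eye(N) + 0.3 * rng.normal(size=(N, N))   # invertible (a.s.)
Xi = X @ Omega                                # iterate anomalies, same column space
T = np.linalg.pinv(X) @ Xi                    # T = X^+ X_i
S = np.linalg.pinv(Xi) @ X                    # S = X_i^+ X
# Theorem 3: S is the Moore-Penrose pseudoinverse of T, even though
# neither X nor X_i is full rank.
```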
A3 Proof of Eq. (34)
Lemma 2: Ω_{i} is invertible (provided that W_{i} is invertible).
Proof. We show that Ω_{i}u≠0 for any u≠0, where $\mathbf{\Omega}_{i}=\mathbf{W}_{i}\mathbf{\Pi}_{\mathbf{1}}^{\perp}+\mathbf{\Pi}_{\mathbf{1}}$. For u∈col(1), Ω_{i}u=u. For $u\in\mathrm{col}(\mathbf{1})^{\perp}$, $\mathbf{\Omega}_{i}u=\mathbf{W}_{i}u\ne\mathbf{0}$ (Conjecture 1).
Recall that Eq. (33) was obtained by inserting X_{i} into the expression (Eq. 30) for T_{i}. By contrast, the following inserts X from Eq. (35) into the expression (Eq. 29) for $\mathbf{T}_{i}^{+}$, yielding $\mathbf{T}_{i}^{+}=\mathbf{X}_{i}^{+}\mathbf{X}=\mathbf{X}_{i}^{+}\mathbf{X}_{i}\mathbf{\Omega}_{i}^{-1}=\mathbf{\Pi}_{\mathbf{X}_{i}^{\mathsf{T}}}\mathbf{\Omega}_{i}^{-1}=\mathbf{\Pi}_{\mathbf{1}}^{\perp}\mathbf{\Pi}_{\mathbf{X}_{i}^{\mathsf{T}}}\mathbf{\Omega}_{i}^{-1}$, and hence
Next, it is shown that, under certain conditions, the projection matrix ${\mathbf{\Pi}}_{{\mathbf{X}}_{i}^{\mathsf{T}}}$ vanishes:
Thereafter, Eq. (A11) of Appendix A4 can be used to write $\mathbf{\Omega}_{i}^{-1}$ in terms of $(\mathbf{W}_{i}\mathbf{\Pi}_{\mathbf{1}}^{\perp})^{+}$, reducing Eq. (A6) to Eq. (34).
The case of $N-1\le M$
In the case of $N-1\le M$, the null space of X is in the range of 1 (with probability 1; Muirhead, 1982, Theorem 3.1.4). By Lemma 2, the same applies for X_{i}; thus $\mathbf{\Pi}_{\mathbf{X}_{i}^{\mathsf{T}}}$ in Eq. (A5) reduces to $\mathbf{\Pi}_{\mathbf{1}}^{\perp}$.
The case of linearity
Let M be the matrix of the observation model ℳ, here assumed linear: ℳ(E_{i})=ME_{i}. By Eq. (A5), $\mathbf{Y}_{i}=\mathbf{M}\mathbf{E}_{i}\mathbf{\Pi}_{\mathbf{X}_{i}^{\mathsf{T}}}\mathbf{\Omega}_{i}^{-1}$. But $\mathbf{E}_{i}\mathbf{\Pi}_{\mathbf{X}_{i}^{\mathsf{T}}}=\mathbf{X}_{i}=\mathbf{E}_{i}\mathbf{\Pi}_{\mathbf{1}}^{\perp}$.
A4 Inverse transforms
Recall from Eq. (22) that Y_{i}1=0. Therefore
where $\overline{\mathbf{C}}_{w,i}$ is defined in Eq. (24), and the identity for $\overline{\mathbf{C}}_{w,i}$ follows from that of $\overline{\mathbf{C}}_{w,i}^{-1}$. Similarly, the following identities are also valid when W_{i} and $\mathbf{W}_{i}^{-1}$ are swapped:
Equation (A8) is proven inductively (in i) by inserting Eq. (A7) into line 10 of Algorithm 1. It enables showing Eq. (A9), using $\mathbf{\Pi}_{\mathbf{1}}^{\perp}=\mathbf{I}_{N}-\mathbf{\Pi}_{\mathbf{1}}$. This enables showing Eq. (A10), similarly to Theorem 3. Note that this implies that Y_{i}1=0 also for $\mathbf{Y}_{i}=\mathcal{M}(\mathbf{E}_{i})\,(\mathbf{W}_{i}\mathbf{\Pi}_{\mathbf{1}}^{\perp})^{+}$ and hence that the identities of this section also hold with this definition. Eqs. (A9) and (A10) can be used to show (by multiplying with Ω_{i}) that
The new and simpler EnRML algorithm was derived by PNR and further developed in consultation with GE (who also developed much of it independently) and ASS. Theorems 1 and 2 were derived by PNR, prompted by discussions with GE, and verified by ASS. The experiments and the rest of the writing were done by PNR and revised by GE and ASS.
The authors declare that they have no conflict of interest.
The authors thank Dean Oliver, Kristian Fossum, Marc Bocquet, and Pavel Sakov for their reading and comments and Elvar Bjarkason for his questions concerning the computation of the inverse transform matrix.
This work has been funded by DIGIRES, a project sponsored by industry partners and the PETROMAKS2 programme of the Research Council of Norway.
This paper was edited by Takemasa Miyoshi and reviewed by Marc Bocquet, Pavel Sakov, and one anonymous referee.
Bannister, R. N.: A review of operational methods of variational and ensemble-variational data assimilation, Q. J. Roy. Meteor. Soc., 143, 607–633, 2016.
Bardsley, J. M., Solonen, A., Haario, H., and Laine, M.: Randomize-then-optimize: A method for sampling from posterior distributions in nonlinear inverse problems, SIAM J. Sci. Comput., 36, A1895–A1910, 2014.
Bocquet, M.: Localization and the iterative ensemble Kalman smoother, Q. J. Roy. Meteor. Soc., 142, 1075–1089, 2016.
Bocquet, M. and Carrassi, A.: Four-dimensional ensemble variational data assimilation and the unstable subspace, Tellus A, 69, 1304504, https://doi.org/10.1080/16000870.2017.1304504, 2017.
Bocquet, M. and Sakov, P.: Combining inflation-free and iterative ensemble Kalman filters for strongly nonlinear systems, Nonlin. Processes Geophys., 19, 383–399, https://doi.org/10.5194/npg-19-383-2012, 2012.
Bocquet, M. and Sakov, P.: Joint state and parameter estimation with an iterative ensemble Kalman smoother, Nonlin. Processes Geophys., 20, 803–818, https://doi.org/10.5194/npg-20-803-2013, 2013.
Bocquet, M. and Sakov, P.: An iterative ensemble Kalman smoother, Q. J. Roy. Meteor. Soc., 140, 1521–1535, 2014.
Bocquet, M., Raanes, P. N., and Hannart, A.: Expanding the validity of the ensemble Kalman filter without the intrinsic need for inflation, Nonlin. Processes Geophys., 22, 645–662, https://doi.org/10.5194/npg-22-645-2015, 2015.
Bonavita, M., Isaksen, L., and Hólm, E.: On the use of EDA background error variances in the ECMWF 4D-Var, Q. J. Roy. Meteor. Soc., 138, 1540–1559, 2012.
Carrassi, A., Bocquet, M., Bertino, L., and Evensen, G.: Data assimilation in the geosciences: An overview of methods, issues, and perspectives, Wiley Interdisciplinary Reviews: Climate Change, 9, e535, 2018.
Chen, Y. and Oliver, D. S.: Ensemble randomized maximum likelihood method as an iterative ensemble smoother, Math. Geosci., 44, 1–26, 2012.
Chen, Y. and Oliver, D. S.: History Matching of the Norne Full Field Model Using an Iterative Ensemble Smoother (SPE 164902), in: 75th EAGE Conference & Exhibition incorporating SPE EUROPEC, 2013a.
Chen, Y. and Oliver, D. S.: Levenberg–Marquardt forms of the iterative ensemble smoother for efficient history matching and uncertainty quantification, Computat. Geosci., 17, 689–703, 2013b.
Chen, Y. and Oliver, D. S.: Localization and regularization for iterative ensemble smoothers, Computat. Geosci., 21, 13–30, 2017.
Emerick, A. A.: Deterministic ensemble smoother with multiple data assimilation as an alternative for history-matching seismic data, Computat. Geosci., 22, 1–12, 2018.
Emerick, A. A. and Reynolds, A. C.: Ensemble smoother with multiple data assimilation, Computat. Geosci., 55, 3–15, 2013a.
Emerick, A. A. and Reynolds, A. C.: Investigation of the sampling performance of ensemble-based methods with a simple reservoir model, Computat. Geosci., 17, 325–350, 2013b.
Evensen, G.: Sampling strategies and square root analysis schemes for the EnKF, Ocean Dynam., 54, 539–560, 2004.
Evensen, G.: Analysis of iterative ensemble smoothers for solving inverse problems, Computat. Geosci., 22, 885–908, 2018.
Evensen, G.: Accounting for model errors in iterative ensemble smoothers, Computat. Geosci., 23, 761–775, https://doi.org/10.1007/s10596-019-9819-z, 2019.
Fillion, A., Bocquet, M., and Gratton, S.: Quasi-static ensemble variational data assimilation: a theoretical and numerical study with the iterative ensemble Kalman smoother, Nonlin. Processes Geophys., 25, 315–334, https://doi.org/10.5194/npg-25-315-2018, 2018.
Gu, Y. and Oliver, D. S.: An iterative ensemble Kalman filter for multiphase fluid flow data assimilation, SPE J., 12, 438–446, 2007.
Hunt, B. R., Kostelich, E. J., and Szunyogh, I.: Efficient data assimilation for spatiotemporal chaos: A local ensemble transform Kalman filter, Physica D, 230, 112–126, 2007.
Iglesias, M. A.: Iterative regularization for ensemble data assimilation in reservoir models, Computat. Geosci., 19, 177–212, 2015.
Jardak, M. and Talagrand, O.: Ensemble variational assimilation as a probabilistic estimator – Part 1: The linear and weak nonlinear case, Nonlin. Processes Geophys., 25, 565–587, https://doi.org/10.5194/npg-25-565-2018, 2018.
Jazwinski, A. H.: Stochastic Processes and Filtering Theory, vol. 63, Academic Press, 1970.
Kepert, J. D.: On ensemble representation of the observation-error covariance in the Ensemble Kalman Filter, Ocean Dynam., 54, 561–569, 2004.
Kirkpatrick, S., Gelatt, C. D., and Vecchi, M. P.: Optimization by simulated annealing, Science, 220, 671–680, 1983.
Kitanidis, P. K.: Quasi-linear geostatistical theory for inversing, Water Resour. Res., 31, 2411–2419, 1995.
Le, D. H., Emerick, A. A., and Reynolds, A. C.: An adaptive ensemble smoother with multiple data assimilation for assisted history matching, SPE J., 21, 2–195, 2016.
Liu, J. S.: Siegel's formula via Stein's identities, Stat. Probabil. Lett., 21, 247–251, 1994.
Liu, Y., Haussaire, J.-M., Bocquet, M., Roustan, Y., Saunier, O., and Mathieu, A.: Uncertainty quantification of pollutant source retrieval: comparison of Bayesian methods with application to the Chernobyl and Fukushima Daiichi accidental releases of radionuclides, Q. J. Roy. Meteor. Soc., 143, 2886–2901, 2017.
Livings, D. M., Dance, S. L., and Nichols, N. K.: Unbiased ensemble square root filters, Physica D, 237, 1021–1028, 2008.
Lorenc, A. C.: Development of an Operational Variational Assimilation Scheme, Journal of the Meteorological Society of Japan, Ser. II, 75 (Special issue: data assimilation in meteorology and oceanography: theory and practice), 339–346, 1997.
Lorenz, E. N.: Predictability: A problem partly solved, in: Proc. ECMWF Seminar on Predictability, vol. 1, 1–18, Reading, UK, 1996.
Luo, X., Stordal, A. S., Lorentzen, R. J., and Naevdal, G.: Iterative Ensemble Smoother as an Approximate Solution to a Regularized Minimum-Average-Cost Problem: Theory and Applications, SPE J., 20, 962–982, 2015.
Ma, X., Hetz, G., Wang, X., Bi, L., Stern, D., and Hoda, N.: A robust iterative ensemble smoother method for efficient history matching and uncertainty quantification, in: SPE Reservoir Simulation Conference, Society of Petroleum Engineers, 2017.
Maciejewski, A. A. and Klein, C. A.: Obstacle avoidance for kinematically redundant manipulators in dynamically varying environments, The International Journal of Robotics Research, 4, 109–117, 1985.
Mandel, J., Bergou, E., Gürol, S., Gratton, S., and Kasanický, I.: Hybrid Levenberg–Marquardt and weak-constraint ensemble Kalman smoother method, Nonlin. Processes Geophys., 23, 59–73, https://doi.org/10.5194/npg-23-59-2016, 2016.
Morzfeld, M., Hodyss, D., and Poterjoy, J.: Variational particle smoothers and their localization, Q. J. Roy. Meteor. Soc., 144, 806–825, 2018.
Muirhead, R. J.: Aspects of Multivariate Statistical Theory, Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons, Inc., New York, 1982.
Oliver, D. S.: On conditional simulation to inaccurate data, Math. Geol., 28, 811–817, 1996.
Oliver, D. S.: Metropolized randomized maximum likelihood for improved sampling from multimodal distributions, SIAM/ASA Journal on Uncertainty Quantification, 5, 259–277, 2017.
Oliver, D. S. and Chen, Y.: Recent progress on reservoir history matching: a review, Computat. Geosci., 15, 185–221, 2011.
Oliver, D. S., Reynolds, A. C., and Liu, N.: Inverse Theory for Petroleum Reservoir Characterization and History Matching, Cambridge University Press, 2008.
Ott, E., Hunt, B. R., Szunyogh, I., Zimin, A. V., Kostelich, E. J., Corazza, M., Kalnay, E., Patil, D. J., and Yorke, J. A.: A local ensemble Kalman filter for atmospheric data assimilation, Tellus A, 56, 415–428, 2004.
Pires, C., Vautard, R., and Talagrand, O.: On extending the limits of variational assimilation in nonlinear chaotic systems, Tellus A, 48, 96–121, 1996.
Raanes, P. N., Bocquet, M., and Carrassi, A.: Adaptive covariance inflation in the ensemble Kalman filter by Gaussian scale mixtures, Q. J. Roy. Meteor. Soc., 145, 53–75, https://doi.org/10.1002/qj.3386, 2019.
Rafiee, J. and Reynolds, A. C.: Theoretical and efficient practical procedures for the generation of inflation factors for ESMDA, Inverse Problems, 33, 115003, https://doi.org/10.1088/1361-6420/aa8cb2, 2017.
Reynolds, A. C., Zafari, M., and Li, G.: Iterative forms of the ensemble Kalman filter, in: 10th European Conference on the Mathematics of Oil Recovery, 2006.
Sacher, W. and Bartello, P.: Sampling errors in ensemble Kalman filtering. Part I: Theory, Mon. Weather Rev., 136, 3035–3049, 2008.
Sakov, P. and Bertino, L.: Relation between two common localisation methods for the EnKF, Computat. Geosci., 15, 225–237, 2011.
Sakov, P. and Oke, P. R.: Implications of the form of the ensemble transformation in the ensemble square root filters, Mon. Weather Rev., 136, 1042–1053, 2008.
Sakov, P., Oliver, D. S., and Bertino, L.: An Iterative EnKF for Strongly Nonlinear Systems, Mon. Weather Rev., 140, 1988–2004, 2012.
Sakov, P., Haussaire, J.-M., and Bocquet, M.: An iterative ensemble Kalman filter in the presence of additive model error, Q. J. Roy. Meteor. Soc., 144, 1297–1309, 2018.
Stordal, A. S.: Iterative Bayesian inversion with Gaussian mixtures: finite sample implementation and large sample asymptotics, Computat. Geosci., 19, 1–15, 2015.
Tian, X., Xie, Z., and Dai, A.: An ensemble-based explicit four-dimensional variational assimilation method, J. Geophys. Res.-Atmos., 113, https://doi.org/10.1029/2008JD010358, 2008.
Trefethen, L. N. and Bau III, D.: Numerical Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1997.
van Leeuwen, P. J.: Comment on "Data assimilation using an ensemble Kalman filter technique", Mon. Weather Rev., 127, 1374–1377, 1999.
Zafari, M. and Reynolds, A. C.: Assessing the uncertainty in reservoir description and performance predictions with the ensemble Kalman filter, Master's thesis, University of Tulsa, 2005.
Zupanski, M.: Maximum likelihood ensemble filter: Theoretical aspects, Mon. Weather Rev., 133, 1710–1726, 2005.
The formula (Eq. 19b) for $\overline{\mathbf{M}}_{i}$ is sometimes arrived at via a truncated Taylor expansion of ℳ around $\overline{x}_{i}$. This is already an approximation and still requires further indeterminate approximations to obtain any interpretation other than $\mathcal{M}^{\prime}(\overline{x}_{i})$: the Jacobian evaluated at the ensemble mean.
Note that this is MDA in the sense of Emerick and Reynolds (2013a), Stordal (2015), and Kirkpatrick et al. (1983), where the annealing itself yields iterations, and not in the sense of quasi-static assimilation (Pires et al., 1996; Bocquet and Sakov, 2014; Fillion et al., 2018), where it is used as an auxiliary technique.
Their Lorenz 96 experiment only concerns the initial conditions.
Their Lorenz 96 experiment seems to have failed completely, with most of the benchmark scores (their Fig. 5) indicating divergence, which makes it pointless to compare benchmarks. Also, when reproducing their experiment, we obtain much lower scores than they report for the EnKF. One possible explanation is that we include, and tune, inflation.