Precision Annealing Monte Carlo Methods for Statistical Data Assimilation: Metropolis-Hastings Procedures



Abstract
Statistical Data Assimilation (SDA) is the transfer of information from field or laboratory observations to a user selected model of the dynamical system producing those observations. The data is noisy and the model has errors; the information transfer addresses properties of the conditional probability distribution of the states of the model conditioned on the observations. The quantities of interest in SDA are the conditional expected values of functions of the model state, and these require the approximate evaluation of high dimensional integrals. We introduce a conditional probability distribution and use the Laplace method with annealing to identify the maxima of the conditional probability distribution. The annealing method slowly increases the precision term of the model as it enters the Laplace method. In this paper, we extend the idea of precision annealing (PA) to Monte Carlo calculations of conditional expected values using Metropolis-Hastings methods.

Introduction
We begin with a description of a framework within which we will discuss transfer of information from data to a model of the processes producing the data.
Within an observation window in time, [t_0 ≤ t ≤ t_F], we make a set of measurements at times t = {τ_1, τ_2, ..., τ_k, ..., τ_F}; t_0 ≤ τ_k ≤ t_F. At each of these measurement times, we observe L quantities y(τ_k) = {y_1(τ_k), y_2(τ_k), ..., y_L(τ_k)}. The number L of observations at each measurement time τ_k is typically less, often much less, than the number of degrees of freedom D in the model of the observed system; D ≫ L. The quantitative characterization of the dynamical processes is through a model we choose. It describes the interactions among the states of the observed system. From the data {y(τ_k)} we want to estimate the unmeasured states of the model as a function of time as well as estimate any time independent physical parameters in the model. At the end of the observation window t = t_F, we use the estimated values of all model states and parameters to predict the model response to new forcing of the system for t ≥ t_F. The predictions are used to validate the model (or not) as well as the estimation procedure.
The D-dimensional state of the model we call x_a(t); a = 1, 2, ..., D ≥ L. These are selected by the user to describe the dynamical behavior of the observations through a set of differential equations in continuous time,

dx_a(t)/dt = F_a(x(t), p).

Equivalently, in discrete time t_n = t_0 + n∆t; n = 0, 1, ..., N; t_N = t_F, the dynamics is written as x_a(t_{n+1}) = f_a(x(t_n), p) or x_a(n+1) = f_a(x(n), p), where p is a set of parameters, fixed in time, associated with the model. f(x(n), p) is related to F(x(t), p) through the choice the user makes for solving the continuous time flow for x(t) via a numerical solution method of choice [PTVF07].
To make the discussion here a bit more compact, we will work henceforth in discrete time t n = t 0 + n∆t; n = 0, 1, ..., N ; t N = t F , and we will choose the observation times τ k to be multiples of ∆t as well: τ k = t 0 +k[nτ ] ∆t; k = 1, 2, ..., F .
As we proceed from the initiation of observations at t 0 , we must use our model equations to move the state variables x(t 0 ) = x(0), Eq. (2), from t 0 to τ 1 = t 0 + 1[nτ ] ∆t where the first measurement is made. Then we use the model dynamics again to move along to τ 2 = t 0 + 2[nτ ] ∆t, where the second measurement is made, and so forth until we reach the time of the last measurement t = τ F = t 0 +F [nτ ] ∆t and finally move the model from x(τ F ) to x(t F ).
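The bookkeeping above, moving the model state from one measurement time to the next, can be sketched in a few lines. This is a minimal illustration, not code from the paper: `step_model` and the linear map standing in for the user's model f are hypothetical names.

```python
import numpy as np

def step_model(x, p, f, n_tau):
    """Advance the discrete-time model x(n+1) = f(x(n), p) through n_tau
    sub-steps, i.e. from one observation time tau_k to the next,
    tau_{k+1} = tau_k + n_tau * dt."""
    for _ in range(n_tau):
        x = f(x, p)
    return x

# Illustration with a linear map f(x, p) = p*x standing in for the model map:
f = lambda x, p: p * x
x_next = step_model(np.ones(3), 0.5, f, n_tau=2)   # two model steps between observations
```

In an assimilation run, this loop is what carries x(τ_k) forward to x(τ_{k+1}) between the times at which data are available.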
We collect the x(t_n) for all n into the path of the state of the model through the observation window:

X = {x(0), x(1), ..., x(N)}.

The dimension of the path is (N+1)D + N_p, where N_p is the number of parameters p in our model. In X we do not explicitly show the fixed parameters p. This notation is illustrated in Fig. (1).
We now have two of the three required ingredients to effect our transfer of the information in the collection of all measurements Y = {y(τ_1), y(τ_2), ..., y(τ_F)} to the model f(x(n), p) along the path X through the observation window [t_0, t_F]:
(1) our noisy data Y, and
(2) a model of the processes producing the Y. This model is devised by our experience and knowledge of those processes.
The notation and a visual presentation of this are found in Fig. (1).
The third ingredient is comprised of methods to generate the transfer from Y to properties of the model. This will command our attention throughout this paper.
If the transfer methods are successful and, according to some metric of success, we arrange matters so that at the measurement times τ_k the L model variables x(t) associated with y(τ_k) satisfy x_l(τ_k) ≈ y_l(τ_k); l = 1, 2, ..., L, we are not finished. We have then only demonstrated that the model is consistent with the known data Y. We must further use the model, completed by the estimates of the p and the state of the model at t_F, x(t_F), to predict forward for t > t_F, and we should succeed in comparison with measurements y(τ_r) for τ_r > t_F. As the measure of success for predictions, we may use the same metric as utilized in the observation window. In the prediction window no further information from the observations is passed to the model.

Figure 1: A visual representation of the time window t_0 ≤ t ≤ t_F during which L-dimensional observations y(τ_k) are performed at observation times t = τ_k; k = 1, 2, ..., F; t_0 ≤ τ_k ≤ t_F. We also show the times at which the D-dimensional model developed by the user, x(n+1) = f(x(n), p), is used to move forward from time n to time n+1: t_n = t_0 + n∆t; n = 0, 1, ..., N; t_F = t_N. D ≥ L.
As a small aside, the same overall setup applies to supervised machine learning networks [ARS18] where the observation window is called the training set; the prediction window is called the test set, and prediction is called generalization.

The Data are Noisy; the Model has Errors
Inevitably, the data we collect are noisy, and with equal assurance the model we select to describe the production of those data has errors. This means we must, at the outset, address a conditional probability distribution P(X|Y) as our goal in the data assimilation transfer from Y to the model. In [Aba13] we describe how to use the Markov nature of the model dynamics x(n) → x(n+1) = f(x(n), p) and the definition of conditional probabilities to derive the recursion relation connecting observations and dynamics at times t_{n+1} and t_n:

P(X(n+1)|Y(n+1)) = [P(y(n+1), X(n+1)|Y(n)) / (P(y(n+1)|Y(n)) P(X(n+1)|Y(n)))] · P(x(n+1)|x(n)) P(X(n)|Y(n))
= exp[CMI(y(n+1), X(n+1)|Y(n))] · P(x(n+1)|x(n)) P(X(n)|Y(n))
= [P(y(n+1)|X(n+1), Y(n)) / P(y(n+1)|Y(n))] · P(x(n+1)|x(n)) P(X(n)|Y(n)),

where we have identified CMI(a, b|c) = log[P(a, b|c) / (P(a|c) P(b|c))]. This is Shannon's conditional mutual information [Fan61] telling us how many bits (for log_2) we know about a when observing b conditioned on c. For us a = {y(n+1)}, b = X(n+1) = {x(n+1), X(n)}, c = {Y(n)}.
Using this recursion relation to move backwards from the end of the observation window at t_F = t_0 + N∆t through the measurements at times τ_k to the start of the window at t_0, we may write, up to factors independent of X,

P(X|Y) ∝ Π_{k=1}^{F} P(y(τ_k)|X(τ_k)) Π_{n=0}^{N-1} P(x(n+1)|x(n)) P(x(0)).    (4)

If we now write P(X|Y) ∝ exp[−A(X)], then A(X), the negative of the log likelihood, we call the action. Conditional expected values for functions G(X) along the path X are defined by

⟨G(X)⟩ = ∫ dX G(X) exp[−A(X)] / ∫ dX exp[−A(X)],    (5)

and all factors in the action independent of X cancel out here. The action takes the convenient expression

A(X) = − Σ_{k=1}^{F} log P(y(τ_k)|X(τ_k)) − Σ_{n=0}^{N-1} log P(x(n+1)|x(n)) − log P(x(0)),    (6)

which is the sum of the terms that modify the conditional probability distribution when an observation is made at t = τ_k, the sum over the stochastic version of x(n) → x(n+1) = f(x(n), p), and finally the distribution when the observation window opens at t_0. What quantities G(X) are of interest? One natural one is the path of model states and parameters G(X) = X_µ; µ = {a, n}; a = 1, 2, ..., D; n = 0, 1, 2, ..., N itself; another is the covariance around that mean X̄_µ = ⟨X_µ⟩: ⟨(X_µ − X̄_µ)(X_ν − X̄_ν)⟩. Other moments are of interest, of course. If one has an anticipated form for the distribution at large X, then G(X) may be chosen as a parametrized version of that form and those parameters determined near the maximum of P(X|Y).
The action simplifies to what we call the 'standard model' of data assimilation when (1) observations y are related to their model counterparts via Gaussian noise with zero mean and diagonal precision matrix R_m, and (2) model errors are associated with Gaussian errors of mean zero and diagonal precision matrix R_f:

A(X) = Σ_{k=1}^{F} Σ_{l=1}^{L} (R_m(l, τ_k)/2) [x_l(τ_k) − y_l(τ_k)]² + Σ_{n=0}^{N-1} Σ_{a=1}^{D} (R_f(a)/2) [x_a(n+1) − f_a(x(n), p)]².    (7)

If we have knowledge of the distribution P(x(0)) at t_0, we may add it to this action, Eq. (7). If we have no knowledge of P(x(0)), we may take its distribution to be uniform over the dynamic range of the model variables; then it, as here, is absent, canceling between numerator and denominator in Eq. (5).
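The two quadratic terms of the standard-model action are simple to evaluate numerically. The following is a minimal sketch, not the paper's code; the function name, argument layout, and scalar precisions `Rm`, `Rf` are illustrative assumptions (the paper allows diagonal precision matrices).

```python
import numpy as np

def standard_action(X, Y, obs_idx, meas_times, f, p, Rm, Rf):
    """Standard-model action: (Rm/2) * sum of squared measurement residuals
    at the measurement times plus (Rf/2) * sum of squared model residuals
    x(n+1) - f(x(n), p) along the whole path.
    X: path of shape (N+1, D); Y: data of shape (F, L);
    obs_idx: indices of the L observed components; meas_times: time indices
    of the F measurements. Scalar precisions Rm, Rf for simplicity."""
    meas = 0.5 * Rm * np.sum((X[meas_times][:, obs_idx] - Y) ** 2)
    pred = np.array([f(x, p) for x in X[:-1]])   # f applied to x(0..N-1)
    model = 0.5 * Rf * np.sum((X[1:] - pred) ** 2)
    return meas + model

# Tiny check with an identity map and a path that matches the data exactly:
f = lambda x, p: x
X = np.ones((3, 2))                 # N = 2, D = 2
Y = np.ones((2, 1))                 # component 0 observed at n = 1 and n = 2
A0 = standard_action(X, Y, obs_idx=[0], meas_times=[1, 2], f=f, p=None,
                     Rm=1.0, Rf=1.0)
```

A path that reproduces the data and satisfies the model exactly, as in this check, gives zero action; any mismatch raises it in proportion to the precisions.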

The Goal of SDA
Our challenge is to perform integrals such as Eq. (5). One should anticipate that the dominant contribution to the expected value comes from the maxima of P (X|Y) or, equivalently the minima of A(X). We note, as before, that when f (x(n), p) is nonlinear in X, as it always is in interesting examples, the expected value integral Eq. (5) is not Gaussian. So, some thinking is in order before approximating this high dimensional integral. We turn to that now. After consideration of methods to do the integral, we will return to an example taken from an instructional model often used in the geosciences.
Two generally useful methods available for evaluating this kind of high-dimensional integral are Laplace's method [Lap74,Lap86] and Monte Carlo techniques [PTVF07, KTM + 12, Nea11]. The Laplace methods, including the idea of precision annealing for the model error term are discussed in [Qui10, Ye16, YRK + 15, YKR + 15].
The drawbacks of using Laplace methods, including precision annealing methods, include the need for evaluating very high dimensional derivatives of A(X) with respect to X and using them in the nonlinear optimization algorithms selected. Further, when successful in identifying the path yielding the smallest value of A(X), thus the potentially dominant contribution to Eq. (5), we do not sample the desired conditional probability distribution away from its maximum. Evaluating corrections to the leading Laplace contributions is familiar as perturbation theory in statistical physics. The convergence of such perturbation methods can depend sensitively on the functional form of the action in X.
We now turn to extending the annealing techniques that explore the variation of G(X) in the magnitude of the precision matrix R f for the model error from Laplace's method to Monte Carlo methods for approximating the path integral for G(X) .

Precision Annealing Monte Carlo Methods
Monte Carlo methods for the approximate evaluation of quantities such as G(X) via Eq. (5) have been intensively explored and utilized for decades [MRR + 53, Has70,Nea11].
Standard MC calculations, following many years of developments from [MRR + 53, Has70], seek to estimate the conditional probability distribution P (X|Y) by starting somewhere in path space X[init], making moves in path space from this initial path and accepting and rejecting proposed moves according to a criterion based on detailed balance.
The folklore about these calculations is that one can begin more-or-less anywhere in path space and, after a large enough number of steps leading to rejected and accepted paths proceeding from X[init], one will arrive at a good expected value in Eq. (5). Indeed, the error is of order the inverse square root of the number of accepted paths, with a numerator essentially the standard deviation of the function G(X) whose expected value one wishes to estimate.
In practice, the closer one can choose X[init] to the maximum of P(X|Y), the more efficient the procedure is expected to be; namely, high accuracy may be achieved with fewer steps. Of course, if we knew where the maximum of P(X|Y) were located [Shi18], we'd start there and sample, through proposals for acceptable paths, a sufficient neighborhood of that minimum action path to arrive at a good estimation of ⟨G(X)⟩. It is not hard to see that, as we do not know the global minimum of the action, there is a lot of room for algorithms that make good proposals for new acceptable paths and clever choices for X[init].
Our idea in this paper is to follow the suggestions of [Qui10, Ye16, YRK + 15, YKR + 15] about how we can 'anneal' the precision of the model error term of the action starting with R f = 0, at which the global minimum of the standard model action is clear. From there, we slowly increase R f until it is very large and imposes the underlying dynamical model more and more precisely. This method was developed in the context of Laplace approximations to the expected value integrals [Qui10, Ye16, YRK + 15, YKR + 15] and has been extensively tested in several areas of application of SDA.
3.1 R_f = 0; Choosing Initial Paths X_q[init]; q = 1, 2, ..., N_I for the PAMC Procedure

Our strategy in this paper is to vary the 'hyperparameter' R_f that sets the scale for the precision of the model error term in Eq. (7). When R_f → ∞ the model is very precise and deterministic. In our precision annealing strategy, we start at the other end of the scale where R_f = 0. At this value the model error term is absent, and the 'standard' model action is quadratic in the measured variables x_l(n). At R_f = 0 the action is a minimum when we select x_l(τ_k = t_0 + k[nτ]∆t) = y_l(τ_k); l = 1, 2, ..., L. This is the global minimum of the action at R_f = 0, and it is quite degenerate as the action does not depend on the unmeasured model state variables or the parameters in the model.
The path of the model state (not showing the N_p fixed parameters p) is comprised of

X = {x(0), x(1), ..., x(N)}.    (8)

In our N_I initial paths for the Monte Carlo search, X_q[init], we always choose x_l(τ_k = t_0 + k[nτ]∆t) = y_l(τ_k); l = 1, 2, ..., L, and we wish to select the other components of X[init] in a manner that is 'close' to a minimum action path. We select q = 1, 2, ..., N_I initial paths X_q[init] so we will be tracking an ensemble of paths using various Monte Carlo protocols.
To complete our choice of initial paths, we now split the state variables x_a(n) into those observed, a = 1, 2, ..., L, and those unobserved, a > L. The latter we call the 'rest' and write them as x_R(n); R = L+1, L+2, ..., D. The dynamical equations (in discrete time) can now be written

x_l(n+1) = f_l(x_l(n), x_R(n), p);  x_R(n+1) = f_R(x_l(n), x_R(n), p).    (9)

Starting with any initial condition {x_l^q(0), x_R^q(0)} we generate solutions to these dynamical equations by using Eq. (9). We proceed by choosing q = 1, 2, ..., N_I initial conditions {x_l^q(0), x_R^q(0)} from a uniform distribution over the ranges of {x_l(0), x_R(0)}, which we can infer from the data and from forward integration of the model. Using the N_I {x_l^q(0), x_R^q(0)} we generate N_I paths. However, we substitute for x_l(t_0 + k[nτ]∆t), whenever it occurs in the equations Eq. (9), the observed value y_l(τ_k). This generates q = 1, 2, ..., N_I initial paths X_q[init], one from each selection of {x_l^q(0), x_R^q(0)}, every one of which has zero standard action. Each of these paths corresponds to an initial action at the global minimum for R_f = 0, namely A(X_q[init]) = 0.
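The initial-path construction just described can be sketched as follows. This is an illustrative implementation under stated assumptions, not the paper's code: `initial_path` and its argument names are invented for the example.

```python
import numpy as np

def initial_path(f, p, x0, N, obs_idx, meas_times, Y):
    """Generate one R_f = 0 initial path X_q[init]: integrate the model map
    forward from x0, but at each measurement time overwrite the observed
    components with the data y_l(tau_k) before taking the next model step.
    The resulting path has zero measurement error, hence zero standard
    action at R_f = 0."""
    X = np.empty((N + 1, len(x0)))
    X[0] = x0
    data_at = {t: Y[k] for k, t in enumerate(meas_times)}
    for n in range(N):
        if n in data_at:
            X[n, obs_idx] = data_at[n]        # substitute y_l(tau_k)
        X[n + 1] = f(X[n], p)
    if N in data_at:
        X[N, obs_idx] = data_at[N]
    return X

# Identity-map illustration: observed component 0 is pinned to the datum 5.0
# at measurement time n = 1, and the model then carries that value forward.
f = lambda x, p: x
Xq = initial_path(f, None, np.array([1.0, 2.0]), N=2,
                  obs_idx=[0], meas_times=[1], Y=np.array([[5.0]]))
```

Drawing each x0 from a uniform distribution over the inferred dynamic range, as in the text, then yields the ensemble of N_I such paths.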

Precision Annealing Procedure
We next move from R_f = 0 to R_f0 > 0 and, using the N_I paths X_q[init], perform an MCMC procedure.
Our first procedure is to use a fixed number of iterations of Metropolis-Hastings (M-H) proposals/acceptance steps comprised of a fixed number of "burn-in" steps followed by a fixed number of iteration steps. The M-H step size is changed as we go along to assure a good acceptance rate.
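A bare-bones version of such a Metropolis-Hastings pass, with the step size adapted during burn-in, might look as follows. This is a sketch under assumptions: the paper only states that the step size is changed to assure a good acceptance rate, so the specific adaptation rule and target rate here are illustrative choices.

```python
import numpy as np

def metropolis_hastings(action, X0, n_burn, n_iter, step=0.1, target=0.4, seed=0):
    """Random-walk Metropolis-Hastings sampling exp(-A(X)).
    During the n_burn burn-in steps the proposal step size is nudged
    toward a target acceptance rate; accepted paths are collected only
    during the subsequent n_iter iteration steps."""
    rng = np.random.default_rng(seed)
    X, A = X0.copy(), action(X0)
    accepted = []
    for i in range(n_burn + n_iter):
        prop = X + step * rng.standard_normal(X.shape)
        A_prop = action(prop)
        acc = np.log(rng.random()) < A - A_prop   # accept w.p. min(1, e^{A - A'})
        if acc:
            X, A = prop, A_prop
        if i < n_burn:
            step *= np.exp(0.05 * (float(acc) - target))  # adapt only in burn-in
        elif acc:
            accepted.append(X.copy())
    return accepted, step

# Sample a 2-D Gaussian through its action A(x) = x.x/2:
acc_paths, tuned_step = metropolis_hastings(lambda x: 0.5 * np.sum(x * x),
                                            np.zeros(2), n_burn=200, n_iter=500)
```

In the PAMC setting, `action` would be the standard-model action at the current value of R_f and `X0` one of the N_I paths carried over from the previous annealing step.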
At the termination of the M-H steps, we will have j = 1, 2, ..., N_A(q, 0) accepted paths X_j^q[init] for each of the q = 1, 2, ..., N_I initial paths. We use these N_A(q, 0) accepted paths to estimate N_I expected paths

X̄^q[0] = (1/N_A(q, 0)) Σ_{j=1}^{N_A(q,0)} X_j^q[init].

These N_I paths X̄^q[0], evaluated at R_f = R_f0 α^0, are set aside and retained for use as initial paths for the next step in the PA procedure. This completes the first step of the PAMC process; R_f = R_f0 α^0 at this step. The PA strategy is now exposed: at R_f = 0 choose a dynamically selected set of N_I initial paths X_q[init]. All these paths have zero action. Then raise the value of R_f to a small positive number, R_f → R_f0 > 0, thus introducing the model error into the action, but keeping R_f quite small, and at this value of R_f use the N_I paths X_q[init] in the selected M-H procedure, resulting in a set of paths 'near' the X_q[init] since R_f is small. The resulting N_I paths at this small value of R_f are then used as initial paths when we raise R_f → R_f0 α. This sequential use of accepted paths from the previous value of R_f comprises the precision annealing approach. Now we describe this in a bit more detail.
As the second step in PAMC we move R_f from R_f0 → R_f0 α^1 with α > 1. At this increased value of R_f we use the same MCMC procedure, but now starting at the X̄^q[0] as the N_I initial paths. This results in j = 1, 2, ..., N_A(q, 1) accepted paths X_j^q[0] for each q. Again we form N_I expected paths using

X̄^q[1] = (1/N_A(q, 1)) Σ_{j=1}^{N_A(q,1)} X_j^q[0].

This completes the second step of the PAMC process; R_f = R_f0 α^1 at this step.
Next we move R_f from R_f0 α^1 → R_f0 α^2. At this increased value of R_f we use the same MCMC procedure, but now starting at the X̄^q[1] as the N_I initial paths. This results in j = 1, 2, ..., N_A(q, 2) accepted paths X_j^q[1] for each q. Again we form N_I expected paths

X̄^q[2] = (1/N_A(q, 2)) Σ_{j=1}^{N_A(q,2)} X_j^q[1].

This completes the third step of the PAMC process; R_f = R_f0 α^2 at this step.

Continue on in this manner, increasing the value of R_f from R_f0 α^(β−1) to R_f0 α^β. This completes the β-th step of the PAMC process; R_f = R_f0 α^β at this step. This 'stepping in β' continues until β is 'large enough'; we will discuss a criterion for that shortly. At this value of 'large enough' β, we will have performed the MCMC procedure one last time (at R_f = R_f0 α^β) to collect, for each q, N_A(q, β) accepted paths X_j^q[β]; j = 1, 2, ..., N_A(q, β).
Finally, we estimate ⟨G(X)⟩ as the average (expected value) over the N_I paths, and this completes our PA Monte Carlo procedure. Note that at each increment of β we use as initial paths the N_I expected paths from the previous β.
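The outer annealing loop just described can be condensed into a short driver. This is a schematic sketch, not the paper's implementation: the interfaces `action_at_Rf(Rf)` (builds the action at a given R_f) and `mh(action, X0)` (returns the accepted paths of one chain) are assumptions made for the example.

```python
import numpy as np

def pamc(action_at_Rf, X_inits, mh, Rf0=1.0, alpha=1.4, n_beta=10):
    """Precision Annealing outer loop: at step beta set Rf = Rf0 * alpha**beta,
    run the chosen MCMC routine from each of the N_I current paths, and carry
    the expected (mean accepted) path of each chain forward as the initial
    path for the next beta. If a chain accepts nothing, keep its start."""
    paths = [X.copy() for X in X_inits]
    for beta in range(n_beta):
        A = action_at_Rf(Rf0 * alpha ** beta)
        paths = [np.mean(acc, axis=0) if (acc := mh(A, X0)) else X0
                 for X0 in paths]
    return paths

# Deterministic stub MCMC that 'accepts' only its initial path, so the
# ensemble passes through unchanged; real use would plug in an M-H sampler.
out = pamc(lambda Rf: (lambda X: 0.5 * Rf * np.sum(X * X)),
           [np.ones(3), 2.0 * np.ones(3)],
           mh=lambda A, X0: [X0],
           n_beta=3)
```

The final `paths` are the N_I expected paths at the largest R_f, over which ⟨G(X)⟩ is then averaged as in the text.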
We evaluate the action on each of the N I paths at each value of R f and plot A(X q ) versus log[R f /R f 0 ]. In such an 'action level' plot, as the precision of the model is increased, if the model is consistent with the data and the number of observed measurements, L, at each τ k is large enough, the action level plot values will become independent of R f and one will stand out as lower than the rest. The path corresponding to that lowest action level will dominate the expected value integral of interest.
We will see this happen in the example discussed in the next section. It also happens in the Laplace approximation to finding the largest values of P(X|Y) [Qui10, Ye16, YRK + 15, YKR + 15]. The interpretation of this transition is that the L independent measurements at each τ_k, y_l(τ_k); l = 1, 2, ..., L, explore enough directions in model state space to reveal, and through the estimation procedure (PAMC) 'cure', the intrinsic local unstable directions in the nonlinear model x(n+1) = f(x(n), p). This happens with higher precision as R_f becomes larger and larger.

Example of PAMC Calculations
We explore the instructional model [Lor06] widely used in numerical weather prediction analyses as a test bed for methods of data assimilation. This model has a D-dimensional state variable obeying

dx_a(t)/dt = x_{a−1}(t)[x_{a+1}(t) − x_{a−2}(t)] − x_a(t) + ν;  a = 1, 2, ..., D,

in which x_{−1}(t) = x_{D−1}(t), x_0(t) = x_D(t), and x_{D+1}(t) = x_1(t). ν is a constant forcing term; the solutions of these equations for D ≥ 4 are chaotic when ν ≈ 8.0 or more. We will report on calculations with D = 5 and with D = 20, with ν = 8.17. Our numerical calculations are 'twin experiments' in which, for a selected D, we choose x(t_0) = x(0) and, using a time step ∆t = 0.025, generate solutions x(t) over an observation window [t_0, t_F]: t_0 ≤ t ≤ t_0 + N∆t = t_F. To each x_a(t) we add Gaussian noise with mean zero and variance σ²; these now comprise our library of 'observed data': y_a(t) = x_a(t) + σN(0, 1). We then select L ≤ D of these noisy data and form the action

A(X) = Σ_{k=1}^{F} Σ_{l=1}^{L} (R_m(n)/2)[x_l(τ_k) − y_l(τ_k)]² + Σ_{n=0}^{N−1} Σ_{a=1}^{D} (R_f/2)[x_a(n+1) − f_a(x(n), ν)]²,    (16)

where R_m(n) is nonzero only when there is a measurement at t_n, and at each of these times L quantities are observed. The first term on the right in Eq. (16) is the measurement error, and the second, the model error.
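The Lorenz96 twin-experiment data generation can be sketched directly from the equations above. The integrator choice (RK4) follows common practice and the noise level σ below is an illustrative value, not necessarily the one used in the paper's experiments.

```python
import numpy as np

def lorenz96(x, nu):
    """Lorenz96 vector field dx_a/dt = x_{a-1}(x_{a+1} - x_{a-2}) - x_a + nu
    with cyclic indices (x_0 = x_D, etc.), as in the text."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + nu

def rk4_step(x, nu, dt):
    """One fourth-order Runge-Kutta step; any standard integrator would do."""
    k1 = lorenz96(x, nu)
    k2 = lorenz96(x + 0.5 * dt * k1, nu)
    k3 = lorenz96(x + 0.5 * dt * k2, nu)
    k4 = lorenz96(x + dt * k3, nu)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Twin-experiment data: integrate from a perturbed fixed point, add noise.
rng = np.random.default_rng(0)
D, nu, dt, N = 20, 8.17, 0.025, 200
x = nu * np.ones(D)
x[0] += 0.01                        # nudge off the unstable fixed point x_a = nu
traj = [x.copy()]
for _ in range(N):
    x = rk4_step(x, nu, dt)
    traj.append(x.copy())
traj = np.asarray(traj)
sigma = 0.2                         # illustrative observation-noise level
y = traj + sigma * rng.standard_normal(traj.shape)
```

Selecting L of the D columns of `y` at the measurement times then provides the 'observed data' entering the action of Eq. (16).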
In Fig. (2) we display the action levels as a function of β at L = 5. We can see that PAMC identifies many action levels, corresponding to many peaks in the conditional probability distribution P (X|Y) ∝ exp[−A(X)], Eq. (16). From β ≈ 30 we see one level moving away from the collection of larger action levels as β increases. However, no action level has become essentially independent of R f suggesting that the accuracy with which the model is enforced remains too small. We expect that as the number of measurements at each τ k is increased more information about the phase space instabilities will be passed from the data to the model and that the structure of the action level plot will change.
In Fig. (3) we now display the action levels and their components, the measurement errors and the model errors, when L = 12. Here the behavior of the action levels is quite different. The model error decreases over a large range of R_f until the numerical stability of the evaluation of this term is reduced, as small errors in x(n+1) − f(x(n), p) are magnified by large values of R_f. As this occurs, the action for each of the N_I paths at each β levels off, becoming essentially independent of R_f, and matches the measurement error, as it must do for consistency [Qui10, Ye16, YRK + 15, YKR + 15].
The PAMC procedure, as does the Laplace approximation version of precision annealing [Qui10, Ye16, YRK + 15, YKR + 15], permits the estimation of the parameter ν at each value of β. In Fig. (4) we display all N I = 50 estimated values of ν at each value of β. As PAMC is an ensemble method sampling in the neighborhood of a peak (or peaks) of the conditional probability distribution, we do not arrive at a single value for ν. Taking the N I values of ν(β) and evaluating the means and standard deviation at each β, we show the result in Fig. (5) in which it is clear that the estimated value of ν becomes essentially independent of β for β ≈ 40 and larger.
Until this point we have examined outcomes of the PAMC estimation procedure. All of the state variables, measured and unmeasured, as well as the forcing parameter were reported over the observation window [0 ≤ t ≤ 5.0]. In a 'twin experiment' such as this, we have generated the data by solving a known dynamical equation and adding noise to the output of the D = 20 time series with a known value of ν. The point of a twin experiment is to test the method of transfer of information in SDA. As we have D − L unobserved state variables at each τ_k, and an unobserved parameter ν, the only tool to determine how well the estimation procedure has done its task is to predict for t > 5 into a prediction window where no information from observations is passed back to the model. We now examine how well the estimation has been performed by predicting both an observed and an unobserved time series among the D available. We already see from Fig. (5) that the input value of ν = 8.17 has been accurately estimated; the apparent bias in this parameter estimation has also been seen in an earlier Monte Carlo twin experiment [KTM + 12, Kos12], and its origins are discussed there.

Figure 3: Action levels for the L = 12 observed variables [x_1(t), x_2(t), x_4(t), x_6(t), x_7(t), x_9(t), x_11(t), x_12(t), x_14(t), x_16(t), x_17(t), x_19(t)]. The actions, the measurement error, and the model error are evaluated as a function of β = log_α[R_f/R_f0], where α = 1.4 and R_f0 = 1.0. We perform the Precision Annealing Monte Carlo (PAMC) calculation starting with N_I initial paths at each R_f. We used N_I = 50 in these calculations; on display here are N_I action, measurement error, and model error values at each R_f (or β). These are evaluated along the expected path resulting from the accepted paths generated during the Metropolis-Hastings procedures from each of the N_I initial paths. In this case, when L = 12, the model error becomes much smaller than the measurement error as β is increased. This leads the action to become effectively equal to the measurement error and essentially independent of R_f. We have seen this before in the precision annealing variational principle calculations [Qui10, Ye16, YRK + 15, YKR + 15].

Fig. (6) shows the observed model variable x_2(t) for the Lorenz96 model with D = 20, L = 12 and ∆t = 0.025, along with the noisy data from solutions of the model equations for the 'observed' variables [1, 2, 4, 6, 7, 9, 11, 12, 14, 16, 17, 19]. The estimation of x_2(t) during the observation window, using PAMC to transfer information from the data to the model, is shown in red, and the prediction x(t ≥ 5), using all the estimated states of the model x(t = 5) and the estimated model parameter, is shown in green. Our knowledge of this dynamical system [Kos12] indicates that the largest global Lyapunov exponent is approximately 1.2 in the time units set by ∆t. The deviation of the predicted trajectory x_2(t) from t ≈ 6.0 is consistent with the accuracy of the estimated state x(t) and this Lyapunov exponent.

Fig. (7) shows the unobserved model variable x_20(t) for the Lorenz96 model with D = 20, L = 12 and ∆t = 0.025, along with the noisy data from solutions of the model equations for the 'observed' variables [1, 2, 4, 6, 7, 9, 11, 12, 14, 16, 17, 19]. The estimation of x_20(t) during the observation window, using PAMC to transfer information from the data to the model, is shown in red, and the prediction x(t ≥ 5), using all the estimated states of the model x(t = 5) and the estimated model parameter, is shown in blue. Our knowledge of this dynamical system [Kos12] indicates that the largest global Lyapunov exponent is approximately 1.2 in the time units set by ∆t. The deviation of the predicted trajectory x_20(t) from t ≈ 6.4 is consistent with the accuracy of the estimated state x(t) and this Lyapunov exponent.

Discussion and Summary
In statistical data assimilation, one transfers information from a set of noisy data Y to models of the observations. The models have errors, and the probability P(X|Y) of the model states, conditioned on the data, plays a central role. From this conditional probability distribution, we want to approximate conditional expected values of functions G(X) of the model state,

⟨G(X)⟩ = ∫ dX G(X) exp[−A(X)] / ∫ dX exp[−A(X)],

where A(X) ∝ −log[P(X|Y)] is the 'action' associated with the information transfer process during an observation window in time, when the information transfer occurs. Observations of the dynamical system underlying the measurements may be sparse; the number of measurements one is able to accomplish at any moment in time is typically small compared to the degrees of freedom in the model. However, one requires some approximate knowledge of the full state of the model at the final time-point of the observation window.

Figure 6: We display the observed dynamical variable x_2(t) for the time interval 0 ≤ t ≤ 10.0. In black is the full set of data. In red is the estimated x_2(t) over the observation window 0 ≤ t ≤ 5.0, and in green is the predicted x_2(t) over the prediction window 5.0 < t ≤ 10.0. The prediction uses the values of x(t = 5.0) for the full estimated state at the end of the observation window as well as the parameter ν estimated in the PAMC procedure. This calculation uses the Lorenz96 model with D = 20 and L = 12; ∆t = 0.025.

Figure 7: We display the unobserved dynamical variable x_20(t) for the time interval 0 ≤ t ≤ 10.0. In black is the full set of data. In red is the estimated x_20(t) over the observation window 0 ≤ t ≤ 5.0, and in blue is the predicted x_20(t) over the prediction window 5.0 < t ≤ 10.0. The prediction uses the values of x(t = 5.0) for the full estimated state at the end of the observation window as well as the parameter ν estimated in the PAMC procedure. This calculation uses the Lorenz96 model with D = 20 and L = 12; ∆t = 0.025.
This means one must estimate the unmeasured model state variables as well as any unknown time independent model parameters, then validate the model with predictions for times after the observation window.
In this paper we have addressed approximating such integrals using a precision annealing Monte Carlo method. In the context of a model x(t_{n+1}) = f(x(t_n), p) and observations y_l(τ_k) at times t_0 ≤ τ_k ≤ t_F (with t_F = t_0 + N∆t), the action reflects Gaussian errors of the measurements and of the nonlinear model, given by

A(X) = Σ_{k=1}^{F} Σ_{l=1}^{L} (R_m(n)/2)[x_l(τ_k) − y_l(τ_k)]² + Σ_{n=0}^{N−1} Σ_{a=1}^{D} (R_f/2)[x_a(n+1) − f_a(x(n), p)]²,

where R_m(n) is nonzero only when there is a measurement at t_n. The precision of the model error is R_f, and the annealing procedure is initiated at R_f very small, then continued to a very large R_f. The core idea is that when R_f is small, the global minimum of A(X) is easily identifiable where x_l(τ_k) ≈ y_l(τ_k). Increasing R_f slowly allows one to track the global minimum as the nonlinearity in the action plays a more and more significant role. The details of this PAMC procedure, implemented by a Metropolis-Hastings Monte Carlo method at each R_f, are given as a general outline. We then present results in detail for an instructional model, the Lorenz96 [Lor06] equations, widely used to explore geophysical SDA methods.
In addition to the PAMC method, we introduce an initialization method for selecting a starting point in path space X. From this starting point, we begin to make proposals and accept new samples in order to evaluate the conditional probability distribution.
Our PAMC methods are clearly not restricted to the specific example used here to demonstrate their operation, nor is the use of a Metropolis-Hastings procedure required in their implementation. We will follow this paper with one describing the use of a Hamiltonian Monte Carlo (HMC) procedure [DKPR87, Nea11, Bet18].
How is one to choose between the use of a precision annealing method for the Laplace approximation to expected value integrals and Monte Carlo methods (Metropolis-Hastings or HMC)? The key difference among the methods is that Metropolis-Hastings Monte Carlo does not require carrying along Jacobians or Hessians of the action A(X), and it samples the conditional probability distribution with paths X in model state space. The Laplace method requires solving for zeros of the Jacobian ∂A(X)/∂X and results in a single path in model state space at the overall minimum of the action. The HMC method is a hybrid of these which requires a symplectic integrator of the 'Hamiltonian' H(P, X) = P²/2M + A(X) and uses ∂A(X)/∂X to move about in 'canonical' {P, X} space. Neither Monte Carlo method requires evaluating or storing higher derivatives of the action, and each samples the conditional probability distribution in path space, while the Laplace method does not. At this early stage of development of these methods, we do not have a firm recommendation as to which one to select in general. From the calculations on a high dimensional Lorenz96 model, it appears that on this test model, all approaches yield excellent results when enough measurements are made at each measurement time in an observation window.
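For concreteness, a single HMC step built around the Hamiltonian H(P, X) = P²/2M + A(X) can be sketched as below. This is an assumed minimal illustration of the standard leapfrog scheme, not the implementation of the follow-up paper; `hmc_step` and its parameters are invented names.

```python
import numpy as np

def hmc_step(A, gradA, X, eps=0.05, n_leap=20, M=1.0, seed=0):
    """One Hamiltonian Monte Carlo step: draw a Gaussian momentum P,
    integrate Hamilton's equations for H(P, X) = P.P/(2M) + A(X) with the
    symplectic leapfrog scheme, then accept/reject on the change in H."""
    rng = np.random.default_rng(seed)
    P = np.sqrt(M) * rng.standard_normal(X.shape)
    H0 = np.sum(P * P) / (2 * M) + A(X)
    Xn, Pn = X.copy(), P - 0.5 * eps * gradA(X)      # initial half kick
    for _ in range(n_leap - 1):
        Xn = Xn + eps * Pn / M                        # drift
        Pn = Pn - eps * gradA(Xn)                     # full kick
    Xn = Xn + eps * Pn / M                            # last drift
    Pn = Pn - 0.5 * eps * gradA(Xn)                   # final half kick
    H1 = np.sum(Pn * Pn) / (2 * M) + A(Xn)
    return (Xn, True) if np.log(rng.random()) < H0 - H1 else (X, False)

# One step on the quadratic action A(x) = x.x/2, whose gradient is x:
Xnew, was_accepted = hmc_step(lambda x: 0.5 * np.sum(x * x),
                              lambda x: x, np.zeros(3))
```

The contrast with the Metropolis-Hastings sampler is visible here: the gradient ∂A(X)/∂X appears in every leapfrog kick, which is exactly the extra cost, and benefit, the text attributes to HMC.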