Joint state-parameter estimation of a nonlinear stochastic energy balance model from sparse noisy data

While nonlinear stochastic partial differential equations arise naturally in spatiotemporal modeling, inference for such systems often faces two major challenges: sparse noisy data and ill-posedness of the inverse problem of parameter estimation. To overcome these challenges, we introduce a strongly regularized posterior by normalizing the likelihood and by imposing physical constraints through priors on the parameters and states. We investigate joint parameter-state estimation by the regularized posterior in a physically motivated nonlinear stochastic energy balance model (SEBM) for paleoclimate reconstruction. The high-dimensional posterior is sampled by a particle Gibbs sampler that combines MCMC with an optimal particle filter exploiting the structure of the SEBM. In tests using either Gaussian or uniform priors based on the physical range of the parameters, the regularized posteriors overcome the ill-posedness and lead to samples within physical ranges, quantifying the uncertainty in estimation. Due to the ill-posedness and the regularization, the posterior of the parameters exhibits a relatively large uncertainty; consequently, the maximum of the posterior, which is the minimizer in a variational approach, can vary widely. In contrast, the posterior of the states generally concentrates near the truth, substantially filtering out observation noise and reducing the uncertainty relative to the unconstrained SEBM.


Introduction
Physically motivated nonlinear stochastic (partial) differential equations (SDEs and SPDEs) are natural models of spatiotemporal processes with uncertainty in geoscience. In particular, such models arise in the problem of reconstructing geophysical fields from sparse and noisy data (see e.g. [20,42,45] and the references therein). The nonlinear differential equations, derived from physical principles, often come with unknown but physically constrained parameters that must also be determined from data. This leads to the problem of joint state-parameter estimation from sparse and noisy data. When the parameters are interrelated, which is often the case in nonlinear models, their estimation can be an ill-posed inverse problem. Physical constraints on the parameters must then be taken into account. In variational approaches, physical constraints are imposed using a regularization term in a cost function, whose minimizer provides an estimator of the parameters and states. In a Bayesian approach, the physical constraints are encoded in prior distributions, extending the regularized cost function of the variational approach to a posterior and quantifying the estimation uncertainty. When the true parameters are known, the Bayesian approach has demonstrated great success in state estimation, thanks to developments in Monte Carlo sampling and data assimilation techniques (see e.g. [8,25,49]). However, the problem of joint state-parameter estimation, especially when the parameter estimation is ill-posed, has had relatively little success in nonlinear cases and remains a challenge [23].
In this paper, we investigate a Bayesian approach for joint state and parameter estimation of a nonlinear two-dimensional stochastic energy balance model (SEBM) in the context of spatiotemporal paleoclimate reconstructions of temperature fields from sparse and noisy data [16,19,44,46]. In particular, we consider a model of the energy balance of the atmosphere similar to those often used in idealized climate models (see e.g. [17,41,50]) to study climate variability and climate sensitivity. The use of such a model in paleoclimate reconstruction aims at improving the physical consistency of temperature reconstructions during e.g. the last deglaciation and the Holocene by combining indirect observations, so-called proxy data, with physically motivated stochastic models. The SEBM models surface air temperature, explicitly taking into account sinks, sources, and horizontal transport of energy in the atmosphere, with an additive stochastic forcing incorporated to account for unresolved processes and scales. The model takes the form of a nonlinear SPDE with unknown parameters to be inferred from data. These unknown parameters are associated with processes in the energy budget (e.g. radiative transfer, air-sea energy exchange) that are represented in a simplified manner in the SEBM, and may change with a changing climate. The parameters must fall in a prescribed range such that the SEBM is physically meaningful. Specifically, they must be in sufficiently close balance for the stationary temperature of the SEBM to be within a physically realistic range. As we will show, the parametric terms arising from this physically based model are strongly correlated, leading to a Fisher information matrix that is ill-conditioned. Therefore, the parameter estimation is an ill-posed inverse problem, and the maximum likelihood estimators of individual parameters have large variations and often fall outside the physical range. To overcome the ill-posedness in parameter estimation, we introduce a new strongly regularized posterior by normalizing the likelihood and by imposing the physical constraints through priors on the parameters and the states, based on the physical ranges and the climatological distribution. In the regularized posterior, the prior has the same weight as the normalized likelihood, enforcing that the support of the posterior lies in the physical range. Such a regularized posterior is a natural extension of the regularized cost function in a variational approach: the maximum of the posterior (MAP) is the same as the minimizer of the regularized cost function, but the posterior quantifies the uncertainty in the estimator.
The regularized posterior of the states and parameters is high-dimensional and non-Gaussian. It is represented by its samples, which provide an empirical approximation of the distribution and allow efficient computation of quantities of interest such as posterior means. The samples are drawn using a particle Gibbs sampler with ancestor sampling (PGAS) [28], a special sampler in the family of particle Markov chain Monte Carlo (MCMC) methods [2] that combines the strengths of both MCMC and sequential Monte Carlo methods (see e.g. [15]) to ensure the convergence of the empirical approximation to the high-dimensional posterior. In the PGAS, we use an optimal particle filter that exploits the forward structure of the SEBM.
We consider two priors for the parameters, each based on their physical ranges: a uniform prior and a Gaussian prior with three standard deviations inside the range. We impose a prior for the states based on their overall climatological distribution. Tests show that the regularized posteriors overcome the ill-posedness and lead to samples of parameters and states within the physical ranges, quantifying the uncertainty in their estimation. Due to the regularization, the posterior of the parameters is supported on a relatively large range. Consequently, the MAP of the parameters has a large variation, and it is important to use the posterior to assess the uncertainty. In contrast, the posterior of the states generally concentrates near the truth, substantially filtering out the observational noise and reducing the uncertainty in state reconstruction. Tests also show that the regularized posterior is robust to spatial sparsity of observations, with sparser observations leading to larger uncertainties. However, due to the need for regularization to overcome ill-posedness, the uncertainty in the posterior of the parameters cannot be eliminated by increasing the number of observations in time. Therefore, we suggest alternative approaches, such as re-parametrization of the nonlinear function according to the climatological distribution or nonparametric Bayesian inference (see e.g. [18,36]), to avoid ill-posedness.
The rest of the paper is organized as follows. Section 2 introduces the SEBM and its discretization, and formulates a state-space model. We also outline in this section the Bayesian approach to the joint parameter-state estimation and the particle MCMC samplers. Section 3 analyzes the ill-posedness of the parameter estimation problem and introduces the regularized posterior. The regularized posterior is sampled by PGAS and numerical results are presented in Section 4. Discussions and conclusions are presented in Sections 5 and 6. Technical details of the estimation procedure are described in Appendix 7.
State-space model formulation

After providing a brief physical introduction to the SEBM, we present its discretization and the observation model, representing them as a state-space model suitable for the application of sequential Monte Carlo methods in Bayesian inference.

The stochastic energy balance model
The SEBM describes the evolution in space (both latitude and longitude) and time of the surface air temperature $u(t,\xi)$:

$\partial_t u(t,\xi) - \nu \Delta u(t,\xi) = g_\theta(u) + f(t,\xi)$,    (2.1)

where $\xi \in [-\pi,\pi] \times [-\pi/2,\pi/2]$ is the two-dimensional coordinate on the sphere and the solution $u(t,\xi)$ is periodic in longitude. Horizontal energy transport is represented as diffusion with diffusivity $\nu$, while sources and sinks of atmospheric internal energy are represented by the nonlinear function $g_\theta(u)$ with the unknown parameters $\theta$. Upper and lower bounds of these three parameters, shown in Table 1, are derived from the energy balance model in [17], adjusted to current estimates of the Earth's global energy budget from [47] using appropriate simplifications. The equilibrium solution of the SEBM for the average values of the parameters closely approximates the current global mean temperature, and the magnitude of sinks and sources approximates the respective magnitudes in [47] well. The physical ranges of the parameters are very conservative and cover current estimates of the global mean temperature during the Quaternary [43]. The state variable and the parameters in the model have been nondimensionalized so that the equilibrium solution of Eqn. (2.1) with $f = 0$ is approximately equal to one. The quartic nonlinearity of the function $g_\theta(u)$ arises from the Stefan-Boltzmann dependence of longwave radiative fluxes on atmospheric temperature, while a linear feedback is included to represent the state dependence of e.g. surface energy fluxes and albedo. Inclusion of quadratic and cubic nonlinearities in $g_\theta(u)$ (to account for nonlinearities in the feedbacks just noted) was found to exacerbate the ill-posedness of the model without qualitatively changing the character of the model dynamics within the parameter range appropriate for the study of Quaternary climate variability (e.g. without admitting multiple deterministic equilibria associated with the ice-albedo feedback). In reality, the diffusivity $\nu$ and the parameters $\theta_j$, $j \in \{0, 1, 4\}$, will depend on latitude, longitude, and time. We neglect this complexity in our idealized analysis.
The stochastic term $f(t,\xi)$, which models the net effect of unresolved or oversimplified processes in the energy budget, is a centered Gaussian field that is white in time and colored in space, specified by an isotropic Matérn covariance function with order $\alpha = 1$ and scale $\rho > 0$. That is, the covariance kernel $C(r)$ is the Matérn kernel

$C(r) = \sigma_f^2 \, \dfrac{2^{1-\alpha}}{\Gamma(\alpha)} \left(\dfrac{r}{\rho}\right)^{\alpha} K_\alpha\!\left(\dfrac{r}{\rho}\right),$

where $\Gamma$ is the gamma function, $\rho$ is a scaling factor, and $K_\alpha$ is the modified Bessel function of the second kind. We focus on the estimation of the parameters $\theta$ and assume that $\nu$ and the parameters of $f$ are known. Estimating $\nu$ in energy balance models with data assimilation methods is studied in [3], whereas estimation of the parameters of $f$ in the context of linear SPDEs is covered, for example, in [26].
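For concreteness, the following sketch evaluates a Matérn covariance in the standard parameterization (marginal variance $\sigma_f^2$, order $\alpha$, scale $\rho$); the precise scaling convention in the paper follows [26] and may differ by constant factors, and the numerical defaults below are illustrative only.

```python
import numpy as np
from scipy.special import gamma, kv  # kv: modified Bessel function of the second kind

def matern_cov(r, sigma2=0.1**2, alpha=1.0, rho=0.5):
    """Matern covariance with marginal variance sigma2, order alpha, scale rho:
    C(r) = sigma2 * 2^(1-alpha) / Gamma(alpha) * (r/rho)^alpha * K_alpha(r/rho)."""
    r = np.atleast_1d(np.asarray(r, dtype=float))
    c = np.empty_like(r)
    at_zero = (r == 0.0)
    c[at_zero] = sigma2                      # C(0) = sigma2 in the limit r -> 0
    s = r[~at_zero] / rho
    c[~at_zero] = sigma2 * 2.0 ** (1.0 - alpha) / gamma(alpha) * s ** alpha * kv(alpha, s)
    return c
```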
In a paleoclimate context, temperature observations are sparse (in space and time) and derived from climatic proxies, such as pollen assemblages, isotopic compositions, and tree rings, that are indirect measures of the climate state. To simplify our analysis, we neglect the potentially nonlinear transformations associated with the proxies and focus on the effect of observational sparseness. This is a common strategy in the testing of climate field reconstruction methods [51]. As such, we take the data to be noisy observations of the solution at $d_o$ locations:

$y_i(t) = H_i(u(t,\cdot)) + \epsilon_i(t), \quad i = 1, \dots, d_o,$    (2.5)

where each $\xi_i \in [-\pi,\pi] \times [-\pi/2,\pi/2]$ is a location of observation, $H$ is the observation operator, and $\epsilon_i(t) \sim N(0, \sigma^2)$ are iid Gaussian noise. The data are sparse in the sense that only a small number of the spatial locations are observed.

State-space model representation
In practice, the differential equations are represented by their discretized systems and the observations are discrete in time; therefore, we consider only the state-space model based on a discretization of the SEBM. We refer the reader to [4,21,30,34,40] for studies of inference for SPDEs in a continuous-time setting.

The state model
We discretize the SPDE (2.1) using linear finite elements in space and a semi-backward Euler method in time, using the computationally efficient Gaussian Markov random field approximation of the Gaussian field from [26] (see details in Section 7.1). We write the discretized equation as a standard state-space model:

$U_{n+1} = \mu_\theta(U_n) + W_{n+1},$    (2.6)

where $\mu_\theta$ is the deterministic function defined in (7.19) and $\{W_n\}$ is a sequence of iid Gaussian noise with mean zero and covariance $R$, described in more detail in Section 7.1. Therefore, the transition probability density $p_\theta(u_{n+1} \mid u_n)$, the probability density of $U_{n+1}$ conditional on $U_n$ and $\theta$, is

$p_\theta(u_{n+1} \mid u_n) = \det(2\pi R)^{-1/2} \exp\left( -\dfrac{(u_{n+1} - \mu_\theta(u_n))^T R^{-1} (u_{n+1} - \mu_\theta(u_n))}{2} \right).$    (2.7)
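As an illustration of how the transition density (2.7) can be evaluated numerically, here is a minimal sketch; the map `mu_theta` is a placeholder for the model-specific function of (7.19), and the commented drift is hypothetical.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def log_transition_density(u_next, u, theta, mu_theta, R):
    """log p_theta(u_next | u) for the Gaussian state model (2.6)-(2.7).
    mu_theta(u, theta) is the one-step deterministic map; R is the noise covariance."""
    resid = u_next - mu_theta(u, theta)
    quad = resid @ cho_solve(cho_factor(R, lower=True), resid)  # resid^T R^{-1} resid
    _, logdet = np.linalg.slogdet(2.0 * np.pi * R)              # log det(2 pi R)
    return -0.5 * (logdet + quad)

# Hypothetical placeholder drift, for illustration only (the actual mu_theta
# comes from the finite element discretization in Section 7.1):
# mu_theta = lambda u, th: u + 0.01 * (th[0] + th[1] * u + th[2] * u**4)
```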

The observation model
In discrete form, we assume that the locations of observation are nodes of the finite element mesh. The observation function in (2.5) is then simply $H_i(U_n) = U_{n,k_i}$, with $k_i \in \{1, \dots, d_b\}$ denoting the index of the node under observation, for $i = 1, \dots, d_o$, and we can write the observation model as

$y_n = H U_n + \epsilon_n,$    (2.8)

where $H \in \mathbb{R}^{d_o \times d_b}$ is called the observation matrix and $\{\epsilon_n\}$ is a sequence of iid Gaussian noise with distribution $N(0, Q)$, where $Q = \mathrm{Diag}\{\sigma_i^2\}$. Equivalently, the probability of observing $y_n$ given the state $U_n$ is

$p(y_n \mid U_n) = \det(2\pi Q)^{-1/2} \exp\left( -\dfrac{(y_n - H U_n)^T Q^{-1} (y_n - H U_n)}{2} \right).$    (2.9)
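A minimal sketch of assembling the observation matrix $H$ and evaluating (2.9) for the common case $Q = \sigma^2 I$; the node indices in the usage comment are hypothetical.

```python
import numpy as np

def make_obs_matrix(obs_nodes, d_b):
    """Observation matrix H in (2.8): row i selects observed node k_i."""
    H = np.zeros((len(obs_nodes), d_b))
    H[np.arange(len(obs_nodes)), obs_nodes] = 1.0
    return H

def log_obs_density(y, u, H, sigma):
    """log p(y_n | U_n) in (2.9) with Q = sigma^2 * I."""
    resid = y - H @ u
    return -0.5 * (resid @ resid) / sigma**2 \
           - 0.5 * y.size * np.log(2.0 * np.pi * sigma**2)

# e.g. six of the 12 nodes observed (the node choice here is hypothetical):
# H = make_obs_matrix([0, 2, 4, 6, 8, 10], d_b=12)
```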

Bayesian inference for SSM
Given observations $y_{1:N} := (y_1, \dots, y_N)$, our goal is to jointly estimate the states $U_{1:N} := (U_1, \dots, U_N)$ and the parameter vector $\theta := (\theta_0, \theta_1, \theta_4)$ in the state-space model (2.6)-(2.9). The Bayesian approach estimates the joint distribution of $(U_{1:N}, \theta)$ conditional on the observations by drawing samples to form an empirical approximation of the high-dimensional posterior. The empirical posterior efficiently quantifies the uncertainty in the estimation, and the Bayesian approach has therefore been widely used (see the review [23] and the references therein). Following Bayes' rule, the joint posterior distribution of $(U_{1:N}, \theta)$ can be written as

$p(\theta, u_{1:N} \mid y_{1:N}) = p(\theta) \, \dfrac{p_\theta(u_{1:N}) \, p_\theta(y_{1:N} \mid u_{1:N})}{p_\theta(y_{1:N})},$    (2.10)

where $p(\theta)$ is the prior of the parameters and $p_\theta(y_{1:N}) = \int p_\theta(u_{1:N}) \, p_\theta(y_{1:N} \mid u_{1:N}) \, du_{1:N}$ is the unknown marginal probability density function of the observations. In the importance sampling approximation to the posterior, we do not need to know the value of $p_\theta(y_{1:N})$: as a normalizing constant, it cancels out in the importance weights of the samples. The quantity $p_\theta(y_{1:N} \mid u_{1:N})$ is the likelihood of the observations $y_{1:N}$ conditional on the states $U_{1:N}$ and the parameter $\theta$, which can be derived explicitly from the observation model (2.8):

$p_\theta(y_{1:N} \mid u_{1:N}) = p(y_{1:N} \mid u_{1:N}) = \prod_{n=1}^{N} p(y_n \mid u_n),$    (2.11)

with $p(y_n \mid u_n)$ given in (2.9). Finally, the probability density function of the states $U_{1:N}$ given the parameter $\theta$ can be derived from the state model (2.6):

$p_\theta(u_{1:N}) = p(u_1) \prod_{n=1}^{N-1} p_\theta(u_{n+1} \mid u_n),$    (2.12)

with $p_\theta(u_{n+1} \mid u_n)$ specified by (2.7).

Sampling the posterior by particle MCMC methods
In practice, we are interested in the expectations of quantities of interest or the probabilities of certain events. These computations involve integrals against the posterior that can be computed neither analytically nor by numerical quadrature, due to the curse of dimensionality: the posterior is a high-dimensional non-Gaussian distribution involving variables with dimension on the scale of thousands to millions. Monte Carlo methods generate samples to approximate the posterior by the empirical distribution, so that quantities of interest can be computed efficiently. Markov chain Monte Carlo (MCMC) methods are popular Monte Carlo methods (see e.g. [29]) that generate samples along a Markov chain with the posterior as the invariant measure. For joint distributions of parameters and states, a standard MCMC method is Gibbs sampling, which consists of alternately updating the state variable $U_{1:N}$ conditional on $\theta$ and $y_{1:N}$ by sampling

$p(u_{1:N} \mid \theta, y_{1:N}) = \dfrac{p_\theta(u_{1:N}) \, p_\theta(y_{1:N} \mid u_{1:N})}{p_\theta(y_{1:N})},$    (2.13)

and then updating the parameter $\theta$ conditional on $U_{1:N} = u_{1:N}$ by sampling the marginal posterior of $\theta$:

$p(\theta \mid u_{1:N}, y_{1:N}) = p(\theta \mid u_{1:N}) \propto p(\theta) \, p_\theta(u_{1:N}).$    (2.14)

Due to the high dimensionality of $U_{1:N}$, a major difficulty in sampling $p(u_{1:N} \mid \theta, y_{1:N})$ is the design of efficient proposal densities that can effectively explore its support.
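Schematically, the Gibbs alternation between (2.13) and (2.14) has the following structure; `csmc_update` and `sample_theta_posterior` are placeholders for the conditional SMC and parameter moves detailed in Section 7.3, not implementations from the paper.

```python
def particle_gibbs(y, theta0, u0, n_iters, csmc_update, sample_theta_posterior):
    """Skeleton of the Gibbs alternation between (2.13) and (2.14).

    csmc_update(theta, u_ref, y)  -> new trajectory u_{1:N}  (conditional SMC)
    sample_theta_posterior(u, y)  -> new parameter sample theta
    Both callables are placeholders for the routines of Section 7.3.
    """
    theta, u = theta0, u0
    chain = []
    for _ in range(n_iters):
        u = csmc_update(theta, u, y)          # sample p(u_{1:N} | theta, y_{1:N})
        theta = sample_theta_posterior(u, y)  # sample p(theta | u_{1:N})
        chain.append((theta, u))
    return chain
```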
Another group of rapidly developing Monte Carlo methods are sequential Monte Carlo (SMC) methods [7,15], which exploit the sequential structure of state-space models to approximate the posterior densities $p(u_{1:n} \mid \theta, y_{1:n})$ sequentially. SMC methods are efficient but suffer from the well-known problem of depletion (or degeneracy), in which the marginal distribution $p(u_n \mid \theta, y_{1:N})$ becomes concentrated on a single sample as $N - n$ increases (see Section 7.2 for more details). The particle MCMC methods introduced in [2] provide a framework for systematically combining SMC and MCMC. In this study, we sample the posterior by PGAS [28], a particle MCMC method that enhances the mixing of the Markov chain by sampling the ancestor of the reference trajectory. For the SMC, we use an optimal particle filter, which takes advantage of the linear Gaussian observation model and the Gaussian transition density of the state variables in our current SEBM. More generally, when the observation model is nonlinear and the transition density is non-Gaussian, the optimal particle filter can be replaced by implicit particle filters [12,35] or local particle filters [38,39]; we refer to [8,25,49] for other data assimilation techniques. The details of the algorithm are provided in Section 7.3.
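For the linear Gaussian observation model (2.8) and Gaussian transition density (2.7), the optimal importance density $p(u_n \mid u_{n-1}, y_n)$ is Gaussian and available in closed form by standard conjugacy; the following sketch computes its mean and covariance (a textbook result, not code from the paper).

```python
import numpy as np

def optimal_proposal(u_prev, y, theta, mu_theta, R, H, Q):
    """Mean and covariance of the optimal importance density p(u_n | u_{n-1}, y_n)
    for a Gaussian transition and a linear Gaussian observation model."""
    m = mu_theta(u_prev, theta)                   # prior mean mu_theta(u_{n-1})
    Rinv = np.linalg.inv(R)
    Qinv = np.linalg.inv(Q)
    Sigma = np.linalg.inv(Rinv + H.T @ Qinv @ H)  # proposal covariance
    mean = Sigma @ (Rinv @ m + H.T @ Qinv @ y)    # proposal mean
    return mean, Sigma

# The associated importance weight uses the predictive density
# p(y_n | u_{n-1}) = N(y_n; H m, Q + H R H^T).
```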

Ill-posedness and regularized posteriors
In this section, we first demonstrate and then analyze the failure of standard Bayesian inference of the parameters based on the posterior (2.10). The standard Bayesian inference of the parameters fails in the sense that the posterior (2.10) tends to place a large probability mass at non-physical parameter values.
In the process of approximating the posterior by samples, the sampled parameter values often either hit the (upper or lower) bounds in Table 1 when we use a uniform prior, or exceed these bounds when we use a Gaussian prior. As we show next, the standard Bayesian inverse problem is numerically ill-posed because the Fisher information matrix is ill-conditioned, which makes the inference numerically unreliable. Following the idea of regularization in variational approaches, we propose to use regularized posteriors in the Bayesian inference. This approach unifies the Bayesian and variational approaches: the MAP is the minimizer of the regularized cost function in the variational approach, but the Bayesian approach quantifies the uncertainty of the estimator by the posterior.

Model settings and tests
Based on the physical upper and lower bounds in Table 1, we consider two priors for the parameters: a uniform distribution on these intervals and a Gaussian distribution centered at the median and with three standard deviations inside the interval, as listed in Table 2. Throughout this study, we consider a relatively small numerical mesh for the SPDE, with only 12 nodes for the finite elements. Such a small mesh provides a toy model that can neatly represent the spatial structure on the sphere while allowing for systematic assessment of the statistical properties of the Bayesian inference at moderate computational cost. Numerical tests show that the FEM semi-backward Euler scheme above is stable for a time step size $\Delta t = 0.01$ and a stochastic forcing with scale $\sigma_f = 0.1$. A realization of the solution is shown in Figure 1 (left and middle), where we present the solution on the sphere at a fixed time with the 12-node finite element mesh, as well as the trajectories of all 12 nodes. The standard deviation of the observation noise is set to $\sigma = 0.01$, i.e. one order of magnitude smaller than the stochastic forcing and two orders of magnitude smaller than the climatological mean. We first assume that six out of the 12 nodes are observed; we discuss results obtained using sparser or denser observations in the discussion section. Figure 1 also shows the climatological probability histogram of the true state variables and the partial noisy observations. The climatological distribution of the observations is close to that of the true state variables (with a slightly larger variance due to the noise). The histograms show that the state variables are centered around 1 and vary mostly in the interval $[0.92, 1.05]$. We shall use a Gaussian approximation based on the climatological distribution of the partial noisy observations as a prior to constrain the state variables.
We summarize the settings of the numerical tests in Table 3.


Ill-posedness of the standard Bayesian inference of parameters
By the Bernstein-von Mises theorem (see e.g. [48, Chapter 10]), the posterior distribution of the parameters conditional on the true state data approaches the likelihood distribution as the data size increases. That is, $p(\theta \mid u_{1:N})$ in (2.14) becomes close to the likelihood distribution $p(u_{1:N} \mid \theta)$ (which can be viewed as a distribution of $\theta$) as the data size increases. Therefore, if the likelihood distribution is numerically degenerate (in the sense that some components are undetermined), then the Bayesian posterior will also become close to degenerate, so that the Bayesian inference for parameter estimation will be ill-posed. In the following, we show that for this model the likelihood is degenerate even if the full states are observed with zero observation noise, and that the maximum likelihood estimators have large nonphysical fluctuations (particularly when the states are noisy). As a consequence, standard Bayesian parameter inference fails by yielding nonphysical samples.
We show first that the likelihood distribution is numerically degenerate because the Fisher information matrix is ill-conditioned. Following the transition density (2.7), the log-likelihood of the states $u_{1:N}$ is

$l(\theta, u_{1:N}) = c - \dfrac{1}{2} \sum_{n=1}^{N-1} (u_{n+1} - \mu_\theta(u_n))^T R^{-1} (u_{n+1} - \mu_\theta(u_n)),$

where $c$ is a constant independent of $(\theta, u_{1:N})$. Since $\mu_\theta(\cdot)$ is linear in $\theta$ (cf. Equation (7.19)), the likelihood function is quadratic in $\theta$, and the corresponding scaled Fisher information matrix has entries

$\mathcal{I}_{jk}(u_{1:N}) = \dfrac{1}{N} \sum_{n} G_{\theta,j}(u_n)^T R^{-1} G_{\theta,k}(u_n),$

where the vectors $G_{\theta,k}(u_n) \in \mathbb{R}^{d_b}$ are defined in (7.20). Figure 2 shows the means and standard deviations of the condition numbers (the ratio between the maximum and minimum singular values) of the Fisher information matrices from 100 independent simulations. Each of these simulations generates a long trajectory of length $10^5$ using a parameter drawn randomly from the prior, and computes the Fisher information matrices from the true trajectory of all 12 nodes, for subsamples of lengths $N$ ranging from $10^2$ to $10^5$. For both Gaussian and uniform priors, the condition numbers are on the scale of $10^8$-$10^{11}$, and therefore the Fisher information matrix is ill-conditioned. In particular, the condition number increases as the data size increases, reflecting the ill-posedness of the inverse problem of parameter estimation.
The ill-conditioned Fisher information matrix leads to highly variable maximum likelihood estimators (MLEs), which follows from (7.20). The ill-posedness is particularly problematic when $u_{1:N}$ is observed with noise, as the ill-conditioned Fisher information matrix amplifies the noise in the observations and leads to nonphysical estimators. Figure 3 shows the means and standard deviations of the errors of MLEs computed from true and noisy trajectories in 100 independent simulations. In each of these simulations, the "noisy" trajectory is obtained by adding white noise with standard deviation $\sigma = 0.01$ to a "true" trajectory generated from the system with a true parameter randomly drawn from the prior. For both Gaussian and uniform priors, the standard deviations and means of the errors of the MLEs from the noisy trajectories are one order of magnitude larger than those from the true trajectories. In particular, the variations are large when the data size is small. For example, when $N = 100$, the standard deviation of the MLE for $\theta_0$ from noisy observations is on the order of $10^3$, two orders of magnitude larger than its physical range in Table 2. The standard deviations decrease as the data size increases, at the expected rate of $1/\sqrt{N}$. However, the errors are too large to be practically reduced by increasing the size of the data: for example, a data size $N = 10^{10}$ would be needed to reduce the standard deviation of $\theta_4$ to less than 0.1 (about 10% of the size of the physical range $[-6.00, -4.80]$ specified in Table 2). In summary, the ill-posedness leads to parameter estimators with large variations that are far outside the physical ranges of the parameters.
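Because the likelihood is quadratic in $\theta$, the MLE solves a linear system with the Fisher matrix as coefficient matrix, so its conditioning directly controls noise amplification. A sketch, assuming the linear-in-$\theta$ parameterization $\mu_\theta(u) = m(u) + \sum_k \theta_k G_k(u)$ suggested by (7.19)-(7.20):

```python
import numpy as np

def fisher_and_mle(u, G, m, R):
    """Scaled Fisher matrix, its condition number, and the MLE for a drift
    linear in theta: mu_theta(u_n) = m(u_n) + G(u_n) @ theta  (assumed form).

    u : (N+1, d) trajectory;  G(u_n) -> (d, K) matrix;  m(u_n) -> (d,) vector.
    """
    N = u.shape[0] - 1
    K = G(u[0]).shape[1]
    Rinv = np.linalg.inv(R)
    info = np.zeros((K, K))  # scaled Fisher information
    b = np.zeros(K)
    for n in range(N):
        Gn = G(u[n])
        info += Gn.T @ Rinv @ Gn / N
        b += Gn.T @ Rinv @ (u[n + 1] - m(u[n])) / N
    theta_mle = np.linalg.solve(info, b)   # normal equations for the quadratic likelihood
    cond = np.linalg.cond(info)            # ratio of largest to smallest singular value
    return theta_mle, cond
```

With condition numbers of $10^8$-$10^{11}$, perturbations of size 0.01 in the trajectory can shift individual components of `theta_mle` by several orders of magnitude, consistent with Figure 3.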
Figure 3: The standard deviations and means of the errors of the MLEs, computed from true and noisy trajectories, out of 100 independent simulations with true parameters sampled from the Gaussian and uniform priors. In all cases, the deviations and biases (i.e. means of errors) are large. In particular, in the case of noisy observations, the deviations are of orders ranging from 10 to 1000, far beyond the physical ranges of the parameters in Table 1. Though the deviations decrease as the data size increases, an impractically large data size would be needed to reduce them to the physical range. Also, the means of the errors are larger than the size of the physical ranges of the parameters, with values that decay slowly as the data size increases.

Regularized posteriors
To overcome the ill-posedness of the parameter estimation problem, we introduce strongly regularized posteriors by normalizing the likelihood function. In addition, to prevent unphysical values of the states, we further regularize the state variables in the likelihood by an uninformative climatological prior. That is, we consider the regularized posterior

$p^N(\theta, u_{1:N} \mid y_{1:N}) := \dfrac{1}{Z} \, p(\theta) \left[ \dfrac{p_c(u_{1:N}) \, p_\theta(u_{1:N}) \, p_\theta(y_{1:N} \mid u_{1:N})}{p_\theta(y_{1:N})} \right]^{1/N},$    (3.5)

where $Z := \int p(\theta) \left[ \dfrac{p_c(u_{1:N}) \, p_\theta(u_{1:N}) \, p_\theta(y_{1:N} \mid u_{1:N})}{p_\theta(y_{1:N})} \right]^{1/N} d\theta \, du_{1:N}$ is a normalizing constant and $p_c(u_{1:N})$ is the prior of the states, estimated from a Gaussian fit to the climatological statistics of the observations, neglecting correlations. That is, we set

$p_c(u_{1:N}) := \prod_{n=1}^{N} \prod_{k=1}^{d_b} \dfrac{1}{\sqrt{2\pi}\,\sigma_c} \exp\left( -\dfrac{(u_{n,k} - u_c)^2}{2\sigma_c^2} \right), \quad \sigma_c = 2\sigma_o,$

where $u_c$ and $\sigma_o$ are the mean and standard deviation of the observations over all states. Here the multiplicative factor 2 aims for a larger band, to avoid an overly narrow prior for the states. This prior can be viewed as a joint distribution of the state variables in which all components are independent and identically Gaussian distributed with mean $u_c$ and variance $\sigma_c^2$. Clearly, it uses a minimal amount of information about the state variables, and we expect that it can be improved in practice by taking into consideration spatial correlations or additional field knowledge.
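A minimal sketch of the climatological state prior, fitted to the observations with the doubled band $\sigma_c = 2\sigma_o$ described above:

```python
import numpy as np

def climatological_log_prior(u_traj, y_obs):
    """log p_c(u_{1:N}): iid Gaussian prior fitted to the observations,
    with the doubled band sigma_c = 2 * sigma_o."""
    u_c = np.mean(y_obs)              # climatological mean of the observations
    sigma_c = 2.0 * np.std(y_obs)     # doubled standard deviation
    z = (u_traj - u_c) / sigma_c
    return -0.5 * np.sum(z * z) - u_traj.size * np.log(np.sqrt(2.0 * np.pi) * sigma_c)
```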
The regularized posterior can be viewed as an extension of the regularized cost function in the variational approach. In fact, the negative logarithm of the regularized posterior is the same, up to the multiplicative factor $\frac{1}{N}$ and an additive constant involving $\log Z$ and $\frac{1}{N} \log p_\theta(y_{1:N})$, as the cost function in variational approaches with regularization. More precisely, we have

$-\log p^N(\theta, u_{1:N} \mid y_{1:N}) = \dfrac{1}{N} C_{y_{1:N}}(\theta, u_{1:N}) + \log Z + \dfrac{1}{N} \log p_\theta(y_{1:N}),$

where $C_{y_{1:N}}(\theta, u_{1:N})$ is the cost function with regularization:

$C_{y_{1:N}}(\theta, u_{1:N}) := -N \log p(\theta) - \log p_c(u_{1:N}) - \log p_\theta(u_{1:N}) - \log p_\theta(y_{1:N} \mid u_{1:N}).$

When the prior is Gaussian, the regularization corresponds to Tikhonov regularization. Therefore, the regularized posterior extends the regularized cost function to a probability distribution, with the maximum of the posterior (MAP) being the minimizer of the regularized cost function.
The regularized posterior normalizes the likelihood by the exponent $1/N$. This normalization places a larger weight (more trust) on the prior, which can then sufficiently regularize the singularity in the likelihood and therefore reduces the probability of nonphysical samples. Intuitively, it avoids the shrinking of the likelihood as the data size increases. When the system is ergodic, the average $\frac{1}{N} \sum_{n=1}^{N} \log\left[ p_\theta(u_n \mid u_{n-1}) \, p(y_n \mid u_n) \right]$ converges, as $N$ increases, to the expectation $\mathbb{E}\left[ \log\left( p_\theta(U_n \mid U_{n-1}) \, p(y_n \mid U_n) \right) \right]$ with respect to the invariant measure. While effective, this factor may not be optimal [37], and we leave the exploration of optimal regularization factors to future work.
In sampling the regularized posterior, we update the state variable $U_{1:N}$ conditional on $\theta$ and $y_{1:N}$ by sampling $p_c(u_{1:N}) \, p(u_{1:N} \mid \theta, y_{1:N})$ (with $p(u_{1:N} \mid \theta, y_{1:N})$ specified in (2.13)) using SMC methods. Compared to the standard particle MCMC algorithm outlined in Section 2.4, the only difference occurs when we update the parameter $\theta$ conditional on the estimated states $u_{1:N}$. Instead of (2.14), we draw a sample of $\theta$ from the regularized posterior

$p^N(\theta \mid u_{1:N}, y_{1:N}) \propto p(\theta) \left[ p_\theta(u_{1:N}) \right]^{1/N}.$    (3.7)
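A sketch of one parameter update targeting (3.7); the random-walk Metropolis proposal here is an illustration, not the move specified in the appendix:

```python
import numpy as np

def theta_step(theta, u, log_prior, log_state_density, N, rng, step=0.05):
    """One random-walk Metropolis move on (3.7):
    log target = log p(theta) + (1/N) * log p_theta(u_{1:N})."""
    def log_target(th):
        return log_prior(th) + log_state_density(th, u) / N
    proposal = theta + step * rng.standard_normal(theta.shape)
    if np.log(rng.uniform()) < log_target(proposal) - log_target(theta):
        return proposal
    return theta
```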

Bayesian inference with regularized posteriors
The regularized posteriors are approximated by the empirical distribution of samples drawn using particle MCMC methods, specifically particle Gibbs with ancestor sampling (PGAS, see Section 7.3) in combination with SMC using optimal importance sampling (see Section 7.2). In the following, we first diagnose the Markov chain and choose a reasonable chain length for the subsequent analyses; we then present the results of parameter and state estimation.
In all the tests presented in this study, we use only $M = 5$ particles for the SMC, since theoretical results [2,28] ensure that the Markov chain produced by the particle MCMC methods converges to the target distribution even with a small number of particles. In general, the more particles are used, the better the SMC algorithm (and hence the particle MCMC method) performs, at the price of increased computational cost.

Diagnosis of the Markov Chain
To ensure that the Markov chain generated by PGAS is well mixed, and to find a chain length for which the posterior is acceptably approximated, we assess the Markov chain by three criteria: the update rate of the states; the correlation length of the Markov chain; and the convergence of the marginal posteriors of the parameters. These empirical criteria are convenient and, as we discuss below, have been found effective in our study. We refer to [13] for a detailed review of various criteria for diagnosing MCMC.

Table 4: The settings of the particle MCMC using SMC with optimal importance densities.
The update rate of the states is computed at each time of the state trajectory $u_{1:N}$ along the Markov chain.
That is, at each time, we say the state is updated from the previous step of the Markov chain if any entry of the state vector changes. The update rate measures the mixing of the Markov chain. In general, an update rate above 0.5 is preferred, but a rate close to 1 is not necessarily the best. Figure 4 shows the update rates of typical simulations for both the Gaussian prior and the uniform prior. For both priors, the update rates are above 0.5, indicating fast mixing of the chain. The rates tend to increase with time (except for the first time step) to a value close to 1 at the end of the trajectory. This phenomenon agrees with the particle-depletion nature of the SMC filter: when tracing back in time to sample the ancestors, there are fewer distinct particles and therefore the update rate is lower. The high update rate at time $t = 1$ is due to our initialization of the particles near the equilibrium, which increases the probability of ancestor updates in PGAS. We also note that the uniform prior has update rates close to 1 at all times, much higher than the rates of the Gaussian prior. Higher update rates occur for the uniform prior because the deviations of parameter samples from the previous values are larger, resulting in an increased probability of updating the reference trajectory in the conditional SMC.

Figure 4: The update rate of the states at different times along the trajectory. The high update rate at time $t = 1$ is due to the initialization of the particles near the equilibrium and the ancestor sampling. The high update rate at the end time is due to the nature of the SMC filter. Note that the uniform prior has update rates close to 1 at all times.

We test the correlation length of the Markov chain by finding the smallest lag at which the empirical autocorrelation functions (ACFs) of the states and the parameters are close to zero. Figure 5 shows the empirical ACFs of the parameters and states at representative nodes, computed using a Markov chain of length 10000. The ACFs approach zero within a time lag of around 40 (based on a threshold value of 0.1) for the Gaussian prior, and within a time lag of around 5 for the uniform prior. The smaller correlation length in the uniform prior case is again due to the larger parameter variation with the uniform prior than with the Gaussian prior.

The relatively small decorrelation length of the Markov chain indicates that we can accurately approximate the posterior by a chain of relatively short length. This result is demonstrated in Figure 6, where we plot the empirical marginal posteriors of the parameters using Markov chains of three different lengths: $L = 1000, 5000, 10000$. The marginal posteriors with $L = 1000$ are reasonably close to those with $L = 10^4$, and those with $L = 5000$ are almost identical to those with $L = 10^4$. In particular, the marginal posteriors with $L = 10^3$ capture the shape and spread of the distributions for $L = 10^4$. Therefore, a Markov chain of length $L = 10^4$ provides a reasonably accurate approximation of the posterior, and we use Markov chains of length $L = 10^4$ in all simulations from here on. This choice of chain length may be longer than necessary, but allows for confidence that the results are robust.
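Both diagnostics can be computed directly from the stored chain; a minimal sketch (the array shapes are assumptions):

```python
import numpy as np

def update_rate(state_chain):
    """Fraction of MCMC steps at which each time slice of u_{1:N} changes.
    state_chain: (L, N, d) array of sampled trajectories along the chain."""
    changed = np.any(state_chain[1:] != state_chain[:-1], axis=2)  # (L-1, N)
    return changed.mean(axis=0)

def empirical_acf(x, max_lag=100):
    """Empirical autocorrelation of a scalar chain up to max_lag."""
    x = x - x.mean()
    var = np.mean(x * x)
    return np.array([np.mean(x[: x.size - k] * x[k:]) / var
                     for k in range(max_lag + 1)])

# Correlation length: smallest lag at which the ACF drops below the 0.1 threshold,
# e.g. corr_len = np.argmax(empirical_acf(theta_chain[:, 0]) < 0.1)
```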
In summary, based on the above diagnosis of the Markov chain generated by particle MCMC, and in order to run many simulations for statistical analysis of the algorithm at limited computational cost, we use chains of length $L = 10^4$ to approximate the posterior. For the SMC algorithm, we use only five particles. The number of observations in time is $N = 100$.

Parameter estimation
One of the main goals of Bayesian inference is to quantify the uncertainty in the parameter-state estimation by the posterior. We assess the parameter estimation by examining the samples of the posterior in a typical simulation, for which we consider the scatter plots and marginal distributions, the maximum of the posterior (MAP), and the posterior mean. We also examine the statistics of the MAP and the posterior mean in 100 independent simulations. In each simulation, the parameters are drawn from the prior distribution of $\theta$, a realization of the SEBM is simulated, and the parameters and states are then estimated jointly from the resulting partial noisy observations.
The non-Gaussianity of the posterior (including the concentration near the boundaries), its insensitivity to changes in the true parameter, and its limited reduction of uncertainty from the prior (Figures 7-8) are due to the degeneracy of the likelihood distribution and to the strong regularization. Recall that the degenerate likelihood leads to MLEs with large variations and biases, with the standard deviations of the estimators of $\theta_0$ and $\theta_1$ being about 10 times larger than those of $\theta_4$ (see Figure 3). As a result, when regularized by the Gaussian prior, the components $\theta_0$ and $\theta_1$, which are more under-determined by the likelihood, are constrained mainly by the Gaussian prior, and therefore their marginal posteriors are close to their marginal priors. In contrast, the component $\theta_4$ is forced to concentrate around the center of the prior, but with a large tail. While dramatically reducing the large uncertainty of $\theta_0$ and $\theta_1$ in the ill-conditioned likelihood, the regularized posterior still exhibits a slightly larger uncertainty than the prior for these two components.
In the case of the uniform prior, it is particularly noteworthy that the marginal posteriors of $\theta_0$ and $\theta_1$ differ more from their priors than that of $\theta_4$. These results are the opposite of what was found for the Gaussian prior. Such differences are due to the different mechanisms of "regularization" by the two priors. The Gaussian prior eliminates the ill-posedness by regularizing the ill-conditioned Fisher information matrix with the covariance of the prior, so the information in the likelihood, e.g. the bias and the correlations between $(\theta_0, \theta_1)$ and $\theta_4$, is preserved in the regularized posterior. The uniform prior, on the other hand, cuts the support of the degenerate likelihood and rejects out-of-range samples. As a result, the correlation between $\theta_0$ and $\theta_1$ is preserved in the regularized posterior because they feature similar variations, but the correlations between $(\theta_0, \theta_1)$ and $\theta_4$ are weakened (Figure 7).
In practice, one is often interested in a point estimate of the parameters. Commonly used point estimators are the MAP and the posterior mean. Figures 7-8 show that both the MAP and the posterior mean can be far away from the truth for Gaussian as well as uniform priors. In particular, in the case of the uniform prior, the MAP values are further from the truth than the posterior mean. In the case of the Gaussian prior, the MAP values present no clear advantage or disadvantage over the posterior mean. Table 5a shows the means and standard deviations of the errors of the posterior mean and MAP from 100 independent simulations. In each simulation and for each prior, we drew a parameter sample from the prior and generated a trajectory of observations, and then estimated the parameters and states jointly. The table shows that both posterior mean and MAP estimates are generally biased, consistent with the biases in Figures 7 and 8. More specifically, in the case of the Gaussian prior, the MAP has slightly smaller biases than the posterior mean, but the two have almost the same variances. Both are negatively biased for $\theta_0$ and slightly positively biased for $\theta_1$ and $\theta_4$. In the case of the uniform prior, the MAP features biases and standard deviations which are about 50% larger than those of the posterior mean. Both estimators exhibit large positive biases in $\theta_0$, large negative biases in $\theta_1$, and small positive biases in $\theta_4$.


State estimates
The state estimation aims both to filter out the noise at the observed nodes and to estimate the states at the unobserved nodes. We assess the state estimation by examining the ensemble of posterior trajectories in a typical simulation, for which we consider the marginal distributions and the coverage probability of the 90% credible intervals. We also examine the statistics of these quantities in 100 independent simulations. We present the ensemble of posterior trajectories at an observed node in Figure 9 and at an unobserved node in Figure 10. In each of these figures, we present the ensemble mean with a one-standard-deviation band, in comparison with the true trajectories, superimposed on the ensembles of all sample trajectories at these nodes. We also present histograms of samples at three instants of time: $t = 20$, $t = 60$ and $t = 100$.

Figure 9 shows that the trajectory of the observed node is well estimated by the ensemble mean, with a relative error of 0.7%. Recall that the observation noise leads to a relative error of about 1%, so the posterior filters out 30% of the noise. Also note that the ensemble quantifies the uncertainty of the estimation, with the true trajectory being mostly enclosed within a one-standard-deviation band around the ensemble mean. Further, the histograms of samples at the three time instants show that the ensemble generally concentrates near the truth. In the Gaussian prior case, the peak of the histogram decreases as time increases, partially due to the degeneracy of SMC when we trace back the particles in time. In the uniform prior case, the ensembles are less concentrated than in the Gaussian case, due to the wide spread of the parameter samples (Figure 7). Figure 10 shows sample trajectories at an unobserved node. Despite the fact that the node is unobserved, the posterior means have relative errors of 0.8% and 3.3% in the cases of Gaussian and uniform priors respectively, with a one-standard-deviation band covering the true trajectory at most times.

Figure 9: The ensemble of sample trajectories of the state at an observed node. Top row: the sample trajectories (in cyan) concentrate around the true trajectory (in black dash-asterisk). The true trajectory is well estimated by the ensemble mean (in blue dash-diamond), and is mostly enclosed by the one-standard-deviation band (in magenta dash-dot lines). The relative errors of the ensemble mean along the trajectory are 0.7% and 0.8%, filtering out 30% and 20% of the observation noise, respectively. Bottom row: histograms of samples at three instants of time: $t = 20$, $t = 60$ and $t = 100$. The histograms show that the samples concentrate around the true states.
Figure 10: The ensemble of sample trajectories of the state at an unobserved node. The ensembles exhibit a large uncertainty for both priors, but the posterior means achieve relative errors of 0.8% and 3.3% in the cases of Gaussian and uniform priors respectively. The one-standard-deviation band covers the true trajectory at most times. Bottom row: the histograms of samples at three time instants, showing that the samples concentrate around the true states. In particular, in the case of the Gaussian prior, the peaks of the histogram are close to the true states, even when the histograms form a multi-modal distribution.
While the sparse observations do cause large uncertainties for both priors, the histograms of samples show that the ensembles concentrate near the truth. In particular, in the case of the Gaussian prior, the peaks of the histograms are close to the true states, even when the histograms form a multi-modal distribution due to the degeneracy of SMC. We find that the posterior is able to filter out the noise at the observed nodes and to reduce the uncertainty at the unobserved nodes relative to the climatological distribution. In particular, in the case of the Gaussian prior, the ensemble of posterior samples concentrates near the true state at both observed and unobserved nodes and substantially reduces the uncertainty. In the case of the uniform prior, the ensemble of posterior samples spreads more widely and only slightly reduces the uncertainty. The coverage probability (CP), the proportion of the states whose 90% credible intervals contain the true values, is 95% in the Gaussian prior case and 92% in the uniform prior case in the above simulation. The target probability is 90%, as in this case 90% of the true values would be covered by the 90% credible intervals.
These values indicate statistically meaningful uncertainty estimates, for example larger uncertainty ranges at nodes with higher mean errors. The slight over-dispersiveness, i.e. CPs higher than the target probability, might be a result of the large uncertainty in the parameter estimates. Table 6 shows the means and standard deviations of the relative errors and CPs of the state estimation by the posterior mean in 100 independent simulations, averaged over observed and unobserved nodes. The relative error at each time $t$ is computed by averaging the error of the ensemble mean (relative to the true value) over all nodes, and the relative error of the trajectory is the average over all times along the trajectory. The relative errors are 1.14% and 2.39% for the cases of Gaussian and uniform priors, respectively; these numbers result from averaging over the observed and unobserved nodes. Note that the relative errors are similar at different times $t = (20, 60, 100)$, indicating that the MCMC is able to ameliorate the degeneracy of the SMC and faithfully sample the posterior of the states.
In the Gaussian prior case, the CPs are above the target probability in the 100 independent simulations, with a mean of 96%. This supports the finding above that the posteriors are slightly over-dispersive due to the large uncertainty in the parameter estimates. The standard deviation is very small, at 2%, which indicates the robustness of the Gaussian prior model. In the uniform prior case, the CPs are much lower, with a mean of 73%. This might be a result of larger biases compared to the Gaussian prior case, which are not compensated by larger uncertainty estimates. In addition, the standard deviation is much higher in the uniform prior case, at 31%, showing that this case is less robust than the Gaussian prior case.
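A sketch of how the coverage probability can be computed from posterior samples, assuming the credible intervals are formed from empirical percentiles:

```python
import numpy as np

def coverage_probability(samples, truth, level=90.0):
    """Proportion of states whose `level`% credible interval contains the truth.
    samples: (L, N, d) posterior trajectories; truth: (N, d) true states."""
    tail = (100.0 - level) / 2.0
    lo = np.percentile(samples, tail, axis=0)
    hi = np.percentile(samples, 100.0 - tail, axis=0)
    return np.mean((truth >= lo) & (truth <= hi))
```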

Observing fewer nodes
We tested the consequences of having sparser observations in space, e.g. observing only two of the 12 nodes. In the Gaussian prior case, in a typical simulation with the same true parameters and observation data as in Section 4.2, the relative error in state estimation increases slightly, from 0.7% to 0.8% at the observed node and from 0.8% to 1.1% at the unobserved node. As a result, the overall error increases. The parameter estimates show small but noticeable changes (see Figure 11): the posteriors of the parameters have slightly wider support, and the posterior means and MAPs exhibit slightly larger errors than those in Section 4.2. We also ran 100 independent simulations to investigate the sampling variability of the state and parameter estimates. Table 6(b) reports the means and standard deviations of the relative errors of the posterior mean trajectory, and the CPs for state estimation, in these simulations. The Gaussian prior case shows small increases in both the means and the standard deviations of the errors, as well as slightly lower and less robust CPs. This confirms the results quoted above for a typical simulation. The uniform prior case shows almost negligible changes in errors and CPs. Table 5(b) reports the means and standard deviations of the posterior means and MAPs for parameter estimation in these simulations. Only small changes in comparison to the results in Table 5(a) are found. These small changes are due to the strong regularization introduced to overcome the degeneracy of the likelihood.

Figure 11: The marginal posteriors of the parameters with observations at two of the 12 nodes. Compared with Figure 7, the marginal posteriors have slightly wider supports.

Observing a longer trajectory
When the length $N$ of the trajectory of observations increases, the exponent of the regularized posterior (3.5), viewed as a function of $\theta$ only, tends to its expectation with respect to the ergodic measure of the system:

$\dfrac{1}{N} C_{y_{1:N}}(\theta, u_{1:N}) \xrightarrow{N \to \infty} \mathbb{E}\left[ C_{y_{1:N}}(\theta, U_{1:N}) \right]$ almost surely.

As a result, the marginal posterior tends to be stable as $N$ increases, indicating that an increase in data size has a limited effect on the regularized posterior of the parameters. This is verified by numerical tests with $N = 1000$, in which the marginal posteriors have only slightly wider support than those in Figure 7 with $N = 100$. In general, the number of observations needed for the posterior to reach a steady state depends on the dimension of the parameters and the speed of convergence to the ergodic measure of the system. Here we have only three parameters and the SEBM converges to its stationary measure exponentially fast (in fewer than 10 time steps); therefore $N = 100$ is large enough for the posterior to be close to the steady state. When the trajectory is long, a major issue is the computational cost of sampling the posterior of the states. Note that as $N$ increases, the dimension of the states in the posterior increases, demanding a longer Markov chain to explore the target distribution. In numerical tests with $N = 1000$, the correlation length of the Markov chain is at least 100, about four times the correlation length found for $N = 100$. Therefore, to obtain the same number of effective samples as before, we would need a Markov chain at least four times the previous length, say $L = 4 \times 10^4$. The computational cost increases linearly in $NL$, with each step requiring an integration of the SPDE. The high computational cost, an instance of the well-known "curse of dimensionality", renders the direct sampling of the posterior unfeasible. Two groups of methods could reduce the computational cost and make the Bayesian inference feasible. The first group, dynamical model reduction, exploits the low-dimensional structure of the stochastic process to develop low-dimensional dynamical models that efficiently reproduce the statistical-dynamical properties needed in the SMC (see e.g. [9,10,24,32] and the references therein). The other group approximates the marginal posterior of the parameters by reduced-order models for the response of the data to the parameters (see e.g. [6,11,14,22,31,33]). In a paleoclimate reconstruction context, the number of observations will generally be determined by the available observations and the length of the reconstruction period rather than by computational considerations. We leave the further development of efficient sampling methods for long trajectories as a direction of future research.

Estimates of the nonlinear function
One goal of parameter estimation is to identify the nonlinear function $g_\theta$ (specified in (2.2)) in the SEBM. The posterior of the parameters also quantifies the uncertainty in the identification of $g_\theta$. Figure 12 shows the nonlinear function $g_\theta$ associated with the true parameters and with the MAPs and posterior means presented in Figure 7, superposed on an ensemble of the nonlinear function evaluated with all the samples. Note that in the Gaussian prior case, the true and estimated functions $g_\theta$ are close even though $\theta_4$ is estimated with large biases by both the posterior mean and the MAP. In the uniform prior case, the posterior mean has a smaller error than the MAP and leads to a better estimate of the nonlinear function. In either case, the large band of the ensemble represents a large uncertainty in the estimates. For the Gaussian prior, neither the posterior distribution of the equilibrium state $u_e$ (for which $g_\theta(u_e) = 0$) nor that of the feedback strength $\frac{dg_\theta}{du}(u_e)$ is substantially changed from the corresponding prior; both experience only a small reduction of uncertainty. In contrast, for the uniform prior the posterior distributions are narrower than the priors, although the posterior means and MAPs are both biased.
Figure 12: Top row: the true nonlinear function $g_\theta$ and its estimators using the posterior mean and the MAP, superposed on the ensemble of all estimators evaluated at the posterior samples. Bottom row: the distribution of the equilibrium state $u_e$ (i.e. the zero of the nonlinear function $g_\theta(\cdot)$) and the distribution of $\frac{dg_\theta}{du}(u_e)$, with $\theta$ being samples of the prior and of the posterior.
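The bottom-row distributions in Figure 12 are obtained by mapping each sample of $\theta$ to the corresponding equilibrium state and local feedback. A minimal sketch of this mapping, assuming the polynomial form $g_\theta(u) = \theta_0 + \theta_1 u + \theta_4 u^4$ implied by (7.20) and a hypothetical search range for the physical root:

```python
import numpy as np

# Assumed form of the SEBM nonlinearity implied by (7.20) (k = 0, 1, 4):
#   g_theta(u) = theta0 + theta1 * u + theta4 * u**4
def equilibrium_and_feedback(theta0, theta1, theta4, u_range=(0.5, 1.5)):
    """Return the equilibrium state u_e (root of g_theta inside a physical
    range; the bounds here are placeholders) and the feedback dg/du at u_e."""
    # np.roots expects coefficients ordered from the highest degree down
    roots = np.roots([theta4, 0.0, 0.0, theta1, theta0])
    real = roots[np.abs(roots.imag) < 1e-10].real
    phys = real[(real > u_range[0]) & (real < u_range[1])]
    u_e = phys[0]                       # assumes a unique physical root
    return u_e, theta1 + 4.0 * theta4 * u_e**3

# Mapping every posterior sample (theta0, theta1, theta4) through this function
# yields the posterior distributions of u_e and dg/du(u_e) shown in Figure 12.
```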

Conclusions and future work
We have investigated the joint state-parameter estimation of a nonlinear stochastic energy balance model (SEBM) motivated by the problem of spatial-temporal paleoclimate reconstruction from sparse and noisy data, for which parameter estimation is an ill-posed inverse problem. We introduced strongly regularized posteriors to overcome the ill-posedness by restricting the parameters and states to physical ranges and by normalizing the likelihood function. We considered both a uniform prior and a more informative Gaussian prior based on the physical ranges of the parameters. We sampled the regularized high-dimensional posteriors by a Particle Gibbs with Ancestor Sampling (PGAS) sampler that combines Markov Chain Monte Carlo (MCMC) with an optimal particle filter to exploit the forward structure of the SEBM. Results show that the regularization overcomes the ill-posedness in parameter estimation and leads to physical posteriors quantifying the uncertainty in parameter-state estimation. Due to the ill-posedness, the posterior of the parameters features a relatively large uncertainty, implying that point estimators such as the posterior mean or the maximum a posteriori (MAP), the latter corresponding to the minimizer in a regularized variational approach, can vary substantially. Despite the large uncertainty in parameter estimation, the marginal posteriors of the states generally concentrate near the truth, reducing the uncertainty in state reconstruction. In particular, the more informative Gaussian prior leads to much better estimates than the uniform prior: the uncertainty in the posterior is smaller, the MAP and posterior mean have smaller errors in both state and parameter estimates, and the coverage probabilities are higher and more robust. Results also show that the regularized posterior is robust to spatial sparsity of observations, with sparser observations leading to slightly larger uncertainties because less information is available. However, because regularization is needed to overcome the ill-posedness, the uncertainty in the posterior of the parameters cannot be eliminated by increasing the number of observations in time. We therefore suggest alternative approaches, such as re-parametrizing the nonlinear function according to the climatological distribution or nonparametric Bayesian inference (see e.g. [18,36]), to avoid the ill-posedness.

The ill-posedness of the parameter estimation problem for the model we have considered is of particular interest because the form of the nonlinear function $g_\theta(u)$ is not arbitrary but is motivated by the physics of the energy budget of the atmosphere. The fact that wide ranges of the parameters $\theta_i$ are consistent with the observations even in this highly idealized setting indicates that surface temperature observations by themselves may not be sufficient to constrain separately such physically important parameters as the albedo, the graybody thermal emissivity, or air-sea exchange coefficients. While state-space modeling approaches allow reconstruction of past surface climate states, the associated climate forcing may not contain sufficient information to extract the relative contributions of the individual physical processes that produced it.
7 Appendix: Technical details of the estimation procedure

7.1 Discretization of the SEBM

Finite element representation in space. We discretize the SEBM in space by finite element methods (see e.g. [1]). Denote by $\{\varphi_i(\xi)\}_{i=1}^{d_b}$ the finite element basis functions, and approximate the solution $u(t,\xi)$ by

$u(t,\xi) \approx \sum_{i=1}^{d_b} \hat u_i(t)\, \varphi_i(\xi).$  (7.1)

The Galerkin approximation is obtained from the weak form of the SEBM,

$\langle u(t), \varphi \rangle = \langle u(0), \varphi \rangle - \int_0^t \langle \nabla u(s), \nabla \varphi \rangle\, ds + \int_0^t \langle g_\theta(u(s)), \varphi \rangle\, ds + \int_0^t \langle f(s,\cdot), \varphi \rangle,$  (7.2)

where $\varphi$ is a continuously differentiable, compactly supported test function and the integral $\int_0^t \langle f(s,\cdot), \varphi \rangle$ is an Itô integral.
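For concreteness, the mass and stiffness matrices defined in (7.7) below are tridiagonal for piecewise-linear elements. A minimal assembly sketch on a uniform one-dimensional mesh follows; the mesh, spacing, and boundary handling are placeholders for the actual SEBM grid:

```python
import numpy as np

def assemble_mass_stiffness(d_b, h):
    """Tridiagonal mass and stiffness matrices for piecewise-linear ("hat")
    elements on a uniform 1-D mesh with spacing h (cf. (7.7)):
      (M0)_ij = <phi_i, phi_j>,   (M1)_ij = <phi_i', phi_j'>.
    Interior-row values only; boundary/periodic adjustments are omitted."""
    M0 = (np.diag(np.full(d_b, 2.0 * h / 3.0))
          + np.diag(np.full(d_b - 1, h / 6.0), 1)
          + np.diag(np.full(d_b - 1, h / 6.0), -1))
    M1 = (np.diag(np.full(d_b, 2.0 / h))
          + np.diag(np.full(d_b - 1, -1.0 / h), 1)
          + np.diag(np.full(d_b - 1, -1.0 / h), -1))
    return M0, M1
```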
For convenience, we write this Galerkin approximation in vector notation. Denote

$U(t) = (\hat u_1(t), \ldots, \hat u_{d_b}(t))^T,$  (7.3)
$\Phi(\xi) = (\varphi_1(\xi), \ldots, \varphi_{d_b}(\xi))^T.$  (7.4)

Taking $\varphi = \varphi_j$, $j = 1, \ldots, d_b$, in equation (7.2) and using the symmetry of the inner product, we obtain a stochastic integral equation for the coefficients $U(t) \in \mathbb{R}^{d_b}$:

$\langle \Phi, \Phi^T \rangle\, U(t) = \langle \Phi, \Phi^T \rangle\, U(0) - \int_0^t \langle \nabla\Phi, \nabla\Phi^T \rangle\, U(s)\, ds + \int_0^t \langle g_\theta(U(s)^T \Phi), \Phi \rangle\, ds + \int_0^t \langle f(s,\cdot), \Phi \rangle.$  (7.6)

To simplify notation, we denote the mass and stiffness matrices by

$M_0 := \langle \Phi, \Phi^T \rangle, \qquad M_1 := \langle \nabla\Phi, \nabla\Phi^T \rangle,$  (7.7)

which are symmetric, tridiagonal, positive definite matrices in $\mathbb{R}^{d_b \times d_b}$, and we denote the nonlinear term by

$G_\theta(U(t)) := \langle g_\theta(U(t)^T \Phi), \Phi \rangle.$  (7.8)

The above stochastic integral equation can then be written as

$M_0 U(t) = M_0 U(0) - \int_0^t M_1 U(s)\, ds + \int_0^t G_\theta(U(s))\, ds + \int_0^t \langle f(s,\cdot), \Phi \rangle.$  (7.9)

We approximate the stochastic forcing $f(t,\xi)$ by its linear finite element truncation, with the stochastic processes $\{f_i(t),\ i = 1, \ldots, d_b\}$ being spatially correlated and white in time. Note that for $\nu = 0.1$ and $\rho > 0$ in the Matérn covariance (2.4), the process $f(t,\xi)$ is the stationary solution of the stochastic Laplace equation

$(\rho^{-2} - \Delta)\, f(t,\xi) = \sigma_f\, \dot W(t,\xi),$  (7.14)

where $\dot W$ is a spatio-temporal white noise [52,53]. Computationally efficient approximations of the forcing process are obtained using the GMRF approximation of [26], which generates $F(t) = (f_1(t), f_2(t), \ldots, f_{d_b}(t))$ by solving (7.14). That is, using the above finite element notation, we solve for each time $t$ the linear system

$(\rho^{-2} M_0 + M_1)\, F(t) = \sigma_f\, \langle \Phi, W(t,\cdot) \rangle,$  (7.15)

where the random vector $\langle \Phi, W(t,\cdot) \rangle := (\langle \varphi_1, W(t,\cdot)\rangle, \ldots, \langle \varphi_{d_b}, W(t,\cdot)\rangle)$ is Gaussian with mean 0 and covariance $M_0$. Solving (7.15) yields

$F(t) \sim \mathcal{N}\!\left(0,\ \sigma_f^2\, M_\rho^{-1} M_0 M_\rho^{-1}\right),$  (7.16)

where $M_\rho := \rho^{-2} M_0 + M_1$.

Semi-backward Euler time integration. Equation (7.9) is integrated in time by a semi-backward Euler scheme

$M_{\Delta t}\, U_{n+1} = M_0 U_n + \Delta t\, G_\theta(U_n) + \sqrt{\Delta t}\, M_0 F_n,$  (7.17)

where $U_n$ is the approximation of $U(t_n)$ with $t_n = n \Delta t$, $\{F_n\}$ is a sequence of iid random vectors with distribution $\mathcal{N}(0, \sigma_f^2 M_\rho^{-1} M_0 M_\rho^{-1})$, and

$M_{\Delta t} := M_0 + \Delta t\, M_1.$  (7.18)
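A minimal sketch of one step of the scheme (7.17)-(7.18) follows; M0, M1, the nonlinear term G_theta, and the noise sampler M0F_sampler (which should return a draw of $M_0 F_n$, cf. the next paragraph) are assumed given, and the names are illustrative:

```python
import numpy as np

def semi_backward_euler_step(U_n, M0, M1, G_theta, M0F_sampler, dt):
    """One step of (7.17): M_dt U_{n+1} = M0 U_n + dt*G_theta(U_n) + sqrt(dt)*M0 F_n,
    with M_dt = M0 + dt*M1 (7.18). The stiff diffusion term is treated
    implicitly, the nonlinear term explicitly."""
    M_dt = M0 + dt * M1
    rhs = M0 @ U_n + dt * G_theta(U_n) + np.sqrt(dt) * M0F_sampler()
    return np.linalg.solve(M_dt, rhs)   # use a sparse solver for large d_b
```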
Efficient generation of the Gaussian field. It follows from (7.15) that $M_0 F_n$ is Gaussian with mean zero and covariance $M_0 M_\rho^{-1} M_0 M_\rho^{-1} M_0$. Note that while $M_\rho$ is a sparse matrix, its inverse $M_\rho^{-1}$ is not. To exploit the sparseness of $M_\rho$, following [26], we approximate $M_0$ by the lumped diagonal matrix $\widehat M_0 := \mathrm{diag}(\langle \varphi_i, 1 \rangle)$ and compute the noise $M_0 F_n$ as $C^{-1} \mathcal{N}(0, I_{d_b})$, where $C$ is the Cholesky factor of the inverse of the covariance matrix (the precision matrix) $\widehat M_0^{-1} M_\rho \widehat M_0^{-1} M_\rho \widehat M_0^{-1}$. The precision matrix is a sparse representation of the inverse of the covariance; therefore the factor $C$ is also sparse, and the noise sequence can be generated efficiently.
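A minimal sketch of this precision-based noise generation, usable as the M0F_sampler in the time-stepping sketch above; a dense Cholesky factorization is used here for readability (cheap since $d_b$ is small in our setting), whereas [26] exploits a sparse factorization:

```python
import numpy as np

def make_M0F_sampler(m0_lumped, M_rho, sigma_f):
    """Sampler for the noise M0 F_n via the precision matrix: with the lumped
    mass matrix M0_hat = diag(m0_lumped), the precision is
      Q = M0_hat^{-1} M_rho M0_hat^{-1} M_rho M0_hat^{-1} / sigma_f^2,
    and if Q = C^T C (Cholesky), then x = C^{-1} z with z ~ N(0, I) has
    covariance Q^{-1}, i.e. the target covariance of M0 F_n."""
    D_inv = np.diag(1.0 / m0_lumped)
    Q = D_inv @ M_rho @ D_inv @ M_rho @ D_inv / sigma_f**2
    C = np.linalg.cholesky(Q).T          # upper triangular, Q = C^T C
    d_b = len(m0_lumped)
    def sample():
        z = np.random.standard_normal(d_b)
        return np.linalg.solve(C, z)     # triangular solve; sparse in practice
    return sample
```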
In summary, we can write the discretized SEBM in the form

$U_{n+1} = \mu_\theta(U_n) + W_n,$  (7.19)

where the deterministic function $\mu_\theta(\cdot)$ is given by

$\mu_\theta(U_n) = M_{\Delta t}^{-1} M_0 U_n + \sum_{k = 0, 1, 4} \theta_k\, G_{\theta,k}(U_n),$  (7.20)

with $G_{\theta,k}(U_n) := \Delta t\, M_{\Delta t}^{-1} A^T (A U_n)^{\circ k}$ (here $(\cdot)^{\circ k}$ denotes the entrywise $k$-th power), and $\{W_n\}$ is a sequence of iid Gaussian noise with mean 0 and covariance $R = \Delta t\, \sigma_f^2\, M_{\Delta t}^{-1} M_0 M_\rho^{-1} M_0 M_\rho^{-1} M_0 M_{\Delta t}^{-1}$.

The SMC algorithm (Algorithm 1) takes as input the observations $y_{1:N}$ and the ensemble size $M$. For the SEBM, we use the optimal importance density $q$ in (7.27): at each time $n$, ancestor indices are drawn as $A^m_{n-1} \sim F(\cdot \mid w^{1:M}_{n-1})$, with $F$ defined in (7.25), and samples are drawn as $U^m_n \sim q(u_n \mid y_n, U^{A^m_{n-1}}_{n-1})$, setting $U^m_{1:n} := (U^{A^m_{n-1}}_{1:n-1}, U^m_n)$.
The optimal importance density is Gaussian, with mean $\mu^m_n$ and covariance $\Sigma$ given by

$\mu^m_n = \mu(U^m_{n-1}) + \Sigma H^T Q^{-1}\left(y_n - H \mu(U^m_{n-1})\right),$  (7.28)
$\Sigma = R - R H^T \left(Q + H R H^T\right)^{-1} H R.$

Drawbacks of SMC. While the resampling technique prevents the weights $w^m_n$ from degenerating at each current time $n$, SMC algorithms suffer from the degeneracy (or particle depletion) problem: the marginal distribution $\hat p(u_n \mid y_{1:N})$ becomes concentrated on a single particle as $N - n$ increases, because each resampling step reduces the number of distinct particles of $u_n$. As a result, the estimate of the joint density $p(u_{1:N} \mid y_{1:N})$ of the trajectory deteriorates as $N$ increases.
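The moments in (7.28) can be computed, for instance, as in the following sketch; the array names are illustrative and mu_U stands for the deterministic forecast $\mu(U^m_{n-1})$:

```python
import numpy as np

def optimal_proposal_moments(mu_U, y_n, H, Q, R):
    """Mean and covariance of the optimal importance density (7.28):
      Sigma = R - R H^T (Q + H R H^T)^{-1} H R,
      mu    = mu_U + Sigma H^T Q^{-1} (y_n - H mu_U)."""
    S = Q + H @ R @ H.T                          # innovation covariance
    Sigma = R - R @ H.T @ np.linalg.solve(S, H @ R)
    mu = mu_U + Sigma @ H.T @ np.linalg.solve(Q, y_n - H @ mu_U)
    return mu, Sigma

# A particle is then drawn as
# U_n = np.random.multivariate_normal(mu, Sigma)
```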

Particle Gibbs and PGAS
The framework of particle MCMC introduced in [2] is a systematic combination of SMC and MCMC methods, exploiting the strengths of both techniques. Among the various particle MCMC methods, we focus on the particle Gibbs (PG) sampler, which uses a novel conditional SMC update [2], and on its variant, the particle Gibbs with ancestor sampling (PGAS) sampler [28], because they are best suited to sampling our joint parameter-state posterior. The PG and PGAS samplers use a conditional SMC update step to realize the transition between two steps of the Markov chain while ensuring that the target distribution is the stationary distribution of the chain. The basic procedure of a PG sampler is as follows:
• Initialization: draw $\theta(1)$ from the prior distribution $p(\theta)$. Run an SMC algorithm to generate weighted samples $\{U^m_{1:N}, w^m_N\}_{m=1}^M$ for $p_{\theta(1)}(u_{1:N} \mid y_{1:N})$ and draw $U_{1:N}(1)$ from these weighted samples.
• Markov chain iteration: for $l = 1, \ldots, L-1$,
1. Sample $\theta(l+1)$ from the full conditional $p(\theta \mid y_{1:N}, U_{1:N}(l))$ given by (2.14).
2. Run a conditional SMC algorithm, conditioned on $U_{1:N}(l)$, which is called the reference trajectory. That is, in the SMC algorithm, the $M$-th particle is required to move along the reference trajectory by setting $U^M_n = U_n(l)$; the other samples are drawn from the importance density, and the weights are normalized and all particles resampled as usual. This leads to weighted samples $\{U^m_{1:N}, w^m_N\}_{m=1}^M$ with $U^M_{1:N} = U_{1:N}(l)$.
3. Draw $U_{1:N}(l+1)$ from the above weighted samples.
• Return the Markov chain $\{\theta(l), U_{1:N}(l)\}_{l=1}^L$.
The conditional SMC algorithm is the core of PG samplers. It retains the reference path throughout the resampling steps by deterministically setting $U^M_{1:N} = U_{1:N}(l)$ and $A^M_n = M$ for all $n$, while sampling the remaining $M-1$ particles according to a standard SMC algorithm. The reference path interacts with the other paths by contributing a weight $w^M_n$; this is the key to ensuring that the PG Markov chain converges to the target distribution. A potential risk of the PG sampler is that it yields a poorly mixed Markov chain, because the reference trajectory tends to dominate the SMC ensemble trajectories. The PGAS sampler improves the mixing of the chain by connecting the reference path to the history of the other particles, assigning an ancestor to the reference particle at each time. This is accomplished by drawing a sample for the ancestor index $A^M_{n-1}$ of the reference particle, which is referred to as ancestor sampling. The distribution of the index $A^M_{n-1}$ is determined by the likelihood of connecting $U_n(l)$ to the particles $\{U^m_{n-1}\}_{m=1}^M$, in other words, according to the weights

$\tilde\alpha^m_{n-1|n} = w^m_{n-1}\, p_{\theta(l+1)}(U_n(l) \mid U^m_{n-1})\, p(y_n \mid U_n(l)), \qquad \tilde w^m_{n-1|n} = \frac{\tilde\alpha^m_{n-1|n}}{\sum_{k=1}^M \tilde\alpha^k_{n-1|n}}.$  (7.30)

The weight $\tilde\alpha^m_{n-1|n}$ can be seen as a posterior probability, where the importance weight $w^m_{n-1}$ is the prior probability of the particle $U^m_{n-1}$, and the product $p_{\theta(l+1)}(U_n(l) \mid U^m_{n-1})\, p(y_n \mid U_n(l))$ is the likelihood that $U_n(l)$ originates from $U^m_{n-1}$ conditional on the observation $y_n$. In short, the PGAS sampler assigns the reference particle $U_n(l)$ an ancestor $A^M_{n-1}$ drawn from the distribution $F(A^M_{n-1} = k \mid \tilde w^{1:M}_{n-1|n}) = \tilde w^k_{n-1|n}$. The conditional SMC with ancestor sampling within PGAS is summarized in Algorithm 2 below; a code sketch follows the algorithm.
Algorithm 2: Conditional SMC with ancestor sampling for the PGAS sampler.
1: Initialize the particles in SMC: ...
5: Set $U^M_n = U_n(l)$ and draw samples $U^m_n \sim q(u_n \mid y_n, U^{A^m_{n-1}}_{n-1})$ for $m = 1, \ldots, M-1$.
6: Draw $A^M_{n-1} \sim F(\cdot \mid \tilde w^{1:M}_{n-1|n})$, where the weights $\tilde w^{1:M}_{n-1|n}$ are computed in (7.30).
...
8: Compute the normalized weights $w^m_n$ according to (7.26).
9: end for
10: Draw $A_N \sim F(\cdot \mid w^{1:M}_N)$.
11: return $U_{1:N}(l+1) = U^{A_N}_{1:N}$.
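The following is a self-contained sketch of such a conditional SMC with ancestor sampling for a generic state-space model; it is not the implementation used here, the initialization is a placeholder, and the proposal, incremental weight, and transition density are supplied as functions. With the optimal proposal (7.27), the incremental weight is $p(y_n \mid U_{n-1})$, and the factor $p(y_n \mid U_n(l))$ in (7.30) cancels after normalization:

```python
import numpy as np

def systematic_resample(weights, M, rng):
    """Draw M ancestor indices with probabilities 'weights' (systematic scheme)."""
    positions = (rng.random() + np.arange(M)) / M
    return np.searchsorted(np.cumsum(weights), positions)

def csmc_ancestor_sampling(y, U_ref, M, propose, incr_logw, trans_logpdf, rng):
    """Conditional SMC with ancestor sampling (cf. Algorithm 2).
       propose(y_n, U_prev)    -> one draw from q(u_n | y_n, U_prev)
       incr_logw(y_n, U_prev)  -> log incremental weight (log p(y_n | U_prev)
                                  for the optimal proposal (7.27))
       trans_logpdf(u, U_prev) -> log p_theta(u | U_prev), used in (7.30)."""
    N, d = U_ref.shape
    U = np.zeros((N, M, d))
    anc = np.zeros((N, M), dtype=int)
    U[0] = U_ref[0]            # placeholder initialization; model-specific in practice
    logw = np.zeros(M)
    for n in range(1, N):
        w = np.exp(logw - logw.max()); w /= w.sum()
        # ancestors of the M-1 non-reference particles
        anc[n, :M - 1] = systematic_resample(w, M - 1, rng)
        # ancestor sampling for the reference particle, per (7.30)
        la = np.log(w) + np.array([trans_logpdf(U_ref[n], U[n - 1, m]) for m in range(M)])
        wa = np.exp(la - la.max()); wa /= wa.sum()
        anc[n, M - 1] = rng.choice(M, p=wa)
        # propagate: reference particle is kept fixed along U_ref
        for m in range(M - 1):
            U[n, m] = propose(y[n], U[n - 1, anc[n, m]])
        U[n, M - 1] = U_ref[n]
        logw = np.array([incr_logw(y[n], U[n - 1, anc[n, m]]) for m in range(M)])
    # draw the returned trajectory index and trace its ancestry back in time
    w = np.exp(logw - logw.max()); w /= w.sum()
    k = rng.choice(M, p=w)
    out = np.zeros((N, d))
    for n in range(N - 1, -1, -1):
        out[n] = U[n, k]
        k = anc[n, k]
    return out
```

Within the PGAS outer loop, each iteration alternates a draw of $\theta(l+1)$ from its full conditional (2.14) with one call to this kernel using $U_{1:N}(l)$ as the reference trajectory.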