
We illustrate the utility of variational destriping for ocean color images from both multispectral and hyperspectral sensors. In particular, we examine data from a filter spectrometer, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (NPP) orbiter, and an airborne grating spectrometer, the Jet Propulsion Laboratory's (JPL) hyperspectral Portable Remote Imaging Spectrometer (PRISM) sensor. We solve the destriping problem using a variational regularization method, weighting the regularization term spatially to preserve the other features of the image during the destriping process. The target functional penalizes “the neighborhood of stripes” (strictly, directionally uniform features) while promoting data fidelity, and the functional is minimized by solving the Euler–Lagrange equations with an explicit finite-difference scheme. We demonstrate the accuracy of our method on a benchmark data set representing the sea surface temperature off the coast of Oregon, USA. Technical details, such as how to impose continuity across data gaps using inpainting, are also described.

Striping is a persistent artifact in remote sensing images and is
particularly pronounced in visible–near infrared (VNIR) water-leaving
radiance products such as those produced by operational sensors including NPP
VIIRS, Landsat 8 Operational Land Imager (OLI), and Geostationary Ocean Color
Imager (GOCI), as well as airborne instruments such as NASA's JPL PRISM sensor.
These sensors cover a temporal sampling range from daily (VIIRS) to hourly
(GOCI) and spectral sampling from multi-spectral (VIIRS, GOCI) to
hyperspectral (PRISM). Striping is pronounced in products from all these
sensors because atmospheric correction for ocean color products typically
removes at least 90 % of the signal recorded at the top of the atmosphere
(TOA). Put another way, any artifacts in the TOA signal are amplified by at
least a factor of 10 in any derived water products such as normalized water-leaving radiance of a specific spectral band (nLw(
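The factor-of-10 figure follows from simple error propagation: the water-leaving signal is the small residual left after subtracting the large atmospheric contribution, so a fixed absolute artifact becomes a much larger relative one. A minimal numeric sketch (the function name and values are illustrative, not sensor data):

```python
# Error propagation sketch: if the atmosphere contributes 90% of the
# top-of-atmosphere (TOA) radiance, the water-leaving signal is only 10%
# of the TOA signal, so a fixed artifact grows 10x in relative terms.
def relative_error_amplification(toa_signal, atmosphere_fraction, toa_artifact):
    water_signal = toa_signal * (1.0 - atmosphere_fraction)  # signal after correction
    rel_err_toa = toa_artifact / toa_signal                  # relative artifact at TOA
    rel_err_water = toa_artifact / water_signal              # same artifact, smaller signal
    return rel_err_water / rel_err_toa                       # amplification factor

# A 1% stripe at TOA becomes a ~10% stripe in the water-leaving product.
print(round(relative_error_amplification(100.0, 0.9, 1.0), 6))  # → 10.0
```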

Striping is ubiquitous and difficult to remove because it has many possible origins. The detectors themselves are subject to small-amplitude variations in both sensitivity and calibration. The view angles (azimuthal and zenith) also vary from detector to detector and from pixel to pixel. Other differences in the instrument's optical path, components (e.g., mirrors), asynchronous readout, and so on also cause striping. Not unexpectedly, the magnitude of the striping varies from image to image. Striping is particularly problematic when comparing a sequence of images, since any computation that differences images produces spurious results in the neighborhood of stripes.

Ocean products from NPP VIIRS have shown problematic striping since its
launch, which has led to focused efforts at both NASA and NOAA to find
correction methods. NASA created a vicarious destriping method for VIIRS
images based on a collection of long-term on-orbit image data, including
solar and lunar calibrations. NASA's Ocean Biology Processing Group (OBPG)
began serving operational products with their vicarious calibrations and
destriping for VIIRS in 2014

The method described here is closely related to the destriping functional
described in

Assuming that the stripes are parallel to one another in the image plane, we
take the direction of the stripes as the

The regularization term emphasizes the smoothness in the vertical direction, which is assumed to
be free of stripes. This regularization term is given by

A drawback of scene-based destriping is that it makes unintended changes to
the values of all the pixels, not just those in the stripes. If we apply the regularization term
for the whole image as it is in Eq. (

The mathematical expression for the computation of

Then the new destriping functional, with the spatially weighted regularization term, is written as

The destriped image is obtained by minimizing the functional after choosing
an appropriate regularization parameter. Note that the functional

We create a destriped image by minimizing the energy functional in
Eq. (
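To make the minimization concrete, here is a toy sketch in the same spirit as the functional referenced above: a data-fidelity term plus a penalty on derivatives taken across the stripes, minimized by an explicit gradient (Euler–Lagrange) iteration. The energy, step size, and uniform weighting below are our illustrative assumptions, not the paper's exact functional:

```python
import numpy as np

def destripe_sketch(f, lam=5.0, step=0.05, iters=500):
    """Explicit gradient descent on a toy destriping energy
    E(u) = 0.5*||u - f||^2 + 0.5*lam*||D_y u||^2,
    where D_y differences across the (horizontal) stripes.
    Illustrative only; the paper's functional and weights differ."""
    u = f.copy()
    for _ in range(iters):
        dy = np.diff(u, axis=0)             # D_y u (forward difference across stripes)
        DtDu = np.zeros_like(u)             # D_y^T D_y u, assembled row by row
        DtDu[:-1, :] -= dy
        DtDu[1:, :] += dy
        u -= step * ((u - f) + lam * DtDu)  # gradient step on E(u)
    return u

# A flat field with one additive horizontal stripe.
f = np.ones((32, 32))
f[16, :] += 1.0
u = destripe_sketch(f)
print(abs(u[16, 0] - 1.0))  # stripe amplitude is strongly reduced from 1.0
```

In the actual method the penalty is weighted spatially so that only the neighborhood of stripes is regularized; here the weight is uniform for brevity.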

We use finite-difference approximations with suitable boundary conditions for
each derivative to directly represent these differential operators. In this
work, we apply “reflexive” boundary conditions parallel to the stripes and
“zero” boundary conditions transverse to the stripes when we generate the
derivative operators. These boundary conditions lead
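To make these choices concrete, here is a small sketch of first-order forward-difference matrices under the two boundary conditions (the stencil conventions are our own; the paper's operators may differ in detail):

```python
import numpy as np

def diff_matrix(n, bc):
    """First-order forward-difference operator on n points.
    bc='reflexive' mirrors the edge value (zero derivative at the boundary);
    bc='zero' assumes the signal vanishes outside the domain."""
    D = np.zeros((n, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    if bc == 'zero':
        D[n - 1, n - 1] = -1.0   # implicit u_n = 0 outside: last row gives -u_{n-1}
    # 'reflexive': implicit u_n = u_{n-1}: last row is all zeros
    return D

Dx = diff_matrix(4, 'reflexive')  # along the stripes
Dy = diff_matrix(4, 'zero')       # transverse to the stripes
u = np.array([1.0, 2.0, 4.0, 8.0])
print(Dx @ u)  # [1. 2. 4.  0.]  reflexive edge yields zero derivative
print(Dy @ u)  # [1. 2. 4. -8.]  zero BC differences against an implicit 0
```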

We construct the operator

As an example, take an array

An array of

The boundary points are highlighted in bold. If we compute the partial
derivative of

Computing the finite-difference approximations for each element in the array,
we can obtain the differential operator

A discretized derivative operator

Now we can determine the solution to Eq. (

Using a suitable value for the regularization parameter,
Eq. (

The condition number of the resulting matrix quantifies the amplification of
computational errors seen while solving the problem by direct computation.
The condition number may be computed as the ratio between the largest
singular value and the smallest singular value of the coefficient matrix. If
the condition number is large, then the coefficient matrix is said to be
ill-conditioned and hence the corresponding system is ill-posed. In an
ill-posed system, the solution is highly sensitive to perturbations of the
input data. Regularizing an ill-posed system, by emphasizing a desired property
of the solution, provides a stable way to define a desirable solution
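The following sketch illustrates both points numerically: the condition number computed from singular values, and how a Tikhonov term improves it (the matrix and parameter value are illustrative only):

```python
import numpy as np

# Condition number = largest singular value / smallest singular value.
A = np.vander(np.linspace(0.0, 1.0, 6), 6)    # a classically ill-conditioned matrix
s = np.linalg.svd(A, compute_uv=False)
cond_A = s[0] / s[-1]
print(f"cond(A) = {cond_A:.3e}")

# Tikhonov regularization shifts the small singular values away from zero,
# so the regularized normal equations are much better conditioned.
lam = 1e-3                                    # illustrative regularization parameter
M = A.T @ A + lam * np.eye(6)
sm = np.linalg.svd(M, compute_uv=False)
print(f"cond(A'A + lam*I) = {sm[0] / sm[-1]:.3e}")  # far below cond(A)^2
```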

We regularize our computed solution by emphasizing the expected physics. To
damp the accumulated errors from the residuals, we must make sure that we add
sufficient regularity. The balance between the data term and the
regularization term is very important: too much regularity will bias the
solution away from the desired one. Stated in terms of Tikhonov
regularization,

Some of the common methods to determine the regularization parameter in
inverse problems are the
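Whichever selection rule is used, the underlying trade-off can be visualized by sweeping the parameter and recording the residual norm against the regularity seminorm (for the L-curve criterion, one picks the corner of this curve). A toy sketch on synthetic 1-D data, with all names and values our own illustrative assumptions:

```python
import numpy as np

# Sweep the regularization parameter for a toy 1-D Tikhonov problem and
# record (residual norm, smoothness seminorm) pairs: the raw material of
# an L-curve. Data and parameter grid are illustrative.
rng = np.random.default_rng(0)
n = 50
f = np.sin(np.linspace(0, np.pi, n)) + 0.05 * rng.standard_normal(n)  # noisy data
D = np.diff(np.eye(n), axis=0)                 # first-difference operator

lams = np.logspace(-4, 2, 13)
points = []
for lam in lams:
    u = np.linalg.solve(np.eye(n) + lam * D.T @ D, f)   # Tikhonov solution
    points.append((np.linalg.norm(u - f), np.linalg.norm(D @ u)))

for lam, (res, reg) in zip(lams, points):
    print(f"lam={lam:8.1e}  residual={res:.4f}  smoothness={reg:.4f}")
```

As the parameter grows, the residual norm increases while the smoothness seminorm shrinks; the corner of the resulting curve balances the two terms.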

Panel

Unlike terrestrial images, which can show sharp edges, ocean color images typically appear continuous. This is because the water tends to diffuse any color agents in the water column, and the spatial resolution of the sensor is usually finer than those diffusive features. The same holds spectrally if the spectral sampling interval is shorter than the autocorrelation length of the spectra, as is the case for hyperspectral images. However, this continuity is broken if gaps appear in the data.

Clouds are very bright and often saturate the sensor. In normal processing, clouds are typically masked from the data since their large radiance values obscure the (relatively dark) ocean. These types of processing masks also cause large, irregular data gaps. There are other sources of data gaps as well, as we discuss here, that can be inherent in the sensor design.

As an example of data gaps introduced by system design, consider the VIIRS
sensor, which uses 16 detector elements to generate each spatial image. The
spatial footprint of each adjacent sensor element overlaps at the detector
edges of

These three images represent the destriped versions of the image, shown
in Fig.

We need to preprocess the image data in such a way as to ensure continuity
across any data gaps. An obvious way to fix the gaps is to use
“inpainting”, a technique from image processing – rather than infer what the
actual missing data might be, inpainting simply imposes continuity across the
whole image when gaps are present. In a museum setting, inpainting refers to
the process whereby a painter–restorer interprets a damaged painting by
artistically filling in its damaged or missing parts, smoothly blending in
the colors of the paint that surrounds the damage

Missing data are filled in by solving the Laplace equation with Dirichlet
boundary conditions,
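A minimal discrete version: treat known pixels as Dirichlet data and relax the missing pixels until each equals the average of its four neighbors, which is the discrete Laplace equation. The function name and toy ramp below are our own illustration:

```python
import numpy as np

def inpaint_laplace(img, mask, iters=2000):
    """Fill masked pixels by iteratively averaging their 4 neighbors
    (Jacobi relaxation of the Laplace equation); known pixels act as
    Dirichlet boundary data. Sketch for interior gaps only."""
    u = img.copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[mask] = avg[mask]          # update only the missing pixels
    return u

# A smooth ramp with a square hole: harmonic filling restores the ramp,
# since linear functions satisfy the discrete Laplace equation exactly.
x = np.linspace(0.0, 1.0, 20)
img = np.tile(x, (20, 1))            # linear ramp along columns
mask = np.zeros_like(img, bool)
mask[8:12, 8:12] = True
img_holed = img.copy()
img_holed[mask] = 0.0
filled = inpaint_laplace(img_holed, mask)
print(np.abs(filled - img)[mask].max() < 1e-3)  # prints True
```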

In this section, we first apply the destriping method to a simulated image of sea surface temperature (SST), and then apply the destriping algorithm to two real-world images, one from the multi-spectral NOAA imager VIIRS, and the second from the JPL PRISM hyperspectral sensor.

A benchmark data set – sea surface temperature data off the Oregon coast –
was used to test the codes. The original image data is shown in
Fig.

The image (b) in Fig.

Our next step is to apply the destriping method to the SST data and check the
accuracy of the algorithm. There, we compare the solutions with
regularization parameters

The next task is to check the accuracy of the estimated values for the
stripes. The stripe at the 110th row was randomly selected for this purpose.
There we plot, against the column index, the intensity values of the stripe
(black), the reconstruction (red) and the actual values. The
results from the functional in Eq. (

This figure presents three different reconstructions of the stripe
at the 110th row of the image shown in
Fig.

In addition to the reconstruction of stripes, we need to pay attention to the
rest of the features of the image. The idea of destriping is to remove
artificial stripes while preserving the other original features of the image.
Therefore, we randomly select the 67th row of the image to compare before and
after effects of destriping at a place where there is no actual stripe. The
graphs (a) and (b) in Fig.

The

This figure shows the effects of destriping on the places where there are “no
stripes”. We randomly picked the 67th row for this comparison. When

Panel

A good example of VIIRS striping is shown (Fig.

The first step of applying this destriping method is to determine the
threshold value to separate the neighborhood of stripes and the rest of the
features. However, when we deal with real data, we may not always get nice
and smooth images. For instance, if we compute the

The image shows the chlorophyll concentration in milligrams per cubic meter near the Santa Monica region in southern California as viewed by VIIRS on 7 November 2014. Green represents the land and dark blue represents the dropped data due to bow-tie effects and missing data due to clouds. For a detailed discussion, we next consider the subset of the image that is covered by the pink square in the image.

The effects of spatially weighted regularizing destriping are shown in
Fig.

Panel

Panel

Panel

The approach is proposed in

Image-based methods such as variational destriping can be used together
with other destriping methods. For instance, NASA's vicarious calibration
method for the L2 (*.nc) products uses monthly lunar calibrations to monitor
striping and to create up-to-date corrections using the entire
image collection. Figure

When we compare the images (a) and (b) in Fig.

In the last example, we apply the variational destriping algorithm to another
data set from the JPL PRISM hyperspectral imager, where the data is publicly
available at

The regularization parameter to destripe the image (a) in
Fig.

We present a variational destriping method that explicitly
includes a tunable regularization parameter and applies a spatially weighted
regularization term to a part of the destriping functional in

We used three data sets (SST, VIIRS and PRISM) in our paper. They are
VIIRS data at

R. Basnayake, E. Bollt and J. Sun developed the variational destriping algorithm. N. Tufillaro collected raw data from reliable sources and processed it into a MATLAB-readable format. M. Gierach provided the JPL PRISM data.

The authors declare that they have no conflict of interest.

The authors Ranil Basnayake, Erik Bollt, Nicholas Tufillaro and Jie Sun were supported by the National Geospatial-Intelligence Agency under grant number HM02101310010. Erik Bollt was also supported by the Office of Naval Research under grant nos. N00014-15-1-2093 and N00014-16-1-2492.

Edited by: Stephen Wiggins
Reviewed by: Kevin McIlhany and one anonymous referee