Regularization destriping: the functional and its minimization
The method described here is closely related to the destriping functional
described in . Our work differs in its exact
functional form and its principle of solution. In particular, we formulate a
solution for destriping in an inverse-problem framework, and keep only the
part of the functional in that smooths the stripes.
This new formulation allows us to provide an explicit numerical solution
instead of an iterative one, the former being more efficient and thus better
suited for operational codes. We explicitly introduce a regularization
parameter that controls the relative balance between the data term (“fitting
the original image”) and the regularity term (“smoothing out the
stripes”). Such regularization is a common practice in inverse
problems , and falls under the rubric of Tikhonov regularization
theory. As a further refinement of the destriping functional, we specifically
define weights for the regularization term so that the algorithm applies only
to the stripes while preserving the other features.
Assuming that the stripes are parallel to one another in the image plane, we
take the direction of the stripes as the x (horizontal) direction. Thus the
data term representing the horizontal gradient difference between the
original and the destriped images is given as
E_D(u) = \int_\Omega \left( \frac{\partial}{\partial x}(u - f) \right)^2 \, d\Omega,
where Ω is the image domain on xy plane, f(x,y) is the original
image with stripes, and u(x,y) is the destriped image.
The regularization term emphasizes the smoothness in the vertical direction, which is assumed to
be free of stripes. This regularization term is given by
E_R(u) = \int_\Omega \left( \frac{\partial u}{\partial y} \right)^2 \, d\Omega.
The regularization parameter α>0 balances the data term and the
regularization term. The resulting destriping functional is
E_C(u) = \int_\Omega \left( \frac{\partial}{\partial x}(u - f) \right)^2 \, d\Omega + \alpha \int_\Omega \left( \frac{\partial u}{\partial y} \right)^2 \, d\Omega.
This is the x-directional destriping functional proposed by , and it is equivalent to the basic form of our
destriping functional when α=1. The choice of α, as we show
later, is key to achieving the balance between matching the original image
and removing stripes. However, our approach differs
from and our goal is to develop a destriping method
that is easy to implement while preserving the other features of the image.
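As a concrete illustration, the discrete analogue of the functional E_C can be evaluated with forward differences. The following is a minimal NumPy sketch; the function name and grid-spacing argument are ours, not part of the published method:

```python
import numpy as np

def destriping_energy(u, f, alpha, h=1.0):
    """Discrete analogue of E_C: squared x-gradient of (u - f)
    plus alpha times the squared y-gradient of u."""
    dx = lambda a: (a[:, 1:] - a[:, :-1]) / h   # forward difference along x (columns)
    dy = lambda a: (a[1:, :] - a[:-1, :]) / h   # forward difference along y (rows)
    data_term = np.sum(dx(u - f) ** 2) * h * h
    reg_term = np.sum(dy(u) ** 2) * h * h
    return data_term + alpha * reg_term
```

For u = f the data term vanishes and only the vertical-smoothness penalty remains, which is exactly the balance that the parameter α controls.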
A drawback of scene-based destriping is unintended changes in the values of
all the pixels, not just the stripes. If we apply the regularization term
to the whole image as in Eq. (), the
entire image is affected. This could modify the original features of the
image, in addition to removing the stripes. Therefore, we further develop our
functional in Eq. () to regularize only the
stripes. We introduce a mask (L) into the regularization term to limit the
smoothing effects to the stripes. To obtain L from the image, we first
compute the slope of the image transverse to the stripes using first-order
finite differences. Then we sum the absolute differences parallel to the
stripes. This yields the total value (S) corresponding to each row. From
the peaks of graph S versus r, where r is the row index, we can
identify the stripes and select a “threshold” to separate the stripes from
the other features. To compute the S curve, we do not need to consider the
complete image and instead we can consider a vertical image segment that
contains some part of all the biased data. In this case, we can preserve the
actual features such as roads or any other real signals that are exactly
aligned with stripes.
The mathematical expression for the computation of S for an image f, of size m×n, with a suitable boundary condition, can be written as
S(r) = \sum_{c=1}^{n} \left| f(r,c) - f(r+1,c) \right|,
where r=1,2,…,m and f(m+1,c) is the introduced boundary row. Now
defining the “threshold” value from the S curve, we obtain the sparse
matrix L by assigning “ones” for the locations of the
stripes. We will explain the procedure of selecting an appropriate
“threshold” value using examples in Sect. . Any row r,
where r=1,2,…,m of matrix L with size m×n can be
defined as
L_{r,c} = \begin{cases} 1, & \text{if } S(r) \ge \text{threshold} \\ 0, & \text{otherwise}, \end{cases}
where c=1,2,…,n.
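The computation of S and of the mask L described above can be sketched as follows (a NumPy sketch; the function name and the optional column-restriction argument, which mirrors the vertical image segment mentioned earlier, are ours):

```python
import numpy as np

def stripe_mask(f, threshold, cols=None):
    """Compute the row-wise sum S of absolute transverse differences
    and the binary weight matrix L; `cols` optionally restricts the
    sum to a vertical segment of the image."""
    seg = f if cols is None else f[:, cols]
    # boundary row f(m+1, c): replicate the last row
    padded = np.vstack([seg, seg[-1:, :]])
    S = np.sum(np.abs(padded[:-1, :] - padded[1:, :]), axis=1)
    L = np.zeros_like(f, dtype=float)
    L[S >= threshold, :] = 1.0
    return S, L
```

Note that, with this forward-difference definition, a single-row stripe produces peaks of S at both the stripe row and the row above it, which is why the threshold is read off from the peaks of the S-versus-r graph.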
Then the new destriping functional, with the spatially weighted regularization term, is written as
E(u) = \int_\Omega \left( \frac{\partial}{\partial x}(u - f) \right)^2 \, d\Omega + \alpha \int_\Omega L \left( \frac{\partial u}{\partial y} \right)^2 \, d\Omega.
The destriped image is obtained by minimizing the functional after choosing
an appropriate regularization parameter. Note that the functional E(u) is
invariant under constant shift. That is, E(u+a)=E(u) for any constant a,
implying that minimization of E(u) leads to an infinite number of
solutions. In this work, we impose appropriate boundary conditions (as
discussed below) that ensure uniqueness of the solution.
We create a destriped image by minimizing the energy functional in
Eq. () using the Euler–Lagrange
equation. For a functional of the form
J(u) = \int_\Omega F\left( x, y, u, u_x, u_y \right) \, d\Omega
on the bounded domain Ω, the Euler–Lagrange equation is given as
\frac{\partial F}{\partial u} - \frac{\partial}{\partial x} \frac{\partial F}{\partial u_x} - \frac{\partial}{\partial y} \frac{\partial F}{\partial u_y} = 0.
As explained in and , the Euler–Lagrange equation,
u_{xx} + \alpha L u_{yy} = f_{xx}
is obtained by applying Eq. () to
Eq. (), where subscripts represent
the argument variable(s) of the partial derivatives. We can rewrite
Eq. () as
\left( D_{xx} + \alpha L D_{yy} \right) u = D_{xx} f,
where the operators D_{••} are two-dimensional arrays of size
k×k used to compute the partial derivatives of a given vector of size
k×1 with respect to the indices ••.
We use finite-difference approximations with suitable boundary conditions for
each derivative to directly represent these differential operators. In this
work, we apply “reflexive” boundary conditions parallel to the stripes and
“zero” boundary conditions transverse to the stripes when we generate the
derivative operators. These boundary conditions make D_{xx} + \alpha L D_{yy} a full-rank operator, and hence we obtain a unique solution.
In Eq. () we stack the given image of size
p×q into a k×1 vector, where k=pq. We can now use the
differential operators to write the linear Euler–Lagrange equation in the
form of Au=b, and solve for u using an appropriate numerical method
rather than solving the Euler–Lagrange equation for u from an iterative
scheme such as the gradient descent method.
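The direct solve of Au = b can be sketched with SciPy's sparse machinery. Here `make_dxx` and `make_dyy` are hypothetical helpers standing in for the operator construction described next, and the column-stacking convention is our assumption:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def destripe(f, L, alpha, make_dxx, make_dyy):
    """Solve (Dxx + alpha * L * Dyy) u = Dxx f for the stacked image.
    `make_dxx`/`make_dyy` return k-by-k sparse derivative operators
    for an image of shape (p, q), where k = p * q."""
    p, q = f.shape
    Dxx, Dyy = make_dxx(p, q), make_dyy(p, q)
    W = sp.diags(L.flatten(order="F"))        # stripe weights on the diagonal
    A = (Dxx + alpha * (W @ Dyy)).tocsc()
    b = Dxx @ f.flatten(order="F")
    u = spla.spsolve(A, b)                     # direct sparse solve, no iteration
    return u.reshape((p, q), order="F")
```

With an all-zero mask L, the system reduces to D_xx u = D_xx f and the solver returns the input image unchanged, which is a convenient sanity check.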
Construction of the differential operator
We construct the operator Dxx using finite-difference approximations.
The operator D_{yy} is built by taking the transpose of the finite-difference stencil. Suppose we have an array M(x,y) ∈ R^{p×q}, and we need to compute the second-order partial derivative of M with
respect to x. We use a fourth-order finite-difference approximation and
compute the point-wise second partial derivative of the array M with
respect to x.
As an example, take an array M(x,y) of size 3×5
for which we want to compute M_xx(x,y). We index the elements in
the array in the form of a column vector as shown in Table 1, with two added
boundary columns on each side.
An array of 3×5 with boundary points in bold.

**m4** **m1**  m1  m4  m7  m10  m13  **m13** **m10**
**m5** **m2**  m2  m5  m8  m11  m14  **m14** **m11**
**m6** **m3**  m3  m6  m9  m12  m15  **m15** **m12**
The boundary points are highlighted in bold. If we compute the partial
derivative of m_1 with respect to x, the resulting approximation is
\frac{\partial^2 m_1}{\partial x^2} = \frac{1}{12h^2} \left( -m_4 + 16 m_1 - 30 m_1 + 16 m_4 - m_7 \right) = \frac{1}{12h^2} \left( -14 m_1 + 15 m_4 - m_7 \right).
Computing the finite-difference approximations for each element in the array,
we can obtain the differential operator Dxx corresponding to the
3×5 array in Table . The
resulting operator is a sparse matrix with only five non-zero diagonals, and
it is shown in Table with a multiplication
factor of 12h².
A discretized derivative operator D_xx × 12h² for a 3×5 matrix.

       m1   m2   m3   m4   m5   m6   m7   m8   m9  m10  m11  m12  m13  m14  m15
m1    -14    0    0   15    0    0   -1    0    0    0    0    0    0    0    0
m2      0  -14    0    0   15    0    0   -1    0    0    0    0    0    0    0
m3      0    0  -14    0    0   15    0    0   -1    0    0    0    0    0    0
m4     15    0    0  -30    0    0   16    0    0   -1    0    0    0    0    0
m5      0   15    0    0  -30    0    0   16    0    0   -1    0    0    0    0
m6      0    0   15    0    0  -30    0    0   16    0    0   -1    0    0    0
m7     -1    0    0   16    0    0  -30    0    0   16    0    0   -1    0    0
m8      0   -1    0    0   16    0    0  -30    0    0   16    0    0   -1    0
m9      0    0   -1    0    0   16    0    0  -30    0    0   16    0    0   -1
m10     0    0    0   -1    0    0   16    0    0  -30    0    0   15    0    0
m11     0    0    0    0   -1    0    0   16    0    0  -30    0    0   15    0
m12     0    0    0    0    0   -1    0    0   16    0    0  -30    0    0   15
m13     0    0    0    0    0    0   -1    0    0   15    0    0  -14    0    0
m14     0    0    0    0    0    0    0   -1    0    0   15    0    0  -14    0
m15     0    0    0    0    0    0    0    0   -1    0    0   15    0    0  -14
Solution to the Euler–Lagrange equation
Now we can determine the solution to Eq. (). We
rewrite Eq. () as
Au=b,
where A = D_{xx} + \alpha L D_{yy} and b = D_{xx} f. If the size of the given
striped image f is p×q, then A is a k×k sparse array
and b is a k×1 array, where k=pq.
Using a suitable value for the regularization parameter,
Eq. () can be solved as a linear system. The
system is sparse, and hence the computation time for an image with n pixels
is O(n) per iteration. Clearly, at this stage, for a given
α, Eq. () is straightforward to solve;
however, in terms of the image processing, the specific choice of α
plays an important role. To achieve “the most appropriate solution,” we
need to determine the best regularization parameter α.
Selection of the regularization parameter
The condition number of the resulting matrix quantifies the amplification of
computational errors seen while solving the problem by direct computation.
The condition number may be computed as the ratio between the largest
singular value and the smallest singular value of the coefficient matrix. If
the condition number is large, then the coefficient matrix is said to be
ill-conditioned and hence the corresponding system is ill-posed. In an
ill-posed system, the solution is highly sensitive to perturbations of the
input data. Regularizing an ill-posed system, which emphasizes a desired property
of the problem, introduces a stable way to define a desirable solution . This is the standard
trade-off between regularity and stability in Tikhonov regularization theory.
We regularize our computed solution by emphasizing the expected physics. To
damp the accumulated errors from the residuals, we must make sure that we add
sufficient regularity. The balance between the data term and the
regularization term is very important: if we add too much regularity, it will
divert the solution from the desired one. Stated in terms of Tikhonov
regularization, α serves to select a unique optimizer u
from what would otherwise be an ill-posed system if only the data-fidelity
term were used. In terms of the images, the data fidelity states that the
optimizer image u should “appear as” the original data image, measured
here along the stripes, but “smoothed” according to the
regularization term, in this case transverse to the stripes. The question then
is how to balance these two terms.
Some of the common methods to determine the regularization parameter in
inverse problems are the L-curve method of and
U-curve method of and . After
trials with both methods we selected the U-curve method. Starting with the
functional
J(u) = \| B u - v \|_2^2 + \alpha \| u \|_2^2,
where B is a k×k array and u,v are k×1 arrays for an
integer k, a regularized solution is computed for a fixed α. Then
the norms of the residual and the solution are written as
x(\alpha) = \| B u_\alpha - v \|_2^2 \quad \text{and} \quad y(\alpha) = \| u_\alpha \|_2^2.
In general, α ∈ (0,1), and in this work we picked 100 equally spaced
α values in the interval (10⁻¹², 1). Considering all the
α values, and computing the sum of the reciprocals, results in a
U-curve,
U(\alpha) = \frac{1}{x(\alpha)} + \frac{1}{y(\alpha)}.
The “best” α value corresponds to the minimum of U(α). The
“best” α value must lie in the interval α ∈ [\sigma_r^{2/3}, \sigma_1^{2/3}], where σ_1 is the
largest singular value of the operator B, and σ_r is the smallest
non-zero singular value of the operator B
.
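The U-curve selection can be sketched as a grid search over candidate α values; `solve_for_alpha` is a hypothetical callback returning the regularized solution u_α, illustrated below with a plain Tikhonov solve:

```python
import numpy as np

def u_curve_alpha(solve_for_alpha, B, v, alphas):
    """Return the alpha minimizing U(a) = 1/x(a) + 1/y(a),
    with x = ||B u_a - v||^2 and y = ||u_a||^2."""
    best_alpha, best_U = None, np.inf
    for a in alphas:
        u = solve_for_alpha(a)
        x = np.sum((B @ u - v) ** 2)
        y = np.sum(u ** 2)
        U = 1.0 / x + 1.0 / y
        if U < best_U:
            best_alpha, best_U = a, U
    return best_alpha

# usage example with a standard Tikhonov solve (illustrative only)
rng = np.random.default_rng(1)
B = rng.random((6, 6))
v = rng.random(6)
solve = lambda a: np.linalg.solve(B.T @ B + a * np.eye(6), B.T @ v)
alphas = np.linspace(1e-12, 1.0, 100)   # equally spaced, as in the text
best = u_curve_alpha(solve, B, v, alphas)
```

In practice the candidate grid should be restricted to the singular-value interval quoted above, which we omit here for brevity.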
Panel (a) shows (simulated) variations of sea surface
temperature off the coast of Oregon, USA, on 1 August 2002.
The image was generated from a Regional Ocean Model System (ROMS, Courtesy of John Osborne, Oregon State University),
using the data assimilation from the Geostationary Operational Environmental Satellite (GOES). Panel (b) is created by adding artificial stripes to every 20th row.
Inpainting data gaps
Unlike terrestrial images, which can show sharp edges, ocean color images
typically appear continuous. This is because the water tends to diffuse any
color agents in the water column, and the spatial resolution of the sensor is
usually finer than those diffusive features. The same holds spectrally if the
sampling wavelength is less than the autocorrelation function of the spectra,
as is the case for hyperspectral images. However, this continuity is broken
if gaps appear in the data.
Clouds are very bright and often saturate the sensor. In normal processing,
clouds are typically masked from the data since their large radiance values
obscure the (relatively dark) ocean. These types of processing masks also
cause large, irregular data gaps. There are other sources of data gaps as
well, as we discuss here, that can be inherent in the sensor design.
As an example of data gaps introduced by system design, consider the VIIRS
sensor, which uses 16 detector elements to generate each spatial image. The
spatial footprint of each adjacent sensor element overlaps at the detector
edges of . To reduce data transmission from the
satellite to ground stations, the overlapping regions are not transmitted,
causing the so-called “bow-tie” effect. This appears as visible horizontal
stripes in the Level 1 or Level 2 unmapped pixel values. These gaps are
removed during projection to ground-based coordinates, so-called Level-3 data
(L3). However, stripes which are linear in the “unmapped” data become
nonlinear after mapping, and are more difficult to remove. Therefore we work
with the Level 1 and Level 2 data where the stripes are linear, and often
aligned with the focal plane array or detector elements.
These three images represent the destriped versions of the image, shown
in Fig. b, in three different ways.
Panels (a) and (b) are obtained with α=1 and
α=3×10-1 from the functional in
Eq. (). Panel (c) is obtained with
α=7×10-1 from the functional with spatially weighted
regularization term, as shown in
Eq. ().
We need to preprocess the image data in such a way as to ensure continuity
across any data gaps. An obvious way to fix the gaps is to use
“inpainting”, a technique from image processing – rather than infer what the
actual missing data might be, inpainting simply imposes continuity across the
whole image when gaps are present. In a museum setting, inpainting refers to
the process whereby a painter–restorer interprets a damaged painting by
artistically filling in damaged or missing parts of a painting, smoothly
bleeding in the colors of the painting that surrounds the damage
. In modern-day computational parlance,
inpainting refers to an image-processing algorithm that mimics this idea. A
wide variety of methods have been implemented .
Missing data are filled in by solving the Laplace equation with Dirichlet
boundary conditions, ∇²u = 0 in Ω and u = f on ∂Ω;
specifically, we use the MATLAB routine roifill. Because the
algorithm approximates the partial derivatives using finite-difference
approximations, inpainting should be done before applying the destriping
algorithm on images.
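A minimal sketch of this inpainting step, a 5-point Laplace solve over the masked pixels standing in for MATLAB's roifill (the handling of holes at the image edge is our own simplification):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def inpaint_laplace(f, mask):
    """Fill pixels where mask is True by solving the discrete Laplace
    equation (5-point stencil) with the known pixels as Dirichlet data."""
    p, q = f.shape
    holes = np.argwhere(mask)
    idx = -np.ones((p, q), dtype=int)
    idx[mask] = np.arange(len(holes))
    A = sp.lil_matrix((len(holes), len(holes)))
    b = np.zeros(len(holes))
    for n, (r, c) in enumerate(holes):
        A[n, n] = 4.0
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if not (0 <= rr < p and 0 <= cc < q):
                A[n, n] -= 1.0            # image edge: drop the neighbor
            elif mask[rr, cc]:
                A[n, idx[rr, cc]] = -1.0  # neighbor is also unknown
            else:
                b[n] += f[rr, cc]         # known neighbor enters the RHS
    u = f.astype(float).copy()
    u[mask] = spla.spsolve(A.tocsc(), b)
    return u
```

Because the filled values satisfy the discrete Laplace equation, they blend smoothly into the surrounding data, which is exactly the continuity the destriping derivatives require.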
Results and discussion
In this section, we first apply the destriping method to a simulated image of
sea surface temperature (SST), and then apply the destriping algorithm to two
real-world images, one from the multi-spectral NOAA imager VIIRS, and the
second from the JPL PRISM hyperspectral sensor.
Benchmark data: sea surface temperature
A benchmark data set – sea surface temperature data off the Oregon coast –
was used to test the codes. The original image data is shown in
Fig. a, where red represents high temperature
and blue represents low temperature. This simulated image is from a
three-dimensional
Regional Ocean Model System (ROMS) showing the Oregon Coast on 1 August 2002
. The area covered is 41–46° N, 124–125° W.
The image (b) in Fig. shows the artificially
added stripes on the image shown in Fig. a.
The intensities of the stripes are assigned as the absolute value of the difference
between the original intensity and the average of each striped row.
However, the stripes at the 10th, 90th and 110th rows have some added noise
so that they can be visualized properly.
Our next step is to apply the destriping method to the SST data and check the
accuracy of the algorithm. There, we compare the solutions with
regularization parameters α=1 and α=3×10⁻¹ using the
functional with an unweighted regularization term, as shown in
Eq. (), and α=7×10⁻¹ with a
spatially weighted regularization term, as shown in
Eq. (). The idea of this comparison
is to show the importance of the selection of our regularization method and
how the weighted regularization term improves the destriping results. The
destriped image from the unweighted regularization term is shown in
Fig. a, and clearly it is overly smooth, as most of the
original features are destroyed by the destriping. Therefore, we chose an
appropriate α value to balance the data term and the unweighted
regularization term from the U-curve method. The U-curve solution is
α=3×10⁻¹ and the resulting destriped image is shown in
Fig. b. However, it can be observed that some features
of the destriped image have undergone smoothing in some places beyond
the stripes during the destriping process. Hence, it is an important task to
preserve the features where there are no stripes while applying
destriping. We achieve this by improving the regularization term,
assigning different weights to the places where stripes are present and
to where they are not. The weight matrix L can be obtained from the expression in Eq. () by assigning zeros to the places where there are no
stripes. In this manner, we can preserve the features of the original image
from smoothing, and the resulting image is shown in
Fig. c.
The next task is to check the accuracy of the estimated values for the
stripes. The stripe at the 110th row was randomly selected for this purpose.
There we plot the intensity values of the striped row (black), the reconstruction
(red) and the actual values (blue) against the column index. The
results from the functional in Eq. () with
α=1 are shown in graph (a) of
Fig. . The reconstructions of the first
20 pixels and the last 10 pixels are closer to the original values, but the
rest have significant differences compared to the actual value. The graph (b)
in Fig. shows destriped results of the
110th row from the functional in Eq. () using
the U-curve solution, α=3×10⁻¹. In this case, the
reconstruction of some middle pixels (18–28 and 31–37) and the last 10
pixels are off by some extent, but the rest is much closer to the actual
values than the reconstruction with α=1. The graph (c) in
Fig. shows the best solution among these
approaches, obtained from the weighted regularization term with
α=7×10⁻¹. This reconstruction is very close to the actual
values and hence it is the most accurate reconstruction. These are the sort of
results that we expect from a destriping algorithm. More importantly, if we
compare “a stripe-like feature” at the 18th row, we can confirm the importance of giving weights to the regularization term, as the weighted
regularization term focuses only on the stripes.
This figure presents three different reconstructions of the stripe
at the 110th row of the image shown in
Fig. b. The graphs show the actual data in
blue, striped data in black and the destriped data in red. Graphs
(a) and (b) represent the reconstructions from the
functional in Eq. () with α=1 and
α=3×10⁻¹, respectively. The graph (c) shows the
reconstruction from the functional in
Eq. () with α=7×10⁻¹.
In addition to the reconstruction of stripes, we need to pay attention to the
rest of the features of the image. The idea of destriping is to remove
artificial stripes while preserving the other original features of the image.
Therefore, we randomly select the 67th row of the image to compare before and
after effects of destriping at a place where there is no actual stripe. The
graphs (a) and (b) in Fig. are from the
functional in Eq. () with α=1 and
α=3×10⁻¹, respectively, and the graph (c) in
Fig. from the functional in
Eq. () with α=7×10⁻¹.
The graphs (a) and (b) in Fig. show
that the destriping has affected the other features of the image when the
functional in Eq. () is applied, regardless of
the value of the regularization parameter. However, the graph (c) in
Fig. shows that the functional with the
spatially weighted regularization term in
Eq. () does not affect the other
features of the image.
The S curve corresponding to the SST image is shown in graph (a) in
Fig. . The “threshold” value for this problem is
also shown on the same graph; it is 18 in this case. In the
units of the sea surface temperature data, the threshold value can be written as
842 °C longitude per latitude. To determine the best “threshold” value
that does not affect the image regions with no stripes, we begin the
procedure by assigning the “threshold” value to the maximum peak of the
S curve. Then we apply the destriping algorithm and check the solution. If
at least one stripe is still visible, we assign the next highest peak of the
S curve as the “threshold” value and check for the stripes. Continuing in
this manner until all the stripes are removed, the best “threshold” value
can be determined. The peak points that have crossed the threshold value are
the stripes. As a comparison of the full destriping image, the absolute
percentage error between the striped and destriped image is shown in the
image (b) in Fig. . This is computed by
\text{Absolute Percentage Error} = \frac{| f - u |}{f} \times 100,
where f and u are the striped and destriped images, respectively. The graph of
the absolute percentage error shows that not only the 67th row but also all
the other non-stripe rows are unaffected by this spatially weighted
regularization method.
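The error map is computed elementwise; as a one-line sketch (assuming the striped image has no zero-valued pixels):

```python
import numpy as np

def absolute_percentage_error(f, u):
    """Elementwise |f - u| / f * 100 between the striped image f
    and the destriped image u."""
    return np.abs(f - u) / f * 100.0
```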
This figure shows the effects of destriping on the places where there are “no
stripes”. We randomly picked the 67th row for this comparison. When
α=1 in the unweighted regularization functional, the image is
over-smoothed and the destriping affects the whole image. Much better results can
be obtained from the unweighted regularization functional with
α=3×10⁻¹, which is the U-curve solution and is shown in graph
(b). A spatially weighted regularization term with α=7×10⁻¹ has less of an effect on the other features of the image, as can be
observed in graph (c).
Panel (a) shows the S function values against the row
numbers. The peak points represent the stripes. When the “threshold” is set at
18, only the stripes are included in the regularization, excluding all the
other features of the image. With the units of the image data, the threshold
value can be written as 842 °C longitude per latitude. The
percentage error between the striped and destriped images is shown in
panel (b). The error where there are no stripes remains close to
zero.
Example 2 – VIIRS images
A good example of VIIRS striping is shown (Fig. ) in
a chlorophyll concentration map near the Santa Monica region in southern
California on 7 November 2014 . The chlorophyll concentration is
given in milligrams per cubic meter. Green is the land, and dark blue marks the dropped
data and the missing data. The full VIIRS data granule covers
122.09–116.90° W and
34.2–31.6° N (Fig. ).
We consider a subsetted (cropped) region of interest
to highlight the stripes in Fig. a.
The first step of applying this destriping method is to determine the
threshold value to separate the neighborhood of stripes and the rest of the
features. However, when we deal with real data, we may not always get nice
and smooth images. For instance, if we compute the S curve values using the
whole image, we are unable to get any evidence to determine the threshold
value, as the sum of the magnitude of forward differences in some other rows
are higher than that of the stripes. Therefore, we need to carefully pick a
subregion of the image that includes all the rows with some selected columns.
The sum of the magnitude of forward differences in non-stripe rows should be
less than that of the stripes in this region. Then the stripes can be easily
highlighted. For this example, we select the region using the columns from
137 to 148, and it is shown in Fig. b with
the S curve. The threshold value can be determined from the S curve, and
it is 0.558 for this image. If we include the proper units of the image
data, the threshold value is 86.04 mg m⁻³.
The image shows the chlorophyll concentration in milligrams per cubic meter near
the Santa Monica region in southern California as viewed by VIIRS on
7 November 2014. Green represents the land and dark blue represents the
dropped data due to bow-tie effects and missing data due to clouds. For a
detailed discussion, we next consider the subset of the image that is covered
by the pink square in the image.
The effects of spatially weighted regularizing destriping are shown in
Fig. e. We apply our algorithm to the image
shown in Fig. a. In this case, the
regularization parameter was α=10⁻² with the weighted
regularization term. The intensity of the destriped image now varies smoothly.
Therefore, the image is smooth enough to further post-process for other
applications such as computing optical flow.
Panel (a) shows the cropped region shown in Fig. and the graph (b) shows the S
curve with the threshold and the image piece (columns 137 to 148) that we used to compute the values of the S curve. The panels
(c), (d) and (e) represent the destriped images of (a) with α=1 with the unweighted regularization term, and α=10⁻⁵ and
α=10⁻² with the weighted regularization term, respectively. Panel (e) provides the best solution for the destriped image. Panel (c) is over-regularized, whereas
panel (d) is not sufficiently regularized. Panel (f) represents the percentage error between panels (a) and (e).
Panel (a) shows the destriped image scene of the panel
(a) in Fig. from NASA's vicarious
calibration of the L2 (*.nc) product method. While this method does improve
over the raw image, there are still stripe artifacts present. Image (b)
represents the destriped version of image (a) obtained with our method.
Panel (a) is band 22 (∼ 410 nm) of a hyperspectral image which was taken from JPL PRISM
. The stripe pattern is vertical and the destriped images
are shown in image (b). Green represents the land of the observed
region.
The approach proposed in omits the transverse-direction
functional, effectively setting α=1 in our functional
Eq. (). However, our destriping functional as
shown in Eq. () has the weight matrix
L inside the regularization term, and hence the two approaches are
not the same. Panel (c) in Fig. represents
the destriped image when α=1 without the weighted regularization term,
where image (a) is the original image. The destriped image is blurred and we
lose some information from the original image due to “over-regularization”. In
this case, by over-regularization we mean that continuity in the
direction transverse to the “stripes” has been emphasized so strongly in
functional Eq. (), when α is close to
1, that it overwhelms the need to also fit the image data
in the first term, the “data fidelity”. On the other hand, image (d) is
the destriped image of image (a) with α=10⁻⁵ in the weighted
regularization term. In this image, we can still observe some stripes due to
insufficient regularity to smooth the stripes. Hence, it is clear that
weighting the regularization term is necessary, and
choosing an appropriate regularization parameter is also very important. We can
observe this scenario when we compare the images (c), (d) and (e) in
Fig. as α varies from 10⁻⁵ to
1. Therefore, once we have an appropriate α, we do not need an extra
functional to reduce the blurring effects as explained in
, and hence our approach provides a simple algorithm to
remove the stripes. The appropriate α is selected from the U-curve
method, as explained in Sect. . Panel (f) in
Fig. shows the percentage error between
the striped and destriped images. This demonstrates that the effects
of the destriping on regions of the image where there are no stripes are negligible.
Image-based methods such as the variational destriping can be used together
with other destriping methods. For instance, NASA's vicarious calibration of
the L2 (*.nc) product method uses a monthly moon calibration to monitor the
striping and calibration, and creates up-to-date corrections using the entire
image collection. Figure a shows the VIIRS image in
Fig. a after the NASA correction, and the
data are publicly available at . However, the striping artifacts
are still visible in that image. Figure b shows the
application of the weighted regularizing destriping to the image from
NASA's vicarious calibration of the L2 (*.nc) product method. The product of
applying both corrections is superior to either individual
correction. The regularization parameter for this image was 5×10⁻³,
and the destriped image is shown in Fig. b. The
threshold value for the columns 137 to 148 was 0.55. If we include the
proper units of the chlorophyll data, the threshold value is
84.81 mg m⁻³ longitude per latitude.
When we compare the images (a) and (b) in Fig. , it
can be observed that our method can further improve the
destriped images after NASA's vicarious calibration of the L2 (*.nc) product method has already been applied.
Example 3 – JPL PRISM images
In the last example, we apply the variational destriping algorithm to another
data set from the JPL PRISM hyperspectral imager, where the data is publicly
available at . Stripes arise in this sensor because the
focal-plane array read-out mechanism has cross-talk artifacts. An image from
the band centered at 410 nm (band 22), which was taken from an airborne
campaign around Monterey, CA, near the Elkhorn Slough , is shown in
Fig. .
The regularization parameter to destripe the image (a) in
Fig. was α=5×10⁻³. The
destriped image is shown in image (b) in
Fig. . We used the columns from 50 to 70 to
determine the threshold value to assign the weights in the regularization
term, and this value was 1.7 for this image. It can be clearly observed that the
stripes are smoothed out by the destriping method while the other
features of the image are preserved.