
Journal of Signal and Information Processing, 2013, 4, 336-342
http://dx.doi.org/10.4236/jsip.2013.43042 Published Online August 2013 (http://www.scirp.org/journal/jsip)

Parametrically Optimal, Robust and Tree-Search Detection of Sparse Signals

A. T. Burrell¹, P. Papantoni-Kazakos²

¹Oklahoma State University, Computer Science Department, Stillwater, USA; ²University of Colorado Denver, Electrical Engineering Department, Denver, USA.
Email: tburrell@okstate.edu, titsa.papantoni@ucdenver.edu

Received June 30th, 2013; revised July 30th, 2013; accepted August 13th, 2013

Copyright © 2013 A. T. Burrell, P. Papantoni-Kazakos. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
ABSTRACT
We consider sparse signals embedded in additive white noise. We study parametrically optimal as well as tree-search sub-optimal signal detection policies. As a special case, we consider a constant signal and Gaussian noise, with and without data outliers present. In the presence of outliers, we study outlier resistant robust detection techniques. We compare the studied policies in terms of error performance, complexity and resistance to outliers.
Keywords: Sparse Signals; Detection; Robustness; Outlier Resistance; Tree Search
1. Introduction
In recent years, refreshed interest has been given to sparse signals by the signal processing community [1,2], while the effective probing/transmission of such signals, previously denoted bursty, has been addressed by both tree-search [3,4] and random access algorithms [5]. The revisited investigation of sparse signals has focused on linear transformations [1,2], while the term robustness has been used loosely in [1].

In this paper, we focus on the detection of sparse signals embedded in white Gaussian noise with the possible occasional occurrence of data outliers. We study both optimal and sub-optimal detection techniques, when data outliers are considered both absent and present. In the latter case, we consider robust detection techniques, where robustness is here precisely defined as referring to outlier resistant operations [6,7]. We compare our techniques in terms of error performance, complexity and resistance to outliers.

The organization of the paper is as follows: In Section 2, we state the fundamental general problem, present assumptions and notation, and determine, as well as partially analyze, the general optimal detector. In Section 3, we present and analyze the optimal detector for the case of white Gaussian noise and constant signal, when no outliers are considered in the design. In Section 4, we present and analyze the robust (outlier resistant) detector. In Section 5, we present and evaluate tree-search sub-optimal detectors. In Section 6, we include discussion and conclusions.
2. Problem Statement and General Solution
We consider a sequence of observations generated by mutually independent random variables, a small percentage of which represent signal embedded in noise, while the remaining percentage represent just noise. Let it be known that the percentage of observations representing signal presence is bounded from above by a given value α. We assume that the random variables representing the signal are identically distributed, and that so are those representing the noise. We denote by x_1, …, x_n a sequence of n such observations, while we denote by X_1, …, X_n the sequence of mutually independent random variables whose realization is the sequence x_1, …, x_n. We also denote by f_1(.) either the probability distribution (for discrete variables) or the probability density (for absolutely continuous variables) function (pdf) of the variables which represent signal presence, while we denote by f_0(.) the pdf of the variables which represent just noise. Given the observation sequence x_1, …, x_n and assuming f_1(.), f_0(.) known, the objective is to identify the locations of the signal presence; that is, which ones of the x_1, …, x_n observations originated from the f_1(.) pdf.
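As a concrete illustration of this observation model, the sketch below simulates such a sequence. This is a minimal sketch, not taken from the paper, assuming the Section 3 special case (a constant signal θ added to zero mean Gaussian noise) and hypothetical parameters n, alpha, theta, sigma:

```python
import numpy as np

def generate_observations(n, alpha, theta, sigma, seed=None):
    """Generate n mutually independent observations; at most alpha*n of
    them carry the constant signal theta embedded in N(0, sigma^2) noise."""
    rng = np.random.default_rng(seed)
    m = rng.integers(0, int(alpha * n) + 1)            # number of signal-bearing observations
    signal_idx = rng.choice(n, size=m, replace=False)  # their (unknown) locations
    x = sigma * rng.standard_normal(n)                 # noise everywhere
    x[signal_idx] += theta                             # signal added at the chosen locations
    return x, np.sort(signal_idx)

x, idx = generate_observations(n=100, alpha=0.05, theta=2.0, sigma=1.0, seed=0)
```

The detector's task is to recover `idx` from `x` alone, knowing only f_1, f_0 and the bound α.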
Our approach to the problem solution will be Maximum Likelihood (ML), which is equivalent to the Bayesian minimization of error probability approach, when all signal locations and their number are equally probable [7]. That is, given the sequence x_1, …, x_n, the optimal detector decides in favor of the {i_1, …, i_m}; 0 ≤ m ≤ αn, signal locations if:

$$\sum_{j=1}^{m}\log f_1\!\left(x_{i_j}\right)+\sum_{k\notin\{i_1,\dots,i_m\}}\log f_0(x_k)=\max_{\{j_1,\dots,j_l\}:\;0\le l\le\alpha n}\left[\sum_{i=1}^{l}\log f_1\!\left(x_{j_i}\right)+\sum_{k\notin\{j_1,\dots,j_l\}}\log f_0(x_k)\right] \tag{1}$$

Let us define:

$$g(x)\triangleq\log\frac{f_1(x)}{f_0(x)} \tag{2}$$

Via some straightforward modifications of the expression in (1), we may then express the optimal ML detector as follows:
Optimal ML Detector
1) Given the sequence x_1, …, x_n, compute all g(x_j); 1 ≤ j ≤ n.
2) If g(x_k) ≤ 0 for all k, 1 ≤ k ≤ n, then decide that no observation contains signal.
3) If ∃ a set of integers {i_1, …, i_m}; 1 ≤ m ≤ αn: g(x_k) > 0 for all k ∈ {i_1, …, i_m} and g(x_k) ≤ 0 for all k ∉ {i_1, …, i_m}, then decide that the observations containing the signal are all those with indices in the set {i_1, …, i_m}.
4) If ∃ a set of integers {i_1, …, i_m}; m > αn: g(x_k) > 0 for all k ∈ {i_1, …, i_m} and g(x_k) ≤ 0 for all k ∉ {i_1, …, i_m}, then decide that the observations containing the signal are those whose indices k are contained in the set {i_1, …, i_m} and whose g(x_k) values are the αn highest in the set.
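The four steps above can be sketched in code as follows. This is a minimal illustrative sketch, not the authors' implementation, assuming the caller supplies the log likelihood ratio g(·) of (2) as a vectorized function together with the sparsity bound α:

```python
import numpy as np

def ml_detect(x, g, alpha):
    """Optimal ML detector: return the indices decided to contain signal.

    x     : 1-D array of observations
    g     : vectorized log likelihood ratio, g(x) = log(f1(x)/f0(x))
    alpha : upper bound on the fraction of signal-bearing observations
    """
    n = len(x)
    cap = int(alpha * n)                 # at most alpha*n signal locations
    gx = g(np.asarray(x, dtype=float))
    positive = np.flatnonzero(gx > 0)    # step 2: no positive g -> no signal declared
    if positive.size <= cap:             # step 3: all positive g's declared as signal
        return positive
    # step 4: keep only the alpha*n highest g values among the positive ones
    order = positive[np.argsort(gx[positive])[::-1]]
    return np.sort(order[:cap])

# usage: for a constant signal theta in N(0, sigma^2) noise, g(x) is
# proportional to x - theta/2 (the Section 3 special case)
theta, sigma = 2.0, 1.0
g = lambda x: (theta / sigma**2) * (x - theta / 2)
detected = ml_detect([0.1, 2.3, -0.5, 1.9], g, alpha=0.5)
```

Note that step 4 is where the sorting of the positive g(x_k) values, and hence the n log n complexity discussed in the Remarks below, enters.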
Considering the log likelihood ratio in (2), let h_i(w) and H_i(w); i = 0, 1, denote, respectively, the pdf and the cumulative distribution function of the random variable g(X) at the value point w, given that the pdf of X is f_i, i = 0, 1. Let P_d(0) denote the probability of correct detection induced by the optimal ML detector, given that no observation contains signal. Let, instead, P_d(i_1, …, i_m) denote the probability of correct detection, given that the indices of the observations containing signal are given by the set {i_1, …, i_m}. Then, assuming absolutely continuous {X_i} random variables (without loss of generality), we obtain the following expressions without much difficulty, where αn is assumed to be an integer for simplicity in notation:

$$
\begin{aligned}
P_d(0)&=H_0^{\,n}(0)\\
P_d(i_1,\dots,i_m)&=\left[1-H_1(0)\right]^{m}H_0^{\,n-m}(0); & 1\le m<\alpha n\\
P_d(i_1,\dots,i_m)&=\alpha n\int_{0}^{\infty}h_1(w)\left[1-H_1(w)\right]^{\alpha n-1}H_0^{\,n-\alpha n}(w)\,\mathrm{d}w; & m=\alpha n
\end{aligned}\tag{3}
$$

Remarks
It is important to note that the optimal detector presented above assumes no knowledge as to any structure of the sparse signal and requires n-size memory, as well as ordering of the positive g(x_k) values, inducing complexity of order n log n. If, on the other hand, a structure of the signal is known a priori and is such that it appears as a bursty batch, then the sequential algorithm in [7-9] that monitors changes in distribution should be deployed instead; it requires no memory and its complexity is of order n.
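The sequential alternative mentioned in the remark can be illustrated with a CUSUM-style recursion that monitors a change in distribution. This is a minimal sketch, assuming the Section 3 setting (constant signal θ in zero mean Gaussian noise) and a hypothetical threshold eta; it mirrors the {T_n} algorithm detailed in Section 3:

```python
def detect_batch_start(xs, theta, eta):
    """Sequential (CUSUM-style) monitoring of a change from noise-only to
    signal-plus-noise: T_0 = 0, T_{n+1} = max(0, T_n + x_{n+1} - theta/2).
    Returns the first index at which T crosses the threshold eta, or None."""
    t = 0.0
    for n, x in enumerate(xs):
        t = max(0.0, t + x - theta / 2)  # constant memory, O(n) total work
        if t >= eta:
            return n                     # alarm: batch declared to have begun
    return None

# usage: noise-like samples followed by a signal batch (theta = 2)
alarm = detect_batch_start([0.1, -0.3, 0.2, 2.1, 1.8, 2.4], theta=2.0, eta=2.0)
```

In contrast to the block detector above, this recursion keeps a single running value rather than the full n-size memory.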
3. Constant Signal and White Gaussian Additive Noise
In this section, we consider the special case where the signal is a known constant θ > 0, and the noise is zero mean white Gaussian with standard deviation σ. After some simple, straightforward normalizations, the optimal ML detector of Section 2 takes here the following form:
Optimal ML Detector
1) Given the sequence x_1, …, x_n, compute all (x_j − θ/2); 1 ≤ j ≤ n.
2) If (x_k − θ/2) ≤ 0 for all k, 1 ≤ k ≤ n, then decide that no observation contains signal.
3) If ∃ a set of integers {i_1, …, i_m}; 1 ≤ m ≤ αn: (x_k − θ/2) > 0 for all k ∈ {i_1, …, i_m} and (x_k − θ/2) ≤ 0 for all k ∉ {i_1, …, i_m}, then decide that the observations containing the signal are all those with indices in the set {i_1, …, i_m}.
4) If ∃ a set of integers {i_1, …, i_m}; m > αn: (x_k − θ/2) > 0 for all k ∈ {i_1, …, i_m} and (x_k − θ/2) ≤ 0 for all k ∉ {i_1, …, i_m}, then decide that the observations containing the signal are those whose indices k are contained in the set {i_1, …, i_m} and whose (x_k − θ/2) values are the αn highest in the set.

As to the probabilities of correct detection defined in Section 2, they take the following form here, where φ(x) and Φ(x) denote, respectively, the pdf and the cumulative distribution function of the zero mean and unit variance Gaussian random variable, and where αn is assumed again to be an integer:

$$
\begin{aligned}
P_d(i_1,\dots,i_m)&=\Phi^{n}\!\left(\frac{\theta}{2\sigma}\right); & 0\le m<\alpha n\\
P_d(i_1,\dots,i_m)&=\alpha n\int_{-\infty}^{\theta/2\sigma}\varphi(w)\,\Phi^{\alpha n-1}(w)\,\Phi^{\,n-\alpha n}\!\left(\frac{\theta}{\sigma}-w\right)\mathrm{d}w; & m=\alpha n
\end{aligned}
$$
(4)

Remarks
It is interesting to note here that, if it is known that the signal may appear as a set of bursty batches of unknown sizes, then the re-initialization sequential algorithm in [8] will sequentially detect the beginning and the ending of each batch with minimal complexity, no memory requirements, and with accuracy increasing with the signal-to-noise ratio and the size of each batch. Let T_n denote the value of the algorithm which detects the beginning of such a batch, upon the processing of the nth datum x_n from its beginning. Let W_n denote the value of the algorithm which detects the ending of the batch, upon processing the nth datum y_n from the beginning of its initialization. The whole algorithmic system then operates as follows, where η_0 and η_1 are two positive thresholds preselected to satisfy power and false-alarm tradeoffs:
1) Process the observed sequence x_1, …, x_n sequentially, starting with the algorithm {T_n}, whose values are updated as follows:

$$T_0=0;\qquad T_{n+1}=\max\!\left(0,\;T_n+x_{n+1}-\frac{\theta}{2}\right)$$

Stop the first time n such that T_n ≥ η_0 and declare n as the time when the signal batch begins. Then, switch immediately to the algorithm {W_n}, whose values are updated as follows, where time zero denotes the time when the algorithm begins and where y_n denotes the nth observed datum after the latter beginning:
2)

$$W_0=0;\qquad W_{n+1}=\max\!\left(0,\;W_n+\frac{\theta}{2}-y_{n+1}\right)$$

Stop the first time n such that W_n ≥ η_1 and declare that the signal batch has ended.
We now express a Corollary which will be useful in the computation of bounds for the probability of correct detection in (4). The expressions in the Corollary are derived from recursive relationships produced via integration by parts and can be proven easily by induction.
Corollary
The following equations hold:
$$\int_{-\infty}^{\infty}\varphi(w)\,\Phi^{k-1}(w)\,\mathrm{d}w=\frac{1}{k} \tag{5}$$

$$\int_{-\infty}^{0}\varphi(w)\,\Phi^{k-1}(w)\,\mathrm{d}w=\frac{1}{k\,2^{k}} \tag{6}$$

$$\int_{-\infty}^{\infty}\varphi(w)\,\Phi^{k-1}(w)\,\Phi^{n-k}(-w)\,\mathrm{d}w=\frac{(k-1)!\,(n-k)!}{n!} \tag{7}$$

$$\int_{-\infty}^{0}\varphi(w)\,\Phi^{k-1}(w)\,\Phi^{m-k}(-w)\,\mathrm{d}w=\sum_{i=0}^{m-k}\binom{m-k}{i}\frac{(-1)^{i}}{(k+i)\,2^{k+i}} \tag{8}$$

Lemma 1 below utilizes the results in the Corollary to express two lower bounds for the probability of correct detection in (4). The bound in (9) is relatively tight for low signal-to-noise ratio values θ/σ. The bound in (10) is relatively tight for high signal-to-noise ratio values θ/σ, instead.
Lemma 1
The probability of correct detection in (4) increases monotonically with increasing value of the signal-to-noise ratio θ/σ, converging to the value 1 as θ/σ reaches asymptotically large values. This probability is bounded from below as follows, assuming that αn is an integer, for simplicity in notation:

$$P_d(i_1,\dots,i_m)\ge\frac{(\alpha n)!\,(n-\alpha n)!}{n!}-\left[1-\Phi^{\alpha n}\!\left(\frac{\theta}{2\sigma}\right)\right]\Phi^{\,n-\alpha n}\!\left(-\frac{\theta}{2\sigma}\right);\quad m=\alpha n \tag{9}$$

$$P_d(i_1,\dots,i_m)\ge 2^{-\alpha n}\,\Phi^{\,n-\alpha n}\!\left(\frac{\theta}{\sigma}\right)+\frac{(\alpha n)!\,(n-\alpha n)!}{n!}-\alpha n\sum_{i=0}^{n-\alpha n}\binom{n-\alpha n}{i}\frac{(-1)^{i}}{(\alpha n+i)\,2^{\alpha n+i}}-\left[1-\Phi^{\alpha n}\!\left(\frac{\theta}{2\sigma}\right)\right]\Phi^{\,n-\alpha n}\!\left(-\frac{\theta}{2\sigma}\right);\quad m=\alpha n \tag{10}$$

Proof
Considering θ/σ asymptotically large, we first approximate Φ by 1 in Expression (4). We then use Expression (5) and consider again asymptotically large values of θ/σ. The result proves that the probability in (4) converges to 1 as θ/σ approaches infinity. To derive the bound in (9), we first bound Φ(θ/σ − w) from below by Φ(−w) in the integrand of Expression (4); we then use Equation (7) on the resulting integral expression. To derive the bound in (10), we bound Φ(θ/σ − w) from below by: 1) Φ(θ/σ), for negative w values, and 2) Φ(−w), for positive w values; we then use Expressions (6)-(8) from the Corollary.
Lemma 2 below states a probability of correct detection result for the case where the percentage of signal-including observations and the signal-to-noise ratio are both very small.
Lemma 2
Let
α → 0 and θ/σ → 0. Then, the probability of correct detection in (4) is of order αn·2^{−n}(θ/σ).
Proof
We break the integral in (4) into two parts: the part from −∞ to 0 and the part from 0 to θ/2σ. For the first part, we expand Φ^{n−αn}(θ/σ − w) via a Taylor series expansion to a first-order θ/σ approximation. For the second part, we approximate the integral by θ/2σ times the value of the integrand at zero. Then, we approximate 1 − α by 1. As a result, we then obtain:

$$P_d(i_1,\dots,i_m)\approx\alpha n\left[\int_{-\infty}^{0}\varphi(w)\,\Phi^{\alpha n-1}(w)\,\Phi^{\,n-\alpha n}(-w)\,\mathrm{d}w+(n-\alpha n)\frac{\theta}{\sigma}\int_{-\infty}^{0}\varphi^{2}(w)\,\Phi^{\alpha n-1}(w)\,\Phi^{\,n-\alpha n-1}(-w)\,\mathrm{d}w+\frac{\theta}{2\sigma}\,\varphi(0)\,2^{-(n-1)}\right];\quad m=\alpha n \tag{11}$$

We note that, in general, the probability of correct detection induced by the ML optimal detector is of the order of 2^{−n}: it increases exponentially to 1 with increasing signal-to-noise ratio θ/σ, while it also decreases exponentially to 0 with increasing sample size.
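The behavior noted in the remark can be observed empirically. The following is a Monte Carlo sketch with hypothetical parameters (not from the paper), estimating the probability that the Section 3 detector recovers the exact signal set when m = αn:

```python
import numpy as np

def estimate_pd(n, m, theta, sigma, trials=2000, seed=0):
    """Monte Carlo estimate of the probability of exact-support recovery
    for the constant-signal ML detector, with m = alpha*n signal locations."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        x = sigma * rng.standard_normal(n)
        x[:m] += theta                       # place the signal in the first m slots
        stat = x - theta / 2                 # detector statistic (x_k - theta/2)
        positive = np.flatnonzero(stat > 0)
        if positive.size <= m:
            decided = set(positive.tolist())
        else:                                # keep the m highest positive statistics
            decided = set(positive[np.argsort(stat[positive])[::-1]][:m].tolist())
        hits += decided == set(range(m))
    return hits / trials

# detection improves rapidly with the signal-to-noise ratio theta/sigma
low = estimate_pd(n=20, m=2, theta=1.0, sigma=1.0)
high = estimate_pd(n=20, m=2, theta=6.0, sigma=1.0)
```

The gap between `low` and `high` illustrates the exponential dependence on θ/σ discussed above.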
4. The Outlier Resistant Detector
In this section, we consider the case where extreme occasional outliers may be contaminating the Gaussian environment of Section 3. Then, instead of white and Gaussian, the noise environment is modeled as white with pdf belonging to a class F of density functions, defined as follows, for some given value ε in (0, 0.5), where ε represents the outlier contamination level:

F = {f : f = (1 − ε) f_0 + ε h; f_0 is the Gaussian zero mean pdf with standard deviation σ; h is any pdf}

The outlier resistant robust detector is then found based on the least favorable density f* in the class F above, where the Kullback-Leibler number between f* and its version shifted by the location parameter θ attains the infimum among the Kullback-Leibler numbers realized by all pdfs in F [6,7]. As found in [7], the log likelihood ratio in (2) is a truncated version of that used in Section 3. As a result, for θ > 0, the ML robust detector operates as follows:
Robust ML Detector
1) Given the sequence x_1, …, x_n, compute all z(x_j − θ/2); 1 ≤ j ≤ n, where:

$$z(x)=\begin{cases}-d, & x\le -d\\ x, & -d<x<d\\ d, & x\ge d\end{cases} \tag{12}$$

and where the truncation constant d is determined by the contamination level ε as the solution of:

$$\frac{\varepsilon}{1-\varepsilon}=\frac{2}{d\sqrt{2\pi}}\exp\!\left(-\frac{d^{2}}{2}\right)-2\Phi(-d) \tag{13}$$

2) If z(x_k − θ/2) ≤ 0 for all k, 1 ≤ k ≤ n, then decide that no observation contains signal.
3) If ∃ a set of integers {i_1, …, i_m}; 1 ≤ m ≤ αn: z(x_k − θ/2) > 0 for all k ∈ {i_1, …, i_m} and z(x_k − θ/2) ≤ 0 for all k ∉ {i_1, …, i_m}, then decide that the observations containing the signal are all those with indices in the set {i_1, …, i_m}.
4) If ∃ a set of integers {i_1, …, i_m}; m > αn: z(x_k − θ/2) > 0 for all k ∈ {i_1, …, i_m} and z(x_k − θ/2) ≤ 0 for all k ∉ {i_1, …, i_m}, then decide that the observations containing the signal are those whose indices k are contained in the set {i_1, …, i_m} and whose z(x_k − θ/2) values are the αn highest in the set.
We will denote by
P_d^{ro}(i_1, …, i_m) the probability of correct detection induced by the robust ML detector, given that the noise is Gaussian containing no outliers and given that the signal occurs at the observation indices {i_1, …, i_m}. Then, we can derive the expressions below, with some extra caution, assuming again that αn is an integer:

$$P_d^{ro}(i_1,\dots,i_m)=\Phi^{n}\!\left(\frac{\theta}{2\sigma}\right);\quad 0\le m<\alpha n$$

$$P_d^{ro}(i_1,\dots,i_m)=\alpha n\int_{\theta/2\sigma-d}^{\theta/2\sigma}\varphi(w)\,\Phi^{\alpha n-1}(w)\,\Phi^{\,n-\alpha n}\!\left(\frac{\theta}{\sigma}-w\right)\mathrm{d}w;\quad m=\alpha n \tag{14}$$

Comparing Expressions (4) and (14), we notice that the robust detector induces a lower probability of correct detection at the nominal Gaussian model for the case of m = αn, where the difference of the two probabilities decreases monotonically with decreasing contamination level ε. As we will see in the sequel, this loss of performance of the robust detector at the nominal Gaussian model is the price paid for gaining resistance to outliers.
Let there exist a small positive value ς, such that the noise per observation is zero mean Gaussian, with probability 1 − ς, and is an infinite positive value y, with probability ς. We express below the probabilities
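The truncation in (12) is straightforward to implement. Below is a minimal sketch of the clipped statistic, with a hypothetical truncation constant d; the point is that a single extreme outlier can shift each decision value by at most d:

```python
import numpy as np

def robust_statistic(x, theta, d):
    """Clipped (outlier resistant) statistic z(x_k - theta/2): the raw
    statistic is truncated to the interval [-d, d], so an arbitrarily
    extreme observation contributes a bounded decision value."""
    return np.clip(np.asarray(x, dtype=float) - theta / 2, -d, d)

# usage: an extreme outlier (x = 1000) is capped at d instead of dominating
z = robust_statistic([0.2, 1000.0, 1.8], theta=2.0, d=2.5)
```

The detector steps 2)-4) above then operate on these clipped values exactly as the Section 3 detector operates on (x_k − θ/2).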