Searching for a Target Whose Truncated Brownian Motion

Abstract

This paper presents a search model for a randomly moving target that follows a truncated Brownian motion. Conditions under which the expected value of the first meeting time between the searcher and the target is finite are given. We show the existence of an optimal search strategy that minimizes this first meeting time.

Share and Cite:

Teamah, A., El-Hadidy, M. and El-Ghoul, M. (2017) Searching for a Target Whose Truncated Brownian Motion. Applied Mathematics, 8, 786-798. doi: 10.4236/am.2017.86061.

1. Introduction

Detecting holes in underwater oil pipelines helps prevent disasters such as the one that occurred in the Gulf of Mexico in April 2010. The linear search model is one of the most interesting search models used to detect such holes.

Searching for a Brownian target on the real line has been studied by El-Rayes et al. [1], who considered a searcher starting the search process from the origin. They found the conditions that make the expected value of the first meeting time between the searcher and the target finite, and they showed the existence of an optimal search plan that minimizes this expected value. Mohamed et al. [2] studied this problem for a Brownian target on one of n intersected real lines, where information about the target position is not available to the searchers at any time. Recently, El-Hadidy [3] studied this search problem for a d-dimensional Brownian target that moves randomly in d-dimensional space.

The main contribution of this paper is the study of the search problem for a one-dimensional truncated Brownian target, where the searcher moves with a linear motion. This kind of search problem has various recent applications in physics, such as finding very small moving objects (viruses and bacteria) or very large ones (stars and planets). We aim to give the conditions that make the expected value of the first meeting time between the searcher and the target finite, and to show the existence of an optimal search plan that minimizes it.

This paper is organized as follows. In Section 2, we introduce the problem. In Section 3, the finite search plan and the expected value of the first meeting time are discussed. In Section 4, we show the existence of an optimal search path. In Section 5, we give an application that calculates the expected value of the first meeting time between the searcher and the target.

2. Problem Formulation

The problem under study can be formally described as follows. A searcher starts the searching process from the origin of the real line and moves continuously along the line in both directions of the starting point. The search is conducted in the following manner: start at $H_0 = 0$ and go to the left (right) as far as $H_1$; then turn back to explore the right (left) part of $H_0 = 0$ as far as $H_2$; retrace the steps again to explore the left (right) part of $H_1$ as far as $H_3$, and so on. The target is assumed to move randomly on the real line according to a one-dimensional truncated Brownian motion. The initial position of the target is unknown, but the searcher knows its probability distribution at time 0. The process $\{W(t),\, t \in \mathbb{R}^+\}$ that governs the target's motion has stationary independent increments, and for any time interval $(t_1, t_2)$ the increment $W(t_2) - W(t_1)$ is truncated normally distributed; such a process is called a truncated Brownian motion with drift $\mu$ and variance $\sigma^2$. A search plan with speed $V$, which the searcher follows, is a function

$\varphi : \mathbb{R}^+ \to \mathbb{R}$ such that:

$|\varphi(t_1) - \varphi(t_2)| \le V |t_1 - t_2|$ for all $t_1, t_2 \in \mathbb{R}^+$,

where $\mathbb{R}$ is the set of real numbers, $V$ is a constant in $\mathbb{R}^+$, and $\varphi(0) = 0$. The first meeting time $\tau_\varphi$ is a random variable valued in $\mathbb{R}^+$ defined as:

$\tau_\varphi = \inf\{t : \varphi(t) = X_0 + W(t)\},$

where $X_0$ is a random variable that follows a truncated normal distribution, is independent of $W(t)$, and represents the initial position of the target. The aim of the searcher is to minimize the expected value of $\tau_\varphi$.

Let $\Phi_V(t)$ be the set of all search plans with speed $V$. The problem is to find a search plan $\varphi \in \Phi_V(t)$ such that $E(\tau_\varphi) < \infty$; in this case we call $\varphi$ a finite search plan. If

$E(\tau_{\varphi^*}) \le E(\tau_\varphi)$ for all $\varphi \in \Phi_V(t)$,

then we call $\varphi^*$ an optimal search plan.

Let $\lambda$ and $\theta$ be positive integers greater than one and $v$ be a rational number such that:

1) $v > |\mu|$;

2) $c = v(\theta - 1)/(\theta + 1) > |\mu|$.

We shall define two sequences $\{G_i\}_{i \ge 0}$, $\{H_i\}_{i \ge 0}$ and a search plan with speed $v$ as follows:

$G_i = \lambda(\theta^i - 1), \qquad H_i = (-1)^{i+1} c\,[G_i + 1 + (-1)^{i+1}].$

For any $t \in \mathbb{R}^+$, if $G_i \le t \le G_{i+1}$, then $\varphi(t) = H_i + (-1)^i (t - G_i)\, v$.
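As a quick illustrative sketch (not part of the original paper), the sequences $G_i$, $H_i$ and the piecewise plan $\varphi(t)$ above can be computed directly; the parameter values $\lambda = 3$, $\theta = 2$, $v = 10$ anticipate the application in Section 5.

```python
import math

def make_plan(lam, theta, v):
    """Build G_i, H_i and the zigzag search plan phi(t) as defined above."""
    c = v * (theta - 1) / (theta + 1)          # c = v(theta - 1)/(theta + 1)
    G = lambda i: lam * (theta ** i - 1)       # G_i = lambda(theta^i - 1)
    H = lambda i: (-1) ** (i + 1) * c * (G(i) + 1 + (-1) ** (i + 1))

    def phi(t):
        # locate i with G_i <= t <= G_{i+1}, then evaluate the linear piece
        i = 0
        while not (G(i) <= t <= G(i + 1)):
            i += 1
        return H(i) + (-1) ** i * (t - G(i)) * v

    return G, H, phi, c

G, H, phi, c = make_plan(lam=3, theta=2, v=10)
print([G(i) for i in range(5)])   # [0, 3, 9, 21, 45]
print(H(0), H(1))                 # H_0 = 0, H_1 = c(G_1 + 2)
```

The searcher's position $\varphi(t)$ thus moves with constant speed $v$ on each interval $[G_i, G_{i+1}]$, alternating direction with the sign $(-1)^i$.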

Note on the truncated normal distribution:

If $Y \sim N(\mu, \sigma^2)$, then the probability density function of the doubly truncated variable $X$ on $[a, b]$ is given by:

$f_X(x) = \dfrac{\frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)}{F\!\left(\frac{b-\mu}{\sigma}\right) - F\!\left(\frac{a-\mu}{\sigma}\right)}\; I_{[a,b]}(x) \quad \text{for } a \le x \le b,$

where $F$ is the standard normal cumulative distribution function and $I_{[a,b]}(x)$ is the indicator function of $[a, b]$.

The expected value of the truncated normal distribution is given by:

$\mu_T = E(X) = \mu + \left[\dfrac{f\!\left(\frac{a-\mu}{\sigma}\right) - f\!\left(\frac{b-\mu}{\sigma}\right)}{F\!\left(\frac{b-\mu}{\sigma}\right) - F\!\left(\frac{a-\mu}{\sigma}\right)}\right]\sigma.$

The variance of the truncated normal distribution is given by:

$\sigma_T^2 = \mathrm{var}(X) = \sigma^2\left[1 + \dfrac{\frac{a-\mu}{\sigma} f\!\left(\frac{a-\mu}{\sigma}\right) - \frac{b-\mu}{\sigma} f\!\left(\frac{b-\mu}{\sigma}\right)}{F\!\left(\frac{b-\mu}{\sigma}\right) - F\!\left(\frac{a-\mu}{\sigma}\right)}\right] - \sigma^2\left[\dfrac{f\!\left(\frac{a-\mu}{\sigma}\right) - f\!\left(\frac{b-\mu}{\sigma}\right)}{F\!\left(\frac{b-\mu}{\sigma}\right) - F\!\left(\frac{a-\mu}{\sigma}\right)}\right]^2,$

where $f$ is the standard normal density.
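The two moment formulas above can be cross-checked numerically (an illustrative sketch, not from the paper): below, the closed forms are compared against direct midpoint integration of the truncated density; the parameter values $\mu = 2$, $\sigma = 30$, $a = -2$, $b = 2$ anticipate Section 5.

```python
import math

def std_pdf(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def std_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def trunc_moments(mu, sigma, a, b):
    """Mean and variance of N(mu, sigma^2) truncated to [a, b] (formulas above)."""
    al, be = (a - mu) / sigma, (b - mu) / sigma
    Z = std_cdf(be) - std_cdf(al)
    mean = mu + sigma * (std_pdf(al) - std_pdf(be)) / Z
    var = sigma ** 2 * (1 + (al * std_pdf(al) - be * std_pdf(be)) / Z) \
        - sigma ** 2 * ((std_pdf(al) - std_pdf(be)) / Z) ** 2
    return mean, var

def numeric_moments(mu, sigma, a, b, n=20000):
    """Same moments by midpoint integration of the truncated pdf."""
    Z = std_cdf((b - mu) / sigma) - std_cdf((a - mu) / sigma)
    h = (b - a) / n
    xs = [a + (k + 0.5) * h for k in range(n)]
    pdf = [std_pdf((x - mu) / sigma) / (sigma * Z) for x in xs]
    m1 = sum(x * p for x, p in zip(xs, pdf)) * h
    m2 = sum(x * x * p for x, p in zip(xs, pdf)) * h
    return m1, m2 - m1 * m1

mean_f, var_f = trunc_moments(mu=2, sigma=30, a=-2, b=2)
mean_n, var_n = numeric_moments(mu=2, sigma=30, a=-2, b=2)
```

The two routes agree to high precision, confirming the stated closed forms.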

3. Existence of a Finite Search Plan

In this section we aim to find the conditions that make the search plan finite and minimize the expected value of the first meeting time.

Theorem 3.1: Let $\upsilon$ be the measure defined on $\mathbb{R}$ by $X_0$ and let $\varphi(t)$ be the search plan defined above. Then the expectation $E(\tau_\varphi)$ is finite if

$\int_a^0 \sum_{i=1}^{\infty} \theta^{2i}\, p\{\tilde{\psi}(G_{2i}) < x\}\, \upsilon(dx), \qquad \int_0^b \sum_{i=1}^{\infty} \theta^{2i+1}\, p\{\psi(G_{2i+1}) > x\}\, \upsilon(dx)$

are finite, where $\tilde{\psi}(G_{2i}) = W(G_{2i}) + cG_{2i}$ and $\psi(G_{2i+1}) = W(G_{2i+1}) - cG_{2i+1}$.

Proof:

The continuity of $\varphi(t)$ and $W(t)$ implies that if $X_0$ is positive, then $X_0 + W(t)$ stays greater than $\varphi(t)$ until the first meeting; likewise, if $X_0$ is negative, then $X_0 + W(t)$ stays smaller than $\varphi(t)$ until the first meeting. Hence for any $i \ge 0$,

$p(\tau_\varphi > G_{2i+1}) \le \int_a^0 p(X_0 + W(G_{2i}) < H_{2i} \mid X_0 = x)\, \upsilon(dx) + \int_0^b p(X_0 + W(G_{2i+1}) > H_{2i+1} \mid X_0 = x)\, \upsilon(dx)$, where $a < 0$, $b > 0$. (1)

Using the notation $\tilde{\psi}(G_{2i}) = W(G_{2i}) + cG_{2i}$, we obtain: the event $W(G_{2i}) - H_{2i} < -X_0 = x$ becomes

$W(G_{2i}) - \big[(-1)^{2i+1} c\,(G_{2i} + 1 + (-1)^{2i+1})\big] < x,$

which leads to $W(G_{2i}) - (-1)c\,[G_{2i} + 1 - 1] = W(G_{2i}) + cG_{2i} = \tilde{\psi}(G_{2i}) < x$.

Similarly, by using the notation $\psi(G_{2i+1}) = W(G_{2i+1}) - cG_{2i+1}$, we get

$p(\tau_\varphi > G_{2i+1}) \le \int_a^0 p(\tilde{\psi}(G_{2i}) < x)\, \upsilon(dx) + \int_0^b p(\psi(G_{2i+1}) > x)\, \upsilon(dx).$ (2)

Similarly, for any $i > 0$,

$p(\tau_\varphi > G_{2i}) \le \int_a^0 p(\tilde{\psi}(G_{2i}) < x)\, \upsilon(dx) + \int_0^b p(\psi(G_{2i-1}) > x)\, \upsilon(dx).$ (3)

But we have

$E(\tau_\varphi) = \int_0^\infty p(\tau_\varphi > t)\, dt = \sum_{i=0}^\infty \int_{G_i}^{G_{i+1}} p(\tau_\varphi > t)\, dt \le \sum_{i=0}^\infty \int_{G_i}^{G_{i+1}} p(\tau_\varphi > G_i)\, dt = \sum_{i=0}^\infty (G_{i+1} - G_i)\, p(\tau_\varphi > G_i).$ (4)

Since $G_i = \lambda(\theta^i - 1)$ and $G_{i+1} = \lambda(\theta^{i+1} - 1)$,

$G_{i+1} - G_i = \lambda(\theta^{i+1} - 1) - \lambda(\theta^i - 1) = \lambda(\theta^{i+1} - \theta^i) = \lambda\theta^i(\theta - 1).$

Applying (2) and (3) to the terms of (4), for even $k$:

$E(\tau_\varphi) \le \sum_{i=0}^\infty \lambda\theta^i(\theta - 1)\, p(\tau_\varphi > G_i) = \lambda(\theta - 1) \sum_{i=0}^\infty \theta^i\, p(\tau_\varphi > G_i)$
$= \lambda(\theta - 1)\big[p(\tau_\varphi > 0) + \theta\, p(\tau_\varphi > G_1) + \theta^2 p(\tau_\varphi > G_2) + \theta^3 p(\tau_\varphi > G_3) + \cdots\big]$
$= \lambda(\theta - 1)\Big[p(\tau_\varphi > 0) + \theta\, p(\tau_\varphi > G_1)$
$\quad + \theta^2 \big\{\textstyle\int_a^0 p(\tilde{\psi}(G_2) < x)\, \upsilon(dx) + \int_0^b p(\psi(G_1) > x)\, \upsilon(dx)\big\}$
$\quad + \theta^3 \big\{\textstyle\int_a^0 p(\tilde{\psi}(G_2) < x)\, \upsilon(dx) + \int_0^b p(\psi(G_3) > x)\, \upsilon(dx)\big\}$
$\quad + \theta^4 \big\{\textstyle\int_a^0 p(\tilde{\psi}(G_4) < x)\, \upsilon(dx) + \int_0^b p(\psi(G_3) > x)\, \upsilon(dx)\big\} + \cdots$
$\quad + \theta^k \big\{\textstyle\int_a^0 p(\tilde{\psi}(G_k) < x)\, \upsilon(dx) + \int_0^b p(\psi(G_{k-1}) > x)\, \upsilon(dx)\big\}$
$\quad + \theta^{k+1} \big\{\textstyle\int_a^0 p(\tilde{\psi}(G_k) < x)\, \upsilon(dx) + \int_0^b p(\psi(G_{k+1}) > x)\, \upsilon(dx)\big\}$
$\quad + \theta^{k+2} \big\{\textstyle\int_a^0 p(\tilde{\psi}(G_{k+2}) < x)\, \upsilon(dx) + \int_0^b p(\psi(G_{k+1}) > x)\, \upsilon(dx)\big\}$
$\quad + \theta^{k+3} \big\{\textstyle\int_a^0 p(\tilde{\psi}(G_{k+2}) < x)\, \upsilon(dx) + \int_0^b p(\psi(G_{k+3}) > x)\, \upsilon(dx)\big\} + \cdots\Big].$

Then, collecting equal integrals:

$E(\tau_\varphi) \le \lambda(\theta - 1)\Big[p(\tau_\varphi > 0) + \theta\, p(\tau_\varphi > G_1) + \theta^2\textstyle\int_a^0 p(\tilde{\psi}(G_2) < x)\, \upsilon(dx) + \theta^2\int_0^b p(\psi(G_1) > x)\, \upsilon(dx) + \theta^3\int_a^0 p(\tilde{\psi}(G_2) < x)\, \upsilon(dx) + \theta^3\int_0^b p(\psi(G_3) > x)\, \upsilon(dx) + \cdots\Big]$
$= \lambda(\theta - 1)\Big[p(\tau_\varphi > 0) + \theta\, p(\tau_\varphi > G_1) + \theta^2\textstyle\int_0^b p(\psi(G_1) > x)\, \upsilon(dx) + (\theta^2 + \theta^3)\int_a^0 p(\tilde{\psi}(G_2) < x)\, \upsilon(dx) + (\theta^3 + \theta^4)\int_0^b p(\psi(G_3) > x)\, \upsilon(dx) + (\theta^4 + \theta^5)\int_a^0 p(\tilde{\psi}(G_4) < x)\, \upsilon(dx) + \cdots + (\theta^k + \theta^{k+1})\int_a^0 p(\tilde{\psi}(G_k) < x)\, \upsilon(dx) + (\theta^{k+1} + \theta^{k+2})\int_0^b p(\psi(G_{k+1}) > x)\, \upsilon(dx) + \cdots\Big].$

Then, since $\theta^k + \theta^{k+1} = \theta^k(\theta + 1)$:

$E(\tau_\varphi) \le \lambda(\theta - 1)\Big[p(\tau_\varphi > 0) + \theta\, p(\tau_\varphi > G_1) + \theta^2\textstyle\int_0^b p(\psi(G_1) > x)\, \upsilon(dx) + \theta^2(\theta + 1)\int_a^0 p(\tilde{\psi}(G_2) < x)\, \upsilon(dx) + \theta^4(\theta + 1)\int_a^0 p(\tilde{\psi}(G_4) < x)\, \upsilon(dx) + \cdots + \theta^k(\theta + 1)\int_a^0 p(\tilde{\psi}(G_k) < x)\, \upsilon(dx) + \theta^3(\theta + 1)\int_0^b p(\psi(G_3) > x)\, \upsilon(dx) + \theta^5(\theta + 1)\int_0^b p(\psi(G_5) > x)\, \upsilon(dx) + \cdots + \theta^{k+1}(\theta + 1)\int_0^b p(\psi(G_{k+1}) > x)\, \upsilon(dx) + \cdots\Big].$

This leads to:

$E(\tau_\varphi) \le \lambda(\theta - 1)\Big[g + (\theta + 1)\Big(\int_a^0 M(x)\, \upsilon(dx) + \int_0^b B(x)\, \upsilon(dx)\Big)\Big],$ (5)

where:

$g = p(\tau_\varphi > 0) + \theta\, p(\tau_\varphi > G_1) + \theta^2 \int_0^b p(\psi(G_1) > x)\, \upsilon(dx),$

$M(x) = \sum_{i=1}^\infty \theta^{2i}\, p(\tilde{\psi}(G_{2i}) < x),$

$B(x) = \sum_{i=1}^\infty \theta^{2i+1}\, p(\psi(G_{2i+1}) > x).$

Lemma 3.1: Let $a_n \ge 0$ for $n \ge 0$, with $a_{n+1} \le a_n$, and let $\{d_n\}$, $n \ge 0$, be a strictly increasing sequence of integers with $d_0 = 0$. Then for any $k \ge 0$ (see [4]):

$\sum_{n=k}^\infty (d_{n+1} - d_n)\, a_{d_{n+1}} \le \sum_{n=d_k}^\infty a_n \le \sum_{n=k}^\infty (d_{n+1} - d_n)\, a_{d_n}.$

Lemma 3.2: If $c \ge \mu$, where $\mu$ is the drift of $W(t)$ and $c$ is a constant, then for any $t > 0$ and for some $0 < \varepsilon < 1$,

$p\{W(t) \ge ct\} \le \frac{1}{2}\, \frac{1}{F(b) - F(a)}\, \varepsilon^t.$

Proof:

$p\{W(t) \ge ct\} = p\{\sigma\sqrt{t}\, X + \mu t \ge ct\} = p\Big\{X \ge \frac{(c - \mu)\sqrt{t}}{\sigma}\Big\} = \int_k^b \frac{\frac{1}{\sqrt{2\pi}} \exp(-\frac{x^2}{2})}{F(b) - F(a)}\, dx, \qquad k = \frac{(c - \mu)\sqrt{t}}{\sigma}.$

Hence:

$\int_k^b \frac{\frac{1}{\sqrt{2\pi}} \exp(-\frac{x^2}{2})}{F(b) - F(a)}\, dx = \frac{1}{F(b) - F(a)} \cdot \frac{1}{2}\Big(\mathrm{Erf}\big(\tfrac{b}{\sqrt{2}}\big) - \mathrm{Erf}\big(\tfrac{k}{\sqrt{2}}\big)\Big).$

And then

$\int_k^\infty \frac{1}{\sqrt{2\pi}} \exp\big(-\tfrac{x^2}{2}\big)\, dx = \frac{1}{2}\mathrm{Erfc}\big(\tfrac{k}{\sqrt{2}}\big) \le \frac{1}{2}\varepsilon^t,$ see [1],

where $\mathrm{Erfc}$ is the complementary error function, an entire function defined by

$\mathrm{Erfc}(x) = 1 - \mathrm{Erf}(x) = \frac{2}{\sqrt{\pi}} \int_x^\infty e^{-r^2}\, dr,$ so that $\mathrm{Erfc}\big(\tfrac{k}{\sqrt{2}}\big) = 1 - \mathrm{Erf}\big(\tfrac{k}{\sqrt{2}}\big).$

Then: $\frac{1}{2}\Big(\mathrm{Erf}\big(\tfrac{b}{\sqrt{2}}\big) - \mathrm{Erf}\big(\tfrac{k}{\sqrt{2}}\big)\Big) = \frac{1}{2}\Big(1 - \mathrm{Erfc}\big(\tfrac{b}{\sqrt{2}}\big) - 1 + \mathrm{Erfc}\big(\tfrac{k}{\sqrt{2}}\big)\Big) = \frac{1}{2}\Big(\mathrm{Erfc}\big(\tfrac{k}{\sqrt{2}}\big) - \mathrm{Erfc}\big(\tfrac{b}{\sqrt{2}}\big)\Big) \le \frac{1}{2}\mathrm{Erfc}\big(\tfrac{k}{\sqrt{2}}\big) \le \frac{1}{2}\varepsilon^t.$ (6)

Then: $\frac{1}{F(b) - F(a)} \cdot \frac{1}{2}\Big(\mathrm{Erfc}\big(\tfrac{k}{\sqrt{2}}\big) - \mathrm{Erfc}\big(\tfrac{b}{\sqrt{2}}\big)\Big) \le \frac{1}{2}\, \frac{1}{F(b) - F(a)}\, \varepsilon^t.$ (7)

Hence: $p\{W(t) \ge ct\} \le \dfrac{\varepsilon^t}{2(F(b) - F(a))},$ (8)

where $\varepsilon = e^{-(c - \mu)^2 / (2\sigma^2)}$ and $X$ follows the truncated standardized normal distribution.
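The key step $\frac{1}{2}\mathrm{Erfc}(k/\sqrt{2}) \le \frac{1}{2}\varepsilon^t$ can be checked numerically: it follows from the classical bound $\mathrm{erfc}(z) \le e^{-z^2}$ for $z \ge 0$, since $e^{-k^2/2} = \varepsilon^t$. A small illustrative sketch (not from the paper), with the parameter values used later in Section 5:

```python
import math

c, mu, sigma = 10 / 3, 2, 30                        # c = v(theta-1)/(theta+1), v=10, theta=2
eps = math.exp(-(c - mu) ** 2 / (2 * sigma ** 2))   # epsilon = e^{-(c-mu)^2 / (2 sigma^2)}

for t in [1, 3, 9, 21, 45, 93, 381]:
    k = (c - mu) * math.sqrt(t) / sigma             # k = (c - mu) sqrt(t) / sigma
    lhs = 0.5 * math.erfc(k / math.sqrt(2))
    rhs = 0.5 * eps ** t                            # note eps**t == exp(-k*k/2)
    assert lhs <= rhs + 1e-15, (t, lhs, rhs)
print("bound holds; epsilon =", round(eps, 5))
```

With these parameters $\varepsilon \approx 0.99901$, so the bound decays slowly but geometrically in $t$.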

Lemma 3.3: If $\mu \ne 0$, $x_1 \ge x_2$, $t \ge \max(x_1/\mu,\, x_2/\mu)$, $a < 0$, $b > 0$, then

$p\{x_2 \le W(t) \le x_1\}$ is non-increasing in $t$.

Proof: Since

$p(x_2 \le W(t) \le x_1) = p(x_2 \le \sigma\sqrt{t}\, X + \mu t \le x_1) = p\Big(\frac{x_2 - \mu t}{\sigma\sqrt{t}} \le X \le \frac{x_1 - \mu t}{\sigma\sqrt{t}}\Big),$

if $\mu < 0$, then $\frac{x_2 - \mu t}{\sigma\sqrt{t}} \ge 0$ and $\frac{d}{dt}\big(\frac{x_2 - \mu t}{\sigma\sqrt{t}}\big) \ge 0$, which implies that $p(x_2 \le W(t) \le x_1)$ is non-increasing; likewise, if $\mu > 0$, then $\frac{x_1 - \mu t}{\sigma\sqrt{t}} \le 0$, which implies that $p(x_2 \le W(t) \le x_1)$ is non-increasing.

Lemma 3.4: If $W(n) = \sum_{i=1}^n X_i$, $n \ge 1$, where $\{X_i\}$ is a sequence of independent, identically distributed random variables (i.i.d.r.v.) such that $X_i$ is truncated normally distributed with parameters $\mu - c$ and $\sigma^2$, then

$q(j, j+1) = \sum_{n=0}^\infty p\{-(j+1) < W(n) \le -j\}$

satisfies the conditions of the Renewal theorem, see [5].

Theorem 3.2: The chosen search plan satisfies

$M(x) \le \tilde{L}(|x|)$ and $B(x) \le L(|x|),$

where $L(|x|)$ and $\tilde{L}(|x|)$ are linear functions.

Proof: If $x \ge 0$, then $B(x) \le B(0)$, where

$B(0) = \sum_{i=1}^\infty \theta^{2i+1}\, p(\psi(G_{2i+1}) > 0), \qquad B(x) = \sum_{i=1}^\infty \theta^{2i+1}\, p(\psi(G_{2i+1}) > x).$

For $x < 0$ we have $B(x) = B(0) + \sum_{i=1}^\infty \theta^{2i+1}\, p(x < \psi(G_{2i+1}) \le 0)$.

Lemma 3.2 states that $p\{W(t) \ge ct\} \le \frac{1}{2}\frac{1}{F(b) - F(a)}\varepsilon^t$; hence

$p(W(t) - ct \ge 0) \le \frac{1}{2}\frac{1}{F(b) - F(a)}\varepsilon^t \;\Longrightarrow\; p(\psi(t) \ge 0) \le \frac{1}{2}\frac{1}{F(b) - F(a)}\varepsilon^t,$

where $\psi(t) = W(t) - ct$, so $B(0) < \frac{1}{F(b) - F(a)} \sum_{i=1}^\infty \theta^{2i+1}\varepsilon^{G_{2i+1}}$, with $0 < \varepsilon < 1$.

We define the following:

1) $\psi(n) = \sum_{i=1}^n y_i$, where $\{y_i\}_{i \ge 1}$ is a sequence of i.i.d.r.v. with $y_i$ truncated normally distributed with parameters $\mu - c$ and $\sigma^2$;

2) $d_n = G_{2n+1} = \lambda(\theta^{2n+1} - 1)$;

3) we choose $m$ such that $d_m \ge \max(0, x/\mu)$; refer to Lemma 3.3, putting $x_1 = 0$ and $x_2 = x$;

4) $a(n) = p(x < \psi(n) \le 0) = \sum_{j=0}^{|x|} p(-(j+1) < \psi(n) \le -j)$;

5) $\alpha = \theta^2 / \lambda(\theta^2 - 1)$;

6) $U(j, j+1) = \sum_{n=0}^\infty p(-(j+1) < \psi(n) \le -j)$.

If $n > d_m$, then by Lemma 3.3 $a(n)$ is non-increasing, and applying Lemma 3.1 we obtain:

$B(x) - B(0) = \sum_{n=1}^m \theta^{2n+1} a(d_n) + \alpha \sum_{n=m+1}^\infty (d_n - d_{n-1})\, a(d_n) \le \sum_{n=1}^m \theta^{2n+1} + \alpha \sum_{n=d_m}^\infty a(n) \le \frac{\theta^3(\theta^{2m} - 1)}{\theta^2 - 1} + \alpha \sum_{j=0}^{|x|} U(j, j+1).$

$U(j, j+1)$ satisfies the conditions of the Renewal theorem (by Lemma 3.4); hence $U(j, j+1)$ is bounded for all $j$ by a constant, so

$B(x) \le B(0) + S_1 + S_2 |x| = L(|x|).$

We can prove $M(x) \le \tilde{L}(|x|)$ in a similar way.
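As an illustrative numerical aside (not in the original proof), the series $\sum_i \theta^{2i+1}\varepsilon^{G_{2i+1}}$ bounding $B(0)$ indeed converges: $G_{2i+1}$ grows geometrically while $0 < \varepsilon < 1$, so the terms eventually decay super-geometrically. With the Section 5 parameter values:

```python
import math

lam, theta, v, mu, sigma = 3, 2, 10, 2, 30
c = v * (theta - 1) / (theta + 1)
eps = math.exp(-(c - mu) ** 2 / (2 * sigma ** 2))   # 0 < eps < 1

terms = []
for i in range(1, 15):
    G = lam * (theta ** (2 * i + 1) - 1)            # G_{2i+1} = lambda(theta^{2i+1} - 1)
    terms.append(theta ** (2 * i + 1) * eps ** G)   # theta^{2i+1} * eps^{G_{2i+1}}

partial = sum(terms)
print(partial)    # the partial sums stabilize: the tail terms vanish rapidly
```

The first few terms grow, but once $G_{2i+1}\ln(1/\varepsilon)$ dominates $(2i+1)\ln\theta$ the terms collapse toward zero, so the bound on $B(0)$ is finite.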

Lemma 3.5:

$E|W(T)| \le \sigma\sqrt{E(T)} + |\mu|\, E(T),$

where $E$ stands for the expected value and $\sigma^2$ is the variance.

Proof

Let $X(t)$ be a standard truncated Brownian motion and let $T_n = \min(T, n)$ be a bounded stopping time of $X(t)$. Since $X^2(t) - t$ is a continuous martingale, $E(X^2(T_n) - T_n) = E(X^2(0)) = 0$, see [5].

But, by Fatou's lemma,

$E(X^2(T)) = E\big(\liminf_{n} X^2(T_n)\big) \le \liminf_{n} E(X^2(T_n)) = \liminf_{n} E(T_n).$ (9)

Hence $E(X^2(T)) \le E(T)$ by the monotonicity of $T_n$. Since $W(t) = \sigma X(t) + \mu t$, then

$E|W(T)| \le \sigma E|X(T)| + |\mu|\, E(T).$ (10)

But $E|X(T)| \le \sqrt{E(X^2(T))} \le \sqrt{E(T)}$.

Then: $E|W(T)| \le \sigma\sqrt{E(T)} + |\mu|\, E(T).$ (11)
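For a deterministic time $T = t$ the bound of Lemma 3.5 reads $E|W(t)| \le \sigma\sqrt{t} + |\mu| t$. A small numerical check (an illustration under the assumption that $X$ is a standard normal truncated to a symmetric interval $[-b, b]$, so that $E(X^2) \le 1$; the left side is evaluated by midpoint integration):

```python
import math

def std_pdf(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def std_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def E_abs_W(t, mu, sigma, b=2.0, n=20000):
    """E|W(t)| for W(t) = sigma*sqrt(t)*X + mu*t, X std normal truncated to [-b, b]."""
    Z = std_cdf(b) - std_cdf(-b)
    h = 2 * b / n
    total = 0.0
    for k in range(n):
        x = -b + (k + 0.5) * h
        total += abs(sigma * math.sqrt(t) * x + mu * t) * std_pdf(x) / Z * h
    return total

mu, sigma = 2, 30
for t in [1.0, 3.0, 9.0, 21.0]:
    lhs = E_abs_W(t, mu, sigma)
    rhs = sigma * math.sqrt(t) + abs(mu) * t      # Lemma 3.5 bound with E(T) = t
    assert lhs <= rhs, (t, lhs, rhs)
print("Lemma 3.5 bound verified for deterministic T")
```

The inequality holds with room to spare here, since $E|X| < 1$ for the symmetric truncation.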

4. Existence of an Optimal Path

Definition

Let $\varphi_n \in \Phi_V(t)$, $n \ge 1$, be a sequence of search plans. We say that $\varphi_n$ converges to $\varphi$ as $n$ tends to $\infty$ if and only if $\varphi_n(t)$ converges to $\varphi(t)$ uniformly on every compact subset of $\mathbb{R}^+$.

Note that the set $\Phi_V(t)$ constitutes an equicontinuous family of functions, and $|\varphi_n(t)| \le V|t|$ for all $n$. By the theorem of Ascoli (see [6]) there exists a subsequence $\varphi_{n_k}$ which converges to a continuous function $\varphi$, and it is easy to verify that this $\varphi$ is contained in $\Phi_V(t)$; that is, the set $\Phi_V(t)$ is sequentially compact.

Theorem 4.1

For any $t \in \mathbb{R}^+$, let $W(t)$ be a truncated Brownian process. The mapping

$\varphi \mapsto E(\tau_\varphi) \in \mathbb{R}^+$

is lower semi-continuous on $\Phi_V(t)$.

Proof: Let $w$ be a sample point corresponding to the sample path $\psi(t)$ of $X_0 + W(t)$, and let $(\varphi_n(t))_{n \ge 1}$ be a sequence of search plans which converges to $\varphi \in \Phi_V(t)$.

Given $t \in \mathbb{R}^+$, we define for any $n \ge 1$:

$B_n(t) = \big\{w : \min_{0 \le x \le t} |\psi(x) - \varphi_n(x)| > 0\big\}$ and $B(t) = \big\{w : \min_{0 \le x \le t} |\psi(x) - \varphi(x)| > 0\big\}.$

Let $w \in B(t)$. Since $\varphi_n(t)$ converges uniformly on $[0, t]$ to $\varphi$, there exists an integer $n(w)$ such that for any $0 \le x \le t$ and any $n > n(w)$,

$|\varphi_n(x) - \varphi(x)| < \varepsilon = \frac{1}{2} \min_{0 \le x \le t} |\psi(x) - \varphi(x)|.$

Hence for any $0 \le x \le t$ and any $n > n(w)$,

$|\psi(x) - \varphi_n(x)| \ge |\psi(x) - \varphi(x)| - |\varphi(x) - \varphi_n(x)| \ge 2\varepsilon - \varepsilon = \varepsilon > 0.$

Consequently $w \in B_n(t)$ for all $n \ge n(w)$, and hence $B(t) \subseteq \liminf_{n} B_n(t)$.

Now, by Fatou’s lemma

$\int_0^\infty p(B(t))\, dt \le \int_0^\infty p\big(\liminf_{n \to \infty} B_n(t)\big)\, dt \le \int_0^\infty \liminf_{n \to \infty} p(B_n(t))\, dt \le \liminf_{n \to \infty} \int_0^\infty p(B_n(t))\, dt.$ (12)

Since the sample paths are continuous, $B_n(t) = (\tau_{\varphi_n} > t)$ and $B(t) = (\tau_\varphi > t)$, so (12) gives $E(\tau_\varphi) \le \liminf_{n \to \infty} E(\tau_{\varphi_n})$; that is, the mapping is lower semi-continuous. It is known that a lower semi-continuous function over a sequentially compact space attains its minimum.

5. Application

Let a target move according to a one-dimensional truncated Brownian motion, and let a searcher start the search from the origin of the line, moving continuously along the line in both directions of the starting point. We want to calculate $E(\tau_\varphi)$, which by (5) satisfies:

$E(\tau_\varphi) \le \lambda(\theta - 1)\Big[g + (\theta + 1)\Big(\int_a^0 M(x)\, \upsilon(dx) + \int_0^b B(x)\, \upsilon(dx)\Big)\Big] = \Omega(\lambda, \theta, g, a, b).$

Case 1: Take the first two terms of each series:

$M_1(x) = \sum_{i=1}^2 \theta^{2i}\, p(\tilde{\psi}(G_{2i}) < x), \qquad B_1(x) = \sum_{i=1}^2 \theta^{2i+1}\, p(\psi(G_{2i+1}) > x),$

$g = p(\tau_\varphi > 0) + \theta\, p(\tau_\varphi > G_1) + \theta^2 \int_0^b p(\psi(G_1) > x)\, \upsilon(dx).$ (13)

Let $X_0 = x$ be the random initial position of the target, which has a truncated normal distribution, and let $W(t) = \sigma\sqrt{t}\, x + \mu t$, where $x$ is a truncated standard normal random variable. In order to calculate

$M_1(x) = \sum_{i=1}^2 \theta^{2i}\, p(\tilde{\psi}(G_{2i}) < x),$ (14)

note that since $\tilde{\psi}(G_{2i}) = W(G_{2i}) + cG_{2i}$, we have $p(W(G_{2i}) + cG_{2i} < -x) = p(\tilde{\psi}(G_{2i}) < -x)$

$= p(x < -W(G_{2i}) - cG_{2i}).$ (15)

Since $G_{2i} = \lambda(\theta^{2i} - 1)$, $c = v(\theta - 1)/(\theta + 1)$ and $W(G_{2i}) = \sigma\sqrt{G_{2i}}\, x + \mu G_{2i}$, we get

$p\big(x < -\sigma\sqrt{\lambda(\theta^{2i} - 1)}\, x - \mu\lambda(\theta^{2i} - 1) - c\lambda(\theta^{2i} - 1)\big)$
$= p\big(x + \sigma\sqrt{\lambda(\theta^{2i} - 1)}\, x < -\mu\lambda(\theta^{2i} - 1) - c\lambda(\theta^{2i} - 1)\big)$
$= p\big(x\,(1 + \sigma\sqrt{\lambda(\theta^{2i} - 1)}) < -\mu\lambda(\theta^{2i} - 1) - c\lambda(\theta^{2i} - 1)\big)$
$= p\Big(x < \frac{-\lambda(\theta^{2i} - 1)(\mu + c)}{1 + \sigma\sqrt{\lambda(\theta^{2i} - 1)}}\Big).$ (16)

Put $\lambda = 3$, $\theta = 2$, $v = 10$, $\mu = 2$, $\sigma = 30$, so $c = 3.33$. Substituting in Equation (16):

$p\Big(x < \frac{-3(2^{2i} - 1)(2 + 3.33)}{1 + 30\sqrt{3(2^{2i} - 1)}}\Big).$ (17)

Substituting (17) in (14), we get:

$M_1(x) = \sum_{i=1}^2 2^{2i}\, p\Big(x < \frac{-3(2^{2i} - 1)(2 + 3.33)}{1 + 30\sqrt{3(2^{2i} - 1)}}\Big)$
$= 2^2 p(x < -0.5271) + 2^4 p(x < -1.186)$
$= 2^2 p\Big(\frac{x - \mu}{\sigma} < \frac{-0.5271 - 2}{30}\Big) + 2^4 p\Big(\frac{x - \mu}{\sigma} < \frac{-1.186 - 2}{30}\Big)$
$= 2^2 p(z < -0.08) + 2^4 p(z < -0.1062) = 9.168,$ where $z \sim N(0, 1)$. (18)
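The arithmetic in (18), and in (19) below, can be rechecked with a short script (an illustrative sketch, not from the paper; small differences from the reported 9.168 and 20.8 come only from rounding the intermediate thresholds and normal probabilities):

```python
import math

def std_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

lam, theta, v, mu, sigma = 3, 2, 10, 2, 30
c = v * (theta - 1) / (theta + 1)                      # = 10/3 ~ 3.33

def M1():
    """First-case sum M_1 from (16)-(18)."""
    total = 0.0
    for i in (1, 2):
        G = lam * (theta ** (2 * i) - 1)               # G_{2i}
        thr = -G * (mu + c) / (1 + sigma * math.sqrt(G))   # threshold in (16)
        total += theta ** (2 * i) * std_cdf((thr - mu) / sigma)
    return total

def B1():
    """First-case sum B_1 from (19)."""
    total = 0.0
    for i in (1, 2):
        G = lam * (theta ** (2 * i + 1) - 1)           # G_{2i+1}
        thr = G * (c - mu) / (1 + sigma * math.sqrt(G))
        total += theta ** (2 * i + 1) * (1 - std_cdf((thr - mu) / sigma))
    return total

print(round(M1(), 3), round(B1(), 3))   # close to the paper's 9.168 and 20.8
```

Carrying full precision through the thresholds gives $M_1 \approx 9.19$ and $B_1 \approx 20.86$, consistent with the rounded values in (18) and (19).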

$B_1(x)$ is calculated in the same way:

$B_1(x) = \sum_{i=1}^2 \theta^{2i+1}\, p(\psi(G_{2i+1}) > x)$, since $\psi(G_{2i+1}) = W(G_{2i+1}) - cG_{2i+1}$,

with $W(G_{2i+1}) = \sigma\sqrt{G_{2i+1}}\, x + \mu G_{2i+1}$, $G_{2i+1} = \lambda(\theta^{2i+1} - 1)$, $c = v(\theta - 1)/(\theta + 1)$.

Then $p(\psi(G_{2i+1}) > -x) = p(W(G_{2i+1}) - cG_{2i+1} > -x) = p(x > -W(G_{2i+1}) + cG_{2i+1})$, and as in (15)-(16):

$B_1(x) = \sum_{i=1}^2 \theta^{2i+1}\, p\Big(x > \frac{-\mu\lambda(\theta^{2i+1} - 1) + c\lambda(\theta^{2i+1} - 1)}{1 + \sigma\sqrt{\lambda(\theta^{2i+1} - 1)}}\Big) = \sum_{i=1}^2 \theta^{2i+1}\, p\Big(x > \frac{\lambda(\theta^{2i+1} - 1)(c - \mu)}{1 + \sigma\sqrt{\lambda(\theta^{2i+1} - 1)}}\Big)$
$= 2^3 p(x > 0.202) + 2^5 p(x > 0.426)$
$= 2^3 p\Big(z > \frac{0.202 - 2}{30}\Big) + 2^5 p\Big(z > \frac{0.426 - 2}{30}\Big) = 20.8.$ (19)

Since

$E(\tau_\varphi) \le \lambda(\theta - 1)\Big[g + \int_a^0 M(x)\, \upsilon(dx) + \int_0^b B(x)\, \upsilon(dx)\Big],$ (20)

substituting (18) and (19) in Equation (20) we get

$E_1(\tau_\varphi) \le 3\Big[g + \int_a^0 9.168\, dx + \int_0^b 20.8\, dx\Big].$ (21)

Let $a = -2$, $b = 2$.

Since $g = p(\tau_\varphi > 0) + \theta\, p(\tau_\varphi > G_1) + \theta^2 \int_0^b p(\psi(G_1) > x)\, \upsilon(dx)$ and

$p(\tau_\varphi > G_{2i+1}) \le \int_a^0 p(X_0 + W(G_{2i}) < H_{2i} \mid X_0 = x)\, \upsilon(dx) + \int_0^b p(X_0 + W(G_{2i+1}) > H_{2i+1} \mid X_0 = x)\, \upsilon(dx),$ (22)

putting $i = 0$ in Equation (22) we get

$p(\tau_\varphi > G_1) \le \int_a^0 p(X_0 + W(G_0) < H_0 \mid X_0 = x)\, \upsilon(dx) + \int_0^b p(X_0 + W(G_1) > H_1 \mid X_0 = x)\, \upsilon(dx).$ (23)

Since $H_i = (-1)^{i+1} c\,(G_i + 1 + (-1)^{i+1})$, $W(G_i) = \sigma\sqrt{G_i}\, x + \mu G_i$ and $G_i = \lambda(\theta^i - 1)$, we have

$H_0 = (-1)c\,(G_0 + 1 - 1) = -cG_0 = 0$ and $W(G_0) = \sigma\sqrt{G_0}\, x + \mu G_0 = 0.$

Substituting $H_0$ and $W(G_0)$ in Equation (23), we get

$p(\tau_\varphi > G_1) \le \int_a^0 p(x_0 < 0)\, \upsilon(dx) + \int_0^b p(x_0 > H_1 - W(G_1) \mid x_0 = x)\, \upsilon(dx).$ (24)

Since $H_1 = (-1)^2 c\,(G_1 + 1 + (-1)^2) = c\,(G_1 + 2) = c\,(\lambda(\theta - 1) + 2) = 16.65$ and $W(G_1) = \sigma\sqrt{G_1}\, x + \mu G_1 = 51.9615\, x + 6$,

substituting $H_1$ and $W(G_1)$ in Equation (24) we get

$p(\tau_\varphi > G_1) \le \int_a^0 p(x < 0)\, \upsilon(dx) + \int_0^b p\big(x > 16.65 - (51.9615\, x + 6)\big)\, dx$
$\le \int_a^0 p(x < 0)\, \upsilon(dx) + \int_0^b p(x + 51.9615\, x > 16.65 - 6)\, dx$
$\le \int_a^0 p(x < 0)\, \upsilon(dx) + \int_0^b p\Big(x > \frac{16.65 - 6}{52.9615}\Big)\, dx.$ (25)

Since $p(x < 0) = p(z < -0.07) = 0.472$ and

$p\Big(x > \frac{16.65 - 6}{52.9615}\Big) = p(x > 0.20) = p\Big(\frac{x - \mu}{\sigma} > \frac{0.20 - 2}{30}\Big) = p(z > -0.06) = 0.524,$

substituting in Equation (25) we get $p(\tau_\varphi > G_1) \le |a|(0.472) + b(0.524)$.

Since $a = -2$, $b = 2$: $p(\tau_\varphi > G_1) \le 1.992.$ (26)

To calculate $p(\psi(G_1) > x)$, where $\psi(G_{2i+1}) = W(G_{2i+1}) - cG_{2i+1}$:

$\psi(G_1) = W(G_1) - cG_1 = 51.9615\, x + 6 - 9.99 = 51.9615\, x - 3.99.$ (27)

Then

$p(\psi(G_1) > x) = p(51.9615\, x - 3.99 > x) = p(50.9615\, x > 3.99) = p(x > 0.0783) = p(z > -0.064) = 0.524.$ (28)

Substituting (26) and (28) in Equation (13), we get

$g = 1 + 2(1.992) + 2^2 \int_0^2 0.524\, dx = 9.176.$ (29)

Substituting (29) in (21), we get

$E_1(\tau_\varphi) \le 3[9.176 + 18.336 + 41.6] = 207.336.$

Case 2: Take $M_2(x) = \sum_{i=1}^3 \theta^{2i}\, p(\tilde{\psi}(G_{2i}) < x)$ and $B_2(x) = \sum_{i=1}^3 \theta^{2i+1}\, p(\psi(G_{2i+1}) > x)$. In the same way, for the chosen

$\lambda = 3$, $\theta = 2$, $v = 10$, $\mu = 2$, $\sigma = 30$, $c = 3.33$,

we get $M_2(x) = 37.42$, $B_2(x) = 85.10$, and then $E_2(\tau_\varphi) \le 762.69$.

Table 1. The upper bound of $E(\tau_\varphi)$.

Case 3: Take $M_3(x) = \sum_{i=1}^4 \theta^{2i}\, p(\tilde{\psi}(G_{2i}) < x)$ and $B_3(x) = \sum_{i=1}^4 \theta^{2i+1}\, p(\psi(G_{2i+1}) > x)$. In the same way, for the chosen $\lambda = 3$, $\theta = 2$, $v = 10$, $\mu = 2$, $\sigma = 30$, $c = 3.33$, we get $M_3(x) = 142.12$, $B_3(x) = 331.082$, and then $E_3(\tau_\varphi) \le 2866.71$.

Case 4: Take $M_4(x) = \sum_{i=1}^5 \theta^{2i}\, p(\tilde{\psi}(G_{2i}) < x)$ and $B_4(x) = \sum_{i=1}^5 \theta^{2i+1}\, p(\psi(G_{2i+1}) > x)$. In the same way, for the chosen $\lambda = 3$, $\theta = 2$, $v = 10$, $\mu = 2$, $\sigma = 30$, $c = 3.33$, we get $M_4(x) = 497.02$, $B_4(x) = 1220.95$, and then $E_4(\tau_\varphi) \le 10335.41$.

To see the effect of $\mu$ on the search plan, take $\lambda = 3$, $\theta = 2$, $v = 10$, $\sigma = 30$, $c = 3.33$ as constants and give different values to $\mu$; for each choice we calculate the values $B_j(x)$, $M_j(x)$, $j = 1, 2, 3, 4$, and the corresponding values of $\Omega(\cdot)$, see Table 1. From this table we can determine a search plan which makes the upper bound of $E(\tau_\varphi)$ small. In a future study a program can be written to obtain the search plan for different values of $\lambda, \theta, c, v, \sigma, \mu$.
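The effect of the drift $\mu$ mentioned above can be sketched programmatically for the first-case sums (an illustration only, restricted to $M_1$ and $B_1$ from (16)-(19); the function name `M1_B1` is ours, not the paper's):

```python
import math

def std_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def M1_B1(mu, lam=3, theta=2, v=10, sigma=30):
    """First-case sums M_1, B_1 from (16)-(19) for a given drift mu (illustrative)."""
    c = v * (theta - 1) / (theta + 1)
    M = 0.0
    B = 0.0
    for i in (1, 2):
        Ge = lam * (theta ** (2 * i) - 1)          # G_{2i}, even-index thresholds
        thr_e = -Ge * (mu + c) / (1 + sigma * math.sqrt(Ge))
        M += theta ** (2 * i) * std_cdf((thr_e - mu) / sigma)
        Go = lam * (theta ** (2 * i + 1) - 1)      # G_{2i+1}, odd-index thresholds
        thr_o = Go * (c - mu) / (1 + sigma * math.sqrt(Go))
        B += theta ** (2 * i + 1) * (1 - std_cdf((thr_o - mu) / sigma))
    return M, B

for mu in (0, 1, 2, 3):
    M, B = M1_B1(mu)
    print(mu, round(M, 3), round(B, 3))
```

As $\mu$ grows, $M_1$ decreases while $B_1$ increases, which is the kind of trade-off a full parameter sweep over $\lambda, \theta, c, v, \sigma, \mu$ would chart.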

6. Conclusion

In this paper, we investigated the search model for a lost target whose truncated Brownian motion is on a real line, and the expected value of the first meeting between the searcher and target is studied. Also the existence of the optimal search plan that minimizes this expected value is proved. The search model, when the lost target follows truncated Brownian motion on one of finite number of disjoint linear lines will be investigated in the future.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] El-Rayes, A.B., Mohamed, A.A. and Abou Gabal, H.M. (2003) Linear Search for a Brownian Target Motion. Acta Mathematica Scientia, 23(B), 321-327.
[2] Mohamed, A., Kassem, M. and El-Hadidy, M.A. (2011) Multiplicative Linear Search for a Brownian Target Motion. Applied Mathematical Modelling, 35, 4127-4139.
[3] El-Hadidy, M.A.A. (2016) Searching for a d-Dimensional Brownian Target with Multiple Sensors. International Journal of Mathematics in Operational Research, 9, 279-301.
https://doi.org/10.1504/IJMOR.2016.078822
[4] Elrayes, A.B. and Abd El-Moneim, M.A. (1989) Searching for a Randomly Moving Target. The Third ORMA Conference Egypt, 2, 323-329.
[5] Feller, W. (1966) An Introduction to Probability Theory and Its Applications. Wiley, New York.
[6] Royden, H.L. (1968) Real Analysis. Second Edition, Macmillan, London.
