<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE article  PUBLIC "-//NLM//DTD Journal Publishing DTD v3.0 20080202//EN" "http://dtd.nlm.nih.gov/publishing/3.0/journalpublishing3.dtd"><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" dtd-version="3.0" xml:lang="en" article-type="research-article"><front><journal-meta><journal-id journal-id-type="publisher-id">APM</journal-id><journal-title-group><journal-title>Advances in Pure Mathematics</journal-title></journal-title-group><issn pub-type="epub">2160-0368</issn><publisher><publisher-name>Scientific Research Publishing</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="doi">10.4236/apm.2024.146025</article-id><article-id pub-id-type="publisher-id">APM-133840</article-id><article-categories><subj-group subj-group-type="heading"><subject>Articles</subject></subj-group><subj-group subj-group-type="Discipline-v2"><subject>Physics &amp; Mathematics</subject></subj-group></article-categories><title-group><article-title>
 
 
  Stochastic Maximum Principle for Optimal Advertising Models with Delay and Non-Convex Control Spaces
 
</article-title></title-group><contrib-group><contrib contrib-type="author" xlink:type="simple"><name name-style="western"><surname>Guatteri</surname><given-names>Giuseppina</given-names></name><xref ref-type="aff" rid="aff1"><sup>1</sup></xref></contrib><contrib contrib-type="author" xlink:type="simple"><name name-style="western"><surname>Masiero</surname><given-names>Federica</given-names></name><xref ref-type="aff" rid="aff2"><sup>2</sup></xref></contrib></contrib-group><aff id="aff1"><addr-line>Dipartimento di Matematica, Politecnico di Milano, via Bonardi, Milano, Italia</addr-line></aff><aff id="aff2"><addr-line>Dipartimento di Matematica e Applicazioni, Universit&#224; di Milano-Bicocca, via Cozzi, Milano, Italia</addr-line></aff><pub-date pub-type="epub"><day>04</day><month>06</month><year>2024</year></pub-date><volume>14</volume><issue>06</issue><fpage>442</fpage><lpage>450</lpage><history><date date-type="received"><day>22</day><month>May</month><year>2024</year></date><date date-type="rev-recd"><day>15</day><month>June</month><year>2024</year></date><date date-type="accepted"><day>18</day><month>June</month><year>2024</year></date></history><permissions><copyright-statement>&#169; Copyright 2024 by authors and Scientific Research Publishing Inc.</copyright-statement><copyright-year>2024</copyright-year><license><license-p>This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0/</license-p></license></permissions><abstract><p>
 
 
  In this paper we study optimal advertising problems that model the introduction of a new product into the market in the presence of carryover effects of the advertisement and of memory effects in the level of goodwill. In particular, we let the dynamics of the product goodwill depend on the past, and also on past advertising efforts. We treat the problem by means of the stochastic Pontryagin maximum principle, here considered for a class of problems where, in the state equation, both the state and the control may depend on the past. Moreover, the control acts on the martingale term, and the space of controls U can be chosen to be non-convex. The maximum principle is thus formulated using a first-order adjoint Backward Stochastic Differential Equation (BSDE), which can be explicitly computed thanks to the specific characteristics of the model, and a second-order adjoint relation.
 
</p></abstract><kwd-group><kwd>Stochastic Optimal Control</kwd><kwd>Delay Equations</kwd><kwd>Advertisement Models</kwd><kwd>Stochastic Maximum Principle</kwd></kwd-group></article-meta></front><body><sec id="s1"><title>1. Introduction</title><p>We consider problems of optimal advertising under uncertainty, and therefore study a stochastic version of classical advertising models. We start from the stochastic model introduced in [<xref ref-type="bibr" rid="scirp.133840-ref1">1</xref>] (see also [<xref ref-type="bibr" rid="scirp.133840-ref2">2</xref>] and [<xref ref-type="bibr" rid="scirp.133840-ref3">3</xref>] ): we consider carryover effects of the advertising, which in the model read as a delay in the control, and memory effects in the level of goodwill, which in the model read as a delay in the state. We refer also to [<xref ref-type="bibr" rid="scirp.133840-ref4">4</xref>] for optimal advertising problems with memory effects in the level of goodwill.</p><p>In our model the delay in the effect of advertising affects the martingale term of the state equation: namely, following [<xref ref-type="bibr" rid="scirp.133840-ref1">1</xref>] , we consider the controlled stochastic differential equation in ℝ with pointwise delay in the state and in the control</p><p>{ d x ( t ) = [ b 0 u ( t ) − a 0 x ( t ) − a d x ( t − d ) ] d t + σ 1 x ( t ) d W t 1 + σ 2 u ( t − d ) d W t 2 , x ( τ ) = x 0 ( τ ) ,   τ ∈ [ − d , 0 ] . (1.1)</p><p>Following the usual notation, see e.g. [<xref ref-type="bibr" rid="scirp.133840-ref1">1</xref>] , x is the level of goodwill, a 0 and a d are factors related to the image deterioration in case of no advertising, and b 0 is a constant representing an advertising effectiveness factor. 
The diffusion term σ 1 x ( t ) d W t 1 represents word-of-mouth communication, with σ 1 ≥ 0 the so-called “advertising volatility”, while the second diffusion term keeps track of the delayed effect of the advertising effort u, with the constant σ 2 ≥ 0 the effectiveness volatility of the communication. Besides, x 0 ( 0 ) is the initial goodwill level, while x 0 ( τ ) ,   τ ∈ [ − d , 0 ) is the history of the goodwill level before the advertising campaign is started.</p><p>The functional to maximize, over all controls in U , is the following profit, defined on a finite horizon:</p><p>J &#175; ( t , x , u ) = E ∫ t T ( − c ( u ( s ) ) + l ( x ( s ) ) ) d s + E   r ( x ( T ) ) , (1.2)</p><p>where c is the cost of advertising, l is the current reward, and r is the reward from the final goodwill. Our purpose is to derive a maximum principle for such a problem with non-convex control space U, extending the results already present in the literature for the convex case: see e.g. [<xref ref-type="bibr" rid="scirp.133840-ref5">5</xref>] , where the stochastic maximum principle for control problems with pointwise delay in the state and in the control is studied, and see also [<xref ref-type="bibr" rid="scirp.133840-ref6">6</xref>] and [<xref ref-type="bibr" rid="scirp.133840-ref7">7</xref>] , where more general models are treated but the convexity of the control space U is still required. We underline the fact that stochastic control problems with a diffusion depending on the control are difficult to treat; concerning problems related to advertising, we mention the recent paper [<xref ref-type="bibr" rid="scirp.133840-ref8">8</xref>] , where the author solves the problem by means of the dynamic programming approach; differently from the present paper, the case of pointwise delay cannot be treated there, and the diffusion considered depends linearly on the control, but not on the delayed control. 
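The dynamics (1.1) and the profit (1.2) can be approximated numerically with a plain Euler–Maruyama scheme; the sketch below is illustrative only: all parameter values, the flat initial history, the constant control and the choices c(u) = u², l(x) = x, r(x) = x are assumptions of the example, not quantities from the paper.

```python
import numpy as np

# Euler-Maruyama discretization of the delayed goodwill Equation (1.1) and a
# Monte Carlo estimate of the profit (1.2). Parameter values, the flat
# initial history and c(u)=u^2, l(x)=x, r(x)=x are illustrative assumptions.
rng = np.random.default_rng(0)

T, d, dt = 1.0, 0.25, 0.002          # horizon, delay, time step
a0, ad, b0 = 0.5, 0.2, 1.0           # deterioration factors, effectiveness
sig1, sig2 = 0.3, 0.1                # advertising / effectiveness volatility
n, lag = int(T / dt), int(d / dt)    # grid points on [0,T], points per delay

def simulate(u, x0=1.0, n_paths=2000):
    """Simulate goodwill paths for a control u given on the time grid;
    u is taken to be 0 on [-d,0) and x0(tau)=x0 on [-d,0] (assumptions)."""
    x = np.empty((n_paths, n + 1))
    x[:, 0] = x0
    dW = rng.normal(0.0, np.sqrt(dt), size=(2, n_paths, n))
    for k in range(n):
        x_del = x0 if k < lag else x[:, k - lag]        # x(t - d)
        u_del = 0.0 if k < lag else u[k - lag]          # u(t - d)
        drift = b0 * u[k] - a0 * x[:, k] - ad * x_del
        x[:, k + 1] = (x[:, k] + drift * dt
                       + sig1 * x[:, k] * dW[0, :, k]   # word-of-mouth term
                       + sig2 * u_del * dW[1, :, k])    # delayed control term
    return x

u = np.full(n, 0.5)                  # a constant admissible control
x = simulate(u)
# profit (1.2) with the illustrative c(u)=u^2, l(x)=x, r(x)=x:
J = np.mean(np.sum((-u**2 + x[:, :-1]) * dt, axis=1) + x[:, -1])
print(f"estimated profit J ~ {J:.3f}")
```

Such a simulation can be used, for instance, to compare the estimated profit of different piecewise-constant controls taking values in a finite (hence non-convex) set U.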
We stress that one novelty of this paper is to handle the new difficulty coming from the non-convexity of U: also in the non-delayed case, the non-convexity of U makes the approach based on the stochastic maximum principle more complicated. First, we need to use the spike variation of the control and introduce the second variation to handle the control acting on the martingale term. We therefore follow [<xref ref-type="bibr" rid="scirp.133840-ref9">9</xref>] to introduce this additional step. Thanks to the specific characteristics of the state equation, we can derive a quasi-explicit form for both the first adjoint, which results in an anticipated backward equation, see [<xref ref-type="bibr" rid="scirp.133840-ref10">10</xref>] , and the second adjoint, which is written using the optimal cost and a simple auxiliary process. Note that the specific case we address is not considered in [<xref ref-type="bibr" rid="scirp.133840-ref11">11</xref>] and does not completely fall within the hypotheses of [<xref ref-type="bibr" rid="scirp.133840-ref9">9</xref>] , as we also consider the delayed control appearing in the martingale term.</p><p>The structure of the paper is the following: in Section 2 we present the notation, the assumptions and the control problem. Section 3 is devoted to collecting the results on the first variation of the optimal state and also on the second variation, which we have to study since the space of controls is not convex. Section 4 concerns the first and second adjoint relations necessary to formulate the stochastic maximum principle, which is stated and proved in Section 4.3.</p></sec><sec id="s2"><title>2. Stochastic Control Problem for Delay Equations Appearing in Advertising Models</title><sec id="s2_1"><title>2.1. Assumptions and Preliminaries</title><p>Let W ( t ) be a 2-dimensional Brownian motion defined on a complete probability space ( Ω , F , ℙ ) and ( F t ) t ≥ 0 the natural filtration generated by W, augmented in the usual way. 
We fix a finite time horizon T &gt; 0 and we set, for every q ≥ 1 and for every 0 ≤ u ≤ v ≤ T , the following spaces:</p><p>L F 0 ( Ω &#215; [ u , v ] ; ℝ k ) : = { X : Ω &#215; [ u , v ] → ℝ k , ( F t ) -progressively   measurable } (2.1)</p><p>L F q ( Ω &#215; [ u , v ] ; ℝ k ) : = { X : Ω &#215; [ u , v ] → ℝ k ,   ( F t ) -progr .  meas .,   ( E ∫ u v | X ( t ) | q d t ) 1 / q &lt; ∞ } (2.2)</p><p>L F q ( Ω ; C ( [ u , v ] ; ℝ k ) ) : = { X : Ω &#215; [ u , v ] → ℝ k ,   ( F t ) -progr .  meas .,   ( E sup t ∈ [ u , v ] | X ( t ) | q ) 1 / q &lt; ∞ } (2.3)</p></sec><sec id="s2_2"><title>2.2. Formulation of the Problem</title><p>We recall the state equation we are interested in:</p><p>{ d x ( t ) = [ b 0 u ( t ) − a 0 x ( t ) − a d x ( t − d ) ] d t + σ 1 x ( t ) d W t 1 + σ 2 u ( t − d ) d W t 2 , x ( τ ) = x 0 ( τ ) ,   τ ∈ [ − d , 0 ) ,   x ( 0 ) = x 0 ( 0 ) . (2.4)</p><p>We assume the following on the coefficients:</p><p>Hypothesis 2.1</p><p>(i) a 0 , a d , b 0 , σ 1 , σ 2 ∈ ℝ ;</p><p>(ii) the control strategy u, the advertisement in this case, belongs to the space</p><p>U : = { z ∈ L F 0 ( Ω &#215; [ 0, T ] , ℝ ) : z ( t ) ∈ U ,   ℙ − a . s . }</p><p>where U is a bounded, possibly non-convex, subset of ℝ ; in particular U can be a bounded subset of ℕ , which represents the realistic situation in which the advertisement is a multiple of a given quantity;</p><p>(iii) d &gt; 0 is the delay with which the past of the state affects the system.</p><p>We recall that the purpose is to maximize, over all controls in U , the following profit, on a finite horizon:</p><p>J &#175; ( t , x , u ) = E ∫ t T ( − c ( u ( s ) ) + l ( x ( s ) ) ) d s + E   r ( x ( T ) ) . (2.5)</p><p>Hypothesis 2.2 We assume the following:</p><p>(i) according to the literature, see e.g. 
[<xref ref-type="bibr" rid="scirp.133840-ref12">12</xref>] , c : U → ℝ is nonlinear, convex and locally Lipschitz;</p><p>(ii) l : [ 0, T ] &#215; ℝ → ℝ represents the current reward; it is twice differentiable with at most linear growth;</p><p>(iii) r : ℝ → ℝ represents the foreseen reward from the final goodwill; it is twice differentiable and strictly increasing, with bounded second derivative.</p><p>From now on, since we recall and apply the results in [<xref ref-type="bibr" rid="scirp.133840-ref9">9</xref>] , where a minimization problem is considered, we focus on the problem (equivalent to the original one of maximizing the profit J &#175; given in (2.5), since J = − J &#175; ) of minimizing, over all admissible controls in U, the cost functional</p><p>J ( t , x , u ) = E ∫ t T ( c ( u ( s ) ) − l ( x ( s ) ) ) d s − E   r ( x ( T ) ) . (2.6)</p><p>We are going to formulate necessary conditions for optimality.</p></sec></sec><sec id="s3"><title>3. First and Second Order Variations of the Optimal State Equation</title><p>Since U is not necessarily convex, we use the spike variation method, see [<xref ref-type="bibr" rid="scirp.133840-ref13">13</xref>] ; see also [<xref ref-type="bibr" rid="scirp.133840-ref9">9</xref>] for the case of delay equations.</p><p>Let ( u , x ) be an optimal couple: that is, u is an optimal control and x is the corresponding optimal trajectory, i.e. the solution to Equation (2.4). The spike variation works as follows: for ε &gt; 0 we consider an interval V ε ⊂ [ 0, T ] with m ( V ε ) = ε , where m is the Lebesgue measure. For v ∈ U we set</p><p>u ε ( t ) = { u ( t ) t ∈ [ 0, T ] \ V ε v t ∈ V ε (3.1)</p><p>We are going to derive a maximum principle for the control problem with state Equation (2.4) and cost functional (2.6) in the case of a non-convex space of controls. We will denote by x ε the solution of (2.4) corresponding to u ε and by δ u the spike variation of the control, i.e. 
δ u ( t ) : = ( u ( t ) − v ) I V ε ( t ) .</p><p>We can write the equation for the first order variation of the state:</p><p>{ d y ε ( t ) = [ − a 0 y ε ( t ) − a d y ε ( t − d ) ] d t + σ 1 y ε ( t ) d W t 1 + σ 2 δ u ( t − d ) d W t 2 , t ∈ [ 0 , T ] , y ε ( 0 ) = 0 ,   y ε ( τ ) = 0 ,   − d ≤ τ &lt; 0.</p><p>(3.2)</p><p>where δ u ( t − d ) = ( u ( t − d ) − v ) I V ε ( t − d ) , and the second variation is, for t ∈ [ 0, T ] ,</p><p>{ d z ε ( t ) = [ − a 0 z ε ( t ) − a d z ε ( t − d ) ] d t + b 0 δ u ( t ) d t + σ 1 z ε ( t ) d W t 1 z ε ( 0 ) = 0 ,   z ε ( τ ) = 0 ,   − d ≤ τ &lt; 0. (3.3)</p><p>It is well known that such equations are well posed in L F 2 ( Ω ; C ( [ 0, T ] ; ℝ ) ) (see e.g. [<xref ref-type="bibr" rid="scirp.133840-ref14">14</xref>] for a general theory of stochastic delay equations).</p><p>The following asymptotic behaviors hold, see ([<xref ref-type="bibr" rid="scirp.133840-ref9">9</xref>] , Theorem 3.3) for the delayed system and the classical result in ([<xref ref-type="bibr" rid="scirp.133840-ref15">15</xref>] , Theorem 4.4):</p><p>E sup t ∈ [ 0, T ] | y ε ( t ) | 2 = O ( ε ) ,   E sup t ∈ [ 0, T ] | z ε ( t ) | 2 = O ( ε 2 ) , (3.4)</p><p>E sup t ∈ [ 0, T ] | x ε ( t ) − x ( t ) − y ε ( t ) | 2 = O ( ε 2 ) , (3.5)</p><p>E sup t ∈ [ 0, T ] | x ε ( t ) − x ( t ) − y ε ( t ) − z ε ( t ) | 2 = o ( ε 2 ) . (3.6)</p><p>Moreover, ∀   p ≥ 1 ,</p><p>E sup t ∈ [ − d , T ] | y ε ( t ) | p &lt; + ∞ ,   E sup t ∈ [ − d , T ] | z ε ( t ) | p &lt; + ∞ . (3.7)</p></sec><sec id="s4"><title>4. Maximum Principle for the Stochastic Delay Equation</title><p>In this section, we first formulate the first order adjoint equation in Section 4.1, then pass to the second order adjoint in Section 4.2, and finally formulate the stochastic maximum principle in Section 4.3.</p><sec id="s4_1"><title>4.1. 
First Order Duality Relation</title><p>Following [<xref ref-type="bibr" rid="scirp.133840-ref9">9</xref>] we define the first order adjoint equation of (3.2), that is the anticipated backward equation</p><p>{ p ( t ) = r x ( x ( T ) ) + ∫ t T l x ( x ( s ) ) d s − a 0 ∫ t T     p ( s ) d s − a d ∫ t T     E F s p ( s + d ) d s         + σ 1 ∫ t T     E F s q 1 ( s + d ) d s + ∫ t T     q 1 ( s ) d W s 1 + ∫ t T     q 2 ( s ) d W s 2 p ( T − τ ) = 0 ,     for   all   τ ∈ [ − d , 0 ) , q 1 ( T − τ ) = q 2 ( T − τ ) = 0   a.e.   τ ∈ [ − d , 0 ) . (4.1)</p><p>that admits a unique solution ( p , q ) ∈ L F 2 ( Ω ; C ( [ 0, T ] ; ℝ ) ) &#215; L F 2 ( Ω &#215; [ 0, T ] ; ℝ 2 ) , where we write q = ( q 1 , q 2 ) ; see e.g. [<xref ref-type="bibr" rid="scirp.133840-ref10">10</xref>] .</p><p>Since Equation (4.1) is linear, in view of the conditions at future times we have an explicit, recursive formulation for the solution ( p , q ) :</p><p>Proposition 4.1 Assume Hypotheses 2.1 and 2.2. Define for every</p><p>k = 0 , ⋯ , [ T d ] ,</p><p>p k ( t ) : = p | [ ( T − ( k + 1 ) d ) ∨ 0 , ( T − k d ) ] ( t )</p><p>and</p><p>q k , 1 ( t ) : = q | [ ( T − ( k + 1 ) d ) ∨ 0 , ( T − k d ) ] 1 ( t ) ,     q k , 2 ( t ) : = q | [ ( T − ( k + 1 ) d ) ∨ 0 , ( T − k d ) ] 2 ( t ) .</p><p>Then ( p k , q k ) solve:</p><p>{ p 0 ( t ) = e − a 0 ( T − t ) E F t ( r x ( x ( T ) ) ) + E F t ∫ t T     e − a 0 ( s − t ) l x ( x ( s ) )   d s ,   t ∈ [ T − d , T ] , q 0 , 1 ( t ) = e − a 0 ( T − t ) L 1 ( t ) − ∫ t T     e − a 0 ( s − t ) K 1 ( s , t ) d s ,       for   a.e.   t ∈ [ T − d , T ] , q 0 , 2 ( t ) = e − a 0 ( T − t ) L 2 ( t ) − ∫ t T     e − a 0 ( s − t ) K 2 ( s , t ) d s ,       for   a.e.   t ∈ [ T − d , T ] . 
p k ( t ) = e − a 0 ( T − k d − t ) p k − 1 ( T − k d ) − a d ∫ t T − k d     e − a 0 ( s − t ) E F s p k − 1 ( s + d ) d s   + σ 1 ∫ t T − k d     e − a 0 ( s − t ) E F s q k − 1 , 1 ( s + d ) d s + ∫ t T − k d     e − a 0 ( s − t ) q k , 1 ( s ) d W s 1   + ∫ t T − k d     e − a 0 ( s − t ) q k , 2 ( s ) d W s 2 ,       t ∈ [ ( T − ( k + 1 ) d ) ∨ 0 , ( T − k d ) ] ,   k = 1 , ⋯ , [ T d ] (4.2)</p><p>where L 1 and L 2 are given by:</p><p>E F t ( r x ( x ( T ) ) ) = E ( r x ( x ( T ) ) ) + ∫ t T     L 1 ( s ) d W 1 ( s ) + ∫ t T     L 2 ( s ) d W 2 ( s ) . (4.3)</p><p>and K 1 and K 2 are given by:</p><p>E F t ( l x ( x ( s ) ) ) = E ( l x ( x ( s ) ) ) + ∫ T − d t     K 1 ( s , τ ) d W 1 ( τ ) + ∫ T − d t     K 2 ( s , τ ) d W 2 ( τ ) . (4.4)</p><p>Proof. Notice that, to avoid trivialities, we are taking d &lt; T .</p><p>Let us consider the first order adjoint Equation (4.1) for t ∈ ( T − d , T ] : in view of the fact that p ( t ) = q i ( t ) = 0 , i = 1 , 2 for a.e. t ∈ [ T , T + d ] , for t ∈ [ T − d , T ] it can be rewritten as the linear BSDE</p><p>{ p ( t ) = r x ( x ( T ) ) + ∫ t T l x ( x ( s ) ) d s − a 0 ∫ t T     p ( s ) d s + ∫ t T     q 1 ( s ) d W s 1 + ∫ t T     q 2 ( s ) d W s 2 p ( T − τ ) = 0 ,     for   all   τ ∈ [ − d , 0 ) , q 1 ( T − τ ) = q 2 ( T − τ ) = 0   a.e.   τ ∈ [ − d , 0 ) ,</p><p>(4.5)</p><p>and its solution for t ∈ [ T − d , T ] is given by</p><p>p 0 ( t ) = e − a 0 ( T − t ) r x ( x ( T ) ) + ∫ t T     e − a 0 ( s − t ) l x ( x ( s ) ) d s       + ∫ t T     e − a 0 ( s − t ) q 0 , 1 ( s ) d W s 1 + ∫ t T     e − a 0 ( s − t ) q 0 , 2 ( s ) d W s 2 (4.6)</p><p>Formula (4.2) is just a rearrangement of the variation of constants formula, while (4.3) and (4.4) are applications of the Martingale Representation Theorem, see also [<xref ref-type="bibr" rid="scirp.133840-ref16">16</xref>] . 
□</p><p>Using (3.2), (3.3) and (4.1), we deduce the first duality relation:</p><p>Proposition 4.2 Assume Hypotheses 2.1 and 2.2. Then:</p><p>E p ( T ) y ε ( T ) = E r x ( x ( T ) ) y ε ( T ) = − E ∫ 0 T [ l x ( x ( s ) ) y ε ( s ) + q 2 ( s ) δ u ( s − d ) ] d s (4.7)</p><p>E p ( T ) z ε ( T ) = E r x ( x ( T ) ) z ε ( T ) = − E ∫ 0 T     l x ( x ( s ) ) z ε ( s ) d s + E ∫ 0 T     p ( s ) δ u ( s ) d s (4.8)</p><p>Moreover, the cost can be written as follows:</p><p>J ( u ) − J ( u ε ) = E ∫ 0 T ( − δ c ( t ) + b 0 p ( t ) δ u ( t ) − σ 2 q 2 ( t ) δ u ( t − d ) ) d t   + 1 2 E   r x x ( x ( T ) ) ( y ε ( T ) ) 2 + 1 2 E ∫ 0 T     l x x ( t , x ( t ) ) ( y ε ( t ) ) 2 d t (4.9)</p><p>Proof. See ( [<xref ref-type="bibr" rid="scirp.133840-ref9">9</xref>] , Propositions 4.1 and 4.2). □</p></sec><sec id="s4_2"><title>4.2. Second Order Duality Relation</title><p>Following ( [<xref ref-type="bibr" rid="scirp.133840-ref9">9</xref>] , Theorem 4.9) we introduce the process P, called P 0,0 in [<xref ref-type="bibr" rid="scirp.133840-ref9">9</xref>] , which in this simplified case can be explicitly defined by means of an auxiliary process y ˜ s , 1 (here s indicates the initial time and 1 the initial condition) that solves the following equation</p><p>{ d y ˜ s , 1 ( t ) = ( − a 0 y ˜ s , 1 ( t ) − a d y ˜ s , 1 ( t − d ) ) d t + σ 1 y ˜ s , 1 ( t ) d W t 1 ,   t ∈ [ s , T ] , y ˜ s , 1 ( s ) = 1 ,   y ˜ s , 1 ( τ ) = 0 ,   − d ≤ τ &lt; s . (4.10)</p><p>By means of the solution process y ˜ s , 1 ∈ L F 2 ( Ω ; C ( [ 0, T ] ; ℝ ) ) , the process P can now be defined by</p><p>P ( s ) = E F s r x x ( x ( T ) ) y ˜ s ,1 ( T ) 2 + E F s ∫ s T     l x x ( t ) y ˜ s ,1 ( t ) 2 d t . (4.11)</p><p>Also in this case, the solution y ˜ s , 1 of Equation (4.10) can be explicitly defined by recursion:</p><p>Proposition 4.3 Assume Hypotheses 2.1 and 2.2. 
Define</p><p>y ˜ k ( t ) : = y ˜ | [ ( s + k d ) , ( s + ( k + 1 ) d ) ∧ T ] s ,1 ( t ) , for every k = 0, ⋯ , [ T d ]</p><p>Then y ˜ k solves</p><p>{ y ˜ 0 ( t ) = e − ( a 0 + σ 1 2 2 ) ( t − s ) + σ 1 ( W 1 ( t ) − W 1 ( s ) ) ,     t ∈ [ s , ( s + d ) ∧ T ] y ˜ k ( t ) = e − a 0 ( t − ( s + k d ) ) y ˜ k − 1 ( s + k d ) − a d ∫ s + k d t     e − a 0 ( t − r ) y ˜ k − 1 ( r − d ) d r   + σ 1 ∫ s + k d t     e − a 0 ( t − r ) y ˜ k ( r ) d W r 1 ,     t ∈ [ ( s + k d ) , ( s + ( k + 1 ) d ) ∧ T ] ,   k = 1 , ⋯ , [ T d ] (4.12)</p><p>Proof. Thanks to the initial condition in (4.10), we have that for every t ∈ [ s , ( s + d ) ∧ T ] :</p><p>{ d y ˜ s , 1 ( t ) = − a 0 y ˜ s , 1 ( t ) d t + σ 1 y ˜ s , 1 ( t ) d W t 1 y ˜ s , 1 ( s ) = 1 , (4.13)</p><p>Therefore y ˜ 0 ( t ) = y ˜ | [ s , ( s + d ) ∧ T ] s , 1 ( t ) = e − ( a 0 + σ 1 2 2 ) ( t − s ) + σ 1 ( W 1 ( t ) − W 1 ( s ) ) . Then the general case for t &gt; s + d follows by applying the variation of constants formula on each interval [ ( s + k d ) , ( s + ( k + 1 ) d ) ∧ T ] . □</p><p>The expansion of the cost then becomes:</p><p>Proposition 4.4 Assume Hypotheses 2.1 and 2.2. Then the following expansion holds:</p><p>J ( u ) − J ( u ε ) = E ∫ 0 T ( − δ c ( s ) + b 0 p ( s ) δ u ( s ) − σ 2 q 2 ( s ) δ u ( s − d ) ) d s     + 1 2 E ∫ 0 T     σ 2 2 ( δ u ( s − d ) ) 2 P ( s ) d s + o ( ε ) (4.14)</p><p>Proof. The proof is in ( [<xref ref-type="bibr" rid="scirp.133840-ref9">9</xref>] , Proposition 4.9, Theorems 4.10 and 4.11) and is a consequence of an approximation procedure based on the regularization of the Dirac measure δ d and the well-known reformulation of the delayed problem in infinite dimensions. □</p></sec><sec id="s4_3"><title>4.3. 
The Stochastic Maximum Principle</title><p>We are now in the position to state our main result.</p><p>Theorem 4.5 Under Hypotheses 2.1 and 2.2, any optimal couple ( x , u ) satisfies the following variational inequality:</p><p>b 0 ( u ( t ) − v ) p ( t ) − σ 2 ( u ( t ) − v ) E F t q 2 ( t + d ) − ( c ( u ( t ) ) − c ( v ) ) + 1 2 σ 2 2 ( u ( t ) − v ) 2 E F t P ( t + d ) ≤ 0 ,     ∀   v ∈ U ,   a . e .   ℙ − a . s . (4.15)</p><p>where ( p , q ) ∈ L F 2 ( Ω ; C ( [ 0, T ] ; ℝ ) ) &#215; L F 2 ( Ω &#215; [ 0, T ] ; ℝ 2 ) is the solution of the first order adjoint Equation (4.1), and P is the second adjoint process given by (4.11).</p><p>Proof. Let t ∈ [ 0, T ] and V ε = [ t , t + ε ] ; then</p><p>J ( u ) − J ( u ε ) = E ∫ 0 T ( − δ c ( s ) + b 0 p ( s ) δ u ( s ) − σ 2 q 2 ( s ) δ u ( s − d ) ) d s       + 1 2 E ∫ 0 T     σ 2 2 ( δ u ( s − d ) ) 2 P ( s ) d s + o ( ε ) = E ∫ 0 T ( − [ ( c ( u ( s ) ) − c ( v ) ) − b 0 p ( s ) ( u ( s ) − v ) ] I V ε ( s )       − σ 2 q 2 ( s ) ( u ( s − d ) − v ) I V ε ( s − d ) ) d s       + 1 2 E ∫ 0 T     σ 2 2 ( u ( s − d ) − v ) 2 P ( s ) I V ε ( s − d ) d s + o ( ε )</p><p>= − E ∫ t t + ε ( ( c ( u ( s ) ) − c ( v ) ) − b 0 p ( s ) ( u ( s ) − v ) ) d s       − E ∫ t t + ε     σ 2 E F s ′ q 2 ( s ′ + d ) ( u ( s ′ ) − v ) d s ′       + 1 2 E ∫ t t + ε     σ 2 2 ( u ( s ′ ) − v ) 2 E F s ′ P ( s ′ + d ) d s ′ + o ( ε ) . (4.16)</p><p>Letting ε tend to 0, by standard arguments we deduce (4.15). □</p></sec></sec><sec id="s5"><title>Acknowledgements</title><p>The authors are grateful to GNAMPA (Gruppo Nazionale per l’Analisi Matematica, la Probabilit&#224; e le loro Applicazioni) for financial support.</p></sec><sec id="s6"><title>Conflicts of Interest</title><p>The authors declare no conflicts of interest regarding the publication of this paper.</p></sec><sec id="s7"><title>Cite this paper</title><p>Guatteri, G. and Masiero, F. (2024) Stochastic Maximum Principle for Optimal Advertising Models with Delay and Non-Convex Control Spaces. 
Advances in Pure Mathematics, 14, 442-450. https://doi.org/10.4236/apm.2024.146025</p></sec></body><back><ref-list><title>References</title><ref id="scirp.133840-ref1"><label>1</label><mixed-citation publication-type="other" xlink:type="simple">Grosset, L. and Viscolani, B. (2004) Advertising for a New Product Introduction: A Stochastic Approach. &lt;i&gt;Top&lt;/i&gt;, 12, 149-167. &lt;br&gt;https://doi.org/10.1007/bf02578929</mixed-citation></ref><ref id="scirp.133840-ref2"><label>2</label><mixed-citation publication-type="other" xlink:type="simple">Gozzi, F. and Marinelli, C. (2006) Stochastic Optimal Control of Delay Equations Arising in Advertising Models. &lt;i&gt;Stochastic Partial Differential Equations and Applications&lt;/i&gt;, VII, Chapman &amp; Hall/CRC, Boca Raton, 133-148.</mixed-citation></ref><ref id="scirp.133840-ref3"><label>3</label><mixed-citation publication-type="other" xlink:type="simple">Gozzi, F., Marinelli, C. and Savin, S. (2009) On Controlled Linear Diffusions with Delay in a Model of Optimal Advertising under Uncertainty with Memory Effects. &lt;i&gt;Journal of Optimization Theory and Applications&lt;/i&gt;, 142, 291-321. &lt;br&gt;https://doi.org/10.1007/s10957-009-9524-5</mixed-citation></ref><ref id="scirp.133840-ref4"><label>4</label><mixed-citation publication-type="other" xlink:type="simple">Hartl, R.F. (1984) Optimal Dynamic Advertising Policies for Hereditary Processes. &lt;i&gt;Journal of Optimization Theory and Applications&lt;/i&gt;, 43, 51-72. &lt;br&gt;https://doi.org/10.1007/bf00934746</mixed-citation></ref><ref id="scirp.133840-ref5"><label>5</label><mixed-citation publication-type="other" xlink:type="simple">Chen, L. and Wu, Z. (2010) Maximum Principle for the Stochastic Optimal Control Problem with Delay and Application. &lt;i&gt;Automatica&lt;/i&gt;, 46, 1074-1080. 
&lt;br&gt;https://doi.org/10.1016/j.automatica.2010.03.005</mixed-citation></ref><ref id="scirp.133840-ref6"><label>6</label><mixed-citation publication-type="other" xlink:type="simple">Guatteri, G. and Masiero, F. (2021) Stochastic Maximum Principle for Problems with Delay with Dependence on the Past through General Measures.&lt;i&gt; Mathematical Control &amp; Related Fields&lt;/i&gt;, 11, 829-855. &lt;br&gt;https://doi.org/10.3934/mcrf.2020048</mixed-citation></ref><ref id="scirp.133840-ref7"><label>7</label><mixed-citation publication-type="other" xlink:type="simple">Hu, Y. and Peng, S. (1996) Maximum Principle for Optimal Control of Stochastic System of Functional Type. &lt;i&gt;Stochastic Analysis and Applications&lt;/i&gt;, 14, 283-301. &lt;br&gt;https://doi.org/10.1080/07362999608809440.</mixed-citation></ref><ref id="scirp.133840-ref8"><label>8</label><mixed-citation publication-type="other" xlink:type="simple">de Feo, F. (2023) Stochastic Optimal Control Problems with Delays in the State and in the Control via Viscosity Solutions and an Economical Application. arXiv: 2308.14506.</mixed-citation></ref><ref id="scirp.133840-ref9"><label>9</label><mixed-citation publication-type="other" xlink:type="simple">Guatteri, G. and Masiero, F. (2023) Stochastic Maximum Principle for Equations with Delay: Going to Infinite Dimensions to Solve the Non-Convex Case. arXiv: 2306.07422.</mixed-citation></ref><ref id="scirp.133840-ref10"><label>10</label><mixed-citation publication-type="other" xlink:type="simple">Peng, S. and Yang, Z. (2009) Anticipated Backward Stochastic Differential Equations. &lt;i&gt;The Annals of Probability&lt;/i&gt;, 37, 877-902. &lt;br&gt;https://doi.org/10.1214/08-aop423</mixed-citation></ref><ref id="scirp.133840-ref11"><label>11</label><mixed-citation publication-type="other" xlink:type="simple">Meng, W., Shi, J., Wang, T. and Zhang, J.F. (2023) A General Maximum Principle for Optimal Control of Stochastic Differential Delay Systems. 
arXiv: 2302.03339.</mixed-citation></ref><ref id="scirp.133840-ref12"><label>12</label><mixed-citation publication-type="other" xlink:type="simple">Tapiero, C.S. (1978) Optimum Advertising and Goodwill under Uncertainty. &lt;i&gt;Operations Research&lt;/i&gt;, 26, 450-463. &lt;br&gt;https://doi.org/10.1287/opre.26.3.450</mixed-citation></ref><ref id="scirp.133840-ref13"><label>13</label><mixed-citation publication-type="other" xlink:type="simple">Peng, S. (1990) A General Stochastic Maximum Principle for Optimal Control Problems. &lt;i&gt;SIAM Journal on Control and Optimization&lt;/i&gt;, 28, 966-979. &lt;br&gt;https://doi.org/10.1137/0328054</mixed-citation></ref><ref id="scirp.133840-ref14"><label>14</label><mixed-citation publication-type="book" xlink:type="simple">Mohammed, S.-E.A. (1998) Stochastic Differential Systems with Memory: Theory, Examples and Applications. In: Decreusefond, L., &amp;#216;ksendal, B., Gjerde, J. and &amp;#220;st&amp;#252;nel, A.S., Eds., &lt;i&gt;Stochastic Analysis and Related Topics&lt;/i&gt; &lt;i&gt;VI&lt;/i&gt;, Birkh&amp;#228;user Boston, 1-77. &lt;br&gt;https://doi.org/10.1007/978-1-4612-2022-0_1</mixed-citation></ref><ref id="scirp.133840-ref15"><label>15</label><mixed-citation publication-type="other" xlink:type="simple">Yong, J. and Zhou, X.Y. (1999) Stochastic Controls: Hamiltonian Systems and HJB Equations. Springer-Verlag, New York.</mixed-citation></ref><ref id="scirp.133840-ref16"><label>16</label><mixed-citation publication-type="other" xlink:type="simple">Hu, Y. and Peng, S. (1991) Adapted Solution of a Backward Semilinear Stochastic Evolution Equation. &lt;i&gt;Stochastic Analysis and Applications&lt;/i&gt;, 9, 445-459. &lt;br&gt;https://doi.org/10.1080/07362999108809250</mixed-citation></ref></ref-list></back></article>