Opportunism in Supply Chain Recommendations: A Dynamic Optimization Approach

Abstract

Supply chain partners often face a fundamental trade-off: dishonesty can provide immediate rewards, but excessive lying erodes credibility and undermines future opportunities. Prior research has documented the prevalence of deception and the value of trust, yet extant studies examine these behaviors as static choices. Much less is known about how deception evolves dynamically and how trust is eroded and restored over time. We develop a dynamic framework in which deception is treated as a decision variable and trust as a state variable that decays naturally and is further depleted by dishonesty. The payoff structure reflects the dual role of credibility: it enables immediate benefits from deception but must be preserved to sustain future influence. To address analytical challenges, we adopt a discrete-time formulation and apply numerical optimization methods. The analysis reveals three key insights. First, lying occurs in episodic bursts rather than continuously, with opportunistic spikes in deception followed by pauses that allow trust to recover. Second, the duration of lying episodes increases with the strength of incentives to deceive. Third, deception is amplified when opportunities persist in stable environments, leading to longer episodes of dishonesty. Managerially, our findings caution against incentive schemes that overemphasize short-term performance, as they may inadvertently foster sustained opportunism. Instead, mechanisms that balance immediate gains with credibility preservation—such as rotating advisors, auditing, or transparency requirements—can mitigate the erosion of trust and sustain collaboration.

Share and Cite:

Fruchter, G. (2026) Opportunism in Supply Chain Recommendations: A Dynamic Optimization Approach. Modern Economy, 17, 26-38. doi: 10.4236/me.2026.171003.

1. Introduction

Trust is a cornerstone of economic and social interactions. Without trust, markets collapse, contracts become unenforceable, and cooperation fails. Nowhere is this more evident than in supply chains, where buyers and suppliers repeatedly exchange information and advice. A rich stream of research, notably by Özalp Özer and colleagues, shows that vendors providing advice or assistance (e.g., supply forecasts, capacity commitments) can distort information when incentives misalign. For example, Trust in Forecast Information Sharing (Özer et al., 2011) examines how suppliers invest under asymmetric forecast information from buyers, revealing opportunities for misrepresentation. Similarly, Information Sharing, Advice Provision, or Delegation: What Leads to Higher Trust and Trustworthiness? (Özer, Subramanian, & Wang, 2018) demonstrates that the form of advice or assistance (sharing vs advice vs delegation) significantly affects trust dynamics. These cases highlight a fundamental trade-off: opportunistic misrepresentation can yield immediate benefits but undermines the long-term credibility required for sustained collaboration.

This tension is particularly salient in supply chain relationships, where advice and recommendations shape critical operational decisions. Prior research has demonstrated both the value of trust and the prevalence of opportunistic misrepresentation in these settings (e.g., Özer et al., 2011; Özer, Subramanian, & Wang, 2018). Yet, the existing literature largely examines advice provision and deception as static choices, abstracting away from the temporal evolution of trust and dishonesty. What remains missing is an understanding of how deception unfolds dynamically—how bursts of misrepresentation interact with the natural decay and recovery of trust, and under what conditions opportunism persists or subsides. Our study addresses this gap by developing a dynamic optimization framework that reveals episodic patterns of deception, the dependence of lying duration on incentive strength, and the amplification of dishonesty in stable environments.

Existing work has provided important insights. Behavioral economics shows that promises and communication affect cooperation in trust games. Operations management studies demonstrate that supply-chain efficiency depends on the credibility of shared information. Psychology emphasizes how dishonesty is shaped by incentives, self-image, and social norms. But despite these advances, one dimension remains largely underexplored: the dynamic nature of deception and trust. How do lying episodes unfold over time? Under what conditions does deception persist, and when does honesty re-emerge to preserve credibility? How do environmental factors, such as persistent opportunities, influence the timing and length of dishonesty?

This paper develops a dynamic optimization model of lying and trust to address these questions. We treat lying intensity as the advisor’s control and trust as the state variable that evolves according to both natural decay and the damaging effects of deception. The advisor’s payoff combines immediate benefits from lying with long-term gains from maintaining credibility. Analytical challenges arising from singular controls motivate a discrete-time formulation solved numerically through nonlinear programming. Simulations reveal that lying occurs in episodes, whose length increases with incentive intensity and environmental persistence, while trust erodes and partially recovers. We complement the computational analysis with an experimental design that enables empirical validation of these predictions.

Our contributions are threefold. First, we develop a tractable dynamic optimization model of deception under trust constraints, grounded in supply chain advisory contexts. Second, we derive novel predictions on the episodic nature of lying and its dependence on incentive strength and environmental persistence. Third, we demonstrate how these dynamics reinterpret deception not as an anomaly but as a strategic intertemporal choice with important implications for supply chain collaboration, advisory services, and organizational trust. While our analysis is theoretical, the framework provides a foundation for future empirical validation.

2. Related Literature

Our paper is most directly connected to research on advice, information sharing, and credibility in supply chains. A rich stream of work by Özer and co-authors has shown how trust and opportunism shape coordination when suppliers provide forecasts or recommendations. Özer et al. (2011) studies forecast information sharing, demonstrating how suppliers’ credibility affects buyers’ ordering and suppliers’ capacity decisions. Özer, Subramanian, and Wang (2018) compare different modes of assistance—information sharing, advice, and delegation—and show how each affects trust and trustworthiness. More broadly, Özer and Zheng (2017) review how trust and trustworthiness can be established in supply chain information sharing. These studies highlight the centrality of credibility in advisory relationships, but they generally treat misrepresentation as a static choice or rely on repeated-game formulations with fixed strategies. Our framework builds on this foundation by modeling deception dynamically: trust is a state variable that both decays naturally and is eroded by dishonesty, while deception is a control variable chosen strategically over time.

A complementary literature in behavioral economics has investigated how individuals balance honesty and incentives. Charness and Dufwenberg (2006) demonstrate the power of promises in fostering cooperation in trust games. Gneezy (2005) shows that lying increases with favorable consequences, while Fischbacher and Föllmi-Heusi (2013) document systematic cheating patterns. Related work by Ellingsen and Johannesson (2004) and Boles, Croson, and Murnighan (2000) highlights how communication, threats, and retribution shape deception. These studies establish that dishonesty is common but context-dependent. Our work extends this literature by formalizing dishonesty as an intertemporal decision problem rather than a one-shot act.

Psychological and organizational perspectives further stress that lying is shaped by relational and reputational concerns. Ariely (2012) provides evidence that dishonesty is bounded by self-concept maintenance, while Levine and Schweitzer (2015) show that “prosocial lies” can, under some conditions, enhance trust. These insights support our modeling approach, which treats trust as an intangible but valuable asset that agents manage dynamically.

Finally, our research connects to formal models of goodwill and reputation. Nerlove and Arrow’s (1962) goodwill accumulation model established the idea of reputation as a depreciating stock. Cabral (2000) and Levin (2003) examine firm reputation and relational contracts in dynamic environments. While these models share the insight that credibility depreciates, they rarely model dishonesty explicitly as a decision variable that depletes trust. Our contribution is to integrate this idea into a dynamic optimization framework tailored to supply chain advisory settings.

3. Model

We consider an advisor-receiver interaction in which the advisor may strategically choose to misrepresent information. The advisor’s influence depends on the level of trust the receiver places in them. While deception may yield immediate benefits, it simultaneously erodes credibility, reducing future opportunities to gain. The model captures this intertemporal trade-off between short-term deception and long-term trust.

The model is dynamic, with time indexed continuously. Let S(t) be the state variable, representing the receiver’s trust in the advisor. Trust is assumed to decay naturally over time and to decline more quickly when deception occurs. The control variable U(t) represents the advisor’s level of dishonesty at time t; higher values correspond to more aggressive lying, and U(t) is bounded between 0 (truthful) and 1 (maximum deception). In practical supply-chain settings, U(t) captures the advisor’s degree of misrepresentation at time t. This can reflect several operational forms of deception: the magnitude of forecast inflation, selective withholding of negative information, or a composite measure of strategically biased recommendations. Thus, U(t) serves as a continuous effort variable summarizing how strongly the advisor distorts information to influence the buyer’s decision.

3.1. Trust Dynamics

Trust evolves according to three components: i) A baseline contribution to trust (e.g., from institutional or reputational factors). ii) Erosion from dishonesty. iii) Natural decay of trust in the absence of reinforcement. Formally, trust follows:

dS(t)/dt = c − αU(t) − δS(t),   S(0) = S_0   (1)

where U(t) ∈ [0, 1], c is the baseline trust contribution, α > 0 captures the marginal erosion of trust due to lying, and δ > 0 is the natural decay rate.
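Equation (1) has a useful closed-form implication: holding deception constant at some level U, setting dS/dt = 0 gives the long-run trust level S* = (c − αU)/δ, so sustained lying permanently lowers the credibility plateau. A minimal sketch of this calculation, using the baseline parameter values reported in Section 4:

```python
# Steady-state trust under constant deception: set dS/dt = 0 in Eq. (1),
# which gives S* = (c - alpha*U) / delta.
# Baseline parameters from Section 4 (c, alpha, delta).
c, alpha, delta = 0.2, 0.1, 0.03

def steady_state_trust(U):
    """Long-run trust level when deception is held constant at U."""
    return (c - alpha * U) / delta

for U in (0.0, 0.5, 1.0):
    print(f"U = {U:.1f} -> S* = {steady_state_trust(U):.2f}")
# Full honesty (U = 0) sustains roughly twice the long-run trust of
# maximal deception (U = 1): 6.67 vs. 3.33
```

The gap between these plateaus is what the advisor trades away when deceiving persistently, which is why the optimal policies below concentrate lying into bursts rather than holding U high throughout.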

3.2. Payoff Structure

At each instant, the advisor derives utility from two sources: i) The value of trust itself, reflecting the long-term benefit of credibility. ii) Immediate gains from deception, which increase in trust—because lies are more effective when credibility is high—but grow concavely with the level of deception.

The payoff structure therefore captures the dual role of credibility: it provides ongoing value and at the same time amplifies the benefit of lying.

3.3. Objective Function

The advisor selects a deception strategy U(t) over the horizon [0, T] to maximize the present value of discounted utility:

max_{U(t)} ∫₀ᵀ e^(−ρt) [S(t) + γS(t)f(U(t))] dt,   (2)

where the integrand consists of two parts: i) Trust value S(t), reflecting the long-term benefit of credibility. ii) Deception gain γS(t)f(U(t)), capturing the immediate benefit of lying, which scales with trust and the incentive parameter γ. The function f(U) is increasing and concave, so lying produces diminishing returns as intensity rises. In the baseline case, f(U) = √U.

3.4. Analytical Challenges

Equation (1) and Equation (2) define a nonlinear optimal control problem. Applying Pontryagin’s Maximum Principle reveals the potential for singular controls, where marginal costs and benefits of lying balance, complicating closed-form characterization. To address this, we adopt a direct transcription approach:

a) The horizon [0, T] is discretized into n intervals of length Δt = T/n.

b) Trust dynamics are imposed as difference equations.

c) The continuous-time problem is reformulated as a nonlinear program that can be solved numerically using modern interior-point algorithms.

This approach ensures tractability and provides accurate approximations to the optimal deception path.

4. Numerical Optimization Approach and Illustrations

We discretize the horizon [0, T] into n intervals of length Δt = T/n with grid points t_0, t_1, …, t_n. The decision variables are the deception levels U_t and the state variables are the trust levels S_t. Trust dynamics are imposed as first-order difference constraints:

S_{t+1} = S_t + Δt(c − αU_t − δS_t),   S_0 given   (3)

The discounted objective is approximated by a Riemann sum:

max_{U_t} Σ_{t=0}^{n−1} e^(−ρtΔt) (S_t + γS_t f(U_t)) Δt,   (4)

where in the baseline case f(U_t) = √U_t.

We solve the discretized nonlinear program (4) subject to the recursion (3), bounds 0 ≤ U_t ≤ 1, and nonnegativity of trust. We employ Mathematica’s NMaximize with a primal-dual barrier method, which ensures iterates remain feasible while converging to a Karush-Kuhn-Tucker (KKT) point. To aid convergence, we use continuation in the incentive parameter γ, gradually increasing its value.

Convergence checks include:

a) feasibility of the trust dynamics (3),

b) stability of the objective under grid refinement, and

c) complementary slackness at active bounds.

Baseline parameters are α = 0.1, δ = 0.03, c = 0.2, ρ = 0.03, S0 = 1, T = 100, n = 50. This direct transcription approach follows standard treatments in dynamic optimization and optimal control (see Betts, 2010; Rao, 2009; Dockner et al., 2000).

Figure 1 illustrates these baseline trajectories. With low incentives (γ = 0, 0.5), deception effort remains negligible, and trust is preserved. As incentives increase, lying rises when trust is abundant but declines as trust erodes, producing front-loaded dishonesty. Trust erosion itself acts as a natural cap on sustained deception.

Figure 1. Optimal deception and trust trajectories.

4.1. Extensions: Capturing Episodic Deception

The baseline solutions in Figure 1 vary smoothly, but in practice deception is often episodic. To capture this, we introduce three refinements:

1) Effort budget constraint. Advisors face a finite deception capacity B, limiting cumulative deception across the horizon: Σ_{t=0}^{n−1} U_t Δt ≤ B. This discourages continuous moderate lying and pushes effort into bursts when trust is high.

2) Anti-middle penalty. Behavioral evidence suggests lying is deployed in an all-or-nothing fashion. We add a penalty term, λ Σ_{t=0}^{n−1} U_t(1 − U_t), which vanishes at U_t = 0 and U_t = 1 but penalizes intermediate deception, reinforcing episodic bursts.

This “anti-middle” penalty is supported by extensive evidence in behavioral economics and psychology showing that individuals often avoid moderate levels of dishonesty and instead gravitate toward either full honesty or more substantial misreporting. Gneezy (2005) documents that even small lies generate moral and psychological costs, creating a threshold effect in lying behavior. Complementing this, Ariely, Loewenstein, and Prelec (2006) show that people manage their self-concept by maintaining a balance between economic gain and moral identity, making partial dishonesty especially aversive. Once individuals cross that moral boundary, however, larger lies can become psychologically easier—a mechanism further reinforced by Ariely and Shalvi (2013), who demonstrate that moral disengagement reduces incremental psychological costs at higher deception levels. Together, these findings provide strong empirical grounding for our modeling assumption that intermediate deception levels carry disproportionately high subjective costs, naturally producing the episodic all-or-nothing patterns predicted by the model.

3) Alternative gain specification. In the baseline, returns are concave, γS(t)√U(t), encouraging interior solutions. To amplify episodicity, we also consider linear returns, γS(t)U(t), which favor concentrated effort. Under these refinements, deception emerges in bursts: lying switches on sharply when trust is high, persists briefly, and shuts down as trust erodes. Trust erosion acts as a natural cap on sustained dishonesty.
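All three refinements can be bolted onto the same discretization. The sketch below (again Python with SciPy rather than the original Mathematica setup; the budget B and penalty weight λ are illustrative choices, not calibrated values) adds the effort budget as a linear constraint, subtracts the anti-middle penalty, and uses the linear gain specification:

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

# Baseline parameters from the text
c, alpha, delta, rho = 0.2, 0.1, 0.03, 0.03
S0, T, n = 1.0, 100.0, 50
dt = T / n

def neg_objective(U, gamma, lam):
    # Trust recursion (3), rolled forward inside the objective
    S = np.empty(n + 1)
    S[0] = S0
    for t in range(n):
        S[t + 1] = S[t] + dt * (c - alpha * U[t] - delta * S[t])
    disc = np.exp(-rho * dt * np.arange(n))
    gain = (disc * (S[:n] + gamma * S[:n] * U) * dt).sum()  # linear returns
    penalty = lam * (U * (1.0 - U)).sum()                   # anti-middle penalty
    return -(gain - penalty)

B = 5.0  # illustrative cumulative deception budget
budget = LinearConstraint(np.full((1, n), dt), 0.0, B)      # sum_t U_t*dt <= B

# The penalty makes the problem nonconvex; in practice one would multi-start
# or use continuation in gamma, as the text suggests.
res = minimize(neg_objective, x0=np.full(n, 0.04), args=(2.0, 1.0),
               bounds=[(0.0, 1.0)] * n, constraints=[budget],
               method="SLSQP", options={"maxiter": 500})
U_opt = res.x
```

With the budget binding and the penalty active, the solver concentrates deception into a few near-full-intensity periods rather than spreading it thinly, which is the episodic pattern discussed next.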

Figure 2 illustrates these episodic dynamics. Panel A shows U(t) spiking at discrete intervals; Panel B shows jagged declines in S(t) aligned with those bursts. Compared to Figure 1, deception is no longer smooth but clustered when trust is abundant.

Figure 2. Episodic optimal deception and trust trajectories. (A). Optimal lying effort U(t)—episodic extension. (B). Trust S(t)—episodic extension.

Panels A and B plot optimal deception effort U(t) and trust S(t) under the episodic refinements, for γ = 0 (red), 0.50 (blue), 1 (green), 2 (purple), and 3 (magenta).

- Low incentives (γ = 0, 0.5). Deception is negligible, and trust remains high.

- Moderate incentives (γ = 1). Lying appears in short bursts early, depleting trust modestly before stabilizing.

- High incentives (γ = 2). Deception spikes more sharply, causing rapid trust decline followed by stabilization at a low level.

- Very high incentives (γ = 3). Lying is highly opportunistic: bursts occur abruptly and intensely, then vanish. Trust collapses quickly and stabilizes at a minimal level.

Together, these results show episodic dishonesty as the natural outcome of constrained deception capacity and strong incentives. Bursts exploit high trust while it is available, and erosion then shuts down further lying.

4.2. Measuring Lying Episode Lengths

We define an episode as a maximal consecutive set of periods with U_t > 0. Figure 3(A) shows how episode duration varies with the incentive intensity γ. Average (and maximum) episode length increases monotonically with γ: for γ = 0 episodes are absent; as γ rises, lying consolidates into fewer, longer bursts. In our episodic capacity setup, total time spent lying is constrained by the effort budget B; stronger incentives mainly reallocate that fixed capacity into longer, more concentrated episodes.

Figure 3(A) is a line plot of average and maximum episode length against γ ∈ {0, 0.5, 1, 2, 3}.
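The episode statistic is straightforward to compute from a solved deception path. A minimal helper (the function name and tolerance are ours):

```python
import numpy as np

def episode_lengths(U, tol=1e-6):
    """Lengths of maximal consecutive runs of periods with U_t > tol."""
    lengths, run = [], 0
    for u in np.asarray(U):
        if u > tol:
            run += 1
        elif run:
            lengths.append(run)
            run = 0
    if run:  # the path may end mid-episode
        lengths.append(run)
    return lengths

# Two episodes, of lengths 3 and 2:
print(episode_lengths([0, .8, .9, .7, 0, 0, .6, .5, 0]))  # [3, 2]
```

Applying `episode_lengths` to the optimal paths for each γ, then averaging and taking the maximum, yields exactly the quantities plotted in Figure 3(A).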

4.3. Environmental Persistence

We next introduce a two-state environment (Low/High). High states make deception more tempting; persistence is parameterized by P(H→H) and P(L→L).
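The Low/High environment can be sketched as a two-state Markov chain with the stated persistence probabilities (the function name and the choice of initial state are ours; in the full experiment, the High state would scale up the incentive γ in each period):

```python
import numpy as np

def simulate_environment(n, p_HH, p_LL, seed=0):
    """Sample n periods of a two-state (Low/High) Markov chain.

    p_HH = P(H -> H) and p_LL = P(L -> L) parameterize persistence;
    values near 1 produce long runs of the same state, while values
    near 0.5 produce frequent switching.
    """
    rng = np.random.default_rng(seed)
    state, path = "H", []
    for _ in range(n):
        path.append(state)
        if state == "H":
            state = "H" if rng.random() < p_HH else "L"
        else:
            state = "L" if rng.random() < p_LL else "H"
    return path

# A perfectly persistent High state never switches:
print(set(simulate_environment(50, 1.0, 1.0)))  # {'H'}
```

Sweeping p_HH and p_LL over a grid, re-solving the deception problem along each sampled environment path, and recording average episode length produces the surfaces in Figure 3(B).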

Figure 3(B) (a grid of 3D surfaces) introduces environmental persistence. When high-demand states are persistent, episodes lengthen dramatically under strong incentives. Frequent switching shortens deception. Persistence and incentives interact: γ magnifies the impact of stability, producing sustained lying when opportunities remain stable.

Figure 3. (A) Average lying episode length vs. incentive intensity; (B) Lying episode lengths vs transition probabilities.

Summary. Across Figures 1-3, three insights emerge:

1) Baseline dishonesty is smooth and front-loaded (Figure 1).

2) Structural refinements generate bursty episodes that mirror real dishonesty (Figure 2).

3) Episodes lengthen with incentives and persistence, showing how external environments amplify internal motives to deceive (Figure 3).

These findings highlight trust erosion as a natural brake but also reveal vulnerabilities: when incentives are high and environments stable, dishonesty becomes prolonged and opportunistic.

5. Discussion and Implications

The numerical analysis highlights that dishonesty is episodic and opportunistic rather than continuous. This insight has broad implications for advisory relationships, where trust acts as both a resource and a constraint. We organize the discussion around three central findings—episodicity, incentives, and environmental persistence—and then extend the implications to supply chain settings.

5.1. Episodicity as a Natural Outcome

In the baseline model (Figure 1), deception appears as smooth, front-loaded effort. However, when capacity limits and behavioral penalties are introduced (Figure 2), lying emerges in distinct bursts. Advisors exploit high trust briefly and then stop once credibility erodes. This episodic pattern is consistent with empirical evidence that deception often occurs in concentrated episodes rather than gradual drifts. For managers, this suggests that dishonesty should not be expected to rise smoothly with incentives—it will instead cluster in critical moments when trust is abundant. Monitoring systems must therefore be sensitive to bursts of activity rather than average levels.

5.2. Incentives as a Double-Edged Sword

Stronger incentives make lying more attractive but also prolong deceptive episodes. Figure 3(A) demonstrates that average episode length increases monotonically with incentive intensity γ. In financial advising, tying compensation too strongly to short-term performance may inadvertently encourage sustained misrepresentation. In supply chains, performance-based contracts may incentivize exaggeration of demand forecasts. The challenge is to calibrate incentives so that they motivate effort without creating extended opportunities for dishonesty.

5.3. Environmental Persistence as an Amplifier

Environmental conditions interact with incentives to shape deception dynamics. As Figure 3(B) shows, when high-demand states are persistent, deceptive episodes become substantially longer, especially under strong incentives. In volatile environments, dishonesty is self-limiting because trust erodes quickly and opportunities dissipate. But in stable settings, dishonesty can persist and accumulate. This implies that context matters as much as incentives: preventive mechanisms such as auditing, advisor rotation, or third-party verification are especially critical in stable environments.

Each proposed managerial intervention directly targets a specific parameter or dynamic in the model. Auditing increases the effective psychological or economic cost of deception, corresponding to a higher marginal cost of U(t) (for example, a larger penalty weight λ); this reduces the incentive for episodic lying by making high-intensity deception less attractive. Advisor rotation works through a different channel: it effectively resets or boosts the trust state S(t) by introducing a new advisor who is not burdened by prior erosion, thereby counteracting the model’s natural decay process governed by δ. Rotation also weakens the perceived persistence of the environment (captured in the discounting of future trust gains), reducing incentives to cluster deception when trust is abundant. Transparency mechanisms, such as shared forecasting dashboards, reduce the advisor’s relative influence on S(t) by lowering the marginal gain from deception γ, thereby flattening the reward function associated with dishonest behavior. Together, these interventions show how managerial levers can be mapped directly onto core structural elements of the model, providing a practical toolkit for mitigating episodic opportunism.

5.4. Asymmetry in Trust Erosion and Recovery

Our results also reinforce the well-known asymmetry in trust: dishonesty erodes credibility rapidly, while recovery is slow and partial. Even a brief episode of deception leaves lasting scars, with trust stabilizing at lower levels. This fragility suggests that organizations should prioritize proactive trust-building measures (e.g., transparency, verification systems) rather than relying on post-crisis repair.

5.5. Supply Chain Implications

Our findings connect directly to research on information sharing in supply chains. Özer, Zheng, and Chen (2011) show that trust in forecast accuracy affects coordination and efficiency. However, most supply chain models assume static conditions or repeated interactions with fixed strategies. By contrast, our dynamic framework reveals how episodic dishonesty can emerge endogenously and persist when incentives and environmental stability align. For example, in a supply chain contract, a supplier may exaggerate forecasts during stable demand periods to secure favorable terms, only to revert to honesty once credibility is depleted. This episodic perspective highlights the need for dynamic monitoring and contract designs that adapt to changing credibility levels over time.

5.6. Bridging Analytics and Behavior

Although our model is stylized, its predictions resonate with behavioral research: episodic lying, incentive-driven dishonesty, and asymmetric trust dynamics are all observed empirically. Embedding these behavioral patterns in an optimal control framework provides a rigorous lens to derive comparative statics and managerial insights. The integration of behavioral evidence with dynamic optimization thus advances both theory and practice, offering new ways to design incentives, monitor dishonesty, and preserve trust.

A limitation of the present framework is its deterministic treatment of trust erosion, which does not capture stochastic discovery of deception or sudden, catastrophic trust collapse—both of which may arise in real advisory relationships.

To guide future empirical work, the model yields several testable predictions. First, the duration and intensity of deceptive episodes should increase when the advisor’s marginal incentive for misrepresentation is higher (i.e., higher γ), particularly when trust S(t) is abundant. Second, environments with stronger “anti-middle” psychological penalties should exhibit a more bimodal distribution of deception levels—characterized by long periods of full honesty punctuated by concentrated bursts of high-intensity lying. These hypotheses offer a structured path for behavioral and field experiments to empirically validate the model’s key mechanisms.

6. Conclusion

This paper developed a dynamic model of deception and trust, in which an advisor chooses the intensity of dishonesty over time while trust evolves endogenously. The model captures the fundamental trade-off: deception yields short-term gains but accelerates the erosion of trust, which is itself the basis for future value. By combining analytic structure with numerical simulations, we characterize the optimal patterns of deception and the resulting trust trajectories.

Our results highlight four robust insights. First, dishonesty is episodic, concentrated when trust is abundant. Second, the duration of lying episodes grows with incentive strength, as higher rewards make sustained dishonesty worthwhile. Third, environmental persistence amplifies deception by making opportunities consistently available. Fourth, trust erodes faster than it recovers, underscoring its fragility as a resource. Together, these insights provide a foundation for understanding how incentives and context shape credibility in advisory and contractual relationships.

The analysis opens several avenues for future work. One direction is empirical testing: laboratory and field experiments could examine whether deception indeed follows episodic patterns and whether recovery of trust is asymmetric. Another is model extension: incorporating multiple advisors, endogenous switching of clients, or richer forms of reputational dynamics. Ultimately, bridging dynamic optimization with behavioral evidence can deepen our understanding of how trust is built, eroded, and sustained in organizations and markets.

Acknowledgements

I am deeply grateful to Professor Ernan Haruvy, who first inspired the central idea of this paper and provided insightful comments throughout its development. His intellectual generosity and thoughtful guidance contributed greatly to sharpening both the model and its behavioral interpretation.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Ariely, D. (2012). The (Honest) Truth about Dishonesty: How We Lie to Everyone-Especially Ourselves. HarperCollins.
[2] Ariely, D., & Shalvi, S. (2013). The Dark Side of Creativity: Original Thinkers Can Be More Dishonest. Psychological Science, 22, 13-17.
[3] Ariely, D., Loewenstein, G., & Prelec, D. (2006). What’s the Right Price? A Psychological Perspective on Pricing. Journal of Marketing Research, 43, 190-195.
[4] Betts, J. T. (2010). Practical Methods for Optimal Control and Estimation Using Nonlinear Programming (2nd ed.). SIAM.
[5] Boles, T. L., Croson, R. T. A., & Murnighan, J. K. (2000). Deception and Retribution in Repeated Ultimatum Bargaining. Organizational Behavior and Human Decision Processes, 83, 235-259.
[6] Cabral, L. M. B. (2000). Stretching Firm and Brand Reputation. The RAND Journal of Economics, 31, 658-673.
[7] Charness, G., & Dufwenberg, M. (2006). Promises and Partnership. Econometrica, 74, 1579-1601.
[8] Dockner, E. J., Jorgensen, S., Long, N. V., & Sorger, G. (2000). Differential Games in Economics and Management Science. Cambridge University Press.
[9] Ellingsen, T., & Johannesson, M. (2004). Promises, Threats and Fairness. The Economic Journal, 114, 397-420.
[10] Fischbacher, U., & Föllmi-Heusi, F. (2013). Lies in Disguise—An Experimental Study on Cheating. Journal of the European Economic Association, 11, 525-547.
[11] Gneezy, U. (2005). Deception: The Role of Consequences. American Economic Review, 95, 384-394.
[12] Levin, J. (2003). Relational Incentive Contracts. American Economic Review, 93, 835-857.
[13] Levine, E. E., & Schweitzer, M. E. (2015). Prosocial Lies: When Deception Breeds Trust. Organizational Behavior and Human Decision Processes, 126, 88-106.
[14] Nerlove, M., & Arrow, K. J. (1962). Optimal Advertising Policy under Dynamic Conditions. Economica, 29, 129-142.
[15] Özer, Ö., & Zheng, Y. (2017). Trust, Trustworthiness, and Information Sharing in Supply Chains. Management Science, 63, 1110-1130.
[16] Özer, Ö., Subramanian, U., & Wang, Y. (2018). Information Sharing, Advice Provision, or Delegation: What Leads to Higher Trust and Trustworthiness? Management Science, 64, 474-493.
[17] Özer, Ö., Zheng, Y., & Chen, K. (2011). Trust in Forecast Information Sharing. Management Science, 57, 1111-1137.
[18] Rao, A. V. (2009). A Survey of Numerical Methods for Optimal Control. Advances in the Astronautical Sciences, 135, 497-528.
[18] Rao, A. V. (2009). A Survey of Numerical Methods for Optimal Control. Advances in the Astronautical Sciences, 135, 497-528.

Copyright © 2025 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.