
Volume 11 (2007) No. 1


Dice games and stochastic dynamic programming

Henk Tijms

Abstract:

This paper uses dice games such as Pig and Hog to illustrate the powerful method of stochastic dynamic programming. Many students have difficulty understanding the concepts and the solution method of stochastic dynamic programming, but challenging dice games can greatly enhance this understanding and allow the essence of the method to be explained in a motivating way.

Download
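The game of Pig offers a compact dynamic-programming exercise along these lines. As a minimal sketch (not taken from the paper; the rules and the classic "hold at 20" analysis are standard), the following computes the expected banked score of a single Pig turn under a "hold at $k$" policy and searches for the best threshold:

```python
from functools import lru_cache

def expected_turn_score(hold_at):
    """Expected banked score of one Pig turn under the policy
    'keep rolling until the turn total reaches hold_at'.
    Rolling a 1 ends the turn with nothing; 2..6 add to the total."""
    @lru_cache(maxsize=None)
    def f(total):
        if total >= hold_at:
            return float(total)  # hold and bank the turn total
        # roll: with probability 1/6 the turn is lost (value 0),
        # otherwise continue from total + d for d in 2..6
        return sum(f(total + d) for d in range(2, 7)) / 6.0
    return f(0)

# search over thresholds; the classic answer is to hold at 20
best_k = max(range(1, 41), key=expected_turn_score)
```

The recursion is the dynamic-programming value function over turn totals; the one-step lookahead shows rolling gains $(20 - t)/6$ in expectation at total $t$, which is why the threshold lands at 20.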

Average optimality for semi-Markov control processes

Anna Jaskiewicz and Andrzej S. Nowak

Abstract:

This paper is a survey of some recent results on the average cost optimality equation for semi-Markov control processes. We assume that the embedded Markov chain is $V$-geometrically ergodic and show that there exists a solution to the average cost optimality equation as well as an ($\varepsilon$-)optimal stationary policy. Moreover, we prove the equivalence of two optimality criteria, ratio-average and time-average cost, in the sense that they lead to the same optimal costs and the same ($\varepsilon$-)optimal stationary policies.

Download
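For orientation, a standard form of the average cost optimality equation for semi-Markov control processes (the notation below is generic, not taken from the paper) asks for a constant $g$ and a function $h$ satisfying
$$h(x) = \inf_{a \in A(x)} \Big\{ c(x,a) - g\,\tau(x,a) + \int_X h(y)\, Q(dy \mid x, a) \Big\},$$
where $c(x,a)$ is the expected one-stage cost, $\tau(x,a)$ the mean sojourn time, and $Q$ the transition kernel of the embedded Markov chain; $g$ is then the optimal average cost, and a stationary policy attaining the infimum (up to $\varepsilon$) is ($\varepsilon$-)optimal.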

Real options on consumption in a small open monetary economy: a stochastic optimal control approach

Francisco Venegas-Martínez

Abstract:

This paper develops a stochastic model of a small open monetary economy in which risk-averse agents' expectations of the exchange-rate dynamics are driven by a mixed diffusion-jump process. The size of a possible exchange-rate depreciation is assumed to follow an extreme value distribution of the Fréchet type. Under this framework, an analytical solution for the price of the real option of waiting when consumption can be delayed (a claim that is not traded) is derived. Finally, a Monte Carlo simulation experiment is carried out to obtain numerical approximations of the real option price.

Download
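A Monte Carlo experiment of the kind the abstract mentions can be sketched as follows. This is an illustrative simulation only, not the paper's model or calibration: it draws Euler paths of a diffusion with Poisson jump arrivals whose depreciation sizes follow a Fréchet law, sampled by inverting the CDF $F(x) = e^{-x^{-\alpha}}$; all parameter values and scalings are assumptions.

```python
import math
import random

def simulate_rate(n_steps=252, T=1.0, s0=1.0, mu=0.02, sigma=0.1,
                  lam=0.5, alpha=3.0, seed=0):
    """One path of a mixed diffusion-jump exchange rate.
    Jump (depreciation) sizes follow a Frechet(alpha) law, sampled by
    inverting its CDF F(x) = exp(-x**(-alpha)).  All parameter values
    are illustrative, not taken from the paper."""
    rng = random.Random(seed)
    dt = T / n_steps
    s = s0
    for _ in range(n_steps):
        # diffusion part: exact log-normal step over dt
        dw = rng.gauss(0.0, math.sqrt(dt))
        s *= math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * dw)
        # jump part: Poisson arrival approximated on the time grid
        if rng.random() < lam * dt:
            u = 1.0 - rng.random()  # uniform on (0, 1], safe for log
            frechet = (-math.log(u)) ** (-1.0 / alpha)
            s *= 1.0 + 0.01 * frechet  # depreciation scaled to a percent move
    return s

# crude Monte Carlo estimate of the mean terminal exchange rate
terminal = [simulate_rate(seed=i) for i in range(2000)]
mean_rate = sum(terminal) / len(terminal)
```

With $\alpha > 2$ the Fréchet law has finite mean and variance, so the sample mean is a stable estimator; pricing the real option itself would additionally require discounting the payoff of waiting along each path.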