
Volume 03 (1999) No. 1


Loop spaces of configuration spaces

Samuel Gitler

Abstract:

This paper gives a brief introduction to the theory of configuration spaces. Some recent results about the homology of their loop spaces are also presented. We also discuss their relationship to Vassiliev invariants for knots.


A direct approach to Blackwell optimality

Rolando Cavazos Cadena and Jean B. Lasserre

Abstract:

This work concerns discrete-time Markov decision processes (MDP's) with denumerable state space and bounded rewards. The main objective is to show that the problem of establishing the existence of Blackwell optimal policies can be approached via well-known techniques in the theory of MDP's endowed with the average reward criterion. The main result can be summarized as follows: Assuming that the Simultaneous Doeblin Condition and mild continuity conditions are satisfied, it is shown that a policy $\pi^*$ is Blackwell optimal if, and only if, the actions prescribed by $\pi^*$ maximize the right-hand side of the average reward optimality equations associated with a suitably defined sequence of MDP's. In contrast with the usual approach, this result is obtained by using standard techniques and does not involve Laurent series expansions.
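For context, the average reward optimality equation mentioned in the abstract typically takes the following form for an MDP with reward $r$, admissible action sets $A(x)$, and transition law $p$ (standard notation, not taken from the paper):

$$g + h(x) = \max_{a \in A(x)} \Big\{ r(x,a) + \sum_{y} p(y \mid x,a)\, h(y) \Big\},$$

where $g$ is the optimal average reward and $h$ is a relative value (bias) function; the paper characterizes Blackwell optimal policies via the maximizers on the right-hand side of a sequence of such equations.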


Each copy of the real line in $\mathbb{C}^2$ is removable

Eduardo Santillán Zerón

Abstract:

In 1994, Professors E. M. Chirka, E. L. Stout and G. Lupacciolu showed that a closed subset of $\mathbb{C}^n$ $(n \ge 2)$ is removable for holomorphic functions if its cover dimension is less than or equal to $n-2$. They also asked whether closed subsets of $\mathbb{C}^2$ homeomorphic to the real line (the simplest 1-dimensional sets) are removable for holomorphic functions. In this paper we propose a positive answer to that question.


Adaptive policies for discrete-time Markov control processes with unbounded costs: Average and discounted criteria

Adolfo Minjárez Sosa

Abstract:

We consider a class of discrete-time Markov processes with Borel state and action spaces, and possibly unbounded costs. The processes evolve according to the system equation $x_{t+1} = F(x_t, a_t, \xi_t)$, $t = 1, 2, \ldots$, with i.i.d. $\mathbb{R}^k$-valued random vectors $\xi_t$, whose density $\rho$ is unknown. Assuming observability of $\{\xi_t\}$, we introduce two adaptive policies which are, respectively, asymptotically discounted cost optimal and average cost optimal.
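A system of the kind the abstract describes can be simulated in a few lines. The transition map `F` and the stationary policy below are hypothetical illustrations (not from the paper), chosen only to show the roles of $x_t$, $a_t$, and the i.i.d. disturbance $\xi_t$ in $x_{t+1} = F(x_t, a_t, \xi_t)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def F(x, a, xi):
    # Hypothetical transition map: a stable linear system with additive noise.
    return 0.9 * x + a + xi

x = 0.0                   # initial state x_1
trajectory = [x]
for t in range(5):
    a = -0.5 * x          # a hypothetical stationary policy a_t = f(x_t)
    xi = rng.normal()     # i.i.d. disturbance xi_t (here scalar, i.e. k = 1)
    x = F(x, a, xi)       # x_{t+1} = F(x_t, a_t, xi_t)
    trajectory.append(x)
```

In the adaptive setting of the paper the density $\rho$ of $\xi_t$ is unknown and must be estimated from the observed disturbances; here the noise is simply drawn from a known Gaussian for illustration.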


Sample path average cost optimality for a class of production-inventory systems

Oscar Vega Amaya

Abstract:

We show the existence of sample-path average cost (SPAC-)optimal policies for a class of production-inventory systems with uncountable state space and strictly unbounded one-step costs, that is, costs that grow without bound outside compact subsets. In fact, for a specific case, we show that a $K^*$-threshold policy is both expected and sample-path average cost optimal, where the constant $K^*$ can be easily computed by solving a static optimization problem, which, in turn, is obtained directly from the basic data of the production-inventory system.
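A threshold (base-stock) rule of the kind named in the abstract is simple to state in code. This is a generic sketch, not the paper's construction; the function name and the sample numbers are illustrative:

```python
def threshold_policy(x, K_star):
    """K*-threshold rule: if the inventory level x is below the
    threshold K_star, produce enough to raise it to K_star;
    otherwise produce nothing."""
    return max(K_star - x, 0.0)

print(threshold_policy(2.0, 5.0))  # inventory below K*: produce 3.0
print(threshold_policy(7.0, 5.0))  # inventory above K*: produce 0.0
```

The point of the paper's result is that the single constant $K^*$ defining such a rule can be obtained from a static optimization problem built directly from the system's basic data.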
