'Et moi, ..., si j'avais su comment en revenir, je n'y serais point allé.' ['And I, ..., had I known how to come back, I would never have gone.']
Jules Verne

'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".'
Eric T. Bell

'The series is divergent; therefore we may be able to do something with it.'
O. Heaviside

Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and non-linearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to Eric T. Bell's quote above, one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series.
Table of Contents
1 Semi-Markov and Markov Chains
1.1 Definitions and basic properties
1.2 Algebraic and analytical methods in the study of Markovian systems
1.3 Transient and recurrent processes
1.4 Markovian populations
1.5 Partially observable Markov chains
1.6 Rewards and discounting
1.7 Models and applications
1.8 Dynamic-decision models for clinical diagnosis
2 Dynamic and Linear Programming
2.1 Discrete dynamic programming
2.2 A linear programming formulation and an algorithm for computation
3 Utility Functions and Decisions under Risk
3.1 Informational lotteries and axioms for utility functions
3.2 Exponential utility functions
3.3 Decisions under risk and uncertainty; event trees
3.4 Probability encoding
4 Markovian Decision Processes (Semi-Markov and Markov) with Complete Information (Completely Observable)
4.1 Value iteration algorithm (the finite horizon case)
4.2 Policy iteration algorithm (the finite horizon optimization)
4.3 Policy iteration with discounting
4.4 Optimization algorithm using linear programming
4.5 Risk-sensitive decision processes
4.6 On eliminating sub-optimal decision alternatives in Markov and semi-Markov decision processes
5 Partially Observable Markovian Decision Processes
5.1 Finite horizon partially observable Markov decision processes
5.2 The infinite horizon with discounting for partially observable Markov decision processes
5.3 A useful policy iteration algorithm for discounted (discount factor < 1) partially observable Markov decision processes
5.4 The infinite horizon without discounting for partially observable Markov processes
5.5 Partially observable semi-Markov decision processes
5.6 Risk-sensitive partially observable Markov decision processes
6 Policy Constraints in Markov Decision Processes
6.1 Methods of investigating policy constraints in Markov decision processes
6.2 Markov decision processes with policy constraints
6.3 Risk-sensitive Markov decision processes with policy constraints
7 Applications
7.1 The emergency repair control for electrical power systems
7.2 Stochastic models for evaluation of inspection and repair schedules [2]
7.3 A Markovian decision model for clinical diagnosis and treatment applied to the respiratory system