bibtype J - Journal Article
ARLID 0502902
utime 20240903170642.1
mtime 20190314235959.9
SCOPUS 85064190063
WOS 000457070200009
DOI 10.14736/kyb-2018-6-1218
title (primary) (eng) Risk-sensitive Average Optimality in Markov Decision Processes
specification
page_count 13 s.
media_type P
serial
ARLID cav_un_epca*0297163
ISSN 0023-5954
title Kybernetika
volume_id 54
volume 6 (2018)
page_num 1218-1230
publisher
name Ústav teorie informace a automatizace AV ČR, v. v. i.
keyword controlled Markov processes
keyword risk-sensitive average optimality
keyword asymptotic behavior
author (primary)
ARLID cav_un_auth*0101196
full_dept Department of Econometrics
share 100%
name1 Sladký
name2 Karel
institution UTIA-B
full_dept (cz) Ekonometrie
full_dept (eng) Department of Econometrics
department (cz) E
department (eng) E
garant K
fullinstit Ústav teorie informace a automatizace AV ČR, v. v. i.
source
url http://library.utia.cas.cz/separaty/2019/E/sladky-0502902.pdf
cas_special
project
ARLID cav_un_auth*0363963
project_id GA18-02739S
agency GA ČR
abstract (eng) In this note attention is focused on finding policies optimizing risk-sensitive optimality criteria in Markov decision chains. To this end we assume that the total reward generated by the Markov process is evaluated by an exponential utility function with a given risk-sensitive coefficient. The ratio of the first two moments depends on the value of the risk-sensitive coefficient; if the risk-sensitive coefficient equals zero, we speak of risk-neutral models. Observe that the first moment of the generated reward corresponds to the expectation of the total reward, and the second central moment to the variance of the reward. For communicating Markov processes, and for some specific classes of unichain processes, the long-run risk-sensitive average reward is independent of the starting state. In this note we present a necessary and sufficient condition for the existence of optimal policies independent of the starting state in unichain models and characterize the class of risk-sensitive average optimal policies.
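abstract_note The exponential-utility criterion described in the abstract can be illustrated numerically. For a fixed policy on an irreducible finite chain, the long-run risk-sensitive average reward equals (1/γ) log ρ(Q), where Q_ij = P_ij exp(γ r_ij) and ρ is the Perron eigenvalue; as γ → 0 this recovers the risk-neutral expected average reward. The sketch below assumes a hypothetical two-state chain with illustrative transition matrix P and rewards r that are not taken from the paper.

```python
import numpy as np

# Hypothetical 2-state chain (illustrative values only, not from the paper).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])   # transition probabilities P_ij
r = np.array([[2.0, 0.0],
              [1.0, 3.0]])   # one-step reward r(i, j) on the jump i -> j

def stationary_distribution(P):
    """Left Perron eigenvector of P, normalized to a probability vector."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

def risk_sensitive_average_reward(P, r, gamma):
    """Long-run risk-sensitive average reward for a fixed policy.

    The growth rate of E[exp(gamma * S_n)], S_n the accumulated reward,
    is log rho(Q) with Q_ij = P_ij * exp(gamma * r_ij); dividing by
    gamma gives the certainty-equivalent average reward per step.
    """
    if abs(gamma) < 1e-12:                       # risk-neutral limit
        pi = stationary_distribution(P)
        return float(pi @ (P * r).sum(axis=1))   # expected reward per step
    Q = P * np.exp(gamma * r)
    rho = max(abs(np.linalg.eigvals(Q)))         # Perron eigenvalue
    return float(np.log(rho) / gamma)

# A risk-averse coefficient (gamma < 0) lowers the certainty equivalent
# below the risk-neutral average; a risk-seeking one (gamma > 0) raises it.
for g in (-0.5, 0.0, 0.5):
    print(g, risk_sensitive_average_reward(P, r, g))
```

For this chain the stationary distribution is (4/7, 3/7) and the risk-neutral average reward is 61/35 ≈ 1.743; the certainty equivalent is monotone in γ because the per-step reward has positive long-run variance.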
action
ARLID cav_un_auth*0373581
name Mathematical Methods in Economy and Industry 2017
dates 20170904
mrcbC20-s 20170906
place Jindřichův Hradec
country CZ
result_subspec WOS
RIV BB
FORD0 10000
FORD1 10100
FORD2 10103
reportyear 2019
num_of_auth 1
inst_support RVO:67985556
permalink http://hdl.handle.net/11104/0295273
confidential S
mrcbC86 1 Article|Proceedings Paper Computer Science Cybernetics
mrcbT16-e COMPUTERSCIENCECYBERNETICS
mrcbT16-j 0.174
mrcbT16-s 0.268
mrcbT16-B 15.991
mrcbT16-D Q4
mrcbT16-E Q3
arlyear 2018
mrcbU14 85064190063 SCOPUS
mrcbU24 PUBMED
mrcbU34 000457070200009 WOS
mrcbU63 cav_un_epca*0297163 Kybernetika 0023-5954 Roč. 54 č. 6 2018 1218 1230 Ústav teorie informace a automatizace AV ČR, v. v. i.