bibtype C - Conference Paper (international conference)
ARLID 0490309
utime 20240103220122.3
mtime 20180614235959.9
title (primary) (eng) Gradient Descent Parameter Learning of Bayesian Networks under Monotonicity Restrictions
specification
page_count 12 pp.
media_type P
serial
ARLID cav_un_epca*0490306
ISBN 978-80-7378-361-7
title Proceedings of the 11th Workshop on Uncertainty Processing (WUPES’18)
page_num 153-164
publisher
place Praha
name MatfyzPress, Publishing House of the Faculty of Mathematics and Physics Charles University
year 2018
editor
name1 Kratochvíl
name2 Václav
editor
name1 Vejnarová
name2 Jiřina
keyword Bayesian networks
keyword learning model parameters
keyword monotonicity condition
author (primary)
ARLID cav_un_auth*0329423
name1 Plajner
name2 Martin
full_dept (cz) Matematická teorie rozhodování
full_dept (eng) Department of Decision Making Theory
department (cz) MTR
department (eng) MTR
institution UTIA-B
country CZ
fullinstit Ústav teorie informace a automatizace AV ČR, v. v. i.
author
ARLID cav_un_auth*0101228
name1 Vomlel
name2 Jiří
full_dept (cz) Matematická teorie rozhodování
full_dept (eng) Department of Decision Making Theory
department (cz) MTR
department (eng) MTR
institution UTIA-B
country CZ
fullinstit Ústav teorie informace a automatizace AV ČR, v. v. i.
source
url http://library.utia.cas.cz/separaty/2018/MTR/plajner-0490309.pdf
cas_special
project
project_id GA16-12010S
agency GA ČR
country CZ
ARLID cav_un_auth*0332303
project
project_id SGS17/198/OHK4/3T/14
agency ČVUT
country CZ
ARLID cav_un_auth*0361640
abstract (eng) Learning the parameters of a probabilistic model is a necessary step in most machine learning modelling tasks. When the model is complex and the data volume is small, the learning process may fail to provide good results. In this paper we present a method that improves learning results for small data sets by using additional information about the modelled system. This additional information is represented by monotonicity conditions, which are restrictions on the parameters of the model. Monotonicity simplifies the learning process, and these conditions are often required by the user of the system to hold.

In this paper we present a generalization of the previously used algorithm for parameter learning of Bayesian networks under monotonicity conditions. This generalization allows both parents and children in the network to have multiple states. Both the algorithm and the monotonicity conditions are described in detail.

The presented algorithm is tested on two different data sets. Models are trained on differently sized data subsamples with the proposed method and with the general EM algorithm. The learned models are then compared by their ability to fit the data. We present empirical results showing the benefit of monotonicity conditions; the difference is especially significant when working with small data samples. The proposed method outperforms the EM algorithm on small data sets and provides comparable results for larger sets.
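The abstract describes the method only at a high level. As a hedged illustration of the kind of procedure it outlines (not the authors' implementation, which is given in the paper itself), the sketch below fits a single conditional probability table P(X | Y) by gradient descent with a penalty on monotonicity violations. The softmax reparameterization, the penalty based on first-order stochastic dominance of the conditional distributions, the numerical gradient, and all names and constants are illustrative assumptions.

```python
# Illustrative sketch (assumptions, not the paper's code): learn one CPT
# P(X | Y) by gradient descent under a monotonicity penalty.
import numpy as np

def softmax(w):
    """Row-wise softmax: each row of w encodes one conditional distribution."""
    e = np.exp(w - w.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def objective(w, counts, lam):
    """Penalized negative log-likelihood of the CPT P(X=x | Y=y).

    counts[y, x] -- number of training cases with Y=y and X=x.
    lam          -- weight of the monotonicity penalty (assumed form).
    """
    theta = softmax(w)                    # CPT rows, one per parent state
    nll = -np.sum(counts * np.log(theta + 1e-12))
    # Monotonicity (isotone in the parent): the conditional CDF of X must
    # not increase with the parent state, i.e. F(x|y+1) <= F(x|y).
    F = np.cumsum(theta, axis=1)[:, :-1]  # drop last column (always 1)
    violation = np.maximum(0.0, F[1:] - F[:-1])
    return nll + lam * np.sum(violation ** 2)

def numerical_grad(f, w, eps=1e-6):
    """Central-difference gradient; adequate for a CPT-sized parameter array."""
    g = np.zeros_like(w)
    for idx in np.ndindex(*w.shape):
        wp, wm = w.copy(), w.copy()
        wp[idx] += eps
        wm[idx] -= eps
        g[idx] = (f(wp) - f(wm)) / (2.0 * eps)
    return g

def fit_cpt(counts, lam=10.0, lr=0.05, steps=2000):
    """Plain gradient descent from a uniform initialization."""
    w = np.zeros_like(counts, dtype=float)
    for _ in range(steps):
        w -= lr * numerical_grad(lambda v: objective(v, counts, lam), w)
    return softmax(w)

# Toy data: 3-state parent, 3-state child; counts are sparse enough that
# the unpenalized maximum-likelihood estimate violates monotonicity.
counts = np.array([[8, 3, 1],
                   [2, 5, 2],
                   [3, 1, 6]], dtype=float)
print(fit_cpt(counts).round(3))
```

With lam = 0 this reduces to ordinary maximum-likelihood fitting of the CPT; increasing lam pulls the learned rows toward satisfying the dominance ordering, which mirrors the abstract's point that the restrictions matter most when counts per parent configuration are small.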
action
ARLID cav_un_auth*0361637
name Workshop on Uncertainty Processing (WUPES’18)
dates 20180606
place Třeboň
country CZ
mrcbC20-s 20180609
RIV JD
FORD0 10000
FORD1 10200
FORD2 10201
reportyear 2019
num_of_auth 2
inst_support RVO:67985556
permalink http://hdl.handle.net/11104/0284592
confidential S
arlyear 2018
mrcbU14 SCOPUS
mrcbU24 PUBMED
mrcbU34 WOS
mrcbU63 cav_un_epca*0490306 Proceedings of the 11th Workshop on Uncertainty Processing (WUPES’18) MatfyzPress, Publishing House of the Faculty of Mathematics and Physics Charles University 2018 Praha 153 164 978-80-7378-361-7
mrcbU67 340 Kratochvíl Václav
mrcbU67 340 Vejnarová Jiřina