bibtype J - Journal Article
ARLID 0581903
utime 20240402215121.5
mtime 20240125235959.9
SCOPUS 85166914975
WOS 001064515700001
DOI 10.1016/j.ijar.2023.108990
title (primary) (eng) Structural learning of mixed noisy-OR Bayesian networks
specification
page_count 18 pp.
media_type P
serial
ARLID cav_un_epca*0256774
ISSN 0888-613X
title International Journal of Approximate Reasoning
volume_id 161
publisher
name Elsevier
keyword Bayesian networks
keyword Learning Bayesian networks
keyword Linguistics
keyword Loanwords
author (primary)
ARLID cav_un_auth*0101228
name1 Vomlel
name2 Jiří
institution UTIA-B
full_dept (cz) Matematická teorie rozhodování
full_dept (eng) Department of Decision Making Theory
department (cz) MTR
department (eng) MTR
full_dept Department of Decision Making Theory
share 33
fullinstit Ústav teorie informace a automatizace AV ČR, v. v. i.
author
ARLID cav_un_auth*0216188
name1 Kratochvíl
name2 Václav
institution UTIA-B
full_dept (cz) Matematická teorie rozhodování
full_dept (eng) Department of Decision Making Theory
department (cz) MTR
department (eng) MTR
full_dept Department of Decision Making Theory
country CZ
share 33
garant K
fullinstit Ústav teorie informace a automatizace AV ČR, v. v. i.
author
ARLID cav_un_auth*0414315
name1 Kratochvíl
name2 F.
country CZ
share 33
source
url http://library.utia.cas.cz/separaty/2024/MTR/vomlel-0581903.pdf
source
url https://www.sciencedirect.com/science/article/pii/S0888613X23001214?via%3Dihub
cas_special
project
project_id GA20-18407S
agency GA ČR
ARLID cav_un_auth*0397557
abstract (eng) In this paper we discuss learning Bayesian networks whose conditional probability tables are either Noisy-OR models or general conditional probability tables. We refer to these models as Mixed Noisy-OR Bayesian Networks. To learn their structure, we modify the Bayesian Information Criterion used for standard Bayesian networks to reflect the number of parameters of a Noisy-OR model. We prove that the log-likelihood function of a Noisy-OR model has a unique maximum and adapt the EM learning method for the leaky Noisy-OR model. We propose a structure learning algorithm that learns optimal Mixed Noisy-OR Bayesian Networks. We evaluate the proposed approach on synthetic data, where it performs substantially better than standard Bayesian networks. We perform experiments with Bipartite Noisy-OR Bayesian Networks of different complexity to find out when Mixed Noisy-OR Bayesian Networks yield significantly better results than standard Bayesian networks and when the two perform similarly. We also study how different penalties based on the number of model parameters affect the quality of the results. Finally, we apply the suggested approach to a problem from the domain of linguistics. Specifically, we use Mixed Noisy-OR Bayesian Networks to model the spread of loanwords in the South-East Asian Archipelago. We perform numerical experiments in which we compare the predictive ability of standard Bayesian networks with that of Mixed Noisy-OR Bayesian Networks, and we test different pruning methods to reduce the number of parent sets considered.
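note (eng) The abstract's central modeling point is the parameter saving of a (leaky) Noisy-OR conditional probability table over a general one, which is what the modified BIC score reflects. The following minimal Python sketch illustrates this, assuming binary variables; the function names and the exact penalty form are illustrative assumptions, not the authors' implementation.
    import numpy as np

    def noisy_or_prob(parent_states, inhibitions, leak=0.0):
        # P(child = 1 | parents) for a leaky Noisy-OR model.
        # parent_states: 0/1 vector of parent values
        # inhibitions:   q_i = P(child = 0 | only parent i is active)
        # leak:          leak probability (0 gives a pure Noisy-OR)
        q = np.asarray(inhibitions, dtype=float)
        x = np.asarray(parent_states, dtype=int)
        return 1.0 - (1.0 - leak) * np.prod(q ** x)

    def n_parameters(k, noisy_or, leaky=True):
        # Free parameters of a binary-child CPT with k binary parents:
        # one inhibition per parent (plus a leak term) for a Noisy-OR model,
        # versus one probability per parent configuration for a general CPT.
        return (k + (1 if leaky else 0)) if noisy_or else 2 ** k

    def bic_score(log_likelihood, n_params, n_samples):
        # BIC-style score: log-likelihood penalized by model size.
        return log_likelihood - 0.5 * n_params * np.log(n_samples)

    # Example: with 10 binary parents, a leaky Noisy-OR CPT has 11 parameters,
    # while a general CPT has 1024, so the size penalty differs substantially.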
result_subspec WOS
RIV IN
FORD0 20000
FORD1 20200
FORD2 20202
reportyear 2024
num_of_auth 3
inst_support RVO:67985556
permalink https://hdl.handle.net/11104/0350580
confidential S
article_num 108990
mrcbC91 C
mrcbT16-e COMPUTERSCIENCEARTIFICIALINTELLIGENCE
mrcbT16-j 0.75
mrcbT16-s 0.877
mrcbT16-D Q3
mrcbT16-E Q2
arlyear 2023
mrcbU14 85166914975 SCOPUS
mrcbU24 PUBMED
mrcbU34 001064515700001 WOS
mrcbU63 cav_un_epca*0256774 International Journal of Approximate Reasoning 161 1 2023 0888-613X 1873-4731 Elsevier