bibtype C - Conference Paper (international conference)
ARLID 0434119
utime 20240103204926.1
mtime 20141106235959.9
SCOPUS 84908887413
title (primary) (eng) Pattern Recognition by Probabilistic Neural Networks - Mixtures of Product Components versus Mixtures of Dependence Trees
specification
page_count 11 pp.
media_type P
serial
ARLID cav_un_epca*0434276
ISBN 978-989-758-054-3
title NCTA2014 - International Conference on Neural Computation Theory and Applications
page_num 65-75
publisher
place Rome
name SCITEPRESS
year 2014
keyword Probabilistic Neural Networks
keyword Product Mixtures
keyword Mixtures of Dependence Trees
keyword EM Algorithm
author (primary)
ARLID cav_un_auth*0101091
full_dept (cz) Rozpoznávání obrazu
full_dept (eng) Department of Pattern Recognition
department (cz) RO
department (eng) RO
full_dept Department of Pattern Recognition
share 70
name1 Grim
name2 Jiří
institution UTIA-B
fullinstit Ústav teorie informace a automatizace AV ČR, v. v. i.
author
ARLID cav_un_auth*0021092
share 30
name1 Pudil
name2 P.
country CZ
source
url http://library.utia.cas.cz/separaty/2014/RO/grim-0434119.pdf
cas_special
project
ARLID cav_un_auth*0303412
project_id GA14-02652S
agency GA ČR
country CZ
project
ARLID cav_un_auth*0308953
project_id GAP403/12/1557
agency GA ČR
country CZ
abstract (eng) We compare two probabilistic approaches to neural networks - the first one based on mixtures of product components and the second one using mixtures of dependence-tree distributions. Product mixture models can be efficiently estimated from data by means of the EM algorithm and have some practically important properties. However, in some cases the simplicity of product components may appear too restrictive, and a natural idea is to use a more complex mixture of dependence-tree distributions. By using the concept of a dependence tree we can explicitly describe the statistical relationships between pairs of variables at the level of individual components, and therefore the approximation power of the resulting mixture may increase essentially. Nonetheless, in an application to the classification of numerals we have found that both models perform comparably and that the contribution of the dependence-tree structures decreases in the course of the EM iterations. Thus the optimal estimate of the dependence-tree mixture tends to converge to a simple product mixture model.
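abstract_note The product mixture estimation mentioned in the abstract can be illustrated by a minimal sketch (not the authors' code): EM for a mixture of Bernoulli product components on binary data, where each component factorizes over variables. Function and parameter names are illustrative assumptions.

```python
import numpy as np

def em_product_mixture(X, n_components, n_iter=50, seed=0):
    """EM for a mixture of Bernoulli product distributions (illustrative sketch).

    X: (N, D) binary array. Returns mixing weights w (K,) and component
    parameters theta (K, D), where theta[k, d] = P(x_d = 1 | component k).
    """
    rng = np.random.default_rng(seed)
    N, D = X.shape
    K = n_components
    w = np.full(K, 1.0 / K)                  # uniform mixing weights
    theta = rng.uniform(0.25, 0.75, (K, D))  # random Bernoulli parameters

    for _ in range(n_iter):
        # E-step: log-likelihood of each point under each product component,
        # log p(x|k) = sum_d [ x_d log theta_kd + (1 - x_d) log(1 - theta_kd) ]
        log_p = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        log_q = np.log(w) + log_p
        log_q -= log_q.max(axis=1, keepdims=True)  # stabilize before exp
        q = np.exp(log_q)
        q /= q.sum(axis=1, keepdims=True)          # responsibilities q[n, k]

        # M-step: closed-form updates of weights and component parameters
        Nk = q.sum(axis=0)
        w = Nk / N
        theta = np.clip((q.T @ X) / Nk[:, None], 1e-6, 1 - 1e-6)

    return w, theta
```

Each EM iteration is a closed-form update, which is the practical advantage of product components noted in the abstract; a dependence-tree mixture would additionally re-fit a Chow-Liu-style spanning tree per component in the M-step.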
action
ARLID cav_un_auth*0308845
name 6th International Conference on Neural Computation Theory and Applications
dates 22.10.2014-24.10.2014
place Rome
country IT
RIV IN
reportyear 2015
num_of_auth 2
presentation_type PR
inst_support RVO:67985556
permalink http://hdl.handle.net/11104/0238366
confidential S
arlyear 2014
mrcbU14 84908887413 SCOPUS
mrcbU63 cav_un_epca*0434276 NCTA2014 - International Conference on Neural Computation Theory and Applications 978-989-758-054-3 65 75 Rome SCITEPRESS 2014