<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="style/detail_T.xsl"?>
<bibitem type="J">   <ARLID>0425539</ARLID> <utime>20240103203939.7</utime><mtime>20140226235959.9</mtime>   <WOS>000342540700007</WOS> <SCOPUS>84894058260</SCOPUS>  <DOI>10.1016/j.ins.2014.01.048</DOI>           <title language="eng" primary="1">Approximate Bayesian recursive estimation</title>  <specification> <page_count>12 s.</page_count> <media_type>P</media_type> </specification>   <serial><ARLID>cav_un_epca*0256752</ARLID><ISSN>0020-0255</ISSN><title>Information Sciences</title><part_num/><part_title/><volume_id>285</volume_id><volume>1 (2014)</volume><page_num>100-111</page_num><publisher><place/><name>Elsevier</name><year/></publisher></serial>    <keyword>Approximate parameter estimation</keyword>   <keyword>Bayesian recursive estimation</keyword>   <keyword>Kullback–Leibler divergence</keyword>   <keyword>Forgetting</keyword>    <author primary="1"> <ARLID>cav_un_auth*0101124</ARLID> <name1>Kárný</name1> <name2>Miroslav</name2> <full_dept language="cz">Adaptivní systémy</full_dept> <full_dept language="eng">Department of Adaptive Systems</full_dept> <department language="cz">AS</department> <department language="eng">AS</department> <institution>UTIA-B</institution> <full_dept>Department of Adaptive Systems</full_dept>  <share>100</share> <fullinstit>Ústav teorie informace a automatizace AV ČR, v. v. i.</fullinstit> </author>   <source> <url>http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf</url> </source>        <cas_special> <project> <project_id>GA13-13502S</project_id> <agency>GA ČR</agency> <ARLID>cav_un_auth*0292725</ARLID> </project>  <abstract language="eng" primary="1">Bayesian learning provides a firm theoretical basis for the design and exploitation of algorithms in data-stream processing (preprocessing, change detection, hypothesis testing, clustering, etc.). Primarily, it relies on recursive parameter estimation of firmly bounded complexity.
As a rule, it has to approximate the exact posterior probability density (pd), which comprises unreduced information about the estimated parameter. In the recursive treatment of the data stream, the latest approximate pd is usually updated using the treated parametric model and the newest data, and then approximated again. The fact that approximation errors may accumulate over time is mostly neglected in the estimator design and, at most, checked ex post. The paper inspects the estimator design with respect to error accumulation and concludes that a sort of forgetting (pd flattening) is an indispensable part of reliable approximate recursive estimation.</abstract>     <reportyear>2015</reportyear>  <RIV>BB</RIV>      <num_of_auth>1</num_of_auth>  <unknown tag="mrcbC52"> 4 A 4a 20231122140119.7 </unknown> <inst_support> RVO:67985556 </inst_support>  <permalink>http://hdl.handle.net/11104/0231504</permalink>   <confidential>S</confidential>          <unknown tag="mrcbT16-e">COMPUTERSCIENCEINFORMATIONSYSTEMS</unknown> <unknown tag="mrcbT16-j">0.873</unknown> <unknown tag="mrcbT16-s">2.226</unknown> <unknown tag="mrcbT16-4">Q1</unknown> <unknown tag="mrcbT16-B">74.327</unknown> <unknown tag="mrcbT16-C">96.043</unknown> <unknown tag="mrcbT16-D">Q2</unknown> <unknown tag="mrcbT16-E">Q1*</unknown> <arlyear>2014</arlyear>    <unknown tag="mrcbTft">  Soubory v repozitáři: karny-0425539.pdf </unknown>    <unknown tag="mrcbU14"> 84894058260 SCOPUS </unknown> <unknown tag="mrcbU34"> 000342540700007 WOS </unknown> <unknown tag="mrcbU63"> cav_un_epca*0256752 Information Sciences 0020-0255 1872-6291 Roč. 285 č. 1 2014 100 111 Elsevier </unknown> </cas_special> </bibitem>