<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="style/detail_T.xsl"?>
<bibitem type="C">   <ARLID>0522204</ARLID> <utime>20240103223735.4</utime><mtime>20200214235959.9</mtime>              <title language="eng" primary="1">Orthogonal Approximation of Marginal Likelihood of Generative Models</title>  <specification> <page_count>9 s.</page_count> <media_type>E</media_type> </specification>   <serial><ARLID>cav_un_epca*0522203</ARLID><title>Bayesian Deep Learning NeurIPS 2019 Workshop</title><part_num/><part_title/><publisher><place>Vancouver</place><name>University of Oxford Computer Science department</name><year>2019</year></publisher></serial>    <keyword>approximation</keyword>   <keyword>generative models</keyword>   <keyword>orthogonal combinations</keyword>    <author primary="1"> <ARLID>cav_un_auth*0101207</ARLID> <name1>Šmídl</name1> <name2>Václav</name2> <institution>UTIA-B</institution> <full_dept language="cz">Adaptivní systémy</full_dept> <full_dept language="eng">Department of Adaptive Systems</full_dept> <department language="cz">AS</department> <department language="eng">AS</department> <full_dept>Department of Adaptive Systems</full_dept> <fullinstit>Ústav teorie informace a automatizace AV ČR, v. v. i.</fullinstit> </author> <author primary="0"> <ARLID>cav_un_auth*0389606</ARLID> <name1>Bím</name1> <name2>J.</name2> <country>CZ</country> </author> <author primary="0"> <ARLID>cav_un_auth*0307300</ARLID> <name1>Pevný</name1> <name2>T.</name2> <country>CZ</country> </author>   <source> <url>http://library.utia.cas.cz/separaty/2020/AS/smidl-0522204.pdf</url> </source>         <cas_special> <project> <project_id>GA18-21409S</project_id> <agency>GA ČR</agency> <ARLID>cav_un_auth*0374053</ARLID> </project>  <abstract language="eng" primary="1">This paper presents a new approximation of the marginal likelihood of generative models, used as a score for anomaly detection. The score is motivated by a shortcoming of the popular reconstruction error: it can behave arbitrarily outside the known samples. 
The proposed score corrects this by an orthogonal combination of the reconstruction error and the likelihood in the latent space. As shown experimentally on benchmark anomaly-detection problems and illustrated on a toy problem, this combination lends the score robustness to outliers. Generative models evaluated with this score outperformed competing methods, especially in tasks of learning a distribution from data corrupted by anomalies. Finally, the score is compatible with contemporary generative models, namely variational auto-encoders and generative adversarial networks.</abstract>    <action target="WRD"> <ARLID>cav_un_auth*0389608</ARLID> <name>NeurIPS 2019</name> <dates>20191208</dates> <unknown tag="mrcbC20-s">20191214</unknown> <place>Vancouver</place> <country>CA</country>  </action>  <RIV>BD</RIV> <FORD0>10000</FORD0> <FORD1>10200</FORD1> <FORD2>10201</FORD2>    <reportyear>2021</reportyear>      <num_of_auth>3</num_of_auth>  <presentation_type> PO </presentation_type> <inst_support> RVO:67985556 </inst_support>  <permalink>http://hdl.handle.net/11104/0308914</permalink>  <unknown tag="mrcbC61"> 1 </unknown>  <confidential>S</confidential>  <article_num> 48 </article_num>       <arlyear>2019</arlyear>       <unknown tag="mrcbU14"> SCOPUS </unknown> <unknown tag="mrcbU24"> PUBMED </unknown> <unknown tag="mrcbU34"> WOS </unknown> <unknown tag="mrcbU63"> cav_un_epca*0522203 Bayesian Deep Learning NeurIPS 2019 Workshop Vancouver University of Oxford Computer Science department 2019 </unknown> </cas_special> </bibitem>