<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="style/detail_T.xsl"?>
<bibitem type="A">   <ARLID>0640701</ARLID> <utime>20260218131316.2</utime><mtime>20251031235959.9</mtime>    <DOI>10.5281/zenodo.15854639</DOI>           <title language="eng" primary="1">Fault Detection Using Reinforcement Learning</title>  <specification> <page_count>1 s.</page_count> <media_type>E</media_type> </specification>   <serial><ARLID>cav_un_epca*0646239</ARLID><title>DYNALIFE 2025 : Quantum Information and Decision Making in Life Sciences: Book of Abstracts</title><part_num/><part_title/><page_num>20-20</page_num><publisher><place>Prague</place><name>Czech University of Life Sciences Prague</name><year>2025</year></publisher><editor><name1>Guy</name1><name2>Tatiana Valentine</name2></editor><editor><name1>Pelikán</name1><name2>Martin</name2></editor><editor><name1>Kárný</name1><name2>Miroslav</name2></editor><editor><name1>Gaj</name1><name2>Aleksej</name2></editor><editor><name1>Ružejnikov</name1><name2>Jurij</name2></editor><editor><name1>Ruman</name1><name2>Marko</name2></editor></serial>    <keyword>Reinforcement Learning</keyword>   <keyword>Fault Detection</keyword>   <keyword>Markov Decision Processes</keyword>    <author primary="1"> <ARLID>cav_un_auth*0483444</ARLID> <name1>Jedlička</name1> <name2>Adam</name2> <institution>UTIA-B</institution> <full_dept language="cz">Adaptivní systémy</full_dept> <full_dept language="eng">Department of Adaptive Systems</full_dept> <department language="cz">AS</department> <department language="eng">AS</department> <country>CZ</country>  <share>100</share> <fullinstit>Ústav teorie informace a automatizace AV ČR, v. v. i.</fullinstit> </author>   <source> <url>https://zenodo.org/records/16900460</url> </source>         <cas_special> <project> <project_id>CA21169</project_id> <agency>EU-COST</agency> <country>XE</country> <ARLID>cav_un_auth*0452289</ARLID> </project>  <abstract language="eng" primary="1">A wide variety of tasks, including many biological problems, can be modeled as a Markov Decision Process (MDP). Reinforcement learning (RL) is an approach to solving MDPs. In RL, the agent learns to take optimal actions using a feedback (reinforcement) signal. Due to its ability to handle dynamic environments with high uncertainty, RL has been successfully applied to various biological problems: generating novel molecular structures in drug discovery [1], predicting protein folding [2], etc. A basic mapping of MDP terms onto the drug-discovery task mentioned above is as follows: a generative model (the agent) learns a sequence of actions to create new molecules (states) that maximize a score given by a predefined scoring function. RL is applied similarly to genome assembly and other tasks in biology. Apart from these tasks, RL can also be used for so-called fault detection. Fault detection (FD) refers to a problem where two or more processes follow different mathematical models (differing, for example, in a parameter value) and the task is to determine which process is active at a given time, i.e., which process generated the data. An example of the utility of FD in biology is highlighted in [3], where a fault is monitored in the "Cad System in E. coli" (CSEC) model. CSEC models the localization and dynamics of the pH sensor and transcriptional regulator CadC in cells. Another example is given in [4], where a water treatment process is monitored for faults in order to ensure its stability. 
It is important to mention that neither of the above-mentioned articles on the use of FD in biology employs RL to find faults, as it is not a widely used approach. However, despite this, RL might perform better in certain cases with large datasets. The proposed poster will i) introduce some basic FD methods and ii) briefly introduce the mechanisms of RL and outline its use for the CSEC model [5].</abstract>    <action target="WRD"> <ARLID>cav_un_auth*0491465</ARLID> <name>DYNALIFE 2025 : Conference on QUANTUM INFORMATION AND DECISION MAKING IN LIFE SCIENCES</name> <dates>20250428</dates> <unknown tag="mrcbC20-s">20250429</unknown> <place>Prague</place> <country>CZ</country>  </action>  <RIV>BB</RIV> <FORD0>10000</FORD0> <FORD1>10100</FORD1> <FORD2>10101</FORD2>   <reportyear>2026</reportyear>      <num_of_auth>1</num_of_auth>  <presentation_type> PO </presentation_type>  <permalink>https://hdl.handle.net/11104/0371109</permalink>  <cooperation> <ARLID>cav_un_auth*0322033</ARLID> <name>Česká zemědělská univerzita v Praze, Provozně ekonomická fakulta</name> <institution>PEF ČZU</institution> <country>CZ</country> </cooperation>  <confidential>S</confidential>        <arlyear>2025</arlyear>       <unknown tag="mrcbU02"> A2 </unknown> <unknown tag="mrcbU14"> SCOPUS </unknown> <unknown tag="mrcbU24"> PUBMED </unknown> <unknown tag="mrcbU34"> WOS </unknown> <unknown tag="mrcbU63"> cav_un_epca*0646239 DYNALIFE 2025 : Quantum Information and Decision Making in Life Sciences: Book of Abstracts Czech University of Life Sciences Prague 2025 Prague 20 20 </unknown> <unknown tag="mrcbU67"> Guy Tatiana Valentine 340 </unknown> <unknown tag="mrcbU67"> Pelikán Martin 340 </unknown> <unknown tag="mrcbU67"> Kárný Miroslav 340 </unknown> <unknown tag="mrcbU67"> Gaj Aleksej 340 </unknown> <unknown tag="mrcbU67"> Ružejnikov Jurij 340 </unknown> <unknown tag="mrcbU67"> Ruman Marko 340 </unknown> </cas_special> </bibitem>