<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="style/detail_T.xsl"?>
<bibitem type="C">   <ARLID>0583748</ARLID> <utime>20250317090929.5</utime><mtime>20240306235959.9</mtime>   <SCOPUS>85191338505</SCOPUS>  <DOI>10.5220/0012397600003660</DOI>           <title language="eng" primary="1">Avoiding Undesirable Solutions of Deep Blind Image Deconvolution</title>  <specification> <page_count>7 s.</page_count> <media_type>E</media_type> </specification>   <serial><ARLID>cav_un_epca*0583747</ARLID><ISBN>978-989-758-679-8</ISBN><ISSN>2184-4321</ISSN><title>Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2024)</title><part_num/><part_title/><page_num>559-566</page_num><publisher><place>Setúbal</place><name>SciTePress</name><year>2024</year></publisher><editor><name1>Radeva</name1><name2>Petia</name2></editor><editor><name1>Furnari</name1><name2>Antonino</name2></editor><editor><name1>Bouatouch</name1><name2>Kadi</name2></editor><editor><name1>Sousa</name1><name2>A. Augusto</name2></editor></serial>    <keyword>Blind Image Deconvolution</keyword>   <keyword>Deep Image Prior</keyword>   <keyword>No-Blur</keyword>   <keyword>Variational Bayes</keyword>    <author primary="1"> <ARLID>cav_un_auth*0464277</ARLID> <name1>Brožová</name1> <name2>Antonie</name2> <institution>UTIA-B</institution> <full_dept language="cz">Adaptivní systémy</full_dept> <full_dept language="eng">Department of Adaptive Systems</full_dept> <department language="cz">AS</department> <department language="eng">AS</department> <country>CZ</country>  <fullinstit>Ústav teorie informace a automatizace AV ČR, v. v. i.</fullinstit> </author> <author primary="0"> <ARLID>cav_un_auth*0101207</ARLID> <name1>Šmídl</name1> <name2>Václav</name2> <institution>UTIA-B</institution> <full_dept language="cz">Adaptivní systémy</full_dept> <full_dept language="eng">Department of Adaptive Systems</full_dept> <department language="cz">AS</department> <department language="eng">AS</department>  <fullinstit>Ústav teorie informace a automatizace AV ČR, v. v. i.</fullinstit> </author>   <source> <url>http://library.utia.cas.cz/separaty/2024/AS/brozova-0583748.pdf</url> </source> <source> <url>https://www.scitepress.org/Link.aspx?doi=10.5220/0012397600003660</url>  </source>        <cas_special> <project> <project_id>GA20-27939S</project_id> <agency>GA ČR</agency> <ARLID>cav_un_auth*0391986</ARLID> </project> <project> <project_id>GA24-10400S</project_id> <agency>GA ČR</agency> <country>CZ</country> <ARLID>cav_un_auth*0464279</ARLID> </project>  <abstract language="eng" primary="1">Blind image deconvolution (BID) is a severely ill-posed optimization problem requiring additional information, typically in the form of regularization. Deep image prior (DIP) promises to model a natural-looking image due to a well-chosen structure of a neural network. The use of DIP in BID results in a significant performance improvement in terms of average PSNR. In this contribution, we offer a qualitative analysis of selected DIP-based methods w.r.t. two types of undesired solutions: a blurred image (no-blur) and a visually corrupted image (a solution with artifacts). We perform a sensitivity study showing which aspects of the DIP-based algorithms help to avoid which undesired mode. We confirm that the no-blur solution can be avoided using either a sharp image prior or tuning of the hyperparameters of the optimizer. The artifact solution is a harder problem, since variations that suppress the artifacts often suppress good solutions as well. 
Switching from the L2 norm to the structural similarity index measure in the loss was found to be the most successful approach to mitigating the artifacts.</abstract>    <action target="WRD"> <ARLID>cav_un_auth*0464278</ARLID> <name>International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2024) /19./</name> <dates>20240227</dates> <unknown tag="mrcbC20-s">20240229</unknown> <place>Roma</place> <country>IT</country>  </action>  <RIV>IN</RIV> <FORD0>10000</FORD0> <FORD1>10200</FORD1> <FORD2>10201</FORD2>    <reportyear>2025</reportyear>      <num_of_auth>2</num_of_auth>  <presentation_type> PR </presentation_type> <inst_support> RVO:67985556 </inst_support>  <permalink>https://hdl.handle.net/11104/0353240</permalink>   <confidential>S</confidential>         <arlyear>2024</arlyear>       <unknown tag="mrcbU14"> 85191338505 SCOPUS </unknown> <unknown tag="mrcbU24"> PUBMED </unknown> <unknown tag="mrcbU34"> WOS </unknown> <unknown tag="mrcbU63"> cav_un_epca*0583747 Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2024) SciTePress 2024 Setúbal 559 566 978-989-758-679-8 2184-4321 </unknown> <unknown tag="mrcbU67"> Radeva Petia 340 </unknown> <unknown tag="mrcbU67"> Furnari Antonino 340 </unknown> <unknown tag="mrcbU67"> Bouatouch Kadi 340 </unknown> <unknown tag="mrcbU67"> Sousa A. Augusto 340 </unknown> </cas_special> </bibitem>