<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="style/detail_T.xsl"?>
<bibitem type="C">   <ARLID>0519063</ARLID> <utime>20240103223336.0</utime><mtime>20200107235959.9</mtime>   <SCOPUS>85082437719</SCOPUS>  <DOI>10.1109/ICCVW.2019.00283</DOI>           <title language="eng" primary="1">Intra-Frame Object Tracking by Deblatting</title>  <specification> <page_count>10 s.</page_count> <media_type>E</media_type> </specification>   <serial><ARLID>cav_un_epca*0519174</ARLID><ISBN>978-1-7281-5024-6</ISBN><ISSN>2473-9936</ISSN><title>Proceedings of the IEEE International Conference on Computer Vision 2019 (ICCV 2019)</title><part_num/><part_title/><page_num>2300-2309</page_num><publisher><place>Piscataway</place><name>IEEE</name><year>2019</year></publisher></serial>    <keyword>visual object tracking</keyword>   <keyword>fast moving objects</keyword>   <keyword>deblurring</keyword>    <author primary="1"> <ARLID>cav_un_auth*0293863</ARLID> <name1>Kotera</name1> <name2>Jan</name2> <institution>UTIA-B</institution> <full_dept language="cz">Zpracování obrazové informace</full_dept> <full_dept language="eng">Image processing</full_dept> <department language="cz">ZOI</department> <department language="eng">ZOI</department> <full_dept>Department of Image Processing</full_dept> <country>CZ</country>  <fullinstit>Ústav teorie informace a automatizace AV ČR, v. v. i.</fullinstit> </author> <author primary="0"> <ARLID>cav_un_auth*0352947</ARLID> <name1>Rozumnyi</name1> <name2>D.</name2> <country>CZ</country> </author> <author primary="0"> <ARLID>cav_un_auth*0101209</ARLID> <name1>Šroubek</name1> <name2>Filip</name2> <institution>UTIA-B</institution> <full_dept language="cz">Zpracování obrazové informace</full_dept> <full_dept>Department of Image Processing</full_dept> <department language="cz">ZOI</department> <department>ZOI</department>  <fullinstit>Ústav teorie informace a automatizace AV ČR, v. v.
i.</fullinstit> </author> <author primary="0"> <ARLID>cav_un_auth*0075799</ARLID> <name1>Matas</name1> <name2>J.</name2> <country>CZ</country> </author>   <source> <url>http://library.utia.cas.cz/separaty/2019/ZOI/kotera-0519063.pdf</url> </source>        <cas_special> <project> <project_id>GA18-05360S</project_id> <agency>GA ČR</agency> <ARLID>cav_un_auth*0361425</ARLID> </project> <project> <project_id>AP1701</project_id> <agency>AV ČR</agency> <country>CZ</country> <ARLID>cav_un_auth*0349658</ARLID> </project>  <abstract language="eng" primary="1">Objects moving at high speed along complex trajectories often appear in videos, especially videos of sports. Such objects travel a non-negligible distance during the exposure time of a single frame, and therefore their position in the frame is not well defined. They appear as semi-transparent streaks due to motion blur and cannot be reliably tracked by standard trackers. We propose a novel approach called Tracking by Deblatting, based on the observation that motion blur is directly related to the intra-frame trajectory of an object. Blur is estimated by solving two intertwined inverse problems, blind deblurring and image matting, which we jointly call deblatting. The trajectory is then estimated by fitting a piecewise quadratic curve, which models physically justifiable trajectories. As a result, tracked objects are precisely localized with higher temporal resolution than by conventional trackers. The proposed TbD tracker was evaluated on a newly created dataset of videos with ground truth obtained by a high-speed camera, using a novel Trajectory-IoU metric that generalizes the traditional Intersection over Union and measures the accuracy of the intra-frame trajectory. The proposed method outperforms the baseline in both recall and trajectory accuracy.</abstract>    <action target="WRD"> <ARLID>cav_un_auth*0386778</ARLID> <name>ICCV 2019. International Conference on Computer Vision</name> <dates>20191027</dates> <unknown tag="mrcbC20-s">20191102</unknown> <place>Seoul</place> <country>KR</country>  </action>  <RIV>JD</RIV> <FORD0>10000</FORD0> <FORD1>10200</FORD1> <FORD2>10201</FORD2>    <reportyear>2020</reportyear>      <num_of_auth>4</num_of_auth>  <presentation_type> PR </presentation_type> <inst_support> RVO:67985556 </inst_support>  <permalink>http://hdl.handle.net/11104/0304198</permalink>  <cooperation> <ARLID>cav_un_auth*0386528</ARLID> <name>FEL, ČVUT</name> <institution>FEL, ČVUT</institution> <country>CZ</country> </cooperation>  <confidential>S</confidential>        <arlyear>2019</arlyear>       <unknown tag="mrcbU14"> 85082437719 SCOPUS </unknown> <unknown tag="mrcbU24"> PUBMED </unknown> <unknown tag="mrcbU34"> WOS </unknown> <unknown tag="mrcbU63"> cav_un_epca*0519174 Proceedings of the IEEE International Conference on Computer Vision 2019 (ICCV 2019) 978-1-7281-5024-6 2473-9936 2473-9944 2300 2309 Piscataway IEEE 2019 </unknown> </cas_special> </bibitem>