<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="style/detail_T.xsl"?>
<bibitem type="C">   <ARLID>0643905</ARLID> <utime>20260225142355.2</utime><mtime>20260105235959.9</mtime>              <title language="eng" primary="1">MatTag: Practical material tagging using visual fingerprints</title>  <specification> <page_count>11 s.</page_count> <media_type>E</media_type> </specification>   <serial><ARLID>cav_un_epca*0643904</ARLID><ISSN>1613-0073</ISSN><title>Proceedings of the MANER Conference Mainz/Darmstadt 2025 (MANER 2025)</title><part_num/><part_title/><publisher><place>Germany</place><name>CEUR-WS</name><year>2025</year></publisher><editor><name1>Urban</name1><name2>Philipp</name2></editor><editor><name1>von Castell</name1><name2>Christoph Freiherr</name2></editor><editor><name1>Hardeberg</name1><name2>Jon Yngve</name2></editor><editor><name1>Fleming</name1><name2>Roland W.</name2></editor><editor><name1>Gigilashvili</name1><name2>Davit</name2></editor></serial>    <keyword>Material Appearance</keyword>   <keyword>Perceptual Attributes</keyword>   <keyword>Visual Fingerprint</keyword>   <keyword>Smartphone Application</keyword>   <keyword>Material Retrieval</keyword>    <author primary="1"> <ARLID>cav_un_auth*0500278</ARLID> <name1>Staš</name1> <name2>Adam</name2> <institution>UTIA-B</institution> <full_dept language="cz">Rozpoznávání obrazu</full_dept> <full_dept language="eng">Department of Pattern Recognition</full_dept> <department language="cz">RO</department> <department language="eng">RO</department> <country>CZ</country>  <share>50</share> <fullinstit>Ústav teorie informace a automatizace AV ČR, v. v. i.</fullinstit> </author>
<author primary="0"> <ARLID>cav_un_auth*0500279</ARLID> <name1>Pilař</name1> <name2>Daniel</name2> <institution>UTIA-B</institution> <full_dept language="cz">Rozpoznávání obrazu</full_dept> <full_dept language="eng">Department of Pattern Recognition</full_dept> <department language="cz">RO</department> <department language="eng">RO</department> <country>CZ</country>  <share>25</share> <fullinstit>Ústav teorie informace a automatizace AV ČR, v. v. i.</fullinstit> </author> <author primary="0"> <ARLID>cav_un_auth*0101086</ARLID> <name1>Filip</name1> <name2>Jiří</name2> <institution>UTIA-B</institution> <full_dept language="cz">Rozpoznávání obrazu</full_dept> <full_dept language="eng">Department of Pattern Recognition</full_dept> <department language="cz">RO</department> <department language="eng">RO</department> <country>CZ</country>  <share>25</share> <garant>K</garant> <fullinstit>Ústav teorie informace a automatizace AV ČR, v. v. i.</fullinstit> </author>   <source> <source_type>PDF</source_type> <source_size>9 MB</source_size> <url>https://library.utia.cas.cz/separaty/2026/RO/filip-0643905.pdf</url> </source>        <cas_special> <project> <project_id>GA22-17529S</project_id> <agency>GA ČR</agency> <country>CZ</country> <ARLID>cav_un_auth*0439849</ARLID> </project>  <abstract language="eng" primary="1">Assessment of material properties is essential for tasks such as similar material retrieval or swatch comparison in industrial design, manufacturing, and quality control. While many material similarity measures exist, they often fail to align with human perception. In this paper, we introduce a novel smartphone application using a machine learning model that leverages a perceptual representation known as the visual fingerprint of materials—linking image-based measurements to intuitive, human-understandable attributes. Trained on human ratings collected through psychophysical studies, the model can predict a material’s visual fingerprint using just two photographs captured under different lighting conditions. 
The application employs this model to assess any planar material sample using only a printed registration template and a flashlight. The app captures two photographs and predicts the material’s perceptual attributes. We demonstrate several practical use cases, including building personal material databases, retrieving visually similar materials, and exploring materials that match user-defined perceptual criteria. By enabling perceptually grounded comparisons and metadata extraction, our application provides a standardized representation of material appearance. This marks a step toward more intuitive and interoperable use of material properties across diverse digital environments.</abstract>    <action target="WRD"> <ARLID>cav_un_auth*0500280</ARLID> <name>MANER 2025</name> <dates>20250629</dates> <unknown tag="mrcbC20-s">20250629</unknown> <place>Mainz/Darmstadt</place> <country>DE</country>  </action>  <RIV>IN</RIV> <FORD0>20000</FORD0> <FORD1>20200</FORD1> <FORD2>20201</FORD2>    <reportyear>2026</reportyear>      <num_of_auth>3</num_of_auth>  <presentation_type> PR </presentation_type> <inst_support> RVO:67985556 </inst_support>  <permalink>https://hdl.handle.net/11104/0374431</permalink>   <confidential>S</confidential>   <article_num> 4 </article_num>       <arlyear>2025</arlyear>       <unknown tag="mrcbU14"> SCOPUS </unknown> <unknown tag="mrcbU24"> PUBMED </unknown> <unknown tag="mrcbU34"> WOS </unknown> <unknown tag="mrcbU63"> cav_un_epca*0643904 Proceedings of the MANER Conference Mainz/Darmstadt 2025 (MANER 2025) 1613-0073 Germany CEUR-WS 2025 Vol-4135 CEUR Workshop Proceedings 4135 </unknown> <unknown tag="mrcbU67"> Urban Philipp 340 </unknown> <unknown tag="mrcbU67"> von Castell Christoph Freiherr 340 </unknown> <unknown tag="mrcbU67"> Hardeberg Jon Yngve 340 </unknown> <unknown tag="mrcbU67"> Fleming Roland W. 340 </unknown> <unknown tag="mrcbU67"> Gigilashvili Davit 340 </unknown> </cas_special> </bibitem>