bibtype C - Conference Paper (international conference)
ARLID 0569632
utime 20240402213643.4
mtime 20230306235959.9
DOI 10.5220/0011616200003417
title (primary) (eng) Optimal Activation Function for Anisotropic BRDF Modeling
specification
page_count 8 p.
media_type P
serial
ARLID cav_un_epca*0569631
ISBN 978-989-758-634-7
ISSN 2184-4321
title Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - GRAPP
page_num 162-169
publisher
place Lisbon
name SciTePress
year 2023
editor
name1 Sousa
name2 A. Augusto
editor
name1 Bashford-Rogers
name2 Thomas
editor
name1 Bouatouch
name2 Kadi
keyword Anisotropic BRDF models
keyword neural network
keyword activation function
keyword BTF
author (primary)
ARLID cav_un_auth*0101165
name1 Mikeš
name2 Stanislav
institution UTIA-B
full_dept (cz) Rozpoznávání obrazu
full_dept (eng) Department of Pattern Recognition
department (cz) RO
department (eng) RO
fullinstit Ústav teorie informace a automatizace AV ČR, v. v. i.
author
ARLID cav_un_auth*0101093
name1 Haindl
name2 Michal
institution UTIA-B
full_dept (cz) Rozpoznávání obrazu
full_dept (eng) Department of Pattern Recognition
department (cz) RO
department (eng) RO
fullinstit Ústav teorie informace a automatizace AV ČR, v. v. i.
source
url http://library.utia.cas.cz/separaty/2023/RO/mikes-0569632.pdf
cas_special
project
project_id GA19-12340S
agency GA ČR
country CZ
ARLID cav_un_auth*0376011
abstract (eng) We present simple, fast, and efficient neural anisotropic Bidirectional Reflectance Distribution Function (NN-BRDF) models capable of accurately estimating unmeasured combinations of illumination and viewing angles from sparse Bidirectional Texture Function (BTF) measurements of neighboring points on the illumination/viewing hemisphere. Our models are optimized with the best-performing activation function selected from nineteen widely used nonlinear functions and can be used directly in rendering. We demonstrate that the choice of activation function significantly influences the modeling precision. The models enable substantial time and cost savings in the non-trivial and costly BTF measurement process while maintaining an acceptably low modeling error. The presented models learn well even from only three percent of the original BTF measurements, which we verify by a precise evaluation of the modeling error; the resulting error is smaller than that of alternative analytical BRDF models.
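abstract_note The Python (PyTorch) sketch below is only an illustration of the approach summarized in the abstract, not the authors' implementation: a small multilayer perceptron maps illumination/viewing angles to reflectance values, and the activation function is treated as a selectable hyperparameter whose candidates are compared by training error. All function names, layer sizes, and the synthetic training data are assumptions for illustration.

  # Illustrative sketch only (not the paper's code): an NN-BRDF-style MLP with
  # the activation function as a hyperparameter, fitted to sparse BTF samples.
  import torch
  import torch.nn as nn

  ACTIVATIONS = {            # a few of the candidate nonlinearities to compare
      "relu": nn.ReLU,
      "tanh": nn.Tanh,
      "silu": nn.SiLU,
      "gelu": nn.GELU,
  }

  def make_nn_brdf(activation: str, hidden: int = 64) -> nn.Sequential:
      """Input: (theta_i, phi_i, theta_v, phi_v); output: RGB reflectance."""
      act = ACTIVATIONS[activation]
      return nn.Sequential(
          nn.Linear(4, hidden), act(),
          nn.Linear(hidden, hidden), act(),
          nn.Linear(hidden, 3),
      )

  def fit(model, angles, rgb, epochs=200, lr=1e-3):
      """Fit the model to sparse (angle, reflectance) pairs; return final MSE."""
      opt = torch.optim.Adam(model.parameters(), lr=lr)
      loss_fn = nn.MSELoss()
      for _ in range(epochs):
          opt.zero_grad()
          loss = loss_fn(model(angles), rgb)
          loss.backward()
          opt.step()
      return loss.item()

  if __name__ == "__main__":
      # Synthetic stand-in for a sparse BTF measurement of one surface point.
      torch.manual_seed(0)
      angles = torch.rand(300, 4)   # normalized illumination/viewing angles
      rgb = torch.rand(300, 3)      # measured reflectance values
      for name in ACTIVATIONS:
          err = fit(make_nn_brdf(name), angles, rgb)
          print(f"{name:>5s}: training MSE = {err:.4f}")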
action
ARLID cav_un_auth*0446965
name International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - GRAPP 2023 /18./
dates 20230219
mrcbC20-s 20230221
place Lisbon
country PT
RIV BD
FORD0 20000
FORD1 20200
FORD2 20205
reportyear 2024
num_of_auth 2
presentation_type PR
inst_support RVO:67985556
permalink https://hdl.handle.net/11104/0341255
confidential S
arlyear 2023
mrcbU14 SCOPUS
mrcbU24 PUBMED
mrcbU34 WOS
mrcbU63 cav_un_epca*0569631 Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - GRAPP SciTePress 2023 Lisbon 162 169 978-989-758-634-7 2184-4321
mrcbU67 Sousa A. Augusto 340
mrcbU67 Bashford-Rogers Thomas 340
mrcbU67 Bouatouch Kadi 340