bibtype C - Conference Paper (international conference)
ARLID 0534541
utime 20240103224732.5
mtime 20201117235959.9
SCOPUS 85093082913
DOI 10.1007/978-3-030-58526-6_31
title (primary) (eng) Stable Low-Rank Tensor Decomposition for Compression of Convolutional Neural Network
specification
page_count 18 pp.
media_type P
serial
ARLID cav_un_epca*0534540
ISBN 978-3-030-58525-9
ISSN 0302-9743
title ECCV 2020
part_title LNCS
page_num 522-539
publisher
place Cham
name Springer Nature Switzerland AG
year 2020
editor
name1 Vedaldi
name2 Andrea
editor
name1 Bischof
name2 Horst
editor
name1 Brox
name2 Thomas
editor
name1 Frahm
name2 Jan-Michael
keyword Convolutional neural network acceleration
keyword Low-rank tensor decomposition
keyword Degeneracy correction
author (primary)
ARLID cav_un_auth*0382249
name1 Phan
name2 A. H.
country RU
author
ARLID cav_un_auth*0399439
name1 Sobolev
name2 K.
country RU
author
ARLID cav_un_auth*0399440
name1 Sozykin
name2 K.
country RU
author
ARLID cav_un_auth*0399441
name1 Ermilov
name2 D.
country RU
author
ARLID cav_un_auth*0399442
name1 Gusak
name2 J.
country RU
author
ARLID cav_un_auth*0101212
name1 Tichavský
name2 Petr
institution UTIA-B
full_dept (cz) Stochastická informatika
full_dept Department of Stochastic Informatics
department (cz) SI
department SI
share 10
fullinstit Ústav teorie informace a automatizace AV ČR, v. v. i.
author
ARLID cav_un_auth*0399443
name1 Glukhov
name2 V.
country CN
author
ARLID cav_un_auth*0399444
name1 Oseledets
name2 I.
country RU
author
ARLID cav_un_auth*0382250
name1 Cichocki
name2 A.
country RU
source
url http://library.utia.cas.cz/separaty/2020/SI/tichavsky-0534541.pdf
cas_special
abstract (eng) Most state-of-the-art deep neural networks are overparameterized and exhibit a high computational cost. A straightforward approach to this problem is to replace convolutional kernels with their low-rank tensor approximations, for which the Canonical Polyadic tensor decomposition is one of the best-suited models. However, fitting the convolutional tensors by numerical optimization algorithms often encounters diverging components, i.e., extremely large rank-one tensors that cancel each other. Such degeneracy often causes non-interpretable results and numerical instability during fine-tuning of the neural network. This paper is the first study of degeneracy in the tensor decomposition of convolutional kernels. We present a novel method that stabilizes the low-rank approximation of convolutional kernels and ensures efficient compression while preserving the high performance of the neural networks. We evaluate our approach on popular CNN architectures for image classification and show that our method results in much lower accuracy degradation and provides consistent performance.
action
ARLID cav_un_auth*0399445
name European Conference on Computer Vision 2020 /16./
dates 20200823
mrcbC20-s 20200828
place Glasgow
country GB
RIV JD
FORD0 20000
FORD1 20200
FORD2 20201
reportyear 2021
num_of_auth 9
presentation_type PR
inst_support RVO:67985556
permalink http://hdl.handle.net/11104/0313191
confidential S
arlyear 2020
mrcbU14 85093082913 SCOPUS
mrcbU24 PUBMED
mrcbU34 WOS
mrcbU63 cav_un_epca*0534540 ECCV 2020 978-3-030-58525-9 0302-9743 1611-3349 522 539 Cham Springer Nature Switzerland AG 2020 2020 Lecture Notes in Computer Science 12374 LNCS
mrcbU67 340 Vedaldi Andrea
mrcbU67 340 Bischof Horst
mrcbU67 340 Brox Thomas
mrcbU67 340 Frahm Jan-Michael