Efficient Multi-Domain Learning by Covariance Normalization


Overview


The problem of multi-domain learning of deep networks is considered. An adaptive layer is induced per target domain, and a novel procedure, denoted covariance normalization (CovNorm), is proposed to reduce its parameters. CovNorm is a data-driven method of fairly simple implementation, requiring two principal component analyses (PCA) and fine-tuning of a mini-adaptation layer. Nevertheless, it is shown, both theoretically and experimentally, to have several advantages over previous approaches, such as batch normalization or geometric matrix approximations. Furthermore, CovNorm can be deployed both when target datasets are available sequentially and when they are available simultaneously. Experiments show that, in both cases, it has performance comparable to a fully fine-tuned network, using as few as 0.13% of the corresponding parameters per target domain.


Published in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

Paper

Repository

Bibtex

Models



CovNorm: the architecture of covariance normalization.

Highlights

The task-specific weights (adapter A) are decomposed into a whitening matrix and a re-coloring matrix that match the covariances of the input and output features.

Benefit

  1. The task-specific weights can be made very lightweight.
  2. With these task-specific weights, the model can be adapted to multiple target domains with very little extra computation and very few extra parameters.
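The whitening/re-coloring decomposition above can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the paper's implementation: the function and variable names (`covnorm_adapter`, `rank`, `W`, `C`) are hypothetical, and the small fine-tuned mini-adaptation layer between the two matrices is only indicated in a comment.

```python
import numpy as np

def covnorm_adapter(x_feats, y_feats, rank):
    """Illustrative CovNorm-style factorization of an adapter.

    Builds a PCA-based whitening matrix W for the input features and a
    re-coloring matrix C that matches the covariance of the output
    features, each truncated to `rank` principal components.
    """
    # Whitening: eigendecompose the input covariance, keep the top `rank`
    # components, and scale by inverse square-root eigenvalues.
    cov_in = np.cov(x_feats, rowvar=False)
    evals_in, evecs_in = np.linalg.eigh(cov_in)
    idx = np.argsort(evals_in)[::-1][:rank]
    W = np.diag(evals_in[idx] ** -0.5) @ evecs_in[:, idx].T  # (rank, d_in)

    # Re-coloring: rebuild the output covariance structure from the top
    # `rank` components of the output features.
    cov_out = np.cov(y_feats, rowvar=False)
    evals_out, evecs_out = np.linalg.eigh(cov_out)
    jdx = np.argsort(evals_out)[::-1][:rank]
    C = evecs_out[:, jdx] @ np.diag(evals_out[jdx] ** 0.5)  # (d_out, rank)

    # The full adapter would be A ~ C @ M @ W, where M is a small
    # (rank x rank) mini-adaptation matrix learned by fine-tuning.
    return W, C
```

Because both factors are (rank x d) rather than (d_out x d_in), the adapter's parameter count shrinks with the chosen rank, which is what makes the per-domain overhead small.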

Analysis



Effective dimensions: ratio of effective dimensions for different network layers in VGG (the effective dimension is the number of channels that contain 90% of the energy, as given by the eigenvalues).
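As a sketch of how such an effective dimension can be computed, the following counts the leading covariance eigenvalues needed to retain a given fraction of the total energy. The function name `effective_dim` and the exact threshold handling are assumptions for illustration, not the paper's code.

```python
import numpy as np

def effective_dim(feats, energy=0.90):
    # Eigenvalues of the feature covariance, sorted in descending order.
    evals = np.linalg.eigvalsh(np.cov(feats, rowvar=False))[::-1]
    # Cumulative fraction of the total eigenvalue "energy".
    ratio = np.cumsum(evals) / evals.sum()
    # Smallest number of leading eigenvalues reaching the target fraction.
    return int(np.searchsorted(ratio, energy) + 1)
```

Layers whose features concentrate in a low-dimensional subspace yield a small effective dimension, which is what motivates truncating the adapter to a low rank.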


Result: CovNorm consumes 1.25x the complexity of a single network (VGG) and achieves 78.5% average accuracy over the ten datasets, which is 2% better than training ten independent networks (FNFT).

Authors



Yunsheng Li

UC San Diego