Local search quantization (LSQ)
Local search quantization (LSQ) is a non-orthogonal multi-codebook quantization (MCQ) method.
LSQ uses full-dimensional codebooks. The codebook update is done via least squares, and encoding is done with iterated local search (ILS), using randomized iterated conditional modes (ICM) as the local search subroutine.
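As in other MCQ methods, LSQ approximates each data vector by the sum of one codeword from each of the `m` codebooks. A minimal sketch of this reconstruction and of the quantization error it induces (illustrative helper names, not part of Rayuela's API):

```julia
using LinearAlgebra

# MCQ reconstruction: column j of X is approximated by the sum of one codeword
# from each of the m codebooks, selected by the codes in column j of B.
recon(C, b) = sum(C[i][:, b[i]] for i in eachindex(C))

# Total squared quantization error over all n points.
quanterr(X, B, C) = sum(norm(X[:, j] - recon(C, B[:, j]))^2 for j in 1:size(X, 2))
```

Encoding searches for the codes `B` that minimize this error; the codebook update solves for `C` with `B` fixed.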
Rayuela.encoding_icm
`encoding_icm(X, oldB, C, ilsiter, icmiter, randord, npert, cpp=true, V=false) -> B`
Given data and chain codebooks, find codes using iterated local search with ICM.
Arguments
- `X::Matrix{T}`: `d`-by-`n` data to quantize
- `oldB::Matrix{Int16}`: `m`-by-`n` initial set of codes
- `ilsiter::Integer`: number of iterated local search (ILS) iterations
- `icmiter::Integer`: number of iterated conditional modes (ICM) iterations
- `randord::Bool`: whether to use a random order
- `npert::Integer`: number of codes to perturb
- `cpp::Bool=true`: whether to use the C++ implementation
- `V::Bool=false`: whether to print progress
Returns
- `B::Matrix{Int16}`: `m`-by-`n` matrix with the new codes
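The ILS-around-ICM scheme described above can be sketched as follows. This is an illustrative re-implementation, not Rayuela's code; all names are made up. Each ILS iteration perturbs `npert` codes per point, runs `icmiter` ICM passes (each pass visits the `m` codebooks, optionally in random order, and conditionally re-picks one code with the others fixed), and accepts the perturbed solution only where it lowers the error:

```julia
using LinearAlgebra, Random

# Sum of the selected codewords, one per codebook.
recon(C, b) = sum(C[i][:, b[i]] for i in eachindex(C))

function encode_ils!(B, X, C; ilsiter=8, icmiter=4, randord=true, npert=1)
    m, n = size(B)
    h = size(C[1], 2)
    for _ in 1:ilsiter
        Bc = copy(B)
        for j in 1:n, i in rand(1:m, npert)      # perturb npert codes per point
            Bc[i, j] = rand(1:h)
        end
        for _ in 1:icmiter                        # ICM: conditional code updates
            order = randord ? randperm(m) : (1:m)
            for i in order, j in 1:n
                # Residual of x_j with codebook i's contribution removed
                r = X[:, j] - recon(C, Bc[:, j]) + C[i][:, Bc[i, j]]
                # Re-pick the entry of codebook i closest to that residual
                Bc[i, j] = argmin([norm(r - C[i][:, c]) for c in 1:h])
            end
        end
        for j in 1:n                              # accept only improvements
            if norm(X[:, j] - recon(C, Bc[:, j])) < norm(X[:, j] - recon(C, B[:, j]))
                B[:, j] = Bc[:, j]
            end
        end
    end
    return B
end
```

Because perturbed solutions are accepted per point only when they improve the error, the objective is non-increasing across ILS iterations.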
Rayuela.train_lsq
`train_lsq(X, m, h, R, B, C, niter, ilsiter, icmiter, randord, npert, cpp=true, V=false) -> C, B, obj`
Train a local-search quantizer. This method is typically initialized with chain quantization (ChainQ).
Arguments
- `X::Matrix{T}`: `d`-by-`n` data to quantize
- `m::Integer`: number of codebooks
- `h::Integer`: number of entries in each codebook (typically 256)
- `R::Matrix{T}`: `d`-by-`d` rotation matrix for initialization
- `B::Matrix{Int16}`: `m`-by-`n` matrix with pre-trained codes for initialization
- `C::Vector{Matrix{T}}`: `m`-long vector of `d`-by-`h` matrices; each matrix is a pretrained codebook
- `niter::Integer`: number of iterations to use
- `ilsiter::Integer`: number of iterated local search (ILS) iterations
- `icmiter::Integer`: number of iterated conditional modes (ICM) iterations
- `randord::Bool`: whether to visit the nodes in a random order in ICM
- `npert::Integer`: number of codes to perturb
- `cpp::Bool`: whether to use a C++ implementation for encoding
- `V::Bool`: whether to print progress
Returns
- `C::Vector{Matrix{T}}`: `m`-long vector of `d`-by-`h` matrices; each matrix is a codebook
- `B::Matrix{Int16}`: `m`-by-`n` matrix with the codes
- `obj::Vector{T}`: `niter`-long vector with the quantization error after each iteration
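Training alternates between the ILS/ICM encoding step and the least-squares codebook update mentioned at the top of this page. With the codes `B` fixed, the update is a linear least-squares problem: stack the codebooks into one `d`-by-`(m*h)` matrix and solve `X ≈ Cflat * A`, where `A` is the one-hot indicator matrix of the codes. A minimal sketch under these assumptions (illustrative names, not Rayuela's implementation; `pinv` is used for simplicity, so unused codebook entries get zero columns):

```julia
using LinearAlgebra

# Least-squares codebook update with codes B fixed.
function update_codebooks(X::Matrix{Float64}, B::Matrix{Int}, h::Int)
    d, n = size(X)
    m = size(B, 1)
    # One-hot indicator: row (i-1)*h + B[i,j] of column j is 1 when point j
    # uses entry B[i,j] of codebook i.
    A = zeros(m * h, n)
    for j in 1:n, i in 1:m
        A[(i - 1) * h + B[i, j], j] = 1.0
    end
    Cflat = X * pinv(A)                # minimizes ||X - Cflat * A||_F
    return [Cflat[:, (i - 1) * h + 1 : i * h] for i in 1:m]
end
```

Since this solves the codebook subproblem exactly, the quantization error cannot increase during the update step, which is why `obj` is typically non-increasing over iterations.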
Reference
Martinez, J., Clement, J., Hoos, H. H., & Little, J. J. (2016). Revisiting additive quantization. In European Conference on Computer Vision (pp. 137-153). Springer, Cham.