Robust-Block-term-Tensor-Decomposition

This repository contains an implementation of an ADMM solver for block-term tensor decomposition, designed for high-dimensional tensor data such as video sequences or multi-spectral images. The implementation uses a block-wise update strategy, where each low-rank component (or block) is updated individually. This approach leverages the inherent structure of the tensor decomposition model, enhancing reconstruction accuracy as each block contributes a fixed, distinct component.

The project also includes an accelerated variant that incorporates nonlinear acceleration (e.g., Anderson acceleration) to further speed up convergence.

Table of Contents

  • Overview
  • Features
  • Algorithm Details
  • Performance Analysis
  • Experimental Results
  • Results Interpretation
  • Reference

Overview

Block-term tensor decomposition (BTD) aims to approximate an input tensor $X \in \mathbb{R}^{I \times J \times K}$ as a sum of rank-$(L, L, 1)$ block terms plus a sparse error tensor: $X \approx \sum_{r=1}^{R}\left( A_r B_r^\top \circ c_r \right) + E,$ where each block $(A_r, B_r, c_r)$, with $A_r \in \mathbb{R}^{I \times L}$, $B_r \in \mathbb{R}^{J \times L}$, and $c_r \in \mathbb{R}^{K}$, contributes a distinct low-rank term, and the operator $\circ$ denotes the outer product along the third mode, so the $k$-th frontal slice of the $r$-th block is $c_r(k) \, A_r B_r^\top$.
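
For concreteness, here is a minimal MATLAB sketch of this reconstruction, with illustrative dimensions and randomly drawn factors (the names `A`, `B`, `C`, `Lhat` are ours, not the repository's):

```matlab
% Reconstruct the low-rank part Lhat of the rank-(L,L,1) model:
%   Lhat(:,:,k) = sum_r C(k,r) * (A_r * B_r')
% Dimensions are illustrative.
I = 240; J = 320; K = 300; L = 5; R = 3;
A = randn(I, L, R);   % A(:,:,r) = A_r
B = randn(J, L, R);   % B(:,:,r) = B_r
C = randn(K, R);      % C(:,r)   = c_r
Lhat = zeros(I, J, K);
for r = 1:R
    AB = A(:,:,r) * B(:,:,r)';                        % I x J, rank at most L
    Lhat = Lhat + reshape(AB(:) * C(:,r)', I, J, K);  % slice k scaled by C(k,r)
end
```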

The ADMM solver is tailored to update each block separately (block-wise) to take advantage of the fixed roles of individual blocks. This strategy leads to improved accuracy as the subproblems (updating each block’s factors) are optimized in isolation. In addition, an accelerated variant based on nonlinear acceleration is provided to boost the overall convergence speed.

Features

  • Block-wise ADMM: Updates each block $(A_r, B_r, c_r)$ individually for enhanced reconstruction precision.
  • Sparse Outlier Separation: Incorporates a soft-thresholding operator for robust recovery of the sparse error tensor $E$ (see the sketch after this list).
  • Dual Update with ADMM: Enforces consistency between the low-rank approximation and the observed tensor.
  • Nonlinear Acceleration: Optionally accelerates convergence by leveraging past iterates using an Anderson-type acceleration scheme.
  • Performance Analysis: Includes a detailed analysis of computational complexity and experimental performance on synthetic and real-world tensor data.
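
As an illustration of the soft-thresholding step referenced above, this is the standard entrywise operator in MATLAB (the function name `soft_threshold` and the update shown in the comment are illustrative, not the repository's exact API):

```matlab
% Entrywise soft-thresholding: the proximal operator of lambda * ||.||_1,
% used to recover the sparse outlier tensor E from a residual Z.
% Illustrative ADMM usage: E = soft_threshold(X - Lhat + Y/rho, lambda/rho)
function E = soft_threshold(Z, lambda)
    E = sign(Z) .* max(abs(Z) - lambda, 0);
end
```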

Algorithm Details

Block-wise ADMM

Our solver decomposes the tensor $X$ as:

$X \approx \sum_{r=1}^{R}\left( A_r B_r^\top \circ c_r \right) + E$

where each block $(A_r, B_r, c_r)$ is updated separately. This block-wise update strategy helps isolate and optimize the contribution of each low-rank block, resulting in a more accurate decomposition.
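
A hedged sketch of one such sweep, continuing the variables from the Overview sketch: each block is refit against the residual that excludes its own contribution, using a truncated SVD for $(A_r, B_r)$. A slice-wise least-squares refit of $c_r$ is shown for simplicity; the repository's actual update (which solves a $K \times K$ system) and its ADMM weighting may differ.

```matlab
% One block-wise sweep over the R blocks. X is the observed tensor and E
% the current sparse outlier estimate (synthetic values for illustration).
X = Lhat + 0.01 * randn(I, J, K);
E = zeros(I, J, K);
for r = 1:R
    % Residual seen by block r: data minus outliers minus all other blocks
    Res = X - E;
    for s = [1:r-1, r+1:R]
        ABs = A(:,:,s) * B(:,:,s)';
        Res = Res - reshape(ABs(:) * C(:,s)', I, J, K);
    end
    % Collapse the third mode with the current coefficients c_r
    M = zeros(I, J);
    for k = 1:K
        M = M + C(k,r) * Res(:,:,k);
    end
    % Rank-L refit of (A_r, B_r) via truncated SVD
    [U, S, V] = svds(M, L);
    A(:,:,r) = U * sqrt(S);
    B(:,:,r) = V * sqrt(S);
    % Slice-wise least-squares refit of c_r
    AB = A(:,:,r) * B(:,:,r)';
    for k = 1:K
        Rk = Res(:,:,k);
        C(k,r) = (AB(:)' * Rk(:)) / (AB(:)' * AB(:));
    end
end
```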

Stopping Criteria

The algorithm monitors convergence by computing the relative reconstruction error:

$\text{error} = \frac{\|X - L - E\|_F}{\|X\|_F}$

where:

  • $L = \sum_{r=1}^{R}\left( A_r B_r^\top \circ c_r \right)$ is the reconstructed low-rank tensor,
  • $E$ is the sparse outlier tensor, and
  • $\|\cdot\|_F$ denotes the Frobenius norm.

The algorithm stops when this error falls below a preset tolerance $\varepsilon$ or when the maximum number of iterations is reached.
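
In MATLAB, this check reduces to a single norm ratio inside the main loop (the tolerance and iteration cap below are illustrative values):

```matlab
% Main loop skeleton with the relative-error stopping rule
tol = 1e-6;      % preset tolerance epsilon (illustrative)
max_iter = 200;  % iteration cap (illustrative)
for it = 1:max_iter
    % ... block-wise updates of {A_r, B_r, c_r}, E, and the dual variables ...
    rel_err = norm(X(:) - Lhat(:) - E(:)) / norm(X(:));  % Frobenius norm via vectorization
    if rel_err < tol
        break;
    end
end
```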

Performance Analysis

The per-iteration computational cost is mainly determined by:

  • Low-Rank Reconstruction: Forming $A_r B_r^\top$ costs roughly $\mathcal{O}(ILJ)$ per block; the resulting matrix is then scaled across the $K$ frontal slices.
  • SVD-based Updates: Each block performs a truncated SVD on a weighted residual matrix, with cost $\mathcal{O}(\min(I, J) \cdot I \cdot J)$.
  • Coefficient Updates for $c_r$: Each vector $c_r \in \mathbb{R}^K$ is updated by looping over $K$ slices and solving a $K \times K$ linear system, which is efficient for moderate $K$.
  • Dual and Outlier Updates: These are computed on the entire tensor of size $I \times J \times K$, costing $\mathcal{O}(IJK)$.

Thus, the overall per-iteration complexity is approximately:

$\mathcal{O}\Bigl( R\cdot \min(I, J) \cdot I \cdot J + IJK \Bigr)$

For large-scale video data where $I$, $J$, and $K$ are large but $L$ and $R$ are relatively small, the SVD operations typically dominate the computation time.
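
As a rough worked example on the 240×320×300 videos used below, with, say, $R = 5$ blocks: the SVD term is of order $R \cdot \min(I, J) \cdot I \cdot J = 5 \times 240 \times 240 \times 320 \approx 9.2 \times 10^{7}$ operations per iteration, versus $IJK \approx 2.3 \times 10^{7}$ for the tensor-wide dual and outlier updates.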

Empirical results show that our block-wise ADMM method achieves high reconstruction accuracy with competitive computation times. The accelerated variant, leveraging nonlinear (Anderson) acceleration, further reduces the number of iterations required to meet the convergence criterion.
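
For reference, here is a minimal sketch of an Anderson-type acceleration step over vectorized iterates. This is the generic Type-II scheme, not necessarily the exact variant implemented in the repository:

```matlab
% Generic Anderson (Type-II) acceleration step. Columns of X_hist hold the
% last m+1 iterates x_k; columns of G_hist hold G(x_k), where G applies one
% ADMM sweep to the vectorized variables.
function x_next = anderson_step(X_hist, G_hist)
    F  = G_hist - X_hist;                  % fixed-point residuals f_k = G(x_k) - x_k
    dF = diff(F, 1, 2);                    % residual differences
    dG = diff(G_hist, 1, 2);               % map-value differences
    gamma = dF \ F(:, end);                % least-squares mixing coefficients
    x_next = G_hist(:, end) - dG * gamma;  % extrapolated iterate
end
```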

Experimental Results

The performance of our algorithm was evaluated on several real-world video datasets (Highway, PETS2006, office, and pedestrians). We report two sets of results: one for video background separation and one for video foreground separation. The evaluation metrics are the F1 score and mean Average Precision (mAP).

Performance on Video Background Separation

The following table (Table 1) summarizes the performance of the baseline methods (tenRPCA, GRASTA, fastRPCA, BRTF) and our methods on the background separation task across the datasets.

| Dataset | Tensor Size | Metric | tenRPCA | GRASTA | fastRPCA | BRTF | BLT (Ours) | Accelerated BLT (Ours) |
|---|---|---|---|---|---|---|---|---|
| Highway | 240×320×300 | F1 score | 63.77 | 18.31 | 68.04 | 73.38 | 71.65 | 74.57 |
| | | mAP | 47.72 | 12.96 | 61.92 | 60.96 | 61.78 | 62.17 |
| PETS2006 | 240×320×300 | F1 score | 36.65 | 0.36 | 68.11 | 70.08 | 69.50 | 71.32 |
| | | mAP | 34.04 | 0.69 | 64.80 | 70.40 | 61.97 | 60.10 |
| office | 240×360×300 | F1 score | 57.83 | 5.97 | 31.97 | 47.55 | 36.88 | 44.55 |
| | | mAP | 49.72 | 4.56 | 40.37 | 49.41 | 32.93 | 41.35 |
| pedestrians | 240×360×300 | F1 score | 67.58 | 2.97 | 77.16 | 77.16 | 77.18 | 77.15 |
| | | mAP | 76.48 | 0.32 | 79.61 | 79.62 | 79.48 | 79.62 |

Performance on Video Foreground Separation

The table below (Table 2) shows the performance on the foreground separation task.

| Dataset | Tensor Size | Metric | tenRPCA | GRASTA | fastRPCA | BRTF | BLT (Ours) | Accelerated BLT (Ours) |
|---|---|---|---|---|---|---|---|---|
| Highway | 240×320×300 | F1 score | 64.46 | NaN | 67.61 | NaN | 71.65 | 74.57 |
| | | mAP | 48.77 | 39.41 | 61.93 | 19.64 | 61.51 | 61.73 |
| PETS2006 | 240×320×300 | F1 score | 42.57 | NaN | 67.62 | NaN | 69.50 | 71.32 |
| | | mAP | 35.03 | 2.58 | 64.41 | 1.17 | 58.83 | 55.36 |
| office | 240×360×300 | F1 score | 58.04 | NaN | 31.62 | NaN | 36.88 | 44.54 |
| | | mAP | 50.57 | 30.62 | 40.38 | 6.44 | 39.30 | 39.07 |
| pedestrians | 240×360×300 | F1 score | 68.97 | NaN | 77.23 | NaN | 77.34 | 77.15 |
| | | mAP | 76.96 | 18.71 | 78.28 | 1.58 | 52.44 | 52.57 |

Results Interpretation

[Figure: grid of foreground separation results]

  • Background Separation:
    Table 1 shows that Accelerated BLT (ours) achieves the highest F1 scores on the Highway and PETS2006 datasets, while BLT scores highest on pedestrians; both variants deliver competitive mAP values, indicating strong background modeling performance.

  • Foreground Separation:
    Table 2 demonstrates that, while some competing methods yield NaN results in certain cases (e.g., GRASTA and BRTF on several datasets), our proposed methods (BLT and Accelerated BLT) provide robust performance with high F1 scores and reasonable mAP values.

Overall, these results validate the effectiveness of our block-wise ADMM approach. By updating each block individually, the algorithm accurately captures the distinct contributions of each component in the low-rank tensor approximation. Furthermore, the incorporation of accelerated nonlinear techniques significantly improves convergence speed, leading to better performance in both background and foreground separation tasks.

Reference

If you use this code, please cite the following paper:

[1] N. Q. Dang, L. T. Thanh, N. L. Trung, and K. Abed-Meraim, "Re-LL1: An Effective Regularized (L,L,1)-Tensor Decomposition Method For Video Background Modeling and Foreground Separation," in Proc. 51st IEEE ICASSP, 2026.
