Jiahao Zhu1
Kang You2
Dandan Ding1
Zhan Ma2
1Hangzhou Normal University
2Nanjing University
Code [GitHub]
Unpublished [Paper]

Abstract

Reflectance attributes in LiDAR point clouds provide essential information for downstream tasks but remain underexplored in neural compression methods. To address this, we introduce SerLiC, a serialization-based neural compression framework to fully exploit the intrinsic characteristics of LiDAR reflectance. SerLiC first transforms 3D LiDAR point clouds into 1D sequences via scan-order serialization, offering a device-centric perspective for reflectance analysis. Each point is then tokenized into a contextual representation comprising its sensor scanning index, radial distance, and prior reflectance, for effective dependencies exploration. For efficient sequential modeling, Mamba is incorporated with a dual parallelization scheme, enabling simultaneous autoregressive dependency capture and fast processing. Extensive experiments demonstrate that SerLiC attains over 2× volume reduction against the original reflectance data, outperforming the state-of-the-art method by up to 22% reduction of compressed bits while using only 2% of its parameters. Moreover, a lightweight version of SerLiC achieves ≥ 10 fps (frames per second) with just 111K parameters, which is attractive for real-world applications.

Overview

Contribution

  • We propose SerLiC, a lossless reflectance compression method for LiDAR point clouds, leveraging scan-order serialization to transform a 3D point cloud to 1D point sequences for efficient representation.
  • We generate LiDAR information (scanning index and radial distance) for each point, along with the previously decoded reflectance, as context to exploit point dependencies in a sequence, supported by a selective state space model with a dual parallelization mechanism.
  • SerLiC delivers notable performance on benchmark datasets, offering high compression efficiency, ultra-low complexity, and strong robustness. Its light version runs at 30 fps with frame pipelining and 10 fps without, using only 111K model parameters.
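As a rough illustration of the serialization and context described above, the sketch below orders points by laser (ring) index and azimuth, then builds the per-point context of scanning index, radial distance, and previously decoded reflectance. Function names and the use of a driver-supplied `laser_id` are assumptions for illustration, not the paper's actual code.

```python
import numpy as np

def serialize_scan_order(xyz, laser_id):
    """Sort LiDAR points into a 1D sequence following the sensor's scan
    order: primarily by laser (ring) index, then by azimuth angle.
    `laser_id` is assumed to come from the sensor driver or dataset."""
    azimuth = np.arctan2(xyz[:, 1], xyz[:, 0])   # horizontal scan angle
    # lexsort uses the LAST key as the primary sort key: ring-major order
    return np.lexsort((azimuth, laser_id))

def point_context(xyz, reflectance, order):
    """Per-point context: scanning index, radial distance, and the
    previously decoded reflectance (shifted by one position)."""
    rho = np.linalg.norm(xyz[order], axis=1)     # radial distance to sensor
    prev_r = np.empty_like(reflectance[order])
    prev_r[0] = 0                                # no prior for the first point
    prev_r[1:] = reflectance[order][:-1]
    idx = np.arange(len(order))                  # scanning index in sequence
    return idx, rho, prev_r
```

In an autoregressive decoder the "previous reflectance" would of course be the value just decoded; here the shift over ground-truth values stands in for that loop.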
Compression Backbone

    The input 3D LiDAR point cloud is first serialized into 1D ordered point sequences, which are then divided into windows for parallel processing. For each window, a Mamba-driven autoregressive coding (MDAC) scheme is employed, which embeds the scanning index (F^pos_i), radial distance (F^ρ_i), and prior reflectance (F^x_{i−1}) as context to generate the probability mass function (PMF) for the reflectance intensity of the target (i-th) point.
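The windowing and PMF-based coding step can be sketched as follows: windows enable parallel processing, and the bits actually spent by an arithmetic coder are lower-bounded by the negative log-likelihood of each symbol under the predicted PMF. The function names and the 256-level reflectance alphabet are assumptions for illustration.

```python
import numpy as np

def split_windows(seq, win):
    """Divide a 1D serialized sequence into fixed-size windows for
    parallel processing; the tail window may be shorter."""
    return [seq[i:i + win] for i in range(0, len(seq), win)]

def ideal_code_length(pmf, symbols):
    """Ideal bit cost of coding `symbols` with an arithmetic coder,
    given a per-point PMF over the (assumed 256) reflectance levels:
    sum of -log2 p(symbol). A real entropy coder adds small overhead."""
    p = pmf[np.arange(len(symbols)), symbols]
    return float(-np.log2(np.clip(p, 1e-12, None)).sum())
```

With a uniform PMF this reduces to 8 bits per point; the sharper the model's prediction, the fewer bits each reflectance value costs.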

Compression Performance

We present a detailed comparison of the overall bit rate and compression ratio (CR) gains of SerLiC against G-PCC (RAHT), G-PCC (Predlift), and Unicorn.
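For reference, the metrics compared above can be computed as below. The 8 bits-per-point raw reflectance assumed here is illustrative; the abstract's "over 2× volume reduction" corresponds to a compression ratio above 2.

```python
def compression_ratio(raw_bpp, coded_bpp):
    """CR = raw bits per point / coded bits per point."""
    return raw_bpp / coded_bpp

def cr_gain(cr_ours, cr_anchor):
    """Relative CR gain (%) over an anchor codec such as G-PCC."""
    return (cr_ours - cr_anchor) / cr_anchor * 100.0
```

For example, compressing 8-bit reflectance to 4 bits per point gives CR = 2, i.e. a 2× volume reduction.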

Conclusion

This paper presents SerLiC, a serialization-based neural compression framework tailored for LiDAR reflectance attributes. By leveraging scan-order serialization, SerLiC transforms 3D point clouds into 1D sequences, aligning with LiDAR scanning mechanisms and enabling efficient sequential modeling. The Mamba model with physics-informed tokenization further enhances its ability to capture point correlations autoregressively while maintaining linear-time complexity. Its high efficiency, ultra-low complexity, and strong robustness make it a practical solution for real LiDAR applications. Future work will extend SerLiC to lossy compression for higher compression efficiency while preserving essential information for downstream tasks.