YOGA: Yet Another Geometry-based Point Cloud Compressor

Junteng Zhang1
Tong Chen2
Dandan Ding1
Zhan Ma2
1Hangzhou Normal University
2Nanjing University

Code: [GitHub]
Paper: [unpublished]

Abstract

We propose YOGA (Yet Another Geometry-based Point Cloud Compressor), a learning-based codec with three key properties. It is flexible, supporting separable lossy compression of geometry and color attributes as well as variable-rate coding with a single neural model. It is efficient, significantly outperforming the latest G-PCC standard both quantitatively and qualitatively, e.g., 25% BD-BR gains when using PCQM (Point Cloud Quality Metric) as the distortion measure. And it is lightweight, with runtime similar to the G-PCC codec, owing to the use of sparse convolution and parallel entropy coding. To this end, YOGA adopts a unified end-to-end learning-based backbone for separate geometry and attribute compression. The backbone uses a two-layer structure: at the base layer, a downscaled thumbnail point cloud is encoded using G-PCC; at the enhancement layer, multiscale sparse convolutions stacked on top of the G-PCC compressed priors characterize spatial correlations to compactly represent the full-resolution sample. In addition, YOGA integrates adaptive quantization and an entropy model group to enable variable-rate control, as well as adaptive filters for better quality restoration.
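To make the two-layer idea concrete, the sketch below illustrates how a low-resolution "thumbnail" point cloud for the base layer can be formed by integer voxel downscaling and deduplication. This is a minimal illustration under our own assumptions, not YOGA's actual preprocessing; the real pipeline operates on sparse tensors with learned convolutions and encodes the thumbnail with G-PCC.

```python
import numpy as np

def make_thumbnail(points, scale=2):
    """Downscale integer voxel coordinates by `scale` and merge duplicates,
    yielding the coarse 'thumbnail' cloud a base layer would encode.
    Illustrative sketch only (hypothetical helper, not YOGA's code)."""
    coarse = points // scale            # integer coordinate downscaling
    return np.unique(coarse, axis=0)    # merge points that collapse together

# Toy example: four voxels collapse into two coarse voxels at scale 2.
pts = np.array([[0, 0, 0], [1, 0, 0], [2, 2, 2], [3, 3, 3]])
thumb = make_thumbnail(pts, scale=2)
```

The enhancement layer would then upscale this thumbnail back toward full resolution, using the compressed prior to predict which fine-scale voxels are occupied.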


Overview




Compression Performance

We evaluate YOGA's performance in terms of geometry, attributes, and overall compression:
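For readers unfamiliar with the BD-BR metric used in these comparisons, the sketch below shows the classic Bjøntegaard-delta bitrate computation: fit cubic polynomials to the (distortion, log-rate) curves of two codecs and integrate the gap over their common distortion range. This is a generic illustration, not the paper's evaluation script; YOGA's reported BD-BR uses PCQM as the distortion measure.

```python
import numpy as np

def bd_rate(rate_ref, dist_ref, rate_test, dist_test):
    """Average % bitrate difference of the test codec vs. the reference
    at equal distortion (negative = bitrate savings). Cubic-fit variant."""
    lr_ref, lr_test = np.log(rate_ref), np.log(rate_test)
    p_ref = np.polyfit(dist_ref, lr_ref, 3)    # log-rate as cubic in distortion
    p_test = np.polyfit(dist_test, lr_test, 3)
    lo = max(min(dist_ref), min(dist_test))    # overlapping distortion range
    hi = min(max(dist_ref), max(dist_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_log_diff = (int_test - int_ref) / (hi - lo)
    return (np.exp(avg_log_diff) - 1) * 100
```

As a sanity check, a codec that always needs exactly half the bitrate at the same quality yields a BD-BR of -50%.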




Visualization

We provide visual comparisons of the reconstructed point clouds from G-PCC and YOGA.





Citation

The paper has been accepted at the ACM International Conference on Multimedia. The citation will be available soon.
If YOGA is useful for your research, please consider citing it.