
Learned Video Compression for Attribute Images in Video-based Point Cloud Compression

  • Journal type: Domestic specialized journal (KCI-level)
  • Publication date: 2025-12
  • Authors: Seungmin Noh, Yongheon Kim, Kaeun Lee, Haechul Choi
  • Journal: 방송미디어공학회논문지 (Journal of Broadcast and Media Engineering)
  • Publisher: 한국방송미디어공학회 (Korean Institute of Broadcast and Media Engineers)
  • Country of publication: Korea (domestic)
  • Paper language: Foreign language (English)
  • Total authors: 4

Abstract

3D point clouds are widely used in applications such as autonomous driving and augmented reality, and Video-based Point Cloud Compression (V-PCC) has been the primary approach for compressing such data. 

V-PCC projects 3D data onto two-dimensional (2D) images and compresses them using traditional video coding standards. 

In this study, we propose replacing conventional codecs with a neural network–based model, Deep Contextual Video Compression (DCVC), to compress the 2D attribute images of point clouds.

To adapt DCVC, which was originally trained on natural images, to attribute images, the proposed method employs an N-stage cascaded training strategy for fine-tuning. Experimental results on MPEG-I Common Test Condition sequences show that the proposed model achieves an average BD-rate gain of 28.00% over the baseline DCVC model and provides superior reconstruction quality, even at low bitrates.

These findings demonstrate the feasibility of deploying learned video codecs for point cloud compression.
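The 28.00% figure above is a BD-rate (Bjøntegaard delta rate), a standard metric that compares two rate-distortion curves by reporting the average bitrate difference at equal quality. As context for readers unfamiliar with the metric, the following is a minimal sketch of the common cubic-fit variant of the computation; it is not the authors' evaluation script, and the function and variable names are illustrative.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Average bitrate difference (%) of the test codec relative to the
    anchor at equal quality (negative values mean bitrate savings)."""
    # Work in the log-rate domain, as in the original Bjontegaard method.
    log_rate_a = np.log10(rate_anchor)
    log_rate_t = np.log10(rate_test)
    # Fit log-rate as a cubic polynomial of PSNR for each curve.
    poly_a = np.polyfit(psnr_anchor, log_rate_a, 3)
    poly_t = np.polyfit(psnr_test, log_rate_t, 3)
    # Integrate both fits over the overlapping PSNR interval.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(poly_a), hi) - np.polyval(np.polyint(poly_a), lo)
    int_t = np.polyval(np.polyint(poly_t), hi) - np.polyval(np.polyint(poly_t), lo)
    # Average log-rate gap, converted back to a percentage.
    avg_diff = (int_t - int_a) / (hi - lo)
    return (10.0 ** avg_diff - 1.0) * 100.0
```

For example, a test codec whose curve sits at exactly half the anchor's bitrate at every quality point yields a BD-rate of -50%.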