DRAFT: Distilled Recurrent All-Pairs Field Transforms for Optical Flow
Abstract
This paper addresses the challenge of running learning-based 3D scene reconstruction on resource-constrained end-user devices. Although integrating deep learning methods into the reconstruction pipeline has demonstrated superior performance over classical techniques, the resulting large models can be impractical for resource-limited devices. We propose an efficient solution by introducing a method to compress the deep learning models used in 3D reconstruction workflows. Our approach, named DRAFT, employs knowledge distillation (KD), adapted and extended for the complex feature and context extraction tasks involved in optical flow estimation. New distillation components based on algebraic sign-pattern matrices (SPM) and inertia enhance the KD process. Empirical validation on the KITTI and Sintel benchmarks shows that DRAFT consistently achieves performance comparable or superior to state-of-the-art models such as RAFT, FlowID, GMFlow, and AnyFlow while significantly reducing model size. This work improves the feasibility of deploying learning-based 3D scene reconstruction frameworks on edge systems and contributes to the broader discourse on resource-efficient deep learning, particularly for optical flow and stereo matching. Our code is available at https://github.com/christian-tchenko/DRAFT.git.
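The SPM- and inertia-based distillation terms are specific to DRAFT and are developed in the body of the paper; the minimal PyTorch sketch below only illustrates the generic teacher-student feature distillation setup that such a pipeline builds on. The names used here (FeatureKDLoss, student_encoder, teacher_encoder, lambda_kd) are hypothetical illustrations and are not taken from the released code.

```python
# Minimal sketch of feature-level knowledge distillation (KD) in PyTorch.
# The SPM and inertia components described in the paper are NOT reproduced here;
# this only shows a generic L2 match between student and frozen teacher features.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureKDLoss(nn.Module):
    """Match student feature maps to a frozen teacher via an L2 penalty."""

    def __init__(self, student_dim: int, teacher_dim: int):
        super().__init__()
        # 1x1 projection so student features can be compared to the teacher's.
        self.proj = nn.Conv2d(student_dim, teacher_dim, kernel_size=1)

    def forward(self, student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
        s = self.proj(student_feat)
        # Resize if the spatial resolutions of student and teacher differ.
        if s.shape[-2:] != teacher_feat.shape[-2:]:
            s = F.interpolate(s, size=teacher_feat.shape[-2:],
                              mode="bilinear", align_corners=False)
        # Teacher is treated as a fixed target (no gradient flows into it).
        return F.mse_loss(s, teacher_feat.detach())


# Hypothetical usage: `teacher_encoder` is a pretrained RAFT-style feature
# extractor kept frozen, `student_encoder` is the compressed network.
# kd = FeatureKDLoss(student_dim=128, teacher_dim=256)
# loss = task_loss + lambda_kd * kd(student_encoder(img), teacher_encoder(img))
```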