SPDAGG-TransNet: Integrating Symmetric Positive Definite Networks with Transformers for UAV-Human Action Recognition
Abstract
The advent of unmanned aerial vehicles (UAVs) has opened a new era in human action recognition, with applications across many domains. This shift underscores the need for comprehensive benchmarks for developing and evaluating UAV-centric models of human behavior analysis.
This paper presents SPDAGG-TransNet, a novel network for UAV-based human action recognition that leverages the robustness of skeleton-based features under UAV capture conditions. Our approach centers on a deep neural network that captures the spatial and temporal structure of human actions and aggregates it into Symmetric Positive Definite (SPD) matrix representations. These representations are then processed by a transformer encoder before being classified with a Multilayer Perceptron (MLP). To assess the effectiveness of our approach, we conduct thorough evaluations on the publicly available UAV-Human and UAV-Gesture datasets. Our results demonstrate state-of-the-art performance, highlighting the method's potential to significantly advance UAV-based human action recognition.
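To make the pipeline summarized above concrete, the following is a minimal sketch assuming a PyTorch implementation. The layer sizes, the covariance-based SPD construction, the assumed skeleton layout, and the class name are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch: skeleton features -> SPD (covariance) representation ->
# transformer encoder -> MLP classifier. All dimensions are assumptions.
import torch
import torch.nn as nn

class SPDTransformerSketch(nn.Module):
    def __init__(self, feat_dim=64, num_classes=155, num_layers=2, num_heads=4):
        super().__init__()
        # Per-frame spatial feature extractor over flattened joint coordinates
        # (17 joints x 3 coordinates is an assumed skeleton layout).
        self.spatial = nn.Sequential(nn.Linear(17 * 3, feat_dim), nn.ReLU())
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, num_classes))

    def forward(self, skeletons):
        # skeletons: (batch, time, joints, coords)
        b, t, j, c = skeletons.shape
        x = self.spatial(skeletons.reshape(b, t, j * c))        # (b, t, d)
        # SPD representation: temporal covariance of frame features,
        # regularised so the matrix stays positive definite.
        x_centered = x - x.mean(dim=1, keepdim=True)
        spd = x_centered.transpose(1, 2) @ x_centered / (t - 1)  # (b, d, d)
        spd = spd + 1e-4 * torch.eye(x.size(-1), device=x.device)
        # Rows of the SPD matrix are treated as a token sequence for the encoder.
        tokens = self.encoder(spd)                               # (b, d, d)
        return self.mlp(tokens.mean(dim=1))                      # (b, num_classes)

model = SPDTransformerSketch()
logits = model(torch.randn(2, 30, 17, 3))  # two clips of 30 frames each
print(logits.shape)                        # torch.Size([2, 155])
```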