ICPR 2024 Competition on VISual Tracking in Adverse Conditions (VISTAC)
Abstract
Tracking objects in nighttime video presents significant challenges due to low visibility, poor illumination, and the presence of distant, small, or low-contrast targets. Traditional tracking methods that rely on frame differencing are largely ineffective under these conditions. Although deep learning has advanced the field, it demands extensive labeled data, which is often difficult to obtain for rare or extreme events. Historically, research on spatial and temporal data models and database systems has been conducted independently, limiting progress on complex spatiotemporal scenarios. Nighttime video analysis involves complex, dynamic, and context-dependent spatiotemporal events, requiring sophisticated algorithms capable of accurately monitoring and detecting these events across both spatial and temporal dimensions. Such advances are crucial for applications in surveillance, security, and autonomous navigation. To address the lack of specialized datasets, we introduce the Night Vision Spatiotemporal Infrared-Video Dataset (NV-SID), comprising 100 annotated nighttime infrared videos. Together with the proposed Qualitative Precision (QP) metric, NV-SID establishes a new benchmark for evaluating deep learning-based object-tracking algorithms. The dataset is a key component of the ICPR 2024 Challenge on VISual Tracking in Adverse Conditions (VISTAC), which aims to push the boundaries of nighttime object tracking technology. Here, we briefly describe the methods used by the participants, compile their results, and outline avenues for further research on this task. Challenge details and the dataset are available at: https://sites.google.com/view/ju-nvisot/home.