Challenge Leaderboard
Rank  Team name     Score
1     XMU MAC       16331.856
2     KIOS CoE      15280.395
3     HBW           11280.563
4     HKU-TRANSGP   11063.355
5     Explore-bots  17.092
Accepted Papers
Authors: Pavel Petracek, Vit Kratky, Matej Petrlik, Martin Saska
Abstract: Digital documentation of large interiors of historical buildings is an exhausting task since most of the areas of interest are beyond typical human reach. We advocate the use of autonomous teams of multi-rotor UAVs capable of agile control while perceiving the environment and planning in real time using on-board computation only. Autonomous UAVs speed up the documentation process by several orders of magnitude while allowing for a repeatable, accurate, and condition-independent solution capable of precise collision-free operation at great heights. The developed multi-robot approach allows for performing tasks requiring dynamic scene illumination in large-scale real-world scenarios, a process previously applicable only in small-scale laboratory-like conditions. Experimental analyses range from single-UAV imaging to specialized lighting techniques requiring accurate coordination of multiple UAVs. The system’s robustness is demonstrated in more than two hundred autonomous flights in fifteen historical monuments requiring superior safety while lacking access to external localization.
PDF
Authors: Jiri Horyna, Martin Saska
Abstract: This paper presents an autonomous swarm system designed to be an enabling technology for achieving resilience to both partial and complete dropouts of localization of individual vehicles in large teams. The challenge of creating a resilient swarm system across diverse mission types is closely tied to maintaining accurate state awareness, regardless of changing environmental conditions and external threats like jamming and spoofing of primary localization data. Leveraging purely relative measurements and onboard sensor data to ensure accurate state awareness despite intermittent localization failures is extremely important for enhancing security, resilience, and safety of cooperating systems including edge autonomous devices. By combining approaches increasing resilience during both partial and complete localization dropouts, the paper bridges the gap in enhancing the resilience of drone swarm operations, allowing them to adapt dynamically across a wide range of mission types. Herein, we introduce and discuss the description and results of these state-of-the-art distributed state estimation techniques, which significantly strengthen swarm system security against vulnerabilities posed by emerging threats.
PDF
Authors: Vaclav Pritzl, Petr Stepan, Martin Saska
Abstract: Existing research has achieved impressive results in giving Unmanned Aerial Vehicles (UAVs) the ability to operate in challenging conditions thanks to the fusion of multiple sensory modalities and utilizing multiple UAVs, but many parts of the environment remain unreachable for current UAV approaches. Designing a cooperating UAV team capable of flying through constrained passages while simultaneously achieving accurate localization requires developing new methods for cooperative localization, navigation, multi-UAV path planning, and coordination. Our approach to multi-UAV cooperative flight utilizes relative localization based on direct UAV detections from a 3D Light Detection and Ranging (LiDAR) sensor and a hierarchical team structure. A larger primary UAV (pUAV), equipped with 3D LiDAR, can quickly and accurately map large areas while having accurate localization robust to decreased visibility conditions. A miniature secondary UAV (sUAV), equipped with cameras, can fit into tight passages and explore spaces unreachable for larger UAVs. Combining UAVs of different sizes and sensory equipment effectively increases the operational space of the UAV team while increasing its robustness to challenging conditions. In this paper, we describe the methods enabling our approach, namely the LiDAR-based relative localization and relative pose estimation, cooperative UAV guiding, and multi-UAV exploration. The described approaches have been successfully deployed in multiple real-world experiments with all the algorithms running on board the UAVs with no external localization system or external computational resources.
PDF
Authors: Jiaming Wu, Yang Lyu, Jiakai Gao
Abstract: To address the challenges of extrinsic calibration and motion fusion in modular odometry systems, this paper proposes a method to integrate both extrinsic calibration and fused localization into a unified system. First, we introduce a non-decoupled optimization approach for Online Targetless Extrinsic Calibration. Instead of decoupling the motion equation for extrinsic calibration, we formulate it as a graph optimization problem, which is solved iteratively using g2o. This method prevents rotational errors from propagating into translation errors by incorporating multiple motion constraints through additional edges in the graph. Next, we propose a Multi-State Loosely-Coupled SLAM Framework. The motion data from multiple sensors are transformed into a unified world coordinate system using the extrinsic parameters obtained from the calibration process. These transformed poses are then incorporated into the back-end optimization, where GTSAM is used to manage the factor graph and the constraints on the pose transformations. This framework effectively fuses the motion information from the modular odometry systems into a cohesive solution. Through experiments, we validate the effectiveness of the Online Targetless Extrinsic Calibration, achieving the required precision.
PDF
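The non-decoupled calibration described above jointly estimates the extrinsic rotation and translation from paired sensor motions rather than solving them in sequence. As a hedged, much-simplified illustration (not the paper's g2o implementation), the sketch below solves the planar (SE(2)) hand-eye relation A·X = X·B by linear least squares over several incremental motions; all function names and the synthetic setup are invented for this example:

```python
import math

def rot(th):
    c, s = math.cos(th), math.sin(th)
    return [[c, -s], [s, c]]

def solve_normal_equations(A, b):
    # Solve min ||Ax - b|| via normal equations and Gaussian elimination.
    n = len(A[0])
    AtA = [[sum(row[i] * row[j] for row in A) for j in range(n)] for i in range(n)]
    Atb = [sum(row[i] * bi for row, bi in zip(A, b)) for i in range(n)]
    M = [AtA[i] + [Atb[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [M[r][k] - f * M[c][k] for k in range(n + 1)]
    return [M[i][n] / M[i][i] for i in range(n)]

def calibrate_planar(motions_a, motions_b):
    """Estimate the extrinsic (phi, p) between two odometry sources from
    paired incremental motions (theta, tx, ty).  The hand-eye relation
    A X = X B reduces, for planar motion, to the linear constraint
    (I - R_a) p + R_phi t_b = t_a  in the unknowns p, cos(phi), sin(phi)."""
    A, b = [], []
    for (tha, ax, ay), (_, bx, by) in zip(motions_a, motions_b):
        Ra = rot(tha)
        # Unknown vector: [px, py, cos(phi), sin(phi)]
        A.append([1 - Ra[0][0], -Ra[0][1], bx, -by]); b.append(ax)
        A.append([-Ra[1][0], 1 - Ra[1][1], by, bx]); b.append(ay)
    px, py, c, s = solve_normal_equations(A, b)
    n = math.hypot(c, s)  # project the relaxed (c, s) back onto the unit circle
    return math.atan2(s / n, c / n), (px, py)
```

On noiseless planar motions the relaxed (cos φ, sin φ) pair lands exactly on the unit circle. The paper's full 3D, graph-based formulation goes further: by expressing the same constraints as extra edges in a g2o graph, it keeps rotational errors from propagating into the translation estimate.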
Authors: Aditya Rauniyar, Micah Corah, Sebastian Scherer
Abstract: Motion capture has become increasingly important, not only in computer animation but also in emerging fields like the metaverse and humanoid training. Capturing outdoor environments offers extended horizon scenes but introduces challenges with occlusions and obstacles. Recent approaches using multi-drone systems to capture multiple actor scenes often fail to account for multi-view consistency and reasoning across cameras in cluttered environments. Coordinated Motion Capture (CoCap), inspired by Conflict-Based Search (CBS), addresses this issue by coordinating view planning to ensure multi-view reasoning during conflicts. In comparison to Sequential Planning and unconstrained methods, CoCap achieves performance similar to ideal, unconstrained cases. It also introduces a real-time, coverage-based heuristic, making it well-suited for dense environments.
PDF
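CoCap's view planner itself is not detailed in the abstract; as a hedged illustration of the Conflict-Based Search idea it builds on, here is a minimal CBS skeleton on a grid. It handles vertex conflicts only and pops branches in FIFO order; real CBS also resolves edge (swap) conflicts and expands its constraint tree best-first by solution cost. All names are invented for this sketch:

```python
from collections import deque

def low_level(grid, start, goal, constraints):
    """Time-expanded BFS for one agent.  `constraints` is a set of (cell, t)
    pairs the agent must avoid; waiting in place is allowed."""
    horizon = len(grid) * len(grid[0]) + len(constraints) + 4
    queue = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while queue:
        pos, t, path = queue.popleft()
        # The agent parks at its goal, so no future constraint may sit there.
        if pos == goal and all((goal, tt) not in constraints for tt in range(t, horizon)):
            return path
        if t >= horizon:
            continue
        x, y = pos
        for nxt in [(x, y), (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            nx, ny = nxt
            if (0 <= nx < len(grid) and 0 <= ny < len(grid[0])
                    and grid[nx][ny] == 0
                    and (nxt, t + 1) not in constraints
                    and (nxt, t + 1) not in seen):
                seen.add((nxt, t + 1))
                queue.append((nxt, t + 1, path + [nxt]))
    return None

def first_vertex_conflict(paths):
    # Agents that finish early are assumed to wait at their goals.
    for t in range(max(len(p) for p in paths)):
        occupied = {}
        for i, p in enumerate(paths):
            cell = p[min(t, len(p) - 1)]
            if cell in occupied:
                return occupied[cell], i, cell, t
            occupied[cell] = i
    return None

def cbs(grid, starts, goals):
    """CBS skeleton: plan each agent independently, then branch on the first
    vertex conflict, forbidding the contested cell for one agent at a time."""
    root = {i: set() for i in range(len(starts))}
    paths = [low_level(grid, s, g, root[i]) for i, (s, g) in enumerate(zip(starts, goals))]
    open_list = [(root, paths)]
    while open_list:
        cons, paths = open_list.pop(0)
        if any(p is None for p in paths):
            continue  # some agent has no path under these constraints
        conflict = first_vertex_conflict(paths)
        if conflict is None:
            return paths
        a, b, cell, t = conflict
        for agent in (a, b):
            new_cons = {i: set(s) for i, s in cons.items()}
            new_cons[agent].add((cell, t))
            new_paths = list(paths)
            new_paths[agent] = low_level(grid, starts[agent], goals[agent], new_cons[agent])
            open_list.append((new_cons, new_paths))
    return None
```

The branching step is the part CoCap inherits: when two cameras' plans collide, the search explores both one-sided resolutions instead of letting one camera plan greedily around the other.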
Authors: Michal Werner, Tomáš Báča, Petr Štibinger, Daniela Doubravová, Jaroslav Šolc, Jan Rusňák, Martin Saska
Abstract: A novel method for autonomous localization of multiple sources of gamma radiation using a group of Micro Aerial Vehicles (MAVs) is presented in this paper. The method utilizes an extremely lightweight (44 g) Compton camera MiniPIX TPX3. The compact size of the detector allows for deployment onboard safe and agile small-scale Unmanned Aerial Vehicles (UAVs). The proposed radiation mapping approach fuses measurements from multiple distributed Compton camera sensors to accurately estimate the positions of multiple radioactive sources in real time. Unlike commonly used intensity-based detectors, the Compton camera reconstructs the set of possible directions towards a radiation source from just a single ionizing particle. Therefore, the proposed approach can localize radiation sources without having to estimate the gradient of a radiation field or contour lines, which require longer measurements. The instant estimation is able to fully exploit the potential of highly mobile MAVs. The radiation mapping method is combined with an active search strategy, which coordinates the future actions of the MAVs in order to improve the quality of the estimate of the sources’ positions, as well as to explore the area of interest faster. The proposed solution is evaluated in simulation and real-world experiments with multiple Cesium-137 radiation sources.
PDF
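The key property exploited above is that a single Compton event already constrains the source to a cone around the scatter axis. A hedged, two-dimensional toy sketch of fusing such cone measurements by back-projection onto a grid follows; it is not the paper's real-time estimator, and the detector geometry and names are invented:

```python
import math

def cone_backproject(grid, origin, cell, measurements, tol=0.05):
    """Accumulate votes on a 2D grid.  Each Compton event is modelled as
    (detector position, cone axis unit vector, cone half-angle): the source
    must lie where the bearing from the detector deviates from the cone
    half-angle by less than `tol` radians."""
    for i in range(len(grid)):
        for j in range(len(grid[0])):
            x = origin[0] + (j + 0.5) * cell
            y = origin[1] + (i + 0.5) * cell
            for pos, axis, ang in measurements:
                dx, dy = x - pos[0], y - pos[1]
                d = math.hypot(dx, dy)
                if d < 1e-9:
                    continue
                cosg = (dx * axis[0] + dy * axis[1]) / d
                gamma = math.acos(max(-1.0, min(1.0, cosg)))
                if abs(gamma - ang) < tol:
                    grid[i][j] += 1

def localize(measurements, origin=(0.0, 0.0), size=20, cell=0.5):
    """Return the centre of the grid cell with the most cone votes."""
    grid = [[0] * size for _ in range(size)]
    cone_backproject(grid, origin, cell, measurements)
    _, i, j = max((grid[i][j], i, j) for i in range(size) for j in range(size))
    return (origin[0] + (j + 0.5) * cell, origin[1] + (i + 0.5) * cell)
```

Each event votes for every cell consistent with its cone, and the intersection of a few cones from distributed detectors already peaks at the source. This is why the approach needs neither a radiation-field gradient nor contour lines, both of which require much longer integration times.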
Authors: Yupeng Yang, Yiwei Lyu, Yanze Zhang, Ian Gao, Wenhao Luo
Abstract: This paper proposes a novel data-driven control strategy for maintaining connectivity in networked multi-robot systems. Existing approaches often rely on a pre-determined communication model specifying whether pairwise robots can communicate given their relative distance to guide the connectivity-aware control design, which may not capture real-world communication conditions. To relax that assumption, we present the concept of Data-driven Connectivity Barrier Certificates, which utilize Control Barrier Functions (CBF) and Gaussian Processes (GP) to characterize the admissible control space for pairwise robots based on communication performance observed online. This allows robots to maintain a satisfying level of pairwise communication quality (measured by the received signal strength) while in motion. Then we propose a Data-driven Connectivity Maintenance (DCM) algorithm that combines (1) online learning of the communication signal strength and (2) a bi-level optimization-based control framework for the robot team to enforce global connectivity of the realistic multi-robot communication graph and minimally deviate from their task-related motions. We demonstrate the effectiveness of the algorithm through simulations with up to 20 robots.
PDF
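A minimal sketch of the two ingredients named above, reduced to a single pairwise link in one dimension: a tiny GP regressor of received signal strength versus distance, and a barrier-function filter on the separation rate enforcing h_dot >= -alpha*h for h(d) = s(d) - s_min. This is an invented toy under strong simplifications (the paper works over robot states and couples all pairs through a bi-level optimization for global connectivity):

```python
import math

def rbf(a, b, ell=2.0, sf=1.0):
    # Squared-exponential kernel on scalar distances.
    return sf * math.exp(-0.5 * ((a - b) / ell) ** 2)

def solve(M, b):
    # Gaussian elimination with partial pivoting (small dense systems only).
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [A[r][k] - f * A[c][k] for k in range(n + 1)]
    return [A[i][n] / A[i][i] for i in range(n)]

class SignalGP:
    """1D GP regression of received signal strength vs. pairwise distance."""
    def __init__(self, dists, strengths, noise=1e-3):
        self.dists = dists
        K = [[rbf(a, b) + (noise if i == j else 0.0)
              for j, b in enumerate(dists)] for i, a in enumerate(dists)]
        self.alpha = solve(K, strengths)

    def mean(self, x):
        return sum(a * rbf(x, xi) for a, xi in zip(self.alpha, self.dists))

def safe_separation_rate(gp, d, v_nom, s_min, alpha=1.0, eps=1e-3):
    """Barrier h(d) = s(d) - s_min.  For a single-integrator separation rate
    d_dot, enforcing h_dot >= -alpha*h caps how fast the pair may move apart."""
    h = gp.mean(d) - s_min
    dsdd = (gp.mean(d + eps) - gp.mean(d - eps)) / (2 * eps)  # typically < 0
    if dsdd >= 0:
        return v_nom  # moving apart does not degrade predicted signal
    limit = -alpha * h / dsdd  # maximum admissible separation rate
    return min(v_nom, limit)
```

The minimal-deviation idea survives even in this scalar form: the nominal rate is returned untouched whenever the barrier condition already holds, and is only clipped when the GP predicts the link would drop below the required strength.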
Authors: Tianchen Deng, Guole Shen, Xun Chen, Hongming Shen, Yanbo Wang, Weidong Chen, Jingchuan Wang
Abstract: Neural implicit scene representations have recently shown promising results in dense visual SLAM. However, existing implicit SLAM algorithms are constrained to single-agent scenarios and struggle in large indoor scenes and long sequences, due to their single, global radiance field with finite capacity. To this end, we propose a novel multi-agent collaborative SLAM framework with joint scene representation, distributed camera tracking, intra-to-inter loop closure, and sub-map fusion. Specifically, we propose a distributed learning framework for multi-agent neural SLAM systems to improve multi-agent cooperation and communication bandwidth efficiency. A novel intra-to-inter loop closure method is designed to achieve local (single-agent) and global map consistency. Our framework supports both single-agent and multi-agent operation. Furthermore, to the best of our knowledge, there is no real-world dataset for NeRF-based/GS-based SLAM that provides both continuous-time trajectory ground truth and high-accuracy 3D mesh ground truth. To this end, we introduce the first real-world dataset covering both single-agent and multi-agent scenarios, ranging from small rooms to large-scale environments, with high-accuracy ground truth for 3D reconstruction meshes and continuous-time camera trajectories. This dataset can advance the development of the community. Experiments on various datasets demonstrate the superiority of the proposed method in both camera tracking and mapping. The dataset and code will be open-sourced on GitHub.