Full metadata record
DC Element | Value | Language
dc.contributor.author | Jordan, Florian | -
dc.contributor.author | Baum, Winfried | -
dc.contributor.author | Frese, Christian | -
dc.date.accessioned | 2024-07-16T09:14:15Z | -
dc.date.available | 2024-07-16T09:14:15Z | -
dc.date.issued | 2024 | -
dc.identifier.uri | https://fordatis.fraunhofer.de/handle/fordatis/410 | -
dc.identifier.uri | http://dx.doi.org/10.24406/fordatis/357 | -
dc.description.abstract | During accidents involving hazardous chemicals, people in the area may be put at risk of harm. (Semi-)autonomous robots can mitigate this threat by removing leaking containers. However, teleoperation requires extensive training and is difficult in practice. To overcome these limitations, we implemented a perception system on an autonomous excavator that locates individual barrels in chaotic scenes for extraction. Following the human-in-the-loop principle, operators can remotely select which barrel to remove. An efficient U-Net-style, DCAN-flavored neural network is trained on synthetic and real-world RGB data (5,000 synthetic and 593 real images) and compared to an inference-heavy Mask R-CNN model. In experiments on a held-out test set collected with the excavator, our model yielded an ODS mIoU of 85.14% and mAP of 72.19%, while Mask R-CNN achieved an ODS mIoU of 86.6% and mAP of 84.31%. With roughly 0.00584 s inference time on 800×576 32-bit tensors, our model is faster than Mask R-CNN, which requires roughly 0.0491 s. Using the robot calibration data, the point clouds of multiple LiDAR sensors are fused with the RGB segmentation to fit a local cylinder model to each barrel, yielding the exact poses for extraction; the motion planner then computes a collision-free motion plan. Force sensing was integrated into the gripper to avoid deforming the barrel. Field trials showed that the barrels can be reliably extracted without any damage. | en
dc.description.sponsorship | Funded by the German Federal Ministry of Education and Research (BMBF) within the project AKIT-PRO (13N15673). | en
dc.language.iso | en | en
dc.rights.uri | https://creativecommons.org/licenses/by-nc/4.0/ | en
dc.subject | Instance Segmentation | en
dc.subject.ddc | DDC::000 Informatik, Informationswissenschaft, allgemeine Werke::000 Informatik, Wissen, Systeme::005 Computerprogrammierung, Programme, Daten | en
dc.title | Barrel Pile Instance Segmentation | en
dc.type | Image | en
dc.contributor.funder | Bundesministerium für Bildung und Forschung BMBF (Deutschland) | en
dc.description.technicalinformation | This dataset provides roughly 5,000 synthetically generated images, 593 real images taken with a DSLR camera, and 47 real images captured online from an autonomous excavator, all showing chaotic barrel piles, together with corresponding instance mappings. The structure separates the synthetic, real, and online data; each subset contains an 'RGB' folder with the images and an 'Instance' folder with the instance mappings (a different color for each barrel instance) in PNG format. | en
fordatis.institute | IOSB Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung | en
fordatis.institute | IPA Fraunhofer-Institut für Produktionstechnik und Automatisierung | en
fordatis.rawdata | true | en
fordatis.sponsorship.FundingProgramme | Innovationen im Einsatz – Praxisleuchttürme der zivilen Sicherheit | en
fordatis.sponsorship.projectid | 13N15673 | en
fordatis.sponsorship.projectname | AutonomieKIT für die Umrüstung von Arbeitsmaschinen in kooperierende Nutzfahrzeuge zur Unterstützung von Rettungskräften | en
fordatis.sponsorship.projectacronym | AKIT-PRO | en
fordatis.sponsorship.ResearchFrameworkProgramm | Forschung für die zivile Sicherheit | en
fordatis.date.start | 2022 | -
fordatis.date.end | 2022 | -
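The abstract above reports ODS mIoU scores for both segmentation models. As a point of reference, here is a minimal sketch of the underlying per-mask IoU (intersection over union) computation; the function name and the binary-mask interface are illustrative choices, not part of the dataset or the authors' code.

```python
import numpy as np

def mask_iou(pred, target):
    """IoU between two binary masks of equal shape.

    Returns 0.0 for two empty masks by convention.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 0.0
    # Intersection over union of the foreground pixels
    return np.logical_and(pred, target).sum() / union
```

A mean IoU (mIoU) averages this score over instances; the ODS variant, as used in contour-detection benchmarks, additionally evaluates at a single binarization threshold chosen to be optimal over the whole dataset.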
Appears in collections: Fraunhofer-Institut für Produktionstechnik und Automatisierung IPA

Files in this item:
File | Description | Size | Format
Barrel-Instance-Segmentation.tar.xz | Tarball archive of all RGB and instance mask images | 17.28 GB | Tarball
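According to the technical information above, each 'Instance' folder stores instance mappings as PNG images in which every barrel instance has its own color. A minimal sketch of splitting such a mapping into per-instance binary masks could look as follows; the function name is hypothetical, and the assumption that pure black encodes the background is not stated in the record. An instance PNG could first be loaded into the expected HxWx3 array with, e.g., Pillow's `Image.open`.

```python
import numpy as np

def masks_from_instance_map(arr):
    """Split an HxWx3 instance-mapping image into one boolean mask per
    barrel instance, keyed by RGB color tuple."""
    colors = np.unique(arr.reshape(-1, 3), axis=0)
    masks = {}
    for color in colors:
        if not color.any():  # assumed background color: (0, 0, 0)
            continue
        # Pixels matching this color in all three channels form one instance
        masks[tuple(int(c) for c in color)] = np.all(arr == color, axis=-1)
    return masks
```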

Version history
Version | Item | Date | Summary
1 | fordatis/410 | 2024-07-16 11:14:15 |

This item is published under the following copyright terms: Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0) license.
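The abstract mentions fusing LiDAR point clouds with the RGB segmentation via the robot calibration data. A minimal sketch of that fusion step, projecting 3D points into the image with a pinhole model and sampling the instance label under each point, is shown below; the function names, the 4x4 extrinsic transform `T_cam_lidar`, and the intrinsics `K` are assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np

def project_points(points_lidar, T_cam_lidar, K):
    """Project Nx3 LiDAR points into pixel coordinates.

    T_cam_lidar: hypothetical 4x4 LiDAR-to-camera transform (calibration).
    K: hypothetical 3x3 pinhole intrinsics. Points behind the camera
    (z <= 0) are dropped; `in_front` marks which inputs survived.
    """
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0
    uv = (K @ pts_cam[in_front].T).T
    return uv[:, :2] / uv[:, 2:3], in_front

def label_points(uv, in_front, instance_map):
    """Sample the instance id under each projected point; -1 if the point
    was behind the camera or falls outside the image."""
    h, w = instance_map.shape
    labels = np.full(len(in_front), -1, dtype=int)
    cols = np.round(uv[:, 0]).astype(int)
    rows = np.round(uv[:, 1]).astype(int)
    valid = (cols >= 0) & (cols < w) & (rows >= 0) & (rows < h)
    # Map indices of the projected subset back to the original point order
    labels[np.flatnonzero(in_front)[valid]] = instance_map[rows[valid], cols[valid]]
    return labels
```

Points sharing one instance label could then be handed to a cylinder fit to recover the barrel pose, as described in the abstract.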