Simulator with Computer Vision for Detection, Tracking, and Distance Calculation of Moving Objects

Authors

Leonardo Valderrama García

DOI:

https://doi.org/10.31637/epsir-2024-812

Keywords:

Artificial Vision, Collision Detection, Deep Learning, Artificial Intelligence, Road Safety, You Only Look Once, Object Tracking, Traffic Simulator

Abstract

Introduction: Within the framework of research on computer vision systems for motorcycle collision prevention, a digital simulator has been developed to evaluate relevant traffic scenarios. Methodology: The simulator analyzes synthetic video sequences of various traffic environments using computer vision models. It uses the YOLO algorithm, known for its speed and accuracy in object detection, to identify, classify, and track vehicles, pedestrians, and other moving objects. Results: The system estimates the Euclidean distance to detected objects and projects their trajectories from the rider's perspective, replicating what a vision system mounted on a real motorcycle would capture. YOLO's adaptability allows it to be used in multiple contexts without intensive retraining. Discussion: The simulator provides a controlled environment for evaluating the performance of collision detection algorithms in critical scenarios, allowing repeatable testing without real-world risk. Conclusions: This simulator facilitates the validation of collision avoidance algorithms, providing a safe and efficient environment in which to test their performance in critical traffic situations.
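The distance estimation and trajectory projection described in the abstract can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the article's implementation: the helper names, the fixed rider reference point, and the simple linear extrapolation between two consecutive frames are assumptions; in the actual system these operations would run on bounding boxes produced by YOLO over video frames.

```python
import math

def box_center(box):
    """Center (x, y) of an axis-aligned bounding box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def euclidean_distance(p, q):
    """Euclidean distance between two 2D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def project_trajectory(prev_center, curr_center, steps=3):
    """Linearly extrapolate future positions from two consecutive centers.

    Assumes constant per-frame displacement (a deliberate simplification).
    """
    dx = curr_center[0] - prev_center[0]
    dy = curr_center[1] - prev_center[1]
    return [(curr_center[0] + dx * k, curr_center[1] + dy * k)
            for k in range(1, steps + 1)]

# Hypothetical example: a tracked vehicle seen in two consecutive frames.
prev_box = (100, 120, 140, 160)   # bounding box in frame t-1
curr_box = (110, 130, 150, 170)   # bounding box in frame t
rider = (320, 480)                # assumed rider/camera reference point in the image

prev_c = box_center(prev_box)
curr_c = box_center(curr_box)
distance_to_rider = euclidean_distance(curr_c, rider)
predicted_path = project_trajectory(prev_c, curr_c, steps=2)
```

In practice the pixel-space distance would still need to be mapped to a metric distance (e.g. via camera calibration or monocular depth estimation, as in several of the works cited below); the sketch only shows the geometric core of the computation.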


Author Biography

Leonardo Valderrama García, Corporación Universitaria Minuto de Dios

University research professor with extensive experience in systems engineering and electronic engineering. Holds a master's degree in Artificial Intelligence and a specialization in Visual Analytics and Big Data, with experience in software development and electronic applications. Currently a trainer in emerging technologies and a designer of solutions based on artificial intelligence and data analysis for the public sector.

References

APEX Simulación y Tecnología. (2020, August 18). Simulador de motocicleta – Software [Video]. YouTube. https://www.youtube.com/watch?v=MyaN1J5q6V8&t=1s

Bharadwaj, R., Gajbhiye, P., Rathi, A., Sonawane, A., & Uplenchwar, R. (2023). Lane, car, traffic sign and collision detection in simulated environment using GTA-V. In J. S. Raj, I. Perikos, & V. E. Balas (Eds.), Intelligent sustainable systems. ICoISS 2023. Lecture notes in networks and systems (Vol. 665, pp. 465-476). Springer. https://doi.org/10.1007/978-981-99-1726-6_36

Garg, R., Wadhwa, N., Ansari, S., & Barron, J. T. (2019). Learning single camera depth estimation using dual-pixels. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (pp. 7628-7637). https://doi.org/10.1109/ICCV.2019.00772

Lee, J. H., & Kim, C. S. (2019). Monocular depth estimation using relative depth maps. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 9729-9738). https://doi.org/10.1109/CVPR.2019.00996

Makris, S., & Aivaliotis, P. (2022). AI-based vision system for collision detection in HRC applications. Procedia CIRP, 104, 1-6. https://doi.org/10.1016/j.procir.2022.02.171

Müller, U., Ben, J., Cosatto, E., Flepp, B., & LeCun, Y. (2018). Off-road obstacle avoidance through end-to-end learning. In Advances in Neural Information Processing Systems (pp. 4278-4287). https://acortar.link/hyy9hd

Redmon, J., & Farhadi, A. (2018). YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767. https://arxiv.org/abs/1804.02767

Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1-9). https://doi.org/10.1109/CVPR.2016.91

Ros, G., Sellart, L., Materzynska, J., Vázquez, D., & López, A. M. (2016). The SYNTHIA dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 3234-3243). https://doi.org/10.1109/CVPR.2016.352

Shim, K., Kim, J., Lee, G., & Shim, B. (2023). Depth-relative self attention for monocular depth estimation. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23) (pp. 1395-1403). https://doi.org/10.24963/ijcai.2023/155

Published

2024-09-30

How to Cite

Valderrama García, L. (2024). Simulator with Computer Vision for Detection, Tracking, and Distance Calculation of Moving Objects. European Public & Social Innovation Review, 9, 1–16. https://doi.org/10.31637/epsir-2024-812

Issue

Section

Research and Artificial Intelligence