TY - JOUR
T1 - YLOMF
T2 - You Look Once for Multi-Feature—A Multi-Feature Multi-Task Network for Intelligent Real-Time Perception in Autonomous Driving
AU - Anoop, B. N.
AU - Abhilash, S. K.
AU - Madhav Nookala, Venu
AU - Prita, S.
AU - Raghavendra, S.
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2025
Y1 - 2025
N2 - Road traffic accidents claim 1.35 million lives annually, making them the leading cause of death among individuals aged 5–29 years, according to the World Health Organization (WHO). Low- and middle-income countries, despite having only 60% of the world’s vehicles, account for 93% of fatalities due to speeding, impaired driving, inadequate safety measures, and poor infrastructure. In the era of Industry 4.0, intelligent driving assistance systems offer a transformative solution to mitigate human errors, particularly those caused by fatigue or drowsiness. This study presents You Look Once for Multi-Feature (YLOMF), a novel single-stage, vision-based hierarchical model for autonomous driving. It features modular feature component heads, ensuring customizable deployment and real-time efficiency on edge devices. Trained on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset and evaluated on the Berkeley DeepDrive dataset (BDD100K), the Indian Driving Dataset (IDD), and the Audi Autonomous Driving Dataset (A2D2), YLOMF exhibits robust cross-domain generalization. It achieves 85.1% IoU for lane segmentation, 81.56% accuracy in 2D object detection, and 31.5% (easy), 26.38% (medium), and 23.9% (hard) accuracy in 3D instance detection, surpassing state-of-the-art benchmarks. Depth estimation performance was validated using absolute relative error, squared relative error, RMSE, and log RMSE. By delivering a computationally efficient and highly accurate perception framework, YLOMF enhances scene understanding and object recognition in real time. Its integration into autonomous systems offers significant potential for reducing road accidents and improving overall traffic safety in safety-critical environments.
AB - Road traffic accidents claim 1.35 million lives annually, making them the leading cause of death among individuals aged 5–29 years, according to the World Health Organization (WHO). Low- and middle-income countries, despite having only 60% of the world’s vehicles, account for 93% of fatalities due to speeding, impaired driving, inadequate safety measures, and poor infrastructure. In the era of Industry 4.0, intelligent driving assistance systems offer a transformative solution to mitigate human errors, particularly those caused by fatigue or drowsiness. This study presents You Look Once for Multi-Feature (YLOMF), a novel single-stage, vision-based hierarchical model for autonomous driving. It features modular feature component heads, ensuring customizable deployment and real-time efficiency on edge devices. Trained on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset and evaluated on the Berkeley DeepDrive dataset (BDD100K), the Indian Driving Dataset (IDD), and the Audi Autonomous Driving Dataset (A2D2), YLOMF exhibits robust cross-domain generalization. It achieves 85.1% IoU for lane segmentation, 81.56% accuracy in 2D object detection, and 31.5% (easy), 26.38% (medium), and 23.9% (hard) accuracy in 3D instance detection, surpassing state-of-the-art benchmarks. Depth estimation performance was validated using absolute relative error, squared relative error, RMSE, and log RMSE. By delivering a computationally efficient and highly accurate perception framework, YLOMF enhances scene understanding and object recognition in real time. Its integration into autonomous systems offers significant potential for reducing road accidents and improving overall traffic safety in safety-critical environments.
UR - https://www.scopus.com/pages/publications/105009500804
UR - https://www.scopus.com/pages/publications/105009500804#tab=citedBy
U2 - 10.1109/ACCESS.2025.3583341
DO - 10.1109/ACCESS.2025.3583341
M3 - Article
AN - SCOPUS:105009500804
SN - 2169-3536
VL - 13
SP - 110867
EP - 110881
JO - IEEE Access
JF - IEEE Access
ER -