TY - GEN
T1 - Landmark Detection for Auto Landing of Quadcopter Using YOLOv5
AU - More, Deeptej Sandeep
AU - Suresh, Shilpa
AU - D’Souza, Jeane Marina
AU - Asha, C. S.
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
PY - 2023
Y1 - 2023
N2 - A vision-based system is a crucial component of autonomous flight for unmanned aerial vehicles (UAVs) and is frequently regarded as a difficult capability to achieve. Most UAV accidents occur while landing or due to obstacles in the flight path; hence, the auto landing of UAVs is considered one of the most important problems to address in order to reduce accidents. Some technologies, such as GPS, frequently do not function indoors or in places where GPS signals are unavailable, and while they can land a UAV to within a few meters, they lack accuracy. A system that operates in such circumstances is required and is far more suitable. Cameras offer rich information about the surroundings and can be helpful in these circumstances: a vision-based system's accuracy can be as fine as a few centimeters, better than GPS-based position estimation. This work involves designing a vision-based landing system that recognizes a marker by providing a bounding box around it. The H mark typically used on helicopter landing pads is employed for the vision-based landing system, and its position is identified using the YOLOv5 algorithm. Images of the H mark printed on an A4-sized sheet and on a 2 ft × 2 ft sheet are captured using a quadcopter and used to build the data set. The algorithm is tested on its ability to locate the marker at any orientation and scale. YOLOv5 identifies the marker at any distance, orientation, or size in the given data set and performs better than an SVM-based approach. This could further be used to estimate the marker's ground distance from the UAV center, aiding auto landing.
AB - A vision-based system is a crucial component of autonomous flight for unmanned aerial vehicles (UAVs) and is frequently regarded as a difficult capability to achieve. Most UAV accidents occur while landing or due to obstacles in the flight path; hence, the auto landing of UAVs is considered one of the most important problems to address in order to reduce accidents. Some technologies, such as GPS, frequently do not function indoors or in places where GPS signals are unavailable, and while they can land a UAV to within a few meters, they lack accuracy. A system that operates in such circumstances is required and is far more suitable. Cameras offer rich information about the surroundings and can be helpful in these circumstances: a vision-based system's accuracy can be as fine as a few centimeters, better than GPS-based position estimation. This work involves designing a vision-based landing system that recognizes a marker by providing a bounding box around it. The H mark typically used on helicopter landing pads is employed for the vision-based landing system, and its position is identified using the YOLOv5 algorithm. Images of the H mark printed on an A4-sized sheet and on a 2 ft × 2 ft sheet are captured using a quadcopter and used to build the data set. The algorithm is tested on its ability to locate the marker at any orientation and scale. YOLOv5 identifies the marker at any distance, orientation, or size in the given data set and performs better than an SVM-based approach. This could further be used to estimate the marker's ground distance from the UAV center, aiding auto landing.
UR - http://www.scopus.com/inward/record.url?scp=85177807210&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85177807210&partnerID=8YFLogxK
U2 - 10.1007/978-981-99-4634-1_1
DO - 10.1007/978-981-99-4634-1_1
M3 - Conference contribution
AN - SCOPUS:85177807210
SN - 9789819946334
T3 - Lecture Notes in Electrical Engineering
SP - 3
EP - 12
BT - Intelligent Control, Robotics, and Industrial Automation - Proceedings of International Conference, RCAAI 2022
A2 - Sharma, Sanjay
A2 - Subudhi, Bidyadhar
A2 - Sahu, Umesh Kumar
PB - Springer Science and Business Media Deutschland GmbH
T2 - International Conference on Robotics, Control, Automation and Artificial Intelligence, RCAAI 2022
Y2 - 24 November 2022 through 26 November 2022
ER -