Enhancing YOLOv8 Object Detection with Shape-IoU Loss and Local Convolution for Small Target Recognition

Qi Zhang, Jingyuan Zhang, Shilei Yang

Abstract


To address the prevailing challenges of the You Only Look Once (YOLO) family of object detection algorithms, namely suboptimal recall and substantial computational demands, this study proposes an improvement to the algorithm's loss function. Building on the YOLOv8 network architecture, the Shape-IoU loss function is introduced, and the weighting of the loss function is further adjusted by employing SlideLoss. The paper then introduces the SPD-Conv and PConv local convolution structures to improve the backbone, yielding an object detection model based on the improved loss function and local convolution. The prediction accuracy of the proposed model exceeded 90% on both the training and validation sets, and its average accuracy was 13.19% higher than that of the conventional YOLOv8 network. On the COCO and ImageNet datasets, the proposed model outperformed the traditional YOLOv8 in parameter count, FLOPs, inference speed, and other indicators, showing that the algorithm can significantly improve detection speed while also improving detection accuracy, striking a balance between the two and proving more effective and reliable. Case results showed that the average coverage rate and the center position error of the proposed model improved by 5.7% and 2.3%, respectively, effectively raising operational accuracy. These results provide high detection accuracy for target identification problems and have significant application value in medical image analysis, industrial quality inspection, and other domains.
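
To make the backbone modifications concrete, the following is a minimal PyTorch sketch of SPD-Conv and PConv style blocks of the kind the abstract describes. Module names, channel sizes, the partial-convolution ratio, and layer placement are illustrative assumptions for exposition, not the paper's exact configuration.

    # Minimal sketches of SPD-Conv and PConv blocks (illustrative assumptions,
    # not the paper's exact backbone configuration).
    import torch
    import torch.nn as nn


    class SPDConv(nn.Module):
        """Space-to-depth followed by a non-strided convolution.

        Replaces a stride-2 convolution: every 2x2 spatial block is moved into
        the channel dimension (C -> 4C, H,W -> H/2,W/2), so fine-grained detail
        useful for small targets is not discarded before the convolution.
        """

        def __init__(self, in_channels: int, out_channels: int):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(4 * in_channels, out_channels, kernel_size=3,
                          stride=1, padding=1, bias=False),
                nn.BatchNorm2d(out_channels),
                nn.SiLU(inplace=True),  # YOLOv8-style activation
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Gather the four interleaved sub-grids and stack them on the channel axis.
            x = torch.cat(
                [x[..., ::2, ::2], x[..., 1::2, ::2],
                 x[..., ::2, 1::2], x[..., 1::2, 1::2]],
                dim=1,
            )
            return self.conv(x)


    class PConv(nn.Module):
        """Partial convolution: convolve only a fraction of the channels.

        The remaining channels pass through unchanged, which reduces FLOPs and
        memory access compared with a full convolution over all channels.
        """

        def __init__(self, channels: int, partial_ratio: float = 0.25, kernel_size: int = 3):
            super().__init__()
            self.conv_channels = max(1, int(channels * partial_ratio))
            self.conv = nn.Conv2d(
                self.conv_channels, self.conv_channels,
                kernel_size=kernel_size, stride=1, padding=kernel_size // 2, bias=False,
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x_conv, x_id = torch.split(
                x, [self.conv_channels, x.shape[1] - self.conv_channels], dim=1)
            return torch.cat([self.conv(x_conv), x_id], dim=1)


    if __name__ == "__main__":
        feat = torch.randn(1, 64, 80, 80)
        print(SPDConv(64, 128)(feat).shape)   # torch.Size([1, 128, 40, 40])
        print(PConv(64)(feat).shape)          # torch.Size([1, 64, 80, 80])

The loss-side changes are not sketched here: a SlideLoss-style weighting would rescale each sample's loss contribution around an adaptive IoU threshold to emphasize hard examples, while Shape-IoU augments the IoU regression loss with penalties that account for the shape and scale of the ground-truth box.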



DOI: https://doi.org/10.31449/inf.v49i21.8287

This work is licensed under a Creative Commons Attribution 3.0 License.