Optimized Model for Texture Image Recognition with Adaptive Trilinear Pooling

Aiwu Chen

Abstract


Existing techniques often focus on a single indicator such as accuracy or response time, and limited effort has been made to optimize multiple indicators simultaneously. This paper therefore proposes an image recognition model based on trilinear pooling, which combines multiple algorithms and employs model pruning to meet the dual demands of accuracy and efficiency. The experiments were run on an NVIDIA RTX 4090 GPU. The model was trained on AudioSet's balanced training subset, validated on the Eval subset, and tested on 10 common sound classes with 100 samples per class. Training ran for 100 epochs with the Adam optimizer, and Detection Transformer, Cascade Single Shot MultiBox Detector, and Efficient Hybrid Backbone Object Detection served as baselines. The results show that the proposed model reached an accuracy of 92.7% at the end of training. At the 95% confidence level, the confidence intervals for images of bird sounds, car sounds, and natural environment sounds are (0.91, 0.99), (0.88, 1.00), and (0.90, 1.00), respectively. The model's response time ranges from 18.5 ms to 23.1 ms during testing, its average accuracy across 200 tests remains above 90%, and it consistently outperforms the comparison models under varying levels of Gaussian noise and lighting conditions. These results demonstrate that the proposed method improves image recognition in terms of both accuracy and efficiency, offering a new perspective and a practical example for optimizing image recognition models and supporting the development of more accurate and efficient intelligent technologies.
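
The abstract does not spell out the exact adaptive trilinear pooling operation, so the following is only a rough illustrative sketch of one common trilinear formulation (a softmax-normalized channel-relation matrix applied back to a flattened CNN feature map), not the authors' implementation. The module name, tensor sizes, and the final classifier are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TrilinearPooling(nn.Module):
    """Generic trilinear pooling over a CNN feature map of shape (B, C, H, W)."""

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        x = feat.flatten(2)                                     # (B, C, N), N = H*W
        # Channel-relation matrix via an inner product over spatial locations.
        relation = torch.softmax(x @ x.transpose(1, 2), dim=-1)  # (B, C, C)
        # Re-weight the original features with the channel relations (trilinear term).
        pooled = relation @ x                                    # (B, C, N)
        pooled = pooled.mean(dim=-1)                             # average over locations
        return F.normalize(pooled, dim=-1)                       # L2-normalized descriptor


# Hypothetical usage: pool backbone features and feed a linear classifier.
if __name__ == "__main__":
    features = torch.randn(4, 512, 14, 14)     # e.g. a ResNet stage output
    descriptor = TrilinearPooling()(features)  # (4, 512)
    logits = nn.Linear(512, 10)(descriptor)    # 10 classes, matching the test set size
    print(logits.shape)
```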



DOI: https://doi.org/10.31449/inf.v49i31.10487

This work is licensed under a Creative Commons Attribution 3.0 License.