March 2025
Textile Research Journal
In industrial applications where device capacity, computational performance, and thermal management are limited, we propose the YOLOvT-Light model for fabric defect detection. The model adopts a convolutional block attention module (CBAM)-EfficientNet backbone, which balances detection speed and accuracy while significantly reducing model complexity. In the neck, GhostConv replaces standard convolution, generating feature maps through cheap linear transformations to cut parameters and computational cost. In addition, integrating the Faster Block with the C2f module preserves local feature-fusion capability while further reducing parameters and computation. Experiments on the DAGM2007 dataset show that, compared with the baseline model, YOLOvT-Light substantially reduces weight size (9.50 MB), computational cost (13.9 GFLOPs), and parameter count (6.11 M) while improving inference speed (223 fps) without sacrificing precision. This lightweight architecture makes deployment on resource-constrained devices feasible, supporting real-time, cost-effective, and safe defect detection in textile manufacturing environments. The study thus provides a reliable solution for developing efficient, lightweight detection models applicable to real-world industrial settings.
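To illustrate why GhostConv reduces parameters, the sketch below compares the parameter count of a standard convolution against a Ghost module, which produces a fraction of the output channels with a small "primary" convolution and generates the rest via cheap depthwise (linear) operations. This is a minimal back-of-the-envelope model, assuming the Ghost-module structure from GhostNet (ratio `s`, cheap-operation kernel size `d`); the specific channel and kernel sizes chosen here are illustrative, not taken from the paper.

```python
def conv_params(c_in, c_out, k):
    # Standard convolution: one k x k filter per (input, output) channel pair
    # (bias terms omitted for simplicity).
    return c_in * c_out * k * k

def ghost_conv_params(c_in, c_out, k, d=3, s=2):
    # Ghost module: a primary convolution produces c_out // s intrinsic
    # feature maps; the remaining (s - 1) * (c_out // s) maps are generated
    # from them by cheap depthwise linear operations with a d x d kernel.
    intrinsic = c_out // s
    primary = c_in * intrinsic * k * k
    cheap = intrinsic * (s - 1) * d * d  # depthwise: one d x d filter per map
    return primary + cheap

# Illustrative layer: 256 -> 256 channels with a 3x3 kernel.
std = conv_params(256, 256, 3)                      # 589,824 parameters
ghost = ghost_conv_params(256, 256, 3, d=3, s=2)    # 296,064 parameters
print(f"standard: {std}, ghost: {ghost}, reduction: {std / ghost:.2f}x")
```

With the default ratio s=2, the Ghost module roughly halves the parameter (and multiply-accumulate) cost of the layer, which is the mechanism behind the parameter and GFLOPs savings reported above.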