A Defect Detection Algorithm of Denim Fabric Based on Cascading Feature Extraction Architecture

Shuangbao Ma, Renchao Zhang, Yujie Dong, Yuhui Feng and Guoqin Zhang

Abstract

Defect detection is one of the key factors in fabric quality control. To improve the speed and accuracy of denim fabric defect detection, this paper proposes a defect detection algorithm based on a cascading feature extraction architecture. First, the weight parameters of the VGG16 model pre-trained on the large-scale ImageNet dataset are extracted, and their transferability is used to train a defect detection classifier and a defect recognition classifier, respectively. Second, part of the convolution-layer weight parameters of these two models are retrained and fine-tuned on a high-definition fabric defect dataset. Finally, the two models are merged to obtain the defect detection algorithm based on the cascading architecture. Two comparative experiments are then conducted between this improved algorithm and other feature extraction methods, such as VGG16, ResNet-50, and Xception. The experimental results show that the detection accuracy of the proposed algorithm reaches 94.3%, and the speed is also increased by 1–3 percentage points.

Keywords: Cascading Feature Extraction Architecture, Denim Defect Detection, ImageNet, Robustness, Transfer Learning

1. Introduction

Quality inspection is a significant part of textile production management, and fabric defect detection is the most important process in quality inspection. The speed of manual inspection is only 5–20 m/min, which places high demands on workers' skill and experience; at the same time, it suffers from low efficiency, false inspections, and a high rate of missed inspections. More importantly, the accuracy of manual detection is only 60%–70% [1]. Defect detection algorithms based on classical image processing also have many problems, such as high image acquisition requirements, low detection accuracy, slow processing speed, and poor robustness [2], which make them challenging to apply in industrial settings. With the development of computer technology, many new feature extraction networks have been proposed, such as VGG16, AlexNet, ResNet-50, and YOLOv3, which have brought new breakthroughs to fabric defect detection research.

Gradually, the research focus has shifted to fabric defect detection with deep learning networks [3]. Color fabric defect targets have been identified with a convolutional neural network, reaching an accuracy of 87.5%. In [2,3], the authors adopt a support vector machine classification algorithm to train a fabric defect detection model, in order to solve the problem of classifying small-sample, high-dimensional data. However, the amount of calculation is very large and the cost is also high; moreover, the input image resolution is reduced to speed up the calculation, which to a certain extent weakens the characteristics of the original image. A method to segment jacquard fabric has been proposed in which defects are checked by a decision region formed from motion energy and energy variance; this method can only judge whether there is a defect, without identifying its type [4]. In previous work [5], we proposed a denim fabric defect detection algorithm based on an optimized Gabor filter, but the average detection rate for common denim defects was only 91.25%.

This paper proposes a defect detection algorithm based on a cascading feature extraction architecture. The spatial hierarchy of features learned by a model pre-trained on a large image dataset is portable across different recognition problems, so it can effectively compensate for the accuracy loss caused by training on a non-large image set. The main idea of this paper is to extract the features of the pre-trained VGG16 model, analyze and retrain selected weight parameters to obtain a defect detection model and a defect recognition model, and then merge the two models to obtain a fast defect detection model.

This paper is structured as follows. Section 2 discusses the model and method. In Section 3, we propose the defect detection algorithm based on the cascading feature extraction architecture and compare it with other feature extraction network models for denim fabric defect detection. Section 4 concludes this paper.

2. Model and Method

2.1 Flowchart of Detection Algorithm

As shown in Fig. 1, a defect detection algorithm for denim fabric based on a cascading feature extraction architecture and transfer learning is proposed. It extracts the weight parameters of the VGG16 model pre-trained on the large-scale ImageNet training set and retrains the defect classification classifiers.

Fig. 1.
Flowchart of the defect detection algorithm.

First, the pre-trained weight parameters of VGG16 are extracted and the original dense fully-connected layers are discarded. Then, custom dense fully-connected layers are added, the convolution and pooling layers of the original model are frozen, and data augmentation and dropout are used to suppress overfitting, so that defect identification model A and defect classification model B can be trained, respectively. Finally, the fabric defect detection algorithm is obtained by merging model A and model B.
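The frozen-base construction described above can be sketched with the Keras API of TensorFlow (the framework used in Section 3.2). The size of the custom dense head and the optimizer are illustrative assumptions, not the exact configuration used in the paper:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def build_transfer_model(num_classes, input_shape=(320, 320, 3), weights="imagenet"):
    """Frozen VGG16 convolution base plus a custom dense head.

    num_classes=2 corresponds to defect detection model A,
    num_classes=8 to defect classification model B.
    """
    base = VGG16(weights=weights, include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze all convolution and pooling layers

    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),  # illustrative head size
        layers.Dropout(0.5),                   # dropout suppresses overfitting
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

With `weights="imagenet"` the pre-trained ImageNet parameters are reused; models A and B differ only in `num_classes`.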

2.2 Cascading Feature Extraction Architecture

In a convolution network, the information extracted by the bottom layers is local and highly generic, such as visual edges, colors, and textures, while the layers near the top extract more abstract concepts, such as the outline and shape of a specific category. In this paper, the top several layers of the convolution base are unfrozen and their weights retrained, so that the abstract representations in the model are relearned and become more relevant to defect detection. In order to determine how many convolution layers should be unfrozen and retrained, this paper feeds an image of a scratch defect to the model.

The output feature maps of all convolution layers of the VGG16 model are visualized along the three dimensions of width, height, and depth, and the content of each channel is drawn as a two-dimensional (2D) image. Visual inspection shows that the first convolution layer acts as a collection of edge detectors, and almost all the information in the original holed image is preserved, as shown in Fig. 2.
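The channel-wise inspection above can be reproduced with a Keras sub-model that exposes intermediate outputs; this is a standard probing technique, not the paper's exact visualization code:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG16

def conv_activations(image, layer_names, weights="imagenet"):
    """Return the feature maps of the named VGG16 convolution layers.

    `image` is an (H, W, 3) float array; each returned array has shape
    (1, h, w, channels), and every channel can be drawn as a 2D image.
    """
    base = VGG16(weights=weights, include_top=False, input_shape=image.shape)
    outputs = [base.get_layer(name).output for name in layer_names]
    probe = tf.keras.Model(inputs=base.input, outputs=outputs)
    preds = probe.predict(image[np.newaxis], verbose=0)
    if not isinstance(preds, list):  # single-output models return one array
        preds = [preds]
    return preds

# Example: the 64 channels of the first convolution layer (cf. Fig. 2):
# acts = conv_activations(defect_image, ["block1_conv1"])
```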

Fig. 2.
A 2D image visualization of 64 channels in the first layer of the convolution layer.
Fig. 3.
The 2D visualization of image information from partial channels in the 7th–16th convolution layers: (a) 8th layer, (b) 9th layer, (c) 10th layer, (d) 11th layer, (e) 12th layer, and (f) 13th layer.

As the layers deepen, the convolution layers provide more abstract features with less information about the visual content of the image: there are more and more blank filters, and more and more category-related information. Since the visual content of the image gradually disappears after the 8th layer, the 8th–13th convolution layers are selected for unfreezing in this paper; 2D visualizations of partial channels of these layers are shown in Fig. 3. After the 8th–13th layers of the convolution base are unfrozen, the custom 14th–16th layers that were added are retrained together with them. The resulting adjustment of the dual-model defect detection architecture proposed in this paper is shown in Fig. 4.
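Unfreezing only the top convolution layers while keeping the lower, generic layers frozen can be sketched as follows. In the standard Keras naming of VGG16, `block4_conv1` is the 8th convolution layer, so this boundary leaves the 8th–13th convolution layers (blocks 4 and 5) trainable:

```python
from tensorflow.keras.applications import VGG16

def unfreeze_top_convs(base, first_trainable="block4_conv1"):
    """Freeze every layer before `first_trainable`; unfreeze the rest.

    With the default boundary, blocks 1-3 of VGG16 stay frozen and the
    8th-13th convolution layers (blocks 4 and 5) are retrained.
    """
    base.trainable = True
    trainable = False
    for layer in base.layers:
        if layer.name == first_trainable:
            trainable = True
        layer.trainable = trainable
    return base
```

In practice the model is then recompiled with a small learning rate, so that retraining adjusts rather than destroys the pre-trained representations.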

2.3 Framework of Defect Detection Model

In this paper, the whole convolution base of VGG16 is extracted twice. Dense layers are added on top of the convolution base to extend the model, and the densely connected classifiers of defect recognition model A and defect classification model B are trained respectively, with the entire model run end-to-end on the input data so that each input image passes through the convolution base. Model A is trained as a two-class defect detection model, and model B is trained as an 8-class defect type recognition model. The two models are merged to obtain the final defect recognition algorithm: when model A detects a defect, it sends the defect image to model B for defect type identification; otherwise, the image is judged as flawless by model A alone. This cascade speeds up defect detection, as shown in Fig. 4.

Fig. 4.
Adjustment diagram of defect detection model framework proposed in this study.
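The merging logic of Section 2.3, where model B runs only when model A flags a defect, can be sketched independently of any framework; the 0.5 threshold and the label names are illustrative assumptions:

```python
def cascade_predict(detect_prob, classify, image, threshold=0.5):
    """Two-stage cascade: a binary detector gates an 8-class classifier.

    detect_prob(image) -> probability that the image contains a defect
    classify(image)    -> list of 8 defect-class probabilities
    Most flawless images stop at the first stage, which is what makes
    the cascade faster than always running both models.
    """
    if detect_prob(image) < threshold:
        return "flawless"  # model B is never invoked
    scores = classify(image)
    return f"defect_class_{scores.index(max(scores))}"

# Stub models standing in for trained A and B:
model_a = lambda img: 0.9 if img["has_defect"] else 0.1
model_b = lambda img: [0.0, 0.1, 0.7, 0.05, 0.05, 0.1, 0.0, 0.0]
```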

3. Experiment and Discussion

3.1 Dataset

The fabric dataset used in this paper comes from 4,036 super-clear fabric images published by Jiangsu Sunshine Group, including 2,960 flawless images and 2,676 flawed images. There are 42 types of defects, including bow hanging, spot, warp skipping, double dimension, knot, fabric thinning, hole making, oil stain, warp missing, weft missing, and so on, all confirmed manually by experienced cloth inspectors. The labelImg tool is then used for manual annotation. The resolution of each image is 2560×1920, with clear features and details, and a corresponding XML-format annotation file is generated that records the defect types. The dataset is shown in Fig. 5.

3.2 Data Preprocessing

A large difference between the number of defective and non-defective images in the dataset causes a data imbalance problem, which leads to overfitting during model training [6]. At the same time, given the training hardware limitations (two GTX 1080 Ti 11 GB GPUs), the high resolution of the original images makes the amount of computation during training very large. Therefore, the training data need to be pre-processed before model training. The development language in these experiments is Python 3.7 and the deep learning framework is TensorFlow.

Fig. 5.
Partial images in dataset: (a) hole making, (b) warp skipping, (c) oil stains, (d) spot, (e) fabric thinning, (f) bow hanging, (g) warp missing, and (h) weft missing.

In this paper, eight kinds of defect samples are selected: hole making, warp skipping, oil stain, spot, fabric thinning, bow hanging, warp missing, and weft missing, as shown in Fig. 5. To improve detection accuracy, more feature details of the defect image are needed; if the image size is reduced directly, the details of the defect are sacrificed and the amount of information the image carries decreases. Therefore, according to the location and area of the defects, each defect image is sampled 4–8 times with a 320×320 window. Data augmentation is then used to expand the dataset during training; it applies a variety of random transformations to existing training samples to enhance the generalization ability of the model. In this paper, the augmentation uses transformation parameters such as a 90° random rotation range, horizontal and vertical translation of 0.3, a 90° random shear, a perspective transformation of 0.2, a zoom range of 0.1, and random horizontal and vertical flipping. Some augmented images of a common hole defect are shown in Fig. 6.
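The window sampling and augmentation steps might be sketched as follows. The `ImageDataGenerator` parameters mirror the transformations listed above, except that this class has no built-in perspective transform, which would need a separate implementation:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def sample_window(image, cy, cx, size=320):
    """Crop a size x size window centred on a defect at (cy, cx)."""
    h, w = image.shape[:2]
    y = int(np.clip(cy - size // 2, 0, h - size))
    x = int(np.clip(cx - size // 2, 0, w - size))
    return image[y:y + size, x:x + size]

# Random transformations corresponding to the parameters in the text.
augmenter = ImageDataGenerator(
    rotation_range=90,       # 90-degree random rotation range
    width_shift_range=0.3,   # horizontal translation
    height_shift_range=0.3,  # vertical translation
    shear_range=90,          # 90-degree random shear
    zoom_range=0.1,
    horizontal_flip=True,
    vertical_flip=True,
)
```

Each defect image is cropped 4–8 times according to the defect location, and the generator then yields randomly transformed batches during training.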

Finally, for the training of defect detection model A, 1,876 flawless images and 1,876 flawed images are selected and divided into 3,002 training images, 376 validation images, and 374 test images according to a ratio of 8:1:1. For the training of defect recognition model B, 1,068 defect images per category are selected from the eight categories at a 1:1 ratio, 8,544 in total, which are divided into 6,834 training images, 855 validation images, and 855 test images according to a ratio of 8:1:1. For the test of the merged model and the other comparative tests, 1,600 original images not used in training or testing are employed, including 800 flawed images from the eight categories and 800 flawless images.
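The 8:1:1 split can be sketched as a simple shuffled partition; exact rounding at the boundaries may differ slightly from the counts reported above:

```python
import numpy as np

def split_811(items, seed=0):
    """Shuffle and partition into ~80% train, ~10% validation, ~10% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(items))
    n_train = int(0.8 * len(items))
    n_val = int(0.1 * len(items))
    train = [items[i] for i in idx[:n_train]]
    val = [items[i] for i in idx[n_train:n_train + n_val]]
    test = [items[i] for i in idx[n_train + n_val:]]
    return train, val, test

# Model A: 1,876 flawless + 1,876 flawed images = 3,752 in total.
train, val, test = split_811(list(range(3752)))
```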

Fig. 6.
Partial enlarged images after data augmentation applied to a holed image: the original image is shown in (a) and the augmented images in (b).
3.3 Model Training

In model training, in order to obtain a model with high precision and strong performance, it is necessary to adjust the hyperparameters repeatedly to prevent overfitting.

During training, it is necessary to monitor the training accuracy, training loss, validation accuracy, and validation loss. If the performance of the model on the validation data begins to decline, overfitting can be diagnosed. To prevent overfitting, in addition to data augmentation and dropout, early stopping is also set in the program: when the validation accuracy does not improve for 15 consecutive training rounds, training is terminated early. Guided by this feedback, we adjust the hyperparameters repeatedly and try different dropout ratios until the model achieves its best performance. The loss and accuracy curves of the training processes of model A and model B are shown in Fig. 7 and Fig. 8, respectively.
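The early-stop rule above (terminate when validation accuracy fails to improve for 15 consecutive rounds) corresponds directly to a Keras `EarlyStopping` callback; the monitored metric name assumes the model was compiled with `metrics=["accuracy"]`:

```python
import tensorflow as tf

# Stop training when validation accuracy has not improved for 15 rounds,
# and roll back to the weights of the best round seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_accuracy",
    patience=15,
    restore_best_weights=True,
)

# Typical use when training model A or model B:
# history = model.fit(train_data, validation_data=val_data,
#                     epochs=100, callbacks=[early_stop])
```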

Fig. 7.
(a) Training loss and (b) training accuracy curve after 100 rounds of model A training.

After training, model A and model B are tested on their respective test sets. For each prediction, the maximum probability value is retained, and the average of the first three values is taken. The resulting accuracy of defect detection model A is 0.967, that of defect recognition model B is 0.916, and the accuracy after merging the two models is 0.943.

Fig. 8.
(a) Training loss and (b) training accuracy curve after 100 rounds of model B training.
3.4 Experiments and Analysis

In the first experiment, the VGG16, ResNet-50, and Xception feature extraction methods and the proposed method are each trained on the training dataset, yielding the weight parameters of the different models. The test time and accuracy rate of the four methods on the testing dataset are shown in Table 1.

Table 1.
Performance comparison results of different methods

According to the classification results in Table 1, the number of weight parameters of the defect detection algorithm based on the cascading feature extraction architecture is clearly the largest. However, most flawless images are filtered out by the defect detection model, so the subsequent type identification model is not needed for them. As a result, the overall test time is the shortest, and the detection accuracy reaches 94.3%.

In the second experiment, traditional feature extraction methods are reproduced: the histogram of oriented gradients (HOG) feature method proposed in [7,8] and the local binary pattern (LBP) feature method proposed in [9,10] are tested on the test dataset of this paper and compared with the proposed method. The experimental results are shown in Table 2.

Table 2.
Accuracy rate of different algorithms

Compared with the mainstream single models, the advantages of the defect detection algorithm based on the cascading feature extraction architecture are obvious. According to the experimental results in Table 2, the test accuracy of the HOG-based defect detection method is only 70.51%; the main reason is that this algorithm detects extracted image blocks and, because the cutting edges are not processed, part of each cutting edge is detected as a defect. The other method obtains LBP features of the local fabric texture and then performs unsupervised defect detection; its accuracy is only 69.65% on the test dataset.
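For reference, the LBP baseline turns each pixel's 3×3 neighbourhood into an 8-bit code and uses the code histogram as a texture feature; a minimal NumPy sketch (not the exact implementation of [9,10]) is:

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 8-neighbour LBP histogram of a 2D grayscale image."""
    c = gray[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    # Clockwise 8-neighbourhood offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = gray.shape
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((neighbour >= c).astype(np.uint8) << bit)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()  # normalized 256-bin texture descriptor
```

A classifier (softmax, as in Table 2) is then trained on such histograms, or defect regions are located by comparing local histograms with the flawless texture.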

As Table 2 shows, the traditional detection algorithms are less robust, place higher requirements on the data samples, and have weak generalization ability. Moreover, the detection time of the two traditional algorithms is limited by the number of detection samples and the pixel size of the defects, so effective timing statistics cannot be collected. In contrast, the proposed method improves on the accuracy of the two traditional algorithms by 23.79 and 24.65 percentage points, respectively, and its detection speed is constant, fast, and unaffected by the size of the defect.

4. Conclusion

This paper proposes a defect detection algorithm based on a cascading feature extraction architecture, aiming to improve the speed and accuracy of denim fabric defect detection. (1) The experimental results show that this method has obvious advantages over mainstream single-model algorithms and traditional feature-based detection algorithms: the accuracy rate reaches 94.30% and the speed is increased by 1–3 percentage points. (2) Eight kinds of fabric defects, namely hole making, warp skipping, oil stain, spot, fabric thinning, bow hanging, warp missing, and weft missing, can be accurately identified by the proposed method, with faster speed, more defect types detected, and better robustness, basically meeting the needs of industrial production. (3) In future work, we will continue to optimize the model and use other deep learning networks and transfer learning, such as ResNet-50, YOLOv3, and YOLOv5, to further improve the accuracy and speed of fabric defect detection.

Acknowledgement

This work was supported by a grant from the State Key Laboratory of New Textile Materials and Advanced Processing Technologies at Wuhan Textile University (No. FZ2020005).

Biography

Shuangbao Ma
https://orcid.org/0000-0001-7101-423X

He received the Ph.D. degree in control science and engineering from Huazhong University of Science and Technology in 2013. His current research interests include control, image processing and deep learning.

Biography

Renchao Zhang
https://orcid.org/0000-0003-2691-7661

He is currently studying for a master's degree at Wuhan Textile University. His current research interests include facial recognition and deep learning.

Biography

Yujie Dong
https://orcid.org/0000-0002-5825-3524

She is currently studying for a master's degree in Electronic Science and Technology at Wuhan Textile University. Her current research interests include intelligent detection and control, image processing and deep learning.

Biography

Yuhui Feng
https://orcid.org/0000-0001-5993-5845

She graduated from Wuhan Textile University in 2021 with a bachelor's degree in automation. She is pursuing graduate studies in electronic science and technology at Wuhan Textile University, with research interests including control, image processing and deep learning.

Biography

Guoqin Zhang
https://orcid.org/0000-0002-7137-5198

She received a master's degree in power electronics and electric drive from Wuhan University in 2013. Her current research interests include control and power conversion.

References

  • 1 Y. Ding and Z. Yang, "Fabric image defect detection based on fission particle filter algorithm," Textile Auxiliaries, vol. 36, no. 4, pp. 60-64, 2019.
  • 2 S. Zhao, J. Zhang, J. Wang, and C. Xu, "Fabric defect detection algorithm based on two-stage deep transfer learning," Journal of Mechanical Engineering, vol. 57, no. 17, pp. 86-97, 2021. https://doi.org/10.3901/jme.2021.17.086
  • 3 N. Lang, D. Wang, P. Cheng, S. Zuo, and P. Zhang, "Virtual-sample-based defect detection algorithm for aluminum tube surface," Measurement Science and Technology, vol. 32, no. 8, article no. 085001, 2021. https://doi.org/10.1088/1361-6501/abf865
  • 4 Q. Xu and L. Zhou, "Straw defect detection algorithm based on pruned YOLOv3," in Proceedings of 2021 4th International Conference on Control and Computer Vision, Macau, China, 2021, pp. 64-69. https://doi.org/10.1145/3484274.3484285
  • 5 H. Y. Ngan, G. K. Pang, and N. H. Yung, "Motif-based defect detection for patterned fabric," Pattern Recognition, vol. 41, no. 6, pp. 1878-1894, 2008. https://doi.org/10.1016/j.patcog.2007.11.014
  • 6 S. Ma, W. Liu, C. You, S. Jia, and Y. Wu, "An improved defect detection algorithm of jean fabric based on optimized Gabor filter," Journal of Information Processing Systems, vol. 16, no. 5, pp. 1008-1014, 2020. https://doi.org/10.3745/JIPS.02.0140
  • 7 J. Luo and K. Lu, "Yarn-dyed fabric defect detection based on convolution neural network and transfer learning," Shanghai Textile Science & Technology, vol. 47, no. 6, pp. 52-56, 2019.
  • 8 C. Li, G. Gao, Z. Liu, Q. Liu, and W. Li, "Fabric defect detection algorithm based on histogram of oriented gradient and low-rank decomposition," Journal of Textile Research, vol. 38, no. 3, pp. 149-154, 2017. https://doi.org/10.13475/j.fzxb.20160304106
  • 9 J. Zhou, J. Wang, and W. Gao, "Unsupervised fabric defect segmentation using local texture feature," Journal of Textile Research, vol. 37, no. 12, pp. 43-48, 2016. https://doi.org/10.1080/00405000.2015.1131440
  • 10 D. Yapi, M. S. Allili, and N. Baaziz, "Automatic fabric defect detection using learning-based local textural distributions in the contourlet domain," IEEE Transactions on Automation Science and Engineering, vol. 15, no. 3, pp. 1014-1026, 2018. https://doi.org/10.1109/tase.2017.2696748

Table 1.

Performance comparison results of different methods

Method           Weight parameters (million)   Test time (s/piece)   Accuracy rate (%)
VGG16            138                           3.4                   88.76
ResNet-50        25                            1.2                   91.32
Xception         22                            0.9                   93.10
Proposed method  213                           0.5                   94.30

Table 2.

Accuracy rate of different algorithms

Algorithm        Accuracy rate (%)
HOG + Softmax    70.51
LBP + Softmax    69.65
Proposed method  94.30