Application of a ResNet-50 CNN to Optimize Classification of Fashion Data

Authors

  • Arimbi Puspitasari, Universitas Islam Negeri Sunan Ampel Surabaya, Indonesia
  • Diana Sava Salsabila, Universitas Islam Negeri Sunan Ampel Surabaya, Indonesia
  • Dwi Roliawati, Universitas Islam Negeri Sunan Ampel Surabaya, Indonesia

Keywords:

Convolutional Neural Network (CNN), ResNet-50, Image Classification, Fashion

Abstract

This study applies a Convolutional Neural Network (CNN) architecture based on ResNet-50 to the classification of fashion product images. The model is developed to recognize product categories such as t-shirts, trousers, and shoes, exploiting residual learning, which enables the network to learn complex visual features more effectively. The methodology comprises image data collection and preprocessing, training of the CNN model using ResNet-50, and performance evaluation using accuracy, precision, recall, and F1-score. Experimental results show that the model achieves an accuracy of 99.44% on the training data and 97.83% on the testing data, indicating good generalization to previously unseen data. Further evaluation with a confusion matrix shows that most samples are classified correctly, although some misclassifications remain in certain categories. With average precision, recall, and F1-score reaching 98%, the model demonstrates high performance in fashion image classification. These results suggest that ResNet-50 can serve as a reliable basis for product recommendation systems, digital catalogs, and image-based inventory management, although there is still room for improvement, particularly for classes that are difficult to classify.
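The evaluation metrics reported above (accuracy, per-class precision, recall, and F1-score, plus their macro averages) can all be derived from a confusion matrix. The sketch below shows that computation in plain Python; the 3×3 matrix and its class labels are purely illustrative placeholders, not the paper's actual data.

```python
# Compute accuracy and per-class precision/recall/F1 from a confusion
# matrix, where cm[i][j] = number of samples of true class i that the
# model predicted as class j.

def metrics_from_confusion(cm):
    n = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n))
    accuracy = correct / total
    per_class = []
    for k in range(n):
        tp = cm[k][k]
        fp = sum(cm[i][k] for i in range(n)) - tp  # predicted k, true other
        fn = sum(cm[k]) - tp                       # true k, predicted other
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        per_class.append((precision, recall, f1))
    # Macro average: unweighted mean over classes
    macro = tuple(sum(m[i] for m in per_class) / n for i in range(3))
    return accuracy, per_class, macro

# Hypothetical counts for three classes: t-shirt, trousers, shoes
cm = [[95, 3, 2],
      [4, 94, 2],
      [1, 1, 98]]
acc, per_class, (p, r, f1) = metrics_from_confusion(cm)
print(f"accuracy={acc:.4f}  macro P={p:.4f} R={r:.4f} F1={f1:.4f}")
```

Macro averaging (as opposed to weighting by class support) treats every category equally, which matches reporting a single average precision/recall/F1 across fashion classes.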


Published

2025-06-02
