The Comparison of GAN and CNN Models in the Innovation of Coloring Madura and Bali Batik
Abstract
This study aims to innovate automatic coloring of batik patterns using deep learning models. Specifically, it compares the performance of a Generative Adversarial Network (GAN) with that of a pretrained Caffe-based Convolutional Neural Network (CNN) in coloring images of Madura and Bali batik. The dataset consists of 388 Madura batik images for training, 97 for validation, and 20 distinct Bali and Madura batik images for testing. It was assembled by web scraping batik posts from social media platforms such as Instagram, from Bing Image Search using specific keywords, and from Kaggle, followed by manual combination and cleaning. The GAN model was trained for varying numbers of epochs (40, 80, and 150), while the CNN used pretrained Caffe weights. Evaluation was conducted using Peak Signal-to-Noise Ratio (PSNR), Fréchet Inception Distance (FID), Mean Squared Error (MSE), and Structural Similarity Index (SSIM). The results indicate that the GAN trained for 150 epochs outperformed the CNN, achieving a PSNR of 29.702, an FID of 84.016, an MSE of 511.8812, and an SSIM of 0.9925, demonstrating superior color generation and artistic detail. Conversely, the CNN exhibited lower performance, with a PSNR of 28.218, an FID of 200.271, and an SSIM of 0.7925, indicating its limitations in preserving the intricate patterns and colors of batik. This research demonstrates the applicability of GANs to automatic batik coloring, potentially providing innovative solutions for the batik industry while maintaining the cultural and artistic integrity of traditional designs.
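As a minimal sketch of how two of the reported metrics relate, MSE measures the average squared pixel difference between the generated and reference image, and PSNR is derived from it as 10·log10(MAX²/MSE). The example below is illustrative only (the toy arrays are not from the paper's dataset) and assumes 8-bit images with a peak value of 255:

```python
import numpy as np

def mse(a, b):
    # Mean squared error between two images of identical shape.
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, max_val=255.0):
    # Peak Signal-to-Noise Ratio in dB; higher means the colorized
    # output is closer to the ground-truth image.
    err = mse(a, b)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / err)

# Hypothetical toy patches: a reference vs. a "colorized" version
# where a single pixel differs by 10 intensity levels.
ref = np.zeros((4, 4), dtype=np.uint8)
out = ref.copy()
out[0, 0] = 10

print(mse(ref, out))              # 100 / 16 = 6.25
print(round(psnr(ref, out), 2))   # ~40.17 dB
```

SSIM and FID require windowed luminance/contrast statistics and Inception-network features respectively, so in practice they are computed with libraries such as scikit-image (`structural_similarity`) rather than by hand.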
DOI: https://doi.org/10.12962/jaree.v9i2.467
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.