Neural Style Transfer
DOI: https://doi.org/10.46610/JoDDM.2025.v10i03.004

Keywords: Convolutional neural networks, Content features, Neural style transfer, Style features, VGG19 model

Abstract
Neural Style Transfer (NST) is a deep learning method that creates a new image by combining the content of one image with the style of another. The technique relies on Convolutional Neural Networks (CNNs): high-level content features are extracted from the deeper layers of a pre-trained model, while style features are taken from its lower layers. The goal of this project is to optimize the generated image so that it closely matches the original image in content while still capturing the style and texture of the reference image. The optimization is guided by two loss terms: a content loss, which preserves the main structure and meaning of the original image, and a style loss, which reproduces the color, texture, and patterns of the style image. The backbone is the VGG19 model, whose 19 weight layers (16 convolutional and 3 fully connected) make it both simple and effective for tasks such as feature extraction and image classification.
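As a concrete illustration of the procedure the abstract describes, below is a minimal sketch using PyTorch and torchvision's pre-trained VGG19. The layer indices, loss weights (alpha, beta), learning rate, and step count are illustrative assumptions, not values taken from this article; the network stays frozen and only the pixels of the generated image are optimized.

import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen feature extractor: only the convolutional part of VGG19
# (the `.features` block) is used; the classifier head is not needed.
vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# Assumed layer choices (torchvision indices): conv4_2 for content,
# conv1_1 through conv5_1 for style -- the usual picks, not the paper's.
CONTENT_LAYERS = {21}
STYLE_LAYERS = {0, 5, 10, 19, 28}

def extract_features(x):
    """Run x through VGG19, collecting activations at the chosen layers."""
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in CONTENT_LAYERS or i in STYLE_LAYERS:
            feats[i] = x
    return feats

def gram_matrix(feat):
    """Channel-wise correlations that summarize texture, i.e. style."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def nst_loss(gen, content_feats, style_grams, alpha=1.0, beta=1e3):
    """Weighted sum of content loss and style loss, as in the abstract."""
    gf = extract_features(gen)
    content_loss = sum(F.mse_loss(gf[i], content_feats[i])
                       for i in CONTENT_LAYERS)
    style_loss = sum(F.mse_loss(gram_matrix(gf[i]), style_grams[i])
                     for i in STYLE_LAYERS)
    return alpha * content_loss + beta * style_loss

# Placeholder inputs; in practice, load and normalize real images here.
content_img = torch.rand(1, 3, 256, 256, device=device)
style_img = torch.rand(1, 3, 256, 256, device=device)

# Target features are computed once, with gradients disabled.
with torch.no_grad():
    content_feats = extract_features(content_img)
    style_grams = {i: gram_matrix(f)
                   for i, f in extract_features(style_img).items()
                   if i in STYLE_LAYERS}

# Optimize the generated image's pixels, starting from the content image.
gen = content_img.clone().requires_grad_(True)
opt = torch.optim.Adam([gen], lr=0.02)
for step in range(300):
    opt.zero_grad()
    loss = nst_loss(gen, content_feats, style_grams)
    loss.backward()
    opt.step()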