Journal Press India®

Ensemble WGAN (EWG): Advancing Image Synthesis and Deepfake Detection with Heterogeneous Discriminator Approach

Vol. 3, Issue 2, July-December 2023 | Pages: 69-97 | Research Paper

https://doi.org/10.17492/computology.v3i2.2304


Author Details (* denotes Corresponding author)

1. * Preeti Sharma, JRF, School of Computer Sciences, UPES, Dehradun, Uttarakhand, India (preetiii.kashyup@gmail.com)
2. Manoj Kumar, Associate Professor, Engineering and Information Sciences, University of Wollongong in Dubai, Dubai, United Arab Emirates (wss.manojkumar@gmail.com)
3. Hitesh Kumar Sharma, Senior Associate Professor, School of Computer Science, UPES, Dehradun, Uttarakhand, India (hkshitesh@gmail.com)

Deepfakes currently pose a major threat to the security of our society, and concerns about such fake images being used for malicious purposes on social networking sites have grown. As a solution, this paper proposes a new model called EWG (Ensemble WGAN), which detects deepfakes through its unique ensemble architecture. The EWG model is an extension of the WGAN architecture that improves deepfake detection and addresses GAN training issues. It employs a voting ensemble of three distinct discriminators and a single generator, with the generator weights updated by the best discriminator in each epoch. The model dynamically selects the best discriminator using a diverse loss function that combines adversarial loss with the SSIM metric, boosting diversified performance. Leveraging the “Indian Actor Images Dataset” and the “5-Celebrity Faces” dataset, the EWG model achieves deepfake detection accuracies of 98.480% and 96.417%, with computation times of 1813.251 and 2197.011 seconds, respectively. Furthermore, it mitigates GAN training challenges such as mode collapse, gradient penalties, and convergence, and delivers superior image quality, surpassing the basic WGAN and other state-of-the-art methods. The EWG model thus demonstrates its dependability and potential for countering deepfakes and improving GAN capabilities.
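The training idea described above (one generator, several heterogeneous WGAN critics, and a per-epoch choice of the best critic based on a composite adversarial-plus-SSIM score) can be sketched in a few lines of PyTorch. The sketch below is only illustrative, not the authors' implementation: the layer sizes, the equal weighting of the two loss terms, the weight-clipping WGAN variant, the global (non-windowed) SSIM, and the random stand-in data are all assumptions, and the voting-based deepfake-detection stage is omitted.

```python
# Minimal sketch of the EWG idea from the abstract: a single WGAN generator
# trained against three heterogeneous critics, where the critic that drives
# the generator update each epoch is chosen by an adversarial + SSIM score.
# All sizes, weightings, and data below are illustrative assumptions.
import torch
import torch.nn as nn

LATENT, IMG = 64, 32 * 32  # latent size and flattened 32x32 grayscale image

def make_critic(width, depth):
    """Heterogeneous critics: same interface, different width/depth."""
    layers, d_in = [], IMG
    for _ in range(depth):
        layers += [nn.Linear(d_in, width), nn.LeakyReLU(0.2)]
        d_in = width
    layers.append(nn.Linear(d_in, 1))  # WGAN critic: unbounded real-valued score
    return nn.Sequential(*layers)

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),
)
critics = [make_critic(128, 2), make_critic(256, 3), make_critic(512, 4)]

g_opt = torch.optim.RMSprop(generator.parameters(), lr=5e-5)
c_opts = [torch.optim.RMSprop(c.parameters(), lr=5e-5) for c in critics]

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified (non-windowed) SSIM over whole flattened image batches."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def fake_batch(n):
    return generator(torch.randn(n, LATENT))

for epoch in range(5):                       # toy number of epochs
    real = torch.rand(64, IMG) * 2 - 1       # stand-in for a real data loader
    # 1) update every critic with the standard WGAN critic loss (weight clipping)
    scores = []
    for critic, opt in zip(critics, c_opts):
        fake = fake_batch(64).detach()
        loss_c = critic(fake).mean() - critic(real).mean()
        opt.zero_grad(); loss_c.backward(); opt.step()
        for p in critic.parameters():
            p.data.clamp_(-0.01, 0.01)
        # composite selection score: adversarial loss plus a (1 - SSIM) penalty;
        # the equal weighting here is an assumption, not the paper's setting
        with torch.no_grad():
            score = loss_c + (1 - global_ssim(fake, real))
        scores.append(score.item())
    # 2) the best-scoring critic alone drives this epoch's generator update
    best = critics[min(range(len(critics)), key=lambda i: scores[i])]
    loss_g = -best(fake_batch(64)).mean()
    g_opt.zero_grad(); loss_g.backward(); g_opt.step()
    print(f"epoch {epoch}: critic scores {[round(s, 3) for s in scores]}, "
          f"G loss {loss_g.item():.3f}")
```

Selecting a single best critic per epoch keeps the generator's gradient signal focused while the remaining critics continue to train, which is one plausible way to realise the heterogeneous-discriminator scheme described in the abstract.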

Keywords

Deep Learning; Digital Forensics; Generative Adversarial Networks (GAN); Ensemble GAN Model; Deepfake

References

  1. Rezaei, M., Näppi, J. J., Lippert, C., Meinel, C. & Yoshida, H. (2020). Generative multi-adversarial network for striking the right balance in abdominal image segmentation. International Journal of Computer Assisted Radiology and Surgery, 15(11), 1847–1858. Retrieved from doi: 10.1007/s11548-020-02254-4.
  2. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. & Bengio, Y. (2020). Generative adversarial networks. Communications of the ACM, 63(11), 139-144. Retrieved from doi: 10.1145/3422622.
  3. Radford, A., Metz, L. & Chintala, S. (2016). Unsupervised representation learning with deep convolutional generative adversarial networks. 4th International Conference on Learning Representations (ICLR 2016), pp. 1–16.
  4. Cao, Y.-J., Jia, L. L., Chen, Y. X., Lin, N., Yang, C., Zhang, B., Liu, Z., Li, X. X., & Dai, H.H. (2019). Recent advances of generative adversarial networks in computer vision. IEEE Access, 7(C), 14985–15006. Retrieved from doi: 10.1109/ACCESS.2018.2886814.
  5. Isola, P., Zhu, J. Y., Zhou, T. & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), pp. 5967–5976. Retrieved from doi: 10.1109/CVPR.2017.632.
  6. Quan, F., Lang, B. & Liu, Y. (2022). ARRPNGAN: Text-to-image GAN with attention regularization and region proposal networks. Signal Processing: Image Communication, 106, 116728. Retrieved from doi: 10.1016/j.image.2022.116728.
  7. Arjovsky, M., Chintala, S. & Bottou, L. (2017). Wasserstein generative adversarial networks. 34th International Conference on Machine Learning (ICML 2017), 1, 298–321.
  8. Khanuja, S. S. & Khanuja, H. K. (2021). GAN challenges and optimal solutions. International Research Journal of Engineering and Technology (IRJET), 8(10), 836–840.
  9. Ganaie, M. A., Hu, M., Malik, A. K., Tanveer, M. & Suganthan, P. N. (2022). Ensemble deep learning: A review. Engineering Applications of Artificial Intelligence, 115, 105151. Retrieved from doi: 10.1016/j.engappai.2022.105151.
  10. Wu, Z., He, C., Yang, L. & Kuang, F. (2021). Attentive evolutionary generative adversarial network. Applied Intelligence, 51(3), 1747–1761. Retrieved from doi: 10.1007/s10489-020-01917-8.
  11. Aggarwal, A., Mittal, M. & Battineni, G. (2021). Generative adversarial network: An overview of theory and applications. International Journal of Information Management Data Insights, 1(1), 100004. Retrieved from  doi: 10.1016/j.jjimei.2020.100004.
  12. Wang, Y., Zhang, L. & van de Weijer, J. (2016). Ensembles of generative adversarial networks. Retrieved from http://arxiv.org/abs/1612.00991
  13. Zhang, R., Isola, P., Efros, A. A., Shechtman, E. & Wang, O. (2018). The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018). Retrieved from https://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_The_Unreasonable_Effectiveness_CVPR_2018_paper.pdf
  14. Wang, Z., Bovik, A. C., Sheikh, H. R. & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600–612. Retrieved from doi: 10.1109/TIP.2003.819861.
  15. Aduwala, S. A., Arigala, M., Desai, S., Quan, H. J. & Eirinaki, M. (2021). Deepfake detection using GAN discriminators. IEEE 7th International Conference on Big Data Computing Service and Applications (BigDataService), pp. 69–77. Retrieved from doi: 10.1109/BigDataService52369.2021.00014.
  16. Xie, Y., Lin, T., Chen, Z., Xiong, W., Ran, Q. & Shang, C. (2022). A lightweight ensemble discriminator for generative adversarial networks. Knowledge-Based Systems, 250, 108975. Retrieved from doi: 10.1016/j.knosys.2022.108975.
  17. Sharma, P., Kumar, M. & Sharma, H. (2022). Comprehensive analyses of image forgery detection methods from traditional to deep learning approaches: An evaluation. Multimedia Tools and Applications, 82(12), 18117-18150.
  18. 5-Celebrity Faces Dataset, Kaggle. Retrieved from https://www.kaggle.com/datasets/dansbecker/5-celebrity-faces-dataset
  19. Indian Actor Images Dataset, Kaggle. Retrieved from https://www.kaggle.com/datasets/iamsouravbanerjee/indian-actor-images-dataset
  20. Benny, Y., Galanti, T., Benaim, S. & Wolf, L. (2021). Evaluation metrics for conditional image generation. International Journal of Computer Vision, 129, 1712-1731.
  21. Kinakh, V., Drozdova, M., Quétant, G., Golling, T. & Voloshynovskiy, S. (2021). Information-theoretic stochastic contrastive conditional GAN: InfoSCC-GAN. arXiv preprint arXiv:2112.09653.
  22. Castro, F. M., Marín-Jiménez, M. J., Guil, N., Schmid, C. & Alahari, K. (2018). End-to-end incremental learning. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 233-248.
  23. Kumar, M. & Sharma, H. K. (2023). A GAN-based model of deepfake detection in social media. Procedia Computer Science, 218, 2153-2162.
  24. Shmelkov, K., Schmid, C. & Alahari, K. (2018). How good is my GAN? In Proceedings of the European Conference on Computer Vision (ECCV), pp. 213-229.
  25. Li, C., Wang, Z. & Qi, H. (2018). Fast-converging conditional generative adversarial networks for image synthesis. In 2018 25th IEEE International Conference on Image Processing (ICIP), pp. 2132-2136. IEEE.
  26. Chavdarova, T. & Fleuret, F. (2018). SGAN: An alternative training of generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9407-9415.
  27. Brock, A., Donahue, J. & Simonyan, K. (2018). Large scale GAN training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096.