Generating Sea Surface Object Image Using Image-to-Image Translation


International Journal of Advanced Network, Monitoring and Controls

Xi'an Technological University

Subject: Computer Science, Software Engineering


eISSN: 2470-8038



Wenbin Yin, Jun Yu, Zhiyi Hu

Keywords: Image Generation, Conditional Generative Adversarial Network, Sea Surface Object, Image-to-Image Translation

Citation Information: International Journal of Advanced Network, Monitoring and Controls, Volume 6, Issue 2, Pages 48-55, DOI: https://doi.org/10.21307/ijanmc-2021-016

License: CC-BY-NC-ND 4.0

Published Online: 12-July-2021


ABSTRACT

Content not available.

FIGURES & TABLES

Figure 1. Conditional generative adversarial network structure.

Figure 2. Network structure of the generator.

Figure 3. Network structure of the discriminator.

Figure 4. Data annotation example.

Figure 5. Results of the comparative experiment.
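The article body is not reproduced on this page, but the figures above describe a pix2pix-style conditional GAN (generator, discriminator, paired training data; see references 2 and 3). As an illustration only, the sketch below shows the standard pix2pix training objective such a setup typically optimizes: a conditional adversarial term plus a weighted L1 reconstruction term. All function names and the weight `lam=100` are assumptions taken from the pix2pix paper, not from this article.

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy averaged over all elements (e.g. PatchGAN patch scores)."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)  # avoid log(0)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def pix2pix_generator_loss(d_fake_patches, fake_img, real_img, lam=100.0):
    """Generator objective: fool the discriminator on every patch,
    plus lambda * L1 distance to the paired ground-truth image."""
    adv = bce(d_fake_patches, np.ones_like(d_fake_patches))
    l1 = float(np.mean(np.abs(fake_img - real_img)))
    return adv + lam * l1

def discriminator_loss(d_real_patches, d_fake_patches):
    """Discriminator objective: score real (condition, target) pairs as 1
    and generated pairs as 0; the 0.5 factor follows the pix2pix convention."""
    return 0.5 * (bce(d_real_patches, np.ones_like(d_real_patches))
                  + bce(d_fake_patches, np.zeros_like(d_fake_patches)))
```

With a confident discriminator (real patches near 1, fake near 0) the discriminator loss approaches zero, while an undecided discriminator (all patches at 0.5) yields an adversarial term of ln 2; the L1 term dominates the generator loss, which is what pushes pix2pix outputs to stay close to the paired target image.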

REFERENCES

  1. I. Goodfellow, J. Pouget-Abadie, M. Mirza, et al., Generative adversarial nets, in Advances in Neural Information Processing Systems (NIPS) (2014), pp. 2672–2680.
  2. M. Mirza, S. Osindero, Conditional generative adversarial nets, arXiv preprint arXiv:1411.1784 (2014).
  3. P. Isola, J.-Y. Zhu, T. Zhou, et al., Image-to-image translation with conditional adversarial networks, in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 1125–1134.
  4. T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz, B. Catanzaro, High-resolution image synthesis and semantic manipulation with conditional GANs, in CVPR (2018), pp. 8798–8807.
  5. M. Zhai, L. Chen, F. Tung, J. He, M. Nawhal, G. Mori, Lifelong GAN: Continual learning for conditional image generation, in IEEE International Conference on Computer Vision (ICCV) (2019), pp. 2759–2768.
  6. D. Bau, H. Strobelt, W. Peebles, et al., Semantic photo manipulation with a generative image prior, arXiv preprint arXiv:2005.07727 (2020).
  7. X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, P. Abbeel, InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets, in NIPS (2016), pp. 2172–2180.
  8. J.-Y. Zhu, R. Zhang, D. Pathak, T. Darrell, A. A. Efros, O. Wang, E. Shechtman, Toward multimodal image-to-image translation, in NIPS (2017), pp. 465–476.
  9. J.-Y. Zhu, T. Park, P. Isola, et al., Unpaired image-to-image translation using cycle-consistent adversarial networks, in ICCV (2017), pp. 2223–2232.
  10. W. Xian, P. Sangkloy, V. Agrawal, et al., TextureGAN: Controlling deep image synthesis with texture patches, in CVPR (2018), pp. 8456–8465.
  11. Y. Lu, S. Wu, Y.-W. Tai, et al., Image generation from sketch constraint using contextual GAN, in European Conference on Computer Vision (ECCV) (2018), pp. 205–220.
  12. A. Gonzalez-Garcia, J. van de Weijer, Y. Bengio, Image-to-image translation for cross-domain disentanglement, in NIPS (2018), pp. 1287–1298.
  13. H. Tang, D. Xu, G. Liu, W. Wang, N. Sebe, Y. Yan, Cycle in cycle generative adversarial networks for keypoint-guided image generation, in 27th ACM International Conference on Multimedia (2019), pp. 2052–2060.
  14. Z. Gan, L. Chen, W. Wang, Y. Pu, Y. Zhang, H. Liu, C. Li, L. Carin, Triangle generative adversarial networks, in NIPS (2017), pp. 5253–5262.
  15. Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, J. Choo, StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation, in CVPR (2018).
  16. X. Huang, M.-Y. Liu, S. Belongie, et al., Multimodal unsupervised image-to-image translation, in ECCV (2018).
  17. M.-Y. Liu, T. Breuel, J. Kautz, Unsupervised image-to-image translation networks, in NIPS (2017).
  18. Y. Taigman, A. Polyak, L. Wolf, Unsupervised cross-domain image generation, in ICLR (2017).
  19. K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, D. Krishnan, Unsupervised pixel-level domain adaptation with generative adversarial networks, in CVPR (2017).
  20. E. Hosseini-Asl, Y. Zhou, C. Xiong, R. Socher, Augmented cyclic adversarial learning for low resource domain adaptation, arXiv preprint arXiv:1807.00374 (2018).
  21. M.-Y. Liu, X. Huang, A. Mallya, T. Karras, T. Aila, J. Lehtinen, J. Kautz, Few-shot unsupervised image-to-image translation, in ICCV (2019), pp. 10551–10560.
  22. T.-C. Wang, M.-Y. Liu, A. Tao, G. Liu, J. Kautz, B. Catanzaro, Few-shot video-to-video synthesis, arXiv preprint arXiv:1910.12713 (2019).
  23. A. Torralba, Contextual priming for object detection, International Journal of Computer Vision 53(2) (2003), pp. 169–191.
  24. X. Wang, A. Gupta, Generative image modeling using style and structure adversarial networks, in ECCV (2016).
  25. D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, A. A. Efros, Context encoders: Feature learning by inpainting, in CVPR (2016).
  26. D. Yoo, N. Kim, S. Park, A. S. Paek, I. S. Kweon, Pixel-level domain transfer, in ECCV (2016).
  27. K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in CVPR (2016), pp. 770–778.
