ADVERSARIAL ATTACK DETECTION AND RECOVERY: A STUDY OF THE FAST GRADIENT SIGN METHOD AND AUTOENCODERS FOR MEDICAL IMAGING
DOI: https://doi.org/10.24252/instek.v10i1.54073

Keywords: Adversarial Attacks, Adversarial Robustness, Autoencoder, Fast Gradient Sign Method, Xception

Abstract
This study addresses the challenge posed by Fast Gradient Sign Method (FGSM) attacks in medical imaging and data security. As healthcare increasingly relies on machine-learning models, guaranteeing the integrity and confidentiality of medical data has become critical. The study reviews the literature on adversarial attacks against machine learning, particularly in the context of medical imaging applications, where the vulnerabilities include data manipulation that can lead to misdiagnosis or privacy breaches. The analyzed literature spans medical journals and machine-learning conference proceedings and covers attack methods, their impact, and mitigation approaches.
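As a minimal sketch of the attack discussed here, assuming a TensorFlow/Keras classifier with one-hot labels and a cross-entropy loss (the function name fgsm_perturb, the epsilon value, and the loss choice are illustrative, not taken from the paper):

```python
import tensorflow as tf

def fgsm_perturb(model, x, y, epsilon=0.01):
    """Craft an FGSM adversarial example: x_adv = x + epsilon * sign(grad_x L)."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        pred = model(x, training=False)
        loss = tf.keras.losses.categorical_crossentropy(y, pred)  # y is one-hot
    grad = tape.gradient(loss, x)           # gradient of the loss w.r.t. the input image
    x_adv = x + epsilon * tf.sign(grad)     # single step in the direction of the gradient's sign
    return tf.clip_by_value(x_adv, 0.0, 1.0)  # keep pixel values in a valid range
```

Because the perturbation is bounded by epsilon per pixel, the attacked image is visually almost indistinguishable from the original, which is exactly why such attacks are hard to spot in clinical workflows.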
The results show that adversarial attacks can significantly degrade the performance of deep-learning models. To counter them, an autoencoder proves effective at recovering attacked data and restoring prediction accuracy. In addition, data-augmentation techniques can strengthen model robustness, especially on imbalanced datasets, and reduce the risk of overfitting. Applying the SEMMA framework (Sample, Explore, Modify, Model, and Assess), the study demonstrates that an FGSM attack that adds only a small amount of noise can derail the model's predictions, yet the damage can be effectively reversed by an autoencoder. The work contributes to the understanding of adversarial attacks in the medical domain and offers a promising defense strategy.
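A minimal sketch of the kind of convolutional denoising autoencoder the abstract describes for recovering perturbed images; the layer sizes, the 224x224x3 input shape, and the name build_denoising_autoencoder are assumptions for illustration, not the authors' architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_denoising_autoencoder(input_shape=(224, 224, 3)):
    """Convolutional autoencoder trained to map perturbed images back to clean ones."""
    inputs = tf.keras.Input(shape=input_shape)
    # Encoder: compress the (possibly perturbed) image
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D(2, padding="same")(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D(2, padding="same")(x)
    # Decoder: reconstruct the clean image at the original resolution
    x = layers.Conv2DTranspose(64, 3, strides=2, activation="relu", padding="same")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same")(x)
    outputs = layers.Conv2D(input_shape[-1], 3, activation="sigmoid", padding="same")(x)
    model = models.Model(inputs, outputs)
    # Trained on (adversarial, clean) image pairs so reconstruction removes the FGSM noise
    model.compile(optimizer="adam", loss="mse")
    return model
```

In such a pipeline, the autoencoder would be trained on pairs of FGSM-perturbed inputs and their clean originals, and at inference time every image would pass through it before reaching the classifier.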
License
Copyright (c) 2025 Jesen Winardo, Yefta Christian, Tony Tan

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.