A Generative Model based Adversarial Security of Deep Learning and Linear Classifier Models
Agarap, A. F. (2018). Deep learning using rectified linear units (relu). CoRR abs/1803.08375.
Juuti, M., B. G. Atli, and N. Asokan (2019). Making targeted black-box evasion attacks effective and efficient. CoRR abs/1906.03397.
Bai, W., C. Quan, and Z. Luo (2017). Alleviating adversarial attacks via convolutional autoencoder. In 2017 18th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), pp. 53-58. IEEE.
Bakator, M. and D. Radosav (2018). Deep learning and medical diagnosis: A review of literature. Multimodal Technologies and Interaction 2, 47.
Baker, B., I. Kanitscheider, T. Markov, Y. Wu, G. Powell, B. McGrew, and I. Mordatch (2019). Emergent tool use from multi-agent autocurricula. arXiv preprint arXiv:1909.07528.
Yuan, X., P. He, Q. Zhu, R. R. Bhat, and X. Li (2017, July). Adversarial examples: Attacks and defenses for deep learning. CoRR abs/1712.07107.
Board, F. S. (2017). Artificial intelligence and machine learning in financial services: Market developments and financial stability implications. Financial Stability Board, 45.
Athalye, A., N. Carlini, and D. A. Wagner (2018, February). Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. CoRR abs/1802.00420.
Carlini, N. and D. A. Wagner (2016). Towards evaluating the robustness of neural networks. CoRR abs/1608.04644.
Chen, I. and B. Sirkeci-Mergen (2018). A comparative study of autoencoders against adversarial attacks. Int'l Conf. IP, Comp. Vision, and Pattern Recognition.
Clevert, D.-A., T. Unterthiner, and S. Hochreiter (2015). Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289 .
Nwankpa, C., W. Ijomah, A. Gachagan, and S. Marshall (2018, November). Activation functions: Comparison of trends in practice and research for deep learning. CoRR abs/1811.03378.
Goodfellow, I. J., J. Shlens, and C. Szegedy (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 .
Han, J. and C. Moraga (1995). The influence of the sigmoid function parameters on the speed of backpropagation learning. In International Workshop on Artificial Neural Networks, pp. 195-201. Springer.
Harding, S., P. Rajivan, B. I. Bertenthal, and C. Gonzalez (2018). Human decisions on targeted and non-targeted adversarial sample. In CogSci.
Isakov, M., V. Gadepally, K. M. Gettings, and M. A. Kinsy (2019). Survey of attacks and defenses on edge-deployed neural networks. In 2019 IEEE High Performance Extreme Computing Conference (HPEC), pp. 1-8.
Jagielski, M., A. Oprea, B. Biggio, C. Liu, C. Nita-Rotaru, and B. Li (2018). Manipulating machine learning: Poisoning attacks and countermeasures for regression learning. In 2018 IEEE Symposium on Security and Privacy (SP), pp. 19-35.
Guo, J., Y. Zhao, X. Han, Y. Jiang, and J. Sun (2019, November). Rnn-test: Adversarial testing framework for recurrent neural network systems. CoRR.
Kingma, D. and J. Ba (2014, December). Adam: A method for stochastic optimization. International Conference on Learning Representations.
Auernhammer, K., R. T. Kolagari, and M. Zoppelt (2019, January). Attacks on machine learning: Lurking danger for accountability. CoRR.
Park, S., H. Ryu, S. Lee, S. Lee, and J. Lee (2019, November). Learning predict-and-simulate policies from unorganized human motion data. ACM Transactions on Graphics 38, 1-11.
Li, B. and Y. Vorobeychik (2018). Evasion-robust classification on binary domains. ACM Trans. Knowl. Discov. Data 12 (4), 50:1-50:32.
Sahay, R., R. Mahfuz, and A. El Gamal (2018, December). Combatting adversarial attacks through denoising and dimensionality reduction: A cascaded autoencoder approach. CoRR abs/1812.03087.
Chen, F., N. Chen, H. Mao, and H. Hu (2018, November). Assessing four neural networks on handwritten digit recognition dataset (MNIST). CoRR abs/1811.08278.
Chernikova, A., A. Oprea, C. Nita-Rotaru, and B. Kim (2019, April). Are self-driving cars secure? Evasion attacks against deep neural networks for steering angle prediction. CoRR abs/1904.07370.
Yu, S. and J. C. Principe (2018, March). Understanding autoencoders with information theoretic concepts. CoRR abs/1804.00057.
Huang, L., A. D. Joseph, B. Nelson, B. I. P. Rubinstein, and J. D. Tygar (2011, October). Adversarial machine learning. In Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence, AISec '11, New York, NY, USA, pp. 43-58. ACM.
Zhang, H., S. Starke, T. Komura, and J. Saito (2018, July). Mode-adaptive neural networks for quadruped motion control. ACM Trans. Graph. 37 (4), 145:1-145:11.
Schmidhuber, J. (2014, April). Deep learning in neural networks: An overview. CoRR abs/1404.7828.
Xiao, K. Y., V. Tjeng, N. M. Shafiullah, and A. Madry (2018, September). Training for faster adversarial robustness verification via inducing relu stability. CoRR abs/1809.03008.
Siddiqi, A. (2019, July). Adversarial security attacks and perturbations on machine learning and deep learning methods. CoRR.
Pinto, L., J. Davidson, R. Sukthankar, and A. Gupta (2017, March). Robust adversarial reinforcement learning. CoRR abs/1703.02702.
Gondim-Ribeiro, G., P. Tabacof, and E. Valle (2018). Adversarial attacks on variational autoencoders. CoRR abs/1806.04646.
Erba, A., R. Taormina, S. Galelli, M. Pogliani, M. Carminati, S. Zanero, and N. O. Tippenhauer (2019, July). Real-time evasion attacks with physical constraints on deep learning-based anomaly detectors in industrial control systems. CoRR abs/1907.07487.
Kurakin, A., I. J. Goodfellow, S. Bengio, et al. (2018, March). Adversarial attacks and defences competition. CoRR abs/1804.00097.
Madry, A., A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu (2017). Towards deep learning models resistant to adversarial attacks. CoRR abs/1706.06083.
Vinyals, O., I. Babuschkin, W. M. Czarnecki, M. Mathieu, A. Dudzik, J. Chung, D. H. Choi, R. Powell, T. Ewalds, P. Georgiev, et al. (2019). Grandmaster level in starcraft ii using multi-agent reinforcement learning. Nature 575 (7782), 350-354.
Wang, T.-C., M.-Y. Liu, A. Tao, G. Liu, J. Kautz, and B. Catanzaro (2019). Few-shot video-to-video synthesis. arXiv preprint arXiv:1910.12713.
Zhang, Z. and M. R. Sabuncu (2018, May). Generalized cross entropy loss for training deep neural networks with noisy labels. CoRR abs/1805.07836.
Yu, F., C. Liu, Y. Wang, L. Zhao, and X. Chen (2018, September). Interpreting adversarial robustness: A view from decision surface in input space. CoRR abs/1810.00144.
Zheng, H., Z. Yang, W. Liu, J. Liang, and Y. Li (2015). Improving deep neural networks using softplus units. In 2015 International Joint Conference on Neural Networks (IJCNN), pp. 1-4. IEEE.
Yang, L., Z. Shi, Y. Zheng, and K. Zhou (2019, November). Dynamic hair modeling from monocular videos using deep neural networks. ACM Trans. Graph. 38 (6), 235:1-235:12.
This work is licensed under a Creative Commons Attribution 3.0 License.