Machine learning has shown significant success in solving complex problems, but current machine learning algorithms produce models that are vulnerable to adversaries who may want to mislead them. Until recently, machine learning algorithms were designed to run in benign environments, and the increasing use of machine learning in security applications has motivated adversaries to target these vulnerabilities. In an adversarial environment, the adversary can interfere with both the training and the prediction phases of a machine learning algorithm. Adversarial examples are one of the most important vulnerabilities of machine learning algorithms: inputs intentionally crafted by an adversary to evade the machine learning model.
For more details, please read the following papers:
Towards the Science of Security and Privacy in Machine Learning. Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, and Michael Wellman. 3rd IEEE European Symposium on Security and Privacy, London, UK (2016)
Explaining and Harnessing Adversarial Examples. Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. arXiv preprint arXiv:1412.6572 (2014)
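As an illustration, the fast gradient sign method (FGSM) from the Goodfellow et al. paper above crafts an adversarial example by perturbing an input in the direction of the sign of the loss gradient. The sketch below is a minimal PyTorch version of that idea, assuming a differentiable classifier `model`, an input batch `x` with values in [0, 1], and true labels `y`; it is an illustrative sketch, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    """Craft adversarial examples with the fast gradient sign method (FGSM).

    Each input is moved by `epsilon` in the direction that increases the
    classification loss, which can cause the model to mispredict while the
    perturbation stays small.
    """
    # Work on a copy of the input and track gradients with respect to it.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the gradient of the loss w.r.t. the input.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed input in a valid range (assumes pixels in [0, 1]).
    return x_adv.clamp(0.0, 1.0).detach()
```

A perturbed batch returned by this function can then be fed back to the model at prediction time to check whether its outputs change, which is the evasion scenario described above.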