A common principle in cybersecurity is to never trust external inputs. Carelessly handled external input underpins most hacking techniques, because any input an attacker controls introduces the possibility of exploitation. This is equally true for APIs, mobile applications, and web applications.
It’s also true for deep neural networks.
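To make this concrete, here is a minimal sketch of the idea, using a hypothetical toy linear classifier (the weights, input, and labels are all made up for illustration, not taken from any real model). A small, deliberately crafted perturbation to an untrusted input — each change bounded by a tiny epsilon — is enough to flip the model's decision, even though the input looks essentially unchanged:

```python
import numpy as np

# Hypothetical toy "model": a linear classifier where score > 0
# means class "positive". Weights and input are illustrative only.
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # model weights
x = 0.01 * w               # a benign input the model classifies as positive

def predict(inp):
    return "positive" if w @ inp > 0 else "negative"

# Craft an adversarial input: nudge every feature by at most epsilon,
# each in the direction that lowers the score (the sign of the weight).
# This mirrors the fast gradient sign method for a linear model.
epsilon = 0.05
x_adv = x - epsilon * np.sign(w)

print(predict(x))                      # "positive"
print(predict(x_adv))                  # "negative" — the decision flips
print(np.max(np.abs(x_adv - x)))       # perturbation never exceeds epsilon
```

The point is not this specific toy model but the pattern: the perturbation is bounded and barely visible in the input, yet it fully controls the output — exactly the kind of untrusted-input exploitation the principle above warns about.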