What’s in a name? Thoughts on Red Team nomenclature

In my previous post, I promised to expand on the distinction between adversary emulation, adversary simulation, red teaming, and purple teaming, or at least on how I try to distinguish these terms in a way that makes sense to me. Emulation and simulation: I've heard both terms used interchangeably to refer to the same type of [...]

Thoughts on the recent Red Team debate

Around the end of November 2019, Florian Roth wrote a much-discussed post about problems he saw with today’s red teaming. I considered writing a blog post to share some of my ideas and “respond” to his concerns. However, as is often the case with these types of things, I didn’t get to it at the [...]

Deep dive into the security of Progressive Web Apps

In order to expand existing web applications to mobile and desktop environments, more and more web developers are creating Progressive Web App (PWA) versions of their web applications. PWAs, originally proposed by Google in 2015, leverage the latest web standards to offer a native-like experience for both mobile and desktop applications. PWAs combine the best parts [...]

Here phishy phishy: How to recognize phishing

According to our latest research, which can be seen in this video, an astonishing 32% of employees click on phishing URLs, and 1 in 5 emails can be considered malicious. But what makes a phishing attack successful? Are we really that naive to let ourselves become phishing [...]

3 techniques to defend your Machine Learning models against Adversarial attacks

Following our accounts of what adversarial machine learning means and how it works, we close this series of posts by describing what you can do to defend your machine learning models against attackers. There are different approaches to solving this issue, and we discuss them in order of least to most effective: target concealment, data [...]

This is not a hot dog: an intuitive view on attacking machine learning models

In a previous post we introduced the field of adversarial machine learning and what it could mean for bringing AI systems into the real world. Now, we'll dig a little deeper into the concept of adversarial examples and how they work. For the purpose of illustrating adversarial examples, we’ll talk about them in the context of [...]

Users ignore your security awareness program? Ditch it!

Yes, getting staff attention for security awareness is hard. It's not that users don’t care. But everybody is fighting for their attention. And after all, the company is investing big money in security measures, so they're probably safe anyhow. Way too often, for each handful of truly enthusiastic users I find, there's also a large community [...]

Apples or avocados? An introduction to adversarial machine learning

A common principle in cybersecurity is to never trust external inputs. It’s the cornerstone of most hacking techniques, as carelessly handled external inputs always introduce the possibility of exploitation. This is equally true for APIs, mobile applications and web applications.

It’s also true for deep neural networks.
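The untrusted-input principle can be made concrete with a toy adversarial example. The sketch below is purely illustrative (the model, weights, and numbers are invented for this example, not taken from any of the posts above): it applies an FGSM-style perturbation to a simple logistic classifier, showing how a small, targeted change to each input feature flips the model's decision.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic "model": fixed weights, assumed known to the attacker (white-box).
w = np.array([1.0, -2.0, 0.5])

def predict(x):
    # Returns the predicted class: 1 if sigmoid(w.x) >= 0.5, else 0.
    return int(sigmoid(w @ x) >= 0.5)

x = np.array([0.5, -0.5, 1.0])  # clean input, classified as 1
y = 1                           # true label

# FGSM: nudge the input in the direction that increases the loss.
# For logistic loss with label y=1, d(loss)/dx = -(1 - sigmoid(w.x)) * w,
# so the sign of the gradient is -sign(w).
grad_sign = -np.sign(w)
eps = 0.7                       # per-feature perturbation budget
x_adv = x + eps * grad_sign

print(predict(x))       # 1
print(predict(x_adv))   # 0: a bounded per-feature change flips the decision
```

In a real deep network the gradient is obtained by backpropagation rather than a closed-form expression, but the attack is the same: a perturbation invisible to a human can be enough to change the output, which is exactly why model inputs deserve the same distrust as any other external input.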