Thoughts on the recent Red Team debate

Around the end of November 2019, Florian Roth wrote a much-discussed post about problems he saw with today’s red teaming. I considered writing a blog post to share some of my ideas and “respond” to his concerns. However, as is often the case with these types of things, I didn’t get to it at the … Continue reading Thoughts on the recent Red Team debate

Deep dive into the security of Progressive Web Apps

In order to expand existing web applications to mobile and desktop environments, more and more web developers are creating Progressive Web App (PWA) versions of their web applications. PWAs, originally proposed by Google in 2015, leverage the latest web standards to offer a native-like experience for both mobile and desktop applications. PWAs combine the best parts … Continue reading Deep dive into the security of Progressive Web Apps

Creating Responders in The Hive

The Hive is an open source Security Incident Response Platform (SIRP) that has gained considerable popularity over the last few years. One of the many reasons is its integration with Cortex and its Analyzers and Responders. Analysts can automate the response to existing cases by initiating one or more Responders. This blog will show … Continue reading Creating Responders in The Hive
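To give a rough idea of what such a Responder involves, the sketch below shows the general shape of a Cortex responder built on the cortexutils Python library. The class name, configuration key, and blocking logic are invented for illustration and are not taken from the post.

```python
# Hypothetical Cortex responder sketch (illustrative only; not from the post).
from cortexutils.responder import Responder


class BlockSenderResponder(Responder):
    """Made-up responder that would ask a mail gateway to block a sender."""

    def __init__(self):
        Responder.__init__(self)
        # Configuration values are defined per responder in Cortex.
        self.gateway_url = self.get_param(
            'config.gateway_url', None, 'Missing gateway URL')

    def run(self):
        Responder.run(self)
        sender = self.get_param('data.data', None, 'No sender address supplied')
        # ... call the mail gateway API here to block `sender` ...
        self.report({'message': 'Sender %s submitted for blocking' % sender})

    def operations(self, raw):
        # Optionally instruct The Hive to tag the case once the responder succeeds.
        return [self.build_operation('AddTagToCase', tag='sender-blocked')]


if __name__ == '__main__':
    BlockSenderResponder().run()
```

Once a script like this is registered in Cortex, analysts can trigger it from a case or alert in The Hive, which is the automation the post refers to.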

Here phishy phishy: How to recognize phishing

According to our latest research, which can be seen in this video, an astonishing 32% of employees click on phishing URLs, and 1 in 5 emails can be considered malicious. But what makes a phishing attack successful? Are we really so naive as to let ourselves become phishing … Continue reading Here phishy phishy: How to recognize phishing

3 techniques to defend your Machine Learning models against Adversarial attacks

Following our accounts of what adversarial machine learning means and how it works, we close this series of posts by describing what you can do to defend your machine learning models against attackers. There are different approaches to this problem, and we discuss them from least to most effective: target concealment, data … Continue reading 3 techniques to defend your Machine Learning models against Adversarial attacks
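To make the idea of a model-hardening defense concrete, here is a minimal sketch of adversarial training, one commonly cited defense. This is an assumed, generic PyTorch example for illustration only and is not necessarily one of the three techniques the post describes.

```python
# Generic sketch of adversarial training (assumed example, not taken from the post):
# the model is trained on both clean inputs and inputs perturbed to fool it.
import torch.nn.functional as F


def fgsm_perturb(model, x, y, epsilon):
    """Craft a fast-gradient-sign perturbation that increases the loss on (x, y)."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()


def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One training step on a mix of clean and adversarial examples."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The intuition is simple: if the model repeatedly sees the perturbed inputs that would have fooled it, it learns to classify them correctly as well.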

This is not a hot dog: an intuitive view on attacking machine learning models

In a previous post we introduced the field of adversarial machine learning and what it could mean for bringing AI systems into the real world. Now, we'll dig a little deeper into the concept of adversarial examples and how they work. For the purpose of illustrating adversarial examples, we'll talk about them in the context of … Continue reading This is not a hot dog: an intuitive view on attacking machine learning models
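For readers who want to see what crafting an adversarial example can look like in code, below is a minimal fast gradient sign method (FGSM) sketch in PyTorch. FGSM is a standard illustration of the idea, though the post itself may build its intuition with a different attack.

```python
# Minimal FGSM sketch (assumed illustration; the post may use a different attack).
import torch.nn.functional as F


def fgsm_example(model, x, y, epsilon=0.03):
    """Return x plus a small, worst-case perturbation that raises the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    # Step each pixel slightly in the direction that most increases the loss,
    # keeping the result in the valid [0, 1] image range.
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
```

A perturbation this small is usually imperceptible to a human, yet it can flip the model's prediction, which is the "hot dog vs. not a hot dog" intuition the post builds on.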