Why the pentesting playbook doesn’t fit: belief, assumptions, and non-determinism

Document information
Series: Securing AI systems without overconfidence or fear
Post: 1 of 5
Title: Why the pentesting playbook doesn’t fit: belief, assumptions, and non-determinism
Date: March 2026
Author: Hussein Bahmad (NVISO)
Reading time: ~12 min
Version: 1.0

This is the first of …

An introduction to automated LLM red teaming

Introduction

As large language models become increasingly embedded in production applications, from customer service chatbots to code assistants and document analysis tools, the security implications of these systems have moved from theoretical concern to practical necessity. Unlike traditional software security testing, LLM red teaming addresses unique challenges: prompt injection attacks, data leakage through carefully crafted …