Shauli Zacks
Published on: August 25, 2025
SafetyDetectives recently sat down with Matias Madou, CTO and Co-Founder of Secure Code Warrior, to discuss his journey from software developer to cybersecurity leader and entrepreneur. With a background in security research, obfuscation techniques, and nearly a decade at Fortify, Matias has seen firsthand how critical it is to upskill developers if organizations hope to defend against modern threats. In this interview, he shares how Secure Code Warrior is helping organizations foster a security-first development culture, the risks and opportunities of AI-driven coding tools, and why the path to safer software must always begin with better-trained developers.
Can you start by sharing your journey into cybersecurity and what led you to co-found Secure Code Warrior?
I’ve always had a keen interest in technology, and I started my career in software development. I quickly grew fascinated by cybersecurity, and it wasn’t long before security research became the main focus of my day-to-day work. I was quite “at home” in an academic setting, and this is where I specialized in security obfuscation techniques for my Ph.D. This led to a seven-year stint at Fortify in San Francisco, where I was fortunate to have some brilliant cybersecurity leaders as mentors. It was there that I learned we must start with developer upskilling in secure code if we hope to form a meaningful defense against threat actors.
Secure Code Warrior was founded in 2015 by my Co-Founder, Pieter Danhieux, who began building a security education platform for software engineers. By 2016, the team had enlisted CBA, Westpac, ANZ, and ING, a Dutch bank. An ING employee told Pieter about Sensei, the company I was running at the time, which had a complementary technology. Pieter and I had been friends back at university in Belgium, so he reached out, and we stayed in touch for about a year before officially joining forces in 2017.
For readers who may not be familiar, what does Secure Code Warrior focus on, and what makes your approach to secure coding unique?
Secure Code Warrior is focused on developer risk management, setting the standards for secure coding in a rapidly evolving digital world. We are committed to upskilling developers to create secure code from the start, especially as the demand for rapid application development and deployment is higher than ever.
AI is accelerating code production, but this also increases bugs, security flaws, technical debt and threat landscape complexity. There’s a critical need for enhanced focus on security and managing risk throughout the software development lifecycle (SDLC). Secure Code Warrior cultivates and maintains a security-first development culture, making it possible to measure developer security risk through benchmarking, robust governance and hands-on education.
This ensures that CISOs, AppSec teams and other security leaders can be confident that critical applications and software have the right security fundamentals for the present and future.
A lot of research around AI code generation tools focuses on accuracy. But how much should we also worry about the security of the code these tools generate?
AI coding assistants present various security concerns that must be acknowledged, especially when the technology is being leveraged by a developer with inadequate security expertise. AI coding tools offer significant benefits in terms of productivity and efficiency when utilized by a security-proficient developer.
However, the sticking point is that the majority of the developer population struggles with secure coding practices and lacks the necessary security expertise to safely leverage these tools. As a result, AI can accelerate the creation and deployment of insecure code and increase the risk of hidden bugs, security vulnerabilities and technical debt.
In your view, can organizations actually trust AI assistants when it comes to software development—or does that trust need to start with the developer first?
It absolutely must start with the developer. AI-assisted technology, such as vibe coding, enables both developers and non-developers to guide software development through prompts, using agentic AI coding tools to significantly accelerate code creation. Basically, this means you’re trusting the AI to develop software on autopilot.
If the user lacks the necessary skills to vet the security of products assisted by LLMs and other AI technologies, it can become extremely easy for insecure code to slip through the cracks. To reap the benefits and leverage the productivity gain of these tools, security must be taken seriously and prioritized by each developer.
You’ve spoken about the importance of developers creating “security-first prompts” when working with LLMs. Can you explain what that looks like in practice?
AI tools are capable of creating secure code, but they don’t produce secure code often enough to be trusted blindly. In a recent study, we found that no current AI coding tool is accurate enough at secure coding and contextual security on its own to be considered “safe” in an enterprise environment today. To close this gap, developers need both the skills to craft prompts that generate secure code and the ability to perform competent security reviews of the generated code.
When using LLM tools, security-first prompts should be detailed: embed very explicit security requirements, name the vulnerabilities to avoid, and require a secure output. As mentioned, this only works if developers have the security proficiency to verify that the outputs are secure in the context of the codebase they are working on.
AI tool security rulesets are also a good idea, so that you at least start from more secure defaults than you otherwise would.
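To make that concrete, here is a minimal illustrative sketch of what a security-first prompt might look like in practice. It uses the OpenAI Python client purely as an example; the model name, the task, and the specific requirements are assumptions for illustration, not Secure Code Warrior tooling or a complete ruleset.

```python
from openai import OpenAI

client = OpenAI()  # illustrative: assumes OPENAI_API_KEY is set in the environment

# A security-first prompt is explicit about the vulnerabilities to avoid and the
# secure output required, instead of simply asking for "working code".
prompt = (
    "Write a Python function that looks up a user record by email address in a "
    "PostgreSQL database. Security requirements: use parameterized queries only "
    "(no string concatenation of SQL, to avoid SQL injection / CWE-89); validate "
    "the email format before querying; never log or return password hashes or "
    "other sensitive fields; and handle database errors without leaking internals. "
    "Return only the secure implementation."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name; use whatever tool your team has approved
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)

# The output still needs a competent security review in the context of the codebase:
# the prompt raises the baseline, it does not guarantee a secure result.
```

The same kind of requirements can also be captured once in an AI tool’s security ruleset or system prompt, so that every generated suggestion starts from those more secure defaults.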
What role do you see developer training and risk management playing in reducing the security dangers posed by improperly skilled developers or over-reliance on AI tools?
It’s essential for developers to learn and strengthen their secure coding skills so they can make informed decisions about AI usage that better protects code, as opposed to exposing it. Really, Secure by Design (SbD) is what all organizations should be looking to achieve. Security must be implemented from the very start of the SDLC, and the collaboration between security leaders and developers is key.
By establishing programs that focus on improving the knowledge, expertise, and skills of developers, we’re arming ourselves with arguably one of the best lines of defense: adept professionals who are highly capable of working with security teams to reduce the risk of flaws in software.
More broadly, what strategies or tools can help organizations scale secure coding practices across large teams, especially now that AI is entering the picture?
I’d suggest focusing efforts on readying the development cohort to leverage AI effectively and safely. It must be made abundantly clear why and how these tools can introduce risk, with hands-on, practical learning pathways delivering the knowledge required to manage and mitigate that risk as it becomes part of developers’ day-to-day work. I’d also recommend that organizations invest in learning platforms where developers can practice writing secure code from the start.
This will help them identify the different vulnerabilities they might inadvertently introduce into their code, including those generated by AI coding assistants, and navigate those scenarios more effectively moving forward.