
Interview With Rick Caccia – CEO of WitnessAI by Shauli Zacks


Shauli Zacks, Content Editor
Published on: October 30, 2025

The rapid adoption of AI across enterprises has introduced an entirely new class of security challenges—chief among them, shadow AI. As employees increasingly experiment with unapproved AI tools and agents, sensitive corporate data is being exposed in ways traditional security systems can’t detect or control.

To understand how organizations can regain visibility and trust in their AI operations, SafetyDetectives spoke with Rick Caccia, CEO of WitnessAI. With over two decades of experience at leading cybersecurity firms like Palo Alto Networks, Google, and Symantec, Rick shared insights on why legacy DLP tools fail in the AI era, how fear-driven security approaches worsen shadow AI, and what companies can do to adopt AI securely without sacrificing innovation.

Can you introduce yourself and talk about your background and current role at WitnessAI?

Thanks for having me. I’m Rick Caccia, CEO of WitnessAI. I have spent more than two decades in the cybersecurity and compliance industry, and have held product and marketing leadership roles at some of the industry’s most impactful companies, including Palo Alto Networks, Google, and Symantec.

Working with security leaders in many large enterprises, it became clear that companies needed purpose-built security guardrails for AI as enterprise adoption began to accelerate at lightning pace. This led to the founding of WitnessAI in 2023 – a company dedicated to helping organizations navigate the security and compliance challenges stemming from the increased use and deployment of AI applications across their workforces. At WitnessAI, we have created a platform that provides a confidence layer for enterprise AI, providing a unified platform to govern and protect all AI activity—including employees, models, applications, and agents—so that enterprises can accelerate innovation without hesitation.

What is shadow AI, and why has it become such a big risk for companies today?

Shadow AI arises when employees use AI applications without the knowledge or oversight of corporate IT departments. As new AI tools emerge daily, shadow AI has become one of the most significant AI-related challenges, and most organizations do not have visibility into the full scope of unsanctioned AI usage across departments. A recent study from Cybernews found that 59% of employees use AI tools their employer has not approved, and 75% of those employees share sensitive company information with these applications. More recently, shadow AI has expanded beyond applications to include unsanctioned AI agents.

One of the most direct consequences of shadow AI is the exposure of sensitive information, both intended and unintended, leading to immediate threats such as data and IP leakage as well as costly compliance violations. While this issue is often rooted in good intentions – employees reaching for new and innovative tools to bolster their productivity and streamline operations – it can be detrimental to organizations. Including agents in shadow AI expands the risk, since agents can do much more than simply leak data: they can erase files, execute transactions, and more.

Shadow AI also creates a dangerous level of organizational blindness, resulting in strategic actions being made about AI without full visibility into actual employee usage. At WitnessAI, we provide visibility, control, and security for all aspects of shadow AI.

Why can’t traditional DLP tools protect against AI-related threats?

DLP traditionally worked by “fingerprinting” sensitive data and then using those fingerprints to detect when that data was put into an email, copied to a file, and so on. DLP works best with structured data and fixed rules, and it ends up struggling with – or in many cases being blind to – free-flowing AI conversations and agentic transactions. In the AI age, information security needs a better approach, and we believe that approach will be behavioral, i.e. intention based. If a visitor were snooping around my office, the goal isn’t to prevent them from looking at a particular customer order; I don’t want them looking in my file cabinet at all. The best way to protect data within AI is by controlling at the behavior level, not the individual data element level. This becomes glaringly important with agentic operations.
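To make the fingerprinting limitation concrete, here is a minimal sketch of how a traditional, fingerprint-style DLP check might work. The records and matching logic are hypothetical, not any vendor’s implementation; the point is that an exact copy of sensitive data is caught, while a paraphrase typed into an AI conversation is not.

```python
# Minimal sketch of fingerprint-style DLP matching (illustrative only).
# Traditional DLP hashes known sensitive records and looks for exact matches
# in outbound text, which is why a paraphrased chatbot prompt can slip through.
import hashlib

def fingerprint(record: str) -> str:
    """Hash a normalized sensitive record, e.g. a customer order line."""
    return hashlib.sha256(record.strip().lower().encode()).hexdigest()

# Fingerprints built from a structured source of truth (hypothetical records).
SENSITIVE_FINGERPRINTS = {
    fingerprint("order 10422, acme corp, $1,250,000"),
    fingerprint("ssn 123-45-6789"),
}

def dlp_flags(outbound_text: str) -> bool:
    """Flag only when a chunk of outbound text exactly matches a known fingerprint."""
    chunks = [line for line in outbound_text.splitlines() if line.strip()]
    return any(fingerprint(chunk) in SENSITIVE_FINGERPRINTS for chunk in chunks)

# An exact copy is caught, but a paraphrase produces a different hash and sails past.
print(dlp_flags("order 10422, acme corp, $1,250,000"))             # True
print(dlp_flags("Acme is paying us about $1.25M on order 10422"))  # False
```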

The limitations of traditional DLP highlight a fundamental philosophical shift in security, moving from “security by blocking” to “security by enabling.” For example, our approach at WitnessAI specifically addresses the limitations of DLP through our intention-based routing. This approach allows organizations to classify what a person is trying to do by their prompt, such as writing a corporate contract or doing drug trial research, then apply policy based on that intention, even without explicit keywords. This provides a policy control action well-suited to the AI age of unstructured data.
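As an illustration of intention-based routing, the sketch below classifies a prompt into an intent and maps that intent, together with the user’s role, to a policy decision. The intent labels, policies, and keyword heuristic are all invented for illustration; in practice the classification step would be a trained model or LLM judge, not keywords.

```python
# Minimal sketch of intention-based policy (hypothetical labels and policies).
# Policy keys off what the user is trying to do, inferred from the prompt,
# rather than off keyword or fingerprint matches on individual data elements.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "allow", "redact", "route_internal", or "block"
    reason: str

def classify_intent(prompt: str) -> str:
    # Stand-in for a real intent classifier.
    p = prompt.lower()
    if "contract" in p or "agreement" in p:
        return "drafting_corporate_contract"
    if "trial" in p or "patient" in p:
        return "drug_trial_research"
    return "general_use"

POLICY = {
    "drafting_corporate_contract": Decision("route_internal", "Legal drafts stay on the internal model"),
    "drug_trial_research": Decision("redact", "Strip patient identifiers before the prompt leaves"),
    "general_use": Decision("allow", "No sensitive intention detected"),
}

def decide(prompt: str, role: str) -> Decision:
    intent = classify_intent(prompt)
    # Role can tighten the decision further, e.g. interns never reach external models.
    if role == "intern" and intent != "general_use":
        return Decision("block", f"{intent} not permitted for role {role}")
    return POLICY[intent]

print(decide("Help me draft a supplier agreement for Q3", role="counsel"))
```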

How does a fear-based approach to AI make the shadow AI problem worse?

When corporate strategy relating to AI is driven by fear, it creates a dangerous dynamic. For years, the productivity, power, and agility that AI applications can deliver have dominated conversations across industries. Employees now understand the critical need to implement AI into their workflows to remain competitive, bolster efficiency, and save resources. However, many IT teams have been slow to roll out new, innovative AI tools due to security concerns.

These extended “wait and see” periods for evaluating and testing security controls push employees to go around IT. We cannot champion the benefits of AI and then impose rigid restrictions. This legacy “security by blocking” mindset is a relic that inhibits safe adoption and must be replaced with an enabling governance model. Not to sound like a broken record, but agentic AI makes the problem even worse, as shadow AI agents can actually take actions using their creators’ identity credentials.

What are the biggest human risks linked to AI use, and how can companies train employees to avoid them?

Many AI risks stem from well-intentioned employees seeking additional productivity and efficiency without an understanding of the security and data implications. We see many cases of good intentions gone bad.

Protecting against human-related AI risk starts with general security training around usage. Employees should be educated on the various types of sensitive data and which tools they can and cannot be shared with. This training should be flexible enough to let employees safely experiment with AI tools while guiding them toward secure behavior. Security professionals must also shift from operating as the “department of NO” to becoming AI enablers who help bolster competitive agility.

The new security paradigm is moving toward behavioral analysis rather than strictly content analysis. Companies should implement tools that understand the user’s intention within AI conversations to guide them to the correct, safe resource.

What practical steps can organizations take to detect and limit shadow AI across their networks?

The immediate, practical step to combat shadow AI is resolving the AI visibility crisis. You cannot protect against what you cannot see. This requires deploying technologies that can autonomously uncover the true scale of AI usage across the infrastructure, establishing a foundational protection layer that works across all models, endpoints, and agents.
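As one concrete starting point for that visibility step, the sketch below sweeps a hypothetical egress-proxy log for traffic to a hand-maintained list of known AI service domains. Dedicated discovery tools go much further, across endpoints and agents, but even a simple sweep like this can reveal how much unsanctioned AI use already exists.

```python
# Minimal sketch of shadow AI discovery from proxy logs (assumed CSV format and
# domain list; not a vendor tool). Counts requests per user to known AI services.
import csv
from collections import Counter

KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com", "perplexity.ai"}

def shadow_ai_usage(proxy_log_csv: str) -> Counter:
    """Count requests per user to known AI services from an egress proxy log."""
    hits: Counter = Counter()
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):   # assumes columns: user, destination_host
            host = row["destination_host"].lower()
            if any(host == d or host.endswith("." + d) for d in KNOWN_AI_DOMAINS):
                hits[row["user"]] += 1
    return hits

# usage = shadow_ai_usage("egress_proxy.csv")
# print(usage.most_common(10))   # the ten heaviest unsanctioned-AI users
```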

We must also fundamentally shift security from blunt blocking driven by fear to an intention-based security model. Organizations must implement systems that can classify a user’s intent from their prompt, allowing teams to enforce policy based on their role and action. This enables safe model routing, directing employees to secure and internal resources for sensitive queries instead of unmonitored public tools.

Lastly, it’s crucial that we abandon the outdated “block or allow” thinking by adopting surgical redaction, which masks only the sensitive elements of a prompt in order to minimize security friction and maintain productivity. Ultimately, limiting shadow AI is a cultural change. By enabling secure use and guiding employees, we can transform security from a constraint into an accelerator.
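To illustrate what surgical redaction means in practice, here is a minimal sketch that masks only the sensitive spans of a prompt, so the rest of the request can still proceed. The regex patterns are illustrative stand-ins, not a production DLP engine.

```python
# Minimal sketch of surgical redaction: instead of blocking the whole prompt,
# only the matched sensitive spans are masked before the prompt leaves.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace only the matched sensitive spans with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Summarize this complaint from jane.doe@acme.com, SSN 123-45-6789."))
# -> "Summarize this complaint from [EMAIL REDACTED], SSN [SSN REDACTED]."
```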
