
Interview With Randolph Barr – CISO at Cequence Security by Shauli Zacks


Shauli Zacks, Content Editor
Published on: August 1, 2025

As AI becomes increasingly woven into enterprise systems, the line between a helpful assistant and a fully autonomous agent is growing harder to define. To unpack this rapidly evolving landscape, SafetyDetectives spoke with Randolph Barr, Chief Information Security Officer at Cequence Security. With decades of experience in cybersecurity leadership, Barr brings sharp insight into how agentic AI is changing security models, operational workflows, and governance expectations.

In this Q&A, Barr clarifies what makes an AI system truly agentic, why that distinction matters, and how even early-stage deployments are forcing organizations to rethink risk. From enterprise-grade assistants to semi-autonomous decision-makers, the conversation dives into real-world examples, emerging security frameworks, and practical strategies for staying in control as AI systems become more capable—and more independent.

There’s still a lot of confusion out there — in your view, what really makes something “agentic” versus just another smart AI assistant? Have you seen any examples where that line’s been especially blurry or important to clarify?

Confusion between agentic and generative AI typically arises when smart assistants take autonomous actions, or agentic systems appear highly assistive rather than independently goal-seeking.

The primary difference between an agentic AI and a smart assistant lies in the capabilities of the system. Smart assistants can be built with LLMs, ML, and NLP, just like agentic systems. What makes something agentic is goal-seeking behavior, self-directed action, and the ability to reason and act in dynamic environments, not the tech stack alone.

Agentic AI takes autonomous capabilities to the next level by using a digital ecosystem of large language models (LLMs), machine learning (ML) and natural language processing (NLP) to perform autonomous tasks on behalf of the user or another system. Assistive AI expands human capabilities by providing information, recommendations and tools to improve efficiency and productivity.

A great example of agentic AI is in customer service. Gartner predicts it will resolve 80% of common customer service issues without human intervention by 2029. Unlike traditional chatbots relying on pre-programmed scripts and keywords, agentic AI learns from context, adapts to unique customer needs and implements solutions.

The line between agentic and assistive AI has become especially blurry with emerging tools like Microsoft Copilot or Slack GPT. These can feel agentic when they summarize meetings or suggest actions, but they’re still assistive unless they’re empowered to act independently over multiple steps. Where this gets critical is in enterprise automation: when an AI system initiates changes across infrastructure or engages with APIs unsupervised, that’s no longer assistive, it’s agentic. This is where concepts like the Model Context Protocol (MCP) and agentic security controls are becoming essential. As AI systems become more embedded in business operations, understanding and controlling the degree of autonomy is no longer just a technical distinction, but a governance and risk management priority.

Are we getting closer to a shared definition of agentic AI, or is this still a moving target?

We’re gradually converging on a shared vocabulary, but a universally accepted definition of agentic AI is still a moving target, one that’s now being shaped by operational realities as much as academic theory. Agency exists on a spectrum: a scheduling assistant might show low autonomy, while an AI system that chooses tools, iterates on a plan, and takes unsupervised actions across systems demonstrates strong agency. What complicates things further is that the same architecture, whether built on LangChain, AutoGen, or LangGraph, can support both assistive and fully autonomous configurations.

What’s different today is that real-world deployments are forcing more structured thinking. Taxonomies covering goal-orientation, tool use, self-reflection, and memory persistence are being mapped against new governance protocols like MCP and design patterns for multi-agent workflows. Security frameworks, such as OWASP’s guide for securing agentic applications, are also stepping in to define the bounds of responsible autonomy. While the term “agentic AI” may remain flexible, especially across domains, the parameters for trust, control, and auditability are rapidly solidifying.

Have you come across a project or deployment where people overestimated or underestimated how autonomous an AI system actually is? What kind of ripple effects did that have?

Yes, and it’s more common than people think! One example of this that stands out involves early deployments of AutoGPT and BabyAGI. Teams assumed these systems were fully autonomous problem-solvers, expecting them to complete multi-step business tasks with minimal oversight. But in practice, these agents often spiraled into repetitive loops, misunderstood objectives or hit toolchain limits they weren’t built to recover from.

On the flip side, I’ve seen teams underestimate the actual autonomy of smart assistants that build up memory or take preemptive action, like AIOps platforms auto-remediating alerts or language models suggesting and deploying code changes in CI/CD pipelines. In those cases, the risk wasn’t failure; it was invisible success happening without clear auditability, which raised compliance and trust concerns.

In both situations, the core issue was the same: a mismatch between perceived and actual agency. It underscores the need for clearer designations of autonomy boundaries, explicit human-in-the-loop checkpoints and behavioral guardrails, especially as systems move from assistive to agentic.

What should teams be doing now to get ahead of the risks that come with agentic systems, even if they’re still early-stage? Is there anything practical security or operations teams can start putting in place?

Waiting for agentic systems to mature before building safeguards is a risky bet. Even in their early form, these systems introduce a new era of operational uncertainty. They can chain actions, use tools, adapt plans and make decisions without constant human input, ultimately changing the threat model.

Security and ops teams should start by treating agentic AI like a new employee, or even a semi-autonomous intern: helpful and fast, but prone to hallucinations and bad judgment. That means building guardrails, starting with logging everything clearly so that actions and tool use are tracked with context. Set permission boundaries to block agents from broad or default access to sensitive systems. Think of them like interns: be explicit about what they can touch.
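To make that concrete, here is a minimal sketch, in Python and with hypothetical tool and resource names, of the two guardrails described above: every attempted action is logged with context, and an explicit allowlist keeps the agent away from anything it was never granted.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guardrail")

# Hypothetical permission boundary: the agent may only touch what it is
# explicitly granted, much like an intern's limited system access.
ALLOWED_TOOLS = {"search_docs", "create_ticket"}   # actions the agent may take
BLOCKED_RESOURCES = {"prod-db", "payroll-api"}     # systems it must never touch

def guarded_call(tool: str, resource: str, args: dict, executor):
    """Log every attempted action with context, then enforce the allowlist."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "resource": resource,
        "args": args,
    }
    log.info("agent action attempted: %s", json.dumps(record))

    if tool not in ALLOWED_TOOLS or resource in BLOCKED_RESOURCES:
        log.warning("blocked: %s on %s is outside the agent's permissions", tool, resource)
        return {"status": "denied", "reason": "outside permission boundary"}

    result = executor(**args)  # the actual tool invocation
    log.info("agent action completed: %s -> %s", tool, result)
    return {"status": "ok", "result": result}

# Example: a permitted call goes through, an off-limits one is denied.
print(guarded_call("create_ticket", "helpdesk",
                   {"summary": "reset password"}, lambda **kw: "TICKET-123"))
print(guarded_call("drop_table", "prod-db", {}, lambda **kw: None))
```

The specifics will vary by stack; the pattern is what matters: no agent action executes without an audit trail and a permission check in front of it.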

Additionally, it’s best to start with dry-run tests. Let agents simulate what they would do before actually doing it, and review their plan in advance to avoid cleanup later. Put in throttles and off-switches so that anything unapproved or out of sequence can be quickly contained or shut down.

Starting small, experimenting in a sandbox and tracking everything closely is how teams build real-world intuition about where these systems help and where they still need babysitting.

Teams can also adopt frameworks like the Model Context Protocol (MCP) to wrap agent interactions with governance checkpoints, and refer to OWASP’s Securing Agentic Applications guide for technical best practices. The goal isn’t to block agentic adoption; it’s to ensure it’s observable, reversible, and accountable from day one.
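The snippet below is not MCP itself; it is only an illustration of the checkpoint idea, with assumed risk tiers: high-risk tool calls pause for human approval, while low-risk ones proceed but are still recorded for audit.

```python
from dataclasses import dataclass, field

HIGH_RISK_TOOLS = {"delete_records", "modify_firewall"}  # assumed risk tiers

@dataclass
class Checkpoint:
    """A minimal approval gate in front of agent tool calls (illustrative only)."""
    audit_log: list = field(default_factory=list)

    def request(self, tool: str, args: dict) -> bool:
        self.audit_log.append({"tool": tool, "args": args})
        if tool in HIGH_RISK_TOOLS:
            # In a real deployment this would page a reviewer or open a ticket;
            # here we simply ask on the console.
            answer = input(f"Approve {tool}({args})? [y/N] ")
            return answer.strip().lower() == "y"
        return True  # low-risk calls proceed, but are still logged

gate = Checkpoint()
if gate.request("modify_firewall", {"rule": "allow 0.0.0.0/0"}):
    print("approved: executing change")
else:
    print("denied: change blocked and recorded for review")
```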

For folks who aren’t in cutting-edge labs, how can they tell when an AI system is starting to act more like an agent—making decisions on its own, setting its own goals, or operating more independently?

You don’t need to be in a lab to spot when an AI starts crossing the line. One of the biggest signs is when the system goes beyond reacting to your prompt and starts taking initiative: breaking a task into steps, choosing tools, or continuing to act without further input.

If the AI is using APIs, triggering actions or accessing files without being directly told to, that’s agentic behavior. The same goes for systems that build memory across sessions or shift how they interact based on past context or inferred goals. The key pattern to look for is autonomy: when the AI isn’t just helping you do the thing but deciding how the thing gets done and doing it on its own, it’s operating more like an agent than an assistant. That’s not necessarily a problem, but it does require more vigilance. These behaviors can emerge gradually and feel helpful or routine at first, which makes it easy to overlook when systems start to act beyond their original intent. Especially in business environments, it’s important to log these interactions, regularly review system behavior, and keep guardrails in place even when the AI “feels” safe.
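One lightweight way to act on that advice is to review the assistant’s logs for actions that were never directly requested. The sketch below, using a hypothetical event-log format, flags tool calls with no recent user instruction behind them, which is exactly the kind of quiet initiative to watch for.

```python
from datetime import datetime, timedelta

# Hypothetical event log: what the user asked vs. what the system did.
events = [
    {"time": "2025-08-01T09:00:00", "kind": "user_prompt", "text": "summarize this ticket"},
    {"time": "2025-08-01T09:00:05", "kind": "tool_call",  "tool": "fetch_ticket"},
    {"time": "2025-08-01T14:30:00", "kind": "tool_call",  "tool": "close_ticket"},  # no prompt nearby
]

WINDOW = timedelta(minutes=5)  # how close a prompt must be to "explain" an action

def unsolicited_actions(log: list[dict]) -> list[dict]:
    """Return tool calls with no user prompt in the preceding window."""
    flagged = []
    for event in log:
        if event["kind"] != "tool_call":
            continue
        t = datetime.fromisoformat(event["time"])
        prompted = any(
            e["kind"] == "user_prompt"
            and timedelta(0) <= t - datetime.fromisoformat(e["time"]) <= WINDOW
            for e in log
        )
        if not prompted:
            flagged.append(event)
    return flagged

for action in unsolicited_actions(events):
    print("review:", action["tool"], "ran without a recent user instruction")
```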
