Shauli Zacks
Published on: July 22, 2025
Anatoly Kvitnitsky has spent years working at the intersection of technology, compliance, and fraud prevention. Now, as the founder of AI or Not, he’s tackling one of the most urgent challenges of the AI era—detecting fake content across images, video, audio, and text. In this SafetyDetectives interview, Toly shares what inspired him to launch the platform, how it’s being used in surprising ways, and what to watch for as generative AI continues to evolve.
Can you tell us about your background and what inspired you to create AI or Not?
My name is Anatoly (Toly) Kvitnitsky. I started AI or Not about a year and a half ago. The inspiration came from what I thought was going to happen with AI. There was going to be a lot of good, but also some bad.
I spent most of my career working in KYC and compliance. I worked at the largest credit bureau in the world. I also worked at a unicorn startup where I was the 12th employee. By the time I left, there were between 250 and 300 people.
Throughout my career, I saw how technology helped organizations but also how bad actors abused it. I witnessed several iterations of this. Whether it was cryptocurrency or marketplaces, there were always people trying to hack and take advantage of the systems.
When I saw generative AI emerging, I immediately thought this was going to be a new tool for bad actors. Whether it was creating fake documents or spreading misinformation at scale, I believed there needed to be something or someone to help people, companies, and governments around the world fight the harm coming from AI. That is what prompted me to start AI or Not.
Can you tell me about AI or Not and how it differentiates itself from other AI detection tools on the market?
From the beginning, I had a few principles in mind. In everything we do with detection, I focused on what I thought would have the most negative impact on the world.
- We started with images because I believed visuals would be highly impactful. Whether someone is creating fake imagery of an individual, a fake ID, or a staged insurance accident, it all comes down to imagery.
- Next, we moved to audio because I believed voice impersonation would become a major issue. That includes impersonating individuals or artists in music.
- Most recently, we launched video detection. It splits a video into frames and applies both visual and audio detection (a rough sketch of that kind of pipeline follows this list). I believed the timing was right because video generation has become so advanced that it is now hard to tell what is real.
- Lastly, we added text detection.
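For readers curious what a frame-based approach might look like in practice, here is a minimal Python sketch. It is illustrative only, not AI or Not's actual system: `detect_frame` is a placeholder for a real per-image classifier, and the audio track would be extracted separately (for example with ffmpeg) and passed to an audio detector.

```python
# Illustrative frame-sampling pipeline (not AI or Not's actual code).
import cv2  # pip install opencv-python


def detect_frame(frame) -> float:
    """Placeholder: swap in a real image detector returning P(AI-generated)."""
    return 0.0


def score_video(path: str, samples_per_second: float = 1.0) -> float:
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0      # fall back if metadata is missing
    step = max(int(fps / samples_per_second), 1)  # e.g. every 30th frame at 30 fps
    scores, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            scores.append(detect_frame(frame))
        index += 1
    cap.release()
    # Max-aggregation: one convincingly generated frame is enough to flag the clip.
    return max(scores, default=0.0)
```

In a real deployment, the visual score would then be combined with the audio detector's verdict on the extracted soundtrack.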
What differentiates us is that we cover all types of content. When you look at the space, many tools focus on one or two modalities. I have always believed this is a multi-modality, multi-content problem that needs to be addressed from all sides.
Our approach is built around the idea that this problem affects individuals, companies, and governments. It is not just an enterprise product or a consumer product. It is all of the above. That is how I structured the company and the product.
Today, we have about 330,000 users. Governments, companies, and individuals all use it.
Have there been any surprising applications of AI or Not that you didn’t anticipate when you first developed the platform?
Daily. There’s honestly quite a bit.
One surprising case came from a large dental insurance company. They told me they were seeing AI fraud in the form of fake X-rays. That was not an area I expected to focus on, but they sent me examples of AI-generated dental X-rays used for insurance fraud.
Another recent trend feels almost like AI inception. Bad actors are publishing content at scale using AI to push misinformation and propaganda. They publish this content across the internet with the goal of training large language models (LLMs) to treat the content as factual. It is like using AI to create fake content that trains AI to believe it is real. This is happening with politically charged topics, including war and government propaganda. The goal is to flood the internet with false narratives that eventually get picked up as fact by the next generation of AI models.
How do you ensure the accuracy of your detection systems as generative AI models keep improving?
We test daily across all modalities to make sure our accuracy remains consistent. If we ever see a drop in performance, we analyze what is causing it. We look at the content and ask whether a new model has been released or a new format has emerged. We do this every single day across thousands of samples.
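As a rough illustration of what a daily regression check like this could look like, here is a short Python sketch. Everything in it is an assumption for illustration: the labeled benchmark samples, the `classify` callable, and the alert threshold.

```python
# Hypothetical daily accuracy check over a labeled benchmark set.
from typing import Callable, Iterable, Tuple


def daily_accuracy(samples: Iterable[Tuple[str, bool]],
                   classify: Callable[[str], bool]) -> float:
    """Fraction of samples where the detector's verdict matches the label."""
    total = correct = 0
    for path, is_ai in samples:
        total += 1
        correct += int(classify(path) == is_ai)
    return correct / max(total, 1)


def check_for_drop(today: float, baseline: float, tolerance: float = 0.02) -> None:
    # A drop beyond tolerance prompts the questions above:
    # has a new generator model been released? a new content format?
    if today < baseline - tolerance:
        print(f"ALERT: accuracy {today:.3f} below baseline {baseline:.3f}")
```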
The majority of our team are AI researchers. This is all they work on. It is our single mission. Our company is called AI or Not, and that is the only thing we focus on. It is what we take pride in.
As new models are published, we are constantly training and testing against them. We want to stay ready for the challenges they bring. This has been especially important with video, which has advanced significantly just in the past few months.
Audio used to require 30 to 60 seconds of recording to replicate a voice. Now it takes only 10 to 15 seconds. Images were our first detection product, and we continue to update that model just like the rest.
There is no magic formula. The only way to keep up is to move as fast as the AI models themselves.
AI or Not is already helping businesses fight fraud and deepfake misinformation. Are there any industries that you think will see the biggest demand for AI detection in the next few years?
I do. I think demand will grow in areas where there is a lot of money to be lost. That is where it will start. Companies look at budgets and ask where the biggest risks are. That is where they invest.
We are already seeing this in KYC. Fake documents, fake IDs, fake passports, and matching fake selfies are being uploaded at scale.
Insurance is another major area. It is still a manual industry. Analysts review claims using photos. Now AI can generate fake claim images very easily.
Next, I think the demand will grow in areas with major social consequences. Misinformation around wars and politics is a big one. We have been through a few election cycles since I started the company. During those times, the volume of content and demand for detection skyrockets.
People do not know what to trust. It is already hard to make political decisions. Now, before you even start, you have to figure out whether something is real. The level of uncertainty is very high.
Even when something is real, people still question it. It reinforces biases. Someone might say, “That has to be AI,” even when it is not. We have seen politicians accuse others of using AI-generated content when it was real.
Do you have any tips or advice for people on how to recognize what is real and what is fake, in addition to using AI or Not?
For images, there is still a lot of smoothness. Nobody has perfectly smooth skin unless they are using heavy filters. AI still tends to produce smooth, polished visuals. Look for shadows, skin pores, and small details. Many of the harmful applications involve people, whether public figures or someone you know.
With video, the movements are not perfect. The imagery might look real, but the motion is often off. Movements might feel too smooth or unnatural. AI video has not mastered realistic body motion, like a gymnast’s routine. Although with how fast this is progressing, that could change very soon.
For audio, there are often awkward pauses in places where you would not expect them. The voice itself might sound perfect, but the pacing can be off. Personally, I cannot tell with music because most music today is already heavily produced. AI-generated music sounds just as polished.
For live audio, such as a phone call, there might be delays in responses, even to simple questions. Whether you ask for a complex math equation or just what day it is, there might still be a noticeable pause while the system processes the response.
Those are a few signs to look out for. That said, things are getting so realistic that we may eventually need a safe word just to verify we are talking to loved ones.
At AI or Not, we are working hard to address this and hopefully protect more people from these kinds of threats.