I’m not someone who regards every Sam Altman and OpenAI creation as an existential threat. Every generation faces the challenge of integrating a new technology into society without the entire thing unraveling, and ours is AI.

I’ve mostly regarded AI as harmless to this point. It’s useful for minor tasks, but I wouldn’t trust it with anything mission-critical, and I don’t consider it likely to take over the world anytime soon.

However, that doesn’t mean it’s completely harmless, and the new Sora video generation app is a perfect example.

I spent the weekend creating a ton of goofy videos, including one that shows my black cat, Xavi, how to use a BlackBerry.

It’s all fun and games until you realize that AI-generated videos involving real people are getting good, and something needs to be done about it.

I was curious, so I gave it a try

Sora isn’t available for everyone yet

Bob Ross painting Stephen Radochia and a cat on the Sora app

After seeing a few mentions online, I downloaded Sora on my iPhone 17 Pro Max. I wasn’t allowed in immediately because I didn’t have an invitation code. However, after a week of keeping it installed, I checked back, and I was in.

I started by creating a cameo of myself. The process is simple enough: you look into the selfie camera and say three numbers that appear on the screen before turning your head in a couple of directions.

I was shocked by the high quality. I asked Sora to make a number of videos involving my likeness.

There were obvious misses, and the technology isn’t perfect. However, for a consumer-grade product that only takes a couple of minutes from prompt to output, it’s fantastic.

I’ll admit it’s a lot of fun. Who wouldn’t want Bob Ross to paint them holding a black cat?

It’s a playground for the mind, and it’s a fantastic feeling to take raw inspiration and have your vision realized within a few minutes. Unfortunately, it doesn’t take long to realize the dangers involved.

I’ve never doubted social media more

Videos are popping up all over

Black cat being shown how to use a BlackBerry through the Sora app

In fairness to OpenAI, the Sora app isn’t a lawless wasteland. Several safeguards and guidelines are in place. My cameo can only be used by others if I allow it, and I can see the drafts of anything created using my likeness.

There are also rules, and whenever you try to create a video that includes a living public figure, the app throws back a content guideline error.

Harmful content isn’t allowed, and transcripts created from the audio are scrubbed to ensure nothing breaks policy.

Any content produced carries both visible and invisible watermarks, and OpenAI claims it can trace videos back to the app with high accuracy.

Each video also embeds C2PA metadata, which helps distinguish AI-generated videos from authentic content.

That’s all great, but it doesn’t help defend me during casual social media scrolls.

It’s becoming increasingly difficult to trust what you see, and the surge of Sora-created videos on traditional social media platforms, such as Instagram and TikTok, is concerning.

Yes, a video of George Washington fighting Abraham Lincoln in a cage match is clearly fake, but others are getting harder to judge.

AI still gets minor details wrong, such as the proper keyboard layout on a computer or the number pad on a phone, but for human speech and movement, it’s convincing.

It’s not long before Sora gets a little too accurate

Safeguards only help so much

Computer Chronicles recreated on the Sora app with Stephen Radochia

I’m glad there are guidelines and limitations in place, but I find it hard to believe that there aren’t ways to circumvent them.

If this is the level of output we’re able to produce for free, I cringe at the thought of what’s to come (and already possible) on more powerful systems.

If I’m breezing through social media, I don’t stop to check metadata. Yes, provenance data helps with bigger issues, and it could keep world leaders from launching wars over fake videos.

However, those safeguards mean less to the average person.

How often has a mistake been made on a front-page story, only to have a retraction printed later on? Does the retraction ever completely erase what people first heard or saw?

Most people won’t even check whether a video turns out to be AI-generated or real; some percentage will always believe or fall for the videos they see, and that’s the problem.

We’re going to need more overarching protocols to protect ourselves. Unfortunately, I don’t think there’s a satisfactory answer beyond exercising common sense and being more cautious about what we see.

It’s not all bad news

With any new technology, there are legitimately beneficial uses. I appreciate that educators can make learning more engaging and interactive.

How cool is it that a teacher can put one of their students in ancient Rome within minutes?

These tools can help builders and artists visualize their designs more quickly, and I appreciate that small businesses won’t have to spend thousands on simple ad campaigns.

I just hope we recognize the huge responsibility that comes with such powerful technology, and I’m not entirely convinced we do.