Does the “Uncanny Valley” Hide the True Danger of Generative AI?

Author:
Jason Lomberg, North American Editor, PSD

Date
11/20/2025


For more than half a century, techies have tussled with the so-called “uncanny valley”, and AI has only exacerbated the issue. But is that really the scariest part of generative AI? Are we downplaying a far more serious danger?

OpenAI recently released its Sora 2 AI video creation model with the goal of making generated video more realistic. You’ve probably noticed some weird quirks with AI videos – people pass through solid objects, grow extra limbs, or they (and the world around them) don’t quite obey the laws of physics.

As noted by OpenAI, prior video tools will “morph objects and deform reality to successfully execute upon a text prompt,” so if a basketball player misses a shot, the ball might spontaneously teleport to the hoop.

But with Sora 2, instead of the ball making the shot for the player, it’ll rebound off the backboard. It’s far from perfect, though as per usual, the biggest giveaways would probably be the prompts themselves (the site shows a figure skater doing a triple axel with a cat on her head).

That said, with a tame prompt, the result likely falls into “uncanny valley” territory, where it’s so close to the real thing that it’s unsettling. But are we glossing over the true danger of generative AI?

Take the President Trump “Medbeds” video recently posted to Truth Social. For the unaware, Trump re-posted an AI-generated video with a fake version of himself on a Fox News program discussing these alleged medical devices that could cure all illnesses. He subsequently removed the video, but that’s beside the point.

I won’t be discussing the longstanding Medbed theory – which is beyond the scope of this publication – but two facts are indisputable:

1) This particular video was fake, and

2) It fooled a lot of people.

And therein lies the true danger of generative AI – not throwaway videos that are ever-so-slightly “off”, but creations that are so realistic (because their prompts don’t intrinsically defy physics) that they fool people or spread false information.

One could argue that the very notion of “medbeds” is fantasy, but what about more realistic claims – that a certain person had an affair, or a certain politician said something inflammatory? Little nuggets like that – true or not – can change elections, and thus, the course of the nation.

Today’s tech can already dupe the public, and with a bit more refinement, generative AI videos will be able to fool all but the most discerning eyes (and maybe even those). Political rivals, malignant foreign agents, and angry citizens will be able to spread disinformation more easily than at any point in history.
