Why AI slop video apps could kill online truth

Video link: https://www.youtube.com/watch?v=VAlU0Md4Jvs
Has an AI video caught you out recently?

The internet is in its AI slop era – and tech journalist Chris Stokel-Walker thinks we should be paying a lot more attention.

Stokel-Walker has covered AI for years and has been teaching people how to spot a fake. Then he saw Sora 2 – a new OpenAI tool for creating highly realistic AI videos.

“We have this alternative reality that’s just a few phone taps away,” says Stokel-Walker.

While Sora 2 is currently only available in select countries including the United States and Japan, Stokel-Walker says it’s a “godsend” for people who want to muddy what’s true on the internet.

It used to be that to prove something was true, we’d say “show me the video evidence”, he says. “Now that’s no longer the case.”

Add to that a recent study by the cybersecurity company Imperva, which claims that almost 50% of internet traffic comes from non-human sources, and what we think of as true and human online is increasingly being challenged.

Watch to find out more, and to read about the vast global investment in AI datacentres – and whether this is a bubble waiting to burst – tap the link ► https://www.theguardian.com/technology/2025/nov/02/global-datacentre-boom-investment-debt

On OpenAI’s webpage ‘Launching Sora responsibly’, the company says that “every video generated with Sora includes both visible and invisible provenance signals”, including watermarks and C2PA metadata, as well as internal reverse-image and audio search tools that can trace videos back to Sora. The company also says it has “guardrails intended to ensure that your audio and image likeness are used with your consent”.

When approached by Mashable about the potential of Veo 3 to be used for misinformation, a representative from Google DeepMind said all content generated by Google’s AI tools has a digital watermark embedded directly in the pixels, and that the company has launched a verification portal to identify AI-generated content made with Google AI. Visible watermarks have also since been added to videos made with Veo 3.

#aislop #truth #ai #artificialintelligence #deepmind #mashable #veo3 #sora2 #openai




Other Videos By The Guardian


2025-11-13 – Why AI slop video apps could kill online truth
2025-11-11 – Love Immortal: the man devoted to defying death through cryonics
2025-11-07 – Living on India's toxic landfill mountains
2025-11-04 – Can Delhi clean up its toxic trash mountains? | On the Ground
2025-10-31 – Why hurricanes should be named after oil bosses
2025-10-29 – How young women are sex trafficked in broad daylight
2025-10-27 – The Welsh town that saw off Nigel Farage | Anywhere but Westminster
2025-10-24 – How Israel's 'apartheid legal system' has led to so many Palestinians being held in prison
2025-10-23 – How children in the US are trafficked on social media | Documentary
2025-10-22 – The red pill pipeline is 'a cult' – here’s how I escaped
2025-10-21 – Louvre heist: how thieves stole ‘priceless’ Napoleonic jewels in seven minutes
2025-10-18 – How Gen Z protesters are forcing change from Madagascar to Morocco
2025-10-15 – The city that reveals Britain's biggest problem: nowhere to live | Anywhere but Westminster
2025-10-10 – Ice has detained our father in Chicago, we’ll protest for as long as it takes
2025-10-09 – Three questions that threaten the Gaza peace deal
2025-10-08 – The View From the latest Palestine Action protest
2025-10-07 – A friendship forged on Ukraine’s frontline
2025-10-06 – Democratic candidate protesting Ice says party leadership is missing the moment
2025-10-05 – Mass arrests at Palestine Action protest: ‘I’m 73 and never hurt a fly' | The View From ...
2025-10-04 – How this TikTok deal puts US media control into the hands of the super-rich
2025-10-02 – How Chicago is resisting Donald Trump’s immigration crackdown | Anywhere but Washington