Exclusive:
Ahead of the General Election, you may have spotted Minecraft videos featuring ‘Nigel Farage’ and ‘Sir Keir Starmer’ – hilarious as they were, these were entirely computer-generated ‘deepfake’ videos, and experts are worried about them
Ahead of the General Election, you may have noticed an odd series of Minecraft videos doing the rounds online. Hilariously, these featured ‘Nigel Farage’ and new Prime Minister ‘Sir Keir Starmer’ appearing to prank each other on the popular sandbox game.
In one video titled ‘Nigel griefs Kiers minecraft base’, the Reform Party leader is shown to stumble upon Starmer’s virtual home while joking: “He has clearly increased his defence budget and built himself a bloody big castle… I think I’ve had the perfect idea, how about I place TNT under his throne and make it blow up when he sits on it.”
Soon after, the video shifts to the now-PM’s perspective as he seemingly reacts to the trap unfolding before his eyes. He adds: “What the f**k just happened? I bet that was Rishi he thinks I blew up his bloody base.” A subsequent video also showed Starmer getting his revenge on Farage.
Although some commenters initially believed the clips were genuine, they’re actually completely fabricated ‘deepfake’ videos created using PodcastPilot – an AI-video generator.
It’s clear these were created with humour in mind, with the app specifically designed to make ‘hilarious viral’ content. But it does pose a rather unsettling Black Mirror-esque question – how can we distinguish between real and fake media online?
Professor Siwei Lyu, of the University at Buffalo, is a leading researcher on the subject. While deepfake detection remains a highly studied field, he encourages people to look out for some classic tell-tale signs when scrolling online.
“The best defence is common sense,” he told The Mirror. “How likely will that happen in real life? And once we have some suspicion next [it’s important] to verify if this is the case. Check other sources to see if this has happened in the real world.
“[Ask yourself] who’s sharing the information? Who are the people behind the social accounts sharing this information? Have they been reliable in the past and do they have a good track record for sharing information? Common sense [and] critical thinking.”
Deepfakes are images, videos or recordings that have been digitally manipulated to misrepresent someone, making it appear as if they said or did something they didn’t. While Farage’s Minecraft videos are a more humorous example, these can take on a sinister and politically dangerous form.
In 2022, one online deepfake video appeared to show Ukrainian President Volodymyr Zelensky talking of surrendering to Russia, according to the BBC. And just last month, Donald Trump saw himself in a deepfake so convincing even he started to question whether it was real, as per TIME.
Aside from the more obvious red flags, both humans and cutting-edge detectors can examine the videos themselves, which may contain subtle giveaways that point to their roots in AI.
Professor Lyu continued: “Algorithms can usually pick up something less obvious and most likely invisible to the eyes or ears. [Think of it like] an X-ray… doctors pick up on problems unnoticeable to the naked eye.
“[Technologies] do not give concrete answers, [they] usually give a probability that something is AI – usually not 100% but in the range of 80-90%.
“Looking into the details of the media themselves, there are a couple of things you can pick up. Look into places where AI could have made all these errors – hands, faces, shadows, light sources. Those are the things that AI usually makes a lot of mistakes on.
“If it’s a video, look at the movement of the lips to see if they are synchronised with the voice spoken… [if not] this is a sign it may be created by AI.”
Jake Moore, a global cybersecurity advisor at ESET, explained that poor-quality footage and blurred edges can also signal that something isn’t real. He told The Mirror: “There are a few signs to spot deepfakes such as blurred edges, sync issues and strange movements that might capture your attention.
“As deepfake technology fast becomes an inevitable beast of its own with access to better algorithms and real-time deepfakes, people need to be continually reminded of its potential of influence.”
Right now, Professor Lyu believes many deepfakes online are created by ‘amateurs’ using widely available, lower-quality text-prompt tools, but he is monitoring the situation closely.
Professor Lyu continued: “Most deepfakes I’ve seen are bordering between innocuous practical jokes and actual misinformation. If somebody’s using those deepfakes, the purpose is not significantly influencing opinion but subtly nudging public opinion.
“The generation of technology will evolve with time for certain… I am seeing a lot more concern from different levels – governments, media and the general public about the potential danger of deepfakes.”
Meanwhile, Moore added: “Social engineering has long given attackers the edge with manipulation but now as technology can impersonate anyone, the verification process is constantly in question.
“Technology companies are working together and governments are attempting to improve regulations and the ability to catch AI-generated material, but until then it will remain down to human intelligence.”
If you suspect you’ve spotted a harmful deepfake, you can report it to sites like Snopes.com. Experts may then investigate the source of the media before publishing their findings.
A UK Government spokesperson told The Mirror: “The potential for deepfakes to harm individuals, undermine democracy, and increase fraud is clear. This new government is committed to creating a safer online world and ensuring that new and existing technologies are developed and deployed safely.”
The Reform Party was also approached for comment.