Sometimes it can be easy for people who aren’t steeped in conversations about emerging technology to think these developments have no bearing on their everyday lives. People unfamiliar with Big Tech jargon can feel as though such discussions are taking place miles above their heads.
But then, in my experience, something terrifying tends to happen that brings these conversations down to earth, and people at the “ground level” are forced to confront them in new ways.
Conversations about algorithms fit this description. Ever since the 2016 presidential election and the controversy over nefarious political actors weaponizing social media, public discussion about what we see on these platforms, including how and why algorithms surface it for us, has become more common.
The same appears to be happening with “deepfakes,” the term for synthetically generated or manipulated video and audio designed to make it appear that someone said or did something they never actually did.
A deepfake aired on Russian TV on Monday appeared to depict Russian President Vladimir Putin declaring martial law as he continues overseeing Russia’s invasion of Ukraine.
It’s unclear how the deepfake came to be. The Russian public broadcaster whose networks aired the sham speech said its radio and TV channels had been illegally interrupted, according to The New York Times. The Kremlin said the incident was the result of a “hack.”
The quality of this particular deepfake wasn’t great, but that doesn’t really matter. We all know someone vulnerable to being duped by even the shoddiest disinformation.
The alarming incident fits a pattern of apparent scare tactics designed to distress large groups of people by using fake messages purportedly delivered by their political leaders.
Right-wing conspiracy theorists in the United States earlier this year spread a deepfake on their social media channels appearing to show President Joe Biden announcing he was reinstating the military draft and planning to send…