Since the emergence of powerful artificial intelligence programs like ChatGPT, much of the public debate has focused on how far AI is from achieving human-level “general intelligence” and how that milestone might turn society upside down. But AI experts warned that a number of more urgent social threats were around the corner, none of which required that capacity. In particular, as New York University emeritus professor Gary Marcus told me this year, bad actors could use generative AI to create weapons of mass misinformation.
That frightening prediction is already coming true.
It’s long past time to start worrying about whether people on the internet are unknowingly reading and sharing fake news generated by bots.
The latest data from media and misinformation watchdog NewsGuard identifies more than 600 “unreliable AI-generated news and information sites” on the internet. According to the Washington Post, that is an increase of more than 1,000 percent since May. These sites look like conventional news and information outlets, but upon closer examination NewsGuard finds that they show signs of operating with “little to no human oversight and publish articles written largely or entirely by bots.” They churn out articles, sometimes hundreds at a time, on topics ranging from politics to entertainment to technology. Some of these stories are spreading widely on social media and have even shaped news cycles.
AI operations are also running natively on social media platforms. NewsGuard has identified a network of more than a dozen TikTok accounts that use AI text-to-speech software to spread political and health misinformation in videos that have collectively garnered over 300 million views. One video ludicrously claims that former President Barack Obama murdered one of his former White House chefs, and it deploys AI to conjure up a computer-generated “Obama” statement responding to the false story.