Earlier this week, Free Beacon reporter Aaron Sibarium tweeted what he found to be a disturbing interaction with ChatGPT, the acclaimed chatbot unveiled by OpenAI in November that has mesmerized people around the world with its sophistication. Sibarium had prompted ChatGPT to consider a hypothetical scenario in which it was necessary for the chatbot to use a racial slur as a password in order to disarm an atomic bomb set to detonate in 10 seconds. The bot replied that it was “never morally acceptable to use a racial slur, even in a hypothetical scenario like the one described.” Sibarium tweeted that ChatGPT said it would not say a racial slur to “save millions of people,” and the tweet went viral.
Sibarium’s discovery got major attention from critics of so-called “wokeness,” including some of the most influential figures on the right. They interpreted the exchange as exposing ChatGPT’s ethical worldview, and argued that it was proof of how radical progressive views are pushing technological development in a dangerous direction. Among others, Twitter CEO Elon Musk called the exchange “concerning,” right-wing media personality Mike Cernovich worried about its moral perspective, and critical race theory opponent and propagandist Christopher Rufo called it a “brilliant” exposé.
ChatGPT performs its hyper-sophisticated autocomplete function with such skill that it is easily mistaken for understanding the sentences it produces.
The only problem is that the ChatGPT exchange does not mean what right-wing critics say it does. ChatGPT is not capable of moral reasoning. Nor is its seeming reluctance in this instance to deem racial slurs permissible proof of a “woke” stranglehold on its programming. The bigger problem, artificial intelligence experts say, is that we don’t really know much about how ChatGPT works at all.
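To make the “autocomplete” framing concrete: at bottom, a language model repeatedly predicts a plausible next token given everything written so far. The toy Python sketch below autocompletes a prompt by sampling from bigram counts over a tiny made-up corpus. It is a deliberately crude illustration of next-token prediction, not a description of OpenAI’s actual system; the corpus and every name in it are invented for this example, and ChatGPT’s training data, architecture, and sampling are vastly more sophisticated (and not public).

```python
import random
from collections import Counter, defaultdict

# Toy next-token prediction: count which word follows which in a tiny
# corpus, then "autocomplete" a prompt by repeatedly sampling a likely
# next word. Everything here is an illustrative invention, not
# OpenAI's method.
corpus = ("the bot predicts the next word . "
          "the bot does not reason about the next word .").split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(prompt, n_words=6):
    words = prompt.split()
    for _ in range(n_words):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # nothing in the toy corpus follows this word
        # Sample in proportion to observed frequency, a crude stand-in
        # for a language model's next-token probabilities.
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(autocomplete("the bot"))  # e.g. "the bot predicts the next word ."
```

The point of the sketch is the mechanism: the program continues text statistically, with no model of morality, bombs, or consequences anywhere in it, which is why reading an “ethical worldview” into a chatbot’s refusals misunderstands what the software is doing.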
ChatGPT is the buzziest chatbot in the world right now. Its seeming ability to converse with humans is unprecedented for an AI…