The discussion about artificial intelligence can seem overwhelming at times, and I’m sympathetic to those who feel as though all of the voices in the discussion blend together.
If you’re one of these people, I suspect Tuesday’s Senate hearing on generative AI tools, like ChatGPT, didn’t help. You may have heard the main takeaway, though: that OpenAI CEO Sam Altman is, like, super worried about the scary things AI could do in the future.
But it’s time to cut through the noise.
There’s one important distinction to keep in mind as you witness the ongoing public discussion about AI — the good, the bad and the ugly of it.
Here it is: Most people steeped in the discussion fall into one of two camps. One consists of Big Tech elites and their sympathizers — people like Elon Musk, who have a tremendous stake (maybe political, maybe financial) in maximizing their personal power over tech industries, and artificial intelligence in particular. These people often swing like a pendulum between two rhetorical extremes: Pollyannaish portrayals of the positive things that AI will supposedly achieve, and grim fatalism about AI’s alleged ability to end the world. And no matter which line this group (disproportionately consisting of rich white dudes) is pushing, the suggestions are usually the same: AI is a technology yet to be realized, and these dudes know best who ought to control it — and how — before it gets too unwieldy.
In the other camp, we have AI ethicists, whose conversations are more tethered to reality. These are people I’ve mentioned in previous posts, like Joy Buolamwini and Timnit Gebru. They talk about artificial intelligence and its positive potential; they talk about the importance of guaranteeing equal access to AI; and they talk about how the technology is often built in a way that disfavors marginalized groups, such as Black women. Where the first camp obsesses over the coming future, this second camp talks about the harms in the…