AI is getting seriously good. And the federal government is finally getting serious about AI.
The White House announced a suite of artificial intelligence policies in May, and in July it brokered a set of voluntary safety commitments from leading AI companies. Those included commitments to both internal and third-party testing of AI products to ensure they're secure against cyberattack, and to guard against misuse by bad actors.
Senate Majority Leader Chuck Schumer outlined his preferred approach to regulation in a June speech and promised prompt legislation, telling his audience, "many of you have spent months calling on us to act. I hear you loud and clear." Independent regulators like the Federal Trade Commission have gone public to outline how they plan to approach the technology. And at the very minimum, a bipartisan group wants to ban the use of AI to make nuclear launch decisions.
But "knowing you're going to do something" and "knowing what that something is" are two different things. AI policy is still pretty virgin terrain in DC, and proposals from government leaders tend to be couched in jargon, invoking broad ideas or requesting public input and further study rather than laying out specific plans for action. Principles, rather than programming. Indeed, the US government's record to date on AI has mostly involved vague calls for "continued United States leadership in artificial intelligence research and development" or "adoption of artificial intelligence technologies in the Federal Government," which is fine, but not exactly concrete policy.
That said, we probably are going to see more specific action soon, given the unprecedented degree of public attention and the number of congressional hearings devoted to AI. AI companies themselves are actively working on self-regulation in the hope of setting the tone for regulation by others. That — plus the sheer importance of an emerging…