It’s been a slow build, but we finally appear to be reaching a phase in which the federal government is taking the potential dangers of artificial intelligence seriously.
This has been a topic of focus on The ReidOut Blog for nearly two years now, stemming from my concerns about the potential for AI technology to worsen inequality and to be misused by nefarious actors.
Now, federal officials are asking for public comment on the potential impacts of AI, as algorithm-based social media platforms continue to amass power and other companies rapidly develop superfast AI conversation simulators, known as chatbots, that offer humanlike (and frequently incorrect) responses to user queries.
The Department of Commerce’s National Telecommunications and Information Administration said Tuesday in a press release about its public comment request: “Just as food and cars are not released into the market without proper assurance of safety, so too AI systems should provide assurance to the public, government, and businesses that they are fit for purpose.”
A notice dated April 7 from the NTIA laid out the methodology for accepting public comments, along with the rationale for collecting them. It stated:
This request focuses on self-regulatory, regulatory, and other measures and policies that are designed to provide reliable evidence to external stakeholders — that is, to provide assurance — that AI systems are legal, effective, ethical, safe, and otherwise trustworthy. NTIA will rely on these comments, along with other public engagements on this topic, to draft and issue a report on AI accountability policy development, focusing especially on the AI assurance ecosystem.
Basically, the federal government is saying, “We gotta make sure this stuff isn’t completely destructive to society.”
Here’s what the NTIA wants to know:
- What kinds of trust and safety testing should AI development companies and their enterprise clients conduct?
- What kinds of data access is…