Last week, OpenAI CEO Sam Altman appeared before a US Senate committee to talk about the risks and potential of AI language models. Altman, along with many senators, called for international standards for artificial intelligence. He also urged the US to regulate the technology and set up a new agency, much like the Food and Drug Administration, to oversee AI.
For an AI policy nerd like myself, the Senate hearing was both encouraging and frustrating. Encouraging because the conversation seems to have moved past promoting wishy-washy self-regulation and on to rules that could actually hold companies accountable. Frustrating because the debate seems to have forgotten the past five-plus years of AI policy. I just published a story looking at all the existing international efforts to regulate AI technology. You can read it here.
I’m not the only one who feels this way.
“To suggest that Congress starts from zero just plays into the industry’s favorite narrative, which is that Congress is so far behind and doesn’t understand technology—how could they ever regulate us?” says Anna Lenhart, a policy fellow at the Institute for Data Democracy and Policy at George Washington University, and a former Hill staffer.
In fact, politicians in the last Congress, which ran from January 2021 to January 2023, introduced a ton of legislation around AI. Lenhart put together this neat list of all the AI regulations proposed during that time. They cover everything from risk assessments to transparency to data protection. None of them made it to the president’s desk, but given that buzzy (or, to many, scary) new generative AI tools have captured Washington’s attention, Lenhart expects some of them to be revamped and make a reappearance in one form or another.
Here are a few to keep an eye on.
Algorithmic Accountability Act
This bill was introduced by Democrats in the US House and Senate in 2022, pre-ChatGPT, to tackle the tangible harms of automated decision-making systems, such as ones that denied people pain medications or rejected their mortgage applications.