AZ House passes three bills aimed at fighting AI deep fakes
Caitlin Sievers
(Arizona Mirror) As the capabilities of artificial intelligence evolve at a rapid pace, legislators in Arizona are working to keep so-called “deep fakes” created with AI from influencing elections and destroying reputations.
The state House of Representatives on May 15 approved three bills aimed at punishing those who use AI to create false videos and voice messages meant to embarrass people or extort money from them, as well as to ensure AI-assisted efforts to defame candidates for public office are publicly exposed.
Lawmakers from across the political spectrum came together to support the bills at a time when people all over the country are concerned about the possible ramifications that AI deep fakes could have on public discourse and beyond.
Senate Bill 1078, which would make it a criminal offense to use a computer-generated voice recording, image or video with the intent to defraud or harass someone, passed the House on Wednesday by a vote of 56-0.
Rep. David Cook, R-Globe, said he supported the bill and had introduced similar legislation this year after hearing about a Scottsdale mother who received a fake call with the AI-generated voice of her daughter, with the caller saying she’d been kidnapped and demanding a ransom. Luckily, the mother was able to contact her daughter to confirm she had not been kidnapped.
Cook added that just last week a mother he knows received a similar call about her child; Cook confirmed with the child’s school resource officer that there had been no kidnapping.
“These things are not OK, and we have to take action, like today,” Cook said.
The House also voted 40-17 to pass Senate Bill 1336, which would make it a felony to share reasonably convincing deep fakes of a sexual nature, if the real person or people in the image or video can be identified and did not consent to the video being created or shared.
The bill’s sponsor, Sen. Frank Carroll, R-Sun City West, said during a Senate Transportation, Technology and Missing Children Committee meeting on Feb. 12 that the legislation was an effort to create consistency in regulating AI deep fakes among the states.
“This is an incredibly important issue that needs to be addressed,” Republican Sen. Jake Hoffman, of Queen Creek, said during the hearing. “You know, 2024 is very likely to be the first year where significant world events are influenced through counterfeit content via artificial intelligence and deep fakes.”
Hoffman also commented on how quickly AI capabilities have evolved over just the past year.
“Bad actors now have the ability of not only producing a true likeness of yourself in video quality, but also mapping that mouth to reproduce deep fake audio to make you say and do things that you otherwise would not do,” he said.
Hoffman said he believes this technology has the potential to “wreak havoc” on public discourse, in addition to making life difficult for everyday people, including kids.
“Bullying used to be someone saying something mean on the playground. Now, kids have to worry are they going to be deep faked by people who hate them on campus and then have that information circulate at the speed of social media and before they even know it exists, their entire reputation is gone,” Hoffman said.
None of the 17 Democrats who voted against the bill in the House explained why.
But Darrell Hill, the policy director for the American Civil Liberties Union of Arizona, told the Arizona Mirror that his organization opposed the bill because the ACLU believes that it “criminalizes speech as protected by the First Amendment.”
Hill added that the ACLU understands the fears around the use of AI and deep fakes and the potential for abuse, but believes that regulations should be crafted more narrowly to include traditional exceptions for satire and political speech, and to specifically target deep fakes that are made with the intent to harass or defraud someone.
“The use of a technological tool does not deprive speech of its protections,” Hill said.
Rep. Charles Lucking, D-Phoenix, told the Mirror that his thoughts on the bill align with the ACLU’s.
“The issue is that we don’t believe it will stand up to First Amendment challenges,” he said, adding that the other AI bills that Democrats supported on May 15 did a better job at allaying their free speech concerns.
The Democrats who opposed Senate Bill 1336 all voted in favor of House Bill 2394, which passed by a vote of 57-0.
The bill would allow public figures who can provide sufficient evidence in court to debunk a deep fake to obtain “declaratory relief” from a judge — meaning the court would officially confirm the falsity of the deep fake, rather than “injunctive relief” that would require the deep fake be deleted by its publishers.
“At least if you have a piece of paper from a court saying, we’ve looked at the evidence, we’ve examined it, and at least preliminarily it doesn’t appear to really be you,” the bill’s sponsor, Rep. Alexander Kolodin, told the House Municipal Oversight & Elections Committee during a January hearing.
The Scottsdale Republican and self-described “First Amendment absolutist” called the legislation “narrowly tailored,” saying it applies only to deep fakes that fail to disclose their AI origins and whose falsity isn’t obvious to the average person.
The bill included an emergency clause, which requires a two-thirds majority vote. An emergency clause typically means a bill takes effect immediately after the governor signs it, rather than 90 days after the legislative session ends, as with all other legislation. This bill, however, was written with a delayed implementation of 14 days after the governor signs it.
Kolodin’s bill includes several of the provisions that Hill said the ACLU would have liked to see in Senate Bill 1336, including exceptions for parody, satire and criticism.
It also provides for injunctive relief (a court order requiring the content to be taken down) and possible damages, but only if the deep fake is of a sexual nature, does not depict a public figure and was published by a person or entity that knew it was fake.
“There’s bills like this all over the country and a lot of them, in my view, go too far on First Amendment grounds,” Kolodin said during a March 4 Senate Elections Committee meeting. “And so I really wanted to provide a model, thoughtful piece of legislation that demonstrates how to deal with this issue in a way that respects the First Amendment.”